Formal Aspects in Security and Trust: 6th International Workshop, FAST 2009, Eindhoven, The Netherlands, November 5-6, 2009, Revised Selected Papers ... Computer Science Security and Cryptology)
Lecture Notes in Computer Science Commenced Publication in 1973 Founding and Former Series Editors: Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen
Editorial Board
David Hutchison, Lancaster University, UK
Takeo Kanade, Carnegie Mellon University, Pittsburgh, PA, USA
Josef Kittler, University of Surrey, Guildford, UK
Jon M. Kleinberg, Cornell University, Ithaca, NY, USA
Alfred Kobsa, University of California, Irvine, CA, USA
Friedemann Mattern, ETH Zurich, Switzerland
John C. Mitchell, Stanford University, CA, USA
Moni Naor, Weizmann Institute of Science, Rehovot, Israel
Oscar Nierstrasz, University of Bern, Switzerland
C. Pandu Rangan, Indian Institute of Technology, Madras, India
Bernhard Steffen, TU Dortmund University, Germany
Madhu Sudan, Microsoft Research, Cambridge, MA, USA
Demetri Terzopoulos, University of California, Los Angeles, CA, USA
Doug Tygar, University of California, Berkeley, CA, USA
Gerhard Weikum, Max-Planck Institute of Computer Science, Saarbruecken, Germany
5983
Pierpaolo Degano Joshua D. Guttman (Eds.)
Formal Aspects in Security and Trust 6th International Workshop, FAST 2009 Eindhoven, The Netherlands, November 5-6, 2009 Revised Selected Papers
Volume Editors Pierpaolo Degano Università di Pisa, Dipartimento di Informatica Largo Bruno Pontecorvo, 3, 56127 Pisa, Italy E-mail: [email protected] Joshua D. Guttman Worcester Polytechnic Institute 100 Institute Rd, Worcester, MA 01609, USA E-mail: [email protected]
Library of Congress Control Number: 2010924066
CR Subject Classification (1998): C.2.0, K.6.5, D.4.6, E.3, K.4.4, H.3-4
LNCS Sublibrary: SL 4 – Security and Cryptology
ISSN 0302-9743
ISBN-10 3-642-12458-5 Springer Berlin Heidelberg New York
ISBN-13 978-3-642-12458-7 Springer Berlin Heidelberg New York
Preface

The present volume contains the proceedings of the 6th International Workshop on Formal Aspects of Security and Trust (FAST 2009), held in Eindhoven, The Netherlands, 5–6 November 2009, as part of Formal Methods Week 2009. FAST is sponsored by IFIP WG 1.7 on Foundations of Security Analysis and Design. The previous five FAST workshop editions have fostered cooperation among researchers in the areas of security and trust, and we aimed to continue this tradition.

As computing and network infrastructures become increasingly pervasive, and as they carry increasing economic activity, society needs well-matched security and trust mechanisms. These interactions increasingly span several enterprises and involve loosely structured communities of individuals. Participants in these activities must control interactions with their partners based on trust policies and business logic. Trust-based decisions effectively determine the security goals for shared information and for access to sensitive or valuable resources.

FAST sought original papers focusing on formal aspects of: security and trust policy models; security protocol design and analysis; formal models of trust and reputation; logics for security and trust; distributed trust management systems; trust-based reasoning; digital assets protection; data protection; privacy and ID issues; information flow analysis; language-based security; security and trust aspects in ubiquitous computing; validation/analysis tools; Web service security/trust/privacy; grid security; security risk assessment; and case studies.

The FAST proceedings contain—in addition to an abstract of the invited talk by Anindya Banerjee—revisions of full papers accepted for presentation at FAST. The 18 papers appearing here were selected out of 50 submissions. Each paper was reviewed by at least three members of the Program Committee, whom we wish to thank for their effort. Many thanks also to the organizers of Formal Methods Week 2009 for accepting FAST 2009 as an affiliated event and for providing a perfect environment for running the workshop. We are also grateful to the EasyChair organization, which created a helpful framework for refereeing and PC discussion, and helped to construct these proceedings.

November 2009
Pierpaolo Degano Joshua Guttman
Organization
Program Committee
Gilles Barthe, IMDEA Software, Spain
Frédéric Cuppens, Telecom Bretagne, France
Pierpaolo Degano, University of Pisa, Italy (Program Co-chair)
Theo Dimitrakos, BT, UK
Sandro Etalle, TU Eindhoven, The Netherlands
Roberto Gorrieri, University of Bologna, Italy
Joshua Guttman, Worcester Polytechnic Institute, USA (Program Co-chair)
Masami Hagiya, University of Tokyo, Japan
Chris Hankin, Imperial College (London), UK
Bart Jacobs, Radboud University Nijmegen, The Netherlands
Christian Jensen, DTU, Denmark
Yuecel Karabulut, SAP Research, USA
Igor Kotenko, SPIIRAS, Russia
Fabio Martinelli, IIT-CNR, Italy
Catherine Meadows, Naval Research Lab, USA
Ron van der Meyden, University of New South Wales, Australia
Mogens Nielsen, University of Aarhus, Denmark
Dusko Pavlovic, Kestrel Institute, USA and Oxford, UK
Riccardo Pucella, Northeastern University, USA
Peter Ryan, University of Luxembourg
Steve Schneider, University of Surrey, UK
Jean-Marc Seigneur, University of Geneva, Switzerland
Ketil Stølen, SINTEF, Norway
Semantics and Enforcement of Expressive Information Flow Policies Anindya Banerjee IMDEA Software, Madrid, Spain [email protected]
The following is intended as an overview of my invited talk at the 2009 FAST workshop. The primary reference for this work remains the earlier paper [4], which contains the necessary technical details, motivating examples and commentary on particular design choices.

The talk focuses on confidentiality policies of sequential, heap-manipulating programs (typically formalized as noninterference) and shows how to exploit techniques from type systems, program logics and verification for:

– Specification of expressive confidentiality policies based on declassification of information.
– Modular enforcement of confidentiality policies mixing security type-based analysis and verification.

Policy specifications can often be incomplete and restrictive in that they do not capture requirements adequately. For example, one might desire a policy that captures the requirement "secret until Tuesday" rather than "secret forever". In a medical setting, one might want to capture the requirement that a patient's medical status can be revealed to a specialist only under consent from the patient and her primary care physician, and only after a log entry has been written to that effect. As part of a disaster relief plan, one might want to capture the requirement that the medical histories of all patients — but not their doctors' notes — be revealed.

The above requirements are intended as examples of the need to specify expressive declassification policies that include conditions under which downgrading of confidential information is permitted. In the terminology of Sabelfeld and Sands [12], a policy may need to encompass when declassification may happen, what information can be declassified, where in the code declassification is allowed, etc. What is the end-to-end semantics of such declassification policies? How can such policies be specified and enforced?

The semantics of policies draws inspiration from Askarov and Sabelfeld's knowledge-based formulation [3] of noninterference. The knowledge-based formulation permits an end-to-end semantic property based on a model that allows observations of intermediate public states as well as termination. An attacker's knowledge only increases at explicit declassification steps, and within limits set by policy.
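To make the two-run reading of such policies concrete, the following sketch checks a simple "what" declassification policy by brute force: the (hypothetical) program may release only the parity of its secret, so any two runs that agree on the public input and on the released value must be indistinguishable to the attacker. The program, the policy and all names below are illustrative assumptions, not part of the verification framework described in the talk.

```python
from itertools import product

def program(secret: int, public: int) -> int:
    # Hypothetical program: it declassifies only the parity of the secret.
    return public + (secret % 2)

def released(secret: int) -> int:
    # The declassification policy: what may flow to the attacker.
    return secret % 2

def check_policy(secrets, publics) -> bool:
    # Knowledge-based reading: two runs that agree on the public input and on
    # the released value must produce the same attacker-visible output.
    for s1, s2 in product(secrets, repeat=2):
        if released(s1) != released(s2):
            continue
        for pub in publics:
            if program(s1, pub) != program(s2, pub):
                return False
    return True

if __name__ == "__main__":
    print(check_policy(range(16), range(4)))  # True: only the parity is released
```

The equality check over pairs of runs plays the role of the agreement assertions discussed below; a real enforcement would of course be static rather than exhaustive.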
Partially supported by US NSF awards CNS-0627748, ITR-0326577 and by a sabbatical visit to Microsoft Research, Redmond.
Static enforcement is provided by combining type-checking with program verification techniques applied to the small subprograms (or sessions) that carry out declassifications. The enforcement has been proven sound for the simple imperative language. The verification techniques are based on a relational Hoare logic [6] that combines reasoning about ordinary assertions as well as “two-state” assertions called agreement assertions that express what is released. The use of two-state assertions stems from the observation that noninterference can be specified using Hoare triples that assert the equality of observable variables over two runs of a program [1,2]. The talk shows how the logic can take care of the what, when and where aspects of declassification policies. In the case of object-oriented programs the ordinary assertions belong to region logic [5] which facilitates reasoning about the heap. An illustrative example of static enforcement involves the verification of a heap-based data structure with declassification. In summary, there are three steps in the enforcement process: type checking for baseline policies, assertion checking using e.g., region logic, and relational verification of two-state assertions. One benefit of using assertion checking in concert with relational verification is that the approach fits well with access control. For example, it is possible to track a program’s currently enabled permissions in a ghost variable (say SecurityCtx). The specification can express what is released given various permissions. The policy “release h provided permission p is enabled” has two specifications with preconditions p ∈ SecurityCtx ∧ A(h) and p ∈ SecurityCtx. The “A(h)” in the first precondition is a relational (agreement) assertion: it says that h is released, that is, in two runs of the program the values of h are equal. The bibliography below provides the main inspirations for the work. A complete bibliography appears in the original paper [4]. Acknowledgements. I am very grateful to the organizers of FAST 2009 for their invitation and to Joshua Guttman, in particular, for encouragement. This work is in collaboration with David Naumann and Stan Rosenberg. I would like to thank them for the many hours (years!) of stimulating discussions, and for their patience and camaraderie.
References
1. Amtoft, T., Banerjee, A.: Information flow analysis in logical form. In: Giacobazzi, R. (ed.) SAS 2004. LNCS, vol. 3148, pp. 100–115. Springer, Heidelberg (2004)
2. Amtoft, T., Bandhakavi, S., Banerjee, A.: A logic for information flow in object-oriented programs. In: ACM Symposium on Principles of Programming Languages (POPL), pp. 91–102 (2006)
3. Askarov, A., Sabelfeld, A.: Gradual release: Unifying declassification, encryption and key release policies. In: IEEE Symposium on Security and Privacy, pp. 207–221 (2007)
4. Banerjee, A., Naumann, D., Rosenberg, S.: Expressive declassification policies and their modular static enforcement. In: IEEE Symposium on Security and Privacy, pp. 339–353 (2008)
5. Banerjee, A., Naumann, D., Rosenberg, S.: Regional logic for local reasoning about global invariants. In: ECOOP 2008, pp. 387–411 (2008)
6. Benton, N.: Simple relational correctness proofs for static analyses and program transformations. In: POPL, pp. 14–25 (2004)
7. Broberg, N., Sands, D.: Flow locks. In: ESOP, pp. 180–196 (2006)
8. Chong, S., Myers, A.C.: Security policies for downgrading. In: ACM CCS, pp. 198–209 (2004)
9. Myers, A.C.: JFlow: Practical mostly-static information flow control. In: POPL, pp. 228–241 (1999)
10. Rushby, J.: Noninterference, transitivity, and channel-control security policies. Technical report, SRI (December 1992)
11. Sabelfeld, A., Myers, A.C.: A model for delimited information release. In: Futatsugi, K., Mizoguchi, F., Yonezaki, N. (eds.) ISSS 2003. LNCS, vol. 3233, pp. 174–191. Springer, Heidelberg (2004)
12. Sabelfeld, A., Sands, D.: Dimensions and principles of declassification. Journal of Computer Security (2007)
13. Zdancewic, S.: Challenges for information-flow security. In: Proceedings of the 1st International Workshop on the Programming Language Interference and Dependence, PLID 2004 (2004)
An Algebra for Trust Dilution and Trust Fusion

Baptiste Alcalde and Sjouke Mauw
University of Luxembourg
[email protected], [email protected]

Abstract. Trust dilution and trust fusion are two operators that are used to calculate transitive trust in a trust network. Various implementations of these operators already exist but are not fully motivated. In this paper we define the basic properties of these two operators by developing a trust algebra. We evaluate several new and existing models against the axioms of this algebra, amongst which a number of variations of the Subjective Logic. The algebra enables the comparison of models and gives more insight into the available recommendation models and their properties.
1 Introduction
Trust transitivity is defined as the possibility to use trust information from other entities in order to infer a trust evaluation to a given entity. Trust transitivity is a key concept of recommendation systems and it attracts an ever increasing interest in the very recent years [4,6,12,13]. To date, we can identify two main recommendation model families. The first is qualitative and uses, for instance, modal logic [4]. The second, which is the focus of this paper, is quantitative and defines special trust operators, named fusion and dilution operators, in order to compute the resulting trust of a trust network [9,14,15]. Dilution is used to calculate the trust along trust chains. This operator combines agent A’s trust in agent B with agent B’s trust in agent C, to derive A’s trust in C. Fusion is used to compute the overall trust if there are different sources of information. If agent A has two independent sources of information, say B and D, on the trustworthiness of agent C, then B and D’s information can be combined using the fusion operator. In literature, several different definitions of fusion and dilution operators have been proposed. These definitions are often mainly motivated by technical observations, rather than by strong and defendable intuition. Hence these definitions can be hard to understand or to interpret from the point of view of an outsider. Therefore, one of the main motivations for the current research is the lack of determination of the intrinsic properties of these operators. In order to judge whether dilution and fusion definitions provide a suitable modeling of the phenomena, a description of these phenomena at a higher level of abstraction is required. In addition, such abstraction will enable to compare the relative merits of alternative definitions and extensions. Hence, the first contribution of this paper is to develop a higher level of abstraction in the form of a trust algebra. This algebra consists of a number of P. Degano and J. Guttman (Eds.): FAST 2009, LNCS 5983, pp. 4–20, 2010. c Springer-Verlag Berlin Heidelberg 2010
defining properties of the fusion and dilution operators. One of the merits of this approach is that these properties, stated in the form of equational axioms, can be motivated from the domain of trust and recommendation. Therefore, rather than proposing a new recommendation model, this paper aims at giving formal guidelines to possible implementation of recommendation models as well as a means for their comparison. To our knowledge, this approach was never explored until now. Such an algebraic approach has shown very beneficial e.g. in the realm of parallel systems, in which the development and analysis of process algebras has added to the understanding of the many different process models. Similarly, the development of a trust algebra will help to understand the different trust models. Using this approach, Subjective Logic (SL), as proposed in [9], can then be seen as one of a number of possible (and plausible) models for the abstract algebra. We consider the investigation of existing recommendation models, amongst which SL, and the development of new models as one of the contributions of this paper. Our study also reveals weaknesses in some of the models, and can provide a valuable feedback for the establishment of future models. The paper is structured as follows. In Sect. 2 we clarify some general definitions and assumptions needed as a background for this research. In Sect. 3 we present the trust algebra and some extensions as well as the motivation for the rules composing this algebra. In Sect. 4 we show the applicability of the algebra through the comparison of the canonical models (with three and four elements), and Subjective Logic variations. As a result from the evaluation, we can prove impossibility results in some models, show the limitations of others, and prove the correctness of a newly crafted model. A summary of the results is provided in a table at the end of Sect. 4. In the conclusion we interpret our results and propose a number of interesting venues for future work.
2 Trust Relations
Trust has been defined in several different ways. The definition of trust adopted here, first formulated by Gambetta [5], is often referred to as “reliability trust”. Thus, we define trust as the belief or subjective probability of the trustor that the trustee will adequately perform a certain action on which the trustor’s welfare depends. We also refer to the trustor and trustee as agents, which may be humans or computer programs acting on the behalf of humans. Trust is hence a quantifiable relation between two agents. In literature, many factors have been identified that can be taken into account when calculating the trust relation between two agents (see [1] for an overview). These factors comprise e.g. the trustor’s personality, the trustee’s competence, contextual information such as local norms and customs, and the opinions of other agents. The introduction of opinions allows a trustor to take other agents’ opinions into account when determining the trustworthiness of a trustee, thus yielding a trust network. We start off with the observation from [11] that there are different notions of trust involved. First, we make a distinction between two
variants of trust, viz. functional trust and referral trust. Functional trust is the belief in an entity’s ability (and willingness) to carry out or support a specific function on which the relying party depends. Referral trust is the belief in an entity’s ability to recommend another entity w.r.t functional trust. The other distinction is between two types of trust, viz. direct trust and indirect trust. A direct trust relation occurs when the trustor trusts the trustee directly (without intermediaries), e.g. based on past experiences between them. An indirect trust occurs when the trustor trusts a trustee based on one or more opinions from third parties. By combining a trust variant with a trust type we can obtain functional direct trust, functional indirect trust, referral direct trust, and referral indirect trust. For example, Alice wants to know where to find a good car mechanic to fix her car. She asks Bob’s opinion because he is knowledgeable about cars (direct referral trust). Bob happens to know a good car mechanic (direct functional trust). Bob then suggests Alice the name of this car mechanic (recommendation). Alice can then bring her car to this car mechanic (indirect functional trust). We can note that after this transaction, Alice will transform her indirect functional trust into direct functional trust (since she will then have a direct experience with the car mechanic). In addition to these definitions we set a number of assumptions. First, we assume that the trustor knows all trust relations between agents that are relevant for her own trust calculations. In literature, this rather strong assumption is often called perfect forward, as to indicate that all agents are willing to forward other agents’ trust values without modification. Further, we assume that each agent keeps track of his own functional and referral trust in other agents. We will use the same domain for expressing trust values of all variants and types of trust. In a given trust graph all trust relations concern referral trust, except for the arrows directly ending at the trustee, which concern functional trust. In the remainder of the paper, we will therefore not explicitly mention the type of a given trust relation if it can be derived from the context. In literature, a distinction is made between two ways of composing trust values, viz. fusion and dilution. Trust fusion occurs if there are multiple trust paths from a trustor to a trustee, meaning that the trustee is recommended by several agents. In order to calculate his trust in the trustee, the trustor then has to fuse the trust values of these other agents. The fusion of trust values does not necessarily lead to a higher level of trust. The dilution of trust occurs if there is a trust chain from the trustor to the trustee. Every link in the chain reduces (or dilutes) the overall trust of the trustor in the trustee implied by the trust chain.
3 A Trust Algebra
In this section we develop the algebra of trust expressions, which is the first contribution of this paper. This abstract algebra is partly based on the more concrete operators found in literature (see e.g. [14,15,8]).
We develop our algebra in four layers. The first layer introduces the basic constants and operators and their basic properties. The second layer provides an extension of this algebra which allows us to compare trust expressions. In the third layer we express the duality of belief and disbelief, while in the fourth layer we treat the special case of full direct functional belief and disbelief.

Basic Fusion and Dilution Algebra. A trust expression is obtained by (recursively) applying some trust operators to a number of trust atoms (or trust values). See the upper frame of Fig. 1 for the signature of basic trust expressions. The set of trust expressions is denoted by T and the set of trust atoms by A. We consider two basic trust operators: trust fusion (denoted by +) and trust dilution (denoted by ·). The set of trust atoms is not specified in detail. We require that it contains at least the three constants υ, β, and δ. Constant υ denotes full uncertainty, i.e. the absence of any information that can help to assess the trustworthiness of the trustee. Constant β denotes full trust of the trustor in the trustee, without any uncertainty. Constant δ is the dual of β. It denotes full distrust of the trustor in the trustee, without any uncertainty.

We use parentheses to disambiguate trust expressions. An example of a trust expression is (β · β) + (υ · δ). This expresses that, although the trustor has no direct trust relation with the trustee, he knows two independent sources that have a direct functional trust relation with the trustee. The trustor has full direct referral trust in the first source, who has full direct functional trust in the trustee (β · β). Further, the trustor is completely uncertain whether to trust his second source or not. His second source does not trust the trustee at all; he has full distrust in the trustee (υ · δ). In order to simplify trust expressions we assume that the dilution operator · binds stronger than the fusion operator +. We will often omit the · operator from expressions if no confusion can arise. In this way, the above example can be simplified to ββ + υδ.

T               set of trust expressions       · : T × T → T   dilution (operator)
A ⊆ T           set of trust atoms             υ ∈ A           full uncertainty (constant)
x, y ∈ T        variables                      β ∈ A           full belief (constant)
+ : T × T → T   fusion (operator)              δ ∈ A           full disbelief (constant)

(B1) x + y = y + x                 (C1) x + υ = x    (C4) β + β = β
(B2) x + (y + z) = (x + y) + z     (C2) x · υ = υ    (C5) β · x = x
(B3) x(yz) = (xy)z                 (C3) υ · x = υ    (C6) δ + δ = δ
                                                     (C7) δ · x = υ

Fig. 1. Basic Fusion and Dilution algebra (BFD)
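As a minimal sketch of how the signature of Fig. 1 could be represented, the following code builds trust expressions as a small abstract syntax and rewrites them with the constant axioms C1–C7, oriented from left to right. The encoding and names are my own; this is not a decision procedure for the whole algebra.

```python
from dataclasses import dataclass

# Trust atoms: full uncertainty, full belief, full disbelief.
U, B, D = "υ", "β", "δ"

@dataclass(frozen=True)
class Fuse:      # x + y
    left: object
    right: object

@dataclass(frozen=True)
class Dilute:    # x · y
    left: object
    right: object

def simplify(e):
    """Rewrite an expression using axioms C1-C7 as left-to-right rules."""
    if isinstance(e, Fuse):
        x, y = simplify(e.left), simplify(e.right)
        if y == U: return x             # C1: x + υ = x
        if x == U: return y             # C1 together with commutativity (B1)
        if x == B and y == B: return B  # C4: β + β = β
        if x == D and y == D: return D  # C6: δ + δ = δ
        return Fuse(x, y)
    if isinstance(e, Dilute):
        x, y = simplify(e.left), simplify(e.right)
        if x == U or y == U: return U   # C2, C3: υ annihilates a chain
        if x == B: return y             # C5: β · x = x
        if x == D: return U             # C7: δ · x = υ
        return Dilute(x, y)
    return e

# Example from the text: (β · β) + (υ · δ) simplifies to β + υ = β.
print(simplify(Fuse(Dilute(B, B), Dilute(U, D))))
```

Expressions such as β + δ are left untouched by these rules, which matches the discussion below: the algebra deliberately does not fix their interpretation.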
The properties of these constants and operators are expressed by a set of axioms, which we call the Basic Fusion and Dilution (BFD) algebra (Fig. 1). The first three axioms express properties of the basic operators. The fusion operator is commutative (B1) and associative (B2), since the order in which the trustor receives independent recommendations is irrelevant. Calculating transitive trust along a trust chain is also associative, so dilution is an associative operator as well (B3). However, dilution is not commutative. This can be seen by a simple
example. Assume that agent A fully trusts agent B's opinion on agent C and assume that B fully distrusts C. Then A should also fully distrust C. However, if we swap the values, i.e. A has full distrust in B, who fully trusts C, then A should not necessarily (dis)trust C, so βδ ≠ δβ.

Axioms C1–C7 define the properties of the three constants. The uncertainty constant υ behaves like a zero element. Adding a fully uncertain opinion to an opinion x does not give any extra information, so x + υ = x (axiom C1). By combining this with axiom B1 we obtain the symmetric case υ + x = x. Axioms C2 and C3 express that full uncertainty in a trust chain annihilates any other information in this chain, so x · υ = υ · x = υ. If we fuse full belief with itself, it remains full belief (axiom C4). Axiom C5 expresses that the full belief constant β behaves as a left-unit for dilution. This follows from the fact that if we fully believe another agent, we adopt his opinion without any hesitation. Clearly, β is not a right-unit, so we do not have x · β = x. If A distrusts B and B trusts C, then this does not mean that A should distrust C, so δβ ≠ δ. The disbelief constant δ behaves similarly to β in a fusion context: if we get our full disbelief confirmed by another source, the fusion is still full disbelief (axiom C6). Finally, if we consider the opinion of somebody whom we disbelieve, it will give us no information at all, so δ · x = υ (axiom C7). Obviously, the converse, x · δ = υ, does not hold, since e.g. β · δ = δ. Later we will come back to expressions of the form x · β and x · δ.

Jøsang [11] also mentions interpretations which are different from the intuition sketched above. If we assume that "the friend of my enemy is my enemy", then the interpretation of β as a right-unit would make sense. This interpretation would also have consequences for axiom C7, since then we would have δβ = δ. However, following Jøsang, we consider these interpretations as rather exotic and we will leave them for future study.

It is important to notice that there exist expressions that are not equal (after applying the axioms) to a constant. An example is β + δ, which expresses that via one route we obtain the information that the trustee can be trusted without any uncertainty, while via another independent route we learn that the trustee must be distrusted without any uncertainty. There are different ways to interpret the fusion of such dogmatic opinions, some of which are discussed in [10]. In order to allow for such different interpretations, we decided not to settle for a fixed interpretation in the algebra. Alternative interpretations can then be expressed by defining different models of the algebra.

As discussed by Jøsang [8], the fusion and dilution operators do not distribute. For instance, xz + yz = (x + y)z is not a desired property, because in the left-hand side of this equation the two occurrences of z represent two independent opinions, which must both be taken into consideration in the fusion. Hence, they will reinforce each other. However, in the right-hand side of the equation, opinion z is only considered once. For the same reason, idempotence of the fusion operator (x + x = x) is not a required property either.

Comparing Trust Expressions. In the following, we impose some additional structure on trust expressions by introducing a number of auxiliary operators. The first extension of the basic algebra allows us to compare trust expressions.
In order to evaluate the results of a trust calculation, one must be able to compare trust values. This will, for instance, allow one to select an agent that he considers most trusted for a specific task. Given the three-valued basis (υ, β, δ) of our algebra, a one-dimensional measure on trust values will be insufficient. Therefore, we will introduce three different measures, one for each of the components. These measures will be formally modeled as total orders on trust expressions: ≤u , ≤b , and ≤d (see Fig. 2). The inequality x ≤u y expresses that the uncertainty component in expression x is at most as high as the uncertainty component in y. The inequality x ≤b y expresses that the belief component in expression x is at most as high as the belief component in y. Likewise, the inequality x ≤d y expresses that the disbelief component in expression x is at most as high as the disbelief component in y. ≤u : T × T ≤b : T × T ≤d : T × T (T1) x ≤u υ (T2) β ≤u x (T3) δ ≤u x
compare uncertainty(total order) compare belief (total order) compare disbelief (total order) (T4) υ ≤b x (T7) υ ≤d x (T10) x + y ≤u x (T5) x ≤b β (T8) β ≤d x (T11) x ≤u x · y (T6) δ ≤b x (T9) x ≤d δ (T12) y ≤u x · y Fig. 2. Axioms for the total orders (TO)
Axiom T1 states that full uncertainty υ is the top element in the uncertainty order ≤u , since it dominates all other elements. Axioms T2 and T3 state that full belief β and full disbelief δ do not express any uncertainty, and hence they are bottom elements. Axioms T4–T9 specify similar properties for the belief order (in which β is the top element) and the disbelief order (in which δ is the top element). Axiom T10 expresses a basic property of trust fusion, namely that uncertainty does not increase if we receive more information on the trustworthiness of a trustee. In presence of the symmetry axiom B1, this axiom is equivalent to x+y ≤u y. Axioms T11 and T12 state a similar basic property for trust dilution: along a trust chain, uncertainty can only grow. The set of axioms T1–T12 forms the TO (for Total Order) extension of the BFD algebra. The Duality of Belief and Disbelief. The second extension of the algebra serves to express the duality of belief and disbelief. In order to express this duality, we introduce the inverse operator x (see Fig. 3). This operator swaps the belief and disbelief components of a trust expression. Axiom I1 expresses the basic inversion property. Distributivity of inversion over fusion is expressed in axiom I2. This means that belief and disbelief are treated similarly when fusing trust opinions. Distributivity of inversion over dilution, x · y = x · y, does not hold, because in a trust chain belief and disbelief are not each other’s duals. Axiom I3 states that if we have full uncertainty (so no belief nor disbelief), the inverse operator has no effect. This also stresses that υ is a zero element. Axiom I4 expresses the duality of belief and disbelief. In presence of axiom I1 this axiom is equivalent to β = δ. Axioms I5 and I6 state that uncertainty
x ↦ x̄ : T → T   inverse (operator)
(I1) (x̄)‾ = x    (I2) (x + y)‾ = x̄ + ȳ
Fig. 3. Axioms for the inverse operator (INV) (only I1 and I2 are shown; axioms I3–I7 are the ones discussed in the text)
is orthogonal to belief and disbelief. Finally, axiom I7 states that the inverse function swaps the belief and disbelief values of a trust expression. The set of axioms I1–I7 forms the INV (for INVerse) extension of the BFD algebra. Using the dilution and inverse operators we now have that one constant suffices to define the other two. For instance, υ = δ · δ and β = δ (or υ = β · β and δ = β). Further, by using I2 we achieve equivalence of axioms C4 and C6. Full Direct Functional Belief and Disbelief. The third, and final, extension of our algebra concerns the meaning of full direct functional belief and disbelief. This means that we consider trust chains ending in β or δ, as in x · δ and x · β, which capture the situation that the last agent in a chain has full belief or disbelief in the trustee. Although it is tempting to set e.g. x · β = x, we consider this as a too strong axiom. The belief component of x · β is clearly identical to the belief component of x, but their disbelief components are not. Therefore, we weaken this axiom to x · β =b x (see axiom R1 in Fig. 4). Here we use x =b y as a shorthand notationfor x ≤b y ∧ y ≤b x. Thus, x =b y means that trust expressions x and y express equal belief. Likewise, we define =d and =u . Intuitively, axiom R1 states that if the last element in a chain has full belief, then belief of the whole chain is determined by the remainder of the chain. Axiom R2 states that in this case there is no disbelief. If we consider a trust chain ending in full disbelief, then the whole chain does not express any belief (axiom R3) and the disbelief expressed in the whole chain is exactly the belief that we have in the last agent before the trustee (axiom R4). We consider these axioms as a separate module since they are of a less basic nature. The set of axioms R1–R4 forms the RM (for Right Multiplication) extension of the BFD algebra. (R1) x · β =b x
(R2) x · β =d β
(R3) x · δ =b δ
(R4) x · δ =d x
Fig. 4. Axioms for right-multiplication (RM)
In the remainder of the paper we will refer to combinations of the rules of the basic algebra BFD and one or more of the extensions. For instance, BFD+TO+INV denotes the algebra consisting of the basic rules of BFD, and the extensions of TO and INV.
4 Models
In this section we investigate possible models of the algebra. First we look at small models containing three and four elements, and next we consider some models derived from the Subjective Logic. A distinguishing factor is the interpretation of the term β + δ. The main purpose of this chapter is to show how trust evaluation algorithms can be validated. In particular, several published variants of the Subjective Logic will be studied. We propose two new variants of the Subjective Logic which satisfy a larger set of axioms than the existing variants.

Given an algebra (Σ, E), consisting of a signature Σ and a set of equations E, a model is a mathematical structure which interprets the sorts and functions of Σ as sets and (total) functions on these sets. This interpretation must be such that all equations of E are valid in the model. An equation t = t′ (where t and t′ are terms over the signature, possibly containing variables) is valid in a model if for every instantiation of the variables the interpretation of t is the same element as the interpretation of t′.

4.1 Three-Element Models
First, we investigate the canonical model of three distinct elements {b, d, u} with interpretation β → b, δ → d, υ → u. This interpretation does not uniquely define the model, since there is still freedom in choosing a suitable interpretation of the fusion and dilution operators. Nevertheless, in order to satisfy the axioms, there is only little freedom left, namely in defining the outcome of b + d, which we consider a parameter σ of the model. Thus, we introduce the class of possible models M3(σ) in which σ ∈ {b, d, u} represents the fusion of b and d. The interpretation of the operators is given in Fig. 5. The tables in this figure must be read in the order "row-operator-column". For instance, in the second table we can look up the value of b · d by crossing the row labeled b (first row) with the column labeled d (second column). This yields b · d = d. The fusion table is determined, up to σ, by axioms B1, C1, C4, and C6. The dilution table is fully determined by axioms C3, C5, and C7. The inversion table is determined by axioms I1, I3, and I4.

+ | b  d  u        · | b  d  u        x | x̄
b | b  σ  b        b | b  d  u        b | d
d | σ  d  d        d | u  u  u        d | b
u | b  d  u        u | u  u  u        u | u

Fig. 5. The three-element models M3(σ)
Next, we investigate possible choices for σ. M3 (u) is not a model of BFD, because associativity of fusion yields the following derivation b = b + u = b + (b + d) = (b + b) + d = b + d = u. Thus M3 (u) does not satisfy axiom B2.
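The failure of associativity in M3(u) can be replayed mechanically. The sketch below encodes only the fusion table of Fig. 5 with σ as a parameter and checks this single instance of axiom B2; the encoding and names are mine and are not meant as a general axiom checker.

```python
def m3_fusion(sigma):
    fuse = {("b", "b"): "b", ("b", "d"): sigma, ("b", "u"): "b",
            ("d", "b"): sigma, ("d", "d"): "d", ("d", "u"): "d",
            ("u", "b"): "b", ("u", "d"): "d", ("u", "u"): "u"}
    return lambda x, y: fuse[(x, y)]

for sigma in ("b", "d", "u"):
    plus = m3_fusion(sigma)
    lhs = plus("b", plus("b", "d"))   # b + (b + d)
    rhs = plus(plus("b", "b"), "d")   # (b + b) + d
    print(sigma, lhs, rhs, lhs == rhs)
# Only sigma = u makes the two sides differ (b versus u), matching the derivation above.
```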
On the other hand, a simple case distinction suffices to verify that M3(b) and M3(d) are indeed models of BFD. Surprisingly, these are not models of the extended algebra BFD+TO+INV (irrespective of the definition of the total orders). Using axiom I2 we can, e.g., derive the following equality for M3(d): b = d̄ = (b + d)‾ = b̄ + d̄ = d + b = d. The origin of the problem is in the requirement that belief and disbelief are treated equally by the fusion operator (axiom I2), which cannot be realized with three elements. By generalizing this reasoning we obtain the following impossibility result.

Theorem 1. There does not exist a (non-trivial) three-element model of BFD+TO+INV.

4.2 Four-Element Models
As a consequence of the previous observations, we investigate somewhat richer models consisting of four elements {u, b, d, i}, where i denotes inconsistency or contradiction. The element i is used to give a meaning to β + δ. The underlying idea is that if we receive fully certain, but contradictory information, we cannot combine this in a consistent way. We interpret the constants as before (β → b, δ → d, υ → u) and the operators as in Fig. 6. The table for the fusion operator follows from the fusion axioms in BFD. Observe that inconsistencies in a fusion are persistent (yielding e.g. i + u = i). The table for the dilution operator has three parameters, π, ρ, and σ. The other values are determined by the dilution axioms in BFD. The inequalities are straightforward; they express the inconsistent nature of i by assigning it minimal uncertainty, minimal belief and minimal trust. We shall denote these models by M4(π, ρ, σ). They show some resemblance with Belnap's four-valued logic [2], but since the operators + and · are different from logical conjunction and disjunction, M4(π, ρ, σ) is not isomorphic to Belnap's logic.

+ | b  d  u  i      · | b  d  u  i      x | x̄
b | b  i  b  i      b | b  d  u  i      b | d
d | i  d  d  i      d | u  u  u  u      d | b
u | b  d  u  i      u | u  u  u  u      u | u
i | i  i  i  i      i | π  ρ  u  σ      i | i

u =b d =b i ≤b b      u =d b =d i ≤d d      b =u d =u i ≤u u

Fig. 6. The four-element models M4(π, ρ, σ)
Using tool support to exhaustively verify all possible instantiations of π, ρ and σ, we found six different models satisfying all axioms: M4 (u, u, u), M4 (i, u, u), M4 (i, u, i), M4 (b, d, i), M4 (i, d, b), and M4 (i, d, i). The last three models, however, require a simplified definition of the total order, viz. one in which all elements are equivalent (e.g. u =b d =b i =b b). The model M4 (b, d, i) is isomorphic
to a model defined by Gutscher [7]. The variety of models implies that there are several different ways to interpret the proliferation of inconsistencies in a dilution context. Theorem 2. M4 (u, u, u), M4 (i, u, u), and M4 (i, u, i) are models of BFD+ TO+ INV+ RM. Proof. We will sketch the proof for M4 (i, u, i). Axioms B1, C1–C7, T1–T9, I1, and I3–I6 follow easily by inspecting the tables. For instance, B1 follows from the symmetry of the table for +. Axiom B2 x + (y + z) = (x + y) + z follows from a simple case distinction. If any of x, y or z equals i or u, then associativity clearly holds. Next if x, y and z are all b or all d, then associativity is also simple. Finally, if there is at least one b and at least one d and no u, then the associativity holds because the outcome is always i. The verification of axiom B3 (associativity of dilution) follows in a similar way, but needs some more case distinctions. Axiom T10 x + y ≤u x is true because every element of a row in the + table is ≤u dominated by the left-hand argument. A similar check of the · table suffices to verify axioms T11 and T12. Axiom I2 follows from a straightforward verification of all cases (ten cases, using symmetry of +). Axiom I7 clearly holds if x = u or x = i because they are minimal w.r.t. ≤b and ≤d . If x = b or x = d it follows from the duality of b and d. Axioms R1-R4 follow by simple inspection of the · table. 4.3
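The "tool support" used above can be approximated by a brute-force search. The sketch below, my own reconstruction under stated assumptions, fixes the fusion table of Fig. 6, builds the parametric dilution table, and reports every instantiation of π, ρ, σ for which both operators are associative; the published result additionally checks the TO, INV and RM axioms, which this snippet omits.

```python
from itertools import product

ELEMS = ("b", "d", "u", "i")

FUSE = {("b","b"):"b", ("b","d"):"i", ("b","u"):"b", ("b","i"):"i",
        ("d","b"):"i", ("d","d"):"d", ("d","u"):"d", ("d","i"):"i",
        ("u","b"):"b", ("u","d"):"d", ("u","u"):"u", ("u","i"):"i",
        ("i","b"):"i", ("i","d"):"i", ("i","u"):"i", ("i","i"):"i"}

def dilution_table(pi, rho, sigma):
    t = {}
    for y in ELEMS:
        t[("b", y)] = y        # C5: β · x = x
        t[("d", y)] = "u"      # C7: δ · x = υ
        t[("u", y)] = "u"      # C3: υ · x = υ
    t[("i", "b")], t[("i", "d")], t[("i", "u")], t[("i", "i")] = pi, rho, "u", sigma
    return t

def associative(op):
    return all(op[(x, op[(y, z)])] == op[(op[(x, y)], z)]
               for x, y, z in product(ELEMS, repeat=3))

for pi, rho, sigma in product(ELEMS, repeat=3):
    if associative(FUSE) and associative(dilution_table(pi, rho, sigma)):
        print("candidate:", pi, rho, sigma)
```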
Subjective Logic
Subjective Logic [9] is a framework to compute the trust between two agents in a trust network. In its simplest form, a trust value is represented by a triplet (b, d, u), representing belief, disbelief, and uncertainty, respectively. These values satisfy b, d, u ∈ [0, 1] and b + d + u = 1. Each such triplet can be represented as a point in a triangle. Coordinate b of point p = (b, d, u) (see Fig. 7) determines the (perpendicular) distance between p and side DU. Likewise, d determines the distance between p and side BU, and u the distance between p and BD. Some examples: point B has coordinates (1, 0, 0) and represents full belief; the middle point between B and D is (1/2, 1/2, 0) and represents the fully certain opinion that there is as much belief as disbelief in the trustee.

Fig. 7. The SL triangle (vertices B = Belief, D = Disbelief, U = Uncertainty)

In this framework, the fusion and dilution operators are called consensus (notation ⊕) and conjunction (notation ⊗). Reformulated in our notation, the operators are defined as follows.

(b, d, u) ⊕ (b′, d′, u′) = ( (bu′ + b′u)/(u + u′ − uu′), (du′ + d′u)/(u + u′ − uu′), uu′/(u + u′ − uu′) )
(b, d, u) ⊗ (b′, d′, u′) = (bb′, bd′, d + u + bu′)
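Read literally, the two operators can be transcribed as below. The helper names are my own, and the consensus is deliberately left undefined (it raises an error) when both opinions are dogmatic (u = u′ = 0), which is exactly the case discussed next.

```python
def consensus(o1, o2):
    # (b, d, u) ⊕ (b', d', u') -- undefined when both uncertainties are zero
    b, d, u = o1
    b2, d2, u2 = o2
    k = u + u2 - u * u2
    if k == 0:
        raise ValueError("consensus undefined for two dogmatic opinions")
    return ((b * u2 + b2 * u) / k, (d * u2 + d2 * u) / k, (u * u2) / k)

def conjunction(o1, o2):
    # (b, d, u) ⊗ (b', d', u')
    b, d, u = o1
    b2, d2, u2 = o2
    return (b * b2, b * d2, d + u + b * u2)

# A fully trusted recommender passes on his opinion unchanged, cf. axiom C5.
print(conjunction((1.0, 0.0, 0.0), (0.6, 0.1, 0.3)))   # (0.6, 0.1, 0.3)
print(consensus((0.6, 0.1, 0.3), (0.5, 0.2, 0.3)))
```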
The fusion operator is undefined if and only if u = u′ = 0 (assuming u, u′ ∈ [0, 1]). This indicates that the fusion of two dogmatic opinions (e.g. (1, 0, 0) ⊕ (0, 1, 0)) is not straightforward. In order for the Subjective Logic to serve as a model of our algebra, the definition of the fusion operator must be extended. We investigate several extensions in the following sections.

The Model SLγ. Recent versions of the Subjective Logic [13,10] use a limit construction to define fusion for u = u′ = 0:

(b, d, 0) ⊕ (b′, d′, 0) = ( (γb + b′)/(γ + 1), (γd + d′)/(γ + 1), 0 )

According to [13], γ is defined by γ = lim_{u,u′→0} u′/u. It expresses the relative
An Algebra for Trust Dilution and Trust Fusion (T13) β + β =bdu β
15
(T14) δ + δ =bdu δ
Fig. 8. Weakening axioms C4 and C6
weight of a dogmatic opinion. Thus, we have the following set of trust atoms A = {(b, d, u, c) | b, d, u ∈ [0, 1] ∧ c ∈ N+ ∧ b + d + u = 1 ∧ (u = 0 ∨ c = 1)}. The last condition means that only opinions with uncertainty equal to zero can have a counter different from 1. Fusion and dilution are defined as follows. ⎧ bc+b c c ⎪ , dc+d , 0, c + c if u = u = 0 ⎪ c+c c+c ⎪ ⎪ ⎨(b, d, u, c) if u = 0, u = 0 (b, d, u, c) ⊕ (b , d , u , c ) = ⎪ (b ifu = 0, u = 0 ⎪ , d , u , c ) ⎪ ⎪ ⎩ bu +b u , du +d u , uu , 1 else
(b, d, u, c) ⊗ (b , d , u , c ) =
u+u −uu
u+u −uu
u+u −uu
(b , d , u , c ) ( bb , bd , d + u + bu , 1)
if (b, d, u, c) = (1, 0, 0, c) else
The first case of the fusion definition clarifies the role of the counter. If two dogmatic views are combined, then the resulting belief is calculated as the weighted average of the individual beliefs. The counter of the resulting trust value is the sum of the counters of the individual trust values. The other three cases are straightforward extensions of SL. For the dilution operator we treat the case where the left operand equals (1, 0, 0, c) differently. The reason is that in this case the resulting value must inherit the counter value of the right operand. This is motivated by the fact that (1, 0, 0, c) acts as a left-unit for dilution (cf. axiom C5 and the discussion on this axiom in the previous section). We interpret the constants as follows: β → (1, 0, 0, 1), δ → (0, 1, 0, 1), υ → (0, 0, 1, 1). The inverse operator is defined by (b, d, u, c) = (d, b, u, c), and the total orders by: (b, d, u, c) ≤b (b , d , u , c ) ⇔ b ≤ b , (b, d, u, c) ≤d (b , d , u , c ) ⇔ d ≤ d , and (b, d, u, c) ≤u (b , d , u , c ) ⇔ u ≤ u . This model, which we call SLc , satisfies all axioms except (C4) β + β = β and (C6) δ + δ = δ. This is because the weight of β + β is higher than the weight of β: (1, 0, 0, 1) ⊕ (1, 0, 0, 1) = (1, 0, 0, 2). However, SLc satisfies two weaker axioms T13 and T14 (see Fig. 8). We use x =bdu y as a shorthand notation for x =b y ∧ x =d y ∧ x =u y. It easily follows that axiom C4 implies T13 and that C6 implies T14. If we denote by BFD− the axiom system BFD minus axioms C4 and C6, we can formulate the following theorem. Theorem 4. SLc is a model of the algebra BFD− +TO+INV+RM+T13+T14. Proof. Axiom B1 follows by observing the symmetry in the definition of the c b c +bc fusion operator (e.g. bc+b c+c = c +c ). The proof of axiom B2 consists of a case distinction and a number of straightforward calculations. As an illustration, we show the calculation for the first component b+ 1 of ((b, d, u, c) ⊕ (b , d , u , c )) ⊕ (b , d , u , c ) if u = 0, u = 0.
16
B. Alcalde and S. Mauw
b+ 1
=
bu +b u uu u+u −uu u + b u+u −uu uu uu u+u −uu + u − u+u −uu u
=
(bu + b u)u + b uu uu + u (u + u − uu ) − uu u
The first component b+ 2 of (b, d, u) ⊕ ((b , d , u ) ⊕ (b , d , u )) is
b+ 2 =
b u +b u u u +u −u u u u − u u +u −u u
b u +uuu−u u + u+
u u u +u −u u
=
u(u
bu u + (b u + b u )u + u − u u ) + u u − uu u
It is easy to check that the resulting expressions are equal. For axiom B3 (associativity of dilution) the most complex case is equality of the third component. For instance, if (b, d, u) = (1, 0, 0), (b , d , u ) = (1, 0, 0), and (b , d , u ) = (1, 0, 0), the third component of (b, d, u, c) ⊗ ((b , d , u , c ) ⊗ (b , d , u , c )) is d + u + b(d + u + b u ). This is equal to the third component of ((b, d, u, c) ⊗ (b , d , u , c )) ⊗ (b , d , u , c ), which is bd + (d + u + bu) + bb u . Axioms C1, C2, C3, C5, C7 and T1–T9 can be verified easily. For axiom T10 we have to consider four cases, three of which are trivial. The fourth case u (u = 0, u = 0) is treated as follows: u+uuu −uu ≤ u ⇔ u+u −uu ≤ 1 ⇔ u ≤ u + u − uu ⇔ 0 ≤ u − uu , which is true for u, u ∈ [0, 1]. The most interesting case for axiom T11 is (b, d, u) = (1, 0, 0). We then have u ≤ d + u + bu , which holds for u, b, d, u ∈ [0, 1]. Likewise, axiom T12 follows from u = (d + u + b)u = du + uu + bu ≤ d + u + bu. The remaining axioms T13, T14, I1–I7, and R1–R4 are trivial. The Model SLi . Finally, we will construct a model by extending SL with a constant for inconsistency (as for the M4-models in Section 4.2). We define A = {(b, d, u) | b, d, u ∈ [0, 1] ∧ b + d + u = 1} ∪ {i}. The fusion and dilution operators are a merger of their definitions in SL and M4(i, u, i). i ⊕ (b, d, u) = (b, d, u) ⊕ i = i ⊕ i = i (b, d, 0) ⊕ (b, d, 0) = (b, d, 0) if b = b (b, d, 0) ⊕ (b , d , 0) = i bu +b u du +d u (b, d, u) ⊕ (b , d , u ) = ( u+u −uu , u+u −uu ,
uu u+u −uu )
if u = 0 ∨ u = 0
i ⊗ (1, 0, 0) = (1, 0, 0) ⊗ i = i ⊗ i = i (b, d, u) ⊗ i = i ⊗ (b, d, u) = (0, 0, 1) if (b, d, u) = (1, 0, 0) (b, d, u) ⊗ (b , d , u ) = ( bb , bd , d + u + bu ) We interpret the constants as follows: β → (1, 0, 0), δ → (0, 1, 0), υ → (0, 0, 1). The inverse operator is defined by (b, d, u) = (d, b, u) and i = i. The total orders are given by: (b, d, u) ≤b (b , d , u ) ⇔ b ≤ b (b, d, u) ≤d (b , d , u ) ⇔ d ≤ d (b, d, u) ≤u (b , d , u ) ⇔ u ≤ u
i ≤b (b, d, u) i ≤d (b, d, u) i ≤u (b, d, u)
This model, which we call SLi , satisfies all axioms. The proof follows the same line of reasoning as the proof of Theorem 4.
An Algebra for Trust Dilution and Trust Fusion
17
Theorem 5. SLi is a model of the algebra BFD+TO+INV+RM. All results of the current section are gathered in the table displayed in Fig. 9. The result for M4 holds only for certain values of the parameters and the result for SLc only for BFD− , T13, and T14.
Fig. 9. Results of the evaluation of the models ( means satisfied)
5
Related Work
There are many existing models that propose ways to combine trust values or more widely to combine opinions. In the simplest models, the trust values are discrete or continuous values on a given interval (implying at least two elements in the model, i.e. a bottom and a top element). This is the case for instance in PGP [17]. Other models are taking the uncertainty into account, such as Subjective Logic [9], Dempster-Shafer [14], or Yager [15] to name only a few. The uncertainty level adds another dimension to the trust metrics and can profitably be used in order to compute more accurate trust values. Subjective Logic was extended several times, e.g. with a limit construction enabling the fusion of two fully certain opinions [10], with an algorithm enabling the commutativity of the fusion operator [10], with different operators definitions depending on the (partial) dependence of the trust values [13]. Nevertheless, the combination of trust values in 3-elements models can also lead to further questions. For instance, this raises the question on the dogmatic belief composition [16]. The extension to a 4-element model such as Gutscher’s [7] (or based on Belnap’s [2] or Bergstra’s [3] theories) seems to answer this issue partially. We noticed that the available models were developed in a bottom-up fashion, i.e. starting from the model and showing which properties it satisfies or not. To our knowledge, in the domain of trust, the development of a top-down approach such as the algebra proposed in the current paper, is novel. This algebra takes the fusion and dilution operators as a starting point since these are the common point of all the available models with only differences in their naming (fusion and dilution can respectively be referred to as consensus and recommendation in some models). The developed algebra focuses specifically on the trust application domain and all axioms of the algebra are motivated in this context. For this reason, the algebra may or may not make sense for other domains.
18
6
B. Alcalde and S. Mauw
Conclusion
Taking the Subjective Logic as a starting point, we developed an abstract algebra expressing the basic properties of trust fusion and trust dilution. To the core of this algebra belong the three absolute trust values β, δ, and υ. In a modular way, we extended this core algebra with some auxiliary operators to capture more properties of the operators involved. Since there are different ways to fuse dogmatic beliefs (such as considering β + δ as an inconsistency), we decided to not enforce one particular choice in the algebra. As a consequence, the algebra is not complete for any of the models studied. This is also reflected in the fact that the initial algebra (which we did not study in this paper) is not particularly interesting. An interesting next step would be to extend the algebra with additional properties (and possibly operators) that more precisely capture certain interpretations of the fusion of dogmatic beliefs, as to develop complete axiomatizations. We studied two types of models of this algebra: canonical models with only a few elements, and models based on SL with an infinite number of elements. Partly to our surprise, there is no three-element model of the full algebra, indicating that the expression β + δ necessarily must be interpreted by a special fourth element. As expected, SL with a partially defined fusion operator cannot be considered a model. This also applies to the SL model extended with a limit construction presented in [13], in contradiction with its (unproven) claim of associativity. More surprising is that the extension of SL with a limit construction is not a model because it lacks associativity of fusion. This contradicts the (unproven) claim of associativity in [13]. The algorithmic approach to associativity of fusion proposed in [10] does not imply associativity of the (binary) fusion operator either. In fact, while verifying the axioms of our algebra, it turned out that the reduction of terms according to axiom C5 is an essential, but omitted, step for the algorithm to work correctly. If this reduction step is not performed before evaluating a trust expression, then the algorithm does not take all +-related terms into account and gives the wrong result. In order to overcome these problems, we experimented with two extensions of SL. The first extension tries to achieve the same results as SL with a limit construction by introducing a weight for dogmatic opinions. Due to this weight, which can be any positive natural number, the collection of possible interpretations of β becomes infinite and has no maximum element. As a consequence, axiom C4 which states that β is maximum, becomes invalid. This extension of SL satisfies a slightly weaker algebra. The second extension of SL concerns the introduction of a special element expressing inconsistency. This is a model of the full algebra. In addition to this, the validation of the axioms of our algebra for SL also gives more insight in the properties that SL satisfies. Whereas e.g. associativity and commutativity have been discussed in detail by Jøsang et al., properties as expressed in e.g. T10–T12 have not been mentioned explicitly.
An Algebra for Trust Dilution and Trust Fusion
19
The proofs presented here mostly consist of a number of straightforward case distinctions. Rather than in the advanced level of the proofs, the complexity of our work lies in the design. A slight modification of the definition of e.g. M4 or SLi will already invalidate essential properties like associativity. An important next step is to validate other extensions of SL that were proposed in literature and to model other ways to deal with dogmatic beliefs. It is also interesting to look at more practical models, such as the model underlying PGP. A particularly interesting model to investigate is the model of trust graphs (or transitive trust networks [11]). An open question is the reduction of such networks. Because not every trust graph can be represented as a trust expression, our theory has to be extended (e.g. with the notion of recursive equations) to deal with trust graphs. Finally, we mention that our model does not consider dynamic aspects, such as the possible decay of trust or the occurrence of events that influence opinions. Extending our algebra in this direction would also be an interesting topic for future research. Acknowledgment. This work as been partially funded by the Fonds National de la Recherche (Luxembourg), grant number TR-PDR BFR08-038.
References 1. Alcalde, B., Dubois, E., Mauw, S., Mayer, N., Radomirovi´c, S.: Towards a decision model based on trust and security risk management. In: AISC 2009, vol. 98, Australian Computer Society (2009) 2. Belnap, N.D.: A useful four-valued logic. In: Epstein, G., Dunn, J. (eds.) Modern uses of multiple valued logics, pp. 8–37. Reidel, Dordrecht (1977) 3. Bergstra, J.A., Bethke, I., Rodenburg, P.: A propositional logic with 4 values: true, false, divergent and meaningless. Journal of Applied Non-Classical Logics 5(2) (1995) 4. Dong, C., Russello, G., Dulay, N.: Trust transfer in distributed systems. In: Trust Management, number 238/2007 in IFIP, pp. 17–30. Springer, Heidelberg (2007) 5. Gambetta, D. (ed.): Trust: Making and breaking cooperative relations. Department of Sociology. University of Oxford, Oxford (1988) 6. Gray, E., Seigneur, J.-M., Chen, Y., Jensen, C.: Trust propagation in small worlds. In: Nixon, P., Terzis, S. (eds.) iTrust 2003. LNCS, vol. 2692, pp. 239–254. Springer, Heidelberg (2003) 7. Gutscher, A.: Reasoning with uncertain and conflicting opinions in open reputation systems. In: STM 2008, Trondheim, Norway (2008) 8. Jøsang, A.: An algebra for assessing trust in certification chains. In: Proceedings of the Network and Distributed Systems Security, NDSS (1999) 9. Jøsang, A.: A logic for uncertain probabilities. Int. J. Uncertain. Fuzziness Knowl.Based Syst. 9(3), 279–311 (2001) 10. Jøsang, A., Daniel, M., Vannoorenberghe, P.: Strategies for combining conflicting dogmatic beliefs. In: Proceedings of the 6th International Conference on Information Fusion, pp. 1133–1140 (2003)
11. Jøsang, A., Gray, E., Kinateder, M.: Simplification and analysis of transitive trust networks. Web Intelli. and Agent Sys. 4(2), 139–161 (2006)
12. Jøsang, A., Kinateder, M.: Analysing topologies of transitive trust. In: Workshop on Formal Aspects of Security and Trust (FAST), pp. 9–22 (2003)
13. Jøsang, A., Marsh, S., Pope, S.: Exploring different types of trust propagation. In: Stølen, K., Winsborough, W.H., Martinelli, F., Massacci, F. (eds.) iTrust 2006. LNCS, vol. 3986, pp. 179–192. Springer, Heidelberg (2006)
14. Shafer, G.: A Mathematical Theory of Evidence. Princeton Univ. Press, Princeton (1976)
15. Yager, R.R.: On the Dempster-Shafer framework and new combination rules. Information Sciences 4, 93–137 (1987)
16. Zadeh, L.A.: Review of A Mathematical Theory of Evidence by Glenn Shafer. AI Magazine 5(3), 81–83 (1984)
17. Zimmermann, P.R.: The Official PGP User's Guide. MIT Press, Cambridge (1995)
HMM-Based Trust Model

Ehab ElSalamouny1, Vladimiro Sassone1, and Mogens Nielsen2

1 ECS, University of Southampton, UK
2 University of Aarhus, Denmark
Abstract. Probabilistic trust has been adopted as an approach to taking security-sensitive decisions in modern global computing environments. Existing probabilistic trust frameworks either assume a fixed behaviour for the principals or incorporate the notion of 'decay' as an ad hoc approach to coping with their dynamic behaviour. Using Hidden Markov Models (HMMs) for both modelling and approximating the behaviours of principals, we introduce the HMM-based trust model as a new approach to evaluating trust in systems exhibiting dynamic behaviour. This model avoids the fixed-behaviour assumption, which is the major limitation of the existing Beta trust model. We show the consistency of the HMM-based trust model and contrast it against the well-known Beta trust model with the decay principle in terms of estimation precision.
1 Introduction

In modern open network systems, where principals can autonomously enter and leave the environment at any time, and generally in a global computing environment, any particular principal has incomplete information about the other principals currently in the same environment. In such an environment, interactions of a principal A with other principals are not assumed to be at the same level of satisfaction, or even safety, to A. One approach to taking security-sensitive decisions in a global computing environment regarding interactions with principals is to adopt the notion of probabilistic trust, which can broadly be characterised as aiming to build probabilistic models upon which to base predictions about principals' future behaviours. Using these models, the trust of a principal A in another principal B is the probability distribution, estimated by A, over the outcomes of the next interaction with B. Here the estimation process is based on the history of interactions h with the principal B. This notion of trust resembles the trusting relationship between humans as seen by Gambetta [8]. In many existing frameworks the so-called Beta model [12] is adopted. This is a static model in the precise sense that the behaviour of any principal is assumed to be representable by a fixed probability distribution over outcomes, invariantly in time. That is, each principal p is associated with a fixed real number 0 ≤ Θ_p ≤ 1, indicating the assumption that an interaction involving p yields success with probability Θ_p. Using this assumption, the Beta model for trust is based on applying Bayesian data analysis (see e.g. [20]) to the history of interactions h with a given principal p to estimate the probability Θ_p that an interaction with p yields success. In this framework the family of beta probability density functions (pdfs) is used, as a conjugate prior, together with the data h to derive a posterior beta probability density function for Θ_p. A full explanation can be found in [12,19].
There are several examples in the literature where the Beta model is used, either implicitly or explicitly, including Jøsang and Ismail's Beta reputation system [12], the systems of Mui et al. [15] and of Buchegger [4], the Dirichlet reputation systems [11], TRAVOS [21], and the SECURE trust model [5]. Recently, the Beta model and its extension to interactions with multiple outcomes (the Dirichlet model) have been used to provide a first formal framework for the analysis and comparison of computational trust algorithms [19,16,13]. In practice, these systems have found space in different applications of trust, e.g., online auctioning, peer-to-peer filesharing, mobile ad-hoc routing and online multiplayer gaming. One limitation of current Beta-based probabilistic systems is that they assume a fixed probabilistic behaviour for each principal; that is, for each principal there exists a fixed probability distribution over the possible outcomes of its interactions. This assumption of fixed behaviour may not be realistic in many situations, where a principal possibly changes its behaviour over time. Just consider, e.g., the example of an agent which can autonomously switch between two internal states, a normal 'on-service' mode and a 'do-not-disturb' mode. This limitation of the Beta model systems has been recognised by many researchers [12,4,22]. This is why several papers have used a 'decay' principle to favour recent events over information about older ones [12]. The decay principle can be implemented in many different ways, e.g., by using a finite 'buffer' to remember only the most recent n events, or by linear and exponential decay functions, where each outcome in the given history is weighted according to its occurrence time (old outcomes are given lower weights than newer ones). Whilst decay-based techniques have proved useful in some applications, we have shown in [7] that the decay principle is useful (for the purpose of estimating the predictive probability) only when the system behaviour is highly stable, that is, when the system is very unlikely to change its behaviour. Given the above limitations of existing probabilistic trust systems, we try to develop a more general probabilistic trust framework which is able to cover cases where a principal's behaviour is dynamic. Following the probabilistic view of the behaviour, one can represent the behaviour of a principal p at any time t by a particular state qt which is characterised by a particular probability distribution over the possible outcomes of an interaction with p. If p exhibits a dynamic behaviour, it indeed transits between different states of behaviour. This suggests using a multiple-state transition system to represent the whole dynamic behaviour of a principal, where each state is defined by a probability distribution over observables. Since the definition of hidden Markov models (HMMs) coincides with this description, we elect to use HMMs for modelling and approximating the dynamic behaviour of principals. Aiming to avoid the assumption of fixed behaviour in Beta systems, we introduce HMM-based trust as a more sophisticated trust model which is capable of capturing the natural dynamism of real computing systems. Instead of modelling the behaviour of a principal by a fixed probability distribution representing one state of behaviour, the behaviour of a principal p is approximated by a finite-state HMM η, called the approximate behaviour model.
Then, given any sequence of outcomes of interactions with p, the approximate model η is used to estimate the probability distribution over the potential outcomes of the next interaction with p. We call the resulting probability distribution the estimated predictive probability distribution of p under the approximate
model η. Following the existing notion of probabilistic trust, the estimated predictive probability distribution represents the trust in the principal p. In order to evaluate the quality of the HMM-based trust, we contrast its estimated predictive probability distribution against the real predictive probability distribution, which depends on the real behaviour of the concerned principal. For this purpose, we adopt the relative entropy measure [14,6]. Relying on this measure, we evaluate the expected estimation error as a measure for the quality of the trust evaluation. Note that this notion of estimation error has been used for comparisons between trust algorithms in other works; see for example [16,19].

Original contribution of the paper. In this paper we describe the basics of the HMM-based trust model, namely the methods for obtaining the approximate behaviour model η for a principal p, and for estimating the probability distribution over possible outcomes of the next interaction with p using η. We show that maximising the probability of the observations, using the Baum-Welch algorithm detailed in [2,18], minimises the expected estimation error and is therefore a consistent method for obtaining the approximate behaviour HMM η. For the sake of comparison with the traditional Beta trust model with a decay factor, we use Monte-Carlo methods to evaluate the expected estimation error in both cases.

Structure of the paper. The next section briefly describes the Beta trust model and the decay principle. Section 3 provides a basic and precise description of hidden Markov models. Subsequently, the basic model of HMM-based trust is described in Section 4. It is then formally shown in Section 5 that maximum likelihood estimation, as the basis of the HMM-based trust model, is adequate in the sense that it minimises the expected relative entropy between the real and estimated predictive probability distributions. Section 6 provides an experimental comparison between the HMM-based trust model and the well-known Beta trust model with a decay factor. Finally, we conclude in Section 7.
2 Beta Model with a Decay Factor

In the Beta trust model introduced by [12], an interaction with any principal yields either success s or failure f. It is also based on the assumption that any interaction with a given principal p yields success with a fixed probability $\theta_p$. Under this assumption a sequence of outcomes $h_\ell = o_0 \cdots o_{\ell-1}$ is a sequence of Bernoulli trials, and the number of successful outcomes in $h_\ell$ is probabilistically distributed by a binomial distribution. The objective of the Beta trust model is then to estimate the parameter $\theta_p$ given a historical sequence of outcomes $h_\ell$. Using Bayesian data analysis (see e.g. [20]), $\theta_p$ is seen as a random variable whose prior (initial) probability density function (pdf) is updated to a posterior pdf using the given observations. Since the beta pdf is a conjugate prior to the binomial distribution, the posterior pdf of $\theta_p$ given the sequence $h_\ell$ is also a beta pdf. The Beta trust model then gives an estimate for $\theta_p$ as the expected value of its posterior beta pdf. This estimate, denoted by $B(s \mid h_\ell)$, is related to the sequence $h_\ell$ as follows.
$$ B(s \mid h_\ell) = \frac{\#_s(h_\ell) + 1}{\#_s(h_\ell) + \#_f(h_\ell) + 2} \qquad (1) $$

where $\#_s(h_\ell)$ and $\#_f(h_\ell)$ are the numbers of successful and unsuccessful interactions in $h_\ell$ respectively. In order to cope with the cases where the behaviour of a principal is dynamic, the notion of exponential decay (or forgetting) has been incorporated in the Beta trust model [12]. The intuitive idea is to capture the most recent behaviour of the principal by favouring the recent outcomes over old ones. This is performed by associating each outcome $o_i$ in $h_\ell$ with an exponential weight $r^{\ell-i-1}$, where $0 \le r \le 1$ is called the decay (forgetting) factor. Observe that recent outcomes are associated with higher weights than older outcomes. With the decay factor $r$, the Beta estimate for the distribution over $\{s, f\}$ is denoted by $B_r(\cdot \mid h_\ell)$, and given by the following equations.

$$ B_r(s \mid h_\ell) = \frac{m_r(h_\ell) + 1}{m_r(h_\ell) + n_r(h_\ell) + 2}, \qquad B_r(f \mid h_\ell) = \frac{n_r(h_\ell) + 1}{m_r(h_\ell) + n_r(h_\ell) + 2} \qquad (2) $$

$$ m_r(h_\ell) = \sum_{i=0}^{\ell-1} r^{\ell-i-1}\,\delta_i(s), \qquad n_r(h_\ell) = \sum_{i=0}^{\ell-1} r^{\ell-i-1}\,\delta_i(f) \qquad (3) $$

$$ \delta_i(X) = \begin{cases} 1 & \text{if } o_i = X \\ 0 & \text{otherwise} \end{cases} \qquad (4) $$
Note that incorporating the decay principle in the Beta trust model is implemented by replacing the counts $\#_s(h_\ell)$ and $\#_f(h_\ell)$ in Equation (1) by the sums of weights associated with the past outcomes. Although this approach has been used in many works, we have shown in [7] that it is not effective when the principal's behaviour is highly dynamic, that is, when the system tends to change its state of behaviour, characterised by the exhibited probability distribution over possible outcomes. Another major limitation of this approach is that it appears hard to formally determine the optimal value for the decay factor from observations alone.
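To make Equations (1)–(4) concrete, here is a small Python sketch (our own illustration, not code from the cited frameworks; the function names are invented) that computes the plain Beta estimate and its decayed variant from a history of outcomes.

```python
# Sketch of the Beta trust estimate, with and without exponential decay
# (cf. Equations (1)-(3)); function and variable names are ours.

def beta_estimate(history):
    """Equation (1): history is a sequence of 's'/'f' outcomes, oldest first."""
    s = history.count('s')
    f = history.count('f')
    return (s + 1) / (s + f + 2)               # estimated P(next outcome = s)

def beta_estimate_decay(history, r):
    """Equations (2)-(3): outcome o_i gets the weight r**(l - i - 1)."""
    l = len(history)
    m = sum(r ** (l - i - 1) for i, o in enumerate(history) if o == 's')
    n = sum(r ** (l - i - 1) for i, o in enumerate(history) if o == 'f')
    p_s = (m + 1) / (m + n + 2)
    return {'s': p_s, 'f': 1 - p_s}            # B_r(f | h) = (n+1)/(m+n+2)

h = ['s', 's', 'f', 's']                        # most recent outcome last
print(beta_estimate(h))                         # 0.666...
print(beta_estimate_decay(h, r=0.5))            # recent outcomes dominate
```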
3 Hidden Markov Models (HMMs) A Hidden Markov Model (HMM) [1] is a well-established probabilistic model essentially based on a notion of system state. Underlying any HMM there is a Markov chain modelling (probabilistically) the system’s transitions between a set of internal states. Each state in this chain is associated with a particular probability distribution over the set of possible outcomes (observations). The output of an HMM is a sequence of outcomes where each outcome is sampled according to the probability distribution of the underlying state. In the following, we denote the state of the HMM and the observation at time t by qt and ot respectively. Definition 1 (hidden Markov model). A (discrete) hidden Markov model (HMM) is a tuple λ = (Q, π, A, O, B) where Q is a finite set of states; π is a distribution on Q,
the initial distribution; $A : Q \times Q \to [0,1]$ is the state transition matrix, with $A_{ij} = P(q_{t+1} = j \mid q_t = i)$ and $\sum_{j \in Q} A_{ij} = 1$; $O$ is a finite set of possible observations; and $B : Q \times O \to [0,1]$ is the observation probability matrix, with $B_{ik} = P(o_t = k \mid q_t = i)$ and $\sum_{k \in O} B_{ik} = 1$.

HMMs provide the computational trust community with several obvious advantages: they are widely used in scientific applications, and they come equipped with efficient algorithms for computing the probabilities of events and for parameter estimation (cf. [18]), the chief problem for probabilistic trust management. It is worth noticing that an HMM is a generalisation of the Beta model. Indeed, in the context of computational trust, representing the behaviour of a principal p by an HMM $\lambda_p$ provides a different distribution $B_j$ over $O$ for each possible state $j$ of $p$. In particular, the states of $\lambda_p$ can be seen as a collection of independent Beta models, the transitions between which are governed by the Markov chain formed by $\pi$ and $A$, as principal p switches its internal state. According to the above definition of HMM, the probability of a sequence of outcomes $h = o_1 o_2 \cdots o_n$ given an HMM $\lambda$ is given by the following equation.

$$ P(h \mid \lambda) = \sum_{q_1, \ldots, q_n \in Q} \pi(q_1)\, B_{q_1 o_1}\, A_{q_1 q_2}\, B_{q_2 o_2} \cdots A_{q_{n-1} q_n}\, B_{q_n o_n} $$
The above probability is evaluated efficiently by an algorithm called the forward-backward algorithm. One instance of this algorithm, called the forward instance, is based on inductively (on time t) evaluating the forward variable $\alpha_t(j) = P(o_1 o_2 \cdots o_t,\, q_t = j \mid \lambda)$, that is, the joint probability that the partial sequence $o_1 o_2 \cdots o_t$ is observed and the state at time t is j. The required probability $P(h \mid \lambda)$ is then obtained by

$$ P(h \mid \lambda) = \sum_{j \in Q} \alpha_n(j) $$
Alternatively, $P(h \mid \lambda)$ can be obtained using the backward instance of the algorithm, where the backward variable $\beta_t(j) = P(o_{t+1} o_{t+2} \cdots o_n \mid q_t = j, \lambda)$ is inductively (on time t) evaluated. More details on these instances of the forward-backward algorithm can be found in [18]. Another major problem of HMMs is to find the model $\lambda$ which maximises the above probability of a sequence h. This problem has been addressed by Baum and his colleagues, whose efforts resulted in the Baum-Welch algorithm [2,18].
[Fig. 1. Example Hidden Markov Model: a two-state HMM over the observation set O = {s, f}, with initial distribution π1 = π2 = 0.5, observation probabilities B(1, s) = 0.95, B(1, f) = 0.05, B(2, s) = 0.05, B(2, f) = 0.95, and transition probabilities 0.1 (from state 1 to state 2) and 0.12 (from state 2 to state 1).]
This algorithm iteratively estimates the parameters of an HMM λ which maximises the probability of a given sequence of outcomes h. One limitation of this algorithm is that it finds a local maximum in the model space rather than the global one.

Example 1. Figure 1 shows a two-state HMM over the observation set {s, f}. Both states are relatively stable; that is, the probabilities of making the transitions 1 → 2 and 2 → 1 are relatively small (0.1 and 0.12, respectively). Also, at state 1 it is very likely to observe s (with probability 0.95), whereas at state 2 it is very likely to observe f (with probability 0.95). This HMM describes the behaviour of a stable principal whose internal state is unlikely to change.

In the area of trust, we remark that Markovian models have also been used in [10] to model the evolution of trust in the users of collaborative information systems. However, in our work, HMMs model the principal's behaviour upon which trust is computed.
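As an illustration of the forward instance of the forward-backward algorithm, the following Python sketch computes P(h | λ) for the two-state HMM of Figure 1. It is our own minimal example, not code from the paper.

```python
import numpy as np

def forward(pi, A, B, obs):
    """Forward variables alpha_t(j) = P(o_1 ... o_t, q_t = j | lambda).
    pi: initial distribution, A: transition matrix, B: observation matrix,
    obs: list of observation indices."""
    alpha = pi * B[:, obs[0]]                   # alpha_1(j) = pi(j) * B_{j,o_1}
    table = [alpha]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]           # induction on time t
        table.append(alpha)
    return np.array(table)

def sequence_probability(pi, A, B, obs):
    """P(h | lambda) = sum_j alpha_n(j)."""
    return forward(pi, A, B, obs)[-1].sum()

# The HMM of Figure 1: states {1, 2}, observations s = 0, f = 1.
pi = np.array([0.5, 0.5])
A  = np.array([[0.90, 0.10],
               [0.12, 0.88]])
B  = np.array([[0.95, 0.05],
               [0.05, 0.95]])
print(sequence_probability(pi, A, B, [0, 0, 1]))   # P(s s f | lambda)
```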
4 HMM-Based Trust Model

As described in the introduction, HMM-based trust relies on approximating the behaviour of any given principal by a finite-state HMM η called the approximate behaviour model. The approximate behaviour model is then used to estimate the predictive probability distribution. In order to precisely define this model, it is required to define a method for computing the approximate behaviour model η, and also for estimating the predictive probability distribution using η. As a general notation used in these definitions, we write the probability of any random variable ζ under a given probabilistic model R as $P(\zeta \mid R)$. For computing η, the maximum likelihood criterion is adopted as follows. Let $y = y_0 y_1 \cdots y_{\ell-1}$ be an observed sequence of outcomes of interactions with a given principal, where $\ell$ is an arbitrary length. Let also $R_n$ denote any n-state HMM. Then, using the sequence y, the n-state approximate behaviour model η is obtained by the following equation.

$$ \eta = \operatorname*{argmax}_{R_n} P(h_\ell = y \mid R_n) \qquad (5) $$
That is, η is the n-state HMM under which the probability of the given history y is maximised. The HMM model η can therefore be obtained by the Baum-Welch algorithm, which is described briefly in Section 3 and detailed in [2,18]. Now we address the problem of estimating the predictive probability distribution given a particular sequence of outcomes. Let $h_\ell = o_0 o_1 \cdots o_{\ell-1}$ be a random variable representing any sequence of observed outcomes of interaction with the principal p, where $o_0$ and $o_{\ell-1}$ represent respectively the least and the most recent outcomes, and $\ell$ is an arbitrary length. Extending this notation to future outcomes, the outcome of the next interaction with p is denoted by $o_\ell$. Note that each outcome $o_i$ is therefore a random variable representing the outcome at time i. Let also $O = \{1, 2, \ldots, \kappa\}$ be the alphabet of each single outcome. Using the n-state approximate behaviour HMM η defined by Equation (5), the estimated predictive probability distribution given a particular sequence of outcomes w is denoted by $H_\eta(\cdot \mid w)$ and defined by the following equation.
$$ H_\eta(z \mid w) = P(o_\ell = z \mid h_\ell = w, \eta) = \frac{P(h_\ell = w,\, o_\ell = z \mid \eta)}{P(h_\ell = w \mid \eta)} \qquad (6) $$

where z ∈ O. The above probabilities are efficiently evaluated by the forward-backward algorithm briefly described in Section 3 and detailed in [18].
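For a concrete reading of Equation (6): since $P(h_\ell = w, o_\ell = z \mid \eta)$ is just the probability of the extended sequence w · z, the estimated predictive distribution can be obtained from two forward-algorithm evaluations. The sketch below is our own illustration; the parameters of η shown are placeholders, not a model fitted by Baum-Welch.

```python
import numpy as np

def seq_prob(pi, A, B, obs):
    """P(obs | eta) via the forward algorithm."""
    alpha = pi * B[:, obs[0]]
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]
    return alpha.sum()

def predictive_distribution(pi, A, B, w):
    """Equation (6): H_eta(z | w) = P(w.z | eta) / P(w | eta) for each z in O."""
    p_w = seq_prob(pi, A, B, w)
    n_obs = B.shape[1]
    return np.array([seq_prob(pi, A, B, w + [z]) for z in range(n_obs)]) / p_w

# A hypothetical 2-state approximate model eta over O = {s = 0, f = 1}.
pi = np.array([0.5, 0.5])
A  = np.array([[0.9, 0.1], [0.2, 0.8]])
B  = np.array([[0.9, 0.1], [0.2, 0.8]])
print(predictive_distribution(pi, A, B, [0, 0, 1]))   # sums to 1
```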
5 Consistency of Maximum Likelihood Estimation

Like other existing probabilistic trust models, the objective of the HMM-based trust model is to estimate the predictive probability distribution for a given principal p, that is, the probability of each possible outcome of the next interaction with p. Therefore it is a fundamental requirement that the approximate behaviour model η computed for p is chosen such that the error of this estimation is minimised. To analyse this error, we need to model the real behaviour of the principal p. This allows expressing the real predictive probability distribution of p. The estimation error can then be evaluated as the difference between the real and estimated predictive probability distributions. In this section it is shown that the maximum likelihood criterion, defined by Equation (5), for choosing the approximate behaviour model provides a consistent method to minimise the estimation error.

5.1 Modelling the Real System

In this work we are interested in studying systems which exhibit a dynamic behaviour, that is, systems that change their behaviour over time. We mathematically model the behaviour of the system at any time by a particular probability distribution over possible outcomes. A system p with a dynamic behaviour can therefore be modelled by a multiple-state transition system where each state exhibits a particular behaviour (probability distribution). This naturally leads to choosing a generic hidden Markov model (HMM) λ as the real model of p's behaviour. Here the state of the real model λ at the time of observing $o_i$ is denoted by the random variable $q_i$. Thus, given that the current underlying state is x, i.e. $q_{\ell-1} = x$, we can compute the real predictive probability distribution, denoted by $P(\cdot \mid x, \lambda)$, that is, the probability of each possible next observation $z \in O$, using the following equation.

$$ P(z \mid x, \lambda) = P(o_\ell = z \mid q_{\ell-1} = x, \lambda) = \sum_{y \in Q_\lambda} P(q_\ell = y \mid q_{\ell-1} = x, \lambda)\, P(o_\ell = z \mid q_\ell = y, \lambda) = \sum_{y \in Q_\lambda} (A_\lambda)_{xy} (B_\lambda)_{yz} \qquad (7) $$

where $Q_\lambda$, $A_\lambda$, and $B_\lambda$ are respectively the set of states, the state transition matrix, and the observation probability matrix of λ. We shall also work under the hypothesis that λ is ergodic. This corresponds to demanding that the Markov chain underlying λ is irreducible and aperiodic (more details on these properties can be found in [9,17,3]).
5.2 The Estimation Error

In this paper the relative entropy measure [6] is used for evaluating the difference between the real and estimated predictive probability distributions, given by Equations (7) and (6) respectively. Namely, given a sequence of outcomes $h_\ell = w$ and the current state $q_{\ell-1} = x$, this difference measure is written as follows.

$$ D\big( P(\cdot \mid x, \lambda) \,\|\, H_\eta(\cdot \mid w) \big) = \sum_{z \in O} P(z \mid x, \lambda) \log \frac{P(z \mid x, \lambda)}{H_\eta(z \mid w)} \qquad (8) $$

The above difference can be seen as the estimation error given a particular current state $q_{\ell-1}$ of λ and the sequence of outcomes $h_\ell$. Hence we define the expected estimation error as the expected relative entropy between the real and estimated predictive probability distributions, where the expectation is evaluated over the underlying random variables $q_{\ell-1}$ and $h_\ell$. This error is denoted by $\mathrm{Error}_\ell(\lambda, H_\eta)$. Thus,

$$ \mathrm{Error}_\ell(\lambda, H_\eta) = E\Big[ D\big( P(\cdot \mid q_{\ell-1}, \lambda) \,\|\, H_\eta(\cdot \mid h_\ell) \big) \Big] \qquad (9) $$

Now we formally show that choosing the approximate behaviour model η by maximising the likelihood of a given sufficiently long sequence y (by Equation (5)) minimises the expected estimation error. Equation (9) can be written as follows.

$$ \mathrm{Error}_\ell(\lambda, H_\eta) = \sum_{w \in O^\ell} \sum_{x \in Q_\lambda} P(h_\ell = w,\, q_{\ell-1} = x \mid \lambda) \cdot D\big( P(\cdot \mid x, \lambda) \,\|\, H_\eta(\cdot \mid w) \big) \qquad (10) $$

Using Equation (8) we rewrite the above equation.

$$ \mathrm{Error}_\ell(\lambda, H_\eta) = \sum_{w \in O^\ell} \sum_{x \in Q_\lambda} P(h_\ell = w,\, q_{\ell-1} = x \mid \lambda) \cdot \sum_{z \in O} P(z \mid x, \lambda) \log \frac{P(z \mid x, \lambda)}{H_\eta(z \mid w)} \qquad (11) $$

Substituting $P(z \mid x, \lambda)$ and $H_\eta(z \mid w)$ using Equations (7) and (6) respectively, we write the above equation as follows.

$$ \mathrm{Error}_\ell(\lambda, H_\eta) = \sum_{w \in O^\ell} \sum_{x \in Q_\lambda} P(h_\ell = w,\, q_{\ell-1} = x \mid \lambda) \cdot \sum_{z \in O} P(o_\ell = z \mid q_{\ell-1} = x, \lambda) \log \frac{P(o_\ell = z \mid q_{\ell-1} = x, \lambda)}{P(o_\ell = z \mid h_\ell = w, \eta)} $$
$$ = \sum_{w \in O^\ell} \sum_{x \in Q_\lambda} \sum_{z \in O} P(o_\ell = z \mid q_{\ell-1} = x, \lambda) \cdot P(h_\ell = w,\, q_{\ell-1} = x \mid \lambda) \log \frac{P(o_\ell = z \mid q_{\ell-1} = x, \lambda)}{P(o_\ell = z \mid h_\ell = w, \eta)} \qquad (12) $$

Since the next outcome $o_\ell$ depends only on the current state $q_{\ell-1}$, regardless of the history sequence $h_\ell$, we have

$$ P(o_\ell = z \mid q_{\ell-1} = x, \lambda) = P(o_\ell = z \mid h_\ell = w,\, q_{\ell-1} = x, \lambda) \qquad (13) $$

Thus Equation (12) becomes

$$ \mathrm{Error}_\ell(\lambda, H_\eta) = \sum_{w \in O^\ell} \sum_{x \in Q_\lambda} \sum_{z \in O} P(o_\ell = z \mid h_\ell = w,\, q_{\ell-1} = x, \lambda) \cdot P(h_\ell = w,\, q_{\ell-1} = x \mid \lambda) \log \frac{P(o_\ell = z \mid q_{\ell-1} = x, \lambda)}{P(o_\ell = z \mid h_\ell = w, \eta)} $$
$$ = \sum_{w \in O^\ell} \sum_{x \in Q_\lambda} \sum_{z \in O} P(o_\ell = z,\, h_\ell = w,\, q_{\ell-1} = x \mid \lambda) \log \frac{P(o_\ell = z \mid q_{\ell-1} = x, \lambda)}{P(o_\ell = z \mid h_\ell = w, \eta)} \qquad (14) $$

The above equation can be simplified to the following equation.

$$ \mathrm{Error}_\ell(\lambda, H_\eta) = E\big[ \log P(o_\ell \mid q_{\ell-1}, \lambda) \big] - E\big[ \log P(o_\ell \mid h_\ell, \eta) \big] \qquad (15) $$

Observe that the first term in the above equation depends only on the real behaviour model λ, while the second term depends on both the real and approximate behaviour models λ and η. Denoting the first and second terms respectively by $C_\ell(\lambda)$ and $H_\ell(\lambda, \eta)$, we rewrite the above equation as follows.

$$ \mathrm{Error}_\ell(\lambda, H_\eta) = C_\ell(\lambda) - H_\ell(\lambda, \eta) \qquad (16) $$

Assuming that $(A_\eta)_{ij} > 0$, that is, the state transition probabilities of η are strictly positive, it has been proved by Baum and Petrie in [1] that the following limit exists.

$$ \lim_{\ell \to \infty} H_\ell(\lambda, \eta) = H(\lambda, \eta) \qquad (17) $$

Observe also that the limit $\lim_{\ell \to \infty} C_\ell(\lambda) = C(\lambda)$ exists. This is because the ergodicity of λ implies that the distribution of the random variable $q_{\ell-1}$ converges to a stationary (fixed) distribution, according to which the expectation $E[\log P(o_\ell \mid q_{\ell-1}, \lambda)]$ is evaluated. The convergence of both $C_\ell(\lambda)$ and $H_\ell(\lambda, \eta)$ implies the convergence of the estimation error (as $\ell \to \infty$) to an asymptotic estimation error, denoted by $\mathrm{Error}(\lambda, H_\eta)$ and expressed as follows.

$$ \mathrm{Error}(\lambda, H_\eta) = C(\lambda) - H(\lambda, \eta) \qquad (18) $$

Also, by Theorem 3.2 in [1], the log-probability of any observation sequence $h_\ell$ is related to $H(\lambda, \eta)$ as follows.

$$ \frac{1}{\ell} \log P(h_\ell \mid \eta) \xrightarrow{\;\text{a.s.}\;} H(\lambda, \eta) \qquad (19) $$
The above equation means that the log-probability of a random sequence $h_\ell$ under the approximate model η, divided by its length $\ell$, converges almost surely to $H(\lambda, \eta)$. Here 'almost surely' (also known as 'almost everywhere' and 'with probability 1') convergence means that the probability that the function $\frac{1}{\ell} \log P(h_\ell \mid \eta)$ converges to the above limit is 1. That is,

$$ P\left( \lim_{\ell \to \infty} \frac{1}{\ell} \log P(h_\ell \mid \eta) = H(\lambda, \eta) \right) = 1 $$

Equation (19) implies that choosing an approximate model η which maximises the probability of a sufficiently long sequence $h_\ell$ almost surely maximises $H(\lambda, \eta)$, and therefore reduces the asymptotic estimation error given by Equation (18). Thus, the maximum data likelihood criterion, expressed by Equation (5), is a consistent method to obtain the approximate behaviour model, which is used to estimate the predictive probability distribution.
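To make the quantity being minimised tangible, the next sketch evaluates the real predictive distribution of Equation (7) and the relative-entropy error of Equation (8) for a single state/history pair. It is our own illustrative code; the 'estimated' distribution passed in is just a placeholder.

```python
import numpy as np

def real_predictive(A_lam, B_lam, x):
    """Equation (7): P(z | x, lambda) = sum_y (A_lambda)_{xy} (B_lambda)_{yz}."""
    return A_lam[x] @ B_lam

def relative_entropy(p, q):
    """Equation (8): D(p || q) = sum_z p(z) log(p(z) / q(z)), natural log here."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

A_lam = np.array([[0.90, 0.10], [0.12, 0.88]])      # the HMM of Figure 1
B_lam = np.array([[0.95, 0.05], [0.05, 0.95]])
p = real_predictive(A_lam, B_lam, x=0)              # real distribution, state 1
q = np.array([0.7, 0.3])                            # some estimate H_eta(. | w)
print(p, relative_entropy(p, q))                    # estimation error for this state
```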
6 Comparison with Beta-Based Trust with Decay Principle

In this section we contrast the HMM-based trust model described above against the existing Beta trust model with exponential decay, described in [12] and Section 2, in terms of the expected estimation error. Here the estimation error is defined as the relative entropy between the real and estimated predictive probability distributions. In Section 5.2 above, we used the results obtained by Baum and Petrie in [1] to derive an expression for the expected estimation error (see Equation (16)). It appears difficult to evaluate this error analytically, or even numerically. So we use a simulation framework for HMMs to simulate the real model and adopt Monte Carlo methods to evaluate the estimation error using both the HMM-based and Beta-based trust models, and thereby perform the comparison.

6.1 Evaluation of Estimation Error Using Monte Carlo Simulation

In general, any probabilistic trust model is described by an estimating algorithm $A_\sigma$, with a parameter σ. The estimating algorithm is fed with any observation sequence h generated by the real system λ and computes an estimated predictive probability distribution denoted by $A_\sigma(\cdot \mid h)$. In the case of the Beta trust model, the estimating algorithm is denoted by $B_r$, where the parameter r is the decay factor, and the estimated predictive probability distribution $B_r(\cdot \mid h)$ is evaluated by Equations (2). In the case of the HMM-based trust model, on the other hand, the estimating algorithm is denoted by $H_\eta$, where the parameter η is an approximate behaviour HMM. Note that the parameter η is obtained by maximising the probability of a sufficiently long sequence y generated by λ, as shown in Section 4. The estimated predictive probability distribution $H_\eta(\cdot \mid h)$ is evaluated by Equation (6). Given a real HMM model λ, let the random variable $h_\ell$ denote any generated sequence of observations of length $\ell$. Let also the random variable $q_\ell$ denote the underlying hidden state sequence. Given an estimating algorithm $A_\sigma$ (e.g. $B_r$ or $H_\eta$), the expected estimation error using $A_\sigma$ is given by the following equation.

$$ \mathrm{Error}(\lambda, A_\sigma) = E\Big[ D\big( P(\cdot \mid q_\ell, \lambda) \,\|\, A_\sigma(\cdot \mid h_\ell) \big) \Big] \qquad (20) $$
The above expected error can be approximated by the following Monte-Carlo procedure.

1. Simulate the real model λ to generate a large sample $S_m$ of size m: $S_m = \{(w_1, u_1), (w_2, u_2), \ldots, (w_m, u_m)\}$, where $w_j$ and $u_j$ are respectively the observation sequence and the underlying state sequence generated in the j-th simulation run.
2. For each pair $(w_j, u_j)$:
   (a) compute both $P(\cdot \mid u_j, \lambda)$ and $A_\sigma(\cdot \mid w_j)$, that is, the real and estimated predictive probability distributions, respectively;
   (b) evaluate the estimation error, denoted by $e_j$, as
   $$ e_j = D\big( P(\cdot \mid u_j, \lambda) \,\|\, A_\sigma(\cdot \mid w_j) \big) \qquad (21) $$
3. Approximate the required expected estimation error by evaluating the sample average.
   $$ \mathrm{Error}(\lambda, A_\sigma) \approx \frac{1}{m} \sum_{j=1}^{m} e_j \qquad (22) $$

The above approximation of the expected estimation error by the sample average is based on the law of large numbers. Note that the approximation error can be made arbitrarily small by making the sample size m sufficiently large.

6.2 Experiments

Throughout our comparison we use a 4-state real model λ with the observation alphabet O = {1, 2}. The observation probability matrix is

$$ B_\lambda = \begin{bmatrix} 1.0 & 0.0 \\ 0.7 & 0.3 \\ 0.3 & 0.7 \\ 0.0 & 1.0 \end{bmatrix} \qquad (23) $$

and the state transition matrix is

$$ A_\lambda = \begin{bmatrix} s & \frac{1-s}{3} & \frac{1-s}{3} & \frac{1-s}{3} \\ \frac{1-s}{3} & s & \frac{1-s}{3} & \frac{1-s}{3} \\ \frac{1-s}{3} & \frac{1-s}{3} & s & \frac{1-s}{3} \\ \frac{1-s}{3} & \frac{1-s}{3} & \frac{1-s}{3} & s \end{bmatrix} \qquad (24) $$

where the parameter s is called the system stability, which indicates the tendency of the system to stay in the same state rather than transit to a different one.
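The Monte-Carlo procedure of Section 6.1, applied to the model of Equations (23)-(24), can be prototyped in a few lines. The following Python sketch is our own simplified reconstruction (with far fewer runs than the 10,000-sample experiments reported below) and uses the Beta estimator with decay as the estimating algorithm A_sigma; a fitted HMM estimator could be plugged in the same way.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_model(s):
    """The 4-state real model of Equations (23)-(24), stability parameter s."""
    B = np.array([[1.0, 0.0], [0.7, 0.3], [0.3, 0.7], [0.0, 1.0]])
    A = np.full((4, 4), (1 - s) / 3)
    np.fill_diagonal(A, s)
    return A, B

def simulate(A, B, length):
    """One run of lambda: returns (observations, state underlying the last one)."""
    q = rng.integers(len(A))                        # uniform initial state
    obs, last = [], None
    for _ in range(length):
        obs.append(int(rng.choice(len(B[q]), p=B[q])))
        last = q
        q = rng.choice(len(A), p=A[q])
    return obs, last

def rel_entropy(p, q):
    p, q = np.asarray(p, float), np.asarray(q, float)
    m = p > 0
    return float(np.sum(p[m] * np.log(p[m] / q[m])))

def beta_decay_estimator(r):
    """Beta estimate with decay r (Equations (2)-(3)); observation 0 plays 's'."""
    def estimate(w):
        weights = r ** np.arange(len(w) - 1, -1, -1)
        m = weights[np.array(w) == 0].sum()
        n = weights[np.array(w) == 1].sum()
        return np.array([m + 1, n + 1]) / (m + n + 2)
    return estimate

def expected_error(A, B, estimator, length=300, runs=200):
    """Steps 1-3 of the Monte-Carlo procedure: sample average of the e_j."""
    errs = []
    for _ in range(runs):
        w, x = simulate(A, B, length)
        real = A[x] @ B                             # Equation (7)
        errs.append(rel_entropy(real, estimator(w)))
    return float(np.mean(errs))

A, B = make_model(s=0.3)                            # an unstable system
print(expected_error(A, B, beta_decay_estimator(r=0.99)))
```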
Fig. 2. Beta and HMM estimation errors versus decay factor given stability < 0.5
In the following experiments, we study the effect of the system stability on both the Beta estimation with a decay factor and the HMM-based estimation. For simplicity we confine our HMM-based trust model to using only 2-state approximate behaviour models. We also base our trust estimation on sequences of length 300. For different stability values 0 ≤ s < 1 and decay values 0 ≤ r ≤ 1, we apply the Monte-Carlo procedure described above to evaluate the expected estimation error using both the Beta ($B_r$) and HMM ($H_\eta$) trust algorithms. Each generated sample is of size 10000. Figure 2 shows the Beta and HMM estimation errors when the system λ is unstable (s < 0.5). It is apparent that the minimum value of the Beta error is obtained when the decay tends to 1. The reason for this is that an unstable system is relatively unlikely to stay in the same state, and therefore unlikely to preserve the previous distribution over observations. If the estimation uses low values for the decay, then the resulting estimate for the predictive probability distribution is close to the previous distribution; due to the instability, this is unlikely to be the same at the next time instant. On the other hand, using a decay r tending to 1 weighs all previous observations equally, and the resulting
probability distribution is expected to be the average of the distributions exhibited by the model states. Such an average provides a better estimate for the predictive probability distribution than approximating the distribution of the most recent set of states using low decay values. It is also apparent that the HMM estimation error is lower than the Beta estimation error. The reason is that the 2-state HMM η is a more flexible model for approximating the real HMM λ than the Beta model, which is, with decay 1, equivalent to a 1-state HMM. It is worth noting that when the stability is 0.25, the minimum expected Beta error is 0, obtained when the decay is 1. The HMM estimation error is also approximately 0. In this case all elements of the transition matrix $A_\lambda$ are equal, and therefore the whole behaviour can effectively be modelled by a single probability distribution over observations. This single probability distribution is perfectly approximated by taking the whole history into account, using the Beta model with decay 1, and also with a 2-state HMM in which both states are equivalent. Figure 3 shows the Beta and HMM estimation errors when the system λ is stable (stability > 0.5). Observe that both the Beta (with decay 1) and HMM estimation errors are
Fig. 3. Beta and HMM estimation errors versus decay factor given stabilities 0.6, 0.7, 0.8, and 0.9
increasing as the stability gets higher. The reason is that, at relatively high stability, old observations become irrelevant to the current behaviour, which determines the real predictive probability distribution. Hence, the estimation based on the whole history, using the HMM or Beta with decay 1, is worse than the estimation with the same parameters when the system is unstable, where both old and recent outcomes are relevant to the current behaviour. Observe also, in the cases of high stability, that the HMM-based estimation is better than the Beta estimation for most values of the decay. However, for a particular range of decay values, the Beta estimation is slightly better than the HMM estimation. Using any decay value in this range for the Beta estimation has the effect of considering only relatively recent outcomes, which characterise the current system behaviour and therefore give a better estimation of the predictive distribution. Although using any value from this specific range of decay makes the Beta estimation better than the HMM estimation, it appears hard to formally determine this range given only observations. When the stability is 1, the assumption of irreducibility is violated (see Section 5.1). In this case any sequence y of observations characterises only one single state, and therefore the approximate behaviour model η trained on y fails to approximate the whole behaviour of the real system.
7 Conclusion

In this paper we introduced the foundations of the HMM-based trust model. This model is based on approximating the behaviour of a principal by the n-state HMM η which maximises the likelihood of the available history of observations. The approximate behaviour model η is then used to evaluate the estimated predictive probability distribution given any sequence of observations. Modelling the real dynamic behaviour of principals by hidden Markov models, and using the results obtained by Baum and Petrie in [1], we justified the consistency of the HMM-based trust model. This justification relies on showing that maximising the likelihood of a given observation sequence minimises the relative entropy between the real and estimated predictive probability distributions. To assess the estimation quality of a particular trust algorithm, we use the notion of expected estimation error, that is, the expected difference between the real and estimated predictive probability distributions. Since we have no means yet to evaluate the expected estimation error expressed by Equation (18) for the HMM-based trust model using analytical or numerical methods, we use a Monte-Carlo algorithm, described in Section 6.1, for evaluating the expected estimation error. Using an implementation of this algorithm, and adopting the relative entropy as a measure of the estimation error, we performed an experimental comparison between the HMM-based trust algorithm and the Beta-based trust algorithm with an exponential decay scheme. The results of this comparison are given in Section 6.2. These results show that the HMM-based trust algorithm gives a better estimation of the predictive probability distribution when the principal's behaviour is highly dynamic. When the real behaviour is more stable (less dynamic), the Beta-based algorithm with the optimal value of the decay gives a slightly better estimation than the HMM-based algorithm.
References

1. Baum, L.E., Petrie, T.: Statistical inference for probabilistic functions of finite-state Markov chains. Annals of Mathematical Statistics 37(6), 1554–1563 (1966)
2. Baum, L.E., Petrie, T., Soules, G., Weiss, N.: A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains. The Annals of Mathematical Statistics 41(1), 164–171 (1970)
3. Brémaud, P.: Markov chains: Gibbs fields, Monte Carlo simulation, and queues. Springer, Heidelberg (1998)
4. Buchegger, S., Le Boudec, J.-Y.: A Robust Reputation System for Peer-to-Peer and Mobile Ad-hoc Networks. In: P2PEcon 2004 (2004)
5. Cahill, V., Gray, E., Seigneur, J.-M., Jensen, C.D., Chen, Y., Shand, B., Dimmock, N., Twigg, A., Bacon, J., English, C., Wagealla, W., Terzis, S., Nixon, P., di Marzo Serugendo, G., Bryce, C., Carbone, M., Krukow, K., Nielsen, M.: Using trust for secure collaboration in uncertain environments. IEEE Pervasive Computing 2(3), 52–61 (2003)
6. Cover, T.M., Thomas, J.A.: Elements of Information Theory, 2nd edn. Wiley Series in Telecommunications and Signal Processing. Wiley Interscience, Hoboken (2006)
7. ElSalamouny, E., Krukow, K., Sassone, V.: An analysis of the exponential decay principle in probabilistic trust models. Theoretical Computer Science 410(41), 4067–4084 (2009)
8. Gambetta, D.: Can We Trust Trust? Basil Blackwell (1988)
9. Grimmet, G., Stirzaker, D.: Probability and Random Processes, 3rd edn. Oxford University Press, Oxford (2001)
10. Javanmardi, S., Lopes, C.V.: Modeling trust in collaborative information systems. In: International Conference on Collaborative Computing: Networking, Applications and Worksharing, pp. 299–302 (2007)
11. Jøsang, A., Haller, J.: Dirichlet reputation systems. In: The Second International Conference on Availability, Reliability and Security (ARES 2007), pp. 112–119 (2007)
12. Jøsang, A., Ismail, R.: The beta reputation system. In: Proceedings from the 15th Bled Conference on Electronic Commerce, Bled (2002)
13. Krukow, K., Nielsen, M., Sassone, V.: Trust models in Ubiquitous Computing. Philosophical Transactions of the Royal Society A 366(1881), 3781–3793 (2008)
14. Kullback, S., Leibler, R.A.: On information and sufficiency. Annals of Mathematical Statistics 22(1), 79–86 (1951)
15. Mui, L., Mohtashemi, M., Halberstadt, A.: A computational model of trust and reputation (for ebusinesses). In: Proceedings from 5th Annual Hawaii International Conference on System Sciences (HICSS 2002), p. 188. IEEE, Los Alamitos (2002)
16. Nielsen, M., Krukow, K., Sassone, V.: A Bayesian model for event-based trust. In: Festschrift in honour of Gordon Plotkin. Electronic Notes in Theoretical Computer Science (2007)
17. Norris, J.R.: Markov chains. Cambridge University Press, Cambridge (1997)
18. Rabiner, L.R.: A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE 77(2), 257–286 (1989)
19. Sassone, V., Krukow, K., Nielsen, M.: Towards a formal framework for computational trust. In: de Boer, F.S., Bonsangue, M.M., Graf, S., de Roever, W.-P. (eds.) FMCO 2006. LNCS, vol. 4709, pp. 175–184. Springer, Heidelberg (2007)
20. Sivia, D.S.: Data Analysis: A Bayesian Tutorial (Oxford Science Publications). Oxford University Press, Oxford (1996)
21. Teacy, W., Patel, J., Jennings, N., Luck, M.: Travos: Trust and reputation in the context of inaccurate information sources. Autonomous Agents and Multi-Agent Systems 12(2), 183–198 (2006)
22. Xiong, L., Liu, L.: PeerTrust: Supporting reputation-based trust for peer-to-peer electronic communities. IEEE Transactions on Knowledge and Data Engineering 16(7), 843–857 (2004)
Deriving Trust from Experience

Florian Eilers and Uwe Nestmann

Technische Universität Berlin, Germany
[email protected], [email protected]
Abstract. In everyday life, trust is largely built from experience. Reputation-based trust models have been developed to formalize this concept. Their application to networks like the Internet, where a very large number of predominantly unknown principal identities engage in interactions, is appealing, considering that the evaluation of trusted experience may result in a more successful choice of trusted parties to interact with. In this paper we pick the SECURE framework, as developed within the EU project of the same name on Global Computing, which builds upon event structures to model possible outcomes of interactions. We extend it by three concepts: (i) a flexible way to determine a degree of trust from given past behavior, (ii) a basic notion of context, exemplarily in the form of roles the interacting parties may occupy, and (iii) we explicitly equip observed events with a time component to refine the granularity of observations. We extend definitions of concepts used in SECURE in order to incorporate our notion of context information, and we provide the syntax and semantics of an LTL-like logic, similar in its basics to the one proposed by Krukow, Nielsen and Sassone, that allows for layered reasoning about context information. We then show how this new language relates to the one used in SECURE, and we determine under which conditions our concept of deriving trust from experience may be used within SECURE's computational model to obtain a global state of trust.
1 Introduction
A model for trust and experience. The SECURE framework, as described in [Kru06], introduces so-called trust structures to represent different degrees of trust and relations between them. A trust structure is a triple T = (D, ⪯, ⊑) consisting of a set D of trust values ordered by two partial orderings: the trust ordering (⪯) and the information ordering (⊑). The trust ordering simply sorts trust values by the level of trust they assert (a ⪯ b means that b denotes at least as much trust as a), while the information ordering models a refinement w.r.t. the amount of available information (a ⊑ b means that a can be refined into b should more information become available). In SECURE, the trust of one individual in another is always represented by a single trust value. There are a few side conditions, such as the orderings forming lattices. While they are all important for the calculation of a global trust state (intuitively a matrix that holds all the information of "who trusts whom to what degree" for a given set of principal identities), it is only the lattice property that is important to us here.
Gathering experience is understood as the observation of events from a given mathematical structure. A prime event structure is a triple (E, ≤, #) consisting of a set E of events that are partially ordered by ≤, the necessity relation (or causality relation), and a binary, symmetric, irreflexive relation # ⊂ E × E, called the conflict relation. The relations satisfy, for all e, e′, e″ ∈ E:

[e] =def {e′ ∈ E | e′ ≤ e} is finite,   and   if e # e′ and e′ ≤ e″ then e # e″.
Local interaction histories as defined in [Kru06] hold all the information gathered during interactions with another principal identity. Consequently there is a local interaction history for each principal identity. They are sequences of configurations (i.e. sets of events that are free of conflict and consistent with respect to the necessity relation), each of which models information about a "session", that is, (partial) knowledge about the outcome of an interaction.

Calculating a trust value. The SECURE trust model offers a powerful tool to reason about these local interaction histories. In [KNS05] Krukow, Nielsen and Sassone describe a Linear Temporal Logic that contains events from an underlying event structure as atoms, conjoined with the classical connectives and pure-past temporal modalities. They use this logic to make binary decisions, i.e. whether to interact with an entity or not. They also show a way to calculate a trust value from a local interaction history, which requires the trust structure to consist of triples that count good, bad and neutral outcomes of interactions. In the following we propose a way to use this logic to obtain a trust value, without the constraint of a fixed trust structure that consists purely of ordered triples. Seeing how our whole approach is largely motivated by real-life situations, it seems reasonable to ask: "How does one solve the problem of deciding upon a degree of trust given some previous experience in real life?" The most obvious way is probably to ask oneself "Am I justified in trusting them this much?" for every conceivable degree of trust and then to pick the one that appears most appropriate. Taking this concept over to SECURE would mean to define for each trust value a condition under which we are justified to trust this much and then, from the set of all justified trust values, pick the one that is "most appropriate". In order to explore how "most appropriate" could be formalized in this context, we consider a few distinct cases. If only one trust value is justified then the choice is trivial. Consider a set of justified trust values {a, b}. Obviously, if we trust enough to justify a and b, and in our trust structure a ⪯ b¹ holds, then we should pick b. In such cases the answer which one to pick is rather easy, and the method can be extended to ⪯-ordered sets of finite size by picking the supremum of the set. But what should happen if a and b from our example are unrelated by ⪯? As an example, consider a file system with trust values restricted, read, write and random, where random also includes deletion of files, which is not included in either read or write. Both trust- and information-wise, read and write are greater than restricted and less than random, while being unrelated to each
¹ ⪯ being the trust ordering of the trust structure we are considering.
other. Now assume we are justified to trust a principal with reading and writing, i.e. J = {restricted, read, write}, but not with deletion. If we had to name a single trust value that defines the trust we have in them, we would not know what to do: "restricted" would obviously be wrong, "read" or "write" would both be inaccurate and we would not know which of the two to choose, while "random" would grant the principal too many rights. How can such a situation occur in general? It would mean that in our trust structure we had two distinct unrelated trust values for which there is no justified trust value that incorporates the trust of both lower trust values. In this case we can rightfully say that our trust structure is "wrong": under the assumption that it should always be possible to model the trust in a principal with a single trust value (which is one of the main principles in SECURE), this trust structure is not adequate for our scenario. We would either have to change our reasoning about justified trust values (i.e. restrict the user to either reading or writing) or change the trust structure itself, because it obviously lacks a trust value for our asserted degree of trust (i.e. allow reading and writing, but not deletion). It therefore appears reasonable to demand that if we are justified in trusting with two trust values, then there has to be a justified trust value which incorporates the trust of both. Given this side condition it makes sense to pick the "trust-supremum" (using the supremum from the lattice (D, ⪯) of the justified trust values) to determine the representative trust value. The fact that this trust value does not necessarily lie within the set of justified trust values is not disconcerting, because it would mean that it was our reasoning about this set that was wrong in the first place.

Criteria for justified trust values. We now know how to pick a representative trust value given a set of justified trust values. But when is a trust value justified? When we are given a condition, we should be able to check whether it is met and thereby determine whether a trust value is justified or not. A standard solution is to use a logical formula and then to evaluate it within a concrete structure.

Contributions. The main contributions of this paper are the following: (i) We introduce a flexible, yet formal, way to calculate a trust value from a local interaction history by equipping trust values with user-defined conditions; (ii) We equip events from local interaction histories with additional context information; (iii) We refine local interaction histories by explicitly storing the time at which an event has been observed; (iv) We determine under which conditions our method can be applied in the SECURE framework.

Related work. There are a number of experience-based trust models, such as the EigenTrust model (see [KSGm03]) or PathTrust (see [KHKR06]). The SECURE trust model we base our work on can be found in an extensive compilation by Krukow in [Kru06]. Closely related is the concept of attestation, i.e. finding evidence to support the prediction of a certain behavior (see [CGL+08]). The field of trust and security is a prominent example for the application of modal logics. Different approaches include deontic and doxastic logics (see
f.ex. [CD97]) as well as temporal logics (see [KNS08]) which we follow in this paper. Other approaches for reasoning about trust include probabilistic logic (see [HP03] or [NKS07]).
2 Calculating Trust from Experience
Preliminaries. We briefly recall the logic by Krukow et al. to reason about local interaction histories as defined in [KNS05], with a few minor syntactic changes. The semantics of this logic is based on interpreting a local interaction history as a Kripke structure, which is done as follows: we start off with a local interaction history h with respect to an event structure ES, from which we define a corresponding Kripke structure K = (W, R) together with an assignment β for our atomic formulas, and call the pair K_{ES,h} = (K, β) a history structure.

Definition 1 (History Structure). Let ES = (E, ≤, #) be an event structure and h = c_0 ... c_n, n ∈ N, a local interaction history. Then K_{ES,h} = ((W, R), β) with W = {0, ..., n}, R = {(i, i+1) | i ∈ {0, ..., n−1}} and β : E → P(W) defined by β(e) = {i ∈ W | e ∈ c_i} is called the history structure with respect to ES and h.

We are now ready to define a logic to reason about local interaction histories. This is done in the standard way of defining a temporal logic. Events become the atoms of the logic, conjoined with the classical connectives and the temporal modalities X ("in the next step") and U ("until"). In SECURE the formulas of this logic are called policies, a term that is also used in its computational model to calculate a global trust state. In order to distinguish these two kinds of policies, and to emphasize what we use this language for, we pick the new term justification for such a formula together with an associated trust value.

Definition 2 (Justification Language). Let ES = (E, ≤, #) be an event structure. The justification language L(ES) is defined as follows:

Syntax:
– If e ∈ E, then e ∈ L(ES)
– If ϕ and ψ ∈ L(ES), then {¬(ϕ), (ϕ → ψ), X(ϕ), (ϕ U ψ)} ⊆ L(ES)

Semantics: Let K_{ES,h} = ((W, R), β) be a history structure and w ∈ W arbitrary. The truth of an L(ES)-formula is inductively defined as follows:
– (K_{ES,h}, w) ⊨ e ⇔ w ∈ β(e), for any e ∈ E
– (K_{ES,h}, w) ⊨ ¬(ϕ) ⇔ (K_{ES,h}, w) ⊭ ϕ, for any ϕ ∈ L(ES)
– (K_{ES,h}, w) ⊨ (ϕ → ψ) ⇔ (K_{ES,h}, w) ⊭ ϕ or (K_{ES,h}, w) ⊨ ψ, for any ϕ, ψ ∈ L(ES)
– (K_{ES,h}, w) ⊨ X(ϕ) ⇔ ∃w′ ∈ W : (w, w′) ∈ R ∧ (K_{ES,h}, w′) ⊨ ϕ, for any ϕ ∈ L(ES)
– (K_{ES,h}, w) ⊨ (ϕ U ψ) ⇔ ∃w′ ∈ W : (w, w′) ∈ r(t(R)) ∧ (K_{ES,h}, w′) ⊨ ψ ∧ ∀w″ ∈ W : ((w, w″) ∈ r(t(R)) ∧ (w″, w′) ∈ t(R)) → (K_{ES,h}, w″) ⊨ ϕ

where t(R) denotes the transitive closure and r(t(R)) the reflexive and transitive closure of the relation R.
Notation 1. For reasons of legibility, we are going to define the following standard abbreviations for formulas ϕ and ψ:

⊤ ≡ ¬(⊥)
⊥ ≡ (ϕ ∧ ¬(ϕ))
(ϕ ∨ ψ) ≡ ¬(¬(ϕ) ∧ ¬(ψ))
(ϕ → ψ) ≡ (¬ϕ ∨ ψ)
(ϕ ↔ ψ) ≡ ((ϕ → ψ) ∧ (ψ → ϕ))
F(ϕ) ≡ (⊤ U ϕ)
G(ϕ) ≡ ¬(F(¬(ϕ)))

Evaluating Experience. As mentioned before, in order to specify conditions for trust values, we bind formulas from this logic to trust values.

Definition 3 (Justification). Let ES be an event structure, TS = (D, ⪯, ⊑) a trust structure, d ∈ D a trust value and ϕ a formula in L(ES); then a pair J = (d, ϕ) is called a justification (with respect to ES and TS). A set of justifications τ with respect to ES and TS is called complete if it is a function τ : D → L(ES).

We have now defined a formal way to specify a condition under which a trust value is justified. The aforementioned "picking" of a single trust value is then done by choosing the supremum of the set of all justified trust values.

Definition 4 (History Evaluation). Let ES be an event structure, TS = (D, ⪯, ⊑) a trust structure, τ a complete set of justifications and h a local interaction history with respect to ES and TS respectively. We define the history evaluation τ* by:

$$ \tau^*(h) \stackrel{\text{def}}{=} \begin{cases} \bot & \text{if } h = \lambda \\ \bigsqcup \{ d \in D \mid (K_{ES,h}, 0) \models \tau(d) \} & \text{otherwise} \end{cases} $$

where ⊔ denotes the supremum with respect to ⪯.
Compatibility with the SECURE framework. SECURE offers a way to calculate a global trust state (an exact definition can be found in [Kru06]; for the purpose of this section an intuitive idea of a matrix that contains information about "who trusts whom to what degree" is sufficient), given a set of policies for each principal identity. These policies may contain user-defined functions to calculate trust values. In order for them to preserve the soundness of the calculation, they have to be continuous with respect to the information ordering of the underlying trust structure. Allowing our history evaluation to be used in these policies may lead to disturbances in the calculation process if the result of the history evaluation fluctuates too much as more experience is gathered. One obvious solution would be to simply restart the calculation of the global trust state should the history evaluation change to an information-wise lower trust value. While this might be acceptable for small systems, where this calculation does not take a considerable amount of time, for bigger systems one of the main advantages of SECURE is lost. We therefore instead extend the definition of information continuity and give a condition under which the calculation of the global trust state is not endangered.
Notation 2. Let h = a_1 · a_2 · ... · a_n and h′ = b_1 · b_2 · ... · b_m be two local interaction histories with respect to the same event structure ES. We write h ⊆ h′ if n ≤ m and, for all i with 1 ≤ i ≤ n, a_i ⊆ b_i.

Definition 5 (Information continuity). An L(ES)-formula ϕ is called information-continuous if for any two local interaction histories h and h′ with h ⊆ h′:

K_{ES,h} ⊨ ϕ  →  K_{ES,h′} ⊨ ϕ

We can ensure the soundness of the calculation of the global trust state by keeping the formula of every justification information-continuous. While this seems a severe restriction, we claim that in a lot of cases it is still favorable over the counting of outcomes of interactions. A certain kind of behaviour may be considered "unforgivable" while others are merely considered "bad", so a finer and more flexible concept is often in order. We leave it to future work to analyze in how far the technique used in [KNS08] can be simulated with our method.

An example. Given these definitions we can take a look at how our concept works out in an example. Consider trust and event structures in the context of an online shop that sells books and CDs. The observable events are the following:

{good quality, bad quality, fast delivery, slow delivery, no delivery}

As for the causality relation, naturally an observation of the quality of an ordered article can only be made if the article has in fact been delivered. The conflict relation is straightforward too: different degrees of quality are in conflict with each other, and so are the outcomes of the delivery of an item. The trust structure for our example knows the following degrees of trust:

{no trust, no info, buy books, buy CDs, buy anything}

Information-wise, no info is the least value, followed by no trust, with buy books and buy CDs both greater than no info, and finally buy anything greater than both buy books and buy CDs. Trust-wise, no trust is the least value, with no info being greater, buy books and buy CDs both greater than no info but unrelated to each other, followed by buy anything as the "top" value. A principal's (complete) set of justifications τ could now look like this:

trust value   | condition
no trust      | ⊤
no info       | G(⊥)
buy books     | G(good quality) ∧ F(fast delivery)
buy CDs       | G(fast delivery ∨ good quality) ∧ F(good quality)
buy anything  | G(fast delivery ∧ good quality) ∧ F(fast delivery)

Now, consider the three following local interaction histories. (In general, there is always only one local interaction history for each principal identity, but as we look at ways to interpret them it makes sense to consider more than just one.)
h1 = {fast delivery, good quality} · {good quality, slow delivery} · {fast delivery} · {fast delivery, bad quality}
h2 = λ
h3 = {fast delivery, good quality} · {fast delivery, good quality}

Given the above definitions we can now calculate the principal’s history evaluation for each local interaction history:

τ∗(h1) = ⊔({no trust, buy CDs}) = buy CDs
τ∗(h2) = ⊥ = no info
τ∗(h3) = ⊔({no trust, buy books, buy CDs, buy anything}) = buy anything
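Continuing the sketch above, the history evaluation itself can be illustrated as follows. The reading of atomic formulas as membership tests, the information-ordering ranks, and the condition used for “no trust” (left blank in the table above) are assumptions made only for this illustration; they are chosen so that the sketch reproduces the three evaluations just given.

```python
# Sketch of the history evaluation: collect the trust values whose
# justification holds on the history (a list of configurations) and take the
# least upper bound of the justified set in the information ordering.

def G(pred, h): return all(pred(c) for c in h)   # "globally"
def F(pred, h): return any(pred(c) for c in h)   # "finally"

JUSTIFICATIONS = {
    "no trust":     lambda h: F(lambda c: True, h),                    # assumed condition
    "no info":      lambda h: G(lambda c: False, h),                   # G(bottom)
    "buy books":    lambda h: G(lambda c: "good quality" in c, h)
                              and F(lambda c: "fast delivery" in c, h),
    "buy CDs":      lambda h: G(lambda c: "fast delivery" in c or "good quality" in c, h)
                              and F(lambda c: "good quality" in c, h),
    "buy anything": lambda h: G(lambda c: "fast delivery" in c and "good quality" in c, h)
                              and F(lambda c: "fast delivery" in c, h),
}

INFO_RANK = {"no info": 0, "no trust": 1, "buy books": 2, "buy CDs": 2, "buy anything": 3}

def tau_star(h):
    justified = {v for v, cond in JUSTIFICATIONS.items() if cond(h)}
    if not justified:
        return "no info"
    # buy books and buy CDs are incomparable; their join is buy anything
    if {"buy books", "buy CDs"} <= justified:
        justified.add("buy anything")
    return max(justified, key=INFO_RANK.get)

h1 = [{"fast delivery", "good quality"}, {"good quality", "slow delivery"},
      {"fast delivery"}, {"fast delivery", "bad quality"}]
h2 = []
h3 = [{"fast delivery", "good quality"}, {"fast delivery", "good quality"}]

print(tau_star(h1), "/", tau_star(h2), "/", tau_star(h3))
# buy CDs / no info / buy anything
```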
3 Adding Context Information
Until now, we have been working quite naturally with the definition of a local interaction history. The question arises whether this definition implies some unwanted limitations. Indeed it does: while we do not abstract completely from the order in which events have occurred, some information is lost. It is possible to order sessions in some way, and assuming that there is always only one active session (i.e., at any one point in time there is at most one configuration in the local interaction history that is not complete), this does not pose a problem. If we allow concurrent sessions, however, we have to make abstractions that may go too far: we only have means to order sessions, but neither events within configurations (which holds true even in the case of only one active session at a time) nor events from different configurations (except for the order given by the configurations that contain the events). If we are looking at independent events (i.e., events unrelated by the conflict or causality relations), this may become a problem. Assume a and b are events we can observe. It may be that trustworthy behavior requires observing only a, or only b, or first a and then b but not the other way around. We cannot extract information about the order in which these events have occurred with the given definition of a local interaction history.²

As this information seems quite valuable, it stands to reason to start looking for an alternative definition of “history”. What we would like is a concept that allows for reasoning about events in the context of their session, about events independent of the session in which they have occurred, and about a mix of those two ideas. In other words, we would like to know the order in which events have been observed, while still being aware of their respective session.

Furthermore, we introduce the concept of roles. While we only consider observable evidence from a single principal identity at a time, this principal may have acted in different roles. As a simple example, consider a chat system in which there are moderated channels.
² Note that in SECURE this is not considered a limitation; for example, in [KNS08] it is noted that “in a scenario where this order of events is relevant, one can always use a ‘serialized’ event structure in which this order of occurrences is recorded”. Hence the idea of recording the time of events explicitly is not new, but to the best of our knowledge it has not been formally done before.
Users can act in their role as a chatter or a moderator. Certain behavior may be acceptable from moderators but not from chatters. It is therefore important to know the role that a principal has played within a given interaction.
3.1 A Language of Reasonable Complexity
The logic we define allows for references to roles and sessions. Since quantification is a nice tool, but at the same time increases the complexity of checking formulas, we limit it in such a way that quantifications cannot be nested to unlimited depth. Arguing how much complexity in formulas is needed to cover all relevant cases can only be a matter of experience. While it is easy to construct examples that the logic cannot handle, we claim that most sensible ways to reason about experience are still covered.

Definition 6 (Local Interaction History). Let ES = (E, ≤, #) be an event structure and S and R be sets (the set of sessions and the set of roles); then a function h : N → P(R × S × E) is called a local interaction history (with respect to ES, S and R). A local interaction history h is called consistent if
– S and R are finite,
– ∀n ∈ N : h(n) is finite,
– ∃n ∈ N : ∀n′ > n : h(n′) = ∅,
– ∀s ∈ S : ∀r ∈ R : ∀k ∈ N : ⋃_{n=0..k} {e | (r, s, e) ∈ h(n)} is a configuration,
– ∀s ∈ S : ∀r ∈ R : ∀e ∈ E : ∀n ∈ N : (r, s, e) ∈ h(n) → ∀n′ ∈ N : (r, s, e) ∈ h(n′) → n = n′.
Our local interaction histories are now functions that map a point in time (represented by a natural number) to a set of events, each within a context, that have been observed at that time. In order to distinguish and compare this new concept to the one originally defined in the SECURE framework, we refer to these local interaction histories as “old” and “new”. We omit this attribute if it is clear from the context which one is meant.

Definition 7 (Conversion old to new local interaction history). Let h = c0 · . . . · cn, n ∈ N, be an old local interaction history with respect to ES = (E, ≤, #). The conversion h̄ : N → P({0} × {0, . . . , n} × E) of h is defined as

h̄(i) = {(0, i, e) | e ∈ ci}  if 0 ≤ i ≤ n,  and  h̄(i) = ∅  otherwise.

Lemma 1. Let h = c0 · . . . · cn, n ∈ N, be an old local interaction history with respect to ES = (E, ≤, #). Then the conversion h̄ is a consistent local interaction history with respect to ES, S = {0, . . . , n} and R = {0}.

Proof. Simple inspection.
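As an illustration of Definitions 6 and 7, the following sketch represents a “new” local interaction history as a finite map from time points to sets of (role, session, event) observations, checks the consistency conditions, and converts an old history into a new one. It reuses the is_configuration check from the sketch of the shop example; the representation itself is an assumption made for illustration only.

```python
# Sketch of Definitions 6 and 7: a "new" local interaction history maps each
# point in time to a finite set of (role, session, event) observations.
# Histories are dicts; missing time points denote the empty set.

def at(h, n):
    return h.get(n, set())

def is_consistent(h, roles, sessions):
    """Finiteness and eventual emptiness are implicit for a finite dict;
    per-(role, session) prefixes must be configurations, no event twice."""
    for r in roles:
        for s in sessions:
            seen = set()
            for n in sorted(h):
                new = {e for (r2, s2, e) in at(h, n) if (r2, s2) == (r, s)}
                if new & seen:                      # same event at two times
                    return False
                seen |= new
                if not is_configuration(seen):      # prefix must be a configuration
                    return False
    return True

def convert(old):
    """Definition 7: old history c0 ... cn becomes a new one with the single
    role 0 and one session per position."""
    return {i: {(0, i, e) for e in c} for i, c in enumerate(old)}

old_h = [{"fast delivery", "good quality"}, {"fast delivery", "bad quality"}]
new_h = convert(old_h)
print(is_consistent(new_h, roles={0}, sessions=range(len(old_h))))  # True
```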
This new definition voids our way to reason about observed events. However, the general principle of using Kripke semantics still seems to offer a powerful tool to do so. Hence we are going to introduce a way to obtain a Kripke structure together with an assignment from a local interaction history:

Definition 8 (History Structure). Let h be a local interaction history. The history structure Kh is defined by Kh = ((W, R), h) with W = N and R = {(n, n + 1) | n ∈ N}.

The next step is to define a language to reason about this history structure. The basic idea is that we use a three-layered Kripke semantics: the lowest layer only contains information about events within a given role and a given session, the middle layer adds information about sessions, and the topmost layer allows reasoning about roles.

Definition 9 (Justification Language). Syntax: Let ES = (E, ≤, #) be an event structure and R and S sets (of roles and sessions). The justification language L(R, S, ES) is defined as follows:
– If e ∈ E then {e, ♦(e)} ⊆ L(ES)
– If ϕ ∈ L(ES) then {¬(ϕ), X(ϕ)} ⊆ L(ES)
– If {ϕ, ψ} ⊆ L(ES) then {(ϕ ∧ ψ), (ϕ U ψ)} ⊆ L(ES)
– If (s, e) ∈ (S × E) then (s, e) ∈ L(S, ES)
– If ϕ ∈ L(ES) then ∗S(ϕ) ∈ L(S, ES)
– If ϕ ∈ L(ES) and s ∈ S then s : (ϕ) ∈ L(S, ES)
– If ϕ ∈ L(S, ES) then {¬(ϕ), X(ϕ)} ⊆ L(S, ES)
– If {ϕ, ψ} ⊆ L(S, ES) then {(ϕ ∧ ψ), (ϕ U ψ)} ⊆ L(S, ES)
– If (r, s, e) ∈ (R × S × E) then (r, s, e) ∈ L(R, S, ES)
– If ϕ ∈ L(S, ES) then ∗R(ϕ) ∈ L(R, S, ES)
– If ϕ ∈ L(S, ES) and r ∈ R then r : (ϕ) ∈ L(R, S, ES)
– If ϕ ∈ L(R, S, ES) then {¬(ϕ), X(ϕ)} ⊆ L(R, S, ES)
– If {ϕ, ψ} ⊆ L(R, S, ES) then {(ϕ ∧ ψ), (ϕ U ψ)} ⊆ L(R, S, ES)
Semantics: The definition is as expected. As it is rather lengthy, we only give a few examples here. The full definition can be found in [EN09].
– (Kh(r, s), w) |=R,S e ⇔ (r, s, e) ∈ h(w), for any e ∈ E
– (Kh(r), w) |=R ∗S(ϕ) ⇔ ∀s ∈ S : (Kh(r, s), w) |=R,S ϕ, for any ϕ ∈ L(ES)
– (Kh, w) |= r : (ϕ) ⇔ (Kh(r), w) |=R ϕ, for any ϕ ∈ L(S, ES)

Notation 3. Again, we use symbols like ⊥, → and G to improve legibility. Furthermore, for ϕ ∈ L(ES): +S(ϕ) ≡ ¬(∗S(¬(ϕ))), and for ϕ ∈ L(S, ES): +R(ϕ) ≡ ¬(∗R(¬(ϕ))).

Again, we extend the notion of information continuity to ensure the soundness of the calculation of a global trust state.
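To make the layered semantics concrete, the following sketch evaluates a small fragment of the justification language over the history structure Kh. It is only an illustrative approximation: atomic events are read as membership tests, G and F are bounded by an explicit horizon (histories are eventually empty), and the formula encoding is ad hoc.

```python
# Sketch of a fragment of the layered semantics over K_h: worlds are natural
# numbers, the accessibility relation is n -> n+1, and an atomic event e holds
# at world w for role r and session s iff (r, s, e) was observed at time w.

def holds(phi, h, w, horizon, roles, sessions, r=None, s=None):
    kind = phi[0]
    if kind == "event":                         # e, within the fixed role r and session s
        return (r, s, phi[1]) in h.get(w, set())
    if kind == "not":
        return not holds(phi[1], h, w, horizon, roles, sessions, r, s)
    if kind == "and":
        return all(holds(p, h, w, horizon, roles, sessions, r, s) for p in phi[1:])
    if kind == "X":                             # next time point
        return holds(phi[1], h, w + 1, horizon, roles, sessions, r, s)
    if kind == "G":                             # derived: globally, up to the horizon
        return all(holds(phi[1], h, v, horizon, roles, sessions, r, s)
                   for v in range(w, horizon + 1))
    if kind == "F":                             # derived: finally
        return any(holds(phi[1], h, v, horizon, roles, sessions, r, s)
                   for v in range(w, horizon + 1))
    if kind == "allS":                          # *S(phi): in every session
        return all(holds(phi[1], h, w, horizon, roles, sessions, r, s2)
                   for s2 in sessions)
    if kind == "allR":                          # *R(phi): in every role
        return all(holds(phi[1], h, w, horizon, roles, sessions, r2, s)
                   for r2 in roles)
    if kind == "at_role":                       # r:(phi)
        return holds(phi[2], h, w, horizon, roles, sessions, phi[1], s)
    if kind == "at_session":                    # s:(phi)
        return holds(phi[2], h, w, horizon, roles, sessions, r, phi[1])
    raise ValueError(kind)

# Usage: "in all roles and sessions, incorrect is never observed".
h = {0: {("u", 1, "query")}, 1: {("u", 1, "response")}, 2: {("u", 1, "correct")}}
correct = ("allR", ("allS", ("not", ("F", ("event", "incorrect")))))
print(holds(correct, h, 0, horizon=2, roles={"u"}, sessions={1}))  # True
```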
Notation 4. Let h and h′ be two local interaction histories (with respect to the same ES, S and R). We write h ⊆ h′ if ∀n ∈ N : h(n) ⊆ h′(n).

Definition 10 (Information continuity). An L(R, S, ES)-formula ϕ is called information-continuous if for all local interaction histories h′, for all h ⊆ h′ and for all n ∈ N: (Kh, n) |= ϕ → (Kh′, n) |= ϕ.

The set of formulas we are able to define obviously depends on the sets of sessions and roles (as well as on the underlying event structure). When, while interacting with a principal identity, new information becomes available and the sets of sessions and roles change (i.e., grow), we would still like to use our previously defined formulas. This is not a problem, as the following lemma shows: the language only becomes richer.

Lemma 2 (Language extension via bigger context). Let ES = (E, ≤, #) be an event structure and S ⊆ S′ and R ⊆ R′ sets. Then L(R, S, ES) ⊆ L(R′, S′, ES).

Proof. Simple inspection.

Obviously, truth cannot necessarily be preserved when the context becomes bigger. As an example, consider a formula that says “in all roles and all sessions ϕ holds” and that is true in some context. If we now add more sessions or roles, in some of which ϕ does not hold, then the truth of our formula is lost. Consequently, formulas have to be designed accordingly. This does not necessarily mean avoiding universal quantification, but rather choosing ϕ, or S and R respectively, wisely.

As we have seen earlier, we can convert an old local interaction history to a new one. We would like to be able to do the same with formulas.

Definition 11 (Conversion old to new formula). Let ES = (E, ≤, #) be an event structure and ϕ ∈ L(ES). The conversion ϕ̄ is recursively defined as:
– If ϕ ∈ E then ϕ̄ ≝ +R(+S(ϕ))
– If ϕ = ¬(ψ) for some ψ ∈ L(ES), then ϕ̄ ≝ ¬(ψ̄)
– If ϕ = X(ψ) for some ψ ∈ L(ES), then ϕ̄ ≝ X(ψ̄)
– If ϕ = (ψ → χ) for some ψ, χ ∈ L(ES), then ϕ̄ ≝ (ψ̄ → χ̄)
– If ϕ = (ψ U χ) for some ψ, χ ∈ L(ES), then ϕ̄ ≝ (ψ̄ U χ̄)
Theorem 1 (Conservative language extension). Let ES = (E, ≤, #) be an event structure and h an old local interaction history with respect to ES. Then for any ϕ ∈ L(ES) and w ∈ N: (KES,h, w) |= ϕ ⇔ (Kh̄, w) |= ϕ̄.

Proof. The idea is to show this property by structural induction over the structure of formulas ϕ. See [EN09] for the complete proof.
3.2 Working with Local Interaction Histories
We now have a means to reason about local interaction histories; therefore we are going to elaborate a bit on how we can use it. The most obvious way to work with a local interaction history, besides reasoning about the experience it holds, is to add new observations. Some of these may not be possible: e.g., within a role and a session, an event that is in conflict with a previously observed event does not make any sense and should make us reconsider our event structure.

Definition 12 (Valid Observation, Updated Local Interaction History). Let h : N → P(R × S × E) be a local interaction history with respect to ES, S and R. A tuple (r, s, e, i) is called a valid observation with respect to h iff
– e ∈ E,
– i ∈ N,
– e ∉ {e′ | ∃n ∈ N : (r, s, e′) ∈ h(n)},
– ∀n ∈ N : {e′ | ∃n′ ∈ {0, . . . , n} : (r, s, e′) ∈ h(n′)} ∪ {e} ∈ C⁰ES.
Let h : N → P(R × S × E) be a local interaction history with respect to ES, S and R and u = (r, s, e, i) a valid observation. Then (h · u) : N → P((R ∪ {r}) × (S ∪ {s}) × E) with

(h · u)(n) = h(n) ∪ {(r, s, e)}  if n = i,  and  (h · u)(n) = h(n)  otherwise,

is called the local interaction history h updated with u.

The validity of an observation ensures that a local interaction history remains consistent when the observation is added.

Lemma 3. Let h be a consistent local interaction history with respect to ES, S and R and u = (r, s, e, i) a valid observation. Then (h · u) is consistent.

Proof. To show the consistency of (h · u) the following five points must hold:
– S ∪ {s} and R ∪ {r} are finite: since h is consistent and therefore S and R are finite, this is obvious.
– ∀n ∈ N : (h · u)(n) is finite: since this holds for h and only a single observation is added, this also holds for (h · u).
– ∃n ∈ N : ∀n′ > n : (h · u)(n′) = ∅: since h is consistent there exists m ∈ N for which this holds for h. h is only changed at i, so we can choose n = max(m, i + 1).
– ∀s′ ∈ S ∪ {s} : ∀r′ ∈ R ∪ {r} : ∀k ∈ N : ⋃_{n=0..k} {e′ | (r′, s′, e′) ∈ (h · u)(n)} is a configuration: by consistency of h this already holds for all s′ ∈ S \ {s} and r′ ∈ R \ {r}. For r and s it follows directly from the validity of (r, s, e, i).
– ∀s′ ∈ S ∪ {s} : ∀r′ ∈ R ∪ {r} : ∀e′ ∈ E : ∃n ∈ N : (r′, s′, e′) ∈ (h · u)(n) → ∀n′ ∈ N : (r′, s′, e′) ∈ (h · u)(n′) → n = n′: by consistency of h this already holds for all s′ ∈ S and r′ ∈ R. The validity of (r, s, e, i) ensures that e ∉ {e′ | ∃n ∈ N : (r, s, e′) ∈ h(n)}, and therefore there exists at most one n for which (r, s, e) ∈ (h · u)(n).
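A sketch of Definition 12 and the update operation follows, reusing the representation and the is_configuration check from the earlier sketches. The validity test below uses a simplified reading of the fourth condition (the full set of events observed for the role and session, extended with the new event, must be a configuration); this simplification is an assumption made for illustration.

```python
# Sketch of Definition 12: validity of an observation (r, s, e, i) against a
# history h, and the update h·u.

def events_so_far(h, r, s):
    return {e for obs in h.values() for (r2, s2, e) in obs if (r2, s2) == (r, s)}

def is_valid_observation(h, r, s, e, i):
    """Simplified reading: e is a known event, not yet observed for (r, s),
    i is a time point, and adding e keeps the (r, s) observations a configuration."""
    return (e in EVENTS and i >= 0
            and e not in events_so_far(h, r, s)
            and is_configuration(events_so_far(h, r, s) | {e}))

def update(h, r, s, e, i):
    new = {n: set(obs) for n, obs in h.items()}
    new.setdefault(i, set()).add((r, s, e))
    return new

h = {0: {("u", 1, "fast delivery")}}
print(is_valid_observation(h, "u", 1, "good quality", 1))   # True
h = update(h, "u", 1, "good quality", 1)
```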
As mentioned before, there should be some way to reason about experience that is not actually our own. In real-life situations we often trust another individual enough to assume that any experience they share with us is genuine. We therefore sometimes handle this experience as if it were actually our own. The equivalent in our trust model would be to combine two local interaction histories. Before we do that, we have to make sure that these two local interaction histories do not conflict in some way:

Definition 13 (Compatibility).
– Two event structures ES = (E, ≤, #) and ES′ = (E′, ≤′, #′) are compatible iff (E ∪ E′, ≤ ∪ ≤′, # ∪ #′) is an event structure.
– Two local interaction histories h (with respect to ES = (E, ≤, #), S and R) and h′ (with respect to ES′ = (E′, ≤′, #′), S′ and R′) are compatible with respect to a function insert : N → N iff
  • insert is continuous with respect to ≤ (on N),
  • ES and ES′ are compatible,
  • ∀s ∈ S ∪ S′ : ∀r ∈ R ∪ R′ : ∀n ∈ N : ∀(r, s, e) ∈ h′(insert(n)) : ¬∃n′ ∈ N : ∃(r, s, e′) ∈ h(n′) : (e # e′ ∨ (e = e′ ∧ n′ ≠ insert(n))).

Definition 14 (Combined Local Interaction History). Let h and h′ be two local interaction histories with respect to ES = (E, ≤, #), S and R and ES′ = (E′, ≤′, #′), S′ and R′ respectively, compatible with respect to a function insert : N → N. Then (h +insert h′) : N → P((R ∪ R′) × (S ∪ S′) × (E ∪ E′)) with

(h +insert h′)(n) ≝ h(n) ∪ h′(insert(n))
is called the combined local interaction history of h and h′ with respect to insert.

Note that this method of using simple unions is merely one way to combine two local interaction histories. Disjoint unions may be used if identification of observations is unwanted. A generic solution would be to adapt the concept used in graph grammars (see, e.g., [EEPT06]), which builds on a so-called double-pushout construction to define interfaces (that is, to define which items are supposed to be identified and which are not).

Of course we do not want to lose consistency when combining two local interaction histories, but compatibility ensures that:

Lemma 4. Let h and h′ be two consistent local interaction histories with respect to ES = (E, ≤, #), S and R and ES′ = (E′, ≤′, #′), S′ and R′ respectively, compatible with respect to a function insert : N → N. Then (h +insert h′) is consistent.

Proof. To show the consistency of (h +insert h′) the following points must hold:
– S ∪ S′ and R ∪ R′ are finite: h and h′ are consistent, therefore R, R′, S and S′ are finite and so are their unions.
– ∀n ∈ N : (h +insert h′)(n) is finite: since h and h′ are consistent, they both only yield finite sets of observations, so their unions are finite too.
– ∃n ∈ N : ∀n′ > n : (h +insert h′)(n′) = ∅: h and h′ are consistent, so there exist m and m′ such that this holds for h and h′ respectively. Since insert is continuous with respect to ≤ we can choose n = max(m, insert(m′)).
– ∀s ∈ S ∪ S′ : ∀r ∈ R ∪ R′ : ∀k ∈ N : ⋃_{n=0..k} {e | (r, s, e) ∈ (h +insert h′)(n)} is a configuration: a configuration has to be necessity-closed and conflict-free. h and h′ are consistent, therefore necessity-closed. insert is continuous with respect to ≤, so the projection of h′ to the combined local interaction history is necessity-closed, too. Since h and h′ are compatible with respect to insert, ∀s ∈ S ∪ S′ : ∀r ∈ R ∪ R′ : ∀n ∈ N : ∀(r, s, e) ∈ h′(insert(n)) : ¬∃n′ ∈ N : ∃(r, s, e′) ∈ h(n′) : e # e′, so all the sets of events observed within the same role and session are conflict-free and therefore configurations.
– ∀s ∈ S ∪ S′ : ∀r ∈ R ∪ R′ : ∀e ∈ E ∪ E′ : ∃n ∈ N : (r, s, e) ∈ (h +insert h′)(n) → ∀n′ ∈ N : (r, s, e) ∈ (h +insert h′)(n′) → n = n′: in other words, no event has been observed more than once. That is already the case for h and h′ on their own, so we only need to ensure that h and h′ do not contain the same event within the same role and session at different times. However, the compatibility of h and h′ ensures that: ∀s ∈ S ∪ S′ : ∀r ∈ R ∪ R′ : ∀n ∈ N : ∀(r, s, e) ∈ h′(insert(n)) : ¬∃n′ ∈ N : ∃(r, s, e′) ∈ h(n′) : (e = e′ ∧ n′ ≠ insert(n)).
3.3 An Example
Let us look at an example to show how this concept could be put into practice. Consider an online service that performs a certain task after a user has filed a request. Assume the task is time-critical in some cases, so the service offers a “premium membership” that guarantees responses to queries within a certain time frame. Responses can later turn out to be correct or incorrect. Roles in our example are “user” (u) and “premium user” (pu). Observable events are E = {query, response, correct, incorrect} with the obvious dependencies. The trust structure consists of four elements: “unreliable”, “fast”, “correct” and “reliable”. Its definition is as expected and can be read off the complete set of justifications:

trust value   condition
unreliable
fast          pu : (∗S : (G(query → X(response))))
correct       ∗R : (∗S : (¬F(incorrect)))
reliable      ⊥

Note that with these justifications we consider the server to be reliable until it has turned out not to be. Also note that we can annotate the highest degree of trust, “reliable”, with ⊥ because our history will evaluate to “reliable” if “fast” and “correct” are justified, which is exactly what we consider reliable behavior in our example.
Given the set of roles R = {u, pu} and the set of sessions S = N, a local interaction history h could look like this:

n    h(n)
0    {(u, 1, query)}
1    {(pu, 2, query), (u, 3, query)}
2    {(u, 1, response), (pu, 2, response)}
3    {(u, 1, correct), (pu, 2, correct)}
4    {(u, 3, response)}
5…   ∅

A trust value derived from this local interaction history can now be obtained using the complete set of justifications. Again, we first calculate the set of all justified trust values J = {unreliable, fast, correct} and then take the supremum of this set, “reliable”, as our derived trust value.

Assume we make the (valid) observation (u, 3, incorrect, 6). We can add it to our local interaction history and, after evaluation, we obtain “fast” as the most appropriate trust value, since the observation of “incorrect” invalidated the condition for “correct”.

Assume that another principal gathered experience with the same service:

n    h′(n)
4    {(pu, 4, query)}
5    {(u, 5, query), (u, 5, response)}
6    {(pu, 4, response)}
7…   ∅

Since we fully trust this principal, we simply integrate their interaction history into our own. The consistency conditions hold, so the combined (updated) local interaction history is as follows:

n    h(n)
0    {(u, 1, query)}
1    {(pu, 2, query), (u, 3, query)}
2    {(u, 1, response), (pu, 2, response)}
3    {(u, 1, correct), (pu, 2, correct)}
4    {(u, 3, response), (pu, 4, query)}
5    {(u, 5, query), (u, 5, response)}
6    {(pu, 4, response), (u, 3, incorrect)}
7…   ∅

Notice that the response in session 4 takes too long and thus violates the condition for “fast”. The set of justified trust values becomes J = {unreliable}, thus attesting that the service sometimes provides incorrect and/or late responses.
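The combination and re-evaluation in this example can be sketched with the helpers introduced earlier (the holds evaluator and the dictionary representation of histories). The identity insert function, the finite session set, the horizon, and the encodings of the “fast” and “correct” justifications are assumptions made for this illustration.

```python
# Sketch: combine the two example histories (identity insert function) and
# re-check the "fast" and "correct" justifications with the earlier evaluator.

def combine(h, h2, insert=lambda n: n):
    times = set(h) | set(range(0, max(list(h2), default=-1) + 1))
    return {n: h.get(n, set()) | h2.get(insert(n), set()) for n in times}

h  = {0: {("u", 1, "query")}, 1: {("pu", 2, "query"), ("u", 3, "query")},
      2: {("u", 1, "response"), ("pu", 2, "response")},
      3: {("u", 1, "correct"), ("pu", 2, "correct")},
      4: {("u", 3, "response")}, 6: {("u", 3, "incorrect")}}
h2 = {4: {("pu", 4, "query")}, 5: {("u", 5, "query"), ("u", 5, "response")},
      6: {("pu", 4, "response")}}

combined = combine(h, h2)
roles, sessions, horizon = {"u", "pu"}, {1, 2, 3, 4, 5}, 6

imp = lambda a, b: ("not", ("and", a, ("not", b)))     # a -> b
fast = ("at_role", "pu",
        ("allS", ("G", imp(("event", "query"), ("X", ("event", "response"))))))
correct = ("allR", ("allS", ("not", ("F", ("event", "incorrect")))))

print(holds(fast, combined, 0, horizon, roles, sessions))     # False: session 4 response is late
print(holds(correct, combined, 0, horizon, roles, sessions))  # False: session 3 turned out incorrect
```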
4 Conclusion and Future Work
We presented a formal concept to derive trust from experience within the SECURE framework by adding conditions in the form of modal-logic formulas to trust values. We introduced context information as well as a finer granularity of time in observations, and we showed the relation to the original SECURE model. As for future work, it looks promising to extend the concept of context to a more generic level, where not only roles but also information such as location or resources may be considered. Another strand would be the extension of the concept of roles to include capabilities or obligations, whose actual status may in turn depend on other current contexts. Furthermore, a complexity analysis of the resulting logic would grant new insights into the practical applicability of the presented method.
References

[CD97] Cuppens, F., Demolombe, R.: A modal logical framework for security policies. In: Foundations of Intelligent Systems (1997)
[CGL+08] Coker, G., Guttman, J.D., Loscocco, P., Sheehy, J., Sniffen, B.T.: Attestation: Evidence and trust. In: Proc. Information and Communications Security, 10th International Conference (2008)
[EEPT06] Ehrig, H., Ehrig, K., Prange, U., Taentzer, G.: Fundamentals of Algebraic Graph Transformation. Springer, Heidelberg (2006)
[EN09] Eilers, F., Nestmann, U.: Deriving trust from experience (2009), http://www.mtv.tu-berlin.de/menue/forschung/publikationen
[HP03] Halpern, J.Y., Pucella, R.: A logic for reasoning about evidence. In: Proc. 19th Conference on Uncertainty in Artificial Intelligence (2003)
[KHKR06] Kerschbaum, F., Haller, J., Karabulut, Y., Robinson, P.: PathTrust: A trust-based reputation service for virtual organization formation. In: Stølen, K., Winsborough, W.H., Martinelli, F., Massacci, F. (eds.) iTrust 2006. LNCS, vol. 3986, pp. 193–205. Springer, Heidelberg (2006)
[KNS05] Krukow, K., Nielsen, M., Sassone, V.: A framework for concrete reputation-systems with applications to history-based access control. In: Proc. of the 12th CCS, pp. 7–11 (2005)
[KNS08] Krukow, K., Nielsen, M., Sassone, V.: A logical framework for history-based access control and reputation systems. Journal of Computer Security 16(1), 63–101 (2008)
[Kru06] Krukow, K.: Towards a Theory of Trust for the Global Ubiquitous Computer. PhD thesis, University of Aarhus (2006)
[KSGm03] Kamvar, S.D., Schlosser, M.T., Garcia-Molina, H.: The EigenTrust algorithm for reputation management in P2P networks. In: Proceedings of the 12th International World Wide Web Conference (2003)
[NKS07] Nielsen, M., Krukow, K., Sassone, V.: A Bayesian model for event-based trust. Electronic Notes in Theoretical Computer Science 172 (2007)
Reflections on Trust: Trust Assurance by Dynamic Discovery of Static Properties

Andrew Cirillo and James Riely

DePaul University, School of Computing
Abstract. Static analyses allow dangerous code to be rejected before it runs. The distinct security concerns of code providers and end users necessitate that analysis be performed, or at least confirmed, during deployment rather than development; examples of this approach include bytecode verification and proof-carrying code. The situation is more complex in multi-party distributed systems, in which the multiple web services deploying code may have their own competing interests. Applying static analysis techniques to such systems requires the ability to identify the codebase running at a remote location and to dynamically determine the static properties of a codebase associated with an identity. In this paper, we provide formal foundations for these requirements. Rather than craft special-purpose combinators to address these specific concerns, we define a reflective, higher-order applied pi calculus and apply it. We treat process abstractions as serialized program files, and thus permit the direct observation of process syntax. This leads to a semantics quite different from that of higher-order pi or applied pi.
1 Security in Distributed Open Systems

In an open system, program code is under the control of mutually distrusting parties prior to deployment. Local software security may be maintained in such a system by using dynamic verification at load time, rejecting code that fails analysis. For example, a client browser may validate embedded scripts before execution; a server application may validate SQL queries derived from client input. It is common for virtual machines to perform bytecode verification on class files loaded from remote sources [1]. Similar approaches are taken in [2,3]. Such analysis can establish local properties, for example, that unauthorized code does not gain access to sensitive system resources.

It is more difficult to obtain global security guarantees, since no single observer has access to all of the required code. Consider the simplest possible such system, consisting of a client and server. The client may wish to ensure that sensitive data does not escape the server. Note that the client’s trust in the organization running the server is not sufficient: the client must also trust the software running on the server. If the software is buggy, the client may need to trust all other clients as well. The client may not require a full proof of correctness on the part of the server, but may be satisfied to know that the server’s runtime system has all current security patches applied or that it performs some simple integrity checks on data supplied by other users.
This work was supported by the National Science Foundation under Grant No. 0347542.
The server has symmetric concerns, for example, restricting client software in order to establish non-repudiation of a commercial transaction.

In current practice, attempts to establish such global properties are ad hoc and informal: “If I only give my credit card number to pay-pal, everything will be fine.” The biggest flaw of such policies is not that they lack formality, but that they are overly restrictive. Lesser known vendors are high risk simply because they are lesser known.

Trusted computing [4] has the potential to enable less restrictive policies. Systems that use trusted computing and remote attestation [5] conditionalize their interactions with currently running, but physically distant, processes based on the identity of the program code the remote party is running. Secure messages identify the code of their senders; recipients trust the contents based on static properties of the senders’ code. In prior work [6], we made a first step toward formalizing such systems, developing a higher-order π calculus with ad hoc primitives for remote attestation and a type system that enforced memory safety in the face of arbitrary attackers. Here we improve this work by generalizing both the primitives of the language and the policies to which it applies; we also provide a more powerful and realistic attacker model.

In practice, there are several operations available on an executable: (a) one can communicate it as data, (b) one can execute it, (c) one can identify it by comparing it syntactically to another value, (d) one can extract data from it, or (e) one can disassemble it and operate on its components. Operations (a) and (b) are features of what is commonly referred to as higher-order programming. Operations (c) through (e), which expose the syntax of mobile code, are features of what might be called introspective, or reflective, programming. Formalizing many aspects of open systems requires a reflective approach; trusted computing requires at least syntactic identification of code, and dynamic verification also requires disassembly. While identification and data extraction are reasonably straightforward operations (see, e.g., [6]), modeling the disassembly of an executable can be complicated. For example, if primitive destructors for process syntax are used, one must take special precautions to keep names and variables from escaping their scopes, and also to ensure that syntax is preserved by substitution. Our key observation is that all three can be represented by extending higher-order π with pattern matching on abstractions.

Our interest is to internalize static analysis at the level of specification, rather than implementation. We are thus able to restrict pattern variables to match subvalues, rather than subvalues and subprocesses. The language can encode arbitrary calculations on the syntax of an abstraction by “guessing” the structure of the program and substituting pattern variables for values that are not known a priori. To make up for the loss of induction over process syntax, we allow an infinite number of such processes in parallel. The language can thus model abstract specifications of verifiers for static properties.

The paper is structured as follows. In Section 2, we present the syntax and operational semantics of the language and then develop a systematic method for describing processes that perform dynamic verification. In Section 3 we apply the theory to a simple type system that guarantees memory safety.
In Section 4 we apply it to a trusted computing platform.
2 A Reflective Pattern-Matching π-Calculus

Higher-Order π. The higher-order π calculus (HOπ) [8,9,10] is a natural model for systems that communicate program code. In the face of attackers, however, HOπ raises subtle issues. Consider the following example, where pub represents a public channel, accessible to all parties including attackers, and passwd a channel accessible only to Alice and Bob.
νsecret.pub!((x, y)if x = passwd then y!secret) pub?(prog)νb.(prog · (passwd, b) | b?(z)Q) νpub.( | νpasswd.( | )) Alice creates a secret name (secret) and embeds it in an abstraction, which is then written on the public channel. If the first argument x of the abstraction matches passwd then the secret is written on the second argument y. Bob reads the code from the public channel and instantiates it with passwd and a callback channel. After unlocking the secret Bob continues as Q with z bound to secret. Consider an arbitrary attacker, Mallory, who knows pub but not passwd. Because Mallory has access to pub, he can intercept Alice’s program before it is received by Bob. Once in possession of the program, Mallory need only inspect its contents to extract the embedded secret without executing the code, thus circumventing the password check. HOπ does not model this sort of attack, since HOπ abstractions may only be communicated as data or run. By analogy to object [11] and class [12, Ch. 5] serialization, HOπ allows process abstractions to be serialized, but does not allow for inspection of the serialized form. Nonetheless, such attacks are of direct relevance to practical systems [13], therefore in this section we extend HOπ with reflection features. Reflective π. We define a local, value-passing, asynchronous higher-order π parameterized over a signature that specifies value constructors. A general-purpose patternmatching destructor works for any kind of value, including abstractions. As in pattern matching spi [14], we equip pattern matching with a notion of Dolev-Yao derivability that gives a semantics to cryptographic primitives by restricting patterns to those that represent implementable operations. The resulting language is simple, yet powerful. Syntax and Operational Semantics. A value signature (Σ) comprises three components: a set of value constructors ( f ), a sorting that assigns each constructor an arity, and a Dolev-Yao derivability judgment () that constrains value patterns. Fix a value signature. Values include names, variables, process abstractions and constructor applications; processes include stop, local input, asynchronous output, application, parallel composition, restriction, replication, value construction and a pattern matching destructor. We also allow some processes to contain infinite parallel components.
54
A. Cirillo and J. Riely
R EFLECTIVE π Syntax: L, M, N, S, T ::= a x (x)P f (M) in P O, P, Q, R ::= 0 a?N M!N M · N Πi Pi νa.P ∗P let x = f M case M of ∃ x.N in P where f n(N), ( f v(N) − x), N x Reduction Axioms: (COMM) a?M | a!N −→ M · N (APP) (CONST) (CASE)
((x)P) · N −→ P{x := N} in P −→ P{x := f (M)} let x = f M of ∃ case M{ x := N} x.M in P −→ P{ x := N}
We distinguish variables from names, allowing input only on names; therefore only output capabilities may be communicated. This restriction makes examples simpler but is not essential to the theory. We require that process abstractions have finite syntax except when they are used as the right-hand side of an input process. The name a is bound in P” in “νa.P” with scope P. The variable x is bound in “(x)P” and in “let x = f M with scope P. The variables x are bound in “case M of ∃ x.N in P” with scope N and P. Let f n and f v return free names and variables, respectively. Identify syntax up to renaming of bound names and variables. Write “P{x := M}” and “N{a := M}” for the capture-avoiding substitution of M for x in P and M for a in N. A constructor applica is well-sorted if |M| matches the arity of f . Constructor applications in both tion, f (M), the value and process languages are assumed to be well sorted, as in applied π [15]. The variables x are pattern bound in ∃ x.N with scope N. We say that ∃ x.N is a well-formed pattern if x ⊆ f n(N). A term (or process) is well-formed if every pattern it contains is well-formed and if any variable x that occurs under a constructor application is pattern bound by an enclosing pattern. For example, “case M of ∃x. f (x) in 0” is wellformed, but “case M of ∃x. f (x) in a! f (x)” and “(x)a! f (x)” are not well-formed. In the sequel, we assume that all terms are well-formed. Note that while first order value passing languages, such as applied π [15], are often abstract with respect to the time at which a value is constructed, mixing reflection and cryptography requires that we distinguish the code that creates a value from the value itself. As an example, suppose “enc(M, N)” represents encryption of M with key N and consider the abstraction “(x)a!enc(x, b)”; the missing payload implies that the encryption has not yet taken place, in which case an observer should be able to extract b. Similarly in “(x)a!enc(b, x)” we expect b to be visible. The case of “(x)a!enc(b, b )” is, however, ambiguous; if it represents a program that does an encryption then both b and b should be visible, but if it represents a program embedded with an alreadyencrypted message then neither should be visible. We resolve this ambiguity by providing an explicit construction call in the process language and requiring that constructor applications in the value language contain no free (non-pattern) variables. The pattern-matching destructor “case M of ∃ x.N in P” allows nested matching into constructed values and abstractions. We require that all bound pattern variables ( x) occur at least once in N, and they may occur more than once. To match, all occurrences of
Reflections on Trust: Trust Assurance by Dynamic Discovery
55
a pattern variable must match identical values. When matching abstractions we assume that pattern variables are always chosen so as not to conflict with variables bound by the abstraction. Patterns are also constrained by the Dolev-Yao derivability judgment. The judgment N” expresses that the values N can be constructed by agents with knowledge “M of the values M. We then require that pattern variables be derivable from the terms mentioned explicitly in the pattern. For example, a sensible derivability judgment might include “enc(x, M) x,” which would allow decryption when the key is specified, but not “enc(x, y) x, y,” which would allow extracting both the contents and the key of an encrypted message without specifying the key. For clarity, we make use of a more concise syntax in written examples by observing the following notational conventions. We omit binders from patterns clauses when they are clear from context (as in case M of (x, y) in P). We omit unused bound variables, writing ()P for (x)P when x ∈ f n(P). We omit explicit let binders when the meaning is clear, for example writing “a! f x” for “let y = f x in a!y.” We also assume that a value constructor for pairs is available and use the obvious derived forms for tuples. As usual, operational semantics are described in terms of separate structural equivalence and reduction relations. We elide the definition of structural equivalence and the context rules for reduction, which are entirely standard for π calculi, and present only the reduction axioms. COMM brings an abstraction and an argument together over a named channel; APP applies an argument to an abstraction, substituting the argument for the parameter variable; and CONST constructs a new value from a constructor symbol and a series of arguments. CASE allows a pattern match to proceed only if the value is syntactically identical (up to α-equivalence) to the pattern modulo a substitution for the bound variables of the pattern. For example, the pattern ∃x.(y)a!x does not match (y)a!(y, b) because the substitution of (y, b) for x would capture y, however the pattern ∃x.(z)a!(z, x) does match because the bound z can be renamed to y. Equivalences. Behavioral equivalences are not the focus of this paper (see [10] for a thorough introduction), however we very briefly note that adding reflection to HOπ in almost any capacity will have a dramatic effect on its equivalences. In particular, any equivalence closed under arbitrary contexts, which may have holes under abstraction binders, collapses immediately to syntactic identity. An interesting equivalence would therefore only consider contexts without holes in abstractions (these could be called non-value contexts). Since they are transparent, passing process abstractions in this context is no different than for any ordinary values such as pairs or integers, hence the standard definitions for value-passing π-calculi [10, Sec 6.2] can be used. While complications do arise in the presence of non-transparent (i.e., cryptographic) values, these issues are orthogonal to higher-orderness and reflection and have already been addressed in the literature [16,15]. Embedded Password Attack Revisited. We now reconsider the example above and see that, as desired, it is not secure. Consider an attacker, Mallory, defined as follows.
= pub?(prog)case prog of ∃(z1 , z2 ).((x, y)if x = z1 then y!z2 ) in (. . .)
56
A. Cirillo and J. Riely
As was the case with only a higher-order features, Mallory is able to intercept the program file with an input on the public channel, pub. By using reflection, however, Mallory is now also able to extract both the password and secret without running the program. The continuation (. . .) has z1 bound to password and z2 bound to secret. Dynamic Verification. Inspection of mobile code is not useful only for attackers, however. It can also be used to dynamically establish trust in code that was received, for example over a public channel, via dynamic verification. A verifiable property is a property of abstractions that can be decided by a static analysis tool that is invoked at runtime. We call such tools verifiers. The proper use of a verifier can ensure the safety of a process even when it executes code obtained from an untrusted source. Formally, a verifiable property is a predicate on finite abstractions subject to the following constraints. First, it must be at least semi-decidable. Second, we require that it depend on a specific usage of only a finite set of names. Given a property, P , and a set of names, S, we say that S supports P if for every M ∈ P and every a, b ∈S where a ∈ f n(M), M{b := a} ∈ P . We write supp(P ) for the smallest set of names that supports P and restrict our attention to properties that have supp(P ) finite. As an example of a predicate on values without finite support, impose an total ordering on infinite subset of names ni such that ni < ni+1 and consider the predicate that insists that only ni+1 may be output on ni . Such a predicate is not interesting to us, since names have no inductive structure and therefore one cannot define an algorithm to decide it. It is relatively easy to describe processes that implement verifiers using infinite syntax. Treating properties as sets of values, we quantify clauses over elements of the set. Note, however that it is not quite as simple as specifying one pattern that exactly matches each element of the set. For example, the naive verifier, a?((z))ΠM∈P case z of M in ((x)P · z) | ΠN∈P case z of N in Q inputs a value to be checked and then pattern matches all values that satisfy the property continuing as P with the value bound to x, and all values that do not satisfy the property continuing as Q. Quantification over all values, however, means that such a process would reference not just an infinite subset of names but the whole universe of names, thus violating important assumptions about bound names. With a little more effort, though, we can build a verifier that has a finite set of free names provided that the underlying property has finite name support, as in the following derived form. D ERIVED F ORM : V ERIFY
verify M as P (x) in P else Q = νb. ΠN∈P (case M of ∃ z.N{ a := z} in b?()((x)P · N{ z})) a := z.L{ a := z} in b?()Q) | b!b | ΠL ∈P (case M of ∃ where a = f n(M) − supp(P ) , z ∩ ( f v(P) ∪ f v(Q) ∪ {x}) = 0/ and | z| = | a|
Now f n(verify M as P (x) in P else Q) = supp(P ) ∪ f n(M) ∪ f n(P) f n(Q), hence it is finite if and only if supp(P ) is finite and M, P, Q have finite free names. The elimination of uninformative names from patterns allows finite name dependency, but also causes some of the pattern clauses under the quantification to overlap. We can be assured that
Reflections on Trust: Trust Assurance by Dynamic Discovery
57
this overlap is safe because names outside of supp(P ) by definition cannot affect the satisfaction of the property, hence true patterns may only overlap with other true patterns and false with false. The use of b as a signal channel prevents more than one clause from executing so the behavior resembles that of a naive implementation. Note that this representation is general enough to allow one to express verifiers for a wide range of analyses and properties, including even those that may not be finitely realizable. In particular, when properties are only semi-decidable this representation will be unrealistically powerful, however for the purpose of establishing safety theorems the approach is adequate.
3 Typability as a Verifiable Property The framework for dynamic verification presented above may be applied to any verifiable property. A verifiable property is not necessarily a useful security property. For example, it is verifiable that an executable is signed, but this does not impart any security in itself. To establish a security theorem of some sort, we must choose a property with provable security guarantees. In this section we consider an example of such a property, formalizing a common approach to typing that guarantees the absence of certain runtime errors in the presence of arbitrary attackers [17,18,19,20,21]. Typability in this type system represents a verifiable property subject to implementation as an analysis procedure that can be invoked at runtime. To provide support for interesting examples, we use a signature that includes some basic constructs that are useful in open distributed systems, including dynamically typed messages [22] and cryptographic hashes and symmetric-key encryption. The novelty is not in the type system itself, which is mostly standard, so much as how it serves as an example for dynamic verification. For this reason we simplify the typed language by supporting nested pattern matching only when extracted values can be treated at type Top, and type-safe pattern matching only for top-level patterns. Many of these restrictions can be eased using, for example, techniques developed in [14]. S IGNATURE (Σ) Value Constructors (where f k is a constructor f of arity k): Σ = unit0 , 2 , 2 , #1 , 2 , →2 , Unit0 , ×2 , Dyn2 , Hash1 , Un0 , Top0 , Ch1 , Key1 Derivability Rules: N1 . . . M Nk M NN N1 , . . . , Nk M, M
N N N, M M, L f ∈ {#, } M L f (N) f (N) enc(N, N ) N, M M, L M, L
Language. Assume a signature with the following values: unit and , which work as usual; dyn(M, T ), for dynamically-typed message that asserts that M is a value of type T ; #(M) for the cryptographic hash of M; enc(M, N) for the message M encrypted with key N; and type constructors Unit, Un, Top, Ch(T ), T → Proc, Hash(T ), Key(T ), and Dyn(M, T ). We write “(M, N)” as shorthand for “ (M, N),” we write abstraction types postfix, as in “T → Proc,” and we write “Dyn” for “Dyn(unit, Unit).”
58
A. Cirillo and J. Riely
Derivability rules exclude only patterns that would allow one to derive the original value of a cryptographic hash or the contents of an encrypted message without the key. We elide the derivability rules for processes since process syntax is always transparent. Since they appear in dynamically typed messages, types are nominally first-class values. Informally, we use T, S for values that represent types, however note that there is no dedicated syntactic category for type values. Our treatment of dynamic typing is standard except for our use of the type Dyn(M, T ), which is explained later. We avoid annotating processes with types primarily so we do not have to commit to whether annotations should be visible to inspection or not (in comparison to untyped machine code vs. typed bytecode). Annotations can instead be coded up using dynamically typed messages. We write “ν(a : T )P” for “νa.let x = a, T in P” where x ∈ f n(P) when we wish to force the typechecker to commit to a specific type or simply add clarity. Safety and Robust Safety. Our objective is simply to prevent the misuse of a fixed set of typed initial channels. Let the metavariable T range over a language of types that includes type values, plus the non-first class type T YPE. A type environment (Γ) binds names and variables to types in the usual fashion; we write “Γ a : T ” to mean that ∈ dom(Γ ). An initial typing is a type environment taking a set Γ = Γ , a : T , Γ and a of initial channels to channel types. An error occurs if a process violates the contract of an initial channel by writing a non-abstraction value on a channel with a type of the form Ch(T → Proc). Our focus on shape errors involving abstractions is arbitrary; other errors are also possible. Let Δ be an initial typing with domain a1 , . . . , an . We say that a process P is Δ-safe if whenever P =⇒ ν b.(ai ?M | ai !N | Q) and Δ(ai ) = Ch(T → Proc), N is of the form (x)R. We say that a process O is an initial Δ-opponent if for all a ∈ ( f n(O) ∩ dom(Δ)), Δ(a) = Ch(Un). We say that P is robustly Δ-safe if (O | P) is safe for an arbitrary initial Δ-opponent O. Type System. We now present a type system that enforces robust safety. The system includes type judgments for well-formed values and well-formed processes. The rules for well-formed values are mostly standard: hashes of values of type T type at Hash(T ); names that are used as signing keys for values of type T type at Key(T ); encrypted messages type at Un and require that the content type be compatible with the key type. The one novelty is in the rules for dynamically typed messages, which allow a forwarder to delegate part of the task of judging the trustworthiness of a message to the recipient. A message dyn(M, T ) types at Dyn(N, S) if either M can be typed at T , or N cannot be typed at S. Opponent values are constructed from names that type at Ch(Un), cryptographic hashes and encrypted messages. The rules for well-formed processes are similarly standard, except for the rules for pattern matching. Specific rules are defined for top-level (non-nested) pair splitting, typecase and decryption operations. A separate general-purpose rule permits pattern matching with arbitrarily nested patterns but restricts pattern variables to Top. The type rules support the use of dynamic types to authenticate data based on the trust placed in the program that created it. For example, the type Dyn(#(N), Hash(S → Proc)) can be given to messages that are known to have been received from a residual of
Reflections on Trust: Trust Assurance by Dynamic Discovery
59
W ELL -F ORMED VALUES (Γ M : T ) Trusted Values: Γ a, x : T Γ a, x : T
Γ, x : T P Γ (x)P : T → Proc
Γ T : T YPE Γ Ch(T ) : T YPE
Γ T : T YPE Γ T → Proc : T YPE
Γ T : T YPE Γ Hash(T ) : T YPE ΓM : T Γ M : Top
Γ unit : Unit
Γ T : T YPE Γ Key(T ) : T YPE
ΓN : S ΓM : T Γ (M, N) : T × S
Γ Top, Un, Unit, Dyn : T YPE Γ M : T Γ S : T YPE Γ Dyn(M, S) : T YPE
Γ T : T YPE Γ M : T Γ N : S Γ dyn(M, T ) : Dyn(N, S )
ΓM : T Γ #(M) : Hash(T )
Γ M : T Γ N : Key(T ) Γ enc(M, N) : Un Opponent Values: Γ a : Ch(T ) T ∈ {Un, Top} Γ a : Un : Un ΓM : Un Γ f (M)
Γ, x : Un P
(∀a ∈ f n(P)) Γ a : Un Γ (x)P : Un
Γ M : Un Γ N : T Γ N : S Γ M : Dyn(N, S )
Γ M : T Γ #(M) : Un
the abstraction N applied to an argument of type S. If the identity but not typability of the sender is known, a forwarder can thus record the (code) identity of the sender without judging whether the sender is actually well-typed. If a later recipient can establish that N does type at S → Proc they can use the contents of the value safely. Results. The main result of the type system is the following theorem of robust safety, which states that well-formed processes are robustly safe. We elide the proof, which is fairly standard and follows from lemmas for subject reduction (if Γ P and P −→ Q then Γ Q) and opponent typability (if O is an initial Δ-opponent then Δ O). T HEOREM (ROBUST S AFETY ). If Δ P then P is robustly Δ-safe. Robust safety can be ensured, for example, by limiting interactions with opponents to untyped data communicated over untyped initial channels, however using dynamic verification one should also be able to safely accept and conditionally execute an abstraction from an opponent if the abstraction can be proved to be well typed. To this aim we internalize the type system into the language by describing it as a verifier. M : Let T be a type and Δ type environment. Then P (M) = Δ, f n(M) : Top T denotes a verifiable property supported by dom(Δ). A verifier then has the form: to the type en“verify M as P (x) in P else Q.” (Note that the addition of f n(M) : Top vironment allows accepted abstractions to contain arbitrary extra free names as long as they do not affect typability.) We are helped by the fact that the verifier is itself well-typed more or less by definition because the relevant clauses in the encoding are drawn from the set of well-typed
60
A. Cirillo and J. Riely
W ELL -F ORMED P ROCESSES (Γ P) Trusted Processes: Γ 0
Γ a : Ch(T ) Γ M : T → Proc Γ a?M
Γ M : T → Proc Γ N : T Γ M ·N : T Γ f (M)
Γ, x : T P
Γ M : Ch(T ) Γ N : T Γ M!N
ΓP ΓQ Γ P|Q
Γ, a : T P Γ νa.P
ΓP Γ ∗P
Γ M : T × S Γ, x : T , y : S P Γ case M of ∃(x, y).(x, y) in P
in P Γ let x = f M
Γ M : Dyn(N, S ) Γ N : S Γ T : T YPE Γ case M of ∃x.dyn(x, T ) in P Γ M : Key(T ) Γ, x : T P Γ case M of ∃x.enc(x, M) in P
ΓM : T
Γ, x : T P
N : T Γ, y : Top P Γ, y : Top Γ case M of ∃ y.N in P
Opponent Processes: Γ a : Un Γ M : Un Γ a?M Γ M : Un
Γ M : Un Γ N : Un Γ M!N
Γ M : Un Γ N : Un Γ M·N
N : Un Γ, x : Un P Γ, x : Un Γ case M of ∃ x.N in P
terms, which allows us to type x at T → Proc. If the verification succeeds x gets bound to N in P. Since N types only at Un, the veri f y construct implements what amounts to a dynamic cast, allowing one to take arbitrary data from an untyped opponent and cast it to a well-typed abstraction. = Ch(Un) For example, suppose Δ = anet : Ch(Un), b1 : T1 , . . . , bn : Tn where T1−n M : (T1 × . . . × Tn ) → Proc. Then the following and define P (M) as Δ, f n(M) : Top process is robustly Δ-safe. b) ∗anet ?(x : Un)verify x as P (y) in (y · The process repeatedly reads arbitrary values from an open network channel (anet ) and tests them dynamically to see if they are well-typed at (T1 × . . . × Tn ) → Proc before applying them to a series of protected channels. If b represent, for example, a series of protected system calls this process could represent a virtual machine that performs bytecode verification, as well as many other applications of dynamic verification.
4 Example: Dynamic Verification and Trusted Computing On its own, dynamic verification can be used to conditionalize the application of an abstraction on the results of static analysis of the program code. In this section we expand the use of dynamic verification to also conditionalize interactions with running
processes using remote attestation. This solution utilizes a notion of code identity, whereby an active process is identified by the process abstraction it started as. Background. Trusted computing is an architecture for secure distributed computing where trust is rooted in a small piece of hardware with limited resources known as the trusted platform module (TPM). The TPM is positioned in the boot sequence in such a way that it is able to observe the BIOS code as it is loaded. It takes and stores the hash of the BIOS as the system boots, thus establishing itself as the root of a chainof-trust; a secure BIOS records the hash of the operating system kernel with the TPM before it loads, and a secure operating system records the hash of an application before it is executed. If the BIOS and operating system are known to be trustworthy, then the sequence of hashes will securely identify the currently running program. Remote attestation is a protocol by which an attesting party demonstrates to a remote party what code it is currently running by having the TPM sign a message with a private key and the contents of its hash register. If the recipient trusts the TPM to identify the BIOS correctly, and knows of the programs that hash to each identity in the chain, then they can use static analysis of the program code to establish trust in the message. Representing a Trusted Computing Platform. We represent a trusted computing framework as follows. The TPM is represented by a process parameterized on a boot channel (aboot ) and an attestation identity key (aaik ). The TPM listens on the boot channel for an operating system abstraction to load; upon receiving the OS (xos ) it reserves fresh attestation (bat ) and check (bchk ) channels and instantiates the OS with the new channels. This calling convention is expressed as an abstraction type for “certifiable” programs, which we abbreviate Cert. The TPM accepts requests on the attestation channel in the form of a message and callback channel. An attestation takes the form of a message signed by the TPMs attestation identity key where the contents are a dynamically typed message where the type is bounded by a provenance tag; that is, of the form dynymsg , Dyn(#(xos ), Hash(Cert)). This message is then encrypted with aaik and returned on the callback. The check channel is provided to clients so that they can verify TPM signatures; the TPM simply tests the signature and, if successful, returns the payload typed at Dyn. An example of a trustworthy operating system, OS, is initialized with an attestation and a check channel. It repeatedly accepts outside requests to run abstractions; a fresh attestation channel is created for each request that binds a message to the identity of the abstraction before passing it on to the TPM. For user programs, such as virtual machines or Internet browsers, that themselves host outside code, this protocol can be extended arbitrarily. Each layer provides the next layer up with an attestation service that appends the clients identity to a message before passing the request down. Because attestation channels are general-purpose, dynamic types are needed to type the payload of an attestation. The form of an attestation is therefore a nested series of dynamically typed messages, with the innermost carrying the payload and actual type and each successive layer being of the form dyn(M, Dyn(#(N))) where #(N) identifies the layer that generated M. The outermost message is then signed by the TPM.
Initial Processes. We assume that initial processes have the following configuration. Execution occurs in the context of an initial environment (Δ) consisting of a fixed number of Ch(Top)-typed channels (a1 , . . . , a j ), an arbitrary number of Ch(Un)-typed chanb) at various types. nels (a j+1 , . . . , ak ) and some number of additional channels ( The trusted world consists of k copies of T PM which share a single aik key name but listen on individual boot channels, and j subjects (P1 , . . . , Pj ) with f n(Pi ) ⊆ {ai } ∪ bi where b1 , . . . , b j are disjoint subsets of b, and a Δ-opponent (O) with f n(O) ⊆ {ai | i > j}. The opponent may control any number of TPM channels, but none that are in use by another subject. No two subjects initially share a name that is not also known to the opponent, therefore any secure communications between subjects has to be brokered by the TPM. νaaik .Πi≤k T PM(aaik , ai ) | P1 | . . . | Pj | O A typical subject “boots up” by sending the TPM an OS file and a fresh channel. After receiving an OS callback, the subject loads some number of concurrent applications. Each application receives its own identifying attestation channel from the operating system. Pi = νb.(ai !(OS, b) | b?(x)x!(APP1 , bi ) | . . . | x!(APPk , bi )) The robust safety of an initial process follows from the typability of TPM(aaik , ai ) for all i, which we establish informally by noting that (1) when the TPM receives a well-typed OS, the new attestation channel will type at Ch(Dyn× Ch(Un)), and attestations will have the form dyn(M, Dyn(#(OS), Hash(Cert))) which will be well typed because Γ M : Dyn; and (2) when the TPM loads an untyped OS, the new attestation channel will type at Ch(Top) and attestations will have the form dyn(M, Dyn(#(OS), Hash(Cert))), which will be well typed because Γ OS : Cert. Using Attestations. Even a signed attestation cannot be automatically trusted. Because the opponent controls some number of TPMs, the signature provides assurance only that the message was created by a TPM that was initially running the particular abstraction that hashes to the attested identity. To trust the contents one must also trust that the attesting abstraction (1) protects its attestation channel, and (2) only generates accurate dynamic types, which in the case of nesting implies that a host program correctly identifies a hosted application when attestations are created. Destructing an attestation is a three-step process. First the signature is validated using the bchk channel provided by the TPM, which returns dyn(M, Dyn(#(OS), Hash(Cert))).
Second, the identity #(OS) is checked to ensure that it corresponds to an abstraction that types at Cert. Dynamic verification cannot be used here because the original program code is not recoverable from the hash, so checking the identity amounts to testing equality with something with which there is a priori trust. Attested messages will generally be nested, so this process is repeated once per layer, eventually exposing a value of the form dyn(L, T). This is matched against an expected type and the payload L is recovered, typed at T. The processes of creating and destructing attestations are summarized in the following derived forms:
DERIVED FORMS: ATTEST AND CHECK
let x = attest(Mat, N, T) in P = νb.Mat!(dyn(N, T), b) | b?(x)P
let x = check(Mchk, N, (L1...n), T) in P = νb.Mchk!(N, b) | b?(x) case x of dyn(y1, Dyn(L1, Hash(Cert))) in
  . . .
  case yn−1 of dyn(yn, Dyn(Ln, Hash(Cert))) in case yn of dyn(z, T) in P
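To convey what the check derived form does operationally, here is a hedged Python analogue (the dictionary encoding of attestations and the HMAC stand-in for the TPM key are illustrative assumptions, not the calculus): it verifies the TPM signature, peels off one dyn layer per expected identity L1, . . . , Ln, and finally matches the payload against the expected type.

import hashlib, hmac, json

def tpm_check(aik_key: bytes, att: dict) -> dict:
    # The bchk service: test the TPM signature and release the signed message.
    blob = json.dumps(att["message"], sort_keys=True).encode()
    expected = hmac.new(aik_key, blob, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, att["signature"]):
        raise ValueError("bad TPM signature")
    return att["message"]

def check(aik_key: bytes, att: dict, expected_ids, expected_type):
    # Analogue of: let x = check(Mchk, N, (L1..n), T) in P.
    msg = tpm_check(aik_key, att)
    for ident in expected_ids:               # one "case ... of dyn(yi, Dyn(Li, Hash(Cert)))" per layer
        if msg.get("provenance") != ident:
            raise ValueError("untrusted layer identity: %r" % msg.get("provenance"))
        msg = msg["dyn"]
    if msg.get("type") != expected_type:     # final "case ... of dyn(z, T)"
        raise ValueError("payload has an unexpected type")
    return msg["dyn"]                        # the payload, now trusted at type T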
The robust safety theorem, combined with the typability of the derived forms for attest and check, implies that any well-typed program written to use this infrastructure is robustly safe. Bidirectional Authentication with a Trusted Verifier. We now turn to a specific example that uses trusted computing to allow two mutually distrusting parties to authenticate. The parties initially share no secure channels, have no knowledge of the other's program code and are unwilling to share their source code with the other. (Swapping source code may be unacceptable in practice due to proprietary interests, or simply performance reasons.) The parties do, however, initially trust the same verifier, which together with the TPM is sufficient to establish bidirectional trust. This example is general enough to suggest a wide range of applications, particularly in the context of communication over the public Internet where parties are frequently anonymous. The example comprises three software components: TV defines a trusted third-party verifier, CLIENT defines the initiator of the communication, and SERVER defines the other party to the communication. The trusted verifier inputs an abstraction on a public channel (aver) and uses dynamic verification to test it for typability. If successful, the hash of the abstraction is taken and packed into an attestation typed at Hash(Cert), which is returned to the requester to be used as a certificate. CLIENT and SERVER are each passed their own abstractions when they are initialized, which they send to the verifier to obtain certificates. CLIENT initiates the communication by sending first its certificate and second an attested response channel on the public channel areq. SERVER reads the certificate and uses it to trust the second message and recover the typed response channel, on which it writes its own certificate and another attestation containing the secret data.
TV = ((xat, xchk, ))aver?((yval, yrtn)) verify yval as {M | Γ, fn(M) : Top ⊢ M : Cert}(z1) in let z2 = attest(xat, #(z1), Hash(Cert)) in yrtn!z2
CLIENT = ((xat, xchk, xarg))νb.xarg!b | b?((xself))(νa.aver!(xself, a) | a?(ycert)areq!ycert) | (νbrsp.let yreq = attest(xat, brsp, Ch(Top)) in (apub!yreq) | brsp?(y)let zsid = check(xchk, y, (#(OS), #(TV)), Hash(Cert)) in b?(y)let zdat = check(xchk, y, (#(OS), zsid), Ch(T)) in P)
SERVER = ((xat, xchk, xarg))νb.xarg!b | b?((xself, xdat))νa.aver!(xself, a) | a?(ycert) apub?(y)let ycid = check(xchk, y, (#(OS), #(TV)), Hash(Cert)) in apub?(y)let yreq = check(xchk, y, (#(OS), ycid), Ch(Top)) in (yreq!ycert) | let yresp = attest(xchk, xdat, T) in (yreq!yresp | Q)
We assume that all three components will be run on trusted platforms, with CLIENT and SERVER on distinct TPMs. Trust in the verifier is based on the identity of the program code, not the party running it; it can therefore be run on its own TPM, or on the same TPM as either party, or even as separate processes on both. The TPM therefore allows parties to reliably certify their own code.
5 Conclusions We have presented a new reflective variant of the higher-order π calculus that allows for the dynamic inspection of process syntax and is useful for modeling open systems, which often rely on such operations. Reflection has also been considered for the λ-calculus [23,24], and dynamic verification using type-checking primitives has been considered in a π-calculus [25]. A language which allows explicit decomposition of processes has recently been proposed by Sato and Sumii [7]; the language considered here represents a middle ground, giving a simpler syntax and semantics but with a slight cost in terms of expressiveness. In particular, while we can model arbitrary verifiers, we do not permit the verifiers themselves to be treated as programs, which would then be subject to verification. We considered two specific applications that use reflection: dynamic verification, which relies on an ability to dynamically typecheck mobile code prior to execution, and trusted computing, which relies on an ability to associate a running process with the identity of the process abstraction it started as. The genesis of this work was our previous work with trusted computing in higher-order pi [6]. Many issues, such as code identity and allowing attackers to extract names from mobile code, were considered in the previous paper but handled in an ad-hoc fashion. This paper fulfills two additional objectives. First, it comprises a more foundational and expressive approach to understanding such systems. Second, it has allowed us to internalize static analysis. The approach to trusted computing in this paper lacks the rich access control features which were the focus of the prior paper; however, adding them would not be difficult.
3. Riely, J., Hennessy, M.: Trust and partial typing in open systems of mobile agents. In: Principles of Programming Languages, POPL 1999 (1999) 4. Trusted Computing Group: TCG TPM Specification Version 1.2 (March 2006),
5. Brickell, E., Camenisch, J., Chen, L.: Direct anonymous attestation. In: Computer and Communications Security (CCS), pp. 132–145. ACM Press, New York (2004) 6. Cirillo, A., Riely, J.: Access control based on code identity for open distributed systems. In: Barthe, G., Fournet, C. (eds.) TGC 2007 and FODO 2008. LNCS, vol. 4912, pp. 169–185. Springer, Heidelberg (2008) 7. Sato, N., Sumii, E.: A higher-order, call-by-value applied pi-calculus. In: Hu, Z. (ed.) APLAS 2009. LNCS, vol. 5904, pp. 311–326. Springer, Heidelberg (2009) 8. Sangiorgi, D.: Expressing Mobility in Process Algebras: First-Order and Higher-Order Paradigms. PhD thesis, University of Edinburgh (1993) 9. Sangiorgi, D.: Asynchronous process calculi: the first-order and higher-order paradigms (tutorial). Theoretical Computer Science 253, 311–350 (2001) 10. Sangiorgi, D., Walker, D.: The π-calculus: a Theory of Mobile Processes. Cambridge University Press, Cambridge (2001) 11. Sun Microsystems: Java Object Serialization Specification (2005),
12. Lindholm, T., Yellin, F.: The Java Virtual Machine Specification Second Edition. Sun Microsystems (1999) 13. Anderson, N.: Hacking Digital Rights Management. ArsTechnica.com (July 2006),
14. Haack, C., Jeffrey, A.S.A.: Pattern-matching spi-calculus. In: Proc. IFIP WG 1.7 Workshop on Formal Aspects in Security and Trust (2004) 15. Abadi, M., Fournet, C.: Mobile values, new names, and secure communication. In: Principles of Programming Languages, POPL 2001 (2001) 16. Abadi, M., Gordon, A.: A calculus for cryptographic protocols: The spi calculus. Information and Computation 148, 1–70 (1999) 17. Abadi, M.: Secrecy by typing in security protocols. J. ACM 46(5) (1999) 18. Gordon, A.D., Jeffrey, A.S.A.: Authenticity by typing for security protocols. J. Computer Security 11(4) (2003) 19. Fournet, C., Gordon, A., Maffeis, S.: A type discipline for authorization policies. In: Sagiv, M. (ed.) ESOP 2005. LNCS, vol. 3444, pp. 141–156. Springer, Heidelberg (2005) 20. Gordon, A.D., Jeffrey, A.S.A.: Secrecy despite compromise: Types, cryptography, and the picalculus. In: Abadi, M., de Alfaro, L. (eds.) CONCUR 2005. LNCS, vol. 3653, pp. 186–201. Springer, Heidelberg (2005) 21. Fournet, C., Gordon, A., Maffeis, S.: A type discipline for authorization in distributed systems. CSF 00, 31–48 (2007) 22. Abadi, M., Cardelli, L., Pierce, B., Plotkin, G.: Dynamic typing in a statically typed language. ACM Transactions on Programming Languages and Systems 13(2), 237–268 (1991) 23. Alt, J., Artemov, S.: Reflective lambda-calculus. Proof Theory in Computer Science, 22–37 (2001) 24. Artemov, S., Bonelli, E.: The intensional lambda calculus. Logical Foundations of Computer Science, 12–25 (2007) 25. Maffeis, S., Abadi, M., Fournet, C., Gordon, A.D.: Code-carrying authorization. In: Jajodia, S., Lopez, J. (eds.) ESORICS 2008. LNCS, vol. 5283, pp. 563–579. Springer, Heidelberg (2008)
Model Checking of Security-Sensitive Business Processes Alessandro Armando and Serena Elisa Ponta DIST, Università di Genova, Italy {armando,serena.ponta}@dist.unige.it www.avantssar.eu
Abstract. Security-sensitive business processes are business processes that must comply with security requirements (e.g. authorization constraints). In previous works it has been shown that model checking can be profitably used for the automatic analysis of security-sensitive business processes. But building a formal model that simultaneously accounts for both the workflow and the access control policy is a time consuming and error-prone activity. In this paper we present a new approach to model checking security-sensitive business processes that allows for the separate specification of the workflow and of the associated security policy while retaining the ability to carry out a fully automatic analysis of the process. To illustrate the effectiveness of the approach we describe its application to a version of the Loan Origination Process featuring an RBAC access control policy extended with delegation.
1
Introduction
A business process is a set of coordinated activities carried out concurrently by different entities using a set of resources, with the aim of achieving a goal or delivering a service. The design and development of business processes is a non-trivial activity as they must meet several contrasting requirements, e.g. the compliance with mandatory regulations and the ability to support a wide range of execution scenarios. Security-sensitive business processes are business processes in which security requirements play a significant role. In this paper we focus on security requirements on authorization. Failure to meet authorization constraints may lead to economic losses and even to legal implications. The evolution from static, well-established processes to dynamic ones (a current trend in the development of business processes) may seriously affect their security, and often this occurs in subtle and unexpected ways. As an example consider a business process in which agents can be dynamically delegated to perform tasks they were not initially authorized to execute. This is desirable as delegation provides additional flexibility to the process, but it also offers new ways to circumvent security. Since these
This work was partially supported by the FP7-ICT-2007-1 Project no. 216471, “AVANTSSAR: Automated Validation of Trust and Security of Service-oriented Architectures”.
kinds of vulnerabilities are very difficult to spot by simple inspection of the workflow and of the associated security policy, security-sensitive business processes are a new, promising application domain for formal methods. Model checking is a technique for the automatic analysis of concurrent systems. Given a model of the system, formally specified as a finite transition system, and the expected property, specified as a formula in some temporal logic (e.g. LTL), a model checker either establishes that the system enjoys the property or returns an execution trace witnessing its violation. Model checking has been remarkably successful in many key application areas, such as hardware and protocol verification and, more recently, software and security protocol verification. A natural question is whether model checking can be profitably used for the automatic analysis of security-sensitive business processes. Previous works [1,2,3] provide a positive answer to this question by showing that business processes under authorization constraints can be formally specified as transition systems and automatically analyzed by model checkers taken off the shelf. However, the manual definition of the transition system starting from the workflow and the associated security policy is a complex and error-prone activity, which, if not carried out correctly, may undermine the significance of the whole method. In this paper we present a new approach to the specification and model checking of security-sensitive business processes that comprises the following steps:
1. formal modeling of the security-sensitive business process as an access-controlled workflow system;
2. formal modeling of the expected security property as an LTL formula φ;
3. automatic translation of the access-controlled workflow system into a planning system, a formal framework amenable to automatic analysis; and
4. model checking of the planning system to determine whether it enjoys φ.
An access-controlled workflow system is a formal, yet natural framework for specifying security-sensitive business processes and results from the combination of a workflow system and an access control system. A workflow system supports the specification of the control flow of the business process by extending Petri Nets [4] through a richer notion of state that accounts not only for the concurrent execution of the tasks but also for the effects that their execution has on the global state of the system. An access control system provides a declarative, rule-based language for specifying a wide variety of security policies and operations for updating them. Our approach thus facilitates the modeling activity, while supporting a fully automatic analysis of the process. To illustrate the effectiveness of the approach we have applied it to a version of the Loan Origination Process (LOP) that features an RBAC access control policy extended with delegation. By using SATMC [5,6], a model checker for planning systems, we have detected serious flaws in our original specification of the LOP, and this has led us to the definition of a new, improved version of the business process. Our approach improves upon existing works on model checking of business processes by simultaneously supporting:
– the separate specification of the workflow and of the associated security policy;
– the formal and declarative specification of a wide range of security policies;
– the specification of tasks with non-deterministic effects;
– LTL to specify complex security properties at the level of the access-controlled workflow system, a higher level than that provided by transition systems;
– full automation of the analysis.
To the best of our knowledge no other model checking platform encompassing all the above features exists.
2
Security-Sensitive Business Processes
Let us consider the BPEL [7] specification of the Loan Origination Process (LOP) given in Fig. 1a. The process starts with the input of the customer's data (inputCustData). Afterwards, a contract for the current customer is prepared (prepareContract) while the customer's rating evaluation takes place concurrently. The rating enables the bank to determine whether the customer can be granted the requested loan. To this end, the execution may follow different paths: if the risk associated with the loan is low (lowRisk), then an internal rating suffices (intRating); otherwise the internal rating is followed by an external evaluation (extRating) carried out by a Credit Bureau, a third-party financial institution. The lowRisk condition indicates a situation in which the internal
Table 1. Permission assignment for the LOP

Task               Role
inputCustData      preprocessor
prepareContract    postprocessor
intRating          if (isIndustrial) then supervisor else postprocessor
extRating          supervisor
approve            if (isIndustrial) then manager else supervisor
sign               if (intRatingOK) then manager else director
rating is positive and the amount of the loan is not high. The loan request must then be approved (approve) by the bank. Subsequently, if the customer and the bank have reached an agreement, the contract is signed (sign). Notice that the execution of a task may affect the state of the process. For example, the task approve modifies the state of the execution by issuing a statement asserting whether the proposed product is suitable for the customer. An agent can execute a task only if she has the required permissions. As is common in the business domain, the LOP relies on an access control model based on RBAC [8] extended with delegation. According to the RBAC model, to perform a task an agent must be assigned a role that is enabled to execute the task, and the agent must also be active in that role. The roles used in our case study are given in Table 1 together with the tasks they are enabled to execute. Roles can be organized hierarchically. In our case study, a director is more senior than a manager and a supervisor is more senior than a postprocessor. Senior roles inherit the permission to perform tasks assigned to more junior roles. As a consequence, an agent can execute a task if her role (i) is directly assigned the required permissions or (ii) is more senior than a role owning such permissions. The permission assignment relation in Table 1 associates each task of the LOP with the most junior role entitled to execute it. Following the idea of conditional delegation presented in [9], we consider delegation rules of the form ⟨PreConds, ARole, DRole, Task⟩, where ARole and DRole are roles, Task is a task, and PreConds is a set of conditions that must hold for the delegation to be applicable. A delegation rule states that if PreConds holds and ARole is authorized to perform Task according to the permission assignment relation, then ARole can delegate DRole to execute Task. Notice that this is a task delegation rather than a role delegation. In fact, the delegated agent does not acquire a new role but only obtains the permission to perform Task by means of ARole. Examples of delegation rules considered in our case study are:
– D1: ⟨intRatingOK, manager, supervisor, approve⟩
– D2: ⟨intRatingOK, manager, supervisor, sign⟩
As far as the security requirements are concerned, here we focus on Separation of Duty (SoD) properties, which are used for internal control and are probably the most common application-level properties that business processes must comply
with. SoD amounts to requiring that some critical tasks are executed by different agents. This can be achieved by constraining the assignment of roles (Static SoD), their activation (Dynamic SoD), or even the execution of tasks [1]. In this paper we focus on Object-based SoD (ObjSoD) and Operational SoD (OpSoD). The former requires that no agent performs all the tasks accessing the same object, while the latter requires that no agent performs all the tasks of the workflow. Object-based SoD for the LOP. Since intRating, extRating, and approve access and deal with the rating of the customer, they form a set of critical tasks and the LOP is thus expected to meet the following ObjSoD property: “If the process terminates successfully, then no single agent has performed all the critical tasks (namely intRating, extRating, and approve).” Operational SoD for the LOP. The OpSoD for the LOP can be expressed as follows: “An agent cannot perform all the tasks of a successful process execution.” Notice that in this case the set of critical tasks depends on lowRisk, whose value cannot be predicted in advance. If lowRisk is false, the set of critical tasks comprises all the tasks of the process; otherwise it contains all tasks but extRating.
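To make the authorization model of this section concrete, the following Python sketch computes the decision induced by the permission assignment of Table 1, the seniority relation, and conditional delegation rules such as D1 and D2. The encoding (string-valued facts, the helper names, the simplified treatment of delegations as already granted) is an assumption of ours, for illustration only.

# Seniority: each senior role inherits the permissions of the roles below it.
SENIOR_TO = {"director": {"manager"}, "supervisor": {"postprocessor"}}

def junior_roles(role):
    # Reflexive-transitive closure of the seniority relation.
    seen, todo = set(), [role]
    while todo:
        r = todo.pop()
        if r not in seen:
            seen.add(r)
            todo.extend(SENIOR_TO.get(r, ()))
    return seen

def required_role(task, facts):
    # Most junior role entitled to execute the task (cf. Table 1).
    table = {
        "inputCustData":   "preprocessor",
        "prepareContract": "postprocessor",
        "intRating":  "supervisor" if facts["isIndustrial"] else "postprocessor",
        "extRating":  "supervisor",
        "approve":    "manager" if facts["isIndustrial"] else "supervisor",
        "sign":       "manager" if facts["intRatingOK"] else "director",
    }
    return table[task]

# Delegation rules <PreConds, ARole, DRole, Task>; D1 and D2 from the text.
DELEGATIONS = [({"intRatingOK"}, "manager", "supervisor", "approve"),
               ({"intRatingOK"}, "manager", "supervisor", "sign")]

def granted(agent_roles, task, facts, delegated_tasks=()):
    # An agent may execute the task if one of her roles is, or is senior to,
    # the required role, or if the task has been delegated to one of her roles
    # (checking that the rule's PreConds hold and that the delegator is
    # authorized is elided here).
    req = required_role(task, facts)
    if any(req in junior_roles(r) for r in agent_roles):
        return True
    return any(t == task and drole in agent_roles
               for (_, _, drole, t) in delegated_tasks)

facts = {"isIndustrial": True, "intRatingOK": True}
print(granted({"supervisor"}, "prepareContract", facts))       # True: supervisor inherits from postprocessor
print(granted({"supervisor"}, "approve", facts))               # False: approve requires manager here
print(granted({"supervisor"}, "approve", facts, DELEGATIONS))  # True, via delegation rule D1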
3
Access-Controlled Workflow Systems
At the core of our approach lies the notion of access-controlled workflow system, a formal framework supporting the separate specification of the workflow of the business process and of the associated security policy in terms of a workflow system and of an access control system respectively. A workflow system is a Petri Net extended with a richer notion of state that accounts not only for the concurrent execution of tasks but also for the effects that their execution has on the global state of the system. An access control system supports the formal and declarative specification of the access control policy (stating which agent can perform which task) including advanced but often used features such as delegation. In this section we provide a detailed account of our specification framework, a trace-based semantics, and a temporal logic that allows for the formal statement of properties of business processes.
3.1 Workflow Systems
A fact is an atomic proposition. Let F be a set of facts. A literal over F is either a fact in F or the negation of a fact in F . The set of literals over F is denoted by L(F ). A set of literals L is consistent if and only if L does not contain a fact and its negation. A formula over F is a propositional combination of facts and the propositional constant true using the usual propositional connectives (i.e. ¬, ∧, ∨, and ⇒).
Table 2. Mapping BPEL programs into workflow systems
[Graphical templates omitted: the table maps BPEL constructs, including <invoke> and <if>/<condition>, to workflow-system templates. Legend: t denotes a task.]
Let F be a set of facts and T be a set of transitions. A workflow system over F and T is a tuple WS = ⟨P, F, IWS, T, In, Out, γ, α⟩, where P is a set of places, In ⊆ (P × T), Out ⊆ (T × P), γ is a function that associates the elements of In with formulae over F expressing applicability conditions, and α : T → 2^L(F) × 2^L(F) × 2^L(F) is a partial function mapping transitions into sets of literals corresponding to their preconditions π(t), their deterministic effects η(t), and their non-deterministic effects ν(t) respectively, i.e. α(t) = ⟨π(t), η(t), ν(t)⟩, where π(t) and η(t) are consistent. We call tasks the transitions for which α is defined. If T ⊆ T is a set of transitions, then Tα denotes the set of tasks in T. A marking is a function M : P → N. A state of WS is a pair (M, L), where M is a marking and L ⊆ L(F) is a maximally consistent set of literals (i.e. a truth-value assignment to the facts in F). IWS is the set of initial states of WS. The preset of a transition t, in symbols •t, is the set {p ∈ P : (p, t) ∈ In}. The set of preconditions of a transition t, in symbols *t, is the set {γ(p, t) : (p, t) ∈ In}. The postset of a transition t, in symbols t•, is the set {p ∈ P : (t, p) ∈ Out}. Let T ⊆ T be a set of transitions. We define π(T) = ∪t∈Tα π(t), η(T) = ∪t∈Tα η(t), and ν(T) = ∪t∈Tα ν(t). A step is a set of transitions T ⊆ T such that π(T) and η(T) are consistent. A step T is enabled in a state (M, L) iff
– L |= γ(p, t) for all p ∈ P and t ∈ T s.t. (p, t) ∈ In, where |= is the consequence relation in classical propositional logic,
– π(T) ⊆ L, and
– M(p) ≥ Σt∈T In(p, t) for all p ∈ P.
Notice that here and in the sequel we use In(p, t) and Out(t, p) to denote also the characteristic functions of the In and Out relations respectively.
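For intuition, the enabledness condition can be rendered as a small Python check; the encoding below (literal strings, callables for the applicability conditions γ) is an assumption made here for illustration, not part of the formal development.

def step_enabled(step, marking, L, In, gamma, alpha):
    # step: set of transitions; marking: dict place -> int; L: maximally
    # consistent set of literals; In: set of (place, transition) pairs;
    # gamma[(p, t)]: predicate over L; alpha[t] = (pre, det_eff, nondet_eff).
    tasks = [t for t in step if t in alpha]
    pre = set().union(*(alpha[t][0] for t in tasks)) if tasks else set()
    if any(("not " + l) in pre for l in pre if not l.startswith("not ")):
        return False                              # π(T) must be consistent
    if not pre <= L:                              # π(T) ⊆ L
        return False
    for (p, t) in In:                             # L |= γ(p, t) for every input arc of the step
        if t in step and not gamma[(p, t)](L):
            return False
    for p in marking:                             # M(p) ≥ Σ_{t∈T} In(p, t)
        if marking[p] < sum(1 for t in step if (p, t) in In):
            return False
    return True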
It is worth pointing out that workflow systems extend Petri nets by providing a richer notion of state, given by the pair (M, L), which accounts not only for the concurrent execution of the component activities (by means of the marking M) but also for the effects that these activities have on the state of the system (represented by the set of literals L). For this reason workflow systems are a natural formal model for business processes. The mapping in Table 2 shows how the constructs occurring in the BPEL program of Fig. 1a can be mapped to workflow system templates. The mapping can be readily extended to support other basic activities (e.g. , ) as well as more complex BPEL structured activities (e.g. <while> and the construct which is used to join activities occurring in different branches of a ).
3.2 Access Control Systems
An access control system over F and T is a tuple ACS = ⟨F, IACS, A, T, U, α, H⟩ where F, A, T, U are sets of facts, agents, tasks, and policy updates respectively, α : U → 2^L(F) × 2^L(F) is a function mapping policy updates into consistent sets of literals corresponding to their preconditions π(u) and their effects η(u) respectively, i.e. α(u) = ⟨π(u), η(u)⟩, and H is a set of rules of the form ℓ0 ← ℓ1, . . . , ℓn with ℓi ∈ L(F) for i = 0, . . . , n and n ≥ 0. A state of ACS is a maximally consistent set of literals S ⊆ L(F) such that H(S) ⊆ S, where H(S) = {ℓ0 : (ℓ0 ← ℓ1, . . . , ℓn) ∈ H and ℓi ∈ S for all i = 1, . . . , n}. We assume that the set F of an ACS always contains a fact granted(a, t) for all a ∈ A and t ∈ T expressing the authorization of an agent a to execute a task t. IACS ⊆ L(F) are the initial states of ACS. Let U be a set of policy updates. We define π(U) = ∪u∈U π(u) and η(U) = ∪u∈U η(u). A step of ACS is a set of policy updates U ⊆ U such that π(U) and η(U) are consistent. A step U is enabled in a state L iff π(U) ⊆ L.
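The closure condition H(S) ⊆ S can be checked, or a state completed, by the usual fixed-point iteration over the rules. The sketch below, with rules encoded as (head, body) pairs of literal strings, is an illustrative assumption of ours rather than an implementation from the paper.

def close_under_rules(S, H):
    # Smallest superset of S closed under H, where each rule is a pair
    # (head, body) representing head <- l1, ..., ln with body = (l1, ..., ln).
    S = set(S)
    changed = True
    while changed:
        changed = False
        for head, body in H:
            if head not in S and all(l in S for l in body):
                S.add(head)
                changed = True
    return S

# Example: granted(a, t) is derived from a role assignment and a permission.
H = {("granted(alice,sign)", ("assigned(alice,manager)", "perm(manager,sign)"))}
S0 = {"assigned(alice,manager)", "perm(manager,sign)"}
print(close_under_rules(S0, H))   # adds granted(alice,sign)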
3.3 Access-Controlled Workflow Systems
An access-controlled workflow system over F and T is a pair AWS = ⟨WS, ACS⟩ where WS is a workflow system over F and T and ACS is an access control system over F and Tα. A state of AWS is a pair (M, L), where M is a marking of the workflow system WS and L is a maximally consistent set of literals in L(F). The initial states of AWS = ⟨WS, ACS⟩ are the initial states (M, L) of WS such that L is also an initial state of ACS. We use IAWS to denote the set of initial states of AWS. A task allocation for Tα is a total function λ : Tα → A. A step W of AWS = ⟨WS, ACS⟩ is a triple (T, U, λ) where T ⊆ T and U ⊆ U such that both π(T) ∪ π(U) and η(T) ∪ η(U) are consistent and λ is a task allocation for Tα. A step W = (T, U, λ) is enabled in a state (M, L) iff granted(λ(t), t) ∈ L for all t ∈ Tα, T is enabled in (M, L), and U is enabled in L. If a step W = (T, U, λ) is enabled in S = (M, L), then the occurrence of W in S leads to a new state S′ = (M′, L′), in symbols S[W⟩S′. A literal l is caused by a step transition S[W⟩S′ iff at least one of the following conditions holds:
– l ∈ η(T) ∪ η(U), i.e. l is a deterministic effect of some transition or update,
– l ∈ ν(T) and l ∈ L′, i.e. l is a non-deterministic effect of some transition,
– there exists l ← l1, . . . , ln ∈ H with li ∈ L′ for i = 1, . . . , n, i.e. l is the head of a rule in H whose body holds in L′,
– l ∈ L and l ∈ L′, i.e. the truth-value of l is left untouched by the occurrence of W.
A step transition S[W⟩S′ is causally explained according to AWS iff S′ coincides with the set of literals caused by the occurrence of W in S. (The notion of causal explanation is adapted from [10].) An execution path χ of AWS is an alternating sequence of states and steps S0 W0 S1 W1 · · · such that Si[Wi⟩Si+1 are causally explained step transitions for i ≥ 0. We use χs(i) and χw(i) to denote Si and Wi respectively. An execution path χ is initialized iff χs(0) ∈ IAWS. A state S is reachable in AWS iff there exists an initialized execution path χ such that χs(i) = S for some i ≥ 0. A state (M, L) is n-safe iff M(p) ≤ n for all p ∈ P. An access-controlled workflow system AWS is n-safe iff all its reachable states are n-safe. Notice that the mapping of Table 2 yields 1-safe access-controlled workflow systems, provided that the initial states are 1-safe. This trivially follows from the observation that every place of the workflow system has at most one incoming edge. In the sequel we will restrict our attention to 1-safe access-controlled workflow systems.
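To make the notion of a causally explained step transition concrete, the following Python sketch computes the set of literals caused by S[W⟩S′ according to the four clauses above and checks that it coincides with S′. Literals are plain strings with negation written as a "not " prefix; this representation is our own choice for illustration.

def caused(L, L2, det_eff, nondet_eff, rules):
    # Literals caused by a step transition from L to L2: deterministic effects,
    # non-deterministic effects that hold in L2, heads of rules whose bodies
    # hold in L2, and literals that persist from L (the inertia clause).
    out = set(det_eff)
    out |= {l for l in nondet_eff if l in L2}
    out |= {h for (h, body) in rules if all(b in L2 for b in body)}
    out |= {l for l in L if l in L2}
    return out

def causally_explained(L, L2, det_eff, nondet_eff, rules):
    # The transition is causally explained iff L2 is exactly the caused set.
    return set(L2) == caused(L, L2, det_eff, nondet_eff, rules)

L  = {"lowRisk", "not productOK", "p1"}
L2 = {"lowRisk", "productOK", "not p1"}     # e.g. approve moves the token and settles productOK
print(causally_explained(L, L2, det_eff={"not p1"},
                         nondet_eff={"productOK", "not productOK"}, rules=set()))   # True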
3.4 A Logic for Access-Controlled Workflow System
Properties of access-controlled workflow systems can be expressed by means of LTL formulae. Let AWS be an access-controlled workflow system. The set of LTL formulae associated with AWS is the smallest set containing F, an atomic proposition for each place in P, an atomic proposition exec(a, t) for each a ∈ A and t ∈ Tα, an atomic proposition exec(t) for each transition t ∈ T \ Tα, an atomic proposition exec(u) for each policy update u ∈ U, and such that if φ and ψ are LTL formulae, then also ¬φ, (φ ∨ ψ), (φ ∧ ψ), (φ ⇒ ψ), X φ, F φ, and G φ are LTL formulae. Let χ be an initialized path of AWS. An LTL formula φ is valid on χ, in symbols χ |= φ, if and only if (χ, 0) |= φ, where (χ, i) |= φ, for i ≥ 0, is inductively defined as follows:
– if φ is a fact then (χ, i) |= φ iff χs(i) = (M, L) with φ ∈ L,
– if φ is an atomic proposition corresponding to a place p ∈ P, then (χ, i) |= φ iff χs(i) = (M, L) with M(p) ≥ 1,
– if φ is an atomic proposition of the form exec(a, t), then (χ, i) |= φ iff χw(i) = (T, U, λ) with t ∈ Tα and λ(t) = a,
– if φ is an atomic proposition of the form exec(o), then (χ, i) |= φ iff χw(i) = (T, U, λ) with o ∈ (T \ Tα) ∪ U,
– (χ, i) |= ¬φ iff (χ, i) ⊭ φ,
– (χ, i) |= (φ ∨ ψ) iff (χ, i) |= φ or (χ, i) |= ψ,
– (χ, i) |= X φ iff (χ, i + 1) |= φ, and
– (χ, i) |= F φ iff there exists j ≥ i s.t. (χ, j) |= φ.
The semantics of the remaining connectives readily follows from the following equivalences: (φ ∧ ψ) ≡ ¬(¬φ ∨ ¬ψ), (φ ⇒ ψ) ≡ (¬φ ∨ ψ), and G(φ) ≡ ¬ F ¬φ. A formula φ is valid in AWS, in symbols |=AWS φ, iff χ |= φ for all initialized execution paths χ of AWS. By using LTL we can now give a formal definition of the SoD properties presented in Sect. 2. ObjSoD for the LOP. If the process terminates successfully, then for all agents a ∈ A there exists a task t ∈ T, where T = {intRating, extRating, approve}, such that a does not perform t:
⋀a∈A ⋁t∈T (F(p10 ∧ productOK) ⇒ G ¬ exec(a, t)).    (1)
OpSoD for the LOP. For all agents a ∈ A there exists at least one task in the workflow that is never executed by a:
⋀a∈A ( G ¬ exec(a, inputCustData) ∨ G ¬ exec(a, prepareContract) ∨ G ¬ exec(a, intRating) ∨ G(lowRisk ∨ ¬ exec(a, extRating)) ∨ G ¬ exec(a, approve) ∨ G ¬ exec(a, sign) ).    (2)
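For intuition, a property such as (1) can also be checked directly on a single recorded execution: the sketch below replays a finite trace of exec events and reports whether some agent performed all three critical tasks in a run that reached p10 with productOK. The event encoding is an assumption made here for illustration; the actual verification in the paper is performed symbolically by a model checker.

CRITICAL = {"intRating", "extRating", "approve"}

def obj_sod_counterexample(trace, final_state):
    # trace: list of ("exec", agent, task) events; final_state: set of literals.
    # Returns an offending agent if the successful run lets a single agent
    # perform every critical task (a counterexample to property (1)), else None.
    if not {"p10", "productOK"} <= final_state:   # F(p10 ∧ productOK) did not hold
        return None
    done = {}
    for (_, agent, task) in trace:
        done.setdefault(agent, set()).add(task)
    for agent, tasks in done.items():
        if CRITICAL <= tasks:
            return agent
    return None

trace = [("exec", "pierSilvio", "inputCustData"), ("exec", "pierSilvio", "intRating"),
         ("exec", "pierSilvio", "extRating"), ("exec", "pierSilvio", "approve"),
         ("exec", "maria", "sign")]
print(obj_sod_counterexample(trace, {"p10", "productOK"}))   # pierSilvio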
4
Model Checking Access-Controlled Workflow Systems
A planning system is a formal framework for the specification of concurrent systems inspired by the model used by the AI community to specify planning domains: states are represented by sets of literals and state transitions by actions, where an action is specified by preconditions (i.e. literals that must hold for the action to be executable) and effects (i.e. which literals are possibly affected by the execution of the action); the possible behaviors of the system can also be constrained by means of rules. It must be noted that the ability to specify rules and actions with non-deterministic effects goes beyond the expressiveness of traditional planning languages, e.g. STRIPS, but is supported by more recent developments [10,11]. In this section we formally define a planning system and provide a trace-based semantics, a temporal logic for specifying properties, and a formal translation from access-controlled workflow systems to planning systems.
4.1 Planning Systems
A planning system is a tuple PS = ⟨FP, IPS, OP, ω, HP⟩, where FP is a set of facts, OP is a set of planning operators, ω : OP → 2^L(FP) × 2^L(FP) × 2^L(FP) is a function mapping planning operators into sets of literals corresponding to their preconditions π(o), their deterministic effects η(o), and their non-deterministic effects ν(o) respectively, i.e. ω(o) = ⟨π(o), η(o), ν(o)⟩, where π(o) and η(o) are consistent, and HP is a set of rules over FP. A state of PS is a maximally consistent set of literals over FP. IPS is the set of initial states of PS. If O is a set of planning operators, then π(O) = ∪o∈O π(o), η(O) = ∪o∈O η(o), and
ν(O) = ∪o∈O ν(o). A step O of PS is a set of planning operators such that π(O) and η(O) are consistent. A step O is enabled in a state S iff π(O) ⊆ S. If a step O is enabled in a state S, then the occurrence of O in S leads to a new state S′, in symbols S[O⟩S′. A literal l is caused by a step transition S[O⟩S′ iff at least one of the following conditions holds:
– l ∈ η(O), i.e. l is a deterministic effect of some planning operator,
– l ∈ ν(O) and l ∈ S′, i.e. l is a non-deterministic effect of some planning operator,
– there exists l ← l1, . . . , ln ∈ HP with li ∈ S′ for i = 1, . . . , n, i.e. l is the head of a rule in HP whose body holds in S′,
– l ∈ S and l ∈ S′, i.e. the truth-value of l is left untouched by the occurrence of O.
A step transition S[O⟩S′ is causally explained according to PS iff S′ coincides with the set of literals caused by the occurrence of O in S. An execution path χ of PS is an alternating sequence of states and steps S0 O0 S1 O1 · · · such that Si[Oi⟩Si+1 are causally explained step transitions for i ≥ 0. We use χs(i) and χo(i) to denote Si and Oi respectively. An execution path χ is initialized iff χs(0) ∈ IPS. A state S is reachable iff there exists an initialized execution path χ such that χs(i) = S for some i ≥ 0. Let PS be a planning system. The set of LTL formulae associated with PS can be defined analogously to the set of LTL formulae for an access-controlled workflow system (cf. Sect. 3.3) by using facts and planning operators as atomic propositions. The validity of an LTL formula φ of PS on an execution trace also closely follows the one given for access-controlled workflow systems. Finally, we say that φ is valid in PS, in symbols |=PS φ, iff χ |= φ for all initialized execution paths χ of PS.
4.2 From Access-Controlled Workflow Systems to Planning Systems
Let AWS = ⟨WS, ACS⟩ be an access-controlled workflow system over F and T with WS = ⟨P, F, IWS, T, In, Out, γ, α⟩ and ACS = ⟨F, IACS, A, Tα, U, α, H⟩. The planning system associated with AWS is PS = ⟨FP, IPS, OP, ω, HP⟩, where FP is obtained from F by adding a new fact for each place in P,¹ IPS contains a state L ∪ {p : M(p) = 1} ∪ {¬p : M(p) = 0} for each initial state (M, L) of AWS, HP = H, OP contains
– a planning operator exec(a, t) for each a ∈ A and t ∈ Tα,
– a planning operator exec(t) for each transition t ∈ T \ Tα,
– a planning operator exec(u) for each policy update u ∈ U,
and ω is such that:
– for all a ∈ A and t ∈ Tα, π(exec(a, t)) = π(t) ∪ •t ∪ *t ∪ {granted(a, t)}, η(exec(a, t)) = η(t) ∪ t• ∪ ¬•t, ν(exec(a, t)) = ν(t), where ¬F = {¬f : f ∈ F} for F ⊆ F;
¹ We will not bother distinguishing between a place and the corresponding fact.
– for all t ∈ T \ Tα, π(exec(t)) = •t ∪ *t, η(exec(t)) = t• ∪ ¬•t, and ν(exec(t)) = ∅;
– for all u ∈ U, π(exec(u)) = π(u), η(exec(u)) = η(u), and ν(exec(u)) = ∅.
Notice that the set of LTL formulae used to specify the properties of PS coincides with the set of LTL formulae used to specify the properties of AWS. Theorem 1. Let φ be an LTL formula, AWS be an access-controlled workflow system and PS be the planning system associated with AWS; then |=AWS φ iff |=PS φ. This allows us to reduce the problem of checking whether AWS enjoys a given property φ to the problem of checking whether PS enjoys φ. The proof of the theorem (available in [12]) amounts to showing that AWS and PS are bisimulation equivalent. We have developed a prototype implementation of the above translation from access-controlled workflow systems to planning systems within SATMC, a SAT-based bounded model checker for planning systems. SATMC [5,6] is one of the back-ends of the AVISPA Tool [13] and has been key to the discovery of serious flaws in security protocols [14,15]. Given a planning system PS, an LTL formula φ, and a positive integer k as input, SATMC builds a propositional formula whose models (if any) correspond to initialized execution paths χ of PS of length at most k such that χ |= φ. (We have recently extended SATMC so as to handle planning systems as defined in Sect. 4.1, i.e. planning systems featuring rules as well as operators with non-deterministic effects.) The propositional formula is then fed to a state-of-the-art SAT solver and any model found by the solver is translated back into a counterexample. It can be shown (see, e.g., [16]) that the encoding time (i.e. the time required to build the propositional formula) is polynomial in the size of the planning system and of the goal formula for any given value of k > 0. Given an LTL formula φ and an access-controlled workflow system AWS, the translator automatically reduces the problem of checking whether |=AWS φ to that of checking whether |=PS φ, where PS is the planning system obtained by applying the translation to AWS. The resulting planning system PS is given as input to SATMC. Therefore by the addition of the above translator, SATMC is now capable of model checking access-controlled workflow systems.
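A hedged Python fragment may help convey how the task operators exec(a, t) are assembled from the workflow and the policy during the translation; the data layout, the example task and the place names are assumptions made here for illustration, and only the task case of ω is shown.

def negate_all(places):
    return {"not " + p for p in places}

def task_operator(agent, t, alpha, In, Out, gamma_conds):
    # Build exec(agent, t) following
    #   π(exec(a,t)) = π(t) ∪ •t ∪ *t ∪ {granted(a,t)},
    #   η(exec(a,t)) = η(t) ∪ t• ∪ ¬•t,   ν(exec(a,t)) = ν(t).
    pre_t, det_t, nondet_t = alpha[t]
    preset  = {p for (p, t2) in In  if t2 == t}           # •t
    postset = {p for (t2, p) in Out if t2 == t}           # t•
    conds   = {gamma_conds[(p, t)] for p in preset}       # *t (applicability formulae)
    pre = set(pre_t) | preset | conds | {"granted(%s,%s)" % (agent, t)}
    eff = set(det_t) | postset | negate_all(preset)
    return ("exec(%s,%s)" % (agent, t), pre, eff, set(nondet_t))

# approve: no preconditions, non-deterministic effect on productOK (cf. Sect. 2).
alpha = {"approve": (set(), set(), {"productOK", "not productOK"})}
In, Out = {("p8", "approve")}, {("approve", "p9")}
gamma_conds = {("p8", "approve"): "true"}
print(task_operator("maria", "approve", alpha, In, Out, gamma_conds))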
5
Experiments
We have formalized the LOP of Sect. 2 as an access-controlled workflow system AWS in a scenario characterized by the following agents: davide, the director, maria and marco, managers, pierPaolo, who can act both as preprocessing clerk and as postprocessing clerk, pierSilvio, who can act both as preprocessing clerk and as supervisor, pietro, postprocessing clerk, and stefano, supervisor. (See [12] for more details.) We have then automatically translated AWS into the corresponding planning system and by using SATMC we have analyzed the planning system of the LOP w.r.t. the SoD properties (1) and (2).
ObjSoD for the LOP. Let AWS0 be the access-controlled workflow system for the LOP we presented in Sect. 2. Our first experiment was to check whether |=AWS0 (1). SATMC found a counterexample where pierSilvio can execute all the tasks intRating, extRating and approve through his role supervisor, thereby violating the property. By inspecting the intermediate states of the trace it is easy to conclude that the violation occurs if the customer is not industrial, the internal rating is not ok, and the loan is neither highValue nor lowRisk. Indeed, in this scenario the permission assignment relation, together with the seniority relation between supervisor and postprocessor, allows a supervisor to perform all the critical tasks. It is easy to see that this violation can be prevented by restricting the permission assignment relation. However, this solution has the negative effect of reducing the flexibility of the business process. We therefore considered an alternative solution based on the idea of implementing in the LOP a mechanism that prevents an agent from activating a role if she has already executed intRating and extRating in the same role. Notice that the mechanism has the effect of restricting the execution paths to those satisfying the following LTL formula:
⋀a∈A ⋀r∈R G(executed(a, r, intRating) ⇒ G(executed(a, r, extRating) ⇒ G ¬ activated(a, r)))    (3)
where R is the set of roles involved in the process and executed(a, r, t) abbreviates the formula (granted(a, r, t) ∧ X exec(a, t)). Thus instead of changing AWS0 into a new access-controlled workflow system AWS1 implementing the above mechanism and checking whether |=AWS1 (1), we asked SATMC to check whether |=AWS0 ((3) ⇒ (1)). SATMC did not find the previous counterexample any more, but found a new one. In the new counterexample the agent responsible for the violation is stefano, who executes intRating and extRating as supervisor, but can nevertheless execute approve because a manager, maria, delegates him to approve the document by means of the delegation rule D1, and this leads to the violation. A further inspection of the intermediate states of the trace shows that the violation occurs if the customer is industrial, the internal rating is ok, and the loan is highValue and not lowRisk. To avoid this new violation we constrained the applicability of rule D1 by conjoining its applicability condition with the literal ¬highValue. By changing AWS0 in this way we obtained a new access-controlled workflow system, say AWS2, and asked SATMC to check whether |=AWS2 ((3) ⇒ (1)). SATMC did not find any counterexamples to this formula. OpSoD for the LOP. We then checked |=AWS2 (2) and SATMC found a new violation. The violation occurs if the customer is not industrial, the internal rating is ok, and the loan is lowRisk and not highValue. Notice that in this situation the loan is lowRisk and extRating is not performed, and therefore the ObjSoD is ensured. However pierSilvio, who is assigned to roles preprocessor and supervisor, successfully completes the LOP and thus violates the OpSoD. Notice that the user and permission assignment relations do not enable pierSilvio to sign the contract. In fact sign must be executed by an agent who is at least
manager. However a manager, maria in the trace, delegates pierSilvio to sign the document by executing the delegation rule D2. By inspecting the counterexample leading to the violation, it appears that pierSilvio inherits the permission to execute prepareContract and intRating from the more junior role postprocessor. Thus, a possible solution is to modify the role hierarchy by breaking the seniority relation between supervisor and postprocessor. An alternative solution is to restrict the applicability condition of D2 by conjoining it with the fact highValue. In fact, when the loan is not highValue, the access control policy is less restrictive and the application of D2 must be prevented. SATMC does not find any violation in either of the access-controlled workflow systems obtained by modifying AWS2 in these two ways. The experiments were performed on a notebook with an Intel Core 2 Duo processor with 1.50GHz clock and 2GB of RAM. All the vulnerabilities were found by SATMC in little time: the encoding time ranges from 2.08 sec for |=AWS0 (1) to 6.43 sec for |=AWS2 (2), while the solving time is less than 0.5 sec for all the experiments considered.
6
Related Work
An approach to the automatic analysis of security-sensitive business processes is put forward in [1]. The paper shows that business processes with RBAC policies and delegation can be formally specified as transition systems and that SoD properties can be formally expressed as LTL formulae specifying the allowed behaviors of the transition systems. The viability of the approach is shown through its application to the LOP and the NuSMV model checker is used to carry out the verification. Our approach provides the user with a level of abstraction which is much closer to the process being modeled and provides a number of important advantages including (i) the separate specification of the workflow and of the access control policy and (ii) the formal specification of the security properties as LTL formulae that specify the allowed behavior of the access-controlled workflow system. This is not the case in the approach presented in [1], where even small changes in the workflow or in the access control policy may affect the specification of the whole transition system and the specification of the security property is relative to the (low level) transition system. In our approach the compilation of the access controlled workflow system and of the expected security properties into the corresponding planning system and properties (resp.) can be done automatically and proved correct once and for all as we have done in this paper. Our approach therefore greatly simplifies the specification process, reduces the semantic gap, and considerably reduces the probability of introducing bugs in the specification. A formal framework that integrates RBAC into the semantics of BPEL and uses the SAL model checker to analyze SoD properties as well as to synthesize a resource allocation plan has been presented in [2]. However the approach supports the RBAC model with tasks rigidly associated with specific roles, while our approach supports the specification of a wide variety of access control models
and policy updates. Moreover the semantics of BPEL adopted in [2] does not take into account the global state of the process and assumes an interleaving semantics. This is not the case in our approach as it accounts for a global state that can be affected by the execution of the tasks as well as for the simultaneous execution of actions. An approach to the combined modeling of business workflows with RBAC models is presented in [3]. The paper proposes an extended finite state machine model that allows for the model checking of SoD properties by using the model checker SPIN. It considers a simple RBAC model based only on previous activation (or non-activation) of roles and it does not take into account delegation. Another approach to the automated analysis of business processes is presented in [17]. The paper proposes to model workflows and security policies in a security-enhanced BPMN notation, a formal semantics based on Coloured Petri nets, an automatic translation from the process model into the Promela specification language and the usage of SPIN to verify SoD properties. However no provision is made for the assignment of an agent to multiple roles, role hierarchy, delegation, and the global state of the process. An approach based on model checking for the analysis and synthesis of fine-grained security policies is presented in [18]. The framework supports the specification of complex policies (including administrative policies), but mutual exclusion in the user assignment relation is not supported, nor is it possible to express role inheritance. Moreover, the modeling of the workflow is not in the scope of [18], whereas modeling and analyzing the interplay of the workflow and the access control policy is one of the main objectives of our work.
7
Conclusions and Future Work
We have presented a new approach to the formal modeling and automatic analysis of security-sensitive business processes that greatly simplifies the specification activity while retaining full automation. Our approach improves upon the state of the art by supporting the separate specification of the workflow and of the security policy as well as a translation into a specification framework amenable to automatic analysis. Our experiments confirm that model checking can be very effective not only to detect security flaws but also to identify possible solutions. The analysis of business processes via reduction to planning is currently being implemented and integrated by SAP within SAP NetWeaver BPM [19] by using SATMC as a back-end. The version of SATMC currently integrated supports a simpler definition of planning systems than that given in Sect. 4.1 (i.e. it features neither rules nor operators with non-deterministic effects). The integration of the latest version of SATMC will enable the SAP platform to support the advanced features that have been presented in this work.
References 1. Schaad, A., Lotz, V., Sohr, K.: A model-checking approach to analysing organisational controls in a loan origination process. In: SACMAT, pp. 139–149. ACM, New York (2006)
2. Cerone, A., Xiangpeng, Z., Krishnan, P.: Modelling and resource allocation planning of BPEL workflows under security constraints. TR 336, UNU-IIST (2006), http://www.iist.unu.edu/ 3. Dury, A., Boroday, S., Petrenko, A., Lotz, V.: Formal verification of business workflows and role based access control systems. In: SECURWARE 2007, pp. 201–210 (2007) 4. Peterson, J.L.: Petri Net Theory and the Modeling of Systems. Prentice Hall, Englewood Cliffs (1981) 5. Armando, A., Compagna, L.: SATMC: a SAT-based model checker for security protocols. In: Alferes, J.J., Leite, J. (eds.) JELIA 2004. LNCS (LNAI), vol. 3229, pp. 730–733. Springer, Heidelberg (2004) 6. Armando, A., Compagna, L.: SAT-based model-checking for security protocols analysis. In: IJIS. Springer, Heidelberg (2007) 7. OASIS: Web Services Business Process Execution Language Version 2.0 (2007), http://docs.oasis-open.org/wsbpel/2.0/OS/wsbpel-v2.0-OS.html 8. Sandhu, R.S., Coyne, E.J., Feinstein, H.L., Youman, C.E.: Role-based access control models. Computer 29(2), 38–47 (1996) 9. Atluri, V., Warner, J.: Supporting conditional delegation in secure workflow management systems. In: SACMAT, pp. 49–58. ACM Press, New York (2005) 10. Giunchiglia, E., Lifschitz, V.: An action language based on causal explanation: Preliminary report. In: AAAI 1998, pp. 623–630. AAAI Press, Menlo Park (1998) 11. Ferraris, P., Giunchiglia, E.: Planning as satisfiability in nondeterministic domains. In: AAAI 2000 and IAAI 2000, pp. 748–753. AAAI Press / The MIT Press (2000) 12. Armando, A., Ponta, S.E.: Model checking of security-sensitive business processes (2009), http://www.ai-lab.it/serena/tr090724.pdf 13. Armando, A., Basin, D., Boichut, Y., Chevalier, Y., Compagna, L., Cuéllar, J., Drielsma, H.P., Héam, P.C., Kouchnarenko, O., Mantovani, J., Mödersheim, S., von Oheimb, D., Rusinowitch, M., Santiago, J., Turuani, M., Viganò, L., Vigneron, L.: The AVISPA Tool for the Automated Validation of Internet Security Protocols and Applications. In: Etessami, K., Rajamani, S.K. (eds.) CAV 2005. LNCS, vol. 3576, pp. 281–285. Springer, Heidelberg (2005) 14. Armando, A., Carbone, R., Compagna, L.: LTL model checking for security protocols. In: CSF-20, pp. 385–396. IEEE Computer Society, Los Alamitos (2007) 15. Armando, A., Carbone, R., Compagna, L., Cuéllar, J., Tobarra, M.L.: Formal analysis of SAML 2.0 web browser single sign-on: breaking the SAML-based single sign-on for google apps. In: FMSE, pp. 1–10. ACM, New York (2008) 16. Kautz, H., McAllester, D., Selman, B.: Encoding Plans in Propositional Logic. In: KR, pp. 374–384 (1996) 17. Wolter, C., Miseldine, P., Meinel, C.: Verification of business process entailment constraints using SPIN. In: Massacci, F., Redwine Jr., S.T., Zannone, N. (eds.) ESSoS 2009. LNCS, vol. 5429. Springer, Heidelberg (2009) 18. Guelev, D.P., Ryan, M., Schobbens, P.Y.: Model-checking access control policies. In: Zhang, K., Zheng, Y. (eds.) ISC 2004. LNCS, vol. 3225, pp. 219–230. Springer, Heidelberg (2004) 19. SAP NetWeaver Business Process Management, http://www.sap.com/platform/netweaver/components/sapnetweaverbpm/index.epx
Analysing the Information Flow Properties of Object-Capability Patterns Toby Murray and Gavin Lowe Oxford University Computing Laboratory Wolfson Building, Parks Road, Oxford, OX1 3QD, United Kingdom {toby.murray,gavin.lowe}@comlab.ox.ac.uk
Abstract. We consider the problem of detecting covert channels within security-enforcing object-capability patterns. Traditional formalisms for reasoning about the security properties of object-capability patterns require one to be aware, a priori, of all possible mechanisms for covert information flow that might be present within a pattern, in order to detect covert channels within it. We show how the CSP process algebra, and its model-checker FDR, can be applied to overcome this limitation.
1
Introduction
The object-capability model [9] is a security architecture for the construction of software systems that naturally adhere to the principle of least authority [9], a refinement of Saltzer and Schroeder's principle of least privilege [19]. Several current research projects, including secure programming languages like E [9], Joe-E [8] and Google's Caja [10], and microkernel operating systems like the Annex Capability Kernel [5] and seL4 [1], implement the object-capability model to provide platforms for cooperation in the presence of mutual suspicion. Security properties are enforced in object-capability systems by deploying security-enforcing abstractions, called patterns, much in the same way that a program's ordinary functional properties are implemented by ordinary programming abstractions and design patterns. It is therefore very important to be able to understand precisely the security properties that an individual object-capability pattern does, and does not, enforce. For systems in which confidentiality is a primary concern, we are most often interested in those security properties that capture the ways in which information may flow within them. In object-capability applications that involve confidentiality, the information flow properties of security-enforcing object-capability patterns are of vital importance. In particular, it is necessary to be able to detect the existence of covert channels within object-capability patterns. Whilst the formal analysis of object-capability patterns has received some attention [20], the previous formalisms that were employed require all effects that are to be reasoned about to be explicitly included in any model of an object-capability pattern that is being analysed. Thus, in order for covert channels to be detected within an object-capability pattern, the mechanisms for covert
information propagation must be explicitly modelled. This requires one who wishes to detect covert channels in a pattern to be aware, a priori, of the possible mechanisms for covert information flow within it. In this paper, we show how the CSP process algebra [14], and its model checker FDR [4], can be applied to model object-capability patterns and detect covert channels within them, without forcing the programmer to enumerate the mechanisms by which information may covertly propagate. We adopt CSP for modelling object-capability systems, as opposed to what others might consider to be a more natural formalism such as the π-calculus, because we can use FDR to automatically check our properties via CSP's formal theory of refinement which, as will become evident, is integral to our understanding of both object-capability systems and information flow within them. We conclude this section by briefly explaining the object-capability model and the fragment of CSP used in this paper. Further details about CSP can be found in [14]. In Section 2, we explain how object-capability systems can be modelled in CSP. In doing so, we present an example model of a Data-Diode pattern, from [9], that is designed to allow data to flow from low-sensitivity objects to high-sensitivity ones, whilst preventing data propagating in the reverse direction. In Section 3, we give a general definition for information flow security for object-capability systems modelled in CSP and argue that the information flow property Weakened RCFNDC for Compositions [12], which can be automatically tested in FDR, is an appropriate test to apply to such systems. Applying this test to our model from Section 2, we find that it does indeed contain covert channels, before showing how to refine the model to an implementation that passes the test. The analysis here considers only a small instance of the Data-Diode pattern composed with a handful of other objects. Therefore, in Section 4, we show how to generalise our results to systems of arbitrary size in which objects may create arbitrary numbers of other objects, applying the theory of data-independence [6]. Finally, we conclude and consider related work in Section 5. Some proofs are omitted but appear in [11]. Thanks to Bill Roscoe for useful discussions about data-independence, and to the anonymous reviewers. The Object-Capability Model. The object-capability model [9] is a model of computation and security that aims to capture the semantics of many actual object-based programming languages and capability-based systems, including all of those mentioned in Section 1. An object-capability system is an instance of the model and comprises just a collection of objects, connected to each other by capabilities. An object is a protected entity comprising state and code that together define its behaviour. An object's state includes both data and the capabilities it possesses. A capability, c, is an unforgeable object reference that allows its holder to send messages to the object it references by invoking c. In an object-capability system, the only overt means for objects to interact is by sending messages to each other. Capabilities may be passed between objects only within messages. In practice, object o can pass one of its capabilities, c, directly to object p only by invoking a capability it possesses that refers to p,
including c in the invocation. This implies that capabilities can be passed only between objects that are connected, perhaps via intermediate objects. Each object may expose a number of interfaces, known as facets. A capability that refers to an object, o, also identifies a particular facet of o. This allows the object to expose different functionality to different clients by handing each of them a capability that identifies a separate facet, for example. An object may also create others. In doing so, it must supply any resources required by the newly created object, including its code and any data and capabilities it is to possess initially. Hence, a newly created object receives its first capabilities solely from its parent. When creating an object, the parent exclusively receives a capability to the child. Thus, an object’s parent has complete control over those objects the child may come to interact with in its lifetime. This is the basis upon which mandatory security policies can be enforced [9]. In object-capability operating systems like seL4, each process may be thought of as a separate object. In object-capability languages like Caja, objects are akin to those from object-oriented languages; capabilities are simply object references. CSP. A system modelled in CSP comprises a set of concurrently executing processes that execute by performing events. Processes communicate by synchronising on common events, drawn from the set Σ of all visible events. The process STOP represents deadlock and cannot perform any events. The process ?a : A → Pa is initially willing to perform all events from the set A and offers its environment the choice of which should be performed. Once a particular event, a ∈ A, has been performed, it behaves like the process Pa. CSP allows multi-part events to be defined, where a dot is used to separate each part of an event. Suppose we define the set of events {plot.x.y | x, y ∈ N}. Then the process plot?x?y → STOP offers all plot events whilst the process plot?x : {1, . . . , 5}!3 → STOP offers all events from {plot.x.3 | x ∈ {1, . . . , 5}}. {|c1 . . . . .ck|} denotes the set of events whose first k components are c1, . . . , ck. The process P □ Q can behave like either the process P or the process Q and offers its environment the initial events of both processes, giving the environment the choice as to which process it behaves like. The process P ⊓ Q can also behave like either P or Q but doesn’t allow the environment to choose which; instead, it makes this choice internally. P \ A denotes the process obtained when P is run but all occurrences of the events in A are hidden from its environment. The process P ‖A Q runs the processes P and Q in parallel, forcing them to synchronise on all events from the set A. The process S = ‖1≤i≤n (Pi, Ai) is the alphabetised parallel composition of the n processes P1, . . . , Pn on their corresponding alphabets A1, . . . , An. Each process Pi may perform events only from its alphabet Ai, and each event must be synchronised on by all processes in whose alphabet it appears. P1 A1‖A2 P2 is equivalent to ‖1≤i≤2 (Pi, Ai). A process diverges when it performs an infinite number of internal τ events. A process terminates by performing the special termination event ✓. In this paper, we restrict our attention to processes that never diverge nor terminate.
Given a CSP process P , traces(P ) denotes the set that contains all finite sequences of visible events (including all prefixes thereof) that it can perform. A stable-failure is a pair (s, X) and denotes a process performing the finite sequence of events s and then reaching a stable state in which no internal τ events can occur, at which point all events from X are unavailable, i.e. X can be refused. We write failures(P ) for the set that contains all stable-failures of the process P . We write divergences(P ) for the set of traces of P after which it can diverge. For all of the processes P that we consider in this paper, divergences(P ) = {}. For any divergence-free process P , traces(P ) = {s | (s, X) ∈ failures(P )}. CSP’s standard denotational semantic model is the failures-divergences model [14]. Here a process P is represented by the two sets: failures(P ) and divergences(P ). One CSP process P is said to failures-divergences refine another Q precisely when failures(P ) ⊆ failures(Q) ∧ divergences(P ) ⊆ divergences(Q). In this case, we write Q ⊑ P . Sequences are written between angle brackets; ⟨⟩ denotes the empty sequence. sˆt denotes the concatenation of sequences s and t. s \ H denotes the sequence obtained by removing all occurrences of events in the set H from the sequence s. s |` H denotes the sequence obtained by removing all non-H events from s.
2 Modelling Object-Capability Systems in CSP
In this section, we describe our approach to modelling object-capability systems in CSP. Note that we ignore the issue of object creation for now. This will be handled later on in Section 4. We model an object-capability system System that comprises a set Object of objects as the alphabetised parallel composition of a set of processes {behaviour (o) | o ∈ Object } on their corresponding alphabets {α(o) | o ∈ Object }. So System = ‖o∈Object (behaviour (o), α(o)).
The facets of each object o ∈ Object are denoted facets(o). We restrict our attention to those well-formed systems in which facets(o) ∩ facets(p) ≠ {} ⇒ o = p. Recall that an individual capability refers to a particular facet of a particular object. Hence, we define the set Cap = ⋃{facets(o) | o ∈ Object } that contains all entities to which capabilities may refer. The events that each process behaviour (o) can perform represent it sending and receiving messages to and from other objects in the system. We define events of the form f1 .f2 .op.arg to denote the sending of a message from the object with facet f1 to the facet f2 of the object with that facet, requesting it to perform operation op, passing the argument arg and a reply capability f1 , which can be used later to send back a response. Here f1 , f2 ∈ Cap. Arguments are either capabilities, data or the special value null, so arg ∈ Cap ∪ Data ∪ {null}, for some set Data of data. An operation op comes from the set {Call, Return}. These operations model a call/response remote procedure call sequence in an object-capability operating system or a method call/return in an object-capability language. The alphabet of each object o ∈ Object contains just those events involving o. Hence, α(o) = {|f1 .f2 | f1 , f2 ∈ Cap ∧ (f1 ∈ facets(o) ∨ f2 ∈ facets(o))|}.
We require that the process behaviour (o) representing the behaviour of each object o ∈ Object adheres to the basic rules of the object-capability model, such as not being able to use a capability it has not legitimately acquired. We codify this by defining the most general and nondeterministic process that includes all permitted behaviours (and no more) that can be exhibited by an object o. Letting facets = facets(o) denote the set that comprises o’s facets, and caps ⊆ Cap and data ⊆ Data denote the sets of capabilities and data that o initially possesses, the most general process that includes all behaviours permitted by the object-capability model that o may perform is denoted Untrusted(facets, caps, data); its behaviour is as follows.
This object can invoke only those capabilities c ∈ caps ∪ facets that it possesses. In doing so it requests an operation op, and may include only those arguments arg ∈ caps ∪ data ∪ {null} it has, along with a reply capability me ∈ facets to one of its facets. Having done so, it returns to its previous state. This object can also receive any invocation from any other, where the reply capability included in the invocation is from ∈ Cap − facets, to one of its facets me ∈ facets, requesting an arbitrary operation op, and containing an arbitrary argument arg. If such an invocation occurs, the object may acquire the reply capability from, as well as any capability or datum arg in the argument. This process may also deadlock at any time, making it maximally nondeterministic. The behaviour behaviour (o) of an object o ∈ Object , whose initial capabilities and data are caps(o) and data(o) respectively, is then valid if and only if all behaviours it contains are present in Untrusted(facets(o), caps(o), data(o)). This leads to the following definition of a valid object-capability system. Definition 1 (Object-Capability System). An object-capability system is a tuple (Object , behaviour , facets, Data), where Object , behaviour , facets and Data are as discussed above and, letting Cap = ⋃{facets(o) | o ∈ Object }, there exist functions caps : Object → P Cap and data : Object → P Data that assign minimal initial capabilities and data to each object so that, for each o ∈ Object , Untrusted(facets(o), caps(o), data(o)) ⊑ behaviour (o).
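As a rough illustration only (this sketch is ours, not the paper's CSP model, and all names are invented), the overt rules that Untrusted embodies, namely that an object may invoke only capabilities it already holds and may acquire new capabilities only from messages it receives, can be phrased in a few lines of Python:

class CapObject:
    def __init__(self, name, caps, data):
        self.name = name
        self.caps = set(caps)   # capabilities (object names) initially possessed
        self.data = set(data)   # data initially possessed

    def invoke(self, target, arg, registry):
        # May only invoke a capability it holds, and may only pass what it possesses.
        if target not in self.caps:
            raise PermissionError(self.name + " holds no capability to " + target)
        if arg is not None and arg not in self.caps and arg not in self.data:
            raise PermissionError(self.name + " does not possess " + str(arg))
        registry[target].receive(self.name, arg, registry)

    def receive(self, reply_cap, arg, registry):
        # The reply capability, and any capability or datum argument, are acquired.
        self.caps.add(reply_cap)
        if arg in registry:
            self.caps.add(arg)      # argument names an object: acquired as a capability
        elif arg is not None:
            self.data.add(arg)      # otherwise it is a datum

registry = {"alice": CapObject("alice", {"bob"}, {"secret"}),
            "bob": CapObject("bob", set(), set())}
registry["alice"].invoke("bob", "secret", registry)   # bob now holds "alice" and "secret"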
2.1 An Example Pattern
We illustrate these concepts by modelling the Data-Diode pattern [9, Figure 11.2], which is designed to allow low-sensitivity objects to send data to high-sensitivity ones whilst preventing information flowing the other way.
Fig. 1. A system in which to analyse the Data-Diode pattern. Bold circles indicate objects with arbitrary behaviour.
A data-diode is an object that has two facets, a read-facet and a write-facet (it is unclear whether the read and write interfaces should be implemented as facets of a single object or as forwarding objects of a composite object; we choose the former option at this point and will explore the latter in Section 3.1). It stores a single datum and begins life holding some initial value. Invoking its read-facet causes it to return its current contents. Invoking its write-facet with an argument causes it to replace its current contents with the argument. We model a data-diode with read-facet readme and write-facet writeme that initially contains the datum val from the set Data as the CSP process ADataDiode(readme, writeme, val ), defined as follows.
ADataDiode(readme, writeme, val ) =
  ?from : Cap − {readme, writeme}!readme!Call!null → readme!from!Return!val → ADataDiode(readme, writeme, val )
  □ ?from : Cap − {readme, writeme}!writeme!Call?newVal : Data → writeme!from!Return!null → ADataDiode(readme, writeme, newVal ).
Observe that this process passes Data items only, refusing all Cap arguments. To analyse this pattern, we instantiate it in the context of the object-capability system System depicted in Figure 1. Here, we see a data-diode, DataDiode, with read- and write-facets DDReader and DDWriter respectively. An arbitrary high-sensitivity object High has a capability to the data-diode’s read-facet, allowing it to read data written by an arbitrary low-sensitivity object Low, which has a capability to the data-diode’s write-facet. High and Low possess the data HighDatum and LowDatum respectively. Let Object = {High, DataDiode, Low}, HighData = {HighDatum}, LowData = {LowDatum}, Data = HighData ∪ LowData, facets(DataDiode) = {DDReader, DDWriter} and facets(other ) = {other } for other ≠ DataDiode. The process System is then defined as explained earlier using the behaviours: behaviour (DataDiode) = ADataDiode(DDReader, DDWriter, null), behaviour (High) = Untrusted(facets(High), facets(High) ∪ {DDReader}, HighData), behaviour (Low) = Untrusted(facets(Low), facets(Low) ∪ {DDWriter}, LowData).
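For intuition only, the one-place store that ADataDiode describes can be sketched in Python as follows (our names, not part of the model):

class DataDiodeCell:
    """A single-datum store: the write facet replaces the contents, the read facet returns them."""
    def __init__(self, initial=None):
        self._val = initial

    def write_facet(self, new_val):
        self._val = new_val     # invoking writeme replaces the current contents
        return None             # and returns null

    def read_facet(self):
        return self._val        # invoking readme returns the current contents

diode = DataDiodeCell()
diode.write_facet("LowDatum")                 # Low writes via DDWriter
assert diode.read_facet() == "LowDatum"       # High reads via DDReader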
3 Information Flow in Object-Capability Patterns
Performing some basic refinement checks in FDR, which test whether certain events cannot occur in System, reveals that Low cannot obtain any HighData
but that High can obtain LowDatum in this system. We now consider how to test whether, despite preventing this overt flow of data, DataDiode might provide a covert channel from High to Low. We will argue that the correct property to apply to System is Weakened RCFNDC for Compositions, introduced in [12]. Information flow properties have been well-studied in the context of process algebras, including CSP (e.g. [3,2,17,7]). The obvious approach would take one of these properties and apply it to the process behaviour (DataDiode) to see whether it allows information to flow from its high interface DDReader to its low interface DDWriter. However, this approach doesn’t take into account the constraints imposed by the object-capability model on the objects like High and Low that may interact with DDReader and DDWriter. This is because these constraints are not reflected in behaviour (DataDiode) but are instead imposed upon behaviour (High) and behaviour (Low). For example, observe that initially the process behaviour (DataDiode) can perform the event High.DDWriter.Call.null; however, this event cannot be performed in System because it can occur there only when both High and DataDiode are willing to perform it, and behaviour (High) cannot perform it initially because High does not initially possess a capability to DDWriter. In order to get accurate results, therefore, one needs to analyse the entire system System, using an appropriate information flow property. Recall that the processes behaviour (High) and behaviour (Low), which both instantiate the process Untrusted, are purposefully highly nondeterministic, in order to ensure that each is as general as possible. This makes the entire system System very nondeterministic. It has long been recognised that many standard information flow properties suffer from the so-called “refinement paradox” in which a property holds for a system but can be violated by one of the system’s refinements. The refinements of a system capture the ways in which nondeterminism can be resolved in it. The refinement paradox is dangerous because it allows a nondeterministic system to be deemed secure when, under some resolution of the system’s nondeterminism, it may actually be insecure [7]. A fail-safe way to avoid the refinement paradox is to apply an information flow property that is refinement-closed [7]. A property is refinement-closed when, for every process P , it holds for P only if it holds for all P ’s refinements. While we want to avoid the refinement paradox, refinement-closed properties are too strong for our purposes. This is because the refinements of a parallel composition include those in which the resolution of nondeterminism in one component can depend on activity that occurs within the system that the component cannot overtly observe. For example, System is refined by a process that has the trace ⟨High.High.Call.High, Low.Low.Call.null⟩ but also has the stable-failure (⟨⟩, {Low.Low.Call.null}). This refinement means that System fails a number of refinement-closed information flow properties, e.g. Roscoe’s Lazy Independence [14, Section 12.4] and Lowe’s Refinement-Closed Failures NonDeducibility on Compositions [7]. These two behaviours arise because of the nondeterminism in Low: initially Low may either perform Low.Low.Call.null or
may refuse it, depending on how this nondeterminism is resolved. In the trace above, where High performs the event High.High.Call.High, Low’s nondeterminism is resolved such that Low.Low.Call.null occurs; while in the stable-failure, where High doesn’t perform its event, this nondeterminism in Low is resolved the other way. The resolution of the nondeterminism in Low here thus depends on whether High has performed its event, an event in which it interacts with just itself. A system that exhibits both of these behaviours therefore allows High’s interactions with just itself to somehow influence Low. In such systems it is impossible to talk sensibly about the information flow properties of the Data-Diode pattern. We see that in general, one cannot talk sensibly about the information flow properties of object-capability patterns without assuming that the only way for one object to directly influence another is by sending it a message or receiving one from it, since it is only overt message passing that any pattern can hope to control. Thus, in any system, we assume that the resolution of nondeterminism in any object can be influenced only by the message exchanges in which it has partaken before the nondeterminism is resolved. Without specifying how the nondeterminism in any object may be resolved after it has engaged in some sequence s of message exchanges, this therefore implies that whenever it performs s, the nondeterminism should be resolved consistently [12]. Two resolutions of the nondeterminism in a process after it has performed s are inconsistent when it can perform some event e in one but refuse e in the other. Under this definition, the two different resolutions of the nondeterminism in Low above, depending on whether High has performed its event that doesn’t involve Low, are inconsistent: in each case, Low performs/refuses the event Low.Low.Call.null after performing no others. We therefore confine ourselves to the ways of resolving the nondeterminism in each object in which this kind of inconsistency does not arise. Note that these are precisely the deterministic refinements of each object, under the standard definition of determinism for CSP processes. Definition 2 (Determinism). A divergence-free process P is said to be deterministic, written det (P ), iff there do not exist s and e such that sˆ⟨e⟩ ∈ traces(P ) ∧ (s, {e}) ∈ failures(P ). With this in mind, we seek an information flow property that holds for a system System = ‖o∈Object (behaviour (o), α(o)) just when those refinements of System, in which the nondeterminism in each object is resolved to produce a deterministic process, are deemed secure. Any such deterministic componentwise refinement may be written as System′ = ‖o∈Object (bo , α(o)) where
∀ o ∈ Object • behaviour (o) ⊑ bo ∧ det (bo ). Let DCRef (System) denote the set of all deterministic componentwise refinements of System. Any such refinement System′ ∈ DCRef (System) will itself be deterministic [14]. Many information flow properties, which might otherwise disagree, agree when applied to deterministic processes. Hence, given any such property Prop, we arrive at the following definition of information flow security for object-capability systems.
Definition 3. An object-capability system captured by the CSP process System = ‖o∈Object (behaviour (o), α(o)) is secure under componentwise refinement with respect to the information flow property Prop iff ∀ System′ ∈ DCRef (System) • Prop(System′ ).
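Definition 2 lends itself to a direct check on explicitly enumerated finite processes. The following Python sketch is ours, with simplified data structures, and assumes refusal sets are listed explicitly and are subset-closed; it tests the determinism condition on the nondeterministic behaviour of Low discussed above:

def is_deterministic(traces, failures):
    # det(P): there is no trace s and event e such that s^<e> is a trace of P
    # while (s, {e}) is a stable-failure of P.
    for (s, refused) in failures:
        for e in refused:
            if s + (e,) in traces:
                return False
    return True

# Low's initial nondeterminism from the text: it may either perform
# Low.Low.Call.null or refuse it after the empty trace.
ev = "Low.Low.Call.null"
traces = {(), (ev,)}
failures = {((), frozenset({ev}))}
print(is_deterministic(traces, failures))   # False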
3.1 Testing Information Flow
The information flow property Weakened RCFNDC for Compositions [12] is equivalent to Definition 3 when Prop is Lowe’s Refinement-Closed Failures NonDeducibility on Compositions [7] (RCFNDC). When applied to deterministic processes, RCFNDC is equivalent to a number of standard information flow properties [16], including at least all those that are no stronger than Roscoe’s Lazy Independence [14, Section 12.4] and no weaker than Ryan’s traces formulation of noninterference [18, Equation 1]. Therefore, we adopt Weakened RCFNDC and its associated automatic refinement check [12] to test for information flow here. Like similar information flow properties, given two sets H and L that partition the alphabet of a system, Weakened RCFNDC tests whether the occurrence of events from H can influence the occurrence of events from L. In [12], it is shown that any divergence-free alphabetised parallel composition S = ‖1≤i≤n (Pi , Ai )
satisfies Weakened RCFNDC, written WRCFNDC (S), iff there do not exist s and l such that
s |` H ≠ ⟨⟩ ∧ l ∈ L ∧
( ( sˆ⟨l⟩ ∈ traces(S) ∧ s \ H ∈ traces(S) ∧ ∃ i • l ∈ Ai ∧ s |` Ai = s \ H |` Ai ∧ (s \ H |` Ai , {l}) ∈ failures(Pi ) )
  ∨ ( (s \ H)ˆ⟨l⟩ ∈ traces(S) ∧ s ∈ traces(S) ∧ ∃ i • l ∈ Ai ∧ s |` Ai = s \ H |` Ai ∧ (s |` Ai , {l}) ∈ failures(Pi ) ) ).   (1)
Let H = {|h.DDReader, DDReader.h, h.h′ | h, h′ ∈ facets(High)|} denote the set of events that represent High interacting with DDReader and itself. Similarly let L = {|l.DDWriter.Call.arg, DDWriter.l.Return.null, l.l′ | l, l′ ∈ facets(Low)|}. Then a refinement check in FDR reveals that System from Section 2.1 can perform no events outside of H ∪ L. This implies, for example, that neither High nor Low can obtain a capability to the other. Therefore, H and L partition the effective alphabet of System. Applying the refinement check for Weakened RCFNDC to System with these definitions of H and L, using FDR, reveals that Weakened RCFNDC doesn’t hold. Interpreting the counter-example returned from FDR, we see that System can perform the trace ⟨Low.DDWriter.Call.LowDatum⟩ but also has the failure (⟨High.DDReader.Call.null⟩, {Low.DDWriter.Call.LowDatum}). This indicates that initially Low can invoke DDWriter but that if High invokes DDReader, it can cause Low’s invocation to be refused. This occurs because DataDiode cannot service requests from High and Low at the same time. This constitutes a clear covert channel, since High can signal to Low by invoking DDReader which alters whether Low’s invocation is accepted. Low may be unable to observe this covert channel in some object-capability systems, e.g. those in which a sender of a message is undetectably blocked until
the receiver is ready to receive it. For this kind of system, one might wish to replace Prop with another property, such as Focardi and Gorrieri’s Traces NDC [3], that detects only when high events can cause low events to occur, rather than also detecting when they can prevent them from occurring as happens in the counter-example above. Modifying Weakened RCFNDC to do so simply involves removing the second disjunct from Equation 1. However, we choose to make the conservative assumption that this counter-example represents a valid fault. Correcting the fault here involves modifying the data-diode implementation so that its interfaces for writing and reading, DDWriter and DDReader, can be used simultaneously. We do so by promoting these interfaces from being facets of a single process to existing as individual processes in their own right. These processes simply act now as proxies that forward invocations to the facets of an underlying ADataDiode process, as depicted in Figure 2. The behaviour of a proxy me that forwards invocations it receives using the capability target is given by the process AProxy(me, target) defined as follows. AProxy(me, target ) =?from : Cap − {me}!me!Call?arg : Data ∪ {null} → me!target!Call!arg → target !me!Return?res : Data ∪ {null} → me!from!Return!res → AProxy(me, target ). The data-diode is now a composite of three entities, DDReader, DDWriter and DataDiode, and as such is referred to as DDComposite. We model the system depicted in Figure 2 as an object-capability system comprising the objects from Object = {High, DDComposite, Low}, where facets(DDComposite) = {DDReader, DDWriter, DDR, DDW} and, letting R = {|DDReader.x, x.DDReader | x ∈ facets(DDComposite) − {DDReader}|}, W = {|DDWriter.x, x.DDWriter | x ∈ facets(DDComposite) − {DDWriter}|}, DD = ADataDiode(DDR, DDW, null) and the other definitions be as before, behaviour (DDComposite) =
(AProxy(DDReader, DDR) ‖R DD) ‖W AProxy(DDWriter, DDW) \ (R ∪ W ).
DDComposite is formed by taking the two proxies, DDReader and DDWriter, and composing them in parallel with DataDiode, whose read- and write-interfaces are now DDR and DDW respectively. Notice that we then hide the internal communications within DDComposite since these are not visible to its outside environment and it is unclear how to divide these events between the sets H and L. FDR can be used to check that this system, System, satisfies Definition 1.
Fig. 2. An improved Data-Diode implementation
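The structure of Figure 2, two independent forwarding proxies in front of a shared cell, can also be pictured with a small Python sketch; this shows only the decomposition, not the CSP concurrency, and all names are ours:

class Proxy:
    def __init__(self, forward):
        self._forward = forward          # the operation performed on the underlying cell

    def call(self, arg=None):
        return self._forward(arg)        # forward the invocation and return the result

class ImprovedDataDiode:
    def __init__(self, initial=None):
        cell = {"val": initial}          # stands in for the underlying ADataDiode(DDR, DDW, ...) cell
        self.dd_reader = Proxy(lambda _arg: cell["val"])
        self.dd_writer = Proxy(lambda new: cell.update(val=new))

dd = ImprovedDataDiode()
dd.dd_writer.call("LowDatum")            # Low invokes the DDWriter proxy
print(dd.dd_reader.call())               # High invokes the DDReader proxy: LowDatum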
Performing the appropriate refinement checks in FDR reveals that High can acquire LowDatum but Low cannot acquire any HighData, and that System can perform no events outside of H ∪ L, as before. FDR reveals that Weakened RCFNDC holds for System. Hence, we are unable to detect any covert channels in this model of the improved Data-Diode implementation.
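The check that FDR performs cannot be reproduced in a few lines, but a much simpler purge-based traces check in a similar spirit (our simplification, not Weakened RCFNDC itself) illustrates how the contention channel of the original Data-Diode shows up on explicitly enumerated, deterministic behaviours:

def purge_secure(traces, H, L):
    # For a deterministic system given as a prefix-closed set of traces: deleting the
    # H events from a trace must not change which L events are possible next.
    def purge(s):
        return tuple(e for e in s if e not in H)
    for s in traces:
        if purge(s) not in traces:
            return False
        after_s = {l for l in L if s + (l,) in traces}
        after_purged = {l for l in L if purge(s) + (l,) in traces}
        if after_s != after_purged:
            return False
    return True

h = "High.DDReader.Call.null"
l = "Low.DDWriter.Call.LowDatum"
original = {(), (h,), (l,)}                   # single-facet diode: no trace (h, l), it is busy serving High
improved = {(), (h,), (l,), (h, l), (l, h)}
print(purge_secure(original, {h}, {l}))       # False: High's call disables Low's
print(purge_secure(improved, {h}, {l}))       # True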
4 Generalising the Results
We have verified this improved Data-Diode implementation in the context of only a handful of other objects and in the absence of object creation. In this section, we show how to generalise our analysis to all systems that have the form of Figure 3, and have arbitrary HighData and LowData. Here, the objects within each cloud can be interconnected in any way whatsoever; however, the only capability to an object outside of the high object cloud that each high object may possess is DDReader. The same is true for the low objects and DDWriter. This figure captures all systems containing an arbitrary number of high- and low-sensitivity objects and, thus, all those in which each object may create arbitrary numbers of others that share its level of sensitivity. Roughly, the approach we take is to show that the improved system analysed in the previous section is a safe abstraction of all systems captured by Figure 3, such that if the safe abstraction is deemed secure then so will be all of the systems it abstracts. For one system System′ = ‖o∈Object′ (behaviour′ (o), α′ (o)) to be a safe abstraction of another System = ‖o∈Object (behaviour (o), α(o)), we require that if System′ is deemed secure, then so must System be. Recall that, by Definition 3, System is secure iff Prop(SystemD ) holds for all SystemD ∈ DCRef (System) for some information flow property Prop. We therefore insist that, in order for System′ to be a safe abstraction of System, each SystemD is also present in DCRef (System′ ). Definition 4 (Safe Abstraction). Given any System and System′ as above, System′ is a safe abstraction of System iff DCRef (System) ⊆ DCRef (System′ ). We now show that each system System captured by Figure 3 can be safely abstracted by a system System′ of the form of Figure 2. We form System′ by taking each cloud of objects in System and aggregating all of the objects in the
Fig. 3. Generalising the results
cloud into a single object in System′ . In order to be a proper aggregation, each object from System′ must have all facets, capabilities, data and behaviours of all the objects from System that it aggregates. We formally capture that System′ is an aggregation of System via a surjection Abs : Object → Object′ that maps each object of System to the object that aggregates it in System′ . Definition 5 (Aggregation). Let (Object , behaviour , facets, Data) and (Object′ , behaviour′ , facets′ , Data) be two object-capability systems with identical sets of data, captured by System = ‖o∈Object (behaviour (o), α(o)) and System′ = ‖o′∈Object′ (behaviour′ (o′ ), α′ (o′ )) respectively. Then the second is an aggregation of the first when there exists some surjection Abs : Object → Object′ such that for all o′ ∈ Object′ , facets′ (o′ ) = ⋃{facets(o) | o ∈ Abs−1 (o′ )} and
∀ s ∈ traces(System) • ∀ X ∈ P Σ •
  (s |` α′ (o′ ), X) ∈ failures(‖o∈Abs−1 (o′ ) (behaviour (o), α(o))) ⇒ (s |` α′ (o′ ), X) ∈ failures(behaviour′ (o′ )),
where Abs−1 (o′ ) = {o | o ∈ Object ∧ Abs(o) = o′ }. The proof of the following theorem requires some technical results beyond the scope of this paper; given limitations on space, it can be found in [11]. Theorem 1. Let System and System′ capture two object-capability systems as stated in Definition 5. Then if System′ is an aggregation of System, it is also a safe abstraction of System. We claim that any finite collection K ⊆ Object of objects, ‖o∈K (behaviour (o), α(o)), can be aggregated by a single object with behaviour Untrusted(⋃o∈K facets(o), ⋃o∈K caps(o), ⋃o∈K data(o)) that has all of their capabilities, data and facets. Briefly, by Definition 1, Untrusted(facets(o), caps(o), data(o)) ⊑ behaviour (o) for each o ∈ K, for some sets caps(o) and data(o) of capabilities and data that it possesses initially. Further, Untrusted(facets(o) ∪ facets(o′ ), caps(o) ∪ caps(o′ ), data(o) ∪ data(o′ )) ⊑ Untrusted(facets(o), caps(o), data(o)) α(o)‖α(o′) Untrusted(facets(o′ ), caps(o′ ), data(o′ )). The claim then follows by induction on the size of K. So consider any system that has the form of Figure 3 and let T denote the facets of the high objects, U the facets of the low objects and V = HighData ∪ LowData, i.e. V = Data. Then this system can be safely abstracted by a system SystemT,U,V of the form of Figure 2 in which facets(High) = T , facets(Low) = U , HighData = V and LowData = V , so that Data = V . Notice that we allow High and Low to both possess all data in the safe abstraction in order to obtain maximum generality. If we can show that SystemT,U,V is secure for all non-empty choices of T , U and V , by Theorem 1, we can conclude that the improved Data-Diode implementation is secure in all systems captured by Figure 3 with arbitrary HighData and LowData. The theory of data-independence [6] can be applied to show that a property Prop holds of a process PT , parameterised by some set T , for all non-empty
choices of T , if Prop(PT ) can be shown for all non-empty T of size N or less, for some N . N is called the data-independence threshold for T for Prop(PT ). The theory requires that PT be data-independent in T , meaning roughly that PT handles members of the type T uniformly, not distinguishing one particular value of T from another. We apply data-independence theory to show that thresholds of size 1, 2 and 2 for T , U and V respectively are sufficient to demonstrate the security of SystemT,U,V for all non-empty choices of each set. We will use the following standard result. Let PT be a process that is data-independent in some set T and satisfies NoEqT for T , meaning that it never needs to test two values of T for equality. Let φ be a surjection whose domain is T , where we write φ(T ) for {φ(t) | t ∈ T } and φ−1 (X) for {y | y ∈ T ∧ φ(y) ∈ X}. Then [6, Theorem 4.2.2], lifting φ to events and traces, {(φ(s), X) | (s, φ−1 (X)) ∈ failures(PT )} ⊆ failures(Pφ(T ) ).
(2)
Theorem 2. Let ST = ‖1≤i≤n (PT,i , AT,i ) be an alphabetised parallel composition, whose components and alphabets are polymorphically parameterised by some set T , such that ST and each PT,i are data-independent in T and satisfy NoEqT for T . Also let HT and LT be two sets polymorphically parameterised by T that partition the alphabet of ST for all non-empty T . Let W denote the maximum number of distinct elements of T that appear in any single event from LT . Then W + 1 is a sufficient data-independence threshold for T for WRCFNDC (ST ). Proof. Assume the conditions of the theorem. Suppose for some T with size greater than W , ST fails Weakened RCFNDC for HT and LT . Then let T̃ = {t̃0 , . . . , t̃W } for fresh elements t̃0 , . . . , t̃W . We show that ST̃ fails Weakened RCFNDC for HT̃ and LT̃ . Let φ : T → T̃ be a surjection; we fix the choice of φ below. Lift φ to events by applying φ to all components of type T . Then φ maps an event in the alphabet of ST to an event in the alphabet of ST̃ . Also, lifting φ to sets of events, φ(AT,i ) = AT̃,i for 1 ≤ i ≤ n, φ(HT ) = HT̃ and φ(LT ) = LT̃ . Observe that Sφ(T ) = ST̃ . So, by Equation 2, the presence of certain behaviours in ST implies the presence of related behaviours in ST̃ . Recall the characterisation of Weakened RCFNDC from Equation 1. Suppose ST fails the first disjunct of Equation 1 for HT and LT . We show that ST̃ fails this disjunct for HT̃ and LT̃ . The second disjunct is handled similarly. Then there exists some s, l and i ∈ {1, . . . , n} such that s |` HT ≠ ⟨⟩ ∧ l ∈ LT ∧ sˆ⟨l⟩ ∈ traces(ST ) ∧ s \ HT ∈ traces(ST ) ∧ l ∈ AT,i ∧ s |` AT,i = s \ HT |` AT,i ∧ (s \ HT |` AT,i , {l}) ∈ failures(PT,i ). Let t0 , . . . , tk−1 be the distinct members of T that appear in l. Then k ≤ W . Choose φ(ti ) = t̃i for 0 ≤ i ≤ k − 1 and let φ(t) = t̃k for all other t ∈ T − {t0 , . . . , tk−1 }. Let s̃ = φ(s) and l̃ = φ(l). Then s̃ |` HT̃ ≠ ⟨⟩ ∧ l̃ ∈ LT̃ ∧ l̃ ∈ AT̃,i ∧ s̃ |` AT̃,i = s̃ \ HT̃ |` AT̃,i . Applying Equation 2 to ST , we have s̃ˆ⟨l̃⟩ ∈ traces(ST̃ ) ∧ s̃ \ HT̃ ∈ traces(ST̃ ). Further, {l} = φ−1 ({l̃}) by construction. So, applying Equation 2 to PT,i , we obtain (s̃ \ HT̃ |` AT̃,i , {l̃}) ∈ failures(PT̃,i ).
Set HT,U,V = {|t.DDReader, DDReader.t, t.t′ | t, t′ ∈ T |} and LT,U,V = {|u.DDWriter.Call.d, DDWriter.u.Return.null, u.u′ | u, u′ ∈ U, d ∈ Data ∪ {null}|}. Then SystemT,U,V and all of its components are data-independent in T , U and V and satisfy NoEqT for each. Applying Equation 2, it is easily shown that HT,U,V and LT,U,V partition the alphabet of SystemT,U,V for all non-empty choices of T , U and V , if they do so when each of these sets has size 1. FDR confirms the latter to be true. To verify Weakened RCFNDC, Theorem 2 suggests thresholds for T , U and V of 1, 4 and 2 respectively. This threshold for U arises from events in LT,U,V of the form u.u′.op.u′′ for u, u′, u′′ ∈ U . In fact, in the proof of Theorem 2, l is necessarily an event in the alphabet of a process that can perform both HT and LT events. Hence, we can strengthen this theorem to take W to be the maximum number of distinct values of type T in all such events in LT . For SystemT,U,V , this means that all events from {|u.u′ | u, u′ ∈ U |} can be excluded when calculating the threshold for U , reducing it to 2. The most expensive of the 4 tests implied by these thresholds examines about 6 million state-pairs, taking around 4 minutes to compile and complete on a desktop PC; the others are far cheaper. All tests pass, generalising our results.
5 Conclusion and Related Work
We have shown how to apply CSP and FDR to automatically detect covert channels in security-enforcing object-capability patterns without forcing the programmer to specify the mechanisms by which information may propagate covertly. Our approach couples the objects that implement a pattern with arbitrary, Untrusted, high- and low-sensitivity objects that exhibit all behaviours permitted by the object-capability model. This has the added advantage that we can compare how a pattern functions in different kinds of object-capability system, such as single-threaded versus concurrent systems, by simply refining the definition of the Untrusted process. Investigating how the information flow properties of patterns are affected by changing the context in which they are deployed is an obvious avenue for future work. The assumption that objects affect each other only by passing messages means that our analysis cannot be applied to timed systems in which objects have access to a global clock, for instance. Extending this work to cover such systems may allow us to detect possible timing channels that may exist in them. Spiessens [20] presents the only prior work of which we are aware that examines the security properties of object-capability patterns. The ideas of safe abstraction and aggregation defined in Section 4 were heavily inspired by similar ideas in [20]. Spiessens’ formalism has the advantage of not requiring the use of data-independence arguments to generalise analyses of small systems to large systems. On the other hand, our approach, unlike Spiessens’, can detect covert channels in a pattern without forcing the programmer to specify the means by which information can propagate covertly. Instead, these means are captured by information flow properties that can be applied to any pattern being analysed.
The notion of aggregation is also similar to (the inverse of) van der Meyden’s architectural refinement [21]. Finally, data-independence theory has been applied before to generalise analyses of small systems to larger systems, including to the analysis of cryptographic protocols [15] and intrusion detection systems [13].
References
1. Elkaduwe, D., Klein, G., Elphinstone, K.: Verified protection model of the seL4 microkernel. In: Shankar, N., Woodcock, J. (eds.) VSTTE 2008. LNCS, vol. 5295, pp. 99–114. Springer, Heidelberg (2008)
2. Focardi, R.: Comparing two information flow security properties. In: Proceedings of CSFW 1996, pp. 116–122. IEEE Computer Society, Los Alamitos (1996)
3. Focardi, R., Gorrieri, R.: A classification of security properties for process algebras. Journal of Computer Security 3(1), 5–33 (1995)
4. Formal Systems (Europe), Limited: FDR2 User Manual (2005)
5. Grove, D., Murray, T., Owen, C., North, C., Jones, J., Beaumont, M.R., Hopkins, B.D.: An overview of the Annex system. In: Proceedings of ACSAC 2007 (2007)
6. Lazić, R.S.: A Semantic Study of Data Independence with Applications to Model Checking. D.Phil. thesis, Oxford University Computing Laboratory (1999)
7. Lowe, G.: On information flow and refinement-closure. In: Proceedings of the Workshop on Issues in the Theory of Security, WITS 2007 (2007)
8. Mettler, A.M., Wagner, D.: The Joe-E language specification, version 1.0. Technical Report EECS-2008-91, University of California, Berkeley (August 2008)
9. Miller, M.S.: Robust Composition: Towards a Unified Approach to Access Control and Concurrency Control. PhD thesis, Johns Hopkins University (2006)
10. Miller, M.S., Samuel, M., Laurie, B., Awad, I., Stay, M.: Caja: Safe active content in sanitized JavaScript, draft (2008)
11. Murray, T.: Analysing the Security Properties of Object-Capability Patterns. D.Phil. thesis, University of Oxford (2010) (Forthcoming)
12. Murray, T., Lowe, G.: On refinement-closed security properties and nondeterministic compositions. In: Proceedings of AVoCS 2008, pp. 49–68 (2009)
13. Rohrmair, G.T., Lowe, G.: Using data-independence in the analysis of intrusion detection systems. Theoretical Computer Science 340(1), 82–101 (2005)
14. Roscoe, A.W.: The Theory and Practice of Concurrency. Prentice-Hall, Englewood Cliffs (1997)
15. Roscoe, A.W., Broadfoot, P.J.: Proving security protocols with model checkers by data independence techniques. J. Comput. Secur. 7(2-3), 147–190 (1999)
16. Roscoe, A.W., Goldsmith, M.H.: What is intransitive noninterference? In: Proceedings of CSFW 1999, p. 228. IEEE Computer Society, Los Alamitos (1999)
17. Ryan, P., Schneider, S.: Process algebra and non-interference. Journal of Computer Security 9(1/2), 75–103 (2001)
18. Ryan, P.Y.A.: A CSP formulation of non-interference and unwinding. IEEE Cipher, 19–30 (Winter 1991)
19. Saltzer, J.H., Schroeder, M.D.: The protection of information in computer systems. Proceedings of the IEEE 63(9), 1208–1308 (1975)
20. Spiessens, A.: Patterns of Safe Collaboration. PhD thesis, Université catholique de Louvain, Louvain-la-Neuve, Belgium (February 2007)
21. van der Meyden, R.: Architectural refinement and notions of intransitive noninterference. In: Massacci, F., Redwine Jr., S.T., Zannone, N. (eds.) ESSoS 2009. LNCS, vol. 5429, pp. 60–74. Springer, Heidelberg (2009)
Applied Quantitative Information Flow and Statistical Databases
Jonathan Heusser and Pasquale Malacaria
School of Electronic Engineering and Computer Science, Queen Mary University of London
{jonathanh,pm}@dcs.qmul.ac.uk
Abstract. We first describe an algebraic structure which serves as a solid basis for quantitative reasoning about information flows. We demonstrate how programs, in the form of partitions of states, fit into that theoretical framework. The paper presents a new method and implementation to automatically calculate such partitions, and compares it to existing approaches. As a novel application, we describe a way to transform database queries into a suitable program form which can then be statically analysed to measure their leakage and to spot database inference threats.
1 Introduction
Quantitative Information Flow (QIF) [5,6] provides a general setting for measuring information leaks in programs and protocols. In QIF programs are interpreted as equivalence relations on input states: two inputs are equivalent if they generate the same observations, e.g. if the program run on those two inputs terminates with the same output. These equivalence relations form a complete lattice, the Lattice of Information [13], that satisfies nice algebraic properties. Also, once input states are equipped with a probability distribution the equivalence relations correspond to random variables. Information theoretical notions like entropy can be used on these relations to quantify information leaks. Applied Quantitative Information Flow, i.e. the automatic interpretation of programs in the lattice of information and the related information theoretical computations, is steadily coming of age. Building on impressive progress in the fields of model checking, SAT solvers, theorem provers and program analysis, it is now possible to test quantitative information flow ideas on real code [2,17]. Of course there are still severe limitations to this kind of automatic analysis and it is well possible that most complex code will be out of reach for the foreseeable future. By comparison, a quantitative analysis is intrinsically more complex than a qualitative one, and hence we should accordingly moderate our expectations of quantitative tools achieving the same results as qualitative ones anytime soon. There are however important families of programs where automatic analysis is within reach, for example side channel analysis for cryptographic protocols
[12]. Also the integration of quantitative analysis with heuristics and software engineering tools has been successfully demonstrated [17]. In this paper we will introduce an original technique to automatically compute QIF. This technique uses state of the art technology to compute the lattice interpretation of a program. While stressing the general purpose nature of our applied QIF, we investigate in Section 5 the possible application of this tool to a particular field where we believe these techniques have great potential: statistical databases.
1.1 Statistical Databases
Database queries are a major source of information. While data mining tries to maximise the amount of information that can be extracted from a database, security experts work in the opposite direction, i.e. to minimise the confidential information that can be extracted. This paper addresses security issues of statistical databases, i.e. databases where users are allowed to query statistics about confidential data; a typical example would be the average salary in a company: the security threat is that information about an individual salary may be leaked by one or more queries. As a trivial example, knowing that all employees are paid the same amount in conjunction with knowing the average salary will reveal the individual salary of all employees. Ideally, a statistical database security officer should prevent or detect attacks that gain individual information. However, this has been shown to be unachievable, for example because of “trackers” [9]. In Section 5 we sketch how applied QIF can be used to measure the amount of confidential information leaked by a set of queries and hence to improve security risk assessment for statistical databases. In relating QIF to statistical databases, the idea is to interpret a statistical query as a simple program in a programming language and use the interpretation of programs in the Lattice of Information to apply known tools and techniques to measure the amount of confidential information leaked by a set of queries.
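The idea can already be seen on the trivial example above. The following Python sketch (ours, with made-up salaries) interprets an average-salary query as a program over the secret database and looks at the partition of secret states it induces:

from itertools import product

databases = list(product([10, 20], repeat=2))    # all possible secret states: two salaries

def average(db):
    return sum(db) / len(db)

def partition(query, states):
    blocks = {}
    for s in states:
        blocks.setdefault(query(s), []).append(s)
    return list(blocks.values())

print(partition(average, databases))
# [[(10, 10)], [(10, 20), (20, 10)], [(20, 20)]]: the two extreme averages already pin
# down every individual salary, as in the "all salaries equal" observation above.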
1.2 Contributions
The tool presented in Section 4 is original; its relation to DisQuant is discussed in the same section. Proposition 2 is also original and allows for an automatic and elegant interpretation of database queries in LoI. Also the ideas in Section 5 about the use of applied QIF in the statistical databases context are original.
2 Lattice of Information
It has been shown by Landauer and Redmond [13] that observations about the behaviour of a deterministic system can be elegantly modelled in terms of a lattice. Let Σ be the set of all states in a system. An observation can be seen as an equivalence relation on states defined by σ ∼ σ′ if and only if σ and σ′ are
indistinguishable given that observation. The join and meet lattice operations stand for the intersection of relations and the transitive closure of the union of relations, respectively. Thus, higher elements in the lattice can distinguish more states while lower elements in the lattice can distinguish fewer. It is easy to show that this is a complete lattice of equivalence relations. The bottom of this lattice is the least informative observation (any two states are equivalent, i.e. all states are equivalent) and the top of the lattice is the most informative observation (each state is only equivalent to itself). Aptly they named this lattice the Lattice of Information (LoI). The ordering of LoI is defined as ≈ ⊑ ∼ ↔ ∀σ1 , σ2 (σ1 ∼ σ2 ⇒ σ1 ≈ σ2 )  (1) where σ1 , σ2 ∈ Σ. An equivalent presentation of the same lattice is in terms of partitions. In fact any equivalence relation can be seen as a partition whose blocks are its equivalence classes. Seen as a lattice of partitions we have σ ⊔ σ′ = {a ∩ b | a ∈ σ, b ∈ σ′ }. In this paper we will assume this lattice to be finite; this is motivated by considering the information storable in program variables: such information is ≤ 2^k where k is the number of bits of the secret variable. We give a typical example of how these equivalence relations can be used in an information flow setting. Let us assume the set of states Σ consists of tuples (l, h) where l is a low variable and h is a confidential variable. One possible observer can be described by the equivalence relation (l1 , h1 ) ≈ (l2 , h2 ) ↔ l1 = l2 . That is, the observer can distinguish two states only if they differ on the low variable part. Clearly, a more powerful attacker is the one who can distinguish any two states from one another, or (l1 , h1 ) ∼ (l2 , h2 ) ↔ l1 = l2 ∧ h1 = h2 . The ∼-observer gains more information than the ≈-observer by comparing states, therefore ≈ ⊑ ∼. A random variable on a finite space can be seen as a map X : D → R(X), where D is a finite set with a probability distribution and R(X), a measurable set, is the range of X. For each element d ∈ D, its probability is denoted p(d). For x ∈ R(X), p(x) means the probability that X takes on the value x, i.e. p(x) =def Σd∈X−1(x) p(d). From that perspective, X partitions the space D into sets which are indistinguishable to an observer who sees the value that X takes on. This can be seen as the equivalence relation ker(X): d ker(X) d′ iff X(d) = X(d′ )
(2)
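The kernel construction and the LoI join are easy to make concrete on a finite state space. A small Python sketch with our helper names:

def kernel(states, X):
    # Group states by the value X assigns to them: the blocks of ker(X).
    blocks = {}
    for s in states:
        blocks.setdefault(X(s), set()).add(s)
    return {frozenset(b) for b in blocks.values()}

def join(p1, p2):
    # The LoI join of two partitions: blockwise intersection of their blocks.
    return {frozenset(a & b) for a in p1 for b in p2 if a & b}

states = [(l, h) for l in range(2) for h in range(2)]     # states are (l, h) pairs
low_view = kernel(states, lambda s: s[0])                  # the ≈ observer: sees only l
full_view = kernel(states, lambda s: s)                    # the ∼ observer: distinguishes everything
assert join(low_view, full_view) == full_view              # ≈ ⊑ ∼: joining with ∼ adds nothing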
The Shannon entropy of a random variable X is denoted H(X), defined as follows: H(X) = −Σx p(x) log p(x)
As seen from the definition of p(x), the entropy of X only depends on its set of inverse images X−1 (x). Thus, if two random variables X and Y have the same inverse images they will necessarily have the same entropy. More formally, we write X ≡ Y whenever the following holds: X ≡ Y iff {X−1 (x) : x ∈ R(X)} = {Y −1 (y) : y ∈ R(Y )}, and thus if X ≡ Y then H(X) = H(Y ). This shows that each element of the lattice LoI can be seen as a random variable. We can hence identify LoI with a lattice of random variables ordered by (1). Notice that the join ⊔ of two random variables is the classic notion of joint random variable, i.e. X ⊔ Y = (X, Y ). In general, LoI is not distributive.
2.1 Measures
We can attempt to quantify the amount of information provided by a point in LoI by using lattice theoretic notions, such as semivaluations. A join semivaluation on LoI is a real valued map ν : LoI → R that satisfies the following properties: ν(X ⊔ Y ) + ν(X ⊓ Y ) ≤ ν(X) + ν(Y )  (3) and X ⊑ Y implies ν(X) ≤ ν(Y )  (4)
for every element X and Y in a lattice [16]. The property (4) is order-preserving: a higher element in the lattice has a larger valuation than elements below itself. The first property (3) is a weakened inclusion-exclusion principle. Proposition 1. The map ν(X ⊔ Y ) = H(X, Y )  (5)
is a join semivaluation. Equation 5 is an important result, first described by Nakamura [16]. He proved that the only probability-based join semivaluation on the lattice of information is Shannon’s entropy. It is easy to show that a valuation itself is not definable on this lattice, thus Shannon’s entropy is the best approximation to a probability-based valuation on this lattice. Other measures can be used, which are however less mathematically appealing. We will also consider Min-Entropy, which seems like a good complementing measure. While Shannon entropy intuitively results in an “averaging” measure over a probability distribution, the Min-Entropy H∞ takes on a “worst-case” view: only the maximal value p(x) of a random variable X is considered: H∞ (X) = − log maxx∈X p(x)
where it is always the case that H∞ (X) ≤ H(X). We write M(X) to indicate Shannon’s entropy or a more general Rényi entropy.
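Both measures are straightforward to compute once a partition of the secret space and an input distribution are fixed. A small sketch, assuming a uniform distribution over the secrets (our helper names):

from math import log2

def shannon(partition, total):
    # H of the partition: blocks weighted by their probability under a uniform input.
    return -sum((len(b) / total) * log2(len(b) / total) for b in partition)

def min_entropy(partition, total):
    # H_inf: determined by the most likely block alone.
    return -log2(max(len(b) / total for b in partition))

partition = [{0}, {1, 2, 3}]          # four equally likely secrets, two observable outcomes
print(shannon(partition, 4))          # about 0.811 bits
print(min_entropy(partition, 4))      # about 0.415 bits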
3 Measuring Program Leakage
In previous works, we developed theories to quantify the information leakage of programs [5,14]. The main idea for deterministic programs is to interpret observations on a program as equivalence relations on states [14,15] and therefore as random variables in the lattice of information. The random variable associated to a program P is the equivalence relation on any states σ, σ′ from the universe of states Σ defined by σ ≃ σ′ ⇐⇒ P (σ) =obs P (σ′ )
(6)
in this paper =obs represents the relation “to have the same observable output”. We denote the interpretation of a program P in LoI as defined by the equivalence relation (6) by Π(P ). Consider the example if h=0 then access else deny where the variable h ranges over {0, . . . , 3}. The output random variable O associated to the program represents the information available to an observer. The equivalence relation (i.e. partition) associated to the above program is hence O = { {0}, {1, 2, 3} }, where the block {0} corresponds to the output access and the block {1, 2, 3} to the output deny.
O effectively partitions the domain of the variable h, where each disjoint subset represents an output. The partition reflects the idea of what a passive attacker can learn of secret inputs by backwards analysis of the program, from the outputs to the inputs. The quantitative evaluation of the partition O measures such knowledge gains of an attacker, solely depending on the partition of states and the probability distribution of the input. The next proposition says that we can represent algebraic operations in LoI using programs: Proposition 2. Given programs P1 , P2 there exists a program P12 such that Π(P12 ) = Π(P1 ) ⊔ Π(P2 ). Given programs P1 , P2 , we define P12 = P1′ ; P2′ where the primed programs P1′ , P2′ are P1 , P2 with variables renamed so as to have disjoint variable sets. If the two programs are syntactically equivalent, then this results in self-composition [3]. For example, consider the two programs P1 ≡ if (h == 0) x = 0 else x = 1,
P2 ≡ if (h == 1) x = 0 else x = 1
with their partitions Π(P1 ) = {{0}, {h ≠ 0}} and Π(P2 ) = {{1}, {h ≠ 1}}. The program P12 is the concatenation of the previous programs with variable renaming: P12 ≡ h′ = h; if (h′ == 0) x′ = 0 else x′ = 1; h′′ = h; if (h′′ == 1) x′′ = 0 else x′′ = 1
The corresponding lattice element is the join, i.e. intersection of blocks, of the individual programs P1 , P2 : Π(P12 ) = {{0}, {1}, {h ≠ 0, 1}} = {{0}, {h ≠ 0}} ⊔ {{1}, {h ≠ 1}}. The above result can be extended to expressions of the language: we can associate to an expression e the program consisting of the assignment x = e and use Proposition 2 to compute the lub in LoI of a set of expressions. This is the basic technique we will later use for computing leakage of database queries. Notice that Π(P ) is a general representation that can be used as the basis for several quantitative measures like Shannon’s entropy, Rényi entropies or guessability measures, as described in Section 2. The overarching idea for quantifying the leakage of a partition Π(P ) is to compute the difference between uncertainty about the secret before and after observing the output of the program. For a Shannon-based measure, leakage is defined in [5,14] as I(Π(P ); h|l), i.e. the conditional mutual information between the program and the secret given the low input: I(Π(P ); h|l) = H(Π(P )|l) − H(Π(P )|l, h) =A H(Π(P )|l) − 0 = H(Π(P )|l) =B H(Π(P )) where equality A holds because the program is deterministic and B holds when the program only depends on the high inputs, i.e. all low variables are initialised in the code of the program. Thus, for such programs, the Shannon-based leakage measure is reduced to simply the Shannon entropy of the partition Π(P ). We can relate the order in LoI and the amount of leakage by the following result. Proposition 3. Let P1 , P2 be two programs depending only on the high inputs. Then Π(P1 ) ⊑ Π(P2 ) iff for all probability distributions on states in LoI, H(Π(P1 )) ≤ H(Π(P2 )).
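On examples of this size, Π(P ), the join and the resulting Shannon leakage can be computed by brute force. The sketch below is ours and assumes, purely for illustration, that h is uniformly distributed over {0, . . . , 3}:

from math import log2

def Pi(program, secrets):
    # Partition of the secret space induced by the program's output.
    blocks = {}
    for h in secrets:
        blocks.setdefault(program(h), set()).add(h)
    return {frozenset(b) for b in blocks.values()}

def join(p1, p2):
    return {frozenset(a & b) for a in p1 for b in p2 if a & b}

def H(partition, total):
    return -sum((len(b) / total) * log2(len(b) / total) for b in partition)

secrets = range(4)
P1 = lambda h: 0 if h == 0 else 1
P2 = lambda h: 0 if h == 1 else 1
P12 = lambda h: (P1(h), P2(h))                       # joint output of the self-composition

assert Pi(P12, secrets) == join(Pi(P1, secrets), Pi(P2, secrets))   # Proposition 2 on this example
print(H(Pi(P1, secrets), 4), H(Pi(P12, secrets), 4))                # about 0.811 and 1.5 bits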
4 Automatically Calculating Π(P)
The computationally intensive task in quantifying information leaks is calculating the partition of input states Π(P ). Applying a measure M(Π(P )) is in comparison cheap and easy to do (if the probability distribution is known). We developed a tool, AQuA (Automated Quantitative Analysis) which calculates Π(P ) given a program P in the programming language C without user interaction or code annotations. The idea is best explained using a similar example from before with 4 bit variable width, and the secret input variable pwd: P ≡ if(pwd == 4) { return 1; } else { return 0; }
The first step of the method is to find a representative input for each possible output. In our case, AQuA could find the set {4, 5}, for outputs 1 and 0, respectively. This is accomplished using SAT-based fixed point computation. The next step runs on that set of representative inputs. For each input in that set, it counts the number of possible inputs which lead to the same implicit, distinct output. This step is accomplished using model counting. The next section will look at these two steps in more detail.
4.1 Method
The method consists of two reachability analyses, which can be run either one after another or interleaved. The first analysis finds a set of inputs for which the original program produces distinct outputs. That set has the cardinality of the number of possible outputs of the program. The second analysis counts the set of all inputs which lead to the same output. This analysis is run on all members of the set of the first analysis. Together, these two analyses allow us to discover the partition of the input space according to a program’s outputs. To a program P we associate two modified programs P≠ and P= , representing the two reachability questions. The two programs are defined as follows: P≠ (i) ≡ h = i; P ; P ′ ; assert(l != l′ ) and P= (i) ≡ h = i; P ; P ′ ; assert(l = l′ ). The program P is self-composed [3,18] and either asserts low-equality or low-inequality on the output variable and its copy. Their argument is the initialisation value for the input variable. This method works for any number of input variables, but we simplify it to a single variable. The programs P≠ and P= are unwound into propositional formulae and then translated into Conjunctive Normal Form (CNF) in a standard fashion. P≠ is solved using a number of SAT solver calls using a standard reachability algorithm (SAT-based fixed point calculation) [11]. Algorithm 1 describes this input discovery. In each iteration it discovers a new input h′ which does not lead to the same output as the previous input h. The new input h′ is added to the set Sinput . The observable output l is added to the formula as a blocking clause, to avoid finding the same solution again in a different iteration. This process is repeated until P≠ is unsatisfiable, which signifies that the search for Sinput elements is exhausted. Given Sinput (or a subset of it) as the result of Algorithm 1, we can use P= to count the sizes of the equivalence classes represented by Sinput using model counting. This process is displayed in Algorithm 2 and is straightforward to understand. The algorithm calculates the size of the equivalence class [h]P= for every h in Sinput by counting the satisfying models of P= (h). The output M of Algorithm 2 is the partition Π(P ) of the original program P . Proposition 4 (Correctness). The set Sinput of Algorithm 1 contains a representative element for each possible equivalence class of Π(P ). Algorithm 2 calculates {[s1 ]P= , . . . , [sn ]P= } which, according to (6), is Π(P ).
Input: P= Output: Sinput Sinput ← ∅ h ← random Sinput ← Sinput ∪ {h} while P= (h) not unsat do (l, h ) ← Run SAT solver on P= (h) Sinput ← Sinput ∪ {h } h ← h P= ← P= ∧ l =l end
Algorithm 1. Calculation of S_input using P_≠

Input: P_=, S_input
Output: M
M = ∅
while S_input ≠ ∅ do
    h ← s ∈ S_input
    #models ← Run allSAT solver on P_=(h)
    M = M ∪ {#models}
    S_input ← S_input \ {s}
end

Algorithm 2. Model counting of equivalence classes in S_input
4.2 Implementation
The implementation builds on a toolchain of existing tools, together with some interfacing, language translations, and optimisations. See Figure 1 for an overview. AQuA has the following main features:

– runs on a subset of ANSI C without memory allocation and with integer secret variables
– no user interaction or code annotations needed except command line options
– supports non-linear arithmetic and integer overflows

AQuA works on the equational intermediate representation of the CBMC bounded model checker [7]. C code is translated by CBMC into a program of constraints, which in turn gets optimised through standard program analysis techniques into cleaned-up constraints¹. This program then gets self-composed, and user-provided source and sink variables get automatically annotated. In a next step, the program gets translated into the bit-vector arithmetic Spear format of the Spear theorem prover [1]. At this point, AQuA spawns the two instances, P_≠ and P_=, from the input program P.
¹ CBMC adds some constraints which distort the model counting.
[Fig. 1. Translation steps: C source → CBMC constraints → optimisations → self-composition → language translation → Spear format; the P_≠ instance is SAT-solved to obtain S_input, and the P_= instance is model counted (#SAT) to obtain the partition.]
Algorithms 1 and 2 get executed sequentially on those two program versions. However, depending on the application and the cost of the SAT queries, one could also choose to execute them interleaved, by first calculating one input to the program P_≠ and then model counting that equivalence class. For Algorithm 1, Spear will SAT solve P_≠ directly and report the satisfying model to the tool. The newly found inputs are stored until P_≠ is reported to be unsat. For Algorithm 2, Spear will bit-blast P_= down to CNF, which in turn gets model counted by either RelSat [4] or C2D. C2D is only used in case the user specifies fast model counting through command line options. While the counting is much faster on difficult problems than with RelSat, the CNF instances have to be transformed into a d-DNNF tree [8], which is very costly in memory. This is a trade-off between time and space. In most instances, RelSat is fast enough, except in cases with multiple constraints on more than two secret input variables. Once the partition Π(P) is calculated, the user can choose which measure to apply.

Loops. The first step of the program transformations treats loops in an unsound way, i.e. a user needs to define a fixed number of loop unwindings. This is an inherent property of the choice of tools used, as CBMC is a bounded model checker, which limits the depth up to which counterexamples can be found. While this is a real restriction in program verification – as bugs can be missed in that way – it is not as crucial for our quantification purposes. At some point Algorithm 1 detects an input whose equivalence class contains all inputs beyond the iteration bound. Using the principle of maximum entropy, this "sink state" can be used to always safely over-approximate entropy. Let us assume we analyse a binary search example with 15 unwindings of the loop and 8 bit variables. AQuA reports the partition
Table 1. Performance examples. * 30 loop unrollings; † from [2]; counted with C2D. Machine: Linux, Intel Core 2 Duo 2GHz.

Program      #h  range   Σh bits        P_≠ Time  P_≠ + P_= Time  Spear LOC
CRC8 1h.c    1   8 bit   8              17.36s    32.68s          370
CRC8 2h.c    2   8 bit   16             34.93s    1m18.74s        763
sum3.c†      3   0...9   9.96 (10^3)    0.19s     0.95s           16
sum10.c      10  0...5   25.84 (6^10)   1.59s     3m30.76s        51
nonlinear.c  1   16 bit  16             0.04s     13.46s          20
search30.c*  1   8 bit   8              0.84s     2.56s           186
auction.c†   3   20 bit  60             0.06s     16.90s          42
Partition: {241}{1}{1}{1}{1}{1}{1}{1}{1}{1}{1}{1}{1}{1}{1}{1}: 256

where the numbers in the brackets are the model counts. We have 15 singleton blocks and one sink block with a model count of the remaining 241 unprocessed inputs. When applying a measure, the 241 inputs could likewise be distributed into singleton blocks, which would over-approximate (and in this case actually exactly find) the leakage of the input program.

Proposition 5 (Sound loop leakage). Let us assume partition Π(P)_n is the result of n unwindings of P, and Π(P)_m is the result of m unwindings of P, where m ≥ n. If every element of the "sink state" block b ∈ Π(P)_n is distributed into individual blocks, yielding the partition denoted Π̂(P)_n, then Π(P)_m ⊑ Π̂(P)_n. From Proposition 3 it follows that H(Π(P)_m) ≤ H(Π̂(P)_n).

Experiences. Table 1 provides a small benchmark to give an idea of the programs AQuA has been tested on. The running times are split between Algorithm 1, which solves P_≠, and the total run time; the table also provides the lines of code (LOC) of the program in Spear format. The biggest example is a full CRC8 checksum implementation, where the input is two char variables (16 bits), and which has over 700 LOC. The run time depends on the number of secrets and their ranges, and as a result on the cardinality of the partition. The programs are available from the first author's website.
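To make the over-approximation concrete, a quick calculation of our own (assuming uniformly distributed 8-bit inputs): the reported partition has Shannon entropy 15·(1/256)·log2 256 + (241/256)·log2(256/241) ≈ 0.55 bits, whereas distributing the 241 sink inputs into singletons yields 256 singleton blocks and hence 8 bits, which, as noted above, is in this case exactly the leakage of the full program.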
4.3 Comparison to DisQuant
Recently, Backes, Köpf, and Rybalchenko published an elegant and inspiring method to calculate and quantify an equivalence relation given a C-like program [2]. Their tool DisQuant turns information flow checking into a reachability problem by self-composing a program and then applying a model checker to find pairs of inputs which violate the secure information flow safety property. This results in a logical formula of secret relations. From the relation, equivalence classes and their sizes can be calculated. The relation is built by a guided
abstraction-refinement technique. This means that the tool starts with a blank canvas where every high input is related to every other. It then successively refines an equivalence relation R by learning that some input pairs (hi, hj) lead to different outputs, in which case a new equivalence class is added to the relation. However, before DisQuant can say anything about the (size of the) partition it has to complete the refinement process; intermediate results in the CEGAR refinement process might not be usable for quantification. In comparison, our method is in a way the opposite: it calculates a whole equivalence class for one input, independently of the remaining equivalence classes. This has multiple advantages: it is easy to distribute the computation for different inputs and the model counting over multiple computers; for some problems, not all equivalence classes need to be calculated and the computation can stop after finding certain properties, e.g. finding a too large/small equivalence class for a given policy; we can calculate a partition for a subset of inputs; and we can provide incremental lower bounds on the leakage. Two additional differences between the two approaches are worth mentioning separately: due to the bit-precise modelling of arithmetic operators and overflows, our tool can handle non-linear constraints on secret inputs, while DisQuant is limited by its underlying techniques to linear arithmetic. Also, Algorithm 2 in our tool is not only able to count the number of elements in an equivalence class but can also enumerate the models. While it is prohibitive to completely enumerate large equivalence classes, it is still possible to extract example models which could be used for some purposes.
5 Database Queries as Programs
We will now describe how we can model statistical database queries as programs. Once a database query has been modelled as a program, we can apply our program analysis tools to calculate the partition of states and in turn quantify the leakage of the queries. This section is not about showcasing AQuA's performance but about illustrating the breadth of applications of applied QIF. We will use concepts from Dobkin et al. [10] to describe databases.

Definition 1. A database D is a function from {1, . . . , n} to N. The number of elements in the database is denoted by n; N is the set of possible attributes.

A database D can also be directly described by its elements {d1, . . . , dn}, with D(i) = di for 1 ≤ i ≤ n. For a database with n objects, a query is an n-ary function. Given D, q(D) = q(d1, . . . , dn) is the result of the query q on the database D. We assume that a database user can choose the function q and restrict its application to some of the elements of {d1, . . . , dn}, depending on the query structure. However, the user cannot see any values the function q runs on. An arbitrary query is translated by the following transformation:

Q1 = q(di, . . . , dj)  ⇒  l1 = e(hi, . . . , hj)
where the function q applied to (di, . . . , dj) is rewritten to some C expression e² on the secret variables hi, . . . , hj, where hn is equal to dn for all i ≤ n ≤ j; the output is stored in the observable variable l1. A sequence of queries Q1, . . . , Qn results in tuples of observable variables (l1, . . . , ln). We denote the partition of states for a query Qi, after the transformation above, as Π(Qi).
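As a small illustration of this translation (our own example; the function name queries is illustrative), the max and sum queries over the same two secret fields used in the example below would be rendered as the following C fragment, with 2-bit secrets h1, h2 and observable outputs l1, l2:

    int l1, l2;                     /* low (observable) query results   */
    void queries(int h1, int h2) { /* h1, h2: secret database attributes */
        l1 = (h1 > h2) ? h1 : h2;  /* Q1' = max(h1, h2) */
        l2 = h1 + h2;              /* Q2' = sum(h1, h2) */
    }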
5.1 Database Inference by Examples
To measure the degree of database inference possible by a sequence of queries we define the following ratio, comparing leakage with the respective secret space.

Definition 2 (SDB Leakage Ratio). Given an SDB, let Q1, . . . , Qn be queries, and h1, . . . , hm be the involved secret elements in the database. The percentage of leakage revealed by the sequence of queries is given by

M(⊔_{1≤i≤n} Π(Qi)) / M(h1, . . . , hm)    (7)

In the definition we can use Proposition 2 to compute ⊔_{1≤i≤n} Π(Qi).
Max/Sum Example. Two or more queries can lead to an inference problem when there is an overlap on the query fields. Assume two series of queries:

Q1 = max(h1, h2)    Q2 = sum(h3, h4)

The first series of queries asks for the max and the sum of two disjoint sets of fields. The two queries do not share any common secret fields, so Q1 does not contribute to the leakage of Q2.

Q1′ = max(h1, h2)    Q2′ = sum(h1, h2)

It is a different picture if the two queries run on the same set of fields, as shown in Q1′, Q2′. Intuitively, we learn the biggest element of the two and we learn the sum of the two. The queries combined reveal the values of both secret fields, i.e. sum − max = min. Assuming 2 bit variables, we get the following calculations:

H(Π(Q1)) = 1.7490    H(Π(Q2)) = 2.6556    H(Π(Q1) ⊔ Π(Q2)) = 4.4046
H(Π(Q1′)) = 1.7490   H(Π(Q2′)) = 2.6556   H(Π(Q1′) ⊔ Π(Q2′)) = 3.25

The measure of how much of the secret the two series of queries reveals is the ratio between the join of the queries and the whole secret space:

H(Π(Q1′) ⊔ Π(Q2′)) / H(h1, h2) = 3.25/4 ≈ 81%
² Expressions usually used in statistical databases are sum, count, average, mean, median, etc.; our context is, however, general, so any C expression can be used.
Table 2. Contributors

Contributor  Industry  Geograph. Area
C1           Steel     Northeast
C2           Steel     West
C3           Steel     South
C4           Sugar     Northeast
C5           Sugar     Northeast
C6           Sugar     West
where we have used H, the Shannon entropy, as the leakage measure³. The 3.25 bits, or 81% of the secret, is the maximal possible leakage for the query: we still do not know which of the two secrets was the bigger one, but "everything" is leaked in a sense, while the first pair of queries only reveals 55% of the secret space. For the enforcement, we could think of a simple monitor which keeps adding up the information released so far for individual users and which would refuse certain queries in order not to reveal more than a policy allows. A policy can be as simple as a percentage of the secret space to be released.

Sum Queries Inference. Consider a database storing donations of contributors to a political party from the steel and sugar industry, with contributors coming from several geographical areas. Given Tables 2 and 3, a user is allowed to make sum queries on all contributors which share a common attribute (Industry or Geographic Area)⁴. Table 3 summarises all possible queries, where the amount donated by each contributor Ci is represented by the value hi. In this scenario, the owner of the database wants to make sure that no user can learn more than 50% of the combined secret knowledge of what each contributor donated. We will look at two users querying the database; the queries of the first user fulfil the requirements of the database owner, while the second user (who happens to be contributor C1) clearly compromises the database information release requirements. User 1 makes two queries:

Q1 = sum(h1, h2, h3)    Q2 = sum(h4, h5, h6)

In other words, User 1 is asking for the sums of the contributions from the steel and from the sugar industry. For simplicity, we assume only 2 bit variables for each contributor hi. AQuA calculates a partition with 100 equivalence classes, and a Shannon entropy of 5.9685 out of a total of 12 bits.
³ Taking a different measure like min-entropy we would get 40% and 75% respectively.
⁴ Example adapted from [10].
This results in a ratio of

H(Π(Q1) ⊔ Π(Q2)) / H(h1, . . . , h6) = 5.9685/12 ≈ 49.73%

which is just within the requirement of 50% information leakage. User 2, who is contributor C1, issues the following two queries:

Q3 = sum(h4, h5, h6)    Q4 = sum(h1, h4, h5)

Here, Q3 and Q4 have an overlap in the fields h4 and h5. Since User 2 is C1, the field h1 is known, so with these two queries User 2 is able to learn h6, i.e. h6 = Q3 − Q4 + h1. The substantial knowledge gain of User 2 is revealed in the leakage ratio

H(Π(Q3) ⊔ Π(Q4)) / H(h1, h4, h5, h6) = H(Π(Q3) ⊔ Π(Q4)) / H(h4, h5, h6) = 4.6556/6 ≈ 77.6%

where in the second equality the term h1 in the denominator disappears because contributor C1 knows h1 (and similarly Q4 reduces to sum(h4, h5))⁵. If our tool were evaluating the information leakage of these queries before the result was reported back to the user, then Q4 could be denied for User 2.

We can see the previous database as an (easily computable) abstraction of a real database with a large number of entries. In this case C1 could represent the set of contributors from the steel industry in the Northeast, and the leakage ratio would then tell us the amount of information the queries leak about the group of individuals (or set of secret data). We can hence extract valuable information about the threat of a set of queries by automatically computing the leakage on an abstraction of a database. This measure can be combined with more classical query restriction techniques like set size and overlap restriction within a threat monitor. While a precise theory of this monitor is beyond the scope of this work, we believe the ideas are sound and workable.
6 Related Work
The closest work to ours is the one reviewed in Section 4.3. Another impressive method to quantify information flows in large programs is described by McCamant in multiple works [17]. The lattice of information has been described by Landauer and Redmond [13]. k-Anonymity [19] is a notion related to our database inference work. In our framework, a partition which satisfies k-anonymity has no equivalence class smaller than k. However, we consider the whole probability distribution and thus measure more than what k-anonymity does. Further research is needed to clarify the connection between the two works.
⁵ To understand the number: 4.6556 comes from the fact that the queries reveal h6, i.e. 2 bits, plus sum(h4, h5), which is 2.6556 bits.
References

1. Babić, D., Hutter, F.: Spear Theorem Prover. In: Proc. of the SAT 2008 Race (2008)
2. Backes, M., Köpf, B., Rybalchenko, A.: Automatic Discovery and Quantification of Information Leaks. In: Proc. 30th IEEE Symposium on Security and Privacy, S&P 2009 (2009) (to appear)
3. Barthe, G., D'Argenio, P.R., Rezk, T.: Secure Information Flow by Self-Composition. In: Proceedings of the 17th IEEE Workshop on Computer Security Foundations, CSFW (2004)
4. Bayardo, R., Schrag, R.: Using CSP look-back techniques to solve real-world SAT instances. In: Proc. of AAAI 1997, pp. 203–208. AAAI Press/The MIT Press (1997)
5. Clark, D., Hunt, S., Malacaria, P.: A static analysis for quantifying information flow in a simple imperative language. Journal of Computer Security 15(3) (2007)
6. Clark, D., Hunt, S., Malacaria, P.: Quantitative information flow, relations and polymorphic types. Journal of Logic and Computation, Special Issue on Lambda-calculus, type theory and natural language 18(2), 181–199 (2005)
7. Clarke, E., Kroening, D., Lerda, F.: A Tool for Checking ANSI-C Programs. In: Jensen, K., Podelski, A. (eds.) TACAS 2004. LNCS, vol. 2988, pp. 168–176. Springer, Heidelberg (2004)
8. Darwiche, A., Marquis, P.: A Knowledge Compilation Map. Journal of Artificial Intelligence Research 17, 229–264 (2002)
9. Denning, D.E., Schlörer, J.: A fast procedure for finding a tracker in a statistical database. ACM Transactions on Database Systems 5(1), 88–102 (1980)
10. Dobkin, D., Jones, A.K., Lipton, R.J.: Secure databases: Protection against user influence. ACM Transactions on Database Systems 4, 97–106 (1979)
11. Chauhan, P., Clarke, E.M., Kroening, D.: Using SAT based Image Computation for Reachability. Carnegie Mellon University, Technical Report CMU-CS-03-151 (2003)
12. Köpf, B., Basin, D.: An information-theoretic model for adaptive side-channel attacks. In: Proceedings of the 14th ACM Conference on Computer and Communications Security, CCS 2007, pp. 286–296 (2007)
13. Landauer, J., Redmond, T.: A Lattice of Information. In: Proc. of the IEEE Computer Security Foundations Workshop. IEEE Computer Society Press, Los Alamitos (1993)
14. Malacaria, P.: Assessing security threats of looping constructs. In: Proc. ACM Symposium on Principles of Programming Languages (2007)
15. Malacaria, P.: Risk Assessment of Security Threats for Looping Constructs. To appear in the Journal of Computer Security (2009)
16. Nakamura, Y.: Entropy and Semivaluations on Semilattices. Kodai Math. Sem. Rep. 22, 443–468 (1970)
17. McCamant, S.A.: Quantitative Information-Flow Tracking for Real Systems. Ph.D. thesis, MIT Department of Electrical Engineering and Computer Science, Cambridge, MA (2008)
18. Terauchi, T., Aiken, A.: Secure information flow as a safety problem. In: Hankin, C., Siveroni, I. (eds.) SAS 2005. LNCS, vol. 3672, pp. 352–367. Springer, Heidelberg (2005)
19. Sweeney, L.: k-anonymity: a model for protecting privacy. International Journal on Uncertainty, Fuzziness and Knowledge-based Systems 10(5), 557–570 (2002)
Specification and Verification of Side Channel Declassification

Josef Svenningsson and David Sands

Department of Computer Science and Engineering, Chalmers University of Technology, Göteborg, Sweden
{josefs,dave}@chalmers.se
Abstract. Side channel attacks have emerged as a serious threat to the security of both networked and embedded systems – in particular through the implementations of cryptographic operations. Side channels can be difficult to model formally, but with careful coding and program transformation techniques it may be possible to verify security in the presence of specific side-channel attacks. But what if a program intentionally makes a tradeoff between security and efficiency and leaks some information through a side channel? In this paper we study such tradeoffs using ideas from recent research on declassification. We present a semantic model of security for programs which allow for declassification through side channels, and show how side-channel declassification can be verified using off-the-shelf software model checking tools. Finally, to make it simpler for verifiers to check that a program conforms to a particular side-channel declassification policy we introduce a further tradeoff between efficiency and verifiability: by writing programs in a particular “manifest form” security becomes considerably easier to verify.
1 Introduction
One of the pillars of computer security is confidentiality – keeping secrets secret. Much recent research in language based security has focused on how to ensure that information flows within programs do not violate the intended confidentiality properties [SM03]. One of the difficulties of tracking information flows is that information may flow in various indirect ways. Over 30 years ago, Lampson [Lam73] coined the phrase covert channel to describe channels which were not intended for information transmission at all. At that time the concern was unintended transmission of information between users on timeshared mainframe computers. In much security research that followed, it was not considered worth the effort to consider covert channels. But with the increased exposure of sensitive information to potential attackers, and the ubiquitous use of cryptographic mechanisms, covert channels have emerged as a serious threat to the security of modern systems – both networked and embedded. The following key papers provide a view of the modern side-channel threat landscape:
• Kocher [Koc96] showed that by taking timing measurements of RSA cryptographic operations one could discover secret keys. Later [KJJ99] it was shown that one could do the same by measuring power consumption.
• Based on Kocher's ideas, numerous smart card implementations of cryptographic operations have been shown to be breakable. See e.g. [MDS99].
• Brumley and Boneh [BB05] showed that timing attacks were not just relevant to smart cards and other physical cryptographic tokens, but could be effective across a network; they developed a remote timing attack on an SSL library commonly used in web servers.

What is striking about these methods is that the attacks are on the implementations and not on features of the basic intended functionality. Mathematically, cryptographic methods are adequately secure, but useless if the functionally correct implementation has timing or other side channels.

1.1 Simple Timing Channels
Timing leaks often arise from the fact that computation involves branching on the value of a secret. Different instructions are executed in each branch, and these give rise to a timing leak or a power leak (whereby a simple power analysis [MS00] can reveal information about e.g. control flow paths). One approach is to ensure that both branches take the same time [Aga00], or to eliminate branches altogether [MPSW05] – an approach that is also well known from real-time systems where it is used to make worst case execution time easy to determine [PB02].

Consider the pseudocode in Figure 1, representing a naïve implementation of modular exponentiation, which we will use as our running example throughout the paper.

    r = 1;
    i = m - 1;
    while (i >= 0) {
      r = r * r;
      if (d[i] == 1) {
        r = r * x;
      }
      i = i - 1;
    }
    return r;

Fig. 1. Modular exponentiation

The data that goes into this function is typically secret. A common scenario is that the variable x is part of a secret which is to be encrypted or decrypted and the variable d is the key (viewed here as an array of bits). It is important that these remain secret. (On the other hand, m, the length of the key, is usually considered public knowledge.)

However, as this function is currently written it is possible to derive some or all of the information about the key using either a timing or power attack. The length of the loop will always reveal the size of the key – and this is accepted. In the body of the loop there is a conditional statement which is executed depending on whether the current bit in the key is set or not. This means that each iteration of the loop will take a different amount of time depending on the value of the key. A timing attack measuring the time it takes to compute the whole result can
be used to learn the hamming weight of the key, i.e. the number of 1's. With control over the key and repeated runs this is sufficient to leak the key [Koc96]. A power analysis could in principle even leak the key in a single run.

1.2 Timing and Declassification
Often the run-time cost of securing an algorithm against timing attacks using a general purpose method is higher than what we are prepared to pay. For example, using a table-lookup instead of a branch [Cor99], the conditional can be replaced by the following code:

    z[0] = r * r mod N
    z[1] = r * r * M mod N
    r = z[d[i]]

This fixes the timing leak, but the algorithm becomes considerably slower – even after eliminating the common subexpression. Another even more costly approach is Agat's cross copying idea, whereby (roughly speaking) every branch on a secret value if h then A else B is transformed into if h then A;[B] else [A];B, where [A] is a ghost copy of A which takes the same time to compute but otherwise has no effect. There are optimisations of this approach using unification [KM06], or by making the padding probabilistic [DHW08], but efficiency-wise the improvements offered by those techniques are probably not sufficient in this context.

A potential solution to this tension between security and efficiency is to make a tradeoff between the two. For this reason it is not uncommon for algorithms to have some side channel leakage. An example of this is the following variation on modular exponentiation, adapted from [CMCJ04], which is intended to provide some (unspecified) degree of protection against simple power analysis attacks (but still leaks the hamming weight of the key).

    r = 1;
    i = m - 1;
    k = 0;
    while (i >= 0) {
      r = r * (k ? x : r);
      k = k ^ d[i];
      i = i - (k ? 0 : 1);
    }

Fig. 2. Protected exponentiation
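To see why the program in Figure 2 still leaks the hamming weight, note that each 0-bit of d is processed in one loop iteration while each 1-bit takes two: k is set on the first visit, so i is only decremented on the second. With an illustrative 3-bit key d = 101 (our own example), the loop runs 3 + 2 = 5 times; in general the iteration count is m plus the hamming weight of d.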
Research Goals and Approach. Our research goal is to determine how to express this tradeoff. There are three key issues to explore:

• Security Policies: how should we specify side-channel declassification?
• Security Mechanisms: how to derive programs which achieve the tradeoff?
• Security Assurance: how can we show that programs satisfy a given policy, with a rigorous specification and formal verification?

This paper deals primarily with the first and the third point. The first step – a prerequisite to a rigorous specification – is to specify our attacker model. A model sets the boundaries of our investigation (and as always with covert channels, there are certainly attacks which fall outside). We choose
(as discussed in Section 3) the program counter security model [MPSW05]. This model captures attackers performing simple power and timing analysis.

To specify a security policy we turn to work on declassification. The concept of declassification has been developed specifically to allow the programmer to specify what, where, or when a piece of information is allowed to leak (and by whom). A simple example is a program which requires a password based login. For this program to work it must declassify (intentionally leak) the value of the comparison between the actual and the user supplied password strings. Declassification has been a recent hot topic in information flow security (see [SS05] for an overview). The standard techniques for declassification seem largely applicable to our problem, but there are some differences, because (in the context of cryptographic algorithms in particular) we may be interested in the distinction between declassifying some data directly (something which has potentially zero cost to the attacker), and declassifying the data but only through a side channel – the latter is what we call side channel declassification.

We will adapt existing declassification concepts to specify what information we are willing to leak through timing channels (Section 4). More specifically, we use small programs as a specification of what information is leaked. This follows the style of delimited release [SM04]. As an example, we might want to specify that a program does not leak more than the hamming weight of the key. This can be achieved by using the program fragment in Figure 3 as a specification: it explicitly computes the hamming weight of the key.

    h = 0;
    i = m - 1;
    while (i >= 0) {
      if (d[i] == 1) {
        h = h + 1;
      }
      i = i - 1;
    }

Fig. 3. Hamming weight computation

The formal definition of side-channel declassification (Section 4) is that if the attacker knows the information leaked by the declassifier then nothing more is learned by running the program.

We then turn to the question of verification. We investigate the use of off-the-shelf automatic program verification tools to verify side-channel declassification policies. The first step is to reify the side channel by transforming the program to represent the side-channel as part of the program state (Section 5). This reduces the specification of side-channel declassification to an extensional program property. The next step is to observe that in many common cases we can simplify the side-channel instrumentation. This simplification (described in Section 5.1) does not need to be semantics preserving – it simply needs to preserve the side-channel declassification condition. As we aim to use automatic off-the-shelf model checkers we need one final transformation to make our programs amenable to verification. We use self composition to reduce the verification problem to a safety property of a transformed program. Section 6 describes the approach and experiments with software model checkers.
For various reasons the side-channel declassification property of algorithms can still be hard to verify. The last part of this work (Section 7) introduces a tradeoff which makes verification much simpler. The idea is to write programs in what we call manifest form. In manifest form the program is written in two parts: a declassifier first computes what is to be released, and then using this information a side-channel secure program computes the rest. The verification problem amounts to showing that the second part of the program is indeed side-channel secure (this can be rather straightforward due to the strength of the side-channel security condition), and that the declassifier satisfies the property that it does not leak more through its side channel than it leaks directly. We call these manifest declassifiers. Since declassifiers are much simpler (and quite likely useful in many different algorithmic contexts) verification of manifest declassifiers is relatively simple. We show how this technique can overcome the verification limitations of certain verification tools. An extended version of this article containing material left out for reasons of space constraints is available as a technical report [SS09].
2 Preliminaries
In this section we present the language we are going to use and set up the basic machinery in order to define our notion of security. Since we target cryptographic algorithms we will be using a small while language with arrays. Its syntax is defined below:

C ∈ Command ::= x = e | x[y] = e | C1; C2 | if e then C1 else C2 | while e C | skip
e ∈ Expression ::= x | x[e] | n | e1 op e2 | x ? y : z
op ∈ Operators ::= + | ∗ | − | ˆ | mod | . . .

The commands of the program should not require much explanation as they are standard for a small while language. One particular form of expression that we have chosen to include, and that may not look very standard for a toy language, is the ternary operator borrowed from the language C. It is a conditional expression that can choose between the values of two different registers based on the value of a third register. We have restricted it to only operate on registers, since allowing it to choose between evaluating two general expressions may give rise to side channels. This kind of operation can typically be implemented to take a constant amount of time, so that it does not exhibit a side channel, by using the conditional assignment that is available in e.g. x86 machine code. The semantics of programs is completely standard. We defer the definition until the next section, where an operational semantics is given together with some additional instrumentation.
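As an aside (our own sketch, not part of the paper), one branch-free way to realise r = k ? x : y in C, when k is known to be 0 or 1, is the familiar mask trick; a compiler may equally emit a conditional-move (cmov) instruction on x86:

    /* r = k ? x : y without a branch, assuming k is 0 or 1 */
    unsigned cond_assign(unsigned k, unsigned x, unsigned y) {
        unsigned mask = 0u - k;   /* all ones when k == 1, all zeros when k == 0 */
        return (x & mask) | (y & ~mask);
    }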
3 Baseline Security Model
In this section we present the semantic security model which we use to model the attacker and to define the baseline notion of declassification-free security. For a
good balance between simplicity and strength we adopt an existing approach: the program counter security model [MPSW05]. This attacker model is strong enough to analyze simple power analysis attacks [KJJ99] – where the attacker is assumed to be able to make detailed correlations between the power profile of a single run and the instructions executed during that run. The idea of the program counter security model is to assume the attacker can observe a transcript consisting of the sequence of program counter positions. This is slightly stronger than an attacker who could perfectly deduce the sequence of instructions executed from a (known) program given a power consumption profile of an execution. It does, however, assume that the power consumption of a particular operation does not depend on the data it manipulates. In particular it does not model differential power analysis.

Suppose a program operates on a state which can be partitioned into a low (public) part and a high (secret) part. A program is said to be Transcript-secure if, given any two states whose low parts are equal, running the program on these respective states yields equal transcripts and final states which also agree on their low parts¹.

To specialise this definition to our language we note that it is sufficient for the attacker to observe the sequence of branch decisions in a given run in order to be able to deduce the sequence of instructions that were executed. To this end, in Figure 4 we give an instrumented semantics for our language which makes this model of side channels concrete. Apart from the instrumentation (in the form of labels on the transitions) this is a completely standard small-step operational semantics. The transition labels, o, are either a silent step (τ), a 0 or a 1. A zero or one is used to record which branch was taken in an if or while statement.

Definition 1 (Transcript). Let d1, d2, . . . range over {0, 1}. We say that a configuration ⟨C, S⟩ has a transcript d1, . . . , dn if there exist configurations ⟨Ci, Si⟩, i ∈ [1, n], such that

⟨C, S⟩ →τ* →d1 ⟨C1, S1⟩ →τ* →d2 · · · →τ* →dn ⟨Cn, Sn⟩ →τ* ⟨skip, S′⟩

for some S′ (where →o denotes a transition labelled o and →τ* denotes zero or more τ-labelled transitions). In the above case we will write [[C]]S = S′ (when we only care about the final state) and [[C]]^T S = (S′, t) where t = d1, . . . , dn (when we are interested in the state and the transcript). For the purpose of this paper (and the kinds of algorithms in which we are interested in this context) we will implicitly treat [[C]] and [[C]]^T as functions rather than partial functions, thus ignoring programs which do not always terminate.

Now we can formally define the baseline security definition, which following [MPSW05] we call Transcript-security:
¹ It would be natural to assume that attackers have only polynomially bounded computing power in the size of the high part of the state. For the purposes of this paper our stronger definition will suffice.
Expression evaluation:

    n, S ⇓ n

    x, S ⇓ S(x)

    e, S ⇓ v
    -------------------
    x[e], S ⇓ S(x)(v)

    e1, S ⇓ v1    e2, S ⇓ v2
    --------------------------
    e1 op e2, S ⇓ v1 op v2

    S(x) ≠ 0                      S(x) = 0
    -------------------           -------------------
    x?y:z, S ⇓ S(y)               x?y:z, S ⇓ S(z)

Command transitions (o ranges over τ, 0, 1):

    e, S ⇓ v
    ----------------------------------
    ⟨x = e, S⟩ →τ ⟨skip, S[x ↦ v]⟩

    e, S ⇓ v
    ------------------------------------------------
    ⟨x[y] = e, S⟩ →τ ⟨skip, S[x ↦ x[S(y) ↦ v]]⟩

    ⟨skip; C, S⟩ →τ ⟨C, S⟩

    ⟨C1, S⟩ →o ⟨C1′, S′⟩
    -------------------------------
    ⟨C1; C2, S⟩ →o ⟨C1′; C2, S′⟩

    e, S ⇓ v    v ≠ 0                     e, S ⇓ 0
    -----------------------------         -----------------------------
    ⟨if e C1 C2, S⟩ →1 ⟨C1, S⟩            ⟨if e C1 C2, S⟩ →0 ⟨C2, S⟩

    e, S ⇓ v    v ≠ 0                          e, S ⇓ 0
    --------------------------------------     ------------------------------
    ⟨while e C, S⟩ →1 ⟨C; while e C, S⟩        ⟨while e C, S⟩ →0 ⟨skip, S⟩

Fig. 4. Instrumented Semantics
Definition 2 (Transcript-Security). Assume a partition of program variables into low and high. We write R =L S if program states R and S differ on at most their high variables. We extend this to state-transcript pairs by

(R, t1) =L (S, t2) ⟺ R =L S & t1 = t2

reflecting the fact that a transcript is considered attacker observable (low). A program C is Transcript-secure if for all R, S, if R =L S then [[C]]^T R =L [[C]]^T S.

Note that Transcript-security, as we have defined it, is a very strong condition and also very simple to check. A sufficient condition for Transcript-security is that the program in question (i) does not assign values computed using high variables to low variables, and (ii) does not contain any loops or branches on expressions containing high variables. The main contribution of [MPSW05] is a suite of methods for transforming programs into this form. Unfortunately the transformation can be too costly in general, but that method is nicely complemented by use of declassification.
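For instance (our own illustrations, not from the paper): with h high and l low, l = h violates condition (i), and while (h > 0) h = h - 1 violates condition (ii), whereas r = (k ? x : r), with all of k, x, r high, satisfies both, since the ternary on registers introduces no branch and no high value reaches a low variable.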
4 Side Channel Declassification
To weaken the baseline definition of security we adopt one of the simplest mechanisms to specify what information may be leaked about a secret: delimited release [SM04]. The original definition of delimited release specified declassification by placing declassify labels on various expressions occurring in a program. The idea is that the attacker is permitted to learn about (at most) the values of those expressions in the initial state, but nothing more about the high part of the state.
We will reinterpret delimited release using a simple program rather than a set of expressions. The idea will be to specify a (hopefully small and simple) program D which leaks information from high variables to low ones. A program is Transcript-secure modulo declassifier D if it leaks no more than D, and this leak occurs through the side channel.

Definition 3 (Side Channel Declassification). Let D be a program which writes to variables distinct from all variables occurring in C. We define C to be Transcript-secure modulo D if for all R and S such that R =L S we have

[[C]]R =L [[C]]S & ([[D]]R = [[D]]S ⇒ [[C]]^T R =L [[C]]^T S).

The condition on the variables written by D is purely for convenience, but is without loss of generality. The first clause of the definition says that the only information leak can be through the side channel. The second clause says that the leak is no more than what is directly leaked by D. It is perhaps helpful to consider this clause in contrapositive form: [[C]]^T R ≠_L [[C]]^T S ⇒ [[D]]R ≠ [[D]]S. This means that if there is an observable difference in the transcripts of two runs then that difference is manifest in the corresponding runs of the declassifier. Note that if we had omitted the condition [[C]]R =L [[C]]S then we would have the weaker property that C would be allowed to leak either through the store or through the side channel – but we wouldn't know which. From an attacker's point of view it might take quite a bit more effort to attack a program if it only leaks through the side channel, so it seems useful to make this distinction. Clearly there are other variations possible, involving multiple declassifiers each leaking through a particular subset of observation channels.
5 Reifying the Side Channel
In the previous sections we have a definition of security that enables us to formally establish the security of programs with respect to side channel declassification. We now turn to the problem of verifying that particular programs fulfil the security condition. In order to avoid having to develop our own verification method we have chosen to use off-the-shelf software verification tools. Software verification tools work with the standard semantics of programs. But recall that our security condition uses an instrumented semantics which involves a simple abstraction of side channels. In order to make it possible to use off-the-shelf tools for our security condition we must reify the transcript so that it becomes an explicit value in the program which the tools can reason about. It is easy to see how to do this: we add a list-valued variable t to the program, and transform, inductively, each conditional

if e then C else C'   into   if e then (t = t++"1"; C) else (t = t++"0"; C')

and each while loop

while e do C   into   (while e do (t = t++"1"; C)); t = t++"0"

and inductively transform the subprograms C and C'.
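As a small illustration (our own rendering, not from the paper; Figure 1's branch has no else, so its ghost else-branch only records a 0, and the initialisation of t is ours), applying this transformation to the loop and branch of Figure 1 gives:

    t = "";
    while (i >= 0) {
      t = t++"1";
      r = r * r;
      if (d[i] == 1) { t = t++"1"; r = r * x; } else { t = t++"0"; }
      i = i - 1;
    }
    t = t++"0";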
5.1 Simplifying the Instrumentation
Reifying the transcript from the instrumented semantics in this way will create a dynamic data structure (a list) which is not bounded in size in general. Such data structures make programs more difficult to reason about, especially if we want some form of automation in the verification process. Luckily, there are several circumstances which help us side step this problem. Concretely, we use two facts to simplify the reification of the side channel.

The first simplification we use depends on the fact that we do not have to preserve the transcript itself – it is sufficient that it yields the same low-equivalence on programs. Suppose that P^T is the reified variant of the program P and that the reification is through the addition of some low variables. In order to use P^T for verification of side-channel security properties it is sufficient for it to satisfy the following property:

∀R, S. [[P]]^T R =L [[P]]^T S ⟺ [[P^T]]R =L [[P^T]]S

We call such a P^T an adequate reification of P.

The second simplification that we can perform in the construction of a reified program is that we are specifically targeting cryptographic algorithms. A common structure among the ones we have tried to verify is that the while loops contain straight line code (but potentially conditional expressions). If it is the case that while loops don't contain any nested branching or looping constructs then we can avoid introducing a dynamic data structure to model the transcript. Let us refer to such programs as unnested. For unnested programs it is simply enough to use one fresh low variable for each occurrence of a branch or loop. Thus the reification transformation for unnested programs is defined by applying the two transformation rules below to each of the loops and branches respectively:

while e C  ⇝  v = 0; while e (v = v + 1; C)                        (v fresh)
if e then C else C'  ⇝  if e then (v = 1; C) else (v = 0; C')      (v fresh)

    r = 1;
    i = m - 1;
    k = 0; t = 0;
    while (i >= 0) {
      t = t + 1;
      r = r * (k ? x : r);
      k = k xor d[i];
      i = i - (k ? 0 : 1);
    }

Fig. 5. Instrumented modular exponentiation
The program in Figure 5 is an instrumented version of the program in Figure 2. The only change is the new (low) variable t which keeps track of the number of iterations in the while loop.
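(Our own observation, following the trace sketched after Figure 2: on any run of Figure 5 the final value of t is m plus the hamming weight of d, which is exactly why the hamming-weight program of Figure 3 is a suitable declassifier for it.)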
6 Self Composition
Standard automatic software model checking tools cannot reason about multiple runs of a program. They deal exclusively with safety properties, which involve
reasoning about a single run. As is well-known, noninterference properties (like side-channel declassification) are not safety properties – they are defined as properties of pairs of computations rather than individual ones. However, a recent technique has emerged to reduce noninterference properties to safety properties for the purpose of verification. The idea appeared in [DHS03], and was explored extensively in [BDR04] where the idea was dubbed self composition. Suppose C is the program for which we want to verify noninterference. Let θ be a bijective renaming function to a set of variables disjoint from those used in C. Let Cθ denote a variable-renamed copy of C. Then the standard noninterference property of C can be expressed as a safety property of C; Cθ, viz. the Hoare triple

{∀v ∈ Low. v = θ(v)} C; Cθ {∀v ∈ Low. v = θ(v)}

To extend this to deal with side-channel declassification, let us suppose that C^T is an adequate reification of C. Then we can verify Transcript-security modulo D by the Hoare triple above (non side-channel security) in conjunction with:

{∀v ∈ Low. v = θ(v)} D; Dθ; C^T; (C^T)θ {(∀x ∈ W. x = θ(x)) ⇒ ∀y ∈ Low. y = θ(y)}

where W denotes the variables written by D. Here we take advantage of the assumption that the variables written by D are disjoint from those used in C^T. This enables us to get away with a single renaming. Note that since D is a program and not an expression we cannot simply use it in the precondition of the Hoare triple (c.f. [BDR04, TA05]).
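To make the encoding more tangible, here is a minimal sketch of our own (not the paper's actual toolchain output) of a self-composed harness for the running example; nondet_bit, M and harness are illustrative names, nondet_bit stands for a choice left unconstrained to the checker, the r-updates of Figure 5 are omitted since they do not affect the transcript, and only the second (declassification) clause of the property is asserted.

    #include <assert.h>
    #define M 4                       /* illustrative key length */

    int nondet_bit(void);             /* value chosen by the model checker */

    void harness(void) {
        int d[M], d2[M];
        for (int i = 0; i < M; i++) {
            d[i]  = nondet_bit() & 1; /* secret key bits of the two runs */
            d2[i] = nondet_bit() & 1;
        }
        /* declassifier D and its renamed copy: hamming weight (Figure 3) */
        int h = 0, h2 = 0;
        for (int i = 0; i < M; i++) { h += d[i]; h2 += d2[i]; }

        /* reified program C^T and its renamed copy (Figure 5, without r) */
        int t = 0,  k = 0,  i = M - 1;
        while (i >= 0) { t = t + 1;  k = k ^ d[i];  i = i - (k ? 0 : 1); }
        int t2 = 0, k2 = 0, j = M - 1;
        while (j >= 0) { t2 = t2 + 1; k2 = k2 ^ d2[j]; j = j - (k2 ? 0 : 1); }

        /* postcondition: equal declassified values imply equal transcripts */
        if (h == h2) assert(t == t2);
    }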
6.1 Experiments Using Self Composition
As Terauchi and Aiken discovered when they used self composition, it often resulted in verification problems that were too hard for the model checkers to handle [TA05]. As a result of this they developed a series of techniques for making the result of self composition easier to verify. The main technique is the observation that the low part of the two initial states must be equal and hence any computation that depends only on the low part can safely be shared between the two copies of the program. This was reported to help in verifying a number of programs. We employ the same technique in our experiments. We have used the model checkers Blast [HJMS03] and Dagger [GR06] and applied them to self-composed versions of the cryptographic algorithms. In particular we have tried to verify the instrumented modular exponentiation algorithm in Figure 5 secure modulo the hamming weight of the key (Figure 3). We have also tried all the algorithms proposed in [CMCJ04], since they all exhibit some form of side-channel leak and therefore have to be shown to be secure relative to that leak. None of the model checkers were powerful enough to automatically verify the programs secure. The main reason these tools fail seems to be that they do not reason about the contents of arrays. Being able to reason about arrays is crucial for our running example, as it involves computing the hamming weight of an array.
Another problem comes from the fact that the programs we wish to prove secure may be very different from their declassifiers. Relating two different programs with each other is a very difficult task and not something that current software model checkers are designed to do. By helping the model checkers with some manual intervention it is possible to verify the programs secure. Blast has a feature which allows the user to supply their own invariants. Given the correct invariants it will succeed with the verification. However, these predicates are not checked for correctness and coming up with them can be a highly non-trivial task. We have therefore developed another method for verification which we will explore in the next section.
7 Manifest Form
In this section we introduce a new way to structure programs to make verification considerably easier: Manifest Form. In manifest form the program is written in two parts: a declassifier first computes what is to be released, and then using this information a Transcript-secure program computes the rest. Manifest form represents a tradeoff: writing a program in manifest form may make it less efficient. The idea is that the program makes the declassification explicit in its structure (this is similar to the specification of relaxed noninterference [LZ05]). But for this to be truly explicit declassification the declassifier itself should not leak through its side channel – or more precisely, the declassifier should not leak more through its side channel than it does directly through the store.

Definition 4 (Manifest Declassifier). A program D is said to be a Manifest Declassifier if for all R and S

[[D]]S =L [[D]]R ⇒ [[D]]^T S =L [[D]]^T R

As an example of a non manifest declassifier, consider the first program below, which declassifies whether an array of length m contains all zeros. Here the array length m, the index i and the declassified value allz are low. This is not manifest because the transcript leaks more than the store: it reveals the position of the first nonzero element. A manifest version of this declassifier is shown second:

    i = m - 1; allz = 1;
    while (allz and i >= 0) {
      allz = (d[i] ? 0 : 1);
      i = i - 1;
    }
    i = 0

    i = m - 1; allz = 1;
    while (i >= 0) {
      allz *= (d[i] ? 0 : 1);
      i = i - 1;
    }
Definition 5 (Manifest Form). A program P is in Manifest Form if P = D; Q where D is a manifest declassifier and Q is transcript secure.
The program in Figure 6 is written in manifest form but otherwise it represents the same algorithm as the program in Figure 2. The first part of the program (lines 1–6) computes the hamming weight of the key, d, and this (using the low variable hamming) is then used in the second part of the program to determine the number of loop iterations.

    1  hamming = 0;
    2  i = m - 1;
    3  while (i >= 0) {
    4    hamming += (d[i] ? 1 : 0);
    5    i = i - 1;
    6  }
    7  r = 1;
    8  k = 0;
    9  i = m - 1;
    10 j = m - 1 + hamming;
    11 while (j >= 0) {
    12   r = r * (k ? x : r);
    13   k = k xor d[i];
    14   i = i - (k ? 0 : 1);
    15   j = j - 1;
       }
Fig. 6. Modular Exponentiation in Manifest Form
7.1 Manifest Security Theorem
Armed with the definitions of sound manifest declassifiers we can now state the theorem which is the key to the way we verify side-channel declassification.

Theorem 1. Given a program P = D; Q, with D a sound manifest declassifier and Q transcript secure, then P is transcript secure modulo D.

This theorem helps us decompose and simplify the work of verifying that a program in manifest form is secure. First, showing that Q is transcript secure is straightforward, as explained in Section 3. Verifying that D is a sound manifest declassifier, which might seem like a daunting task given the definition, is actually something that is within the reach of current automatic tools for model checking. We apply the same techniques of reifying the side channel and self composition to the problem of verifying sound manifest declassifiers. When doing so we have been able to verify that our implementation of the hamming weight computation in Figure 3 is indeed a sound manifest declassifier, thereby establishing the security of the modular exponentiation algorithm in Figure 6. We have had the same success² with all the algorithms presented in [CMCJ04].
8 Related Work
The literature on programming language techniques for information flow security is extensive. Sabelfeld and Myers' survey [SM03], although some seven years old, remains the standard reference in the field. It is notable that almost all of
² Using Blast version 2.5.
the work in the area has ignored timing channels. However, any automated security checking that does not model timing will accept a program which leaks information through timing, no matter how blatant the leak is. Agat [Aga00] showed how a type system for secure information flow could be extended to also transform out certain timing leaks by padding the branches of appropriate conditionals. Köpf and Mantel give some improvements to Agat's approach based on code unification [KM06]. In a related line, Sabelfeld and Sands considered timing channels arising from concurrency, and made use of Agat's approach [SS00]. Approximate and probabilistic variants of these ideas have also emerged [PHSW07, DHW08]. The problem with padding techniques in general is that they do not change the fundamental structure of a leaky algorithm, but use the "worst-case principle" [AS01] to make all computation paths equally slow. For cryptographic algorithms this approach is probably not acceptable from a performance perspective. Hedin and Sands [HS05, Hed08] consider applying Agat's approach in the context of Java bytecode. One notable contribution is the use of a family of time models which can abstract timing behaviour at various levels of accuracy, for example to model simple cache behaviour or instructions whose time depends on runtime values (e.g. array allocation). The definitions and analysis are parameterised over the time models. The program counter security model [MPSW05] can be seen as an instance of this parameterised model. More specific to the question of declassification and side channels, as we mentioned above, [DHW08] estimates the capacity of a side channel – something which can be used to determine whether the leak is acceptable – and proposes an approximate version of Agat's padding technique. Giacobazzi and Mastroeni [GM05] recently extended the abstract noninterference approach to characterising what information is leaked to include simple timing channels. Their theoretical framework could be used to extend the present work. In particular they conclude with a theoretical condition which, in principle, could be used to verify manifest declassifiers. Köpf and Basin's study of timing channels in synchronous systems [KB06] is the most closely related to the current paper. They study a PER model for expressing declassification properties in a timed setting – an abstract counterpart to the more programmer-oriented delimited release approach used here. They also study verification for deterministic systems by the use of reachability in a product automaton – somewhat analogous to our use of self composition. Finally, their examples include leaks of hamming weight in a finite-field exponentiation circuit.
9 Conclusions and Further Work
Reusing theoretical concepts and practical verification tools, we have introduced a notion of side channel declassification and shown how such properties can be verified by a combination of simple transformations and application of off-the-shelf software model checking tools. We have also introduced a new method to specify side-channel declassification, manifest form, a form which makes the security property explicit in the program structure, and makes verification simpler.
We have applied these techniques to verify the relative security of a number of cryptographic algorithms. It remains to investigate how to convert a given program into manifest form. Ideas from [MPSW05, LZ05] may be adaptable to obtain the best of both worlds: a program without the overhead of manifest form, but satisfying the same side-channel declassification property.
References

[Aga00] Agat, J.: Transforming out timing leaks. In: Proc. ACM Symp. on Principles of Programming Languages, January 2000, pp. 40–53 (2000)
[AS01] Agat, J., Sands, D.: On confidentiality and algorithms. In: Proc. IEEE Symp. on Security and Privacy, May 2001, pp. 64–77 (2001)
[BB05] Brumley, D., Boneh, D.: Remote timing attacks are practical. Journal of Computer and Telecommunications Networking 48, 701–716 (2005)
[BDR04] Barthe, G., D'Argenio, P., Rezk, T.: Secure information flow by self-composition. In: Proceedings of CSFW 2004, June 2004, pp. 100–114. IEEE Press, Los Alamitos (2004)
[CMCJ04] Chevallier-Mames, B., Ciet, M., Joye, M.: Low-cost solutions for preventing simple side-channel analysis: Side-channel atomicity. IEEE Transactions on Computers 53(6), 760–768 (2004)
[Cor99] Coron, J.-S.: Resistance against differential power analysis for elliptic curve cryptosystems. In: Koç, Ç.K., Paar, C. (eds.) Cryptographic Hardware and Embedded Systems, pp. 292–302 (1999)
[DHS03] Darvas, A., Hähnle, R., Sands, D.: A theorem proving approach to analysis of secure information flow. In: Proc. Workshop on Issues in the Theory of Security (April 2003)
[DHW08] Di Pierro, A., Hankin, C., Wiklicky, H.: Quantifying timing leaks and cost optimisation. In: Chen, L., Ryan, M.D., Wang, G. (eds.) ICICS 2008. LNCS, vol. 5308, pp. 81–96. Springer, Heidelberg (2008)
[GM05] Giacobazzi, R., Mastroeni, I.: Timed abstract non-interference. In: Pettersson, P., Yi, W. (eds.) FORMATS 2005. LNCS, vol. 3829, pp. 289–303. Springer, Heidelberg (2005)
[GR06] Gulavani, B.S., Rajamani, S.K.: Counterexample driven refinement for abstract interpretation. In: Hermanns, H., Palsberg, J. (eds.) TACAS 2006. LNCS, vol. 3920, pp. 474–488. Springer, Heidelberg (2006)
[Hed08] Hedin, D.: Program analysis issues in language based security. PhD thesis, Department of Computer Science and Engineering, Chalmers University of Technology (2008)
[HJMS03] Henzinger, T.A., Jhala, R., Majumdar, R., Sutre, G.: Software verification with Blast. In: Ball, T., Rajamani, S.K. (eds.) SPIN 2003. LNCS, vol. 2648, pp. 235–239. Springer, Heidelberg (2003)
[HS05] Hedin, D., Sands, D.: Timing aware information flow security for a JavaCard-like bytecode. In: First Workshop on Bytecode Semantics, Verification, Analysis and Transformation (BYTECODE 2005). Electronic Notes in Theoretical Computer Science (2005) (to appear)
[KB06] Köpf, B., Basin, D.A.: Timing-sensitive information flow analysis for synchronous systems. In: Gollmann, D., Meier, J., Sabelfeld, A. (eds.) ESORICS 2006. LNCS, vol. 4189, pp. 243–262. Springer, Heidelberg (2006)
[KJJ99] Kocher, P., Jaffe, J., Jun, B.: Differential power analysis. In: Wiener, M. (ed.) CRYPTO 1999. LNCS, vol. 1666, pp. 388–397. Springer, Heidelberg (1999)
[KM06] Köpf, B., Mantel, H.: Eliminating implicit information leaks by transformational typing and unification. In: Dimitrakos, T., Martinelli, F., Ryan, P.Y.A., Schneider, S. (eds.) FAST 2005. LNCS, vol. 3866, pp. 47–62. Springer, Heidelberg (2006)
[Koc96] Kocher, P.C.: Timing attacks on implementations of Diffie-Hellman, RSA, DSS, and other systems. In: Koblitz, N. (ed.) CRYPTO 1996. LNCS, vol. 1109, pp. 104–113. Springer, Heidelberg (1996)
[Lam73] Lampson, B.W.: A note on the confinement problem. Comm. of the ACM 16(10), 613–615 (1973)
[LZ05] Li, P., Zdancewic, S.: Downgrading policies and relaxed noninterference. In: Proc. ACM Symp. on Principles of Programming Languages, January 2005, pp. 158–170 (2005)
[MDS99] Messergers, T.S., Dabbish, E.A., Sloan, R.H.: Power analysis attacks on modular exponentiation in smartcards. In: Koç, Ç.K., Paar, C. (eds.) CHES 1999. LNCS, vol. 1717, pp. 144–157. Springer, Heidelberg (1999)
[MPSW05] Molnar, D., Piotrowski, M., Schultz, D., Wagner, D.: The program counter security model: Automatic detection and removal of control-flow side channel attacks. In: Won, D.H., Kim, S. (eds.) ICISC 2005. LNCS, vol. 3935, pp. 156–168. Springer, Heidelberg (2006)
[MS00] Mayer-Sommer, R.: Smartly analyzing the simplicity and the power of simple power analysis on smartcards. In: Paar, C., Koç, Ç.K. (eds.) CHES 2000. LNCS, vol. 1965, pp. 78–92. Springer, Heidelberg (2000)
[PB02] Puschner, P., Burns, A.: Writing temporally predictable code. In: 7th IEEE International Workshop on Object-Oriented Real-Time Dependable Systems (2002)
[PHSW07] Di Pierro, A., Hankin, C., Siveroni, I., Wiklicky, H.: Tempus fugit: How to plug it. J. Log. Algebr. Program. 72(2), 173–190 (2007)
[SM03] Sabelfeld, A., Myers, A.C.: Language-based information-flow security. IEEE J. Selected Areas in Communications 21(1), 5–19 (2003)
[SM04] Sabelfeld, A., Myers, A.C.: A model for delimited information release. In: Futatsugi, K., Mizoguchi, F., Yonezaki, N. (eds.) ISSS 2003. LNCS, vol. 3233, pp. 174–191. Springer, Heidelberg (2004)
[SS00] Sabelfeld, A., Sands, D.: Probabilistic noninterference for multi-threaded programs. In: Proc. IEEE Computer Security Foundations Workshop, July 2000, pp. 200–214 (2000)
[SS05] Sabelfeld, A., Sands, D.: Dimensions and principles of declassification. In: Proceedings of the 18th IEEE Computer Security Foundations Workshop, Cambridge, England, pp. 255–269. IEEE Computer Society Press, Los Alamitos (2005)
[SS09] Svenningsson, J., Sands, D.: Specification and verification of side channel declassification. Technical Report 2009:13, Department of Computer Science and Engineering, Chalmers University of Technology and University of Gothenburg (December 2009). arXiv:0912.2952 (cs.CR)
[TA05] Terauchi, T., Aiken, A.: Secure information flow as a safety problem. In: Proceedings of the 12th International Static Analysis Symposium, pp. 352–367 (2005)
Secure Information Flow for Distributed Systems

Rafael Alpízar and Geoffrey Smith

School of Computing and Information Sciences, Florida International University, Miami, FL 33199, USA
{ralpi001,smithg}@cis.fiu.edu
Abstract. We present an abstract language for distributed systems of processes with local memory and private communication channels. Communication between processes is done via messaging. The language has high and low data and is limited only by the Denning restrictions; this is a significant relaxation as compared to previous languages for concurrency. We argue that distributed systems in the abstract language are observationally deterministic, and use this result to show that well-typed systems satisfy termination-insensitive noninterference; our proof is based on concepts of stripping and fast simulation, which are a valuable alternative to bisimulation. We then informally explore approaches to implement this language concretely, in the context of a wireless network where there is a risk of eavesdropping of network messages. We consider how asymmetric cryptography could be used to realize the confidentiality of the abstract language.
1 Introduction

In this paper we craft a high-level imperative language for distributed systems. Our goal is to provide the programmer with a simple and safe abstract language, with a built-in API to handle communications between processes. The abstract language should hide all messy communication protocols that control the data exchange between processes and all the cryptographic operations that ensure the confidentiality of the data transmitted. We also want to classify variables into different security levels, and we want a secure information flow property that says that a distributed system cannot leak information from higher to lower levels. We would like our language to have a clean, familiar syntax and to have the maximum power of expression that we can give it. A distributed system, then, is a group of programs executing in a group of nodes such that there is at least one program per node. An executing program with its local data is a process. As our processes may reside in separate nodes, they should have their own private memories and be able to send and receive messages from other processes. We would like to classify data according to a security lattice, which in our case will be limited to H and L, and we would like to maintain the ability to transmit and receive H and L values. To do this we will need separate channels for each classification; otherwise we would only be able to receive messages using H variables, as demonstrated by adversary Δ1 of Figure 1. In this attack, Process 1 sends a H variable on channel a, but Process 2 receives it into a L variable. Because of subsumption, H channels can transmit H or L data but the receiving variable must be typed H, while L channels can only transmit L data. Therefore our communication channels must have a security classification.
  Δ1:  Process 1: send(a, h1)
       Process 2: receive(a, l2)

  Δ2:  Process 1: if (h1 is even) then run a long time; send(a, 1)
       Process 2: if (h2 is odd) then run a long time; send(a, 0)
       Process 3: run a short time; receive(a, l3)

  Δ3:  Process 1: if (h1 is even) then run a long time; send(a, 1)
       Process 2: run a short time; if (channel a has data) then l2 := 0 else l2 := 1

Fig. 1. Attacks (first wave)
What else do they need? We shall see that channels also need a specific source process and a specific destination process. The reason is exemplified by distributed system Δ2 of Figure 1. In this attack, Process 1 and Process 2 both send on channel a. If we assume that h1 and h2 are initialized to the same secret value, then the last bit of this value is leaked into l3 (assuming sufficiently "fair" scheduling). Our type system prevents attacks Δ1 and Δ2 by giving each channel a type of the form τ ch_{i,j} that specifies the security level (τ) and the sending (i) and receiving (j) processes. Also, processes cannot be allowed to test whether a channel has data, because this ability would also render the language unsound by allowing timing channels. This is illustrated by distributed system Δ3 of Figure 1. This attack leaks the last bit of h1 to l2. When h1 is even, Process 1 takes a long time to send its message, so when Process 2 checks, the channel will be empty. Therefore we do not allow processes to make such tests. Because a process trying to receive from a channel must block until a message is available, the programmer has to be careful to ensure that for each receive, there is a corresponding send. The converse, however, is not required since a process may send a message that is never received. Indeed, processes should not be required to wait on send and should be able to send multiple times on the same channel. To handle this, we will need an unbounded buffer for each channel, where sent messages wait to be received. Once we have some idea of what the language should be like, we would like to know that it is safe and argue a noninterference (NI) property on it. But we would like to restrict processes as little as possible. To this end, we explore the possibility of typing processes using only the classic Denning restrictions [1], which disallow an assignment l := e to a L variable if either e contains H variables or the assignment is within an if or while command whose guard contains H variables. This would be in sharp contrast to prior works in secure information flow for concurrent programs (such as [2,3,4,5]), which have required severe restrictions to prevent H variables from affecting the ordering of assignments to shared memory. Our language, being based on message passing rather than shared memory, is much less dependent on timing and the behavior of the scheduler. Indeed, it turns out that our distributed systems are observationally deterministic, which means that, despite our use of a purely nondeterministic
process scheduler, the final result of programs is uniquely determined by the initial memories. Next we would like to explore what it would take to implement our language in a concrete setting. We wish our operational model to be as “close to the ground” as we can. We would like something like a wireless LAN where eavesdroppers can see all communications; what would it take to implement a safe language there? Obviously secret data cannot be transmitted in a wireless LAN with any expectation of confidentiality, hence, we need cryptography. What kind? Asymmetric cryptography seems to be the appropriate style for our setting. The paper is organized as follows: Section 2 formally defines the abstract language for distributed systems and argues the key noninterference theorem; our proof is not based on bisimulation, however, but instead on concepts of stripping and fast simulation as in [6]. In Section 3 we informally explore what it would take to implement the abstract language in a concrete setting over a public network, and we work out our adversarial model. Section 4 presents related work and Section 5 concludes the paper.
2 An Abstract Language for Distributed Systems

This section defines an abstract language for distributed systems. The language syntax (Figure 2) is that of the simple imperative language except that processes are added to the language and they may send or receive messages from other processes. Note that pseudocommand done is used to denote a terminated command. It may not be used as a subcommand of another command, except that for technical reasons we do allow the branches of an if command to be done. Allowing this also has the minor practical benefit of letting us code "if e then c" by "if e then c else done".

  (phrases)      p ::= e | c
  (expressions)  e ::= x | n | e1 + e2 | ...
  (variables)    x, y, z, ...
  (commands)     c ::= done | skip | x := e | send(a, e) | receive(a, x) |
                       if e then c1 else c2 | while e do c | c1 ; c2
  (channel ids)  a, b, ...
  (process ids)  i, j, ...

Fig. 2. Abstract Language Syntax
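For the illustrative sketches that follow in this section, one lightweight way to encode this syntax is as nested Python tuples; this representation is our own choice for the examples and is not part of the paper.

```python
# Commands as tagged tuples; expressions are variable names, integer
# literals, or ("+", e1, e2).  Channel and process identifiers are strings.
DONE = ("done",)
SKIP = ("skip",)

def assign(x, e):     return ("assign", x, e)
def send(a, e):       return ("send", a, e)
def receive(a, x):    return ("receive", a, x)
def ifte(e, c1, c2):  return ("if", e, c1, c2)
def while_(e, c):     return ("while", e, c)
def seq(c1, c2):      return ("seq", c1, c2)

# The two-process system of attack Δ1: process 1 sends h1 on channel a,
# process 2 receives it into l2.
delta1 = {
    "1": send("a", "h1"),
    "2": receive("a", "l2"),
}
print(delta1)
```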
Semantics of processes (−→): Each process can refer to its local memory μ, which maps variables to integers, and to the global network memory Φ, which maps channel identifiers to lists of messages currently waiting to be received; we start execution with an empty network memory Φ0 such that Φ0(a) = [ ], for all a. Thus we specify the semantics of a process via judgments of the form (c, μ, Φ) −→ (c′, μ′, Φ′). We use a standard small-step semantics with the addition of rules for the send and receive commands; the rules are shown in Figure 3. In the rules, we write μ(e) to denote the value of expression e in memory μ. The rule for send(a, e) updates the network memory by adding the value of e to the end of the list of messages waiting on channel a. The rule for receive(a, x) requires that there be at least one message waiting to be received on channel a; it removes the first such message and assigns it to x.
Semantics of distributed systems (=⇒): We model a distributed system as a function Δ that maps process identifiers to pairs (c, μ) consisting of a command and a local memory. A global configuration then has the form (Δ, Φ), and rule global_s defines the purely nondeterministic behavior of the process scheduler, which at each step can select any process that is able to make a transition.

[Fig. 3 (Abstract Language Semantics): e.g. rule update_s: if x ∈ dom(μ) then (x := e, μ, Φ) −→ (done, μ[x := μ(e)], Φ); the remaining rules follow the description above.]
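A minimal executable reading of these rules, in the tuple encoding sketched after Fig. 2, might look as follows. Expressions are restricted to variables, literals and "+", zero is treated as false, and the purely nondeterministic scheduler is resolved by random choice; all of these are simplifications of our own, not part of the paper's formal semantics.

```python
import random

def eval_exp(e, mem):
    if isinstance(e, int): return e
    if isinstance(e, str): return mem[e]
    _, e1, e2 = e                      # ("+", e1, e2)
    return eval_exp(e1, mem) + eval_exp(e2, mem)

def step(c, mem, net):
    """One small step of a single process; returns None if blocked or done."""
    tag = c[0]
    if tag == "skip":
        return ("done",), mem, net
    if tag == "assign":
        _, x, e = c
        return ("done",), {**mem, x: eval_exp(e, mem)}, net
    if tag == "send":                  # append the value to the channel buffer
        _, a, e = c
        return ("done",), mem, {**net, a: net.get(a, []) + [eval_exp(e, mem)]}
    if tag == "receive":               # only enabled when a message is waiting
        _, a, x = c
        if not net.get(a):
            return None
        m, rest = net[a][0], net[a][1:]
        return ("done",), {**mem, x: m}, {**net, a: rest}
    if tag == "seq":
        _, c1, c2 = c
        if c1 == ("done",):
            return c2, mem, net
        r = step(c1, mem, net)
        if r is None: return None
        c1p, memp, netp = r
        return ("seq", c1p, c2), memp, netp
    if tag == "if":
        _, e, c1, c2 = c
        return (c1 if eval_exp(e, mem) != 0 else c2), mem, net
    if tag == "while":
        _, e, c1 = c
        return (("seq", c1, c) if eval_exp(e, mem) != 0 else ("done",)), mem, net
    return None                        # "done": no step possible

def run(system, mems, net=None):
    """The scheduler: at each step pick any process able to move."""
    net = dict(net or {})
    while True:
        enabled = [i for i, c in system.items()
                   if c != ("done",) and step(c, mems[i], net) is not None]
        if not enabled:
            return system, mems, net
        i = random.choice(enabled)
        system[i], mems[i], net = step(system[i], mems[i], net)

sys1 = {"1": ("send", "a", "h1"), "2": ("receive", "a", "l2")}
mems = {"1": {"h1": 7}, "2": {"l2": 0}}
print(run(sys1, mems))                 # l2 ends up holding 7
```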
The Type System: Figure 4 shows the type system of the abstract language; its rules use an identifier typing Γ that maps identifiers to types. The typing rules enforce only the Denning restrictions [1]; in particular notice that we allow the guards of while loops to be H. Channels are restricted to carrying messages of one security classification from a specific process i to a specific process j and accordingly are typed Γ (a) = τ chi,j where τ is the security classification of the data that can travel in the channel, i is the source process and j is the destination process. So to enable full communications between processes i and j we need four channels with types H chi,j , H chj,i , L chi,j , and L chj,i . In a typing judgment Γ, i c : τ cmd the process identifier i specifies which process command c belongs to; this is used to enforce the rule that only process i can send on a channel with type τ chi,j or receive on a channel with type τ chj,i . We therefore say that a distributed system Δ is well typed if Δ(i) = (c, μ) implies that Γ, i c : τ cmd, for some τ . Language Soundness: We now argue soundness properties for our language and type system, starting with some standard properties, whose proofs are straightforward. Lemma 1 (Simple Security). If Γ, i e : τ , then e contains only variables of level τ or lower.
  (security levels)  τ ::= H | L
  (phrase types)     ρ ::= τ | τ var | τ cmd | τ ch_{i,j}

  (int_t)       Γ, i ⊢ n : L
  (rval_t)      if Γ(x) = τ var then Γ, i ⊢ x : τ
  (plus_t)      if Γ, i ⊢ e1 : τ and Γ, i ⊢ e2 : τ then Γ, i ⊢ e1 + e2 : τ
  (skip_t)      Γ, i ⊢ skip : H cmd
  (terminal_t)  Γ, i ⊢ done : H cmd
  (update_t)    if Γ(x) = τ var and Γ, i ⊢ e : τ then Γ, i ⊢ x := e : τ cmd
  (send_t)      if Γ(a) = τ ch_{i,j} and Γ, i ⊢ e : τ then Γ, i ⊢ send(a, e) : τ cmd
  (receive_t)   if Γ(a) = τ ch_{j,i} and Γ(x) = τ var then Γ, i ⊢ receive(a, x) : τ cmd
  (if_t)        if Γ, i ⊢ e : τ, Γ, i ⊢ c1 : τ cmd and Γ, i ⊢ c2 : τ cmd then Γ, i ⊢ if e then c1 else c2 : τ cmd
  (while_t)     if Γ, i ⊢ e : τ and Γ, i ⊢ c1 : τ cmd then Γ, i ⊢ while e do c1 : τ cmd
  (compose_t)   if Γ, i ⊢ c1 : τ cmd and Γ, i ⊢ c2 : τ cmd then Γ, i ⊢ c1 ; c2 : τ cmd
  (base)        L ⊆ H
  (reflex)      ρ ⊆ ρ
  (trans)       if ρ1 ⊆ ρ2 and ρ2 ⊆ ρ3 then ρ1 ⊆ ρ3
  (cmd)         if τ ⊆ τ′ then τ′ cmd ⊆ τ cmd
  (subsump)     if Γ, i ⊢ p : ρ1 and ρ1 ⊆ ρ2 then Γ, i ⊢ p : ρ2

Fig. 4. Abstract Language Type System
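The Denning-style discipline of Fig. 4 can be prototyped as a small checker over the tuple encoding used earlier. The sketch below is one possible rendering of our own: levels are the strings "L" and "H", and for a command it computes the lowest level it writes to (its best cmd type), insisting that guards of if and while do not exceed that level and that channel endpoints match.

```python
# A small checker for the Denning restrictions of Fig. 4.
# Levels: "L" below "H".  cmd_level(c) returns the lowest level that c
# assigns to or communicates on; guards must stay at or below that level.
LEVELS = {"L": 0, "H": 1}

def exp_level(e, gamma):
    if isinstance(e, int): return "L"                     # rule int_t
    if isinstance(e, str): return gamma[e]                # rule rval_t
    _, e1, e2 = e                                         # rule plus_t (via subsumption)
    return max(exp_level(e1, gamma), exp_level(e2, gamma), key=LEVELS.get)

def cmd_level(c, gamma, chans, pid):
    """gamma: var -> level; chans: channel -> (level, sender, receiver)."""
    tag = c[0]
    if tag in ("done", "skip"):
        return "H"
    if tag == "assign":
        _, x, e = c
        assert LEVELS[exp_level(e, gamma)] <= LEVELS[gamma[x]], "explicit flow"
        return gamma[x]
    if tag == "send":                                     # only the declared sender may send
        _, a, e = c
        lvl, src, _ = chans[a]
        assert src == pid and LEVELS[exp_level(e, gamma)] <= LEVELS[lvl]
        return lvl
    if tag == "receive":                                  # receiving variable matches channel level
        _, a, x = c
        lvl, _, dst = chans[a]
        assert dst == pid and lvl == gamma[x]
        return lvl
    if tag in ("if", "while"):
        guard = exp_level(c[1], gamma)
        body = min((cmd_level(b, gamma, chans, pid) for b in c[2:]),
                   key=LEVELS.get)
        assert LEVELS[guard] <= LEVELS[body], "implicit flow"
        return body
    if tag == "seq":
        return min((cmd_level(b, gamma, chans, pid) for b in c[1:]),
                   key=LEVELS.get)
    raise ValueError(c)

gamma = {"h1": "H", "l2": "L"}
chans = {"a": ("H", "1", "2")}
print(cmd_level(("send", "a", "h1"), gamma, chans, "1"))   # accepted, type "H" cmd
# cmd_level(("receive", "a", "l2"), gamma, chans, "2") would fail: attack Δ1
```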
Lemma 2 (Confinement). If Γ, i c : τ cmd, then c assigns only to variables of level τ or higher, and sends or receives only on channels of level τ or higher. Lemma 3 (Subject Reduction). If Γ, i c : τ cmd and (c, μ, Φ) −→ (c , μ , Φ ), then Γ, i c : τ cmd. We now turn to more interesting properties. We begin by defining terminal global configurations; these are simply configurations in which all processes have terminated: Definition 1. A global configuration (Δ, Φ) is terminal if for all i, Δ(i) = (done, μi ) for some μi . Notice that we do not require that Φ be an empty network memory—we allow it to contain unread messages. Now we argue that, in spite of the nondeterminism of rule globals , our distributed programs are observationally deterministic [7], in the sense that each program can reach at most one terminal configuration. Theorem 1 (Observational Determinism). Suppose that Δ is well typed and that (Δ, Φ)=⇒∗ (Δ1 , Φ1 ) and (Δ, Φ)=⇒∗ (Δ2 , Φ2 ), where (Δ1 , Φ1 ) and (Δ2 , Φ2 ) are terminal configurations. Then (Δ1 , Φ1 ) = (Δ2 , Φ2 ).
Proof. We begin by observing that the behavior of each process i is completely independent of the rest of the distributed system, with the sole exception of its receive commands. Thus if we specify the sequence of messages [m1, m2, . . . , mn] that process i receives during its execution, then process i's behavior is completely determined. (Notice that the sequence [m1, m2, . . . , mn] merges together all of the messages that process i receives on any of its input channels.) We now argue by contradiction. Suppose that we can run from (Δ, Φ) to two different terminal configurations, (Δ1, Φ1) and (Δ2, Φ2). By the discussion above, it must be that some process receives a different sequence of messages in the two runs. So consider the first place in the second run (Δ, Φ) =⇒∗ (Δ2, Φ2) where a process i receives a different message than it does in the first run (Δ, Φ) =⇒∗ (Δ1, Φ1). But for this to happen, there must be another process j that earlier sent a different message to i than it does in the first run. (Note that this depends on the fact that, in a well-typed distributed system, any channel can be sent to by just one process and received from by just one process.) But for j to send a different message than in the first run, it must itself have received a different message earlier. This contradicts the fact that we chose the first place in the second run where a different message was received.

We now wish to argue that well-typed distributed systems satisfy a termination-insensitive noninterference property. (We certainly need a termination-insensitive property since, under the Denning restrictions, H variables can affect termination.)

Definition 2. Two memories μ and ν are L-equivalent, written μ ∼L ν, if they agree on the values of all L variables. Similarly, two network memories Φ and Φ′ are L-equivalent, also written Φ ∼L Φ′, if they agree on the values of all L channels.

Now we wish to argue that if we run a distributed system twice, using L-equivalent initial memories for each process, then, assuming that both runs terminate successfully, we must reach L-equivalent final memories for each process. A standard way to prove such a result is by establishing some sort of low bisimulation between the two runs. However this does not seem to be possible for our abstract language, because changing the values of H variables can affect when receive commands are able to be executed. Figure 5 shows an example that illustrates the difficulty.

  Process 1: if h1 then send(a_{H,1,2}, 1) else done; l1 := 2; send(a_{H,1,2}, 2)
  Process 2: receive(a_{H,1,2}, h2); l2 := 3

Fig. 5. A difficult example for low bisimulation

Suppose we run this program twice, using two L-equivalent memories for Process 1, namely [h1 = 1, l1 = 0] and [h1 = 0, l1 = 0], and the same memory for Process 2, [h2 = 0, l2 = 0]. Under the first memory, Process 1 immediately sends on channel a_{H,1,2}, which then allows Process 2 to do its receive and then to assign to l2 before Process 1 assigns to l1. But under the second memory, Process 1 does not send on channel a_{H,1,2} until after assigning to l1, which means that the assignment to l2 must come after the assignment to l1. Thus
the two runs are not low bisimilar. (Notice that the two runs are fine with respect to noninterference, however—in both cases we end up with l1 = 2 and l2 = 3.) Because of this difficulty, we develop a different approach to noninterference, via the concepts of stripping and fast simulation, which were first used in [6]. Intuitively, the processes in Figure 5 contain H commands that are irrelevant to the L variables, except that they can cause delays. If we strip them out, we are left with

  Process 1: l1 := 2
  Process 2: l2 := 3
This shows what will happen to the L variables if the system terminates. We therefore introduce a stripping operation that eliminates all subcommands of type H cmd, so that the delays that such subcommands might have caused are eliminated. More precisely, we have the following definition:

Definition 3. Let c be a well-typed command. We define ⌊c⌋ = done if c has type H cmd; otherwise, define ⌊c⌋ by
  – ⌊x := e⌋ = x := e
  – ⌊send(a, e)⌋ = send(a, e)
  – ⌊receive(a, x)⌋ = receive(a, x)
  – ⌊if e then c1 else c2⌋ = if e then ⌊c1⌋ else ⌊c2⌋
  – ⌊while e do c1⌋ = while e do ⌊c1⌋
  – ⌊c1 ; c2⌋ = ⌊c2⌋ if c1 : H cmd; ⌊c1⌋ if c2 : H cmd; ⌊c1⌋ ; ⌊c2⌋ otherwise
Also, we define ⌊μ⌋ to be the result of deleting all H variables from μ, and ⌊Φ⌋ to be the result of deleting all H channels from Φ. We extend ⌊·⌋ to well-typed global configurations by ⌊(Δ, Φ)⌋ = (⌊Δ⌋, ⌊Φ⌋), where if Δ(i) = (c, μ), then ⌊Δ⌋(i) = (⌊c⌋, ⌊μ⌋). We remark that stripping as defined in [6] replaces subcommands of type H cmd with skip; in contrast our new definition here aggressively eliminates such subcommands. Note also that ⌊μ⌋ ∼L μ and ⌊Φ⌋ ∼L Φ. Now we have a simple lemma:

Lemma 4. For any c, ⌊c⌋ contains only L variables and channels.

Proof. By induction on the structure of c. If c has type H cmd, then ⌊c⌋ = done, which (vacuously) contains only L variables and channels. If c does not have type H cmd, then consider the form of c. If c is x := e, then ⌊c⌋ = x := e. Since c does not have type H cmd, then by rule update_t we must have that x is a L variable and e : L, which implies by Simple Security that e contains only L variables. The cases of send(a, e) and receive(a, x) are similar. If c is while e do c1, then ⌊c⌋ = while e do ⌊c1⌋. By rule while_t, e : L, which implies by Simple Security that e contains only L variables and channels. And, by induction, ⌊c1⌋ contains only L variables and channels. The cases of if e then c1 else c2 and c1 ; c2 are similar.
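Continuing the running tuple encoding, the stripping operation of Definition 3 can be prototyped as follows. Here the "has type H cmd" test is approximated by a syntactic check that the command writes only H variables and uses only H channels, which is adequate for the well-typed commands of this sketch but is an assumption of our own rather than the paper's type-based formulation.

```python
# Sketch of the stripping operation of Definition 3 over the tuple encoding.
def is_high(c, gamma, chans):
    """Approximate 'c has type H cmd': c only writes H variables / H channels."""
    tag = c[0]
    if tag in ("done", "skip"):    return True
    if tag == "assign":            return gamma[c[1]] == "H"
    if tag in ("send", "receive"): return chans[c[1]][0] == "H"
    if tag in ("if", "while"):     return all(is_high(b, gamma, chans) for b in c[2:])
    if tag == "seq":               return all(is_high(b, gamma, chans) for b in c[1:])
    return False

def strip(c, gamma, chans):
    if is_high(c, gamma, chans):
        return ("done",)
    tag = c[0]
    if tag == "if":
        _, e, c1, c2 = c
        return ("if", e, strip(c1, gamma, chans), strip(c2, gamma, chans))
    if tag == "while":
        _, e, c1 = c
        return ("while", e, strip(c1, gamma, chans))
    if tag == "seq":
        _, c1, c2 = c
        if is_high(c1, gamma, chans): return strip(c2, gamma, chans)
        if is_high(c2, gamma, chans): return strip(c1, gamma, chans)
        return ("seq", strip(c1, gamma, chans), strip(c2, gamma, chans))
    return c                           # assign / send / receive at level L

gamma = {"h1": "H", "l1": "L"}
chans = {"aH": ("H", "1", "2")}
# Process 1 of Figure 5: if h1 then send else done; l1 := 2; send
c = ("seq", ("if", "h1", ("send", "aH", 1), ("done",)),
            ("seq", ("assign", "l1", 2), ("send", "aH", 2)))
print(strip(c, gamma, chans))          # -> ("assign", "l1", 2), as in the text
```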
Definition 4. A binary relation R on global configurations is a fast low simulation with respect to =⇒ if whenever (Δ1, Φ1) R (Δ2, Φ2) we have
1. (Δ1, Φ1) and (Δ2, Φ2) agree on the values of L variables and channels, and
2. if (Δ1, Φ1) =⇒ (Δ1′, Φ1′), then either (Δ1′, Φ1′) R (Δ2, Φ2) or there exists (Δ2′, Φ2′) such that (Δ2, Φ2) =⇒ (Δ2′, Φ2′) and (Δ1′, Φ1′) R (Δ2′, Φ2′).

That is, (Δ2, Φ2) can match, in zero or one steps, any move from (Δ1, Φ1). In pictures:

  (Δ1, Φ1)   -- R --   (Δ2, Φ2)
      ⇓                    ⇓ (zero or one steps)
  (Δ1′, Φ1′) -- R --   (Δ2′, Φ2′)
Viewing our stripping function · as a relation, we write (Δ1 , Φ1 ) · (Δ2 , Φ2 ) if (Δ1 , Φ1 ) = (Δ2 , Φ2 ). Here is the key theorem about the stripping relation · : Theorem 2. · is a fast low simulation with respect to =⇒. Proof. First, it is immediate from the definition of · that (Δ, Φ) and (Δ, Φ) agree on the values of L variables and channels. Next we must show that any move from (Δ, Φ) can be matched by (Δ, Φ) in zero or one steps. Suppose that the move from (Δ, Φ) involves a step on process i. Then we must have Δ(i) = (c, μ), (c, μ, Φ)−→(c , μ , Φ ), and Δ = Δ[i := (c , μ )]. To show that (Δ, Φ) can match this move in zero or one steps, note that (Δ, Φ) = (Δ , Φ ) and that Δ (i) = (c , μ ). Hence it suffices to show that either (c , μ , Φ ) = (c , μ , Φ ) or else
(c , μ , Φ )−→(c , μ , Φ ).
We argue this by induction on the structure of c. If c has type H cmd, then c = done. Also, by Confinement we have μ ∼L μ and Φ ∼L Φ , which implies that μ = μ , and Φ = Φ . And by Subject Reduction we have c : H cmd, which implies that c = done. So the move (c, μ, Φ)−→(c , μ , Φ ) is matched in zero steps by (done, μ , Φ ). If c does not have type H cmd, then consider the possible forms of c: 1. c = x := e. Here c = c. By updatet , x : L var and e : L. So by Simple Security μ (e) = μ(e), which implies that μ [x := μ (e)] = μ[x := μ(e)] . Hence the move (c, μ, Φ)−→(done, μ[x := μ(e)], Φ) is matched by the move (c, μ , Φ )−→(done, μ [x := μ (e)], Φ ). 2. c = send(a, e). Here c = c. By sendt , a is a low channel and e : L. Hence Φ (a) = Φ(a) = [m1 , . . . , mk ]. Also, by Simple Security, μ (e) = μ(e). Hence the move (c, μ, Φ)−→(done, μ, Φ[a := [m1 , . . . , mk , μ(e)]) is matched by the move (c, μ , Φ )−→(done, μ , Φ [a := [m1 , . . . , mk , μ (e)]). 3. c = receive(a, x). Here c = c. By receivet , a is a low channel and x : L var. Hence Φ (a) = Φ(a) = [m1 , . . . , mk ], where k ≥ 1. Therefore, the move (c, μ, Φ)−→(done, μ[x := m1 ], Φ[a := [m2 , . . . , mk ]]) is matched by the move (c, μ , Φ )−→(done, μ [x := m1 ], Φ [a := [m2 , . . . , mk ]]).
4. c = if e then c1 else c2 . Here c = if e then c1 else c2 . By if t , e : L and by Simple Security μ(e) = μ (e). So if μ(e) = 0, then (c, μ, Φ)−→(c1 , μ, Φ) is matched by (c , μ , Φ )−→(c1 , μ , Φ ). The case when μ(e) = 0 is similar. 5. c = while e do c1 . Here c = while e do c1 . By whilet , e : L and c1 does not have type H cmd. By Simple Security, we have μ (e) = μ(e). So in case μ(e) = 0, then the move (c, μ, Φ)−→(c1 ; while e do c1 , μ, Φ) is matched by the move (c , μ , Φ )−→(c1 ; while e do c1 , μ , Φ ). (This uses the fact that c1 ; while e do c1 = c1 ; while e do c1 .) The case when μ(e) = 0 is similar. 6. c = c1 ; c2 . Here c = c1 ; c2 has three possible forms: c1 ; c2 , if neither c1 nor c2 has type H cmd (first subcase); c2 , if c1 : H cmd (second subcase); or c1 , if c2 : H cmd (third subcase). In the first subcase, neither c1 nor c2 has type H cmd. If the move from c is by the first rule composes , then (c1 , μ, Φ)−→(done, μ , Φ ). By induction, this move can be matched by (c1 , μ , Φ ) in zero or one steps. In fact it cannot be matched in zero steps—because c1 does not have type H cmd, it is easy to see = done. Hence we must have (c1 , μ , Φ )−→(done, μ , Φ ). that c1 It follows that (c1 ; c2 , μ, Φ)−→(c2 , μ , Φ ) is matched by (c1 ; c2 , μ , Φ ) −→(c2 , μ , Φ ). If instead the move from c is by the second rule composes , then (c1 , μ, Φ)−→(c1 , μ , Φ ), where c1 = done. By induction, (c1 , μ , Φ ) can match this move, going in zero or one steps to (c1 , μ , Φ ). Hence the move (c1 ; c2 , μ, Φ)−→(c1 ; c2 , μ , Φ ) can be matched by (c1 ; c2 , μ , Φ ), going in zero or one steps to (c1 ; c2 , μ , Φ ). A subtle point, however, is that even though c1 does not have type H cmd, it is still possible that c1 : H cmd. In this case we cannot match by moving to (c1 ; c2 , μ , Φ ), since here c1 ; c2 = c2 = c1 ; c2 = done; c2 . But here the match of the move from c1 must actually be to (done, μ , Φ ), and it must be in one step (rather than zero) since c1 = done. Hence in this case the move (c1 ; c2 , μ, Φ)−→(c1 ; c2 , μ , Φ ) is instead matched by the first rule composes : (c1 ; c2 , μ , Φ )−→(c2 , μ , Φ ). 1 In the second subcase we have c1 : H cmd so c1 ; c2 = c2 . If the move from c is by the first rule composes , then we must have (c1 , μ, Φ)−→(done, μ , Φ ), where by Confinement μ ∼L μ and Φ ∼L Φ . So the move (c1 ; c2 , μ, Φ)−→(c2 , μ , Φ ) is matched in zero steps by (c2 , μ , Φ ). If instead the move from c is by the second rule composes , then we must have (c1 , μ, Φ)−→(c1 , μ , Φ ), where by Confinement μ ∼L μ and Φ ∼L Φ , and by Subject Reduction c1 : H cmd. Hence the move (c1 ; c2 , μ, Φ)−→(c1 ; c2 , μ , Φ ) is again matched in zero steps by (c2 , μ , Φ ), since c1 ; c2 = c2 . Finally, the third subcase is similar to the first. Now we are ready to use these results in establishing our termination-insensitive noninterference result: Theorem 3. Let Δ1 be a well-typed distributed program and let Δ2 be formed by replacing each of the initial memories in Δ1 with a L-equivalent memory. Let Φ1 and 1
An example illustrating this situation is when c is (if 0 then l := 1 else h := 2); l := 3. This goes in one step to h := 2; l := 3, which strips to l := 3. In this case, ⌊c⌋ = (if 0 then l := 1 else done); l := 3, which goes in one step to l := 3.
Φ2 be L-equivalent channel memories. Suppose that (Δ1 , Φ1 ) and (Δ2 , Φ2 ) can both execute successfully, reaching terminal configurations (Δ1 , Φ1 ) and (Δ2 , Φ2 ) respectively. Then the corresponding local memories of Δ1 and Δ2 are L-equivalent, and Φ1 ∼L Φ2 . Proof. By definition, (Δ1 , Φ1 ) · (Δ1 , Φ1 ) and (Δ2 , Φ2 ) · (Δ2 , Φ2 ) . Hence, since · is a fast low simulation, we know that (Δ1 , Φ1 ) and (Δ2 , Φ2 ) can also execute successfully, and can reach terminal configurations whose local memories are L-equivalent to the corresponding memories of Δ1 and Δ2 . Moreover, by Theorem 1 we know that those terminal configurations are unique. But (Δ1 , Φ1 ) is identical to (Δ2 , Φ2 ) , since neither contains H variables or channels. Hence they must reach the same terminal configuration. It follows that the corresponding local memories of Δ1 and Δ2 are L-equivalent and that Φ1 ∼L Φ2 .
3 Towards a Concrete Implementation In this section we explore the implementation of the abstract language in a concrete setting over a public network, including the abilities of the external adversary (Eve), the exploration of the appropriate network environment, and the characteristics of the security tools used to protect the data while it is being transmitted. The abstract language’s requirement of private channels limits its applicability to secure settings but we would like to implement our language in a more practical setting where communications between programs happen via a public network. We would like a setting like a wireless LAN but then we are faced with significant challenges to ensure confidentiality. Eavesdroppers can easily see all communications; what would it take to implement our language for distributed systems in a wireless environment? Clearly, secret data cannot be transmitted in a wireless LAN with any expectation of confidentiality as any computer with a receiver can get all the data that has been transmitted. Hence our first requirement, we need cryptography to hide the information being transmitted. Asymmetric cryptography seems to be the appropriate style for our setting. Having decided on cryptography to hide the information that is being transmitted, we really want to encrypt only what is necessary to maintain the soundness of the distributed system, since encryption and decryption are expensive operations. In a wireless LAN, communications happen via electromagnetic signals which contain not only the message (payload) but also other information like source, destination, and data classification (header). Clearly we have to encrypt the payload but do we have to encrypt the header? In fact we do, for otherwise the language confidentiality would be lost as exemplified in Δ4 of Figure 6. The channel used in both processes is a high channel (encrypted payload) yet an eavesdropper can still discern the value of the least bit of the secret h1 by looking in the header of each packet for which process sends first; if Process 1 sends first then h1 is odd and the least bit is 1. Therefore our second requirement: we have to encrypt the header and the payload of packets on high channels to prevent the leakage of secret information. Yet we are not done because surprisingly, this attack works even if the message sent is public and it is being sent on a public channel. Consider Δ4 again and let’s assume that we are encrypting all headers (secret and public)
  Δ4:  Process 1: if (h1 is even) then run a long time; send(a_{H,1,7}, 1)
       Process 2: if (h2 is odd) then run a long time; send(a_{H,2,8}, 0)

  Δ5:  Process 1: if (h1 is even) then
                    C ← E_pk(someMsg); send(a_{L,1,3}, C); send(a_{L,1,3}, C)

Fig. 6. Attacks (second wave)
and the secret payloads, but we are allowing the public data to be transmitted in the clear. This seems reasonable enough since the adversary will get all public data at the end of the execution. Nevertheless, if we observe a public value of 1 being transmitted first we will know with high probability that the least bit of h1 is 1. Hence our third requirement: we have to encrypt all transmitted data. We remark that if we had an active adversary which was able to drop packets, modify them and resend them, in addition to the attacks that we have seen she could modify packets to leak information and to affect the integrity of the distributed system. For example if the packet header was not encrypted she could change the packet classification from H to L thereby declassifying the payload which could cause the packet to be received by a low channel buffer in the receiving process. Then she can wait until the end of the execution to pick up the leaked secret from the process’s public memory. This distinction may play a role in deciding what kind of security property will be necessary in our encryption scheme. Specifically a passive adversary might only require IND-CPA security while the active adversary will definitely require IND-CCA security. As an illustration of this distinction consider the following variation of the Warinschi attack on the Needham-Schroeder-(Lowe) protocol [9] where an IND-CPA scheme has the flaw where there is a function C := f (C) that takes an encrypted plaintext (like a packet header) and returns the ciphertext of an identical plaintext but with a certain location within it changed to L. This would not affect the security of the encryption scheme in any other way, i.e., an adversary would not be able to know anything about the content of the ciphertext but by simply substituting the header of any packet with C and re-sending it, the adversary would be able to declassify the payload of the message. But continuing with our analysis, are we done? We have decided to encrypt all data that is transmitted in the network, yet it is not enough to ensure confidentiality. Consider adversary Δ5 of Figure 6, it encrypts a message and sends it two times if a secret is even. Meanwhile Eve scans every transmission waiting for two identical ciphertexts; if they are found she knows with high probability that the least bit of h1 is 1, if all ciphers are distinct, it is 0. Therefore our fourth requirement: all transmitted data must be composed of freshly generated ciphertexts to ensure the confidentiality of our distributed systems. Finally, there are two more attacks that we need to consider. The first one is the classical timing attack. If we know how long a typical execution step takes and can measure time intervals then we can leak information. Δ6 of Figure 7 exemplifies this attack. If Eve is able to measure the time interval between the two transmissions she will have
  Δ6:  Process 1: send(a_{L,1,2}, 0); if (h1 is even) then delay 1 sec; send(a_{L,1,2}, 0)

  Δ7:  Process 1: if (h1 is even) then send(a_{H,1,2}, 0)

Fig. 7. Attacks (third wave)
the value of the least bit of h1. A related timing attack is to count the number of messages transmitted, as in Δ7 of Figure 7. In this attack, the last bit of h1 is leaked by the transmission of one or zero packets. These attacks can be generalized to changing the statistics of packet transmission; for example, one can conceive an attack where the time distribution for packet transmission is uniform to leak a 0 and χ-square or normal for a 1. However, this area of study seems inappropriate for us to handle. A solution might be to impose a super-density of packet transmission at regular intervals, where only some of the packets are real messages. This would eliminate the problem but significantly increase the network bandwidth utilization. Another solution to this problem may be the NRL-Pump [10,11], which is currently available at the local (government-owned) electronics shop. The pump obfuscates Eve's ability to measure the time between messages. It does this by inserting random delays based on an adaptive mechanism that adjusts the delays based on network traffic statistics. However, the pump cannot prevent timing channels based on the order of distinguishable messages. To summarize, we not only have to somehow hide the meaning of messages, but also hide anything in a message that makes it distinguishable to Eve. Obviously, this includes the payload, but also the message header, since the source or destination of a message can be used to distinguish it from another. Although L values are public, they cannot be seen within the network traffic. This is not a problem when computation happens within a processor (like in some multithreaded environments) because it is not reasonable that Eve would have access to the public memory in real time.

Soundness: Next we sketch a possible way to argue a computational noninterference property for our concrete language.

Random Transfer Language Definition and Soundness: First, we should be able to move our language, at the most basic level, from a nondeterministic to a probabilistic setting by constructing a subset language which we will call the Random Transfer Language, and prove PNI on it. The PNI property on this language shall establish that if we allow only fresh-random traffic, the language is safe and sound.

Message Transfer Language Definition and Soundness: Then, we should be able to construct a Message Transfer Language and prove CNI on it. This language simply has the regular send command encrypt its payload before transmission on a public channel but keeps private channels for transmission of header information.

Header Transfer Language Definition and Soundness: Next, we should be able to construct a Header Transfer Language and prove CNI on it. In this language the send command encrypts its header before transmission, but keeps private channels for transmission of the payload.
Hybrid Cryptographic Argument: Finally, we should be able to argue that since the Message and Header Transfer languages both satisfy CNI, their combination does as well, via a hybrid cryptographic argument.
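To make the fresh-ciphertext requirement of Section 3 concrete, the toy experiment below contrasts a deterministic "encryption" (a keyed hash standing in for any scheme that reuses randomness; it is not a real encryption scheme) with a randomized one. Under the deterministic scheme Eve can run a Δ5-style attack purely by testing ciphertext equality; prepending a fresh nonce defeats that particular test. Only the standard hashlib and secrets modules are used, and the key setup is a placeholder.

```python
# Toy illustration of attack Δ5: if the same plaintext always yields the
# same ciphertext, Eve learns a secret bit from ciphertext equality alone.
import hashlib, secrets

KEY = b"shared-key"                       # placeholder, not a real key setup

def det_enc(msg: bytes) -> bytes:         # deterministic: same msg -> same output
    return hashlib.sha256(KEY + msg).digest()

def rand_enc(msg: bytes) -> bytes:        # fresh randomness on every encryption
    nonce = secrets.token_bytes(16)
    return nonce + hashlib.sha256(KEY + nonce + msg).digest()

def process1(h1_even: bool, enc):
    """Sends the same public message twice iff the secret bit of h1 is 0."""
    if h1_even:
        return [enc(b"someMsg"), enc(b"someMsg")]
    return [enc(b"someMsg")]

def eve_guesses_even(traffic) -> bool:
    return len(set(traffic)) < len(traffic)   # two identical ciphertexts seen

print(eve_guesses_even(process1(True,  det_enc)))   # True  -- bit leaked
print(eve_guesses_even(process1(False, det_enc)))   # False
print(eve_guesses_even(process1(True,  rand_enc)))  # False -- equality test defeated
```

Of course, the number of transmitted messages still differs between the two runs, which is exactly the Δ7-style channel discussed above; this sketch targets only the ciphertext-equality channel.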
4 Related Work Peeter Laud [12,13] pioneers computationally secure information flow analysis with cryptography. Later, with Varmo Vene [14], they develop the first language and type system with cryptography and a computational security property. Recently, in [15], he proves a computational noninterference property on a type system derived from the work of Askarov et al [16]. In previous work on multithreaded languages [17,2,18,19,5], the type systems have strongly curtailed the language’s expressive power in order to attain soundness. An expansion of these languages, with a rich set of cryptographic primitives, a treatment of integrity as well as confidentiality, and a subject close to ours is the work of Fournet and Rezk [20]. The primary differences between our papers are that their system does not handle concurrency and is subject to timing channels; on the other hand, their active adversary is more powerful having the ability to modify public data. Another effort toward enhancing the usability of languages with security properties is the extensive functional imperative language Aura [21]. The language maintains confidentiality and integrity properties of its constructs as specified by its label [22] by “packing” it using asymmetric encryption before declassification. The cryptographic layer is hidden to the programmer making it easier to use. This system uses static and runtime checking to enforce security. Using a different approach, Zheng and Myers [23] use a purely static type system to achieve confidentiality by splitting secrets under the assumption of non-collusion of repositories (e.g. key and data repositories). Under this model ciphertexts do not need to be public which allows relaxation of the type system while maintaining security. Further towards the practical end of the spectrum are efforts to provide assurance levels to software (as in EAL standard). In this line of work (Shaffer, Auguston, Irvine, Levin [24]) a security domain model is established and “real” programs are verified against it to detect flow violations. Another paper close to ours is by Focardi and Centenaro [5]. It treats a multiprogrammed language and type system over asymmetric encryption and proves a noninterference property on it. The main differences are that their type system is more restrictive, requiring low guards on loops, and they use a formal methods approach rather than computational complexity.
5 Conclusion and Future Work In this paper we have crafted an abstract language for distributed systems while maintaining a relaxed computational environment with private data, and we have argued that it has the noninterference property. We have explored the feasibility of implementing this language in a concrete setting where all communications happen via a public network with cryptography to protect confidentiality.
The obvious course for future work is to formalize these explorations and to prove computational noninterference on the concrete system. Another interesting area is to identify the environments where encryption schemes with weaker security (like IND-CPA) are sufficient to ensure soundness. As the complexity of these languages increases, our reduction proofs may become unmanageable. One solution may be to use an automatic proving mechanism as in [25]. This work applies to security protocols in the computational model rather than languages. The tool works as a sequence of reductions towards a base that is easily proved to be secure, hence the original protocol is secure.
Acknowledgments This work was partially supported by the National Science Foundation under grant CNS-0831114. We are grateful to Joshua Guttman, Pierpaolo Degano, and the FAST09 referees for helpful comments and suggestions.
References
1. Denning, D., Denning, P.: Certification of programs for secure information flow. Communications of the ACM 20(7), 504–513 (1977)
2. Smith, G., Volpano, D.: Secure information flow in a multi-threaded imperative language. In: Proceedings 25th Symposium on Principles of Programming Languages, San Diego, CA, January 1998, pp. 355–364 (1998)
3. Sabelfeld, A., Sands, D.: Probabilistic noninterference for multi-threaded programs. In: Proceedings 13th IEEE Computer Security Foundations Workshop, Cambridge, UK, July 2000, pp. 200–214 (2000)
4. Smith, G.: Improved typings for probabilistic noninterference in a multi-threaded language. Journal of Computer Security 14(6), 591–623 (2006)
5. Focardi, R., Centenaro, M.: Information flow security of multi-threaded distributed programs. In: PLAS 2008: Proceedings of the Third ACM SIGPLAN Workshop on Programming Languages and Analysis for Security, pp. 113–124. ACM, New York (2008)
6. Smith, G., Alpízar, R.: Fast probabilistic simulation, nontermination, and secure information flow. In: Proc. 2007 ACM SIGPLAN Workshop on Programming Languages and Analysis for Security, San Diego, California, June 2007, pp. 67–71 (2007)
7. Zdancewic, S., Myers, A.C.: Observational determinism for concurrent program security. In: Proceedings 16th IEEE Computer Security Foundations Workshop, Pacific Grove, California, June 2003, pp. 29–43 (2003)
8. Baier, C., Katoen, J.P., Hermanns, H., Wolf, V.: Comparative branching-time semantics for Markov chains. Information and Computation 200(2), 149–214 (2005)
9. Warinschi, B.: A computational analysis of the Needham-Schroeder-(Lowe) protocol. In: Proceedings 16th IEEE Computer Security Foundations Workshop, Pacific Grove, California, June 2003, pp. 248–262 (2003)
10. Kang, M.H., Moskowitz, I.S.: A pump for rapid, reliable, secure communication. In: CCS 1993: Proceedings of the 1st ACM Conference on Computer and Communications Security, pp. 119–129. ACM, New York (1993)
11. Kang, M.H., Moskowitz, I.S., Chincheck, S.: The pump: A decade of covert fun. In: 21st Annual Computer Security Applications Conference (ACSAC 2005), pp. 352–360. IEEE Computer Society, Los Alamitos (2005)
12. Laud, P.: Semantics and program analysis of computationally secure information flow. In: Sands, D. (ed.) ESOP 2001. LNCS, vol. 2028, pp. 77–91. Springer, Heidelberg (2001)
13. Laud, P.: Handling encryption in an analysis for secure information flow. In: Degano, P. (ed.) ESOP 2003. LNCS, vol. 2618, pp. 159–173. Springer, Heidelberg (2003)
14. Laud, P., Vene, V.: A type system for computationally secure information flow. In: Liśkiewicz, M., Reischuk, R. (eds.) FCT 2005. LNCS, vol. 3623, pp. 365–377. Springer, Heidelberg (2005)
15. Laud, P.: On the computational soundness of cryptographically masked flows. In: Proceedings 35th Symposium on Principles of Programming Languages, San Francisco, California (January 2008)
16. Askarov, A., Hedin, D., Sabelfeld, A.: Cryptographically-masked flows. In: Proceedings of the 13th International Static Analysis Symposium, Seoul, Korea, pp. 353–369 (2006)
17. Abadi, M., Fournet, C., Gonthier, G.: Secure implementation of channel abstractions. In: LICS 1998: Proceedings of the 13th Annual IEEE Symposium on Logic in Computer Science, Washington, DC, USA, p. 105. IEEE Computer Society, Los Alamitos (1998)
18. Smith, G.: Probabilistic noninterference through weak probabilistic bisimulation. In: Proceedings 16th IEEE Computer Security Foundations Workshop, Pacific Grove, California, June 2003, pp. 3–13 (2003)
19. Abadi, M., Corin, R., Fournet, C.: Computational secrecy by typing for the pi calculus. In: Kobayashi, N. (ed.) APLAS 2006. LNCS, vol. 4279, pp. 253–269. Springer, Heidelberg (2006)
20. Fournet, C., Rezk, T.: Cryptographically sound implementations for typed information-flow security. In: Proceedings 35th Symposium on Principles of Programming Languages, San Francisco, California (January 2008)
21. Jia, L., Vaughan, J.A., Mazurak, K., Zhao, J., Zarko, L., Schorr, J., Zdancewic, S.: Aura: a programming language for authorization and audit. In: Hook, J., Thiemann, P. (eds.) ICFP, pp. 27–38. ACM, New York (2008)
22. Vaughan, J., Zdancewic, S.: A cryptographic decentralized label model. In: IEEE Symposium on Security and Privacy, Oakland, California, pp. 192–206 (2007)
23. Zheng, L., Myers, A.C.: Securing nonintrusive web encryption through information flow. In: PLAS 2008: Proceedings of the Third ACM SIGPLAN Workshop on Programming Languages and Analysis for Security, pp. 125–134. ACM, New York (2008)
24. Shaffer, A.B., Auguston, M., Irvine, C.E., Levin, T.E.: A security domain model to assess software for exploitable covert channels. In: Erlingsson, Ú., Pistoia, M. (eds.) PLAS, pp. 45–56. ACM, New York (2008)
25. Blanchet, B.: A computationally sound mechanized prover for security protocols. In: SP 2006: Proceedings of the 2006 IEEE Symposium on Security and Privacy (S&P 2006), Washington, DC, USA, pp. 140–154. IEEE Computer Society Press, Los Alamitos (2006)
Probable Innocence in the Presence of Independent Knowledge

Sardaouna Hamadou¹, Catuscia Palamidessi², Vladimiro Sassone¹, and Ehab ElSalamouny¹

¹ ECS, University of Southampton
² INRIA and LIX, École Polytechnique
Abstract. We analyse the Crowds anonymity protocol under the novel assumption that the attacker has independent knowledge on behavioural patterns of individual users. Under such conditions we study, reformulate and extend Reiter and Rubin’s notion of probable innocence, and provide a new formalisation for it based on the concept of protocol vulnerability. Accordingly, we establish new formal relationships between protocol parameters and attackers’ knowledge expressing necessary and sufficient conditions to ensure probable innocence.
1 Introduction

Anonymity protocols often use random mechanisms. It is therefore natural to think of anonymity in probabilistic terms. Various notions of such probabilistic anonymity have been proposed and a recent trend of work in formalizing these notions is directed at exploring the application of information-theoretic concepts (e.g. [18,4,5,6,1,15]). In our opinion, however, except for a recent paper by Franz, Meyer, and Pashalidis [11], which addresses the advantage that an adversary could take of hints from the context in which the protocol operates, such approaches fail to account for the fact that in the real world, the adversary often has some extra information about the correlation between anonymous users and observables. Consider for example the following simple anonymous voting process. In a parliament composed of Labourists and Conservatives, one member voted against a proposal banning minimum wages. Without any additional knowledge it is reasonable to assume that the person is more likely to be in the most liberal political group. If however we know in addition that one Conservative voted against, then it is more reasonable to suspect the liberally-inclined Conservatives. Similarly, suppose that in a classroom of n students the teacher asks them to tick one of two boxes on a piece of paper to indicate whether or not they are satisfied by her teaching. If n is small and the teacher noticed that the pupils use pens of different colours, then she can use these colours to partition the class so as to make the vote of some students more easily identifiable. Extra knowledge of this kind, independent of the logic of the protocol used, can affect its security dramatically. The extra knowledge can either arise from an independent source, as in the first example, or simply from the context in which the anonymity protocol is run, as in the second example. A relevant case in point is Reiter and Rubin's Crowds protocol [16], which allows Internet users to perform anonymous web transactions. The idea is to send the message
through a chain of users participating in the protocol. Each user in the 'crowd' must establish a path between her and a set of servers by selecting randomly some users to act as routers. The random selection process is performed in such a way that when a user in the path relays a message, she does not know whether or not the sender is the initiator of the message, or simply a forwarder like herself. Each user only has access to messages routed through her, and some participants may be corrupted, i.e., they may work together in order to uncover the identity of the initiator. It is well known that Crowds cannot ensure strong anonymity [16,3] in the presence of corrupted participants, but when the number of corrupted users is sufficiently small, it provides a weaker notion of anonymity known as probable innocence. Informally, a sender is probably innocent if to an attacker she is no more likely to be the originator than not to be. Although Crowds has been widely analysed in the literature (e.g. [3,15]), the fact that independent information may be available to the attacker has been so far ignored. We maintain that this is ultimately incompatible with achieving a comprehensive and reliable analysis of the protocol, as attackers' extra knowledge is inherent to Crowds. In particular, as any request routed through an attacker reveals the identity of the target server, a team of attackers will soon build up a host of observations suitable to classify the behaviour of honest participants. This paper is to the best of our knowledge the first to investigate the impact of the attacker's independent knowledge on the anonymity in the Crowds protocol.

Related work. The quantitative approach to the foundations of information hiding has become a very active and mature research field. Various formal definitions and frameworks have been proposed for reasoning about secure information flow analysis (e.g. [19,7,8,9]), side-channel analysis (e.g. [14]) and anonymity. Our work follows a recent trend in the analysis of anonymity protocols directed to the application of information-theoretic notions (e.g. [17,18,4,5,6,1,15,10,2]). The work most closely related to ours is that of Reiter and Rubin [16], that of Halpern and O'Neill [12], and the recent paper of Chatzikokolakis and Palamidessi [3]. In [16] the authors propose a formal definition of probable innocence that considers the probability of observable events induced by actions of an anonymous user participating in the protocol. They require the probability of an anonymous user producing any observable to be less than one half. In [12] the authors formalize probable innocence in terms of the adversary's confidence that a particular anonymous event happened, after performing an observation. Their definition requires that the probability of an anonymous event should be at most one half, under any observation. In [3] the authors argue that the definition of [16] makes sense only for systems satisfying certain properties while the definition of [12] depends on the probabilities of anonymous events which are not part of the protocol. They propose a definition of probable innocence that tries to combine the two previous ones by considering both the probability of producing some observable and the adversary's confidence after the observation.
Another recent work closely related to ours is that of Smith [19], which proposes a new metric for quantitative information flow based on the concept of vulnerability as an alternative to previous metrics based on Shannon entropy and mutual information. Informally, the idea is that the adversary knows the a priori distributions of the hidden (anonymous) events and always 'bets' on the most likely culprit. The a priori
vulnerability then is the probability that the adversary guesses the true culprit based only on the a priori distribution. The a posteriori vulnerability is the average probability that the adversary guesses the true culprit based on the a posteriori probability distribution on the agents after the observation. The main difference between these approaches and ours is that they do not take into account the very likely additional knowledge of the adversary about the correlation between the anonymous events and some observables independent from the behaviour of the protocol. In this paper we first generalize the concepts of probable innocence and vulnerability. Instead than just comparing the probability of being innocent with the probability of being guilty, we consider the degree of the probability of being innocent. Informally a protocol is α-probable innocent if for any anonymous user the probability of being innocent is less than or equal to α. Similarly a protocol is α-vulnerable if the a posteriori vulnerability of the anonymous users is less than or equal to α. We prove that these two notions are related. In particular (α-)probable innocence implies (α-)vulnerability and in the specific case when the a priori distribution of the anonymous events is uniform, they are equivalent. We furthermore extend these definitions in order to cope with the extra independent knowledge of the adversary by computing the a posteriori probability and the a posteriori vulnerability w.r.t to both the protocol observables and the independent observables. We show that the presence of extra knowledge makes probable innocence (resp. vulnerability) more difficult to be achieved. Finally, it should be acknowledged that our observations about the importance of additional knowledge of the adversary are not entirely new. Indeed, as already noticed above, Franz, Meyer, and Pashalidis [11] considered the fact that an adversary could take advantage of hints from the context in which a protocol operates. However, though that their approach is closely related to ours in spirit, it is not general in the sense that it assumes a deterministic correlation between the anonymous events and the observable hints and a uniform distribution on the anonymous events. Moreover, their metric is associated to Shannon entropy which is recently proven by Smith [19] of being less accurate than vulnerability-based metric. Structure of the paper. The paper is organised as follows: in §2 we fix some basic notations and recall the fundamental ideas of the Crowds protocol and its properties, including the notion of probable innocence. In §3 we reformulate and extend probable innocence using the idea of protocol vulnerability; §4 and §5 deliver our core technical contribution by respectively extending probable innocence and vulnerability to the case of attacker’s independent knowledge.
2 Preliminaries

This section describes our conceptual framework and briefly revises the Crowds protocol and the notion of probable innocence. We use capital letters A, B to denote discrete random variables, the corresponding small letters a, b for their values, and calligraphic letters A, B for their sets of values. We denote by p(a), p(b) the probabilities of a and b respectively and by p(a ∧ b) their joint probability. The conditional probability of a given b is defined as
p(a | b) = p(a ∧ b) / p(b).

Bayes’ theorem relates the conditional probabilities p(a | b) and p(b | a) as follows:

p(a | b) = p(b | a) p(a) / p(b).
2.1 The Framework

In this paper we consider a framework similar to the probabilistic approaches to anonymity and information flow used in (for instance) [13], [5], [15], and [19]. We restrict ourselves to total protocols and programs with one high-level (or anonymous) input A, a random variable over a finite set A, and one low-level output (observable) O, a random variable over a finite set O. We represent a protocol/program by the matrix of the conditional probabilities p(oj | ai), where p(oj | ai) is the probability that the low output is oj given that the high input is ai. We assume that the high input is generated according to an a priori publicly-known probability distribution. An adversary or eavesdropper can see the output of a protocol, but not the input, and he is interested in deriving the value of the input from the observed output in one single try.

In this paper we will also assume that the attacker has access to the value of a random variable S distributed over S that summarizes his additional knowledge (information) about A, independent from the behaviour of the protocol, as explained in the introduction. The matrix of the conditional probabilities p(sk | ai) expresses the correlation between the anonymous events and the additional knowledge of the adversary. When |S| = 1 the adversary’s additional information about A is trivial and cannot help his effort in determining the value of A. For example, knowing the length of a password in a fixed-length password system is trivial information, since all passwords have the same length. Trivial information allows us to model the absence of additional information. The standard framework can therefore be seen as an instance of our framework.

2.2 The Crowds Protocol and the Definition of Probable Innocence

The protocol. Crowds is a protocol proposed by Reiter and Rubin in [16] to allow Internet users to perform anonymous web transactions by protecting their identity as originators of messages. The central idea to ensure anonymity is that the originator forwards the message to another, randomly-selected user, who in turn forwards the message to another user, and so on until the message reaches its destination (the end server). This routing process ensures that, even when a user is detected sending a message, there is a substantial probability that she is simply forwarding it on behalf of somebody else.

More specifically, a crowd is a fixed number of users participating in the protocol. Some members (users) in the crowd may be corrupted (the attackers), and they can collaborate in order to discover the originator’s identity. The purpose of the protocol is to protect the identity of the message originator from the attackers. When an originator (also known as the initiator) wants to communicate with a server, she creates a random path between herself and the server through the crowd by the following process.
– Initial step: the initiator selects uniformly at random a member of the crowd (possibly herself) and forwards the request to her. We refer to the latter user as the forwarder.
– Forwarding steps: a forwarder, upon receiving a request, flips a biased coin. With probability 1 − pf she delivers the request to the end server. With probability pf she selects uniformly at random a new forwarder (possibly herself) and forwards the request to her. The new forwarder repeats the same forwarding process.

The response from the server to the originator follows the same path in the opposite direction. Each user (including corrupted users) is assumed to have access only to messages routed through her, so that she only knows the identities of her immediate predecessor and successor in a path, and the end server.

Informal definition of Probable Innocence. In [16] Reiter and Rubin have proposed a hierarchy of anonymity notions in the context of Crowds. These range from ‘absolute privacy,’ where the attacker cannot perceive the presence of communication, to ‘provably exposed,’ where the attacker can prove the sender and receiver relationship. Clearly, like most protocols used in practice, Crowds cannot ensure absolute privacy in the presence of attackers or corrupted users, but can only provide weaker notions of anonymity. In particular, in [16] the authors propose an anonymity notion called probable innocence and prove that, under some conditions on the protocol parameters, Crowds ensures the probable innocence property for the originator. Informally, they define it as follows:
A sender is probably innocent if, from the attacker’s point of view, the sender appears no more likely to be the originator than to not be the originator.
(1)
In other words, the attacker may have reason to suspect the sender of being more likely than any other potential sender to be the originator, but it still appears at least as likely that she is not.

The formal property proved by Reiter and Rubin. Let m be the number of users participating in the protocol and let c and n be the number of corrupted and honest users, respectively, with m = n + c. Since anonymity only makes sense for honest users, we define the set of anonymous events as A = {a1, a2, . . . , an}, where ai indicates that user i is the initiator of the message. We define the set of observable events as O = {o1, o2, . . . , on}, where oi indicates that user i forwarded a message to a corrupted user. We also say that user i is detected by the attacker. As is usually the case in the analysis of Crowds, we assume that attackers will always deliver a request to forward immediately to the end server, since forwarding it any further cannot help them learn anything more about the identity of the originator.

In [16] Reiter and Rubin formalise their notion of probable innocence via the conditional probability p(I | H) that the initiator is detected given that any user is detected at all. Here I denotes the event that it is precisely the initiator who forwards the message to the attacker on the path, and H that there is an attacker in the path. Precisely, probable innocence holds if p(I | H) ≤ 1/2. In our setting the probability that user j is detected given that user i is the initiator can be written simply as p(oj | ai). As we are only interested in the case in which a
user is detected, for simplicity we do not write this condition explicitly. Therefore, the property proved in [16] (i.e. p(I | H) ≤ 1/2) translates in our setting as: ∀i. p(oi | ai) ≤ 1/2
(2)
Reiter and Rubin proved in [16] that, in Crowds, the following holds:

p(oj | ai) = 1 − ((n − 1)/m) pf    if i = j
p(oj | ai) = (1/m) pf              if i ≠ j
Therefore, probable innocence (2) holds if and only if

m ≥ (pf / (pf − 1/2)) (c + 1)
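As a quick illustration (not part of the original analysis), the condition above can be checked numerically. The following sketch computes p(oi | ai) from m, c and pf and compares the two equivalent formulations of the bound; the parameter values used are arbitrary examples.

```python
def crowds_detection_prob(m, c, pf):
    """Probability p(o_i | a_i) that the initiator herself is detected,
    for a crowd of m users of which c are corrupted (n = m - c honest)."""
    n = m - c
    return 1 - (n - 1) / m * pf

def probable_innocence(m, c, pf):
    """Probable innocence as in (2): p(o_i | a_i) <= 1/2."""
    return crowds_detection_prob(m, c, pf) <= 1 / 2

def bound_on_m(c, pf):
    """Smallest crowd size from the bound m >= pf / (pf - 1/2) * (c + 1)."""
    assert pf > 1 / 2, "the bound only makes sense for pf > 1/2"
    return pf / (pf - 1 / 2) * (c + 1)

# Illustrative values: pf = 3/4, one corrupted member; both tests agree,
# and probable innocence starts to hold at m = 6.
for m in range(2, 10):
    print(m, probable_innocence(m, c=1, pf=0.75), m >= bound_on_m(c=1, pf=0.75))
```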
3 Probable Innocence Revisited and Extended

In our opinion there is a mismatch between the idea of probable innocence expressed informally in (1) and the property actually proved by Reiter and Rubin, cf. (2). The former, indeed, seems to correspond to the following: ∀i. p(ai | oi) ≤ 1/2
(3)
It is worth noting that this is also the interpretation given by Halpern and O’Neill [13]. The properties (2) and (3) however coincide under the assumption that the a priori distribution is uniform, i.e. that each honest user has equal probability of being the initiator. This is a standard assumption in Crowds.

Proposition 1. If the a priori distribution is uniform, then ∀i, j. p(ai | oj) = p(oj | ai).

Proof. If the a priori distribution is uniform, then for every i we have p(ai) = 1/n, where n is the number of honest users. The probability of user j being detected is also uniform, and hence equal to 1/n. In fact, every initiator forwards the message to each other user with the same probability, and each forwarder does the same; hence each user has the same probability of being detected when she is the initiator, and the same probability of being detected when she is not the initiator. Therefore we have p(oj | aj) = p(ok | ak) and p(oj | ai) = p(ok | ai) for every j, k and i ≠ j, k, and hence:

p(oj) = p(oj ∧ aj) + Σ_{i≠j} p(oj ∧ ai)
      = p(oj | aj) p(aj) + Σ_{i≠j} p(oj | ai) p(ai)
      = p(ok | ak) p(ak) + Σ_{i≠k} p(ok | ai) p(ai)    (by symmetry)
      = p(ok).

Finally, by using Bayes’ theorem, we have:

p(ai | oj) = p(oj | ai) p(ai) / p(oj) = p(oj | ai) · (1/n) / (1/n) = p(oj | ai).
Corollary 1. If the a priori distribution is uniform, then (2) and (3) are equivalent.

The following proposition points out that in the presence of a uniform a priori distribution, the matrix associated to the protocol, i.e. the array of the conditional probabilities p(oj | ai), has equal elements everywhere except on the diagonal:

Proposition 2. If the a priori distribution is uniform, then there exists a p such that

p(oj | ai) = p                   if i = j
p(oj | ai) = (1 − p)/(n − 1)     if i ≠ j

Proof. As already noted in the proof of Proposition 1, for symmetry reasons we have p(oj | aj) = p(ok | ak) and p(oj | ai) = p(ok | ai) for every j, k and i ≠ j, k.

It is generally the case, in Crowds, that p is (much) greater than (1 − p)/(n − 1), which means that the user who is detected is also the most likely culprit. This allows us to reformulate the property of probable innocence in terms of the (a posteriori) vulnerability [19] of a protocol, which coincides with the converse of the Bayes risk [6]. Let us briefly recall the definition of vulnerability. The idea is that the adversary knows the a priori distribution and always ‘bets’ on the most likely culprit. The a priori vulnerability then is the probability that the adversary guesses the true culprit based only on the a priori distribution p(a). The a posteriori vulnerability is the average probability that the adversary guesses the true culprit based on the a posteriori probability distribution on the agents after the observation, i.e., p(a | o). Formally:

Definition 1 ([19])
– The a priori vulnerability is V(A) = max_i p(ai).
– The a posteriori vulnerability is V(A | O) = Σ_j p(oj) max_i p(ai | oj).

Using Bayes’ theorem, we can reformulate V(A | O) as follows:

V(A | O) = Σ_j max_i (p(oj | ai) p(ai))     (4)

It is easy to see that probable innocence implies that the a posteriori vulnerability is smaller than 1/2. The converse also holds if the a priori distribution is uniform.

Proposition 3
– If either (2) or (3) holds, then V(A | O) ≤ 1/2.
– If V(A | O) ≤ 1/2 and the a priori distribution is uniform, then (2) and (3) hold.

We now generalize the concept of probable innocence. Instead of just comparing the probability of being innocent with the probability of being guilty, we consider the degree of the probability of being innocent. Similarly for the vulnerability.

Definition 2. Given a real number α ∈ [0, 1], we say that a protocol satisfies
– α-probable innocence if and only if ∀i. p(ai | oi) ≤ α;
– α-vulnerability if and only if V(A | O) ≤ α.

Clearly α-probable innocence coincides with the standard probable innocence for α = 1/2. It is also to be remarked that the minimum possible value of α is 1/n, i.e., it is not possible for a protocol to satisfy α-probable innocence or α-vulnerability if α is smaller than this value.
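To make Definitions 1 and 2 concrete, here is a small sketch (an illustration, not taken from the paper) that computes the a priori and a posteriori vulnerability of a protocol from its channel matrix and an a priori distribution, and then checks α-probable innocence. The example matrix is the symmetric, Crowds-like one of Proposition 2, with hypothetical values n = 5 and p = 1/2.

```python
import numpy as np

def vulnerabilities(C, prior):
    """C[i, j] = p(o_j | a_i); prior[i] = p(a_i).
    Returns (a priori vulnerability, a posteriori vulnerability)."""
    joint = C * prior[:, None]            # joint[i, j] = p(o_j | a_i) p(a_i)
    v_prior = prior.max()                 # V(A) = max_i p(a_i)
    v_post = joint.max(axis=0).sum()      # V(A|O) = sum_j max_i p(o_j | a_i) p(a_i)
    return v_prior, v_post

def alpha_probable_innocence(C, prior, alpha):
    """Definition 2: p(a_i | o_i) <= alpha for every i."""
    joint = C * prior[:, None]
    p_o = joint.sum(axis=0)               # p(o_j)
    posterior = joint / p_o               # posterior[i, j] = p(a_i | o_j)
    return bool(np.all(np.diag(posterior) <= alpha + 1e-12))

# Hypothetical symmetric matrix as in Proposition 2: n = 5, p = 1/2.
n, p = 5, 0.5
C = np.full((n, n), (1 - p) / (n - 1))
np.fill_diagonal(C, p)
prior = np.full(n, 1 / n)

print(vulnerabilities(C, prior))                 # V(A) = 0.2, V(A|O) = 0.5
print(alpha_probable_innocence(C, prior, 0.5))   # True
```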
4 Probable Innocence in Presence of Extra Information

We now consider the notion of probable innocence when we assume that the adversary has some extra information about the correlation between the culprit and the observable. We express this extra information in terms of a random variable S, whose values s1 . . . sℓ we assume to be observable, and the conditional probabilities p(sk | ai). We assume that the original observables O and the additional observables S are independent, for every originator.

Example 1. Consider an instance of the Crowds protocol in which there are two servers, and assume that the users are divided in two parts, A1 and A2. Assume that each user in A1, when he is the initiator, has probability p1 of addressing his message to the first server (as the final destination of the message). Conversely, assume that each user in A2 has probability p2 of addressing the second server. The address of the server appears in the message, and it is therefore observed by the adversary when he intercepts the message. It is clear that (because of the way Crowds works) the event that the message is intercepted is independent from the server to which the message is addressed. If we indicate by s1 the fact that the message is addressed to the first server, and by s2 the fact that the message is addressed to the second server, the matrix of the conditional probabilities corresponding to this example is as follows:

p(s | a) = p1         if a ∈ A1, s = s1
p(s | a) = 1 − p1     if a ∈ A1, s = s2
p(s | a) = 1 − p2     if a ∈ A2, s = s1
p(s | a) = p2         if a ∈ A2, s = s2
We are interested in exploring how the extra information provided by S and the conditional probabilities of S given A affect the notion of probable innocence. We take the point of view that the invariant property should be the one expressed by (3), generalized by Definition 2. We reformulate this definition to accommodate the presence of extra information in the observables.

Definition 3 (α-probable innocence in presence of extra information). Given a real number α ∈ [0, 1], we say that a protocol satisfies α-probable innocence if and only if ∀i, k. p(ai | oi ∧ sk) ≤ α
The following lemma expresses the relation between the conditional probabilities with respect to the new observables and the original ones.

Lemma 1. ∀i, j, k. p(ai | oj ∧ sk) = p(ai | oj) · p(sk | ai) / p(sk | oj)
Proof. By Bayes’ theorem we have, for every i, j, k:

p(ai | oj ∧ sk) = p(oj ∧ sk | ai) p(ai) / p(oj ∧ sk)

Since we are assuming that, given any originator ai, O and S are independent, we have p(oj ∧ sk | ai) = p(oj | ai) p(sk | ai), and therefore

p(ai | oj ∧ sk) = p(oj | ai) p(sk | ai) p(ai) / p(oj ∧ sk)

We can rewrite p(oj ∧ sk) as p(sk | oj) p(oj). Hence:

p(ai | oj ∧ sk) = p(oj | ai) p(sk | ai) p(ai) / (p(sk | oj) p(oj))
Finally, using Bayes theorem again, we conclude.
We can now prove that the presence of extra information reduces the degree α of probable innocence by a factor q = min_{i,k} p(sk | oi)/p(sk | ai):

Proposition 4
– In presence of extra information, a protocol satisfies α-probable innocence if ∀i. p(ai | oi) ≤ q α.
– If ∀i, j. p(ai | oi) = p(aj | oj), then the above condition is also necessary, i.e. the protocol satisfies α-probable innocence only if ∀i. p(ai | oi) ≤ q α.

Proof. Immediate from the previous lemma, with j = i.
In general the factor q in the above proposition is strictly greater than 0 and strictly smaller than 1. Note also that, in the case of Crowds, the protocol satisfies the required symmetry, i.e. the elements in the principal diagonal of the matrix of the conditional probabilities are all the same (cf. Prop. 2), and therefore the above factor q is strict.

Example 2. Consider an instance of the Crowds protocol where there are 6 members (m = 6). One of these members is an attacker (c = 1), and the others are honest (n = 5). Assume that pf = 3/4; then we have

p(oi | ai) = 1 − ((n − 1)/m) pf = 1 − (4/6) · (3/4) = 1/2

and, for i ≠ j,

p(oj | ai) = (1/m) pf = (1/6) · (3/4) = 1/8

Now suppose that, as in Example 1, there are two servers and the honest members are divided into two groups A1 and A2, where A1 = {1, 2} (resp. A2 = {3, 4, 5}) are the users which prefer the server 1 (resp. the server 2). Assume that the preference probabilities are p1 = p2 = 3/4, i.e. that the conditional probabilities p(s | a) are given by

p(sk | ai) = 3/4     if ai ∈ Ak
p(sk | ai) = 1/4     if ai ∉ Ak

Because of the independence assumption, the conditional probabilities p(o ∧ s | a) can be computed as the product p(o | a) p(s | a) (see Fig. 1). From these we can compute the joint probabilities p(o ∧ s) by using the formula

p(oj ∧ sk) = Σ_i p(oj ∧ sk | ai) p(ai)

Assuming that the a priori distribution is uniform (p(ai) = 1/5), we obtain the probabilities shown in Fig. 1. From these we can then calculate p(s | o) using the definition

p(sk | oj) = p(sk ∧ oj) / p(oj)

and the fact that if A is uniformly distributed then also O is uniformly distributed (p(oj) = 1/5). Finally, using Bayes’ theorem, we can calculate the probabilities p(a | o ∧ s) from p(o ∧ s | a), p(o ∧ s), and p(a).

Using the values of p(sk | oi) and p(sk | ai), the factor q = min_{i,k} p(sk | oi)/p(sk | ai) in Proposition 4 evaluates to 3/4. It is easy to see that Proposition 4 holds for this instance of Crowds, i.e. ∀i, k. p(ai | oi ∧ sk) ≤ α if and only if ∀i. p(ai | oi) ≤ q α. In fact p(ai | oi) = 1/2 and max_{i,k} p(ai | oi ∧ sk) = 2/3.

We note that in some cases the extra information may contradict the original observable. For instance it could be the case that user 1, when she is the originator, has a strong preference for server 1. So if the attacker receives a message from user 1 addressed to server 2, it may be better for him to assume that the originator is one (arbitrary) user from the group that favours server 2, rather than user 1. We argue, therefore, that the presence of extra information makes the property of probable innocence more difficult to satisfy, because the attacker can use the extra information to improve his guess about the culprit, and he may guess a user which is not necessarily the one who sent the message to him. Therefore it seems reasonable to consider the following definition:

Definition 4 (α-probable innocence in presence of extra information, safe version). Given a real number α ∈ [0, 1], a protocol satisfies α-probable innocence if and only if ∀i, j, k. p(ai | oj ∧ sk) ≤ α
Fig. 1. The matrices of the conditional probabilities of Example 2. We use here the notation o, s to represent o ∧ s.
However, it turns out that the relation with the original notion of probable innocence remains the same, and Proposition 4 still provides the appropriate bound:

Proposition 5
– In presence of extra information, a protocol satisfies the safe version of α-probable innocence if ∀i, j. p(ai | oj) ≤ q α.
– If ∀i, j. p(ai | oi) = p(aj | oj), then the above condition is also necessary, i.e. the protocol satisfies the safe version of α-probable innocence only if ∀i. p(ai | oi) ≤ q α,

where q = min_{i,j,k} p(sk | oj)/p(sk | ai).

Example 3. Consider again the instance of Crowds of Example 2, but assume now that the preference probabilities are much higher than before, namely

p(sk | ai) = 9/10     if ai ∈ Ak
p(sk | ai) = 1/10     if ai ∉ Ak
We can compute the probabilities p(o ∧ s | a), p(o ∧ s), p(s | o) and p(a | o ∧ s) as before. The results are shown in Fig. 2. We note that in certain cases the extra knowledge dominates over the original observables. For instance, if the adversary receives a message from user 3 addressed to server 1, it is better for him to bet that a sender of group 1 is the originator, rather than user 3. In fact the a posteriori probability of the latter is p(a3 | o3 ∧ s1) = 1/6, while the a posteriori probability of (say) user 1 is p(a1 | o3 ∧ s1) = 3/8. Using the values of p(sk | oi) and p(sk | ai), the factor q = min_{i,k} p(sk | oi)/p(sk | ai) in Proposition 4 evaluates to 2/3, and we can see that Proposition 5 holds for this instance of Crowds, i.e. ∀i, k. p(ai | oi ∧ sk) ≤ α if and only if ∀i. p(ai | oi) ≤ q α. In fact p(ai | oi) = 1/2 and max_{i,j,k} p(ai | oj ∧ sk) = 3/4.
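The numbers quoted in Examples 2 and 3 can be reproduced mechanically. The following sketch (an illustration, not part of the original development) builds the two matrices for m = 6, c = 1, pf = 3/4 with preference probability 3/4 or 9/10, and prints the factor q of Proposition 4 together with the largest a posteriori probabilities.

```python
import numpy as np

def crowds_example(pref):
    """Example 2/3 setup: n = 5 honest users, p(o_i|a_i) = 1/2, p(o_j|a_i) = 1/8,
    groups A1 = {1,2}, A2 = {3,4,5}, preference `pref` for the own server."""
    n = 5
    C = np.full((n, n), 1 / 8)
    np.fill_diagonal(C, 1 / 2)                             # p(o_j | a_i)
    group = np.array([0, 0, 1, 1, 1])                      # server preferred by each user
    S = np.where(np.eye(2)[group] == 1, pref, 1 - pref)    # p(s_k | a_i)
    prior = np.full(n, 1 / 5)
    return C, S, prior

def analyse(C, S, prior):
    joint_os = np.einsum('ij,ik,i->jk', C, S, prior)       # p(o_j ∧ s_k)
    p_o = (C * prior[:, None]).sum(axis=0)                 # p(o_j)
    s_given_o = joint_os / p_o[:, None]                    # p(s_k | o_j)
    q = (s_given_o / S).min()                              # min over i,k, taking j = i
    post = np.einsum('ij,ik,i->ijk', C, S, prior) / joint_os   # p(a_i | o_j ∧ s_k)
    idx = np.arange(len(prior))
    return q, post[idx, idx, :].max(), post.max()

for pref in (3 / 4, 9 / 10):
    q, max_diag, max_all = analyse(*crowds_example(pref))
    print(f"pref={pref}: q={q:.4f}, max p(a_i|o_i,s_k)={max_diag:.4f}, "
          f"max p(a_i|o_j,s_k)={max_all:.4f}")
# Expected: q = 3/4 and 2/3; diagonal maxima 2/3 and 3/4; overall maxima 2/3 and 3/4,
# matching the values quoted in Examples 2 and 3.
```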
Fig. 2. The matrices of the conditional probabilities of Example 3. We use here the notation o, s to represent o ∧ s.

5 Vulnerability in Presence of Extra Information

In this section we explore how the definition of α-vulnerability is affected by the presence of extra information. Let us start with the definition of α-vulnerability in presence of the new observables. It is natural to extend the notion of α-vulnerability by considering the (a posteriori) vulnerability when the observables are constituted by the joint random variables O, S, which is given by

V(A | O, S) = Σ_{j,k} p(oj ∧ sk) max_i p(ai | oj ∧ sk)

Hence we extend α-vulnerability as follows:

Definition 5 (α-vulnerability in presence of extra information). Given a real number α ∈ [0, 1], a protocol satisfies α-vulnerability if and only if V(A | O, S) ≤ α.

For the next proposition, we consider the specific case in which the protocol satisfies the symmetry of Crowds.

Proposition 6. Let ℓ = |S| denote the cardinality of the extra observables. Assume that, for each i, p(oi | ai) = p = max_{i,j} p(oj | ai), and let q = max_{i,k} p(sk | ai). We have:
1. V(A | O, S) ≤ α if V(A | O) ≤ α/(ℓq).
2. If the a priori distribution is uniform and ((1 − p)/(n − 1)) q ≤ p (1 − q)/(ℓ − 1), then V(A | O, S) ≤ α if and only if V(A | O) ≤ α.
Proof. By definition we have:

V(A | O, S) = Σ_{j,k} p(oj ∧ sk) max_i p(ai | oj ∧ sk)

Using Bayes’ theorem we derive:

V(A | O, S) = Σ_{j,k} max_i (p(oj ∧ sk | ai) p(ai))

Because of the independence of O and S for any given originator, we deduce:

V(A | O, S) = Σ_{j,k} max_i (p(oj | ai) p(sk | ai) p(ai))     (5)

1. Since q = max_{i,k} p(sk | ai), from (5) we derive:

V(A | O, S) ≤ Σ_{j,k} max_i (p(oj | ai) q p(ai)) = ℓ q Σ_j max_i (p(oj | ai) p(ai)) = ℓ q V(A | O)

2. Since the input distribution is uniform:

V(A | O, S) = (1/n) Σ_{j,k} max_i (p(oj | ai) p(sk | ai))

If ((1 − p)/(n − 1)) q ≤ p (1 − q)/(ℓ − 1), then max_i (p(oj | ai) p(sk | ai)) = p(oj | aj) p(sk | aj) = p · p(sk | aj). Hence

V(A | O, S) = (1/n) Σ_{j,k} p · p(sk | aj) = (1/n) Σ_j p Σ_k p(sk | aj) = (1/n) Σ_j p = p = V(A | O)
It is interesting to note that, in part (2) of Proposition 6, the extra knowledge does not make the protocol more vulnerable. This is because the additional knowledge is sometimes in accordance with the best guess based on the original observable, and sometimes in conflict, but the original observable always dominates, and therefore the additional knowledge is either redundant or disregarded. In any case, it is not used to make the guess. In the general case (represented by the first part of the proposition), however, the additional knowledge may dominate the original observable and induce the adversary to change his bet, thus increasing his chances. For this reason, the vulnerability increases in general by a factor ℓq.
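As an illustration of Proposition 6 (again a sketch, with the Example 2/3 matrices assumed), the following code compares V(A | O) with V(A | O, S) and with the bound ℓ q V(A | O) for the two preference probabilities used above.

```python
import numpy as np

def posterior_vulnerabilities(C, S, prior):
    """V(A|O) and V(A|O,S) for channel C[i,j] = p(o_j | a_i), extra knowledge
    S[i,k] = p(s_k | a_i) and prior p(a_i), with O and S independent given a_i."""
    v_o = (C * prior[:, None]).max(axis=0).sum()
    joint = np.einsum('ij,ik,i->ijk', C, S, prior)     # p(o_j ∧ s_k | a_i) p(a_i)
    v_os = joint.max(axis=0).sum()
    return v_o, v_os

# Example 2 and Example 3 settings (m = 6, c = 1, pf = 3/4), as above.
n = 5
C = np.full((n, n), 1 / 8)
np.fill_diagonal(C, 1 / 2)
prior = np.full(n, 1 / n)
group = np.array([0, 0, 1, 1, 1])
for pref in (3 / 4, 9 / 10):
    S = np.where(np.eye(2)[group] == 1, pref, 1 - pref)
    v_o, v_os = posterior_vulnerabilities(C, S, prior)
    ell, q = S.shape[1], S.max()
    print(f"pref={pref}: V(A|O)={v_o:.4f}, V(A|O,S)={v_os:.4f}, "
          f"bound l*q*V(A|O)={ell * q * v_o:.4f}")
# For pref = 3/4 the side condition of part (2) holds and V(A|O,S) = V(A|O) = 1/2;
# for pref = 9/10 it fails and V(A|O,S) = 9/16, still below the part (1) bound.
```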
6 Conclusion

In this paper we focussed on the Crowds anonymity protocol and asked how its existing analyses are affected by taking into account that attackers may have independent knowledge about users’ behaviour. This amounts to providing the attackers with information about the correlation between a set of observables s1, . . . , sℓ and the event that user i is the originator of a message, as formalised by the conditional probability p(sk | ai). We formalised the idea of probable innocence for such systems, both in standard terms and via the notion of protocol vulnerability, and identified a simple and neat measure of the impact of independent knowledge. Namely, it makes probable
innocence (resp. vulnerability) more difficult to achieve by a factor q (resp. ℓq), which depends on the ratio between the probability of the observables conditional on the originator and conditional on the user detected (and, in the case of vulnerability, also on the cardinality of the random variable that represents the extra knowledge). In conclusion, we remark that although the scenario in which attackers possess or can acquire extra knowledge is highly likely, it has so far been ignored. In the near future, we plan to work on the even more interesting scenario in which the attackers use their ‘beliefs’ about users’ behaviour to raise the vulnerability of anonymity protocols such as Crowds.
References
1. Bhargava, M., Palamidessi, C.: Probabilistic anonymity. In: Abadi, M., de Alfaro, L. (eds.) CONCUR 2005. LNCS, vol. 3653, pp. 171–185. Springer, Heidelberg (2005)
2. Braun, C., Chatzikokolakis, K., Palamidessi, C.: Compositional methods for information-hiding. In: Amadio, R.M. (ed.) FOSSACS 2008. LNCS, vol. 4962, pp. 443–457. Springer, Heidelberg (2008)
3. Chatzikokolakis, K., Palamidessi, C.: Probable innocence revisited. Theor. Comput. Sci. 367(1-2), 123–138 (2006)
4. Chatzikokolakis, K., Palamidessi, C., Panangaden, P.: Probability of error in information-hiding protocols. In: CSF, pp. 341–354. IEEE Computer Society, Los Alamitos (2007)
5. Chatzikokolakis, K., Palamidessi, C., Panangaden, P.: Anonymity protocols as noisy channels. Inf. Comput. 206(2-4), 378–401 (2008)
6. Chatzikokolakis, K., Palamidessi, C., Panangaden, P.: On the Bayes risk in information-hiding protocols. Journal of Computer Security 16(5), 531–571 (2008)
7. Chen, H., Malacaria, P.: Quantitative analysis of leakage for multi-threaded programs. In: PLAS 2007: Proceedings of the 2007 workshop on Programming languages and analysis for security, pp. 31–40. ACM, New York (2007)
8. Clark, D., Hunt, S., Malacaria, P.: A static analysis for quantifying information flow in a simple imperative language. Journal of Computer Security 15(3), 321–371 (2007)
9. Clarkson, M.R., Myers, A.C., Schneider, F.B.: Belief in information flow. In: CSFW, pp. 31–45. IEEE Computer Society, Los Alamitos (2005)
10. Deng, Y., Pang, J., Wu, P.: Measuring anonymity with relative entropy. In: Dimitrakos, T., Martinelli, F., Ryan, P.Y.A., Schneider, S. (eds.) FAST 2006. LNCS, vol. 4691, pp. 65–79. Springer, Heidelberg (2007)
11. Franz, M., Meyer, B., Pashalidis, A.: Attacking unlinkability: The importance of context. In: Borisov, N., Golle, P. (eds.) PET 2007. LNCS, vol. 4776, pp. 1–16. Springer, Heidelberg (2007)
12. Halpern, J.Y., O’Neill, K.R.: Anonymity and information hiding in multiagent systems. Journal of Computer Security 13(3), 483–512 (2005)
13. Halpern, J.Y., O’Neill, K.R.: Anonymity and information hiding in multiagent systems. Journal of Computer Security 13(3), 483–512 (2005)
14. Köpf, B., Basin, D.A.: An information-theoretic model for adaptive side-channel attacks. In: Ning, P., di Vimercati, S.D.C., Syverson, P.F. (eds.) ACM Conference on Computer and Communications Security, pp. 286–296. ACM, New York (2007)
15. Malacaria, P., Chen, H.: Lagrange multipliers and maximum information leakage in different observational models. In: Erlingsson, Ú., Pistoia, M. (eds.) PLAS, pp. 135–146. ACM, New York (2008)
16. Reiter, M.K., Rubin, A.D.: Crowds: Anonymity for web transactions. ACM Transactions on Information and System Security 1(1), 66–92 (1998)
17. Serjantov, A., Danezis, G.: Towards an information theoretic metric for anonymity. In: Dingledine, R., Syverson, P.F. (eds.) PET 2002. LNCS, vol. 2482, pp. 41–53. Springer, Heidelberg (2003)
18. Shmatikov, V., Wang, M.-H.: Measuring relationship anonymity in mix networks. In: Juels, A., Winslett, M. (eds.) WPES, pp. 59–62. ACM, New York (2006)
19. Smith, G.: On the foundations of quantitative information flow. In: de Alfaro, L. (ed.) FOSSACS 2009. LNCS, vol. 5504, pp. 288–302. Springer, Heidelberg (2009)
A Calculus of Trustworthy Ad Hoc Networks

Massimo Merro and Eleonora Sibilio

Dipartimento di Informatica, Università degli Studi di Verona, Italy
Abstract. We propose a process calculus for mobile ad hoc networks which embodies a behaviour-based multilevel decentralised trust model. Our trust model supports both direct trust, by monitoring nodes’ behaviour, and indirect trust, by collecting recommendations and spreading reputations. The operational semantics of the calculus is given in terms of a labelled transition system, where actions are executed at a certain security level. We define a labelled bisimilarity parameterised on security levels. Our bisimilarity is a congruence and an efficient proof method for an appropriate variant of barbed congruence, a standard contextually-defined program equivalence. Communications are proved safe with respect to the security levels of the involved parties. In particular, we ensure safety despite compromise: compromised nodes cannot affect the rest of the network. A non-interference result expressed in terms of information flow is also proved.
1 Introduction
Wireless technology spans from user applications such as personal area networks, ambient intelligence, and wireless local area networks, to real-time applications, such as cellular and ad hoc networks. A mobile ad hoc network (MANET) is a self-configuring network of mobile devices (also called nodes) communicating with each other via radio transceivers without relying on any base station. Lack of a fixed networking infrastructure, high mobility of the devices, shared wireless medium, cooperative behaviour, and physical vulnerability are some of the features that make the design of a security scheme for mobile ad hoc networks challenging.

Access control is a well-established technique for limiting access to the resources of a system to authorised programs, processes, users or other systems. Access control systems typically authenticate principals and then solicit access to resources. They rely on the definition of specific permissions, called access policies, which are recorded in some data structure such as Access Control Lists (ACLs). ACLs work well when access policies are set in a centralised manner. However, they are less suited to ubiquitous systems where the number of users may be very large (think of sensor networks) and/or continuously changing. In these scenarios users may be potentially unknown and, therefore, untrusted. In order to overcome these limitations, Blaze et al. [1] have introduced the notion of
This work has been partially supported by the national MIUR Project SOFT.
Decentralised Trust Management as an attempt to define a coherent framework in which safety-critical decisions are based on trust policies relying on partial knowledge.

Trust formalisation is the subject of several academic works. According to [2], trust is the quantified belief by a trustor, the trusting party, with respect to the competence, honesty, security and dependability of a trustee, the trusted party, within a specified context. Trust information is usually represented as a collection of assertions on the reliability of the parties. The trust establishment process includes the specification of valid assertions, and their generation, distribution, collection and evaluation. Trust assertions may be uncertain, incomplete, stable and long term. Trust evaluation is performed by applying specific policies to assertions; the result is a trust relation between the trustor and the trustee. According to their scope and kind of trust evidence, trust frameworks can be divided into two categories: certificate-based and behaviour-based. In the former, trust relations are usually based on certificates, to be spread, maintained and managed either independently or cooperatively. In behaviour-based frameworks, each node performs trust evaluation based on continuous monitoring of misbehaviours of neighbours (direct trust). Misbehaviours typically include dropping, modification, and misrouting of packets at the network layer. However, trust evaluation may also depend on node reputation. Node reputation usually comes from other nodes (indirect trust) and does not reflect direct experience of the interested node. In history-based trust models, node reputation may also depend on past behaviours.

The characteristics of mobile ad hoc networks pose a number of challenges when designing an appropriate trust model for them. Due to the lack of a fixed network infrastructure, trust models for MANETs must be decentralised and should support cooperative evaluation, according to the diversity in roles and capabilities of nodes. There are various threats to ad hoc networks, of which the most interesting and important is node subversion. In this kind of attack, a node may be reverse-engineered, and replaced by a malicious node. A bad node can communicate with any other node, good or bad. Bad nodes may have access to the keys of all other bad nodes, whom they can impersonate if they wish. They do not execute the authorised software and thus do not necessarily follow protocols to identify misbehaviour, revoke other bad nodes, vote honestly or delete keys shared with revoked nodes. So, trust frameworks for ad hoc networks should support node revocation to isolate malicious nodes.

Another key feature of MANETs is their support for node mobility: devices move while remaining connected to the network, breaking links with old neighbours and establishing fresh links with new devices. This makes security even more challenging, as the compromise of a legitimate node or the insertion of a malicious node may go unnoticed in such a dynamic environment. Thus, a mobile node should acquire trust information on new neighbours, and remove trust information on old neighbours that cannot be monitored anymore.

In this paper, we propose a process calculus for mobile ad hoc networks which embodies a behaviour-based multilevel trust model. Our trust model supports
both direct trust, by monitoring nodes’ behaviour, and indirect trust, by collecting recommendations and spreading reputations. No information on past behaviours of nodes is recorded. We model our networks as multilevel systems where each device is associated with a security level depending on its role [3]. Thus, trust relations associate security levels to nodes. Process calculi have been recently used to model different aspects of wireless systems [4,5,6,7,8,9,10]. However, none of these papers addresses the notion of trust.

In our calculus, each node is equipped with a local trust store containing a set of assertions. These assertions supply trust information about the other nodes, according to a local security policy. Our calculus is not directly concerned with cryptographic underpinnings. However, we assume the presence of a hierarchical key generation and distribution protocol [11]. Thus, messages are transmitted at a certain security level relying on an appropriate set of cryptographic keys.

We provide the operational semantics of our calculus in terms of a labelled transition system. Our transitions are of the form M −λ→ρ N, indicating that the network M can perform the action λ, at security level ρ, evolving into the network N. For simplicity, our operational semantics does not directly express mobility. However, we can easily adapt the approach proposed in [9] to annotate our labelled transitions with the necessary information to represent node mobility.

Our calculus enjoys two desirable security properties: safety up to a security level and safety despite compromise. Intuitively, the first property means that only trusted nodes, i.e. nodes with an appropriate security level, may synchronise with other nodes. The second property says that bad (compromised) nodes, once detected, may not interact with good nodes.

A central concern in process calculi is to establish when two terms have the same observable behaviour. Behavioural equivalences are fundamental for justifying program transformations. Our program equivalence is a security variant of (weak) reduction barbed congruence, a branching-time contextually-defined program equivalence. Barbed equivalences [12] are simple and intuitive but difficult to use due to the quantification on all contexts. Simpler proof techniques are based on labelled bisimilarities [13], which are co-inductive relations that characterise the behaviour of processes using a labelled transition system. We define a labelled bisimilarity parameterised on security levels, proving that it represents an efficient proof method for our reduction barbed congruence. We apply our notion of bisimilarity to prove a non-interference property for our networks. Intuitively, a network is interference free if its low-security-level behaviour is not affected by any activity at high security level.
2 A Behaviour-Based Multilevel Decentralised Trust Model
In our framework each node comes together with an extra component called trust manager. A trust manager consists of two main modules: the monitoring
module and the reputation handling module. The first one monitors the behaviour of neighbours, while the second one collects/spreads recommendations and evaluates trust information about other nodes using a local security policy. The continuous work of the trust manager results in a local trust store T containing the up-to-date trust relations. Trust information may change over time due to mobility, temporary disconnections, recommendations, etc. As a consequence, trust knowledge may be uncertain and incomplete.

The main objective of the model is to isolate bad nodes, i.e. nodes which do not behave as expected. For this reason, we support node revocation. This may happen when a node detects a misbehaviour of another node, and spreads this information to its neighbours. Repudiable evidence enables bad nodes to falsely accuse good nodes. Hence, it would be foolish to design a simple decision mechanism that revokes any node accused of misbehaviour. Thus, recommendations are always evaluated using a local security policy implementing an appropriate metric.

The basic elements of our model are nodes (or principals), security levels, assertions, policies and trust stores. We use k, l, m, n, . . . to range over the set Nodes of node names. We assume a complete lattice (S, <) of security levels: bad < trust < low < high. We use the Greek letter ρ for security levels belonging to S. The set of assertions is defined as Assertions = Nodes × Nodes × S. Thus, an assertion ⟨m, n, ρ⟩ says that a node m trusts a node n at security level ρ. A local trust store T contains a set of assertions, formally T ∈ ℘(Assertions). A node can receive new assertions from its neighbours. These assertions will be opportunely stored in the local trust store by the trust manager, according to a local security policy P. A security policy P is a function that evaluates the current information collected by a node and returns a set of consistent assertions, formally P : ℘(Assertions) → ℘(Assertions). For simplicity, we assume that all nodes have the same security policy P. Notice that the outcome of the policy function may differ from one node to another, as the computation depends on the local knowledge of nodes.

Thus, when a node m (the trustor) wants to know the security level of a node n (the trustee), it has to check its own trust store T. For convenience, we often use T as a partial function of type Nodes → Nodes → S, writing T(m, n) = ρ if m considers n a node of security level ρ. If ρ = bad then m considers n a bad (unreliable) node and stops any interaction with it.

Messages exchanged among nodes are assumed to be encrypted using a hierarchical key generation and distribution protocol [14]. The trust manager may determine a key redistribution when a security level is compromised. More generally, re-keying [15] allows a subset of keys to be refreshed when one or more nodes join or leave the network; in this manner newly joined nodes are unable to decrypt past traffic, while evicted nodes are unable to decrypt future traffic. As shown in [14], re-keying may be relatively inexpensive if based on “low-cost” hashing operators.
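As a purely illustrative sketch (the paper keeps the policy P abstract), the trust store and a very simple policy could be represented as follows; the ordering of levels is taken from the text, while the “keep the most recent assertion per pair” policy is an assumption of this example, not part of the model.

```python
from enum import IntEnum

class Level(IntEnum):
    """Security levels, ordered as bad < trust < low < high."""
    BAD = 0
    TRUST = 1
    LOW = 2
    HIGH = 3

def policy(assertions):
    """Hypothetical local policy P: for every (trustor, trustee) pair keep
    only the most recently received assertion (later assertions win)."""
    latest = {}
    for (m, n, rho) in assertions:          # an assertion <m, n, rho>
        latest[(m, n)] = rho
    return {(m, n, rho) for (m, n), rho in latest.items()}

def lookup(store, m, n):
    """T(m, n): the level at which m trusts n, if an assertion is present."""
    for (m2, n2, rho) in store:
        if (m2, n2) == (m, n):
            return rho
    return None

# Node m's local trust store, built by applying P to the collected assertions.
T_m = policy([("m", "n", Level.HIGH), ("m", "k", Level.TRUST)])
print(lookup(T_m, "m", "k"))   # Level.TRUST
```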
3 The Calculus
In Table 1 we define the syntax of our calculus in a two-level structure: a lower one for processes and an upper one for networks.

Table 1. The Syntax

Values:
  u ::= v                 closed value
      | x                 variable

Networks:
  M, N ::= 0              empty network
        | M | N           parallel composition
        | n[P]T           node

Processes:
  P, Q ::= nil            termination
        | σ!⟨ũ⟩.P         broadcast
        | σ?(x̃).P         receiver
        | [ũ1 = ũ2]P, Q   matching
        | H⟨ũ⟩            recursion

We use letters k, l, m, n, . . . for node names. The Greek symbol σ ranges over the security levels low and high, the only ones which are directly used by programmers. We use letters x, y, z for variables, u for values, and v and w for closed values, i.e. values that do not contain free variables. We write ũ to denote a tuple u1, . . . , uk of values.

Networks are collections of nodes (which represent devices) running in parallel and using channels at different security levels to communicate with each other. We use the symbol 0 to denote an empty network. We write M | N for the parallel composition of two sub-networks M and N. We write n[P]T for a node named n (denoting its network address) executing the sequential process P, with a local trust store T.

Processes are sequential and live within the nodes. We write nil to denote the skip process. The sender process σ!⟨ṽ⟩.P can broadcast the value ṽ at security level σ, continuing as P. A message transmitted at security level ρ can be decrypted only by nodes at security level ρ or greater, according to the trust stores of both sender and receiver. Moreover, we assume that messages are always signed by transmitters. The receiver process σ?(x̃).P listens on the channel for incoming communications at security level σ. Upon reception, the receiver process evolves into P, where the variables of x̃ are replaced with the message ṽ. We write {ṽ/x̃}P for the substitution of the variables x̃ with the values ṽ in P. Process [ṽ = w̃]P, Q is the standard “if then else” construct: it behaves as P if ṽ = w̃, and as Q otherwise. We write H⟨ṽ⟩ to denote a process defined via a definition H(x̃) def= P, with |x̃| = |ṽ|, where x̃ contains all variables that appear free in P. Defining equations provide guarded recursion, since P may contain only guarded occurrences of process identifiers. In process σ?(x̃).P the variables x̃ are bound in P. This gives rise to the standard notion of α-conversion and
free and bound variables. We assume there are no free variables in our networks. The absence of free variables in networks is trivially maintained as the network evolves. Given a network M, nds(M) returns the set of the names of the nodes which constitute the network M. Notice that, as network addresses are unique, we assume that there cannot be two nodes with the same name in the same network. We write Πi Mi to denote the parallel composition of all sub-networks Mi. Finally, we define structural congruence, written ≡, as the smallest congruence which is a commutative monoid with respect to the parallel operator.

3.1 The Operational Semantics
We give the operational semantics of our calculus in terms of a Labelled Transition System (LTS). We have divided our LTS into two sets of rules. Table 2 contains the rules to model the synchronisation between sender and receivers. Table 3 contains the rules to model trust management, i.e. the actions of the trust manager components.

Our transitions are of the form M −λ→ρ M′, indicating that the network M can perform the action λ, at security level ρ, evolving into the network M′. By construction, in such a transition, ρ will always be different from bad. More precisely, ρ will be equal to low for low-level-security transmissions, and equal to high for high-level-security transmissions. If ρ = trust then the transition models some aspects of trust management and involves all trusted nodes. The label λ ranges over the actions m!ṽ D, m?ṽ D, and τ. The action m!ṽ D models the transmission of message ṽ, originating from node m, and addressed to the set of nodes in D. The action m?ṽ D represents the reception of a message ṽ, sent by m, and received by the nodes in D. We sometimes write m?ṽ n as an abbreviation for m?ṽ {n}. The action τ models silent actions, as usual.
Table 2. LTS - Synchronisation

(Snd)      D := {n : T(m, n) ≥ σ}
           -----------------------------------------------
           m[σ!⟨ṽ⟩.P]T −m!ṽ D→σ m[P]T

(Rcv)      T(n, m) ≥ σ    |x̃| = |ṽ|
           -----------------------------------------------
           n[σ?(x̃).P]T −m?ṽ n→σ n[{ṽ/x̃}P]T

(RcvPar)   M −m?ṽ D→ρ M′    N −m?ṽ D′→ρ N′    D″ := D ∪ D′
           -----------------------------------------------
           M | N −m?ṽ D″→ρ M′ | N′

(Sync)     M −m!ṽ D→ρ M′    N −m?ṽ D′→ρ N′    D′ ⊆ D
           -----------------------------------------------
           M | N −m!ṽ D→ρ M′ | N′

(Par)      M −λ→ρ M′    sender(λ) ∉ nds(N)
           -----------------------------------------------
           M | N −λ→ρ M′ | N
The function sender(·) applied to an action returns the name of the sender: thus sender(m!ṽ D) = sender(m?ṽ D) = m, whereas sender(τ) = ⊥. Let us comment on the rules of Table 2. Rule (Snd) models a node m which broadcasts a message ṽ at security level σ; the set D contains the nodes at security level at least σ, according to the trust store of m. Rule (Rcv) models a node n receiving a message ṽ, sent by node m, at security level σ. Node n receives the message from m only if it trusts m at security level σ. Rule (RcvPar) serves to put together parallel nodes receiving from the same sender. If sender and receiver(s) trust each other there will be a synchronisation.¹ Rule (Sync) serves to synchronise the components of a network with a broadcast communication; the condition D′ ⊆ D ensures that only authorised recipients can receive the transmitted value. Rule (Par) is standard in process calculi. Notice that using rule (Par) we can model situations where potential receivers do not necessarily receive the message, either because they are not in the transmission range of the transmitter or simply because they lose the message. Rules (Sync), (RcvPar) and (Par) have their symmetric counterparts.

Example 1. Let us consider the network:

M def= k[σ?(x̃).Pk]Tk | l[σ?(x̃).Pl]Tl | m[σ!⟨ṽ⟩.Pm]Tm | n[σ?(x̃).Pn]Tn

where Tk(k, m) ≥ σ, Tl(l, m) < σ, Tm(m, n) = Tm(m, l) ≥ σ, Tm(m, k) < σ and Tn(n, m) ≥ σ. In this configuration, node m broadcasts message ṽ at security level σ, knowing that the nodes allowed to receive the message at that security level are n and l. However, node l does not trust m at security level σ. Thus, n is the only node that may receive the message. By an application of rules (Snd), (Rcv), (Par), and (Sync) we have:

M −m!ṽ D→σ k[σ?(x̃).Pk]Tk | l[σ?(x̃).Pl]Tl | m[Pm]Tm | n[{ṽ/x̃}Pn]Tn.

Now, let us comment on the rules of Table 3 modelling trust management. Rule (Susp) models direct trust. This happens when the monitoring module of a node m, while monitoring the activity of a trusted node n, detects a misbehaviour of n. In this case, node m executes two operations: (i) it implements node revocation by updating its trust store, according to its local policy; (ii) it broadcasts the corresponding information to inform all trusted nodes about the misbehaviour of n. Notice that this transmission is not under the control of the code of m but rather depends on the reputation handling module. Notice also that the transmission is addressed to all trusted nodes; that is why the transmission fires at security level trust. Rule (SndRcm) models indirect trust by sending a recommendation. This may happen, for example, when a node moves and asks for recommendations on new neighbours. Again, recommendations are addressed to all trusted nodes, according to the trust knowledge of the recommender. Rule (RcvRcm) models the reception of a recommendation from a trusted node: a
Here, we abstract on the actual behaviour of receivers as they verify the identity of the sender and discard unauthorised messages.
164
Table 3. LTS - Trust Management

(Susp)     T(m, n) > bad    ṽ := ⟨n, bad⟩    T′ := P(T ∪ ⟨m, ṽ⟩)    D := {n′ : T′(m, n′) > bad}
           -----------------------------------------------
           m[P]T −m!ṽ D→trust m[P]T′

(SndRcm)   T(m, n) = ρ    ṽ := ⟨n, ρ⟩    D := {n′ : T(m, n′) > bad}
           -----------------------------------------------
           m[P]T −m!ṽ D→trust m[P]T

(RcvRcm)   T(n, m) > bad    ṽ := ⟨l, ρ⟩    T′ := P(T ∪ ⟨m, ṽ⟩)
           -----------------------------------------------
           n[P]T −m?ṽ n→trust n[P]T′

(Loss)     T′ ⊆ T    T″ := P(T′)
           -----------------------------------------------
           n[P]T −τ→trust n[P]T″
new trust table T′ is calculated, applying the local policy to T ∪ ⟨m, ṽ⟩. Rule (Loss) models loss of trust information. This happens, for instance, when a node moves, changing its neighbourhood. In this case, assertions concerning old neighbours must be deleted, as they cannot be directly verified. The consistency of the remaining assertions must be maintained by applying the security policy.

Example 2. Let us show how direct and indirect trust work. Let us consider the network:

M def= k[Pk]Tk | l[Pl]Tl | m[Pm]Tm | n[Pn]Tn

where Tk(k, m) ≥ trust, Tl(l, m) = bad, Tm(m, n) = Tm(m, l) = Tm(m, k) ≥ trust, and Tn(n, m) ≥ trust. Now, if node m observes that node k is misbehaving, then (i) it adds an assertion ⟨m, k, bad⟩ to its local knowledge; (ii) it broadcasts the information to its neighbours. Thus, by an application of rules (Susp), (RcvRcm), (Par), and (Sync) we have

M −m!ṽ D→trust k[Pk]T′k | l[Pl]Tl | m[Pm]T′m | n[Pn]T′n.

Notice that since l does not trust m, only node n (but also the bad node k) will receive m’s recommendation. Moreover, the local knowledge of m and n will

Table 4. LTS - Matching and recursion
(Then)   n[P]T −λ→ρ n[P′]T
         -----------------------------------------------
         n[[ṽ = ṽ]P, Q]T −λ→ρ n[P′]T

(Else)   n[Q]T −λ→ρ n[Q′]T    ṽ1 ≠ ṽ2
         -----------------------------------------------
         n[[ṽ1 = ṽ2]P, Q]T −λ→ρ n[Q′]T

(Rec)    n[{ṽ/x̃}P]T −λ→ρ n[P′]T    H(x̃) def= P
         -----------------------------------------------
         n[H⟨ṽ⟩]T −λ→ρ n[P′]T
change, according to the local policy. This is a case of direct trust for m, and indirect trust for n. The security level that n will assign to k will actually depend on the local policy of n. Finally, Table 4 contains the standard rules for matching and recursion.
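To illustrate the flavour of rules (Susp) and (RcvRcm) on Example 2, here is a small, purely illustrative sketch; the concrete policy used below (keep the most pessimistic level per pair) is an assumption of this example, since the paper keeps P abstract. Node m suspects k, updates its own store, and broadcasts the assertion, which every node that trusts m incorporates through its local policy.

```python
# Trust levels, ordered bad < trust < low < high (illustrative encoding).
LEVELS = {"bad": 0, "trust": 1, "low": 2, "high": 3}

def policy(store):
    """Hypothetical policy P: for every (trustor, trustee) pair keep the
    minimum (most pessimistic) level among the collected assertions."""
    merged = {}
    for (a, b, lvl) in store:
        if (a, b) not in merged or LEVELS[lvl] < LEVELS[merged[(a, b)]]:
            merged[(a, b)] = lvl
    return {(a, b, lvl) for (a, b), lvl in merged.items()}

# Example 2 configuration (only the assertions relevant to the story).
stores = {
    "k": {("k", "m", "trust")},
    "l": {("l", "m", "bad")},
    "m": {("m", "n", "trust"), ("m", "l", "trust"), ("m", "k", "trust")},
    "n": {("n", "m", "trust")},
}

# (Susp): m detects a misbehaviour of k, adds <m, k, bad> and applies P ...
assertion = ("m", "k", "bad")
stores["m"] = policy(stores["m"] | {assertion})

# ... and broadcasts the assertion; (RcvRcm): every node that trusts m
# (level > bad) incorporates it through its own policy.
for node in ("k", "l", "n"):
    trusts_m = any(a == node and b == "m" and LEVELS[lvl] > LEVELS["bad"]
                   for (a, b, lvl) in stores[node])
    if trusts_m:
        stores[node] = policy(stores[node] | {assertion})

print(stores["n"])   # n now also records <m, k, bad>; l's store is unchanged.
```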
4 Node Mobility
In wireless networks node mobility is associated with the ability of a node to access telecommunication services at different locations from different nodes. Node mobility in ad hoc networks introduces new security issues related to user credential management, indirect trust establishment and mutual authentication between previously unknown and hence untrusted nodes. Thus, mobile ad hoc networks has turned to be a challenge for automated verification and analysis techniques. After the first works on model checking of (stationary) ad hoc networks [16], Nanz and Hankin [5] have proposed a process calculus where topology changes are abstracted into a fixed representation. This representation, called network topology, is essentially a set of connectivity graphs denoting the possible connectivities within the nodes of the network. Table 5. LTS - Synchronisation with network restrictions
(SndR)
D := {n : T (m, n) ≥ σ}
(RcvR)
m!˜ v D
m[σ!˜ v .P ]T −−−−−− →σ,∅ m[P ]T m?˜ v D
(RcvParR)
M −−−−−− →ρ,C1 M
(SyncR)
M −−−−−− →ρ,C1 M
T (n, m)≥σ
|˜ x|=|˜ v| m?˜ v n
P :={v˜/x˜ }P
n[σ?(˜ x).P ]T −−−−−− →σ,(n,m) n[P ]T
m?˜ v D := D ∪ D N −−−−−−− →ρ,C2 N D
m?˜ v D
M | N −−−−−− →ρ,C1 ∪C2 M | N m!˜ v D
m?˜ v D
N −−−−−−− →ρ,C2 N D ⊆ D
m!˜ v D
M | N −−−−−− →ρ,C1 ∪C2 M | N λ
(ParR)
M −− →ρ,C M sender(λ) ∈ / nds(N ) λ
M | N −− →ρ,C M | N
As the reader may have noticed, our calculus does not directly model the network topology neither in the syntax nor in the semantics. However, it is very easy to add topology changes at semantics level, so that each state represents a set of valid topologies, and a network can be at any of those topologies at any time [9]. In Table 5 we rewrite the rules of Table 2 in the style of [9]. Rules λ
→ρ,C M , indicating that the network M can perform the are of the form M −− action λ, at security level ρ, under the network restriction C, evolving into the
166
M. Merro and E. Sibilio
network M . Thus, a network restriction C keeps track of the connections which are necessary for the transition to fire. The rules in Table 3 can be rewritten in a similar manner, except for rule (Loss) in which the network restriction is empty i.e. C = ∅. Example 3. Consider the same network given in the Example 1. Then by applying rules (SndR), (RcvR), (ParR), and (SyncR) we have m!˜ v D
→σ,{(n,m)} k[σ?(˜ x).Pk ]Tk | l[σ?(˜ x).Pl ]Tl | m[Pm ]Tm | n[{v˜/x˜ }Pn ]Tn . M −−−−−− The transition is tagged with the network restriction {(n, m)}, as only node n has synchronised with node m. Notice that the rule (Loss) in Table 3 may indirectly affect future communications. In fact, if a trust information is lost then certain nodes may not be able of communicating anymore. The reader may have noticed that the rules of Table 5 do not use network restrictions in the premises. As a consequence, there is a straightforward operaλ
λ
→ρ and one of the form −− →ρ,C . tional correspondence between a transition −− Proposition 1 λ
→ρ M with λ ∈ {m!˜ v D, m?˜ v D} iff there exists a restriction C such 1. M −− λ
→ρ,C M and C ⊆ {(m, n) for all n ∈ D}. that M −− τ τ →ρ M iff M −− →ρ,∅ M . 2. M −− Proof. By transition induction.
5
Safety Properties
In this section, we show how to guarantee in our setting that only authorised nodes receive sensible information. We define a notion of safety up to a security level to describe when a communication is safe up to a certain security level. Definition 1 (Safety up to a security level). A node m transmitting at level ρ may only synchronise with a node n receiving at level ρ or above, according to the local knowledge of m and n, respectively. Intuitively, Definition 1 says that a synchronisation at a certain security level ρ is safe if the involved parties trust each other at that security level. The safety property is then preserved at run time. m!˜ v D
→ρ M with Theorem 1 (Safety preservation). Let M −−−−−− M ≡ m[P ]T | i ni [Pi ]Ti and M ≡ m[P ]T | i ni [Pi ]T . i
= Pi , for some i, then T (m, ni ) ≥ ρ and Ti (ni , m) ≥ ρ. 1. If Pi 2. If Ti = Ti , for some i, then T (m, ni ) ≥ ρ and Ti (ni , m) ≥ ρ. m!˜ v D
→ρ M . Proof. By induction on the transition M −−−−−−
A Calculus of Trustworthy Ad Hoc Networks
167
A consequence of Theorem 1, is that (trusted) nodes never synchronise with untrusted nodes. In this manner, bad nodes (recognised as such) are isolated from the rest of the network. m!˜ v D
Corollary 1 (Safety despite compromise). Let M −−−−−− →ρ M such that M ≡ m[P ]T | ni [Pi ]Ti and M ≡ m[P ]T | ni [Pi ]T . i
i
i
If T (m, ni )=bad or Ti (ni , m)=bad, for some i, then Pi =Pi and Ti =Ti .
6
Behavioural Semantics
Our main behavioural equivalence is σ-reduction barbed congruence, a variant of Milner and Sangiorgi’s (weak) barbed congruence [12] which takes into account security levels. Basically, two terms are barbed congruent if they have the same observables (called barbs) in all possible contexts, under all possible evolutions. For the definition of barbed congruence we need two crucial concepts: a reduction semantics to describe how a system evolves, and a notion of observable which says what the environment can observe in a system. From the LTS given in Section 3.1 it is easy to see that a network may evolves either because there is a transmission at a certain security level or because a node looses some trust information. Thus, we can define the reduction relation between networks using the following inference rules: τ
m!˜ v D
− ρ M (Red1) M −−−−−→ M M
(Red2)
M −− →trust M M M
We write ∗ to denote the reflexive and transitive closure of . In our calculus, we have both transmission and reception of messages although only transmissions may be observed. In fact, in a broadcasting calculus an observer cannot see whether a given process actually receives a broadcast synchronisation. In particular, if the node m[σ!˜ v .P ]T evolves into m[P ]T we do not know whether some potential recipient has synchronised with m. On the other hand, if a node n[σ?(˜ x).P ]T evolves into n[{v˜/x˜}P ]T , then we can be sure that some trusted node has transmitted a message v˜ to n at security level σ. Definition 2 (σ-Barb). We write M ↓σn if M ≡ m[σ!˜ v .P ]T | N , for some m, N, v˜, P, T such that n ∈ / nds(M ), and T (m, n) ≥ σ. We write M ⇓σn if M ∗ M ↓σn for some network M . The barb M ⇓σn says that there is a potential transmission at security level σ, originating from M , and that may reach the node n in the environment. In the sequel, we write R to denote binary relations over networks. Definition 3 (σ-Barb preserving). A relation R is said to be σ-barb preserving if whenever M R N it holds that M ↓σn implies N ⇓σn .
168
M. Merro and E. Sibilio
Definition 4 (Reduction closure). A relation R is said to be reduction closed if M R N and M M imply there is N such that N ∗ N and M R N . As we are interested in weak behavioural equivalences, the definition of reduction closure is given in terms of weak reductions. Definition 5 (Contextuality). A relation R is said to be contextual if M R N implies that M | O R N | O, for all networks O. Finally, everything is in place to define our σ-reduction barbed congruence. Definition 6 (σ-Reduction barbed congruence). The σ-reduction barbed congruence, written ∼ =σ , is the largest symmetric relation over networks which is σ-barb preserving, reduction closed and contextual.
7 Bisimulation Proof Method
The definition of σ-reduction barbed congruence is simple and intuitive. However, due to the universal quantification on parallel contexts, it may be quite difficult to prove that two terms are barbed congruent. Simpler proof techniques are based on labelled bisimilarities. In the sequel we define an appropriate notion of bisimulation. As a main result, we prove that our labelled bisimilarity is a proof technique for our σ-reduction barbed congruence. In general, a bisimulation describes how two terms (in our case networks) can mimic each other's actions. First of all we have to distinguish between transmissions which may be observed and transmissions which may not be observed by the environment.

(Shh)  if M −−m!ṽ▹D−→ρ M′ with D ⊆ nds(M) and ρ ≠ bad, then M −−τ−→ρ M′.
(Obs)  if M −−m!ṽ▹D−→ρ M′ and D̂ := D \ nds(M) ≠ ∅, then M −−m!ṽ▹D̂−→ρ M′.
Rule (Shh) models transmissions that cannot be observed because none of the potential receivers are in the environment. Notice that the security levels of τ-actions are not related to the transmissions they originate from. Rule (Obs) models a transmission, at security level ρ, of a message ṽ, from a sender m, that may be received by the nodes of the environment contained in D̂. Notice that the rule (Obs) can only be applied at the top level of a derivation tree. In fact, we cannot use this rule together with rule (Par) of Table 2, because λ does not range over the new action. In the sequel, we use the metavariable α to range over the following actions: τ, m?ṽ▹D, and m!ṽ▹D. Since we are interested in weak behavioural equivalences, which abstract over τ-actions, we introduce a standard notion of weak action: we write ==⇒ρ to denote the reflexive and transitive closure of −−τ−→ρ; we also write ==α=⇒ρ to denote ==⇒ρ −−α−→ρ ==⇒ρ; finally, ==α̂=⇒ρ denotes ==⇒ρ if α = τ and ==α=⇒ρ otherwise.
Definition 7 (δ-Bisimilarity). The δ-bisimilarity, written ≈δ, is the largest symmetric relation over networks such that whenever M ≈δ N, if M −−α−→ρ M′ with ρ ≤ δ, then there exists a network N′ such that N ==α̂=⇒ρ N′ and M′ ≈δ N′.

This definition is inspired by that proposed in [17]. Intuitively, two networks are δ-bisimilar if they cannot be distinguished by any observer that cannot perform actions at a security level greater than δ.

Theorem 2 (≈δ is contextual). Let M and N be two networks such that M ≈δ N. Then M | O ≈δ N | O for all networks O.

Proof. We prove that the relation
S := { (M | O, N | O) : M ≈δ N, for all networks O }
is a δ-bisimulation.
Theorem 3 (Soundness). Let M and N be two networks such that M ≈δ N. Then M ≅σ N, for σ ≤ δ.

Proof. It is easy to verify that δ-bisimilarity is σ-barb preserving and reduction closed, by definition. Contextuality follows by Theorem 2.

Remark 1. For the sake of analysis, we can define δ-bisimilarity using the labelled transition system with network restrictions of Table 5. However, by Proposition 1 the resulting bisimilarity would not change.
8 Non-interference
The seminal idea of non-interference [18] aims at assuring that "variety in a secret input should not be conveyed to public output". In a multilevel computer system [3] this property says that information can only flow from low levels to higher ones. The first taxonomy of non-interference-like properties has been uniformly defined in a CCS-like process calculus with high-level and low-level processes, according to the level of actions that can be performed [19]. To detect whether an incorrect information flow (i.e. from high level to low level) has occurred, a particular non-interference-like property has been defined, the so-called Non Deducibility on Composition (NDC). This property basically says that a process is secure with respect to wrong information flows if its low-level behaviour is independent of changes to its high-level behaviour. Here, we prove a non-interference result using as process equivalence the notion of δ-bisimilarity previously defined. Formally, high-level behaviours can be arbitrarily changed without affecting low-level equivalences. Definition 8 describes what high-level behaviour means in our setting. We recall that we assumed the presence of a trust manager component for each node to manage trust information. As a consequence, actions at security level trust do not depend on the syntax of the processes, as they depend on the trust manager. These actions can fire at any step of the computation and cannot be predicted in advance.
Definition 8 (δ-high level network). A network H is a δ-high level network, written H ∈ Hδ, if whenever H −−λ−→δ′ H′ then either δ′ = trust or δ′ > δ. Moreover, H′ ∈ Hδ.

The non-interference result can be stated as follows.

Theorem 4 (Non-interference). Let M and N be two networks such that M ≈δ N. Let H and K be two networks such that: (i) H, K ∈ Hδ, (ii) H ≈trust K, and (iii) nds(H) = nds(K). Then, M | H ≈δ N | K.

Proof. We prove that the relation
{ (M | H, N | K) : H, K ∈ Hδ, M ≈δ N, H ≈trust K and nds(H) = nds(K) }
is a δ-bisimulation.
9 Related Work
Formal methods have been successfully applied to the analysis of network security (see, for instance, [20,21,22,23,24]). Komarova and Riguidel [25] have proposed a centralised trust-based access control mechanism for ubiquitous environments. The goal is to allow a service provider to evaluate the trustworthiness of each potential client. Crafa and Rossi [17] have introduced a notion of controlled information release for a typed version of the π-calculus extended with declassified actions. The controlled information release property scales to non-interference when downgrading is not allowed. They provide various characterisations of controlled release, based on typed behavioural equivalence, parameterised on security levels, to model observers at a certain security level. Hennessy [26] has proposed a typed version of the asynchronous π-calculus in which I/O types are associated to security levels. Typed equivalences are then used to prove a non-interference result. As regards process calculi for wireless systems, Mezzetti and Sangiorgi [4] have proposed a calculus to describe interferences in wireless systems. Nanz and Hankin [5] have introduced a calculus for mobile wireless networks for the specification and security analysis of communication protocols. Merro [7] has proposed a behavioural theory for MANETs. Godskesen [8] has proposed a calculus for mobile ad hoc networks with a formalisation of an attack on the cryptographic routing protocol ARAN. Singh et al. [6] have proposed the ω-calculus for modelling the AODV routing protocol. Ghassemi et al. [9] have proposed a process algebra where topology changes are implicitly modelled in the semantics. Merro and Sibilio [27] have proposed a timed calculus for wireless systems focusing on the notion of communication collision. In trust models for ad hoc networks, the timing factor is important because more recent trust information should have more influence on the trust establishment process. More generally, a notion of time would make it possible to record past behaviours. Finally, Godskesen and Nanz [10] have proposed a simple timed calculus for wireless systems to express a wide range of mobility models.
None of the calculi mentioned above deal with trust. Carbone et al. [28] have introduced ctm, a process calculus which embodies the notion of trust for ubiquitous systems. In ctm each principal is equipped with a policy, which determines its legal behaviour, formalised using a Datalog-like logic, and with a protocol, in the process algebra style, which allows interactions between principals and the flow of information from principals to policies. In [29] Martinelli uses a cryptographic variant of CCS to describe and analyse different access control policies.
References 1. Blaze, M., Feigenbaum, J., Lacy, J.: Decentralized Trust Management. In: Symposium on Security and Privacy, pp. 164–173. IEEE Computer Society, Los Alamitos (1996) 2. Grandison, T.W.A.: Trust Management for Internet Applications. PhD thesis, Department of Computing, University of London (2003) 3. Bell, D.E., LaPadula, L.J.: Secure Computer System: Unified Exposition and Multics Interpretation. Technical Report MTR-2997, MITRE Corporation (1975) 4. Mezzetti, N., Sangiorgi, D.: Towards a Calculus For Wireless Systems. Electronic Notes in Theoretical Computer Science 158, 331–353 (2006) 5. Nanz, S., Hankin, C.: A Framework for Security Analysis of Mobile Wireless Networks. Theoretical Computer Science 367(1-2), 203–227 (2006) 6. Singh, A., Ramakrishnan, C.R., Smolka, S.A.: A Process Calculus for Mobile Ad Hoc Networks. In: Lea, D., Zavattaro, G. (eds.) COORDINATION 2008. LNCS, vol. 5052, pp. 296–314. Springer, Heidelberg (2008) 7. Merro, M.: An Observational Theory for Mobile Ad Hoc Networks (full paper). Information and Computation 207(2), 194–208 (2009) 8. Godskesen, J.: A Calculus for Mobile Ad Hoc Networks. In: Murphy, A.L., Vitek, J. (eds.) COORDINATION 2007. LNCS, vol. 4467, pp. 132–150. Springer, Heidelberg (2007) 9. Ghassemi, F., Fokkink, W., Movaghar, A.: Equational Reasoning on Ad Hoc Networks. In: Sirjani, M. (ed.) FSEN 2009. LNCS, vol. 5961, pp. 113–128. Springer, Heidelberg (2010) 10. Godskesen, J.C., Nanz, S.: Mobility Models and Behavioural Equivalence for Wireless Networks. In: Field, J., Vasconcelos, V.T. (eds.) COORDINATION 2009. LNCS, vol. 5521, pp. 106–122. Springer, Heidelberg (2009) 11. Huang, D., Medhi, D.: A Secure Group Key Management Scheme for Hierarchical Mobile Ad Hoc Networks. Ad Hoc Networks 6(4), 560–577 (2008) 12. Milner, R., Sangiorgi, D.: Barbed Bisimulation. In: Kuich, W. (ed.) ICALP 1992. LNCS, vol. 623, pp. 685–695. Springer, Heidelberg (1992) 13. Milner, R.: Communication and Concurrency. Prentice-Hall, Englewood Cliffs (1989) 14. Shehab, M., Bertino, E., Ghafoor, A.: Efficient Hierarchical Key Generation and Key Diffusion for Sensor Networks. In: SECON, pp. 76–84. IEEE Communications Society, Los Alamitos (2005) 15. Di Pietro, R., Mancini, L.V., Law, Y.W., Etalle, S., Havinga, P.J.M.: LKHW: A Directed Diffusion-Based Secure Multicast Scheme for Wireless Sensor Networks. In: ICPP Workshops 2003, pp. 397–413. IEEE Computer Society, Los Alamitos (2003)
16. Bhargavan, K., Obradovic, D., Gunter, C.A.: Formal Verification of Standards for Distance Vector Routing Protocols. Journal of the ACM 49(4), 538–576 (2002) 17. Crafa, S., Rossi, S.: Controlling Information Release in the π-calculus. Information and Computation 205(8), 1235–1273 (2007) 18. Goguen, J.A., Meseguer, J.: Security Policies and Security Models. In: IEEE Symposium on Security and Privacy, pp. 11–20 (1982) 19. Focardi, R., Gorrieri, R.: A Classification of Security Properties for Process Algebras. Journal of Computer Security 3(1), 5–33 (1995) 20. Reitman, R., Andrews, G.: An Axiomatic Approach to Information Flow in Programs. ACM Transactions on Programming Languages and Systems 2(1), 56–76 (1980) 21. Smith, G., Volpano, D.: Secure Information Flow in a Multi-threaded Imperative Language. In: Proc. 25th POPL, pp. 355–364. ACM Press, New York (1998) 22. Heintz, N., Riecke, J.G.: The SLam Calculus: Programming with Secrecy and Integrity. In: Proc. 25th POPL, pp. 365–377. ACM Press, New York (1998) 23. Bodei, C., Degano, P., Nielson, F., Nielson, H.R.: Static Analysis for the pi-Calculus with Applications to Security. Information and Computation 168(1), 68–92 (2001) 24. Boudol, G., Castellani, I.: Noninterference for Concurrent Programs and Thread Systems. Theoretical Computer Science 281(1-2), 109–130 (2002) 25. Komarova, M., Riguidel, M.: Adjustable Trust Model for Access Control. In: Rong, C., Jaatun, M.G., Sandnes, F.E., Yang, L.T., Ma, J. (eds.) ATC 2008. LNCS, vol. 5060, pp. 429–443. Springer, Heidelberg (2008) 26. Hennessy, M.: The Security pi-calculus and Non-Interference. Journal of Logic and Algebraic Programming 63(1), 3–34 (2005) 27. Merro, M., Sibilio, E.: A Timed Calculus for Wireless Systems. In: Arbab, F., Sirjani, M. (eds.) FSEN 2009. LNCS, vol. 5961, pp. 228–243. Springer, Heidelberg (2010) 28. Carbone, M., Nielsen, M., Sassone, V.: A Calculus for Trust Management. In: Lodaya, K., Mahajan, M. (eds.) FSTTCS 2004. LNCS, vol. 3328, pp. 161–173. Springer, Heidelberg (2004) 29. Martinelli, F.: Towards an Integrated Formal Analysis for Security and Trust. In: Steffen, M., Zavattaro, G. (eds.) FMOODS 2005. LNCS, vol. 3535, pp. 115–130. Springer, Heidelberg (2005)
Comparison of Cryptographic Verification Tools Dealing with Algebraic Properties

Pascal Lafourcade, Vanessa Terrade, and Sylvain Vigier

Université Grenoble 1, CNRS, Verimag, France
{firstname.lastname}@imag.fr

Abstract. Recently Küsters et al. proposed two new methods using ProVerif for analyzing cryptographic protocols with Exclusive-Or and Diffie-Hellman properties. Some tools, for instance CL-Atse and OFMC, are able to deal with Exclusive-Or and Diffie-Hellman. In this article we compare the time efficiency of these tools when verifying protocols from the literature that are designed with such algebraic properties.
1 Introduction
The use of cryptographic primitives for encoding secret data is not sufficient to ensure security. For example, even with encrypted data there exist security flaws: an intruder may be able to discover information that should remain secret between two participants. As a consequence, an important research activity on this topic has led to the development of different tools for verifying cryptographic protocols. One of the main properties required of cryptographic protocols is secrecy, which means that secret data generated by an honest agent should not be learnt by an intruder. Another important property is authentication, which means that every party can authenticate the party with whom they are executing the protocol. Over the last decades many automatic tools based on formal analysis techniques have been presented for verifying cryptographic protocols [2, 8, 11, 16, 19, 31, 33, 35, 41, 22]. All these tools use the so-called Dolev-Yao intruder model. This modelling of the adversary offers a good level of abstraction and allows such tools to perform a formal analysis. All these works use the perfect encryption hypothesis, which means that the only way to decrypt a ciphertext is to know the inverse key. This hypothesis abstracts the cryptography in order to detect "logical flaws" due to all possible interleavings of different executions of the protocol. To relax this assumption, much work has been done on verifying protocols under equational theories, like Exclusive-Or. In [18] we list protocols using algebraic properties by design and protocols that are safe without considering any algebraic property but that are flawed if we consider an algebraic property. All these examples are evidence that relaxing the perfect encryption hypothesis by considering equational theories is an important issue in security. In order to achieve this goal some tools have been developed for handling some algebraic properties [7, 8, 22, 27, 28, 43].
This work was supported by ANR SeSur SCALP, SFINCS, AVOTE.
Contribution: We compare tools which are able to deal with algebraic properties for the verification of cryptographic protocols. More specifically we look at the Exclusive-Or property and the so-called Diffie-Hellman property, since these are the most frequently used. In order to verify cryptographic protocols using such properties we use CL-Atse and OFMC, two tools of the Avispa platform. Both CL-Atse and OFMC analyze protocols with such algebraic properties for a bounded number of sessions. On the other hand ProVerif can verify cryptographic protocols for an unbounded number of sessions, but is not able to deal with such properties. However, recently Küsters et al. proposed in [27, 28] two methods for transforming a protocol description using the Exclusive-Or and Diffie-Hellman properties into a ProVerif input file. Hence we use these two "translators" in order to compare the following three tools: CL-Atse, OFMC and ProVerif. Our main contribution is to compare some protocols presented in [17] which use Exclusive-Or and Diffie-Hellman. We also compare the tools using a more complex e-auction protocol [24]. We have chosen this protocol because it is longer than the protocols in [17] and uses Exclusive-Or. We check secrecy and authentication properties for most of the protocols under study. Moreover, for the e-auction protocol, we analyze the non-repudiation property, meaning that a participant cannot claim that he never did something. Non-repudiation is a property that is often involved in e-commerce protocols, because the seller or the bank usually do not want a customer to be allowed to deny a transaction. For modelling the property of non-repudiation we follow the approach of L. Vigneron et al. [26], which expresses non-repudiation in terms of authentication. Using this method we are able to check this property with the three tools in the presence of algebraic properties.

State of the art: Some work exists on comparing the performance of different security protocol analysis tools. In [34] C. Meadows compares the approach used by NRL [33] and the one used by G. Lowe in FDR [37] on the Needham-Schroeder example [36]. The two tools are shown to be complementary even though NRL is considerably slower. In [4], a large set of protocols is verified using the AVISS tool and timing results are given. In [44], a similar test is performed using the successor of AVISS, called the Avispa tool suite [2]. As the AVISS/Avispa tools consist of respectively three and four back-end tools, these tests effectively work as a comparison between the back-end tools. No conclusions about the relative results are drawn in these articles. A qualitative comparison between Avispa and Hermes [11] is presented in [25]. This test leads to some generic advice for users of these two tools. It is not based on actual testing, but rather on conceptual differences between the modeling approaches of the tools. In [13], a number of protocol analysis tools are compared with respect to their ability to find a particular set of attacks. Recently in [20], we proposed a fair comparison of the following tools: Casper/FDR, ProVerif, Scyther and Avispa. We obtained for the first time an "efficiency ranking" of such tools. In that work we only looked at protocols without any algebraic property. Here we continue our investigations with algebraic properties.
Outline: In the next section we briefly describe the different tools used. In Section 3, for each analyzed protocol we present a short description and the attacks, if any, found by the tools. Finally, we conclude in Section 4 by discussing the results obtained in Section 3.
2 Tools
In this section, we present the three tools used for our comparison. We have chosen these tools because we have already compared them in [20] on a set of protocols without algebraic properties and because they deal with the two main algebraic properties used in cryptographic protocols: Exclusive-Or and Diffie-Hellman.

Avispa [2] (V: 1.1), the Automated Validation of Internet Security Protocols and Applications platform, consists of the following four tools: CL-Atse [43] developed by M. Turuani, Sat-MC [3] created by A. Armando et al., OFMC [6] designed by S. Mödersheim, and TA4SP [10] proposed by Y. Boichut. All these tools take the same input language called HLPSL (High Level Protocol Specification Language). We describe in a little more detail the two tools OFMC and CL-Atse, which deal with the selected algebraic properties.

CL-Atse (V: 2.2-5): the Constraint-Logic-based Attack Searcher applies constraint solving with simplification heuristics and redundancy elimination techniques [43].

OFMC (V: 2006/02/13): the On-the-Fly Model-Checker employs symbolic techniques to perform protocol falsification as well as bounded analysis, by exploring the state space in a demand-driven way [6].

ProVerif [8,9] (V: 1.16), developed by B. Blanchet, analyzes an unbounded number of sessions by using over-approximation and represents protocols by Horn clauses. ProVerif accepts two kinds of input files: Horn clauses and a subset of the Pi-calculus. The tool uses an abstraction of fresh nonce generation, enabling it to perform unbounded verification for a class of protocols. It can handle many different cryptographic primitives (shared- and public-key cryptography, hash functions, ...) and an unbounded number of sessions of the protocol. In ProVerif it is possible to model any equational theory, but the tool might not terminate. This is the case with Exclusive-Or or Diffie-Hellman exponentiation; however, commutativity of the exponentiation alone is supported by ProVerif. This allows B. Blanchet's tool to verify protocols that only use this particular algebraic property of the exponentiation. Recently R. Küsters and T. Truderung proposed two new "tools", XOR-ProVerif [29] and DH-ProVerif [27]. These small programs transform a protocol using Exclusive-Or and Diffie-Hellman mechanisms into a protocol in Horn clauses compatible with ProVerif. The input files for XOR-ProVerif and DH-ProVerif are Prolog files. This allows us to compare ProVerif with CL-Atse and OFMC for protocols using the algebraic properties of Exclusive-Or and Diffie-Hellman.
3 Results
In this section, we present the results obtained by implementing the selected protocols in OFMC, CL-Atse and ProVerif. For our experiments we used a PC DELL E4500 Intel dual Core 2.2 GHz with 2 GB of RAM. For obvious reasons, complete descriptions of the protocols are not given here, but for each protocol we try to present the minimum needed to clearly explain the results given by the tools. In [30], we propose a short description and an implementation of each analyzed protocol. We first present the protocols dealing with Exclusive-Or and then the ones using Diffie-Hellman. All time measurements are given in Figures 1 and 2. In the ProVerif column of these tables, we mention two numbers: the first one corresponds to the time of the transformation done by Küsters et al.'s tool and the second one is the verification time given by ProVerif. All our experiments are accessible at http://www-verimag.imag.fr/~plafourc/FAST09.tar.gz

Notations: We denote principals by A, B, S, ..., messages by Mi, nonces generated by A by NA, public keys of A by PKA, symmetric keys between A and B by KAB, fresh symmetric keys by KA, a prime number by P, and a primitive root by G. The Exclusive-Or is denoted by A ⊕ B. The exponentiation of G by the nonce NA modulo P is denoted by G^NA mod P.
3.1 Bull's Authentication Protocol
This protocol [12, 38] aims at establishing fresh session keys between a fixed number of participants and a server. The protocol is recursive and uses one key for each pair of agents adjacent in the chain. In our modelling, the protocol is initiated by A and then goes through B and C before reaching S. At the end, new session keys KAB and KBC are established. Each key KXY should be known exactly by X and Y (and also S), even if other participants are malicious.

Results: The checked property is the secrecy of KAB between A and B and the secrecy of KBC between B and C. We first notice that OFMC is slower than CL-Atse. The analysis using XOR-ProVerif crashes after more than one hour, and the size of the partial output produced is more than 400 MB. This corresponds to the fact that the algorithm proposed by Küsters et al. is exponential in the number of variables used in Exclusive-Or and the number of constants used in the protocol. This point clearly demonstrates the limit of the approach using XOR-ProVerif. So we restrict the cases considered in the XOR-ProVerif file in order for the analysis to end: guided by the already known attack, we fix some variables to the names of the principals according to the attack. This allows XOR-ProVerif to generate the input file for ProVerif in 5 seconds. In this setting ProVerif found the same attack as the Avispa tools in 5 + 12 = 17 seconds.
The attack found is the same as the one presented in the survey [18, 38]. I is part of the protocol since he plays the third role, and the different steps of the protocol take place normally. Yet, at the end of the protocol, I is able to retrieve the key shared by A and B. Indeed, he obtained in step 4 both KAB ⊕ h(NB, KBS) and KBI ⊕ h(NB, KBS), with which he can compute KAB ⊕ KBI. Moreover, he has also obtained KBI, as it is the aim of the protocol, and with KAB ⊕ KBI he can obviously retrieve KAB, which should be shared only by A and B. We can see in this attack that the intruder uses the property of the Exclusive-Or: X ⊕ X = 0. Indeed, it permits him to eliminate the term h(NB, KBS) in order to find KAB ⊕ KBI. A small sketch of this computation is given at the end of this subsection. We propose a new version of the protocol, to prevent the third participant from using this property.

New Protocol: In this new version of the protocol, the idea is to introduce a second nonce created by B, which avoids the dual use of a unique nonce, on which the attack relies. Here, S uses the first nonce NB1 to encrypt the key KAB and the second nonce NB2 to encrypt KBC. As a result, the attack consisting in combining two parts intended for B to find KAB ⊕ KBI is no longer valid. The analysis of this second version with the back-end OFMC did not end after more than 20 hours of computation, while the analysis with CL-Atse reported no attack after 1 h 23 min. This result shows that a small modification can drastically change the verification time. We can also notice that in this case, as we have observed in the case without algebraic considerations, OFMC is slower than CL-Atse. Finally, XOR-ProVerif also crashes with this new protocol, but if we again fix some variables in XOR-ProVerif the transformation ends in 17 seconds and the total analysis takes 2 minutes and 15 seconds.
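To make the XOR cancellation used in the attack on the original protocol concrete, the following Python sketch replays I's computation on placeholder values; the keys, the nonce and the hash function are arbitrary stand-ins chosen for illustration, not the protocol's actual data or primitives.

```python
import hashlib
import os

def xor(a: bytes, b: bytes) -> bytes:
    """Bitwise XOR of two equal-length byte strings."""
    return bytes(x ^ y for x, y in zip(a, b))

def h(nonce: bytes, key: bytes) -> bytes:
    """Placeholder for the hash h(NB, KBS) used to blind the session keys."""
    return hashlib.sha256(nonce + key).digest()[:16]

# Placeholder secrets (illustration only).
K_AB, K_BI, K_BS, N_B = (os.urandom(16) for _ in range(4))

# What the intruder I observes in step 4 of the protocol run:
blind = h(N_B, K_BS)
msg_for_AB = xor(K_AB, blind)   # KAB ⊕ h(NB, KBS)
msg_for_BI = xor(K_BI, blind)   # KBI ⊕ h(NB, KBS)

# The blinding term cancels out (X ⊕ X = 0), leaving KAB ⊕ KBI.
kab_xor_kbi = xor(msg_for_AB, msg_for_BI)

# I legitimately learns KBI (it is the key intended for him),
# so he can unblind KAB, which should be known only to A, B and S.
recovered_K_AB = xor(kab_xor_kbi, K_BI)
assert recovered_K_AB == K_AB
```

The assertion holds precisely because h(NB, KBS) occurs in both intercepted messages, which is the X ⊕ X = 0 property exploited by the attack.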
3.2 Wired Equivalent Privacy Protocol
The Wired Equivalent Privacy protocol (WEP) [1] is used to protect data during wireless transmission. To encrypt a message M for X, A applies the operator ⊕ to RC4(v, KAX) and [M, C(M)], where RC4 is a public algorithm using an initial vector v and a symmetric key KAX, and C is an integrity checksum function. To decrypt the received message, X computes RC4(v, KAX) and, after applying Exclusive-Or, he obtains [M, C(M)] and can verify that the checksum is correct.

Results: The property verified is the secrecy of M2 between A and B. All the tools quickly found the same following attack:

0.1. A −→ B : [M1, C(M1)] ⊕ RC4(v, KAB)
0.2. A −→ I : [M1, C(M1)] ⊕ RC4(v, KAI)
1.   A −→ I : [M2, C(M2)] ⊕ RC4(v, KAB)

First of all, A sends the same message M1 to B and I in steps 0.1 and 0.2. I is able to determine RC4(v, KAI). Then, by computing ([M1, C(M1)] ⊕ RC4(v, KAB)) ⊕ ([M1, C(M1)] ⊕ RC4(v, KAI)) ⊕ RC4(v, KAI), I can obtain RC4(v, KAB) and consequently he can access every following message intended for B. Indeed, in
step 1, the intruder intercepts ([M2, C(M2)] ⊕ RC4(v, KAB)) and he can compute ([M2, C(M2)] ⊕ RC4(v, KAB)) ⊕ RC4(v, KAB), which is equal to [M2, C(M2)]. We have implemented a new version of this protocol, based on changing the initial vector at every message sent. As for the "Bull's Authentication Protocol", this prevents the intruder from retrieving RC4(v, KAB) with the Exclusive-Or property, and ensures secrecy for the messages intended for B. The Avispa tools and ProVerif have indeed considered the new protocol safe in less than 1 second.
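The keystream-reuse reasoning above can be replayed mechanically. In the sketch below, RC4(v, K) is abstracted by an arbitrary deterministic keystream function (a hash-based placeholder, not the real RC4), the checksum C(M) is omitted, and the messages and keys are illustrative values only.

```python
import hashlib
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def keystream(v: bytes, key: bytes, n: int) -> bytes:
    """Placeholder for RC4(v, K): any deterministic keystream will do here."""
    out, counter = b"", 0
    while len(out) < n:
        out += hashlib.sha256(v + key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:n]

v = os.urandom(3)                       # the initial vector, visible on the air
K_AB, K_AI = os.urandom(16), os.urandom(16)
M1, M2 = b"known plaintext ", b"secret for B    "

c1_to_B = xor(M1, keystream(v, K_AB, len(M1)))   # step 0.1
c1_to_I = xor(M1, keystream(v, K_AI, len(M1)))   # step 0.2
c2_to_B = xor(M2, keystream(v, K_AB, len(M2)))   # step 1, intercepted by I

# I knows K_AI and sees v, so he can compute RC4(v, K_AI) himself.
ks_AI = keystream(v, K_AI, len(M1))

# The computation from the attack: M1 cancels, then ks_AI cancels.
ks_AB = xor(xor(c1_to_B, c1_to_I), ks_AI)
assert ks_AB == keystream(v, K_AB, len(M1))
assert xor(c2_to_B, ks_AB) == M2        # every later message for B is exposed
```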
3.3 Gong's Mutual Authentication Protocol
The protocol [23] aims at providing mutual authentication and distributing a fresh secret key K. It makes use of a trusted server S with which each of the two agents A and B shares a secret password, respectively PA and PB. As an alternative to encryption algorithms, this protocol uses the one-way functions f1, f2, f3 and g. The principal B can obtain, using the properties of Exclusive-Or, the triple (K, HA, HB) from the message that he receives at step 3, and check it by computing g(K, HA, HB, PB). Knowing PA and after receiving NS, A uses the functions f1, f2 and f3 to get K, HA and HB. Hence, she can verify the message HB sent by B at step 4 and send the message HA to B in order to prove her identity.

Results: Here, we checked the secrecy of the created key. It was declared safe by CL-Atse and OFMC. This time OFMC, with 19 seconds, is faster than CL-Atse with 1 minute and 34 seconds. For ProVerif, we have no result since converting the XOR-ProVerif file to Horn clauses returns "out of global stack". Here also the number of variables used in Exclusive-Or is large compared to the other protocols, which could explain the crash of the tool.
3.4 TMN
This is a symmetric key distribution protocol [32, 42]. Indeed, here, the key KB is given to A. For each session, the server verifies that the keys KA and KB have not been used in previous sessions.

Results: The same attack, described below, was found with ProVerif and Avispa in less than one second. There exists another attack presented in the survey (or see [40]) based on a property of encryption. It uses the fact that {X}PKS * {Y}PKS = {X * Y}PKS. Yet, the tools used do not seem to be able to find this attack, since they cannot take such a property into account.

1.   A −→ S    : B, {KA}PKS
2.   S −→ I    : A
3.   I(B) −→ S : A, {KI}PKS
4.   S −→ I    : B, KI ⊕ KA
In the first step, A starts a normal session with B. In the second step, I intercepts the message sent by S and then, in step 3, he impersonates B and sends his own symmetric key to the server. Then the intruder intercepts B, KI ⊕ KA and, as he knows KI, he can find KA by computing (KI ⊕ KA) ⊕ KI. Finally, I can transmit B, KI ⊕ KA to A.
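The last step of the attack is plain XOR cancellation; a minimal sketch with placeholder keys:

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

K_A = os.urandom(16)      # A's fresh key, submitted encrypted for the server
K_I = os.urandom(16)      # the intruder's own key, injected in step 3

server_reply = xor(K_I, K_A)          # step 4: S sends B, KI ⊕ KA to I
assert xor(server_reply, K_I) == K_A  # I knows KI, hence learns KA
```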
3.5 Salary Sum
This protocol [39] allows a group of people to compute the sum of their salaries without anyone declaring his own salary to the others. For the sake of simplicity, the protocol is only considered for four principals A, B, C and D. This protocol uses addition, but OFMC can analyze this property with time results similar to those for Exclusive-Or. Hence, in order to make the comparison, we replace addition by Exclusive-Or in the analyzed version.

Results: We verified the secrecy of all salaries, and we found different attacks depending on the tool used. The attack found with the Avispa tools is based on the fact that I plays both the role of C and D:

1.   A −→ B    : A, {NA ⊕ SA}PKB
2.   B −→ I    : B, {NA ⊕ SA ⊕ SB}PKI
3.   I(B) −→ C : B, {NA ⊕ SA ⊕ SB}PKC
4.   C −→ I    : C, {NA ⊕ SA ⊕ SB ⊕ SC}PKI
In this attack, B believes he is running the protocol with I in the third position while C believes he is running it with I in the fourth position. Indeed, in step 2, B sends NA ⊕ SA ⊕ SB encrypted with PKI and in step 4, C sends NA ⊕ SA ⊕ SB ⊕ SC encrypted with PKI too. Consequently, by subtracting these two values, I can obviously obtain SC, which should have remained secret. Note that the first implementation ends with XOR-ProVerif but this time ProVerif does not terminate after more than six hours. Here we can see the limitations of ProVerif: it is well known that for some protocols ProVerif does not terminate. On the other side, for the first time on this example we can clearly observe that OFMC is much better than CL-Atse. We also change the modelling a little by fixing some agents in the XOR-ProVerif input file. With this new version ProVerif terminates and finds an attack in less than 1 + 11 = 12 seconds. The attack described below is very similar to the attack of the survey. SA, SB, SC, SD are numbers representing salaries.

1.   A −→ I    : A, {NA ⊕ SA}PKI
2.   I(D) −→ A : D, {NA ⊕ SA}PKA
3.   A −→ I    : SA
Here, I is the second participant of the protocol. Indeed, A sends him NA ⊕ SA as expected. Then, contrary to the normal protocol, I impersonates D and sends to A directly what he has received: NA ⊕ SA. A, believing it comes from D, applies the Exclusive-Or with NA again, and consequently sends SA to I, instead of the sum of all the salaries. In the attack presented in the survey, I adds SI ⊕ SI ⊕ SI
to NA ⊕ SA before sending it to A. Then, A sends him SA ⊕ SI ⊕ SI ⊕ SI and, since he knows SI, he can retrieve SA. The principles of the two attacks are the same, but the second version prevents A from realizing that it is his own salary he receives at step 3. Both variants are sketched below.
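Both variants rely on A re-applying NA to a value that the intruder merely reflected back. The sketch below replays them on machine integers; the salaries and the nonce are placeholder values, and the public-key encryptions are omitted since the intruder is the legitimate recipient of the relevant messages.

```python
import secrets

N_A = secrets.randbits(32)          # A's fresh nonce
S_A, S_I = 52_000, 47_000           # placeholder salaries of A and of the intruder I

# Attack found with XOR-ProVerif (pure reflection):
msg1 = N_A ^ S_A                    # step 1: A -> I : NA ⊕ SA
reflected = msg1                    # step 2: I(D) -> A : the same value, unchanged
announced = reflected ^ N_A         # step 3: A removes her nonce and sends the "sum"
assert announced == S_A             # the intruder learns A's salary

# Variant from the survey: I first adds SI ⊕ SI ⊕ SI, so that A does not
# recognise her own salary in the value she finally sends back.
masked = msg1 ^ S_I ^ S_I ^ S_I     # equals msg1 ^ SI
announced2 = masked ^ N_A           # A sends SA ⊕ SI to I
assert announced2 ^ S_I == S_A      # I removes SI and recovers SA
```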
3.6 E-Auction
The H-T Liaw, W-S Juang and C-K Lin protocol [24] is an improvement of Subramanian's protocol. It satisfies the requirements for electronic auctions, like anonymity, privacy, etc., but also adds the properties of non-repudiation, untraceability, auditability, one-time registration and unlinkability.

Results: We check secrecy, authentication and non-repudiation with the three tools and all find it secure in less than 1 second. This protocol is composed of 13 exchanges of messages, but contains only two Exclusive-Or operations. This confirms the theory, meaning that the complexity of verifying a protocol with Exclusive-Or increases exponentially with the number of Exclusive-Or operations.
3.7 Others
We compare the tools on the Exclusive-Or Needham-Schroeder protocol proposed by Mathieu Turuani on the Avispa web site. We also code the Three-Pass protocol proposed by A. Shamir (described in [15]) with the One-Time Pad encryption scheme. In this situation there exists a simple attack which consists of XOR-ing all exchanged messages to recover the secret; a sketch is given below. The results are the same for both protocols: all tools find the attack in less than one second.
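With one-time-pad (XOR) encryption, the three transmitted values of the three-pass protocol XOR to the plaintext, which is exactly the attack found by the tools. A minimal sketch with placeholder keys and message:

```python
import os

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

m = b"secret message!!"
k_A, k_B = os.urandom(len(m)), os.urandom(len(m))

t1 = xor(m, k_A)           # A -> B : m ⊕ kA
t2 = xor(t1, k_B)          # B -> A : m ⊕ kA ⊕ kB
t3 = xor(t2, k_A)          # A -> B : m ⊕ kB   (A removes her key)

# A passive eavesdropper who records all three passes recovers m directly:
assert xor(xor(t1, t2), t3) == m
```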
3.8 Diffie-Hellman Key Exchange Protocol
In the protocol presented in [21], the initiator A first chooses a prime number P and a primitive root G of the group Z/PZ. He sends them together with the exponentiation of G by a fresh number NA, and the responder does the same with a fresh number NB. At the end, they share a common key which is the exponentiation of G by NA and NB. This protocol has to guarantee the secrecy of the fresh key.

Results: The implementation of the protocol we realized is a simplified version of the one presented above. Indeed, in the first step of the protocol, A sends to B only G^NA, and we consider that P and G are known by everybody. Here, we can see that B (but also A) has no means to check authentication on the three numbers received. ProVerif and Avispa give us in less than one second the following well-known authentication attack:

1.   A −→ I    : P, G, G^NA mod P
2.   I(A) −→ B : P, X1, X2
3.   B −→ I(A) : X1^NB mod P
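The authentication failure can be illustrated with ordinary modular arithmetic. The sketch below uses deliberately tiny toy parameters (P = 23, G = 5) and one possible reading of the trace in which the intruder substitutes his own exponential for X2, so that B ends up sharing a key with I rather than with A; it is an illustration, not the modelling used in the tools.

```python
import secrets

# Toy group parameters for illustration only (far too small to be secure).
P, G = 23, 5

N_A = secrets.randbelow(P - 2) + 1   # A's secret exponent
N_B = secrets.randbelow(P - 2) + 1   # B's secret exponent
N_I = secrets.randbelow(P - 2) + 1   # the intruder's secret exponent

g_a = pow(G, N_A, P)                 # 1. A -> I   : P, G, G^NA mod P (intercepted)
x1, x2 = G, pow(G, N_I, P)           # 2. I(A) -> B: P, X1, X2 (I's own values)
g_b = pow(x1, N_B, P)                # 3. B -> I(A): X1^NB mod P

# B derives a "shared" key from X2; it is in fact shared with I, not with A.
key_at_B = pow(x2, N_B, P)
key_at_I = pow(g_b, N_I, P)
assert key_at_B == key_at_I
```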
3.9 IKA
The Initial Key Agreement [5], also called the GDH.2 protocol, uses the same idea as the Diffie-Hellman protocol but extends it to an unlimited number of agents. It aims at establishing a group key between a fixed number of participants X1, ..., Xn. Each agent has a secret nonce Ni and at the end of the protocol the key shared between all principals is the exponentiation of G (a primitive root of the group Z/PZ, where P is a prime number) by all the participants' nonces.

Results: We found an attack with CL-Atse and OFMC in less than 2 s. This attack is similar to the attack on the Diffie-Hellman protocol. For DH-ProVerif this protocol was already studied by Küsters et al. in [27]. As mentioned in [27], a naive modelling of this protocol produces an input file for ProVerif which does not terminate. The situation is similar to the one we found for the Salary Sum with Exclusive-Or. The authors "used a technique inspired by the one sometimes used in the process calculus mode for ProVerif when encoding phases". Hence they proposed two versions: one safe version for one session
Analyzed Protocols   | OFMC (Avispa)                                 | CL-Atse (Avispa)                          | ProVerif (XOR-ProVerif)
Bull [12]            | UNSAFE, secrecy attack, 0.08 s                | UNSAFE, secrecy attack, 0.08 s            | No result, XOR-ProVerif does not end
Bull v2              | The analysis does not end (search time: 20 h) | SAFE, 1 h 10 min                          | No result, XOR-ProVerif does not end
WEP [1]              | UNSAFE, secrecy attack, 0.01 s                | UNSAFE, secrecy attack, less than 0.01 s  | UNSAFE, secrecy attack, less than 1 s
WEP v2               | SAFE, 0.01 s                                  | SAFE, less than 0.01 s                    | SAFE, less than 1 s
Gong [23]            | SAFE, 19 s                                    | SAFE, 1 min 34 s                          | No result, does not end
Salary Sum [39]      | UNSAFE, secrecy attack, 0.45 s                | UNSAFE, secrecy attack, 11 min 16 s       | UNSAFE, secrecy attack, does not end
TMN [32, 42]         | UNSAFE, secrecy attack, 0.04 s                | UNSAFE, secrecy attack, less than 0.01 s  | UNSAFE, secrecy attack, less than 1 s
E-Auction [24]       | SAFE, less than 1 s                           | SAFE, 0.59 s                              | SAFE, less than 1 s
3-Pass Shamir [15]   | UNSAFE, less than 1 s                         | UNSAFE, less than 1 s                     | UNSAFE, less than 1 s
⊕ Needham-Schroeder  | UNSAFE, less than 1 s                         | UNSAFE, less than 1 s                     | UNSAFE, less than 1 s

Fig. 1. Results for protocols using XOR
Analyzed Protocols | OFMC (Avispa)                                                | CL-Atse (Avispa)                                             | ProVerif (DH-ProVerif)
D.H [21] (Survey)  | UNSAFE, authentication attack, 0.01 s                        | UNSAFE, authentication attack, less than 0.01 s              | UNSAFE, authentication attack, less than 1 s
IKA [5]            | UNSAFE, authentication and secrecy attack, less than 0.01 s  | UNSAFE, authentication and secrecy attack, less than 0.01 s  | UNSAFE, 1 s + 2 min 33 s; SAFE, 3 s + 1 s

Fig. 2. Results for protocols using DH
(second line in the table of Figure 2) and one unsafe version for two sessions in order to find the well-known attack (first line in the table of Figure 2). We adapt the two versions given by the authors for 4 principals to 3 principals, in order to analyze the same configuration as the one used in the Avispa tools. We observe that ProVerif is a little slower than the other tools, due to the translation performed by DH-ProVerif.
4 Conclusion and Discussion
In this paper we have compared the time efficiency of cryptographic verification tools using Exclusive-Or and Diffie-Hellman properties. In Figures 1 and 2 we sum up the results obtained with the different tools for the studied protocols. Globally, we found the same attacks with OFMC, CL-Atse, and XOR-ProVerif or DH-ProVerif. Most of the time these attacks were identical to those of the survey [18], except for the Salary Sum and TMN. These exceptions are normal since for the first one we changed the addition into Exclusive-Or and for the second one none of the tools can deal with the homomorphism property used in the attack presented in the survey. For the Exclusive-Or property, it seems that when OFMC terminates it is globally faster than CL-Atse. But for protocols using a large number of Exclusive-Or operations, for instance Bull's protocol, OFMC does not terminate whereas CL-Atse does. The difference between Bull's protocol and the e-auction protocol shows clearly that the number of Exclusive-Or operations used in a protocol is the parameter which increases the verification time. This confirms that the complexity is exponential in the number of Exclusive-Or operations. This also explains the failure of XOR-ProVerif in this situation. On the other hand, if the number of variables and constants is not too large, ProVerif is very efficient and faster than the Avispa tools. Finally, for some protocols, such as the modified version of the Salary Sum for ProVerif or the improved version of Bull's protocol for OFMC, the tools were not able to end the analysis in a limited period of time. For the Diffie-Hellman property, all protocols were analyzed quickly by all the tools. This confirms the polynomial complexity of DH-ProVerif and the fact that this equational theory is less complex than Exclusive-Or.
Indeed, the use of variables with Exclusive-Or or "exponentiation" seems to rapidly increase the search time of the tools, especially for XOR-ProVerif, but also for OFMC and CL-Atse.

Future work: Recently a new version of OFMC has been proposed in the AVANTSSAR project. A new version of TA4SP is also announced on the web site of the author; this new version deals with some algebraic properties including Exclusive-Or and Diffie-Hellman. In the future we plan to check the same protocols with this new version of OFMC and also to include the new version of TA4SP in our study. Moreover we would like to see if the new OFMC is more efficient than its older version. We also would like to include the tool Maude-NPA [22] in our analysis. This tool uses rewriting techniques for proving security of cryptographic protocols in the presence of equational theories. Our first test on the DH protocol shows that Maude-NPA takes around 6 minutes instead of less than one second for all the other tools. Moreover, in [14] the authors propose an improved version of XOR-ProVerif, which we would like to test. Finally, we plan to continue this preliminary analysis in a fair way as we did in [20].
References 1. IEEE 802.11 Local and Metropolitan Area Networks: Wireless LAN Medium Acess Control (MAC) and Physical (PHY) Specifications (1999) 2. Armando, A., Basin, D., Boichut, Y., Chevalier, Y., Compagna, L., Cuellar, J., Drielsma, P.H., He´ am, P.-C., Kouchnarenko, O., Mantovani, J., M¨ odersheim, S., von Oheimb, D., Michael, R., Santiago, J., Turuani, M., Vigan` o, L., Vigneron, L.: The AVISPA tool for the automated validation of internet security protocols and applications. In: Etessami, K., Rajamani, S.K. (eds.) CAV 2005. LNCS, vol. 3576, pp. 281–285. Springer, Heidelberg (2005) 3. Armando, A., Compagna, L.: An optimized intruder model for SAT-based modelchecking of security protocols. In: Armando, A., Vigan` o, L. (eds.) ENTCS, March 2005, vol. 125, pp. 91–108. Elsevier Science Publishers, Amsterdam (2005) 4. Armando, A., Basin, D.A., Bouallagui, M., Chevalier, Y., Compagna, L., M¨ odersheim, S., Rusinowitch, M., Turuani, M., Vigan` o, L., Vigneron, L.: The aviss security protocol analysis tool. In: Brinksma, E., Larsen, K.G. (eds.) CAV 2002. LNCS, vol. 2404, pp. 349–353. Springer, Heidelberg (2002) 5. Ateniese, G., Steiner, M., Tsudik, G.: New multiparty authentication services and key agreement protocols. IEEE Journal of Selected Areas in Communications 18(4), 628–639 (2000) 6. Basin, D., M¨ odersheim, S., Vigan` o, L.: An On-The-Fly Model-Checker for Security Protocol Analysis. In: Snekkenes, E., Gollmann, D. (eds.) ESORICS 2003. LNCS, vol. 2808, pp. 253–270. Springer, Heidelberg (2003) 7. Basin, D.A., M¨ odersheim, S., Vigan` o, L.: Ofmc: A symbolic model checker for security protocols. Int. J. Inf. Sec. 4(3), 181–208 (2005) 8. Blanchet, B.: An efficient cryptographic protocol verifier based on prolog rules. In: Proc. CSFW 2001, pp. 82–96. IEEE Comp. Soc. Press, Los Alamitos (2001) 9. Blanchet, B.: Cryptographic Protocol Verifier User Manual (2004) 10. Boichut, Y., H´eam, P.-C., Kouchnarenko, O., Oehl, F.: Improvements on the Genet and Klay technique to automatically verify security protocols. In: Proc. AVIS 2004 (April 2004)
11. Bozga, L., Lakhnech, Y., Perin, M.: HERMES: An Automatic Tool for Verification of Secrecy in Security Protocols. In: Computer Aided Verification (2003) 12. Bull, J., Otway, D.J.: The authentication protocol. Technical Report DRA/CIS3/PROJ/CORBA/SC/1/CSM/436-04/03, Defence Research Agency (1997) 13. Cheminod, M., Cibrario Bertolotti, I., Durante, L., Sisto, R., Valenzano, A.: Experimental comparison of automatic tools for the formal analysis of cryptographic protocols. In: DepCoS-RELCOMEX 2007, Szklarska Poreba, Poland, June 14-16, pp. 153–160. IEEE Computer Society Press, Los Alamitos (2007) 14. Chen, X., van Deursen, T., Pang, J.: Improving automatic verification of security protocols with xor. In: Cavalcanti, A. (ed.) ICFEM 2009. LNCS, vol. 5885, pp. 107–126. Springer, Heidelberg (2009) 15. Clark, J., Jacob, J.: A survey of authentication protocol literature (1997), http://www.cs.york.ac.uk/~jac/papers/drareviewps.ps 16. Corin, R., Etalle, S.: An improved constraint-based system for the verification of security protocols. In: Hermenegildo, M.V., Puebla, G. (eds.) SAS 2002. LNCS, vol. 2477, pp. 326–341. Springer, Heidelberg (2002) 17. Cortier, V., Delaune, S., Lafourcade, P.: A survey of algebraic properties used in cryptographic protocols. Journal of Computer Security 14(1), 1–43 (2006) 18. Cortier, V., Delaune, S., Lafourcade, P.: A survey of algebraic properties used in cryptographic protocols. Journal of Computer Security 14(1), 1–43 (2006) 19. Cremers, C.J.F.: The Scyther Tool: Verification, falsification, and analysis of security protocols. In: Gupta, A., Malik, S. (eds.) CAV 2008. LNCS, vol. 5123, pp. 414–418. Springer, Heidelberg (2008) 20. Cremers, C.J.F., Lafourcade, P., Nadeau, P.: Comparing state spaces in automatic protocol analysis. In: Cortier, V., Kirchner, C., Okada, M., Sakurada, H. (eds.) Formal to Practical Security. LNCS, vol. 5458, pp. 70–94. Springer, Heidelberg (2009) 21. Diffie, W., Hellman, M.: New directions in cryptography. IEEE Transactions on Information Society 22(6), 644–654 (1976) 22. Escobar, S., Meadows, C., Meseguer, J.: Maude-npa: Cryptographic protocol analysis modulo equational properties. In: Aldini, A., Barthe, G., Gorrieri, R. (eds.) FOSAD. LNCS, vol. 5705, pp. 1–50. Springer, Heidelberg (2007) 23. Gong, L.: Using one-way functions for authentication. SIGCOMM Computer Communication 19(5), 8–11 (1989) 24. Liaw, H.-T., Juang, W.-S., Lin, C.-K.: An electronic online bidding auction protocol with both security and efficiency. Applied mathematics and computation 174, 1487– 1497 (2008) 25. Hussain, M., Seret, D.: A comparative study of security protocols validation tools: HERMES vs. AVISPA. In: Proc. ICACT 2006, vol. 1, pp. 303–308 (2006) 26. Klay, F., Vigneron, L.: Automatic methods for analyzing non-repudiation protocols with an active intruder. In: Degano, P., Guttman, J., Martinelli, F. (eds.) FAST 2008. LNCS, vol. 5491, pp. 192–209. Springer, Heidelberg (2009) 27. K¨ usters, R., Truderung, T.: Using ProVerif to Analyze Protocols with DiffieHellman Exponentiation. In: Proceedings of the 22nd Computer Security Foundations Symposium (CSF), pp. 157–171. IEEE Computer Society, Los Alamitos (2009) 28. K¨ usters, R., Truderung, T.: Reducing protocol analysis with xor to the xor-free case in the horn theory based approach. In: Ning, P., Syverson, P.F., Jha, S. (eds.) ACM Conference on Computer and Communications Security, pp. 129–138. ACM, New York (2008)
29. K¨ usters, R., Truderung, T.: Reducing protocol analysis with xor to the xor-free case in the horn theory based approach. In: ACM Conference on Computer and Communications Security, pp. 129–138 (2008) 30. Lafourcade, P., Terrade, V., Vigier, S.: Comparison of cryptographic verification tools dealing with algebraic properties. Technical Report TR-2009-16, Verimag (October 2009) 31. Lowe, G.: Casper: a compiler for the analysis of security protocols. J. Comput. Secur. 6(1-2), 53–84 (1998) 32. Lowe, G., Roscoe, A.W.: Using CSP to detect errors in the TMN protocol. IEEE Transactions on Software Engineering 23(10), 659–669 (1997) 33. Meadows, C.: Language generation and verification in the NRL protocol analyzer. In: Proc. CSFW 1996, pp. 48–62. IEEE Comp. Soc. Press, Los Alamitos (1996) 34. Meadows, C.: Analyzing the needham-schroeder public-key protocol: A comparison of two approaches. In: Martella, G., Kurth, H., Montolivo, E., Bertino, E. (eds.) ESORICS 1996. LNCS, vol. 1146, pp. 351–364. Springer, Heidelberg (1996) 35. Mitchell, J.C., Mitchell, M., Stern, U.: Automated analysis of cryptographic protocols using Murphi. In: IEEE Symposium on Security and Privacy (May 1997) 36. Needham, R., Schroeder, M.: Using encryption for authentication in large networks of computers. Communication of the ACM 21(12), 993–999 (1978) 37. Roscoe, A.W.: Model-checking CSP. Prentice-Hall, Englewood Cliffs (1994) 38. Ryan, P.Y.A., Schneider, S.A.: An attack on a recursive authentication protocol. a cautionary tale. Inf. Process. Lett. 65(1), 7–10 (1998) 39. Schneier, B.: Applied Cryptography, 2nd edn. Wiley, Chichester (1996) 40. Simmons, G.J.: Cryptoanalysis and protocol failures. Communications of the ACM 37(11), 56–65 (1994) 41. Song, D., Berezin, S., Perrig, A.: Athena: A novel approach to efficient automatic security protocol analysis. Journal of Computer Security 9(1/2), 47–74 (2001) 42. Tatebayashi, M., Matsuzaki, N., Newman, D.B.: Key distribution protocol for digital mobile communication systems. In: Brassard, G. (ed.) CRYPTO 1989. LNCS, vol. 435, pp. 324–334. Springer, Heidelberg (1990) 43. Turuani, M.: The CL-Atse Protocol Analyser. In: Pfenning, F. (ed.) RTA 2006. LNCS, vol. 4098, pp. 277–286. Springer, Heidelberg (2006) 44. Vigan` o, L.: Automated security protocol analysis with the AVISPA tool. ENTCS 155, 61–86 (2006)
Game-Based Verification of Multi-Party Contract Signing Protocols

Ying Zhang¹,², Chenyi Zhang¹, Jun Pang¹, and Sjouke Mauw¹

¹ University of Luxembourg, 6, rue Richard Coudenhove-Kalergi, L-1359 Luxembourg
² Shandong University, Jinan, 250101 China
Abstract. A multi-party contract signing (MPCS) protocol is used for a group of signers to sign a digital contract over a network. We analyse the protocols of Mukhamedov and Ryan (MR), and of Mauw, Radomirović and Torabi Dashti (MRT), using the finite-state model checker Mocha. Mocha allows for the specification of properties in alternating-time temporal logic (ATL) with game semantics, and the model checking problem for ATL requires the computation of winning strategies. This gives us an intuitive interpretation of the verification problem of crucial properties of MPCS protocols. We analyse the MR protocol with up to 5 signers and our analysis does not reveal any flaws. MRT protocols can be generated from minimal message sequences, depending on the number of signers. We discover an attack in a published MRT protocol with 3 signers, and present a solution for it. We also design a number of MRT protocols using minimal message sequences for 3 and 4 signers, all of which have been model checked in Mocha.
1 Introduction
The goal of a multi-party contract signing (MPCS) protocol is to allow a number of parties to sign a digital contract over a network. Such a protocol is designed so as to ensure that no party is able to withhold his signature after having received another party's signature. A simple way to achieve this is to involve a trusted third party (T). This trusted third party simply collects the signatures of all signers and then distributes them to all parties. A major drawback of this approach is that the trusted third party easily becomes a bottleneck, since it will be involved in all communications for all contracts. This problem is addressed by the introduction of so-called optimistic multi-party contract signing protocols [1]. The idea is that involvement of the trusted third party is only required if something goes wrong, e.g. if one of the parties tries to cheat or if a non-recoverable network error occurs. If all parties and the communication network behave correctly, which is considered the optimistic case, the protocol terminates successfully without intervention of the trusted third party. MPCS protocols are supposed to satisfy three properties: fairness, abuse-freeness and timeliness. Fairness means that each signer who sends out his signature has a means to receive all the other signers' signatures. Abuse-freeness guarantees that no signer can prove to an outside observer that he is able to
determine the result of the protocol. Timeliness ensures that each signer has the capability to end infinite waiting. Several optimistic contract signing protocols have been proposed, most of which only focus on the special case of two parties [2,3]. In 1999, Garay and Mackenzie proposed the first optimistic contract signing protocol [4] with multiple parties, which we call the GM protocol. Chadha, Kremer and Scedrov found a flaw in the GM protocol for n ≥ 4, where n is the number of signers. They revised the GM protocol by modifying one of its sub-protocols and proposed a fixed protocol [5] in 2004 (which we call the CKS protocol). Mukhamedov and Ryan later showed that the CKS protocol fails to satisfy the fairness property for n ≥ 5 by giving a so-called abort-chaining attack. They proposed a fixed protocol [6] in 2008 based on the CKS protocol (which we call the MR protocol). Mukhamedov and Ryan proved that their protocol satisfies fairness and claimed that it satisfies abuse-freeness and timeliness as well. They also gave a formal analysis of fairness in the NuSMV model checker for 5 signers. Using the notion of abort-chaining attacks, Mauw, Radomirović and Torabi Dashti analysed the message complexity of MPCS protocols [7]. Their results made it feasible to construct MPCS protocols excluding abort-chaining attacks but with minimal messages, which we call the MRT protocols, based on so-called signing sequences. They also gave an example protocol with 3 signers. However, they only provided a verification of the protocol at a conceptual level. In this paper, we follow the approach of Chadha, Kremer and Scedrov [5] to model check the two recently proposed protocols, the MR and MRT protocols, in Mocha [8]. Mocha can be used to model check properties specified in ATL [9]. This allows us to have a precise and natural formulation of desired properties of contract signing, as the model checking problem for ATL requires the computation of winning strategies. We model the MR protocol with up to 5 signers and verify both fairness and timeliness properties, while Mukhamedov and Ryan only analysed fairness of their protocol with 5 signers. We clarify how to construct an MRT protocol from a minimal signing sequence. According to this methodology, we design a number of MRT protocols for 3 and 4 signers, all of which have been model checked in Mocha. In particular, we discover a fairness attack on the published MRT protocol with 3 signers [7] and we present a solution to it. The fixed protocol is shown to satisfy fairness.
2 Preliminaries
This section describes the basic structure of an optimistic contract signing protocol with its underlying assumptions. A few cryptographic primitives are employed in such protocols, which we only briefly introduce. We also explain the security requirements associated with MPCS protocols.
2.1 Basic Notions
An optimistic MPCS protocol generally involves a group of signers P1 , . . . , Pn , who want to sign a contract monitored by a trusted third party T . A signer
may be honest and thus strictly follow the protocol, or he may be dishonest and deviate from the protocol in order to collude with other dishonest signers to get undesirable advantages over the remaining signers. The structure of a protocol consists of a main protocol and one or several sub-protocols. The main protocol is executed by signers to exchange their promises at different levels and their signatures, without intervention from the trusted third party T. The sub-protocols, which usually include an abort protocol and a resolve protocol, are launched by a user Pi on contacting T to deal with exceptional situations. Once a signer has contacted T by initiating a sub-protocol, he is no longer allowed to proceed with the main protocol. T makes a decision on the basis of the information contained in a request provided by a signer, as well as all previous requests that have been sent by other participants. A request consists of the promises that the requesting signer has received so far, serving as a clue for T to judge the signer's position in the current protocol execution. On making a decision, T presumes that all the signers are honest, unless the received requests contradict each other, showing that someone has lied. A reply from T can be either an abort confirmation or a contract signed by all the participants. After T has sent an abort reply, she may later overturn that abort and reply with a signed contract to subsequent requests if T detects that all the signers who have previously contacted T are dishonest.¹ However, once T has sent a signed contract, she will have to stick to that decision for all subsequent requests. Without launching a sub-protocol, a signer Pi quits a protocol if he simply follows the main protocol till the end. Otherwise, Pi quits the protocol once a reply from T is received. An important assumption of optimistic contract signing protocols is that all communication channels between the signers and the trusted third party are resilient, which means that messages sent over the channels are guaranteed to be delivered eventually.
2.2 Cryptographic Primitives
An optimistic MPCS protocol usually employs a zero-knowledge cryptographic primitive called private contract signatures (PCS) [4]. We write PCS_Pi((c, τ), Pj, T) for a promise made by Pi to Pj (i ≠ j) on contract c at level τ, where τ indicates the level of the protocol execution at which Pi makes the promise. A promise is assumed to have the following properties.
– PCS_Pi((c, τ), Pj, T) can only be generated by Pi and Pj.
– Only Pi, Pj and T can verify PCS_Pi((c, τ), Pj, T).
– PCS_Pi((c, τ), Pj, T) can be transformed into Pi's signature only by Pi and T.
Intuitively, PCS_Pi((c, τ), Pj, T) acts as a promise by Pi to Pj to sign the contract c at level τ. However, the properties guarantee that Pj cannot use it to prove to anyone except T that he has this promise. This is essential to achieve abuse-freeness for MPCS protocols. Since these properties sufficiently describe the purpose and use of this primitive, we do not discuss its implementation.
2.3 Desirable Properties
All contract signing protocols are expected to satisfy three security properties [6], viz. fairness, abuse-freeness and timeliness.

Fairness. At the end of the protocol, either each honest signer gets all the other signers' signatures, or no signer gets any signature. Fairness ensures that no signer can obtain any valuable information without sending out his signature, and that once a signer sends out his signature, he will eventually get all the others' signatures. An abort chaining [6] is a sequence of abort and resolve messages to T, in a particular order, that forces T to return an abort reply to an honest signer who has already sent out his signature. Abort-chaining attacks are a major challenge to fairness, and were instrumental in deriving the resolve-impossibility result for a trusted third party for a certain class of MPCS protocols [6].

Abuse-freeness. At any stage of the protocol, no set of signers is able to prove to an outside observer that they have the power to choose between aborting the protocol and getting the signature of another signer who is honest and optimistically participating in the protocol. Intuitively, a protocol that is not abuse-free implies that some signers have an unexpected advantage over the other signers, and may therefore force them to compromise on a contract.

Timeliness. Each signer has a way to prevent endless waiting at any time. That means no signer is able to force anyone else to wait forever.
3 Formal Model
In this section, we discuss how to model protocols in Mocha using a concurrent game structure, and how to express specifications for the desired properties in alternating-time temporal logic (ATL) with game semantics. We start by introducing concurrent game structures and ATL [9].
3.1 Concurrent Game Structures and ATL
A (concurrent) game structure is a tuple S = ⟨k, Q, Π, π, d, δ⟩ with the following components:
– k ∈ N+ is the number of players, identified with the numbers 1, . . . , k.
– Q is a finite set of states.
– Π is a finite set of propositions.
– π : Q → 2^Π is a labeling function; for each state q ∈ Q, π(q) ⊆ Π is the set of propositions true at q.
– d : {1, . . . , k} × Q → N+, where d_a(q) is the number of moves available to player a ∈ {1, . . . , k} at state q ∈ Q. We identify the moves of player a at state q with the numbers 1, . . . , d_a(q).
– δ is a transition function: for each q ∈ Q and each move vector ⟨j_1, . . . , j_k⟩, δ(q, j_1, . . . , j_k) is the state that results from q when every player a ∈ {1, . . . , k} chooses move j_a ≤ d_a(q).
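To make this definition concrete, the following Python sketch encodes a toy two-player game structure explicitly; the states, propositions and moves are invented purely for illustration and are not taken from any model discussed in this paper.

```python
from itertools import product

# A toy concurrent game structure with k = 2 players.
k = 2
Q = {"q0", "q1"}
Pi = {"stopped"}
pi = {"q0": set(), "q1": {"stopped"}}          # labeling function
d = {(1, "q0"): 2, (2, "q0"): 1,               # number of moves per player and state
     (1, "q1"): 1, (2, "q1"): 1}

def delta(q, j1, j2):
    """Transition function: player 1 can force the move to q1 by playing 2."""
    if q == "q0" and j1 == 2:
        return "q1"
    return q

# Enumerate all joint moves at q0 and the resulting successor states.
for j1, j2 in product(range(1, d[(1, "q0")] + 1), range(1, d[(2, "q0")] + 1)):
    print((j1, j2), "->", delta("q0", j1, j2))
```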
The temporal logic ATL (alternating-time temporal logic) is defined with respect to a finite set Π of propositions and a finite set Σ = {1, . . . , k} of players. An ATL formula is one of the following:
– p, for propositions p ∈ Π;
– ¬φ or φ1 ∨ φ2, where φ, φ1 and φ2 are ATL formulas;
– ⟨⟨A⟩⟩◯φ, ⟨⟨A⟩⟩□φ, or ⟨⟨A⟩⟩φ1 U φ2, where A ⊆ Σ is a set of players, and φ, φ1 and φ2 are ATL formulas.
We interpret ATL formulas over the states of a concurrent game structure S that has the same propositions and players. The labeling of the states of S with propositions is used to evaluate the atomic formulas of ATL. The logical connectives ¬ and ∨ have the standard meaning. Before giving the semantics of ATL, we first introduce the notion of strategies. Consider a game structure S = ⟨k, Q, Π, π, d, δ⟩. A strategy for player a ∈ Σ is a mapping f_a : Q+ → N such that f_a(λ) ≤ d_a(q) whenever the non-empty finite state sequence λ ends in state q. In other words, a strategy f_a represents a set of computations that player a can enforce. Hence, F_A = {f_a | a ∈ A} induces a set of computations that all the players in A can cooperate to enforce. Given a state q ∈ Q, out(q, F_A) is the set of computations enforced by the set of players A applying the strategies in F_A. We write λ[i] for the i-th state in the sequence λ, counting from 0.

We are now ready to give the semantics of ATL. We write S, q |= φ to indicate that state q satisfies formula φ in structure S; if S is clear from the context we omit it and write q |= φ. The satisfaction relation |= is defined for all states q of S inductively as follows:
– q |= p, for propositions p ∈ Π, iff p ∈ π(q);
– q |= ¬φ iff q ⊭ φ;
– q |= φ1 ∨ φ2 iff q |= φ1 or q |= φ2;
– q |= ⟨⟨A⟩⟩◯φ iff there exists a set F_A of strategies, one for each player in A, such that for all computations λ ∈ out(q, F_A), we have λ[1] |= φ;
– q |= ⟨⟨A⟩⟩□φ iff there exists a set F_A of strategies, one for each player in A, such that for all computations λ ∈ out(q, F_A) and all positions i ≥ 0, we have λ[i] |= φ;
– q |= ⟨⟨A⟩⟩φ1 U φ2 iff there exists a set F_A of strategies, one for each player in A, such that for all computations λ ∈ out(q, F_A), there exists a position i ≥ 0 such that λ[i] |= φ2 and for all positions 0 ≤ j < i, we have λ[j] |= φ1.
Note that ⟨⟨A⟩⟩◇φ can be defined as ⟨⟨A⟩⟩ true U φ. The logic ATL generalises computation tree logic (CTL) [10] to game structures, in that the path quantifiers of ATL are more general: the existential path quantifier ∃ of CTL corresponds to ⟨⟨Σ⟩⟩, and the universal path quantifier ∀ of CTL corresponds to ⟨⟨∅⟩⟩.
3.2 Modelling MPCS Protocols in Mocha
Mocha [8] is an interactive verification environment for the modular and hierarchical verification of heterogeneous systems. Its model framework is in the form
of reactive modules [11]. The states of a reactive module are determined by variables and are changed in a sequence of rounds. Mocha can check ATL formulas, which express properties naturally as winning strategies under game semantics. This is the main reason why we chose Mocha as the model checker for this work. Mocha provides a guarded command language to model protocols, with concurrent game structures as its formal semantics. The syntax and semantics of this language can be found in [8]. Intuitively, each player a ∈ Σ controls a set of guarded commands of the form guard_ξ → update_ξ. In each update step every player chooses one of its commands whose boolean guard evaluates to true, and the next state combines the outcomes of the guarded commands chosen by the players.

We now describe in detail how we model MPCS protocols, following [5]. Each participant is modelled as a player using the guarded command language introduced above. In order to model that a player may be either honest or malicious, for each player Pi we build a process PiH, which honestly follows the steps of his role in the protocol, and another process Pi, which is allowed to cheat. An honest signer only sends out a message when the messages required by the protocol have been received, i.e., he faithfully follows the protocol at all times. A dishonest signer may send out a message whenever he has enough information to generate it; he may even send messages after he is supposed to stop. The trusted third party T is modelled as honest throughout. We express the exchanged messages as shared boolean variables. The variables are false by default and are set to true when the corresponding messages are sent out by the signers. For signers Pi and Pj, a variable Pi_Sj represents that Pi has obtained Pj's signature. Since Pi continues to hold Pj's signature once he has it, we model that once Pi_Sj is set to true its value never changes thereafter. For each Pi, a variable Pi_stop models whether signer Pi has quit the protocol. Since ¬Pi_stop is one of the conditions in each of Pi's guarded commands, Pi never changes any of its variables once Pi_stop is set to true. The integer Pr_i_j_L = τ represents that Pi has sent out his τ-th level promise to Pj. In particular, for MRT protocols, the integer Pr_i_k_j_L = τ represents that Pi has forwarded Pk's τ-th level promise to Pj. All Mocha models can be found at [12].
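As an illustration of this modelling style, the following Python sketch mimics a few guarded commands over the shared variables described above; the concrete command bodies are our own simplification, not an excerpt from the actual Mocha models at [12].

```python
# Illustrative state variables, following the naming scheme in the text.
state = {"P1_S2": False, "P2_S1": False, "P1_stop": False, "P2_stop": False,
         "Pr_1_2_L": 0, "Pr_2_1_L": 0}

# Each command is a (guard, update) pair over the shared state.
# A dishonest P1 may send his level-1 promise at any time before stopping;
# an honest P2 only answers after having received P1's promise.
commands_P1_dishonest = [
    (lambda s: not s["P1_stop"],
     lambda s: s.update(Pr_1_2_L=max(s["Pr_1_2_L"], 1))),
]
commands_P2_honest = [
    (lambda s: not s["P2_stop"] and s["Pr_1_2_L"] >= 1,
     lambda s: s.update(Pr_2_1_L=max(s["Pr_2_1_L"], 1))),
]

def step(state, commands):
    """Fire the first enabled command (a stand-in for Mocha's nondeterministic choice)."""
    for guard, update in commands:
        if guard(state):
            update(state)
            return

step(state, commands_P1_dishonest)
step(state, commands_P2_honest)
print(state["Pr_1_2_L"], state["Pr_2_1_L"])  # 1 1
```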
3.3 Expressing Properties of MPCS Protocols in ATL
We formalise both fairness and timeliness as in [5].

Fairness. That a protocol is fair for signer Pi can be expressed as: if any signer obtains Pi's signature, then Pi has a strategy to get all the other signers' signatures. In ATL, it is formalised as follows:

  fairness_Pi ≡ ∀□ ((∨_{1≤j≠i≤n} Pj_Si) ⇒ ⟨⟨PiH⟩⟩◇ (∧_{1≤j≠i≤n} Pi_Sj))

Timeliness. At any time, every signer has a strategy to prevent endless waiting. Signer Pi's timeliness is expressed as:

  timeliness_Pi ≡ ∀□ (⟨⟨PiH⟩⟩◇ Pi_stop)
Chadha, Kremer and Scedrov also gave an invariant formulation of fairness for Pi, as follows:

  invfairness_Pi ≡ ∀□ (Pi_stop ⇒ ((∨_{1≤j≠i≤n} Pj_Si) ⇒ (∧_{1≤j≠i≤n} Pi_Sj)))

They proved that if a contract signing protocol, interpreted as a concurrent game structure, satisfies timeliness_Pi for Pi, then the protocol satisfies fairness_Pi iff it satisfies invfairness_Pi [5, Thm. 3]. (For MRT protocols with 4 signers, we verify invfairness_Pi instead of fairness_Pi on their Mocha models, after having successfully checked timeliness for Pi.)
4 Model Checking the MR Protocol
In this section, we describe the MR protocol [6] proposed by Mukhamedov and Ryan. We build models using a program for any number n of signers, and model check both fairness and timeliness for the models with up to 5 signers in Mocha.
4.1 Description of the MR Protocol
The MR protocol is based on PCSs and consists of one main protocol, one abort sub-protocol and one resolve sub-protocol.

Main protocol. The main protocol consists of ⌈n/2⌉ + 1 rounds for n signers, and requires n(n − 1)(⌈n/2⌉ + 1) messages for the optimistic execution. In each round τ (τ ≤ ⌈n/2⌉), a signer Pi starts by waiting for the τ-level promises from the lower signers Pj (j < i). After receiving all the lower signers' promises, he sends his τ-level promise to all the higher signers Pk (k > i) and then waits for the promises from the higher signers. On receipt of all higher signers' promises, Pi sends his own τ-level promise to the lower signers and finishes the current round. If Pi has received the (⌈n/2⌉ + 1)-th level promises and the signatures from all the lower signers, he broadcasts his (⌈n/2⌉ + 1)-th level promise and his signature to all the other signers. If Pi does not receive all the expected messages, he may quit the protocol, or send an abort or a resolve request to the trusted third party T, according to his current position in the main protocol. The abort request has the following form:

  S_Pi((c, Pi, (P1, . . . , Pn), abort))

The resolve request is as follows:

  S_Pi({PCS_Pj((m, τ_j), Pi, T)}_{j ∈ {1,...,n}\{i}}, S_Pi(m, 0))

where for j > i, τ_j is the maximal level promise that Pi has received from all the signers Pj' with j' > i, and for j < i, τ_j is the maximal level promise that Pi has received from all the signers Pj' with j' < i:
  τ_j = max{τ | ∀j' > i: Pi has received PCS_Pj'((m, τ), Pi, T)}   if j > i
  τ_j = max{τ | ∀j' < i: Pi has received PCS_Pj'((m, τ), Pi, T)}   if j < i
Sub-protocols. T maintains a boolean variable validated to indicate whether she has ever replied with a fully signed contract. T uses a set S(c) to record all the signers who have contacted her and received an abort reply. T also maintains two variables h_i(c) and ℓ_i(c) for each Pi, recording Pi's position in the execution at the moment Pi contacts T: h_i(c) indicates the highest level promise Pi has sent to all the signers Pj with j > i, and ℓ_i(c) indicates the highest level promise Pi has sent to all the signers Pj with j < i.

Abort sub-protocol. When receiving an abort request from Pi, T first checks whether she has ever sent a signed contract. If not, i.e., validated is false, T adds i to S(c), sends Pi an abort reply, and stores the reply. Besides, T sets h_i(c) = 1 and ℓ_i(c) = 0. Otherwise, T sends a signed contract to Pi.

Resolve sub-protocol. When receiving a resolve request from Pi, T checks whether it is the first request she has ever received. If it is, T simply replies to Pi with a signed contract and sets validated to true; T judges Pi's current execution position and updates h_i(c) and ℓ_i(c) accordingly. If it is not the first request, T checks whether she has ever sent a signed contract, by checking whether validated is true. (1) If yes, T sticks to that decision, replies to Pi with a signed contract and updates h_i(c) and ℓ_i(c). (2) If not, T has previously replied with an abort to some signer. In order to decide whether to stick to the abort or to overturn it, T checks whether all the signers in S(c) have been cheating. That is, for each j ∈ S(c), T compares τ_j from Pi's request with h_j(c) and ℓ_j(c) from her own records, to check whether Pj continued the main protocol after receiving an abort reply. If all j ∈ S(c) are dishonest, T overturns her abort decision and replies to Pi with a signed contract, at the same time updating h_i(c) and ℓ_i(c). Otherwise, T sticks to her abort reply to Pi and updates h_i(c) and ℓ_i(c).
4.2 Automatic Analysis
We now give the analysis results for the MR protocol. We have verified fairness and timeliness of the MR protocol with 2, 3 and 4 signers, and our analysis did not reveal any flaw. In our analysis of the MR protocol with 5 signers, timeliness can still be checked in Mocha, but verifying fairness on one monolithic model appears infeasible. So instead of building a single model covering all possible behaviours, we built a number of specific models, each of which focuses on one particular abort-chaining attack scenario. For instance, in order to check whether there is an abort-chaining attack in which P1 aborts first and cooperates with P2, P3 and P4 to get honest P5's signature, we built a model in which only P5 is honest, P1 aborts first, and the other dishonest signers only resolve rather than abort. In this way, we reduce the size of the state space significantly when checking fairness. (In the future, we want to apply this technique to even larger protocol instances.)
We now explain why the above checks cover all possible abort-chaining scenarios (in the case of 5 signers). As mentioned in Sect. 2.3, an abort-chaining attack is achieved by collaborating malefactors to enforce an abort outcome after they obtain the honest signer's signature. We use M_i (i ∈ N) to denote a set of dishonest signers. Intuitively, if M1 ⊆ M2 and M2 is not able to achieve an abort-chaining attack, then neither is M1. So for the MR protocol with 5 signers, we only need to check scenarios in which one of the signers is the victim and the other 4 signers are dishonest and collude to cheat. If no abort-chaining attack exists for such a scenario, then no abort-chaining attack exists for a scenario with fewer dishonest signers. Since each abort-chaining attack starts with some dishonest signer contacting T with an abort request, and ends with the victim signer sending out his signature, we choose one signer to abort in our model and choose another signer, among the remaining 4, to be the victim. For the MR protocol with 5 signers, only signers P1, P2, P3 and P4 have the possibility to abort. We use iAjH (i ∈ [1, 4], j ∈ [1, 5], i ≠ j) to denote a model in which Pi aborts and Pj is the victim. In total we thus get 16 possible attack scenarios to check. Ultimately, the analysis shows that no abort-chaining attack is detected for the MR protocol with 5 signers. If no one aborts, an honest signer can always get the other signers' signatures by simply sending a resolve request to T. This means our analysis of fairness in Mocha is exhaustive. A formal correctness argument for this reasoning is postponed to future research.

Mukhamedov and Ryan have shown that the CKS protocol [5] fails to satisfy fairness for n ≥ 5. They propose a new protocol [6] and give a formal analysis of it in the model checker NuSMV [13] for 5 signers. They split fairness into two sub-properties in order to cover all possible scenarios, for which it is necessary to go through a number of cases. ATL has the advantage of expressing fairness in terms of strategies, so our fairness specification is more natural than what is definable in CTL [10]. Compared to Mukhamedov and Ryan's work, we reduce the verification problem of fairness at the level of the system model instead of decomposing the specification.
5 Model Checking MRT Protocols
The main result of Mauw, Radomirović and Torabi Dashti [7] makes it feasible to construct MPCS protocols that exclude abort-chaining attacks with a minimal number of messages. We first describe a methodology for designing an MRT protocol in Sect. 5.1. The order of the messages in the derived protocol is fully determined as in [7]. However, since Mauw, Radomirović and Torabi Dashti only gave a high-level description of the message contents, we make the underlying assumptions precise. In Sect. 5.2, we design a family of MRT protocols and give their analysis in Mocha. Our model checking results reveal an abort-chaining attack on an example protocol with 3 signers described in [7, Sect. 7], for which we propose a fix based on an important assumption that was not made explicit in the original paper.
Fig. 1. MRT protocols with 3 signers (the left one is based on the signing sequence 12 | 3121 | 3212, the right one describes the protocol in [7])
5.1 Design Methodology of MRT Protocols
An MRT protocol defines a sequence of messages m1, m2, . . . , mℓ to be exchanged between a group of n signers in the main protocol, where every mi is supposed to be received before mj is sent out if i < j. The principles of MRT protocols are exactly those of MR, except that we have the following additional assumptions.
1. In each step a signer sends out message mi (1 ≤ i ≤ ℓ) to another signer.
2. The receiver of mi is the sender of mi+1, where i < ℓ.
3. The receiver of each message is allowed to have the most recent promises (signatures) of all the other signers, provided that they have ever sent out promises (signatures). That is, a sender may need to forward up to n − 2 promises of other signers besides his own promise.

Based on these assumptions, an MRT protocol can be regarded as a list of the indices of the signers, in the order in which they send their messages. Such a list is called a signing sequence. A signing sequence α for an MRT protocol with signers P1, . . . , Pn can be divided into three phases. In the initial phase, the first n − 1 signers send out their promises according to the first n − 1 distinct elements of α. The middle phase is initiated by a (first level) promise of the signer who was missed out in the initial phase, followed by a sequence of numbers indicating the particular order of further promise exchanges. In the end phase the signers exchange their signatures. A typical signing sequence for n = 5 is of the following form:

  1234 | 543212345432 | 12345123

From the example one may easily observe that the end phase needs to be of length at least 2n − 2: the first n numbers (a permutation) are for all the signers to send out their signatures, and the remaining n − 2 messages are necessary to further distribute the signatures. The last receiver is implicit in a sequence but can be uniquely determined, e.g., signer P4 in the above example.
An MRT protocol does not explicitly distinguish abort and resolve, i.e., every request to the trusted third party T is a resolve. It is obvious that if a signer in the initial phase sends a request to T, an abort will always be the reply. However, in the middle phase and end phase, T has to make a decision based on whether all the previously requesting signers have been dishonest. A major contribution of [7] is showing that a protocol generated by a signing sequence α is free of abort-chaining attacks iff α's middle phase, together with the first n elements of its end phase, contains all permutations of the set {1, . . . , n} as sub-sequences. Therefore, finding the shortest sequence containing all permutations yields a solution that minimises the number of message exchanges in this particular class of protocols.

To design an MRT protocol for n signers, we first find a shortest sequence α containing all permutations of the set {1, . . . , n}, using Adleman's algorithm [14]. This sequence serves as the middle phase and partial end phase of a signing sequence. To complete the end phase, we append further signer indices to the end of α such that the end phase is able to distribute all the signatures to all signers. The initial phase is obtained simply by pre-pending a sequence of length n − 1 to α, so as to form a full permutation at the beginning. There exist 7 (isomorphically) distinct shortest sequences containing all permutations of {1, 2, 3}; they are presented below. (Sequence ➊ determines the example protocol in [7, Sect. 7].)

➊ 3123 | 123
➋ 3121 | 321
➌ 3123 | 132
➍ 31323 | 13
➎ 31321 | 31
➏ 3123 | 213
➐ 3121 | 312
The symbol '|' is used to separate the different phases in the final signing sequence. Take sequence ➋ as an example. First we complete the end phase by appending a 2 at the end. After adding the initial phase 12 at the beginning, we get the complete signing sequence 12 | 3121 | 3212. The main protocol derived from this signing sequence is depicted in the left-hand side of Fig. 1. (In the figure, we circle the positions where a signer is allowed to send a request to T; pr_τ(c, i) and s(c, i) denote Pi's τ-level promise and Pi's signature on c, respectively.) Note that a shortest sequence containing all permutations does not necessarily give rise to a protocol with a minimal number of messages: sequence ➍ requires appending two numbers in the end phase to complete the final signature distribution. For 4 signers, there are 9 distinct shortest sequences modulo isomorphism, including:

➀ 42314234 | 1243
➃ 42314324 | 1234
➆ 42312432 | 1423
Fig. 2 shows a protocol designed from sequence ➁.
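To make the abort-chaining-freeness condition concrete, the following Python sketch checks whether a candidate sequence contains every permutation of {1, . . . , n} as a (scattered) sub-sequence; it is an illustration written for this description, not part of the Mocha models.

```python
from itertools import permutations

def contains_subsequence(seq, pattern):
    """True iff `pattern` occurs in `seq` as a (not necessarily contiguous) sub-sequence."""
    it = iter(seq)
    return all(x in it for x in pattern)

def covers_all_permutations(seq, n):
    """Check the MRT condition: every permutation of 1..n is a sub-sequence of `seq`."""
    return all(contains_subsequence(seq, p) for p in permutations(range(1, n + 1)))

# Sequence (2) from the list above: middle phase 3121 followed by the first
# n = 3 elements of the end phase, 321.
print(covers_all_permutations([3, 1, 2, 1, 3, 2, 1], 3))   # True
# Dropping the last element breaks the condition.
print(covers_all_permutations([3, 1, 2, 1, 3, 2], 3))      # False
```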
5.2 Design and Verification of MRT Protocols
In this section, we design a number of MRT protocols based on the methodology of Sect. 5.1. Each MRT protocol consists of a main protocol and a resolve sub-protocol. Similar to the MR protocol, the MRT protocols assume resilient communication channels and private contract signatures (PCS).
Fig. 2. An MRT protocol with 4 signers based on the sequence 123 | 42314234 | 123432
We have modelled and verified the fairness and timeliness properties of the MRT protocols generated from all 7 shortest sequences for 3 signers. For 4 signers, we verified the protocols generated from sequences ➁, ➃ and ➆ mentioned above. We briefly present our modelling of MRT protocols as follows.

Main protocol. The signers send and receive messages in the order specified by a signing sequence, which is generated from a shortest sequence containing all permutations, as introduced before. Upon receipt of a message containing all required information, a signer Pi generates a message consisting of all the up-to-date promises and signatures and sends it to the next designated receiver. If Pi does not receive the expected message, he may quit the protocol if he has not sent out any message yet, or he may start the resolve protocol by sending a resolve request to T. The request is of the form {dispute, i, H_i, c}_i, where dispute is a reserved keyword indicating that Pi is contacting T for intervention, and H_i is Pi's history, including all the messages he has sent or received so far, which gives T sufficient information to judge Pi's current position in an execution. The identifier c uniquely identifies this contract signing session, covering the contract text and the signing partners. Pi's request does not indicate whether Pi asks T to abort or to resolve; it is T's responsibility to make a decision and to reply with an abort or a signed contract.

Resolve Sub-protocol. T maintains a tuple ⟨c, status⟩ in her database, recording the signers who have sent requests so far. Together with the history H_i of each received request, T is able to decide whether to reply with an abort or a signed contract. The reasoning pattern of T in the sub-protocols of MRT is very similar to that of the MR protocol: a signer is considered dishonest if
he is shown by another signer's request to have continued in the main protocol after having sent a request to T. However, in the MRT protocols, different signers may have different promise levels at a particular position, induced by the signing sequences of the main protocols. As a consequence, the sub-protocol of an MRT protocol has to be adjusted slightly from that of MR, and the sub-protocols of different MRT protocols may differ from each other.
5.3 An Attack on the Example Protocol
Our analysis in Mocha reveals an abort-chaining attack on the example MRT protocol with 3 signers in [7]. This is because the protocol does not strictly follow the methodology described in Sect. 5.1. Here we also present a simple fix. The protocol with its attack scenario is depicted in Fig. 1 (right); the abort-chaining attack is highlighted by the shadowed circles. In this scenario, P1 and P3 are dishonest and collude to obtain P2's signature. The attack proceeds as follows, where prom_τ(c, i) denotes the τ-level promise of Pi on contract c:
– P1 sends his first message out, and then contacts T with H1 = {prom_1(c, 1)}, from which T presumes P1 is in the initial phase; she replies with an abort and at the same time stores ⟨c, (1 : {prom_1(c, 1)})⟩ in her database. After having contacted T, P1 continues in the main protocol till the end.
– P3 contacts T at the position of the first highlighted R circle with H3 = {prom_1(c, 1), prom_1(c, 2), prom_1(c, 3)}. This message does not reveal that P1 is continuing the main protocol, so T also replies with an abort and stores ⟨c, (3 : {prom_1(c, 1), prom_1(c, 2), prom_1(c, 3)})⟩ in her database. After having contacted T, P3 continues in the main protocol up to the receipt of P2's signature.
– P2 faithfully follows the main protocol till the end. After sending out his signature, P2 never receives P3's signature. P2 then contacts T with H2 = {prom_1(c, 1), prom_1(c, 2), prom_1(c, 3), prom_2(c, 1), prom_2(c, 2), sig(c, 1), sig(c, 2)}. On receipt of such a request, T is able to deduce that P1 has been dishonest. However, T is unable to conclude that P3 is cheating, because P3's second level promise is not forwarded by P1 according to the protocol design shown in [7, Sect. 7].
The flaw of this protocol is due to a violation of assumption 3 in Sect. 5.1. In order to fix the problem, we change P1's last message from {sig(c, 1)} into {sig(c, 1), prom_2(c, 3)}, i.e., P1 is required to forward all the up-to-date promises and signatures in his hands to P2. With P3's second level promise in H2, T is able to find out that P3 is dishonest. Therefore, T can overturn her abort decision and guarantee fairness for P2.
6 Discussion and Conclusion
In this paper, we have used the model checker Mocha to analyse two types of MPCS protocols: the MR protocol [6] and a number of MRT protocols [7]. (All Mocha models and ATL properties can be found at [12].)
Mocha allows one to specify properties in ATL, a branching-time temporal logic with game semantics, and the model checking problem for ATL requires the computation of winning strategies. Thus the use of Mocha allows us to have a precise and natural formulation of the desired properties of contract signing. Mukhamedov and Ryan showed that the CKS protocol is not fair for n ≥ 5 by giving an abort-chaining attack. The fairness of their fixed protocol [6] has been analysed in NuSMV for 5 signers. Instead, we modelled the MR protocol in Mocha with up to 5 signers, and checked both the fairness and timeliness properties. The formulation of fairness in ATL as winning strategies is model independent, while Mukhamedov and Ryan have to split fairness into two CTL sub-properties in order to cover all possible scenarios, for which it is necessary to go through a number of cases (see [6], Sect. 7).

The main result of Mauw, Radomirović and Torabi Dashti [7] made it feasible to construct fair MPCS protocols with a minimal number of messages. Their main theorem [7] states that there is a fair signing sequence of length n² − n + 3, where n is the number of signers in an MPCS protocol. This fair sequence must contain all permutations of {1, . . . , n} as sub-sequences, and it can be transformed back into an MPCS protocol of length n² + 1. However, the resulting MPCS protocol is only free of abort-chaining attacks, and it is merely conjectured that this implies fairness. We described explicitly how to derive an MRT protocol from a minimal signing sequence. Following this methodology, we designed a number of MRT protocols for 3 and 4 signers, all of which have been checked in Mocha. In particular, we discovered an abort-chaining attack in the published MRT protocol with 3 signers [7]. The flaw is due to a mistake in the protocol design. We also presented a solution to it, and the fixed protocol is shown to satisfy fairness in Mocha.

Chadha, Kremer and Scedrov used Mocha to check abuse-freeness in the GM protocol and the CKS protocol, and found a vulnerability in the first protocol [5]. The vulnerability is due to the fact that T's reply to a signer's abort or resolve request contains additional information, which can be used by the signer as a proof for an outside challenger. Their fix is to exclude the additional information from T's replies. The MR protocol uses similar abort and resolve sub-protocols. Mukhamedov and Ryan claimed that their protocol is abuse-free because of the use of PCS. However, the situation with MRT protocols is different: a single signer not only sends out his own promise to the intended receiver, but also forwards the other signers' promises. This might give a coalition of signers an advantage over the remaining signers. However, the advantage has to be provable. How to formalise abuse-freeness in a precise and correct way is a challenging research topic [15,16,17]. Our immediate future work is to analyse abuse-freeness in the MRT protocols: either we prove the designed MRT protocols abuse-free, or we use the built models to identify a point at which a coalition of signers has a provable advantage over an honest signer. In this paper, we have verified protocols with a quite limited number of signers (up to five); the verification of timeliness properties in Mocha usually took minutes, while for fairness properties
it might need a number of days. Another future direction is to study abstract interpretation [18] in order to analyse the models in Mocha with more signers.

Acknowledgement. We thank Saša Radomirović for many helpful discussions.
References
1. Asokan, N., Waidner, M., Schunter, M.: Optimistic protocols for fair exchange. In: Proc. CCS, pp. 7–17. ACM, New York (1997)
2. Asokan, N., Shoup, V., Waidner, M.: Optimistic fair exchange of digital signatures. Selected Areas in Communications 18(4), 591–606 (2000)
3. Kremer, S., Markowitch, O., Zhou, J.: An intensive survey of fair non-repudiation protocols. Computer Communications 25(17), 1606–1621 (2002)
4. Garay, J.A., MacKenzie, P.D.: Abuse-free multi-party contract signing. In: Jayanti, P. (ed.) DISC 1999. LNCS, vol. 1693, pp. 151–166. Springer, Heidelberg (1999)
5. Chadha, R., Kremer, S., Scedrov, A.: Formal analysis of multi-party contract signing. J. Autom. Reasoning 36(1-2), 39–83 (2006)
6. Mukhamedov, A., Ryan, M.D.: Fair multi-party contract signing using private contract signatures. Inf. Comput. 206(2-4), 272–290 (2008)
7. Mauw, S., Radomirović, S., Torabi Dashti, M.: Minimal message complexity of asynchronous multi-party contract signing. In: Proc. CSF, pp. 13–25. IEEE CS, Los Alamitos (2009)
8. Alur, R., Henzinger, T.A., Mang, F.Y.C., Qadeer, S., Rajamani, S.K., Tasiran, S.: Mocha: Modularity in model checking. In: Vardi, M.Y. (ed.) CAV 1998. LNCS, vol. 1427, pp. 521–525. Springer, Heidelberg (1998)
9. Alur, R., Henzinger, T.A., Kupferman, O.: Alternating-time temporal logic. J. ACM 49(5), 672–713 (2002)
10. Emerson, E.A.: Temporal and modal logic. In: Handbook of Theoretical Computer Science (B), pp. 955–1072. MIT Press, Cambridge (1990)
11. Alur, R., Henzinger, T.A.: Reactive modules. Formal Methods in System Design 15(1), 7–48 (1999)
12. Zhang, Y., Zhang, C., Pang, J., Mauw, S.: Game-based verification of multi-party contract signing protocols – Mocha models and ATL properties (2009), http://satoss.uni.lu/members/jun/mpcs/
13. Cimatti, A., Clarke, E.M., Giunchiglia, E., Giunchiglia, F., Pistore, M., Roveri, M., Sebastiani, R., Tacchella, A.: NuSMV 2: An open source tool for symbolic model checking. In: Brinksma, E., Larsen, K.G. (eds.) CAV 2002. LNCS, vol. 2404, pp. 359–364. Springer, Heidelberg (2002)
14. Adleman, L.: Short permutation strings. Discrete Mathematics 10, 197–200 (1974)
15. Chadha, R., Mitchell, J.C., Scedrov, A., Shmatikov, V.: Contract signing, optimism, and advantage. J. Log. Algebr. Program. 64(2), 189–218 (2005)
16. Kähler, D., Küsters, R., Wilke, T.: A Dolev-Yao-based definition of abuse-free protocols. In: Bugliesi, M., Preneel, B., Sassone, V., Wegener, I. (eds.) ICALP 2006. LNCS, vol. 4052, pp. 95–106. Springer, Heidelberg (2006)
17. Cortier, V., Küsters, R., Warinschi, B.: A cryptographic model for branching time security properties – the case of contract signing protocols. In: Biskup, J., López, J. (eds.) ESORICS 2007. LNCS, vol. 4734, pp. 422–437. Springer, Heidelberg (2007)
18. Henzinger, T.A., Majumdar, R., Mang, F.Y.C., Raskin, J.F.: Abstract interpretation of game properties. In: Palsberg, J. (ed.) SAS 2000. LNCS, vol. 1824, pp. 220–239. Springer, Heidelberg (2000)
Attack, Solution and Verification for Shared Authorisation Data in TCG TPM

Liqun Chen and Mark Ryan
HP Labs, UK, and University of Birmingham, UK
Abstract. The Trusted Platform Module (TPM) is a hardware chip designed to enable computers to achieve greater security. Proof of possession of authorisation values known as authdata is required by user processes in order to use TPM keys. If a group of users are to be authorised to use a key, then the authdata for the key may be shared among them. We show that sharing authdata between users allows a TPM impersonation attack, which enables an attacker to completely usurp the secure storage of the TPM. The TPM has a notion of encrypted transport session, but it does not fully solve the problem we identify. We propose a new authorisation protocol for the TPM, which we call Session Key Authorisation Protocol (SKAP). It generalises and replaces the existing authorisation protocols (OIAP and OSAP). It allows authdata to be shared without the possibility of the impersonation attack, and it solves some other problems associated with OIAP and OSAP. We analyse the old and the new protocols using ProVerif. Authentication and secrecy properties (which fail for the old protocols) are proved to hold of SKAP.
1 Introduction
The Trusted Platform Module (TPM) specification is an industry standard [14] and an ISO/IEC standard [6], coordinated by the Trusted Computing Group (TCG), for providing trusted computing concepts in commodity hardware. TPMs are chips that aim to enable computers to achieve greater levels of security than is possible in software alone. There are 100 million TPMs currently in existence, mostly in high-end laptops. Application software such as Microsoft's BitLocker and HP's ProtectTools uses the TPM in order to guarantee security properties. The TPM stores cryptographic keys and other sensitive information in shielded locations. Keys are organised in a tree hierarchy, with the Storage Root Key (SRK) at its root. Each key has associated with it some authorisation data, known as authdata, which may be thought of as a password to use the key. Processes running on the host platform or on other computers can use the TPM keys in certain controlled ways. To use a key, a user process has to prove knowledge of the relevant authdata. This is done by accompanying the command with an HMAC (a hash-function-based message authentication code, as specified in [5]),
keyed on the authdata or on a shared secret derived from the authdata. When a new key is created in the tree hierarchy, its authdata is chosen by the user process and sent encrypted to the TPM. The encryption is done with a key that is derived from the parent key authdata. The TPM stores the new key's authdata along with the new key. Creating a new key involves using the parent key, and therefore an HMAC proving knowledge of the parent key's authdata has to be sent.

If a group of users are to be authorised to use a key, then the authdata for the key may be shared among them. In particular, the authdata for SRK (written srkAuth) is often assumed to be a widely known value, in order to permit anyone to create child keys of SRK. This is analogous to allowing several people to share a password to use a resource, such as a database. We show that sharing authdata between users has some significant undesirable consequences. For example, an attacker that knows srkAuth can fake all the storage capabilities of the TPM, including key creation, sealing, unsealing and unbinding. Shared authdata completely breaks the security of the TPM storage functions. Some commentators to whom we have explained our attack have suggested the TPM's encrypted transport sessions as a way of mitigating the attack. We show that they are not able to do so satisfactorily (Section 2.4).

We solve this problem by proposing a new authorisation protocol for the TPM, which we call the Session Key Authorisation Protocol (SKAP). It generalises and replaces the existing authorisation protocols (OIAP and OSAP). In contrast with them, it does not allow an attacker that knows authdata to fake a response by the TPM. SKAP also fixes some other problems associated with OIAP and OSAP. To demonstrate its security, we analyse the old and the new protocols using the protocol analyser ProVerif [8,9], and prove authentication and secrecy properties of SKAP.

Related work. Other attacks of a less significant nature have been found against the TPM. The TPM protocols expose weak authdata secrets to offline dictionary attacks [11]. To fix this, we proposed to modify the TPM protocols by using SPEKE (Simple Password Exponential Key Exchange [1]). However, the modifications proposed in [11] do not solve the problem of shared authdata. An attacker can in some circumstances illegitimately obtain a certificate on a TPM key of his choice [12]. Also, an attacker can intercept a message, aiming to cause the legitimate user to issue another one, and then cause both to be received, resulting in the message being processed twice [10]. Some verification of certain aspects of the TPM is done in [13]. Also in [13], an attack on the delegation model of the TPM is described; however, experiments with real TPMs have shown that the attack is not possible [7].

Paper overview. Section 2 describes the current authorisation protocols for the TPM, and in Sections 2.2 and 2.3 we demonstrate our attack. In Section 2.4 we explain why the TPM's encrypted transport sessions do not solve the problems. Section 3 describes our proposed protocol, SKAP, which replaces OIAP and OSAP. In Section 4, we use ProVerif to demonstrate the security of SKAP compared with OIAP and OSAP. Conclusions are in Section 5.
2 TPM Authorisation
A TPM command that makes use of TPM keys requires the process issuing the command to be authorised. A process demonstrates its authorisation by proving knowledge of the relevant authdata. Since the TPM is a low-power device, its design minimises the use of heavy-weight cryptography, preferring light-weight solutions (such as hashes and HMACs) where possible. Authorisation is demonstrated by accompanying a TPM command with such an HMAC of the command parameters, keyed on the authdata or on a shared secret derived from the authdata. We denote the result of the HMAC by hmac_ad(msg), where ad is the authdata and msg is a concatenation of selected message parameters. The response from the TPM to an authorised command is also accompanied by an HMAC of the response parameters, again keyed on the authdata or the shared secret. This is intended to authenticate the response to the calling process.

The TPM provides two kinds of authorisation sessions, called the object independent authorisation protocol (OIAP) and the object specific authorisation protocol (OSAP). OIAP allows multiple keys to be used within the same session, but it does not allow commands that introduce new authdata, and it does not allow authdata for an object to be cached for use over several commands. An OSAP session is restricted to a single object, but it does allow new authdata to be introduced, and it creates a session secret to securely cache authorisation over several commands. If a command within an OSAP session introduces new authdata, then the OSAP session is terminated by the TPM (because the shared secret is contaminated by its use in XOR encryption).

In order to prevent replay attacks, each HMAC includes two nonces, one from the user process and one from the TPM, as part of msg. The nonces created by the calling process are called "odd", denoted n_o, and the nonces created by the TPM are called "even", denoted n_e. The nonces are sent in the clear, and also included in the msg part of the HMAC. Both the process and the TPM use a fresh nonce in each HMAC computation, and they verify the incoming HMACs to check integrity and authorisation. For example, the process sends the first odd nonce n_o1 to the TPM and receives the first even nonce n_e1 along with mac1 = hmac_ad(n_o1, n_e1, ...); it then sends mac2 = hmac_ad(n_e1, n_o2, ...) together with n_o2 and receives n_e2 along with mac3 = hmac_ad(n_o2, n_e2, ...), and so on. This is sometimes called a rolling nonce protocol.
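As a rough illustration of this rolling-nonce scheme, the following Python fragment sketches the HMAC computation described above; the parameter layout and encoding are our own simplification and do not reproduce the exact byte format of real TPM commands.

```python
import hmac, hashlib, os

def auth_hmac(authdata: bytes, *params: bytes) -> bytes:
    """HMAC over the concatenated parameters, keyed on the authdata (or a derived shared secret)."""
    return hmac.new(authdata, b"".join(params), hashlib.sha1).digest()

authdata = hashlib.sha1(b"user password").digest()   # authdata as a 20-byte value
n_o1 = os.urandom(20)                                 # odd nonce, chosen by the caller
n_e1 = os.urandom(20)                                 # even nonce, chosen by the TPM
command_params = b"...selected command parameters..."

# The caller authorises a command; the TPM recomputes and compares the HMAC.
mac = auth_hmac(authdata, command_params, n_e1, n_o1)
assert hmac.compare_digest(mac, auth_hmac(authdata, command_params, n_e1, n_o1))
```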
2.1 Authorisation Example
In this subsection, we look at an authorisation example in which a user process first asks the TPM to create a new key as part of the storage key tree, then loads this key into the TPM's internal memory, and finally uses this key to encrypt some data. These three functions are demonstrated in Figure 1 as three separate sessions. Session 1 shows the exchange of messages between the user process and the TPM when a child key of another loaded key (called the parent key) is created
using the TPM command TPM_CreateWrapKey. The TPM returns a blob, consisting of the newly created key and some other data, encrypted with the parent key. The user process and the TPM achieve this function by performing the following steps:
1. First, the user process sets up an OSAP session based on the currently loaded parent key. The parent key handle is pkh, and its authdata is ad(pkh). The TPM_OSAP command includes pkh and the nonce n_o^osap.
2. Upon receipt of the TPM_OSAP command, the TPM assigns a new session authorisation handle ah, generates two nonces n_e and n_e^osap, and sends these items back as the response.
3. The user process and the TPM each calculate the shared secret S, derived from ad(pkh) and the two OSAP nonces using the HMAC algorithm.
4. Then, the user process calls TPM_CreateWrapKey, providing arguments including the authdata newauth for the key being created, some other parameters about the key, and the HMAC keyed on S demonstrating knowledge of the parent key authdata. To protect the new authdata, it is XOR-encrypted with the key SHA1(S, n_e), which is derived from ad(pkh) via the shared secret S.
5. After receiving this command, the TPM checks the HMAC and creates the new key. The TPM returns a blob, keyblob, consisting of the public key and an encrypted package containing the private key and the new authdata. The returned message is authenticated by accompanying it with an HMAC over the two nonces, keyed on S.
6. Because the shared secret S has been used as a basis for an authdata encryption key, the OSAP session is terminated by the TPM. Later commands will have to start a new session.

In order to be used, the newly created key must be loaded into the TPM. For this, an OIAP session may be used. Session 2 shows the messages exchanged between the user process and the TPM during the creation of the OIAP session and the TPM_LoadKey2 command. The following steps are performed:
1. The user process sends the TPM_OIAP command to the TPM.
2. The TPM assigns the session authorisation handle ah and sends it back along with a newly created nonce n_e.
3. The process calls TPM_LoadKey2, providing arguments including the parent key handle pkh and keyblob. The authorisation of this command is achieved using the authdata of the parent key, ad(pkh).
4. The TPM checks the HMAC and, if the check passes, decrypts keyblob and loads the key into its internal memory. The TPM finally creates a key handle kh for the loaded key and a nonce n_e, and sends them back together with an HMAC keyed on the authdata of the parent key, ad(pkh).

After the key is loaded, it can be used to encrypt data using TPM_Seal. As well as encrypting the data, TPM_Seal binds the encrypted package to particular Platform Configuration Registers (PCRs) specified in the TPM_Seal command. The TPM will later unseal the data only if the platform is in a configuration matching those PCRs. TPM_Seal requires a new OSAP session based on the newly created key. The details are shown in Session 3, where the user process and the TPM perform the following steps:
Fig. 1. Session 1: Creating a key on the TPM. TPM_OSAP creates an OSAP session, and both parties compute the shared secret S. TPM_CreateWrapKey requests the TPM to create a key; the command and the response are authenticated by the shared secret S. Session 2: Loading a key on the TPM. TPM_OIAP creates an OIAP session for the TPM_LoadKey2 command. Session 3: Using the key to seal data. TPM_OSAP creates an OSAP session and its corresponding shared secret S' for the TPM_Seal command; the seal command and the response are authenticated by S'.
1. The first three steps are identical to those of Session 1, except that they use the key handle kh and authdata ad(kh) of the newly loaded key, instead of pkh and ad(pkh).
2. After setting up the OSAP session, the user process calls TPM_Seal, providing arguments including the data to be sealed, a PCR selection and the new authdata for the corresponding unseal operation. The new authdata is again XOR-encrypted with a key derived from the authdata of the encrypting key.
The message is authenticated by accompanying it with an HMAC keyed on the secret S'.
3. The TPM responds to the command with a sealed blob, sealedblob, which consists of an encrypted package containing the sealed data, the PCR value and the new authdata. Again, the returned message is authenticated by accompanying it with an HMAC keyed on the secret S'.
2.2 The Problem of Shared Authdata
If authdata is a secret shared only between the calling process and the TPM, then the HMACs serve to authorise the command and to authenticate the TPM response. However, as mentioned earlier, authdata may be shared between several users, in order to allow each of them to use the resource that the authdata protects. In particular, the authdata of SRK is often assumed to be a well-known value. For example, in the Design Principles part of the TPM specification [6,14], sections 14.5 and 14.6 refer to the possibility that SRK authdata is a well-known value, and sections 30.2 and 30.8 refer to other authorisation data being well-known values. The usage model is that a platform has a single TPM, and the TPM has a single SRK, which plays the role of the root of a trusted key hierarchy tree. If the platform has multiple users, each of them can build their own branches of the tree on top of the same root. In order to let multiple users access SRK, the authdata of SRK is made available to all of them. The goal is that although these users share the same SRK and its authdata, they are only able to access their own key branches and not anyone else's.

We now show how the idea of sharing SRK authdata fails to achieve the design principle of the protected storage functionality of the TPM. Suppose one of the users who knows an authdata value is malicious, and that he can intercept a command from another user to the TPM (the TPM protocols involving encryption and HMACs are clearly designed on the assumption that such interception is possible). He can use knowledge of the authdata to decrypt any new authdata that the command is introducing, and he can fake the TPM response that is authenticated using the shared authdata. It follows that an attacker who knows the authdata for SRK can fake the creation of child keys of SRK. Those keys are then keys made by the attacker in software, and completely under his control. He can intercept requests to use those keys, and fake the responses. Therefore, all keys intended to be descendants of SRK can be faked by the attacker. An attacker with knowledge of SRK authdata can completely usurp the storage functionality of the TPM, by creating all the keys in software under his own control, and faking all the responses of the TPM.
2.3 The Attack in Practice
We suppose that Alice is in possession of a laptop owned by her employer, which has an IT department that we call ITadmin. The TPM_TakeOwnership command was performed by ITadmin when the laptop was first procured; thus, the TPM has created SRK and given its authdata to ITadmin. When Alice
receives the laptop, she is also provided with SRK authdata so that she can use the storage functions of the TPM. Alice now decides to create a key on the TPM with authdata of her own choosing, and wants to encrypt her data using that key. She invokes the commands of Figure 1 of Section 2.1. Unknown to her, ITadmin has configured the laptop so that commands intended to go to the TPM go instead to software under ITadmin's control. This software responds to all the commands that Alice sends. ITadmin's software creates the necessary nonces and fakes the response to TPM_OSAP. Next, it fakes the creation of the key and fakes all the responses to the user (again creating all the necessary nonces). In particular, in the case of TPM_CreateWrapKey, ITadmin's software
– is able to calculate the session secret S, since it is based on SRK authdata and other public values (namely, the OSAP nonces that are sent in the clear);
– is able to decrypt the new authdata, since it is XOR-encrypted with a key based on SRK authdata and other public values (namely, the command nonces that are sent in the clear);
– is able to create an RSA key in software, according to the parameters specified in the command;
– is able to create the message returned to the user process. This involves encrypting the "secret" package with SRK, and creating the HMAC that "authenticates" the TPM.
Next, ITadmin's software fakes the response to TPM_LoadKey2 (using its knowledge of SRK authdata to create the necessary HMAC). Finally, it fakes the response to TPM_Seal (using its knowledge of the new key's authdata to create the necessary HMAC). Therefore, ITadmin can successfully impersonate the TPM just because it knows the authdata of SRK.

The attack scenario given in this example, in which ITadmin is the attacker, is similar to the one illustrating the TPM_CertifyKey attack in [12]. Many other scenarios are possible. For example, TPMs are now common in servers, and many interesting use cases involve remote clients accessing TPM functionality on a server (for instance, to achieve guarantees about the server's behaviour). In that scenario, our attack means that the server is able to spoof all the responses from the TPM. Another class of scenarios which illustrate this attack revolves around virtualisation; there too, independent virtual environments share a TPM and share knowledge of SRK authdata, allowing one such environment to spoof TPM replies to another.
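The first two capabilities in the list above are straightforward to reproduce. The following Python sketch shows how a party holding the shared SRK authdata can recompute the OSAP session secret from the public nonces and strip the XOR encryption from the new authdata; it is a simplified illustration of the computation, not an interface to a real TPM.

```python
import hmac, hashlib

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def osap_secret(authdata: bytes, ne_osap: bytes, no_osap: bytes) -> bytes:
    """Shared secret S, an HMAC keyed on the authdata over the two OSAP nonces (sent in the clear)."""
    return hmac.new(authdata, ne_osap + no_osap, hashlib.sha1).digest()

def recover_newauth(srk_auth, ne_osap, no_osap, ne, encrypted_newauth):
    """Undo the XOR encryption of the new authdata, whose pad is SHA1(S, ne)."""
    S = osap_secret(srk_auth, ne_osap, no_osap)
    pad = hashlib.sha1(S + ne).digest()
    return xor(encrypted_newauth, pad)
```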
2.4 Encrypted Transport Sessions
The OIAP and OSAP sessions are intended to provide message integrity, but not message confidentiality. The TPM has a notion of encrypted transport session [14,6], which is intended to provide message confidentiality. Encrypted transport sessions are initiated with the TPM_EstablishTransport command, which allows a session key to be established, using a public storage key of the TPM. Since
the security of the session is anchored in a public key, and that public key can be certified, this does indeed defeat the TPM spoofing attack we have described above. However, encrypted transport sessions are not an ideal solution as an alternative to the OIAP and OSAP sessions for providing robust TPM authorisation, because they do not solve the problem of weak authdata reported in [11]. In that paper, it is shown that the TPM protocols expose authdata to the possibility of offline guessing attacks. If authdata is based on a weak secret, then an attacker that tries to guess the value of the authdata is able to confirm his guess offline. Encrypted transport sessions do not resist this attack, because they do not encrypt the high-entropy values (the rolling nonces) that are used in the authorisation HMACs. Therefore, changes to OIAP and OSAP are necessary to avoid the attack of [11]. Unfortunately, the changes proposed in [11] do not solve the attack we have identified in this paper. The solution proposed in [11] is based on the SPEKE protocol, which relies on a secret being shared between the two participants, whereas shared authdata precisely invalidates that assumption. Thus, there is no alternative to a thorough re-design of the authorisation protocols of the TPM.
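As a rough sketch of the offline guessing problem described above (assuming, for illustration only, that authdata is derived by hashing a user-chosen password), an eavesdropper who has observed one command together with its nonces and HMAC can test candidate passwords locally:

```python
import hmac, hashlib

def auth_hmac(authdata, params, ne, no):
    return hmac.new(authdata, params + ne + no, hashlib.sha1).digest()

def offline_guess(observed_mac, params, ne, no, candidates):
    """Return the candidate password whose derived authdata reproduces the observed HMAC."""
    for pw in candidates:
        ad = hashlib.sha1(pw.encode()).digest()
        if hmac.compare_digest(auth_hmac(ad, params, ne, no), observed_mac):
            return pw
    return None
```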
3 A New TPM Authorisation Protocol
Our aim is to design an authorisation protocol that solves both the weak authdata problem of [11] and the shared authdata problem reported in this paper. Moreover, we aim to avoid the complexity and cost of the encrypted transport session. (We showed above that the encrypted transport session doesn't solve both attacks anyway.) Our solution relies on public-key cryptography; the TPM designers wanted to avoid that, since it is expensive, but it seems impossible to achieve proper authentication with shared authdata without it. We design our protocol to minimise the frequency with which public key operations are required.

We propose Session Key Authorisation Protocol (SKAP), which has the following advantages over the existing OIAP and OSAP protocols:
– It generalises OIAP and OSAP, providing a session type that offers the advantages of both. In particular, it can cache a session secret to avoid repeatedly requesting the same authdata from a user (like OSAP), and it allows different objects within the same session (like OIAP).
– It is a long-lived session. In contrast with OSAP, it is not necessary to terminate the session when a command introduces new authdata.
– It allows authdata to be shared among users, without allowing users that know authdata to impersonate the TPM.
– In contrast with existing TPM authorisation, it does not expose low-entropy authdata to offline dictionary attacks [11].
User → TPM : TPM_SKAP(kh, {S}pk(kh))
TPM → User : ah, ne
(both sides compute K1 = hmac_S(ad(kh), ne, 1) and K2 = hmac_S(ad(kh), ne, 2))
User → TPM : TPM_Command1(ah, kh, no, . . .), enc_K2(newauth), hmac_K1(null, ne, no, . . .)
TPM → User : response, ne′, hmac_K1(null, ne′, no, . . .)
User → TPM : TPM_Command2(ah, kh′, no′, . . .), enc_K2(newauth′), hmac_K1(ad(kh′), ne′, no′, . . .)
TPM → User : response, ne″, hmac_K1(ad(kh′), ne″, no′, . . .)
(ne′, ne″ and no′ denote the successive rolling nonces)
Fig. 2. Establishing a session using Session Key Authorisation Protocol, and executing two commands in the session. The session is established relative to a loaded key with handle kh. Command1 uses that key, and therefore does not need to cite authdata. Command2 uses a different key, and cites authdata in the body of the authorisation HMAC.
The message exchanges between a user process and the TPM in the SKAP protocol are illustrated in Figure 2. Similarly to OSAP, an SKAP session is established relative to a loaded key with handle (say) kh. The secret part of this key sk(kh) is known to the TPM and the public part pk(kh) is known to all user processes which want to use the key. At the time the session is established, the user process generates a high-entropy session secret S, which could be created as a session random number, and sends the encryption {S}pk(kh) of S with pk(kh) to the TPM. Theoretically any secure asymmetric encryption algorithm can be used for this purpose; the TPM Specification uses RSA-OAEP [2] throughout, so we propose to use that too. The TPM responds with an authorisation handle ah and the first of the rolling nonces, ne, as usual. Then each side computes two keys K1, K2 from S by using a MAC function keyed on S. The authdata ad(kh) for the key and the nonce ne are cited in the body of the MAC. Any secure MAC function is suitable for our solution, but the TPM specification uses HMAC [5] for other purposes so we use that too. Command1 in the illustrated session uses the key (sk(kh), pk(kh)) for which the session was established. The authorisation HMAC it sends is keyed on K1, a secret known only to the user process and the TPM. In contrast with OSAP, this secret is not available to other users or processes that know the authdata for the key. Moreover, K1 is high-entropy even if the underlying authdata is low entropy
(thanks to the high-entropy session secret S). New authdata (written newauth) that Command1 introduces to the TPM is encrypted using K2. In the figure, encK2(newauth) denotes the result of encrypting newauth with a symmetric encryption algorithm using the secret key K2. In general, any secure symmetric encryption scheme can be used in this solution. More specifically, in order to protect against not only eavesdropping but also unauthorised modification, we suggest using authenticated encryption as specified in [4]. One example is AES Key Wrap with the AES block cipher [3]. In contrast with OSAP, SKAP sessions may use keys other than the one relative to which the session was established. Command2 in Figure 2 uses a different key, whose handle is kh′. Authdata for that key is cited in the body of the HMAC that is keyed on K1.
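A minimal sketch of these computations (ours), assuming HMAC-SHA1 as the MAC and simple concatenation of the MAC body fields; the precise field encoding is not fixed by this description:

import hashlib, hmac

def mac(key, *parts):
    return hmac.new(key, b"".join(parts), hashlib.sha1).digest()

def derive_session_keys(S, authdata_kh, ne):
    # K1 = hmac_S(ad(kh), ne, 1) and K2 = hmac_S(ad(kh), ne, 2)
    K1 = mac(S, authdata_kh, ne, b"\x01")
    K2 = mac(S, authdata_kh, ne, b"\x02")
    return K1, K2

def command1_hmac(K1, ne, no, param_digest):
    # Command1 uses the session key itself, so the authdata slot is null
    return mac(K1, b"", ne, no, param_digest)

def command2_hmac(K1, authdata_kh2, ne, no, param_digest):
    # Command2 uses a different key and cites ad(kh') in the HMAC body
    return mac(K1, authdata_kh2, ne, no, param_digest)

Since K1 and K2 are derived from the freshly chosen high-entropy secret S, knowing the (possibly shared, possibly weak) authdata alone is not enough to reproduce them.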
3.1 The Example Revisited
We revisit the authorisation example described in Section 2.1, where the user wants to perform three commands, TPM CreateWrapKey, TPM LoadKey2 and TPM Seal in a short period. We briefly demonstrate how these commands can be run in a single session (Figure 3). Suppose that the user starts from a parent key
User → TPM : TPM_SKAP(pkh, {S}pk(pkh))
TPM → User : ah, ne
(both sides compute K1 = hmac_S(ad(pkh), ne, 1) and K2 = hmac_S(ad(pkh), ne, 2))
User → TPM : TPM_CreateWrapKey(ah, pkh, no, . . . , enc_K2(newauth)), hmac_K1(null, ne, no, . . .)
TPM → User : keyblob, ne′, hmac_K1(null, ne′, no, . . .)
User → TPM : TPM_LoadKey2(ah, pkh, no′, . . .), hmac_K1(null, ne′, no′, . . .)
TPM → User : kh, ne″, hmac_K1(null, ne″, no′, . . .)
User → TPM : TPM_Seal(ah, kh, no″, . . . , enc_K2(newauth′)), hmac_K1(ad(kh), ne″, no″, . . .)
TPM → User : sealedblob, ne‴, hmac_K1(ad(kh), ne‴, no″, . . .)
(ne′, ne″, ne‴ and no′, no″ denote the successive rolling nonces)
Fig. 3. An example of SKAP, showing creating a key, loading the key, and sealing with the key in a single SKAP session. Compare Figure 1.
whose handle is pkh, and whose authdata ad(pkh) is well-known. (This parent key might be SRK, for example.) By following the SKAP protocol, the user first establishes a session for the parent key. To do this, he chooses a 160-bit random number as the session secret S, then encrypts S with the public part of the parent key and sends {S}pk(pkh) to the TPM. After that both sides compute two keys K1 and K2 based on the values S and ad(pkh). Then the user sends TPM CreateWrapKey as TPM Command1 in Figure 2, along with the encrypted new authorisation data for the requested key and an HMAC for integrity checking. The TPM responds to the command with a key blob for the newly created key. When receiving any message which shows that either of the two keys K1 and K2 has been used, the user is convinced that he must be talking to the TPM, and the TPM knows that its communication partner knows ad(pkh). When the user wants to use this key (for example, for the sealing function), he sends the TPM the second command TPM LoadKey2 in the same session. Since this also uses the parent key, it is again an example of Command1. The user and the TPM carry on using K1 for authentication. Since TPM LoadKey2 does not introduce new authdata, K2 is not used. After the loading key process succeeds, the user sends the last command TPM Seal. This command uses the newly created and loaded key, which is not the key for which the session was created. Therefore it is an example of Command2 in the figure, and the authdata for the key is required. The command uses the session keys K1 and K2 for authentication and protection of the sealed blob authdata, as before. As we have seen, a single session of the SKAP protocol can handle multiple commands comfortably. The commands are shown in Figure 3. Comparison with Figure 1 shows a reduction from 12 to 8 messages, showing that our protocol is more efficient as well as more secure.
4
Verification
We have modelled the current OSAP authorisation protocol using ProVerif [8,9]. ProVerif is a popular and widely-used tool that checks security properties of protocols. It uses the Dolev-Yao model; that is, it assumes the cryptography is perfect, and checks protocol errors against an active adversary that can capture and insert messages, and can perform cryptographic operations if it has the relevant keys. ProVerif is particularly good for secrecy and authentication properties, and is therefore ideal for our purpose. ProVerif is easily able to find the shared authdata attack of Section 2.3. It shows both failure of secrecy and failure of authentication. We have also modelled the new proposed protocol SKAP, and ProVerif confirms the secrecy and authentication properties. Our ProVerif scripts for OSAP and SKAP are shown in Appendixes 1 and 2 respectively1. In both models, there are two processes, representing the user process and the TPM. The user process requests to start a new session
If not present in this version, those appendices can be found in the version on Mark Ryan's web page.
(respectively OSAP or SKAP) and then requests the execution of a command, such as TPM CreateWrapKey to create a new key. The user process then checks the response from the TPM, and (in our first version) declares the event successU. The TPM process provides the new session, executes the requested command (after checking correct authorisation), and provides the response to the calling user process. It declares the event successT. The properties we verify are
– query attacker:newauth
– query ev:successU(x̃) ==> ev:successT(x̃)
The first one checks if newauth is available to the attacker. The second one stipulates that if the user declares success (i.e. the user considers that the command has executed correctly) for parameters x̃, then the TPM also declares success (i.e. it has executed the command) with the same parameters. (Here, the parameters include the agreed session key.) If this property is violated, then potentially an attacker has found a means to impersonate the TPM. We expect the secrecy property (first query) to fail for OSAP and succeed for SKAP, and this is indeed the case. The correspondence property (second query) is also expected to fail for OSAP and succeed for SKAP. Unfortunately the second query fails for both models, for the trivial reason that the TPM can complete the actions in its trace and then stop just before it declares success. To avoid this trivial reason, we extend the user process so it asks the TPM to prove knowledge of the new authdata introduced by the command, before it declares success. Now if the user declares success, the TPM should have passed the point at which it declares success too. If it has not, then an attacker has found a means to impersonate the responses of the TPM. With this modification, we find an attack for each of the properties for OSAP, demonstrating the attack of Section 2.3. ProVerif proves that SKAP satisfies both properties, demonstrating its security.
5
Conclusion
Sharing authorisation data between several users of a TPM key is a practice endorsed by the Trusted Computing Group [6,14, Design principles, §14.5, §14.6, §30.2, §30.8], but it makes the TPM vulnerable to impersonation attacks. An attacker in possession of the authorisation data for the storage root key (which is the authdata most likely to be shared among users) can completely usurp the secure storage functionality of the TPM. The encrypted transport sessions of the TPM solve this problem, but they do not solve the related problem of guessing attacks (also known as dictionary attacks) on weak authdata, reported in [11]. The solution proposed for guessing attacks does not solve the problem of shared authdata. Therefore, a re-design of the TPM authorisation sessions is necessary. We propose SKAP, a new authorisation session, to replace the existing authorisation sessions OIAP and OSAP. It generalises both of them and improves
them in several ways, in particular by avoiding the TPM impersonation attack and the weak authdata attack. We have analysed the old authorisation sessions and the new proposed one in ProVerif, the protocol analyser. The results show the vulnerability of the old sessions, and the security of the new one.
References
1. ISO/IEC 11770-4: Information technology – Security techniques – Key management – Part 4: Mechanisms based on weak secrets
2. ISO/IEC 18033-2: Information technology – Security techniques – Encryption algorithms – Part 2: Asymmetric ciphers
3. ISO/IEC 18033-3: Information technology – Security techniques – Encryption algorithms – Part 3: Block ciphers
4. ISO/IEC 19772: Information technology – Security techniques – Authenticated encryption
5. ISO/IEC 9797-2: Information technology – Security techniques – Message authentication codes (MACs) – Part 2: Mechanisms using a dedicated hash-function
6. ISO/IEC, P.D.: 11889: Information technology – Security techniques – Trusted platform module
7. Ables, K.: An attack on key delegation in the trusted platform module (first semester mini-project in computer security). Master's thesis, School of Computer Science, University of Birmingham (2009)
8. Blanchet, B.: An efficient cryptographic protocol verifier based on prolog rules. In: Schneider, S. (ed.) 14th IEEE Computer Security Foundations Workshop, Cape Breton, Nova Scotia, Canada, June 2001, pp. 82–96. IEEE Computer Society Press, Los Alamitos (2001)
9. Blanchet, B.: ProVerif: Automatic Cryptographic Protocol Verifier User Manual (2008)
10. Bruschi, D., Cavallaro, L., Lanzi, A., Monga, M.: Replay attack in TCG specification and solution. In: ACSAC 2005: Proceedings of the 21st Annual Computer Security Applications Conference, pp. 127–137. IEEE Computer Society, Los Alamitos (2005)
11. Chen, L., Ryan, M.D.: Offline dictionary attack on TCG TPM weak authorisation data, and solution. In: Grawrock, D., Reimer, H., Sadeghi, A., Vishik, C. (eds.) Future of Trust in Computing. Vieweg & Teubner (2008)
12. Gürgens, S., Rudolph, C., Scheuermann, D., Atts, M., Plaga, R.: Security evaluation of scenarios based on the TCG's TPM specification. In: Biskup, J., López, J. (eds.) ESORICS 2007. LNCS, vol. 4734, pp. 438–453. Springer, Heidelberg (2007)
13. Lin, A.H.: Automated Analysis of Security APIs. Master's thesis, MIT (2005), http://sdg.csail.mit.edu/pubs/theses/amerson-masters.pdf
14. Trusted Computing Group. TPM Specification version 1.2. Parts 1–3 (2007), http://www.trustedcomputinggroup.org/specs/TPM/
Appendix 1: ProVerif Script for OSAP

free null, c, one, two.
fun enc/2.   fun dec/2.
fun senc/2.  fun sdec/2.
fun hmac/2.
fun pk/1.
fun handle/1.
equation dec(sk, enc(pk(sk), m)) = m.
equation sdec(k, senc(k, m)) = m.

query attacker:newauth.                              (* ATTACK FOUND *)
query ev:successU(x,y,z) ==> ev:successT(x,y,z).     (* ATTACK FOUND *)

let User =
  (* request an OSAP session *)
  new no; new noOSAP;
  out(c, (kh, noOSAP));
  in(c, (ah, ne, neOSAP));
  let K = hmac(authdata, (neOSAP, noOSAP)) in
  (* request execution of a command, e.g. TPM_CreateWrapKey *)
  new newauth;
  out(c, no);
  out(c, senc(K, newauth));
  out(c, hmac(K, (ne, no)));
  (* receive the response from the TPM, and check it *)
  in(c, (r, hm));
  if hm = hmac(K, r) then
  (* check that the TPM has newauth *)
  new n; out(c, n); in(c, hm2);
  if hm2 = hmac(newauth, n) then
  event successU(K, r, newauth).

let TPM =
  (* handle the request for an OSAP session *)
  new ne; new neOSAP;
  in(c, noOSAP);
  out(c, (ne, neOSAP));
  let K = hmac(authdata, (neOSAP, noOSAP)) in
  (* execute a command from the user, e.g. TPM_CreateWrapKey *)
  in(c, (no, encNewAuth, hm));
  if hm = hmac(K, (ne, no)) then
  let newauth = sdec(K, encNewAuth) in
  (* return a response to the user *)
  new response;
  out(c, (response, hmac(K, response)));
  event successT(K, response, newauth);
  (* if asked, prove knowledge of newauth *)
  in(c, n);
  out(c, hmac(newauth, n)).

process
  new skTPM;                 (* secret part of a TPM key *)
  let pkTPM = pk(skTPM) in   (* public part of a TPM key *)
  new authdata;              (* the shared authdata *)
  let kh = handle(pkTPM) in
  out(c, (pkTPM, authdata, kh));
  ( !User | !TPM )
Appendix 2: ProVerif Script for SKAP

free null, c, one, two.
fun enc/2.   fun dec/2.
fun senc/2.  fun sdec/2.
fun hmac/2.
fun pk/1.
fun kdf/2.
fun handle/1.
equation dec(sk, enc(pk(sk), m)) = m.
equation sdec(k, senc(k, m)) = m.

query attacker:newauth.                                   (* SECRECY HOLDS *)
query ev:successU(w,x,y,z) ==> ev:successT(w,x,y,z).      (* CORRESPONDENCE HOLDS *)

let User =
  (* request an SKAP session *)
  new K; new no;
  out(c, (kh, enc(pkTPM, K)));
  in(c, (ah, ne));
  let K1 = hmac(K, (authdata, ne, one)) in
  let K2 = hmac(K, (authdata, ne, two)) in
  (* request execution of a command, e.g. TPM_CreateWrapKey *)
  new newauth;
  out(c, ( no, senc(K2, (ne, no, newauth)), hmac(K1, (null, ne, no)) ));
  (* receive the response from the TPM, and check it *)
  in(c, (response, hm));
  if hm = hmac(kdf(K1, newauth), response) then
  (* check that the TPM has newauth *)
  new n; out(c, n); in(c, hm2);
  if hm2 = hmac(newauth, n) then
  event successU(K1, K2, response, newauth).

let TPM =
  (* handle the request for an SKAP session *)
  new ne;
  in(c, encSessKey);
  let K = dec(skTPM, encSessKey) in
  out(c, ne);
  let K1 = hmac(K, (authdata, ne, one)) in
  let K2 = hmac(K, (authdata, ne, two)) in
  (* execute a command from the user, e.g. TPM_CreateWrapKey *)
  in(c, (no, encNewAuth, hm));
  if hm = hmac(K1, (null, ne, no)) then
  let (ne', no', newauth) = sdec(K2, encNewAuth) in
  if ne' = ne then
  if no' = no then
  (* return a response to the user *)
  new response;
  out(c, (response, hmac(kdf(K1, newauth), response)));
  event successT(K1, K2, response, newauth);
  (* if asked, prove knowledge of newauth *)
  in(c, n);
  out(c, hmac(newauth, n)).

process
  new skTPM;                 (* secret part of a TPM key *)
  let pkTPM = pk(skTPM) in   (* public part of a TPM key *)
  new authdata;              (* the shared authdata *)
  let kh = handle(pkTPM) in
  out(c, (pkTPM, authdata, kh));
  ( !User | !TPM )
Trusted Multiplexing of Cryptographic Protocols

Jay McCarthy1 and Shriram Krishnamurthi2
1 Brigham Young University   2 Brown University
Abstract. We present an analysis that determines when it is possible to multiplex a pair of cryptographic protocols. We present a transformation that improves the coverage of this analysis on common protocol formulations. We discuss the gap between the merely possible and the pragmatic through an optimization that informs a multiplexer. We also address the security ramifications of trusting external parties for this task and evaluate our work on a large repository of cryptographic protocols. We have verified this work using the Coq proof assistant.
1
Problem and Motivation
A fundamental aspect of a cryptographic protocol is the set of messages that it accepts. Protocol specifications contain patterns that specify the messages they accept. These patterns describe an infinite set of messages, because the variables that appear in them may be bound to innumerable values. We call this set a protocol's message space. There is a history of attacks on protocols based on the use of (parts of) messages of one protocol as (parts of) messages of another protocol [2,11,15]. These attacks, called type-flaw (or type-confusion) attacks, depend fundamentally on the protocol relation of message space overlap. If the message spaces of two protocols overlap, then there is at least one session of each protocol where at least one message could be accepted by both protocols. This property, however, is more general than a "presence of type-flaw attack" property, because not all overlaps are indications of successful attacks. (In fact, it is common for new versions of a protocol to contain many similar messages.) The message space overlap property not only gives us insight into the protocol and its relation to other protocols; it also provides a test for a fundamental deployment property: dispatchability. We define dispatchability as the ability for a multiplexer to unambiguously deliver incoming protocol messages to the proper protocol session. (We can compare a protocol session's message space with another session's message space to determine if it is possible to dispatch to the correct session. This basic property is necessary for servers to provide concurrency and support for many protocol clients.) Servers typically rely on tcp for this property. They assign a different tcp port for each protocol and trust the operating system's tcp implementation to do the dispatching. However, when cryptographic protocols are embedded in other contexts, such as existing Web service protocols (e.g., soap), more explicit methods of distinguishing protocol messages must be used. Furthermore,
by leaving this essential step implicit, it is not included in the formally verified portion of the protocol specification. This means that the protocol that is actually used is not the one that is verified. Finally, the delegated notion of a session (e.g., tcp’s or ssl’s) may not match the protocol’s notion. This is particularly problematic in protocols with more than two participants that are not simply compositions of two-party protocols. Notice that message space overlap implies that dispatchability is not achievable. If there is a message M that could be accepted by session p and session q of some protocols, then what would a dispatcher do when delivered M ? A faulty identification might cause the actual (though unintended) recipient to go into an inconsistent state or even leak information while the intended recipient starves. It cannot unambiguously deliver the message and therefore is not a correct dispatcher. We present a dispatching algorithm that correctly delivers messages if there is no message space overlap. This algorithm provides proof that the lack of message space overlap implies dispatchability. We present an analysis that determines whether the message spaces of two protocols (sessions of a protocol) overlap. We also present an analysis, phrased as an optimized dispatcher, that determines why there is no overlap between two spaces by finding the largest abstractions of two protocols for which there is no overlap. We present our analysis of protocols from the spore protocol repository [16] and show they provide insights to improve our analyses. We present our work in the context of an adaptation of cppl, the Cryptographic Protocol Programming Language [8]. We have built an actual tool and applied it to concrete representations of protocols. All of our work is formalized using the Coq proof assistant [19], and we make our formalization freely available.1 Coq provides numerous advantages over paper-and-pencil formalizations. First, we use Coq to mechanically check our proofs, thereby bestowing much greater confidence on our formalization and on the correctness of our theorems. Second, because all proofs in Coq are constructive, our tool is actually a certified implementation that is extracted automatically in a standard way from our formalization, thereby giving us confidence in the tool also. Finally, being a mechanized representation means others can much more easily adapt this work to related projects and obtain high confidence in the results. Outline. In Sec. 2, we explain the technical background of our theory. Next, in Sec. 3, we develop the decision procedure for message space overlap. In Sec. 4, we show how message space overlap provides a sufficient foundation for a dispatching algorithm. This algorithm is inefficient, so we present an analysis in Sec. 5 that optimizes it. Finally, we discuss related work and conclusions.
2
Introduction to CPPL
cppl [8] is a language for expressing cryptographic protocols with trust annotations. cppl allows the programmer to control protocol actions with trust
Sources are available at: http://faculty.cs.byu.edu/~jay/tmp/dispatch09/
a knows a:name, b:name, kab:symkey; learns kabn:symkey
b knows a:name, b:name, kab:symkey; learns kabn:symkey

1. a -> b : a, {|na:nonce|} kab
2. b -> a : {|na, nb:nonce|} kab
3. a -> b : {|nb|} kab
4. b -> a : {|kabn:symkey, nbn:nonce|} kab

Fig. 1. Andrew Secure RPC Protocol
1   proc b (a:name b:name kab:symkey) _
2     let chana = accept in
3     recv chana (a, {| na:nonce |} kab) -> _ then
4     let nb = new nonce in
5     send _ -> chana {| na, nb |} kab then
6     recv chana {| nb |} kab -> _ then
7     let nbn = new nonce in
8     let kabn = new symkey in
9     send _ -> chana {| kabn, nbn |} kab then
10    return _ (kabn)

Fig. 2. Andrew Secure RPC Role B in cppl
constraints so that an action such as transmitting a message will occur only when the indicated constraint is satisfied. The cppl semantics identifies a set of strands [18], annotated with trust formulas and the values assumed to be unique, as the meaning of a role in a protocol. We will explain the relevant aspects of cppl by using the Andrew Secure RPC protocol (Figure 1) and the encoding of its b role in cppl (Figure 2) as our example. Message Syntax. The various kinds of messages that may be sent and received are paramount to our investigation. We give their syntax in Figure 3. Messages (m) may be constructed by concatenation (,), hashing (hash(m)), variable binding and pattern matching (< v = m >), asymmetric signing ([m]v), symmetric signing ([|m|]v), asymmetric encryption ({m}v), and symmetric encryption ({|m|}v). In these last four cases, v is said to be in the key-position. For example, in the Andrew role kab is in key-position on line 3. Concatenation is right associative. Parentheses control precedence. Well-formedness. As a cppl program executes, it builds an environment of locally known values associated with identifiers. This environment is consulted to determine the values of pattern identifiers in message syntax and is extended during matching when those identifiers are free. Not all syntactically valid messages are well-formed in a cppl program, because they may refer to free identifiers in positions that cannot be free. These patterns are not well-formed.
m := nil | v | k | (m, m′) | hash(m) | < v = m > | [m]v | [|m|]v | {m}v | {|m|}v
v := x:t
t := text | name | msg | nonce | symkey | pubkey | channel

Fig. 3. cppl Message Syntax
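For the sketches that follow we use a rough Python rendering of this syntax; the constructor names, and the decision to collapse the four cryptographic forms into one constructor with a kind tag, are ours rather than cppl's.

from dataclasses import dataclass

class Msg:
    pass

@dataclass
class Nil(Msg):
    pass

@dataclass
class Var(Msg):
    name: str
    typ: str          # one of: text, name, msg, nonce, symkey, pubkey, channel

@dataclass
class Const(Msg):
    value: str

@dataclass
class Join(Msg):      # (m, m')
    left: Msg
    right: Msg

@dataclass
class Hash(Msg):      # hash(m)
    body: Msg

@dataclass
class Bind(Msg):      # < v = m >
    var: str
    body: Msg

@dataclass
class Enc(Msg):       # {m}v, {|m|}v, [m]v, [|m|]v, distinguished by kind
    body: Msg
    key: str
    kind: str         # "aenc" | "senc" | "asig" | "ssig"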
Intuitively, to send a message we must be able to construct it, and to do that, every identifier must be bound. Therefore, a pattern m is well-formed for sending in an environment σ (written σ s m herein) if all identifiers that appear in it are bound. For example, the message on line 5 of the Andrew role is well-formed, but if we removed line 4, it would not be because nb would not be bound. Surprisingly, the well-formedness condition is different for message patterns used for receiving rather than sending: only some identifiers must be bound due to meaning of the cryptographic primitives. However, a similar intuition holds for using a message pattern to receive messages. When a message matches a pattern, the identifiers that confirm its shape— those that are used as keys or under a hash—must be known to the principal. Thus, a pattern m is well-formed for receiving in σ (written σ r m) if all identifiers that appear in key-positions or hashes are bound. For example, the pattern on line 3 of the Andrew role is well-formed because kab is bound, but if it were not, then the pattern would not be well-formed. A cppl program (p) is well-formed ( p), when each message is well-formed in the appropriate context and runtime environment. Semantics and Adversary. The semantics of a cppl program is given by a set of strands where each strand describes one possible local run. A strand, s, is simply a list of messages that are sent (+m) or received (−m): s := .
|   +m → s   |   −m → s
The adversary in the strand semantics is essentially the Dolev-Yao adversary [5]. Since a strand merely specifies what messages are sent and received rather than how they are constructed, where they are sent, or from whence they come, the adversary has maximal power to manipulate the protocol by modifying, redirecting, and generating messages ex nihilo. This ensures that proofs built on the semantics are secure in the face of a powerful adversary. The basic abilities of adversary behavior that make up the Dolev-Yao model include: transmitting a known value such as a name, a key, or whole message; transmitting an encrypted message after learning its plain text and key; and transmitting a plain text after learning a ciphertext and its decryption key. The adversary can also manipulate the plain-text structure of messages: concatenating and separating message components, adding and removing constants, etc. Since an adversary that encrypts or decrypts must learn the key from the
network, any keys used by the adversary—compromised keys—have always been transmitted by some participant. A useful concept when discussing the adversary is a uniquely originating value. This is a value that enters the network at a unique location. Locally produced nonces2 are uniquely originating values. By definition, the adversary cannot know these values until they have been sent in an unprotected context.
3
Analysis
In this section we present our analysis that determines when there is a message that could be accepted by two sessions of two protocols. Two sessions of one protocol can be analyzed by comparing a protocol with itself. The strand space model of protocols is aptly suited for this problem. From the strand, we can read off each message pattern the protocol accepts. For example, the strand +m1 → −m2 → −m3 → . accepts messages with patterns m2 and m3. We denote this set of message patterns as M(s) for strand s. Each message pattern m describes an infinite set of messages (one for each instantiation of the variables in m) that would be accepted at that point of the protocol. If we could compare the sets of two patterns, then we could easily lift this analysis to two protocols s and s′ by checking each pattern in M(s) against each pattern in M(s′). The essence of our problem is therefore determining when a message pattern m "overlaps" with another message pattern m′, i.e., when there is an actual message M that could be matched by both m and m′. We call this analysis match.
3.1 Defining match
We have many options when defining match. We could assume that the structure of message patterns are ambiguous. That is, we could assume that (m1 , m2 ) could possibly overlap with hash(m3 ) or {m4 }k . We will not do this. We assume that messages are encoded unambiguously. Concrete protocol implementations that do not conform to this assumption may have type-flaw attacks [2,11,15]. This initial consideration shrinks the design space of match: message patterns must have identical structure for them to possibly overlap. There are two important caveats: variables with type msg and bind patterns (< v = m >). In the first, we treat such variables as “wildcards” because they will accept any message when used in a pattern. In the second, we ignore the variable binding and use the sub-pattern m in the comparison. With this structural means of determining when two message patterns potentially overlap, all that remains is to specify when to consider two variables as potentially overlapping. The simplest strategy is to assume that if the types of two variables are the same, then it is possible that each could refer to the same value. We call this strategy type-based and write it matchτ . 2
Numbers used once.
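The following sketch shows one way to read matchτ over the Python rendering of the message syntax given after Figure 3. It is an illustration of ours, not the tool's code, and it simplifies the treatment of variables against leaf values.

def match_tau(m1, m2):
    # bind patterns are compared through their sub-pattern
    if isinstance(m1, Bind): return match_tau(m1.body, m2)
    if isinstance(m2, Bind): return match_tau(m1, m2.body)
    # variables of type msg are wildcards: they accept any message
    for a in (m1, m2):
        if isinstance(a, Var) and a.typ == "msg":
            return True
    if isinstance(m1, Var) or isinstance(m2, Var):
        if isinstance(m1, Var) and isinstance(m2, Var):
            return m1.typ == m2.typ          # same type => possible overlap
        # simplification: a typed variable may overlap a leaf value but not a compound
        return not isinstance(m1, (Join, Hash, Enc)) and not isinstance(m2, (Join, Hash, Enc))
    # messages are assumed to be encoded unambiguously, so shapes must agree
    if type(m1) is not type(m2):
        return False
    if isinstance(m1, Nil):   return True
    if isinstance(m1, Const): return m1.value == m2.value
    if isinstance(m1, Join):  return match_tau(m1.left, m2.left) and match_tau(m1.right, m2.right)
    if isinstance(m1, Hash):  return match_tau(m1.body, m2.body)
    if isinstance(m1, Enc):   return m1.kind == m2.kind and match_tau(m1.body, m2.body)
    return False

Key identifiers are ignored in this simplification; only the shape and the declared types of variables drive the answer.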
Correctness. matchτ is correct if it soundly approximates message space overlap, i.e., if ¬ matchτ m m′ then there is no overlap between the possible messages accepted by pattern m and pattern m′. This implies that matchτ m m′ should not be read as "every message accepted by m is accepted by m′" (or vice versa), because there are some environments (and therefore protocol sessions) where there can be no overlap between messages. For example, the pattern x does not overlap with y if x is bound to 2 and y is bound to 3. But there is at least one environment pair that contains at least one message that is accepted by both: when x and y are bound to 2 and the message is 2. Evaluation. The theorem prover can tell us if matchτ is correct, but it cannot tell us if the analysis is useful. We address the utility of the analysis by running it on a large number of protocol role pairs. We have encoded 121 protocol roles from 43 protocol definitions found in the Security Protocols Open Repository (spore) [16] in cppl. For each role, our analysis generates every possible strand interpretation of the role, then compares each message pattern with those of another role. Analyzing all possible pairs only takes a few seconds and we find that when using matchτ, 15.7% of protocol role pairs are non-overlapping (i.e., for 84.3% of the pairs there is a message that is accepted by both roles in a run.) This is an extravagantly high number. If we actually look at the source of many protocols in cppl, we learn why there are such poor results with matchτ. Many protocols have the following form:

1   recv chan (m_1, m:msg) -> _ then
    ...
n   match m m_2 then
where m_1 and m_2 are particular patterns, such as (price, p) or {m_1}k. Consider how matchτ would compare this message with another: because it contains a wildcard message (with type msg), it is possible for any message to be accepted. This tells us that the specificity of the protocol role impacts the efficacy of our analysis. In the next section, we develop a transformation on protocol roles that increases their specificity. This greatly improves the performance of matchτ.
3.2 Message Specificity
Suppose we have a protocol with the following protocol role:

1   recv ch (m1, a) -> _ then
2   let nc = new nonce in
3   match m1 {|b, k'|} k -> _
If this role were slightly different, then we could execute it with more partners:

1'  recv ch (<m1={|b, k'|} k>, a) -> _
2   then let nc = new nonce in
3   match m1 {|b, k'|} k -> _
In this modified protocol, the wildcard message m1 on line 1 is replaced by a more specific pattern on line 1 . We say that message pattern m1 is more specific than message pattern m2 if for all messages m, matchτ m1 m implies matchτ m2 m (i.e., every message that is accepted by m1 is accepted by m2 .) Our transformation, called foldm, increases the specificity of message patterns. It works as follows: for each message reception point where message m is received, foldm records the environment before reception as σm , inspects the rest of the role for pattern points where identifier i is compared with pattern p such that σm r p, and replaces each occurrence of i in m with < i = p >, thereby increasing the specificity of m. We prove the following theorems about this transformation: Theorem 1. If p then foldm p. Theorem 2. Every pattern in p has a more specific pattern in foldm p. Preservation. We must also ensure that this transformation preserves the semantics of the protocol meaningfully. However, since we are clearly changing the set of messages accepted by the protocol (requiring them to be more specific), the transformed protocol does not have the same meaning. The fundamental issue is whether the protocol meaning is different. Recall that the meaning of a protocol is a set of strands that represent potential runs. This is smaller after the transformation. However, if we consider only the runs that end in success—those runs in which a message matching pattern p is provided when expected—then there is no difference in protocol behavior. Why? Consider the example from above. Suppose that a message M matching the pattern (m1, a) is provided at step 1 in the original protocol and that the rest of protocol executes successfully. Then m1 must match the pattern {|b, k’|} k, and, the message M must match the pattern (<m1={|b, k’|} k>, a). Therefore, if the same message was sent to the transformed protocol, the protocol would execute successfully. This holds in every case because the transformation always results in more specific patterns that have exactly this property. What happens to runs that fail in the original protocol? They continue to fail in the transformed protocol, but may fail differently. Suppose that a message M is delivered to the example protocol at step 1 and the protocol fails. It either fails at step 1 or step 3. If it fails at step 1, then it does not match the pattern (m1,a) or the pattern (<m1={|b, k’|} k>, a). Therefore it fails at step 1 in the transformed protocol as well. If it fails at step 3, then the left component of the message M does not match the pattern {|b,k’|} k, and, the transformed protocol will fail at step 1 for the very same reason. In general, then, the transformed protocol’s behavior is identical modulo failure. If the same sequence of external messages is delivered to a transformed role, then it will either (a) succeed like the untransformed counterpart or (b) fail earlier because some failing pattern matching was moved earlier in the protocol. Semantically, this means that the set of strand bundles that a protocol can be a part of is smaller. This could be thought of as a fault preserving transformation in the style of Lowe [12].
The transformed protocol must actually be used in deployment for the analysis to be sound. If not, a message may be delivered to the wrong recipient. Worse, this mis-delivery will only be apparent later when the principal attempts a deeper pattern match. Since the more specific pattern was not matched initially, this deep match will fail and signal an error. Adversary. This transformation either decreases the amount of harm the adversary can do or does not change it. Since the only difference in behavior is that faulty messages are noticed sooner, whatever action the principal would have taken before performing the lifted pattern matching is not done. Therefore, the principal does less before failing, and therefore the "hooks" for the adversary are decreased. Of course, for any particular protocol, these hooks may or may not be useful, but in general there are fewer hooks. Evaluation. When we apply foldm to our test suite of 121 protocol roles and then run the matchτ analysis, we find within seconds that the percentage of non-overlapping role pairs increases from 15.7% to 61%. This means that for 61% of protocol role pairs from our repository, it is always possible to unambiguously deliver a message to a single protocol handler. However, when we look just at the special case of comparing a role with itself (i.e., determining if it is possible to dispatch to sessions correctly) we find that none of the roles have this property according to matchτ. This is an unsurprising result. Every message pattern p is exactly the same as itself. Therefore, matchτ will resolve that p has the same shape as p and could potentially accept the same messages. The problem is that matchτ looks only at the two patterns. It does not consider the context in which they appear: a cryptographic protocol that may make special assumptions about the values bound to certain variables. In particular, some values are assumed to be unique. For example, in many protocols, nonces are generated randomly and used to prevent replay attacks and conduct authentication tests [7]. In the next section, we incorporate uniqueness into our analysis.
3.3 Relying on Uniqueness
In the Andrew Secure RPC role (Fig. 2), the message received on line 6 must match the pattern {|nb|}kab , where nb is a nonce that was freshly generated on line 4. This means that no two sessions of this role could accept the same message at line 6, because each is waiting for a different value for nb. We call the version of our analysis that incorporates information about uniqueness matchδ . Whenever the analysis compares a variable u from protocol α and a variable v from protocol β, if u is in the set of unique values generated by α or v is in the set of unique values generated by β, then the two are assumed not to match, regardless of anything else about the variables. In all other cases, two variables are assumed to be potentially overlapping. In particular, the types are ignored, unlike matchτ .
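A sketch of matchδ in the same style as before (ours; uniq1 and uniq2 are assumed to be the sets of names of uniquely generated values of the two protocols, which is our rendering of the interface rather than the tool's):

def match_delta(m1, m2, uniq1, uniq2):
    if isinstance(m1, Bind): return match_delta(m1.body, m2, uniq1, uniq2)
    if isinstance(m2, Bind): return match_delta(m1, m2.body, uniq1, uniq2)
    # a uniquely originating value on either side rules out a collision
    if isinstance(m1, Var) and m1.name in uniq1: return False
    if isinstance(m2, Var) and m2.name in uniq2: return False
    if isinstance(m1, Var) or isinstance(m2, Var):
        return True                                  # types ignored, unlike match_tau
    if type(m1) is not type(m2): return False
    if isinstance(m1, Nil):   return True
    if isinstance(m1, Const): return m1.value == m2.value
    if isinstance(m1, Join):  return (match_delta(m1.left, m2.left, uniq1, uniq2)
                                      and match_delta(m1.right, m2.right, uniq1, uniq2))
    if isinstance(m1, Hash):  return match_delta(m1.body, m2.body, uniq1, uniq2)
    if isinstance(m1, Enc):   return m1.kind == m2.kind and match_delta(m1.body, m2.body, uniq1, uniq2)
    return False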
Table 1. (a) Non-overlapping protocol role pairs; (b) non-overlapping protocol role sessions
Evaluation. When we apply matchδ to our test suite, we find that the percentage of non-overlapping sessions is 0.8%. After applying the foldm transformation, this increases to 14.8%. There is no degradation to the performance of the analysis either: the entire test suite results are available almost instantaneously. If we look at the other 85.2% of the protocols, is there anything more that can be incorporated into the analysis? There is. The first action of many protocol roles is to receive a particular initiation message. Since this is the first thing the role does, it cannot possibly contain a unique value generated by the role. Therefore, the matchδ analysis will not be able to find a unique value that distinguishes the session that the message is meant for. In the next section, we will discuss how to get around this difficulty.
3.4 Handling Initial Messages
The first thing the Andrew Secure RPC role (Fig. 2) does (shown on line 3) is receive a certain message: (a, {|na|}kab). Since this message does not contain any value uniquely generated for the active session role, it seems that the initial messages of two sessions can be confused. A little reflection reveals that initial messages create sessions, thus they may not be confused across sessions. Therefore, we can ignore the first message of a protocol role, if it is not preceded by any other action, for the purposes of determining the dispatchability of a protocol role's sessions. We must, of course, compare the initial message with all other messages to ensure that the initial message cannot be confused with them, but we do not need to compare the initial message with itself. When we use this insight with the matchδ analysis, we write it as matchι(δ).

Evaluation. Table 1a presents the results when analyzing each pair of protocol roles. Interestingly, unique values are not very useful when comparing roles, although they do increase the coverage slightly. We have inspected the protocols not handled by matchτ+δ to determine why the protocol pairs may potentially accept the same message.

1. Protocols with similar goals and similar techniques for achieving those goals typically have the same initial message. Examples include the Neumann Stubblebine, Kao Chow, and Yahalom protocol families.
2. Different versions of the same protocol will often have very similar messages, typically in the initial message, though not always. Often these protocols are modified by making tiny changes so that the other messages remain identical. A good example is the Yahalom family of protocols.
3. Some protocols have messages that cannot be refined by foldm because the key necessary to decrypt certain message components must be received from another message or from a computation. This leaves a message component that will match any other message, so such protocols cannot be paired with a large number of other protocols. One example is the S role of Yahalom.
4. For many protocols, there is dependence among the pattern-matching in the continuation of message reception. (One example is the P role of the Woo Lam Mutual protocol.) As a result, only the independent pattern is substituted into the original message reception pattern. This leaves a variable in the pattern that matches all messages.

Table 1b presents the results when analyzing the sessions of each protocol role. It may seem odd that the matchι(τ) analysis is able to verify any sessions, given our argument against matchτ. Why should removing the initial message make any difference? In 31.4% of the protocols, the protocol receives only a single, initial message. We have also inspected the protocols that the most permissive session-based analysis rules out.

1. Some messages simply do not contain a unique value. A prominent example is the A role of many variants of the Andrew Secure RPC protocol.
2. Some roles have the same problems listed above as (3) and (4), except that in these instances the lack of further refinement hides a unique value. One example is the C role of the Splice/AS protocol.

Performance. Computing these tables takes about two minutes.
4
Dispatching
Our analysis determines when there is no message that could be confused during any run of two protocols. We can use this property to build a dispatching algorithm. The algorithm is very simple: forward every incoming message to every protocol handler. (For sessions, we must recognize the initial message and create a new session; otherwise, forward the message to each session.) This algorithm is correct because every message that is accepted by some protocol (session) is only accepted by one protocol (session), according to the overlap property. This (absurd) algorithm makes no attempt to determine which protocol an incoming message is actually intended for. This is clearly inefficient. Yet, it shows that distinct message spaces are sufficient for dispatching. In a network load-balancing setting, where “forwarding a message” actually corresponds to using network bandwidth, this algorithm betrays the intent of load-balancing. On a single machine, where “forwarding a message” corresponds to invoking a handling routine, there are two major costs: (1) a linear search through the various protocol/session handlers; and, (2) the cpu cost associated with each of these handlers. In some scenarios, cost 2 is negligible because most
Nil:            nil ↓σ = nil
Var:            v ↓σ = v
Const:          k ↓σ = k
Join:           (m, m′) ↓σ = (m ↓σ, m′ ↓σ)
Hash:           hash(m) ↓σ = hash(m)      if σ ⊢s hash(m)
Hash (Wild):    hash(m) ↓σ = ∗            otherwise
SymEnc:         {|m|}k ↓σ = {|m ↓σ|}k     if k ∈ σ
SymEnc (Wild):  {|m|}k ↓σ = ∗             if k ∉ σ
. . . (analogous rules for the remaining encryption and signing forms)
Bind:           < v = m > ↓σ = < v = m ↓σ >

Fig. 4. Message Redaction
network servers are not cpu-bound. However, since we are dealing with cryptographic protocols, the cost of performing decryption only to find an incorrect nonce, etc., is likely to be prohibitive. A better algorithm would use a mapping from input patterns to underlying sessions and efficiently compare new messages with patterns in the mapping prior to delivery. The main problem with this mapping algorithm is that it requires trust in the dispatcher: the dispatcher must look inside encrypted messages to determine which protocol (session) they belong to. In the next section we discuss how to (a) minimize and (b) characterize the amount of trust that must be given to a dispatcher of this sort to perform correct dispatching.
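Before turning to that optimization, the following sketch (ours; the protocol and session interfaces are assumptions, not part of cppl) records the dispatcher structure implied by the discussion above: initial messages create sessions, and every other message is offered to each live session, which is correct exactly when the message spaces do not overlap.

class Dispatcher:
    def __init__(self, protocols):
        self.protocols = protocols      # assumed: objects with .initial_pattern and .new_session(msg)
        self.sessions = []              # assumed: objects with .accepts(msg) and .deliver(msg)

    def dispatch(self, msg):
        # an initial message creates a fresh session for its protocol
        for proto in self.protocols:
            if proto.initial_pattern.matches(msg):
                self.sessions.append(proto.new_session(msg))
                return
        # otherwise, offer the message to every session; with no overlap,
        # at most one session will accept it
        for session in self.sessions:
            if session.accepts(msg):
                session.deliver(msg)
                return

The optimized dispatcher replaces the linear scan with a map from (partially decrypted) message patterns to sessions, which is what requires the trust analysed next.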
5
Optimization
Our task in this section is to determine how much trust, in the form of secret data (e.g., keys), must be given to a dispatcher to inspect incoming messages to the point that they can be distinguished. First, we will formalize how deeply a dispatcher can inspect any particular message with a certain amount of information. Second, we will describe the process that determines the optimal trust for any pair of protocols (or any pair of sessions of one protocol.) Finally, we formalize the security repercussions of this trust. The end result of this section is a metric of how efficient dispatching can be for a protocol; all protocols should aspire to require no trust in the dispatcher. Message Redaction. Suppose that a message is described by the pattern (a, {|b|}k ). If the inspector of this message does not know key k, then in general3 this message is not distinguishable from (a, ∗). We call this the redaction of pattern (a, {|b|}k ) under an environment that does not contain k. We write m ↓σ to denote the redaction of message m under σ. This is defined in Figure 4.
There are kinds of encryption that allow parties without knowledge of a key to know that some message is encrypted by that key but still not know the contents of the message.
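A sketch of the redaction function of Figure 4 over the earlier Python message rendering (ours; σ is modelled as a set of identifier names, and the σ ⊢s side condition is approximated by requiring every identifier under a hash to be in the set):

WILD = Const("*")        # plays the role of the * placeholder

def idents(m):
    # all identifiers occurring in m
    if isinstance(m, Var):  return {m.name}
    if isinstance(m, Bind): return {m.var} | idents(m.body)
    if isinstance(m, Join): return idents(m.left) | idents(m.right)
    if isinstance(m, Hash): return idents(m.body)
    if isinstance(m, Enc):  return {m.key} | idents(m.body)
    return set()

def redact(m, sigma):
    # m "down-arrow" sigma: what a dispatcher trusted with sigma can still see
    if isinstance(m, (Nil, Var, Const)): return m
    if isinstance(m, Join): return Join(redact(m.left, sigma), redact(m.right, sigma))
    if isinstance(m, Bind): return Bind(m.var, redact(m.body, sigma))
    if isinstance(m, Hash): return m if idents(m) <= sigma else WILD
    if isinstance(m, Enc):  return Enc(redact(m.body, sigma), m.key, m.kind) if m.key in sigma else WILD
    return m

Running the overlap analysis on redact(m, sigma) and redact(m', sigma) then answers whether the set sigma is enough trust for the dispatcher to keep the two patterns apart.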
Fig. 5. Trust Optimization Graphs: (a) percentage of protocol role pairs and (b) percentage of protocol roles, plotted against the percentage of total trust necessary
Theorem 3. An environment σ can interpret m ↓σ: for all σ and m, σ ⊢r m ↓σ. Theorem 4. Every message that is matched by m is matched by m ↓σ. (Sec. 3.2) Theorem 5. σ ⊢r m implies m ↓σ = m. These theorems establish that m ↓σ captures the view that a dispatcher, that is trusted with σ only, has of a message m. The next task is to minimize σ while ensuring that match can rule out potential message confusion. Minimizing σ. Suppose we compare m = ({|b|}k, {|c|}j) with m′ = ({|b′|}k′, {|c′|}j′), where b and b′ are unique values of their respective protocols, with matchτ+δ. Because b and b′ are unique, the analysis, and therefore the dispatcher, needs to look at b and b′ only to ensure that these message patterns cannot describe the same messages. This means that even though the patterns mention the keys k and j (k′ and j′), only k (k′) is necessary to distinguish the messages. Another way of putting this is that m ↓{k} = ({|b|}k, ∗) does not overlap with m′ ↓{k′} = ({|b′|}k′, ∗), according to matchτ+δ. We prove that if m and m′ cannot be confused according to match, then there is a smallest set σ, such that m ↓σ also cannot be confused with m′ ↓σ according to match. We prove this by showing that for all m, there is a set Vm, such that for all σ, m ↓(Vm ∪ σ) = m ↓Vm. In other words, there is a "strongest" set for ↓ that cannot be improved. This set is the set σ such that σ ⊢r m. Our brute-force search construction algorithm then considers each subset of Vm (V′m) and selects the smallest subset such that the two messages are distinct after ↓. We have run this optimization on our test-suite of 121 protocol roles; it takes about one minute total to complete. Figure 5a breaks down protocol pairs according to the percentage of their keys required to establish trust. This graph shows that 43% of protocol pairs do not require any trust to properly dispatch. The other end of the graph shows that only 18% of all protocol pairs require complete trust in the dispatcher. Figure 5b shows the same statistics for protocol sessions. In this situation, 54% of the protocol roles do not require any trust for
the dispatcher to distinguish sessions, while 37% require complete trust. These results were calculated in 7.6 minutes and 1.6 seconds respectively. These experiments indicate that it is very fruitful to pursue optimizing the amount of trust given to a dispatcher. However, we have not yet characterized the security considerations of this trust. We do so in the next section. Managing Trust. In previous sections, we have discussed how much trust to give to a load-balancer so it can dispatch messages correctly. In this section, we provide a mechanism for determining the security impact of that trust. Recall that a strand is a list of messages to send and receive. We have formalized "trust" as a set of keys (and other data) to be shared with a load-balancer. We define a strand transformation ↑k that transforms a strand s such that it shares k by sending a particular message containing k as soon as possible. (It is trivial to lift ↑k to share multiple values.) We define s ↑k as:

(sd → s) ↑k = sd → +(LB, k) → s    if k ∈ bound(sd)
(sd → s) ↑k = sd → (s ↑k)          if k ∉ bound(sd)
. ↑k = .

(This definition clearly preserves well-formedness and performs its task.) In this definition the tag LB indicates that this value is shared with the load-balancer by some means. Depending on the constraints of the environment, this means can be assumed to be perfectly secure or have some specific implementation (e.g., by using a long-term shared key or public-key encryption.) Since s ↑k is a strand, it can be analyzed using existing tools and techniques [4,6,10,17] to determine the impact of an adversary load-balancer.
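A sketch of this transformation over a simple list representation of strands (ours; each node is a (sign, message, bound identifiers) triple, which is an assumption about representation rather than cppl's data structure):

def share_with_lb(strand, k):
    # insert a send of k to the load-balancer as soon as k becomes bound
    result = []
    for i, (sign, msg, bound) in enumerate(strand):
        result.append((sign, msg, bound))
        if k in bound:
            result.append(("+", ("LB", k), bound))   # the extra message tagged LB
            result.extend(strand[i + 1:])            # rest of the strand is unchanged
            return result
    return result                                    # k never becomes bound: strand unchanged

Because the output is again a strand, it can be handed directly to the existing strand-space analysis tools mentioned above.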
6
Insights
The development of the message space overlap analysis and the trust optimization give us insight into why and how message spaces do not overlap. The effectiveness of matchτ for pairs of protocols demonstrates that it is primarily shape that prevents overlap between different protocols. This corresponds with our intuitions, because protocols typically use dissimilar formats. The disparity between matchτ and matchδ demonstrates that for pairs of protocol sessions, it is uniquely originating values that prevent overlap. This matches our intuition, because nonces are consciously designed to prevent replay attacks and ensure freshness, which correspond to the goal of identifying sessions. The statistical differences between these two analyses in different settings allow us to make these conclusions in a coarse way. But the trust optimization process answers the real question: “Why do two message spaces not overlap?” When the trust optimization redacts a message, it is removing the parts of the message that are not useful for distinguishing that protocol (session). This means that what remains is useful, and thus the fully redacted message is only what is necessary to ensure that there is no message space overlap. Thus, for any two protocols (sessions), trust optimization explains why there is no overlap.
7
Related Work
Previous Work. In prior work with Guttman and Ramsdell [14], we only addressed the question of when a protocol role supports the use of multiple sessions. In addition, that approach was significantly different from the one presented here. Though we presented a program transformation similar to foldm, we did not formalize the correctness of the transformation. Second, we used only the na¨ıve dispatching algorithm and did not investigate a more useful algorithm. Third, we did not consider pairs of protocols. Therefore, the current presentation is more rigorous, practical, and general. Our previous problem was only to inspect protocol role message patterns for the presence of distinguishing (unique) values. This is clearly incorrect in the case of protocol role pairs. Consider the role A, which accepts the message Ma , then (Na , ∗), and role B, which accepts the message Mb , then (∗, Nb ), where Nx is a local nonce for x. Each message pattern of each role contains a distinguishing value, so it passes the analysis. But it is not deployable with the other protocol because it is not possible to unambiguously deliver the message (Na , Nb ) after the messages Ma and Mb have been delivered. It is actually worse than this. We can encode these two protocols as one protocol: accept either Ma or Mb , then depending on the first message, accept (Na , ∗) or (∗, Nb ). Our earlier analysis would ignore the initial messages (which is problematic in itself if Ma and Mb overlap), then check all the patterns in each branch, and report success. This is clearly erroneous because it is possible to confuse an A session with a B session. This work avoids these problems by directly phrasing the problem in terms of deciding message overlap—the real property of interest rather than a proxy to it as distinguishing values were. It is useful to point out, however, that the earlier work was sound for protocol roles that did not contain branching, which is an very large segment of our test suite. Our use of Coq ensures that our analysis is correct for all protocols. Dispatching. The Guttman and Thayer [9] notion of protocol independence through disjoint encryption and a related work by Cortier et al. [3] study the conditions under which security properties of cryptographic protocols are preserved under composition with one or more other protocols. This is an important problem, since it ensures that it is safe to compose protocols. A fundamental result of the Guttman study shows that different protocols must not encrypt similar patterns by the same keys—a similar conclusion to some of our work. However, our work complements theirs by studying whether it is possible to compose protocols and, in particular, how efficient such a multiplexer can be. Ideally both of these problems must be addressed before deployment. Detecting type-flaw attacks [2,11,15] is a similar problem to ours. These attacks are based on the inability of a protocol message receiver to unambiguously determine the shape of a message. For example, a nonce may be sent where the receiver expects a key, a composite message may be given in place of a key, etc. These attacks are often effective when they force a regular participant into
using known values as if they were keys. Detecting when a particular attack is a type-flaw attack, or when components of a regular protocol execution may be used as such, is similar to our problem. These analyses try to determine when sent message components can be confused with what a regular participant expects. However, in these circumstances a peculiar notion of message matching captures the ambiguity in bit patterns. Some analyses use size-based matching where any message of n-bits can be accepted by a pattern expecting n-bits; for example, an n-bit nonce can be considered an n-bit key. Others assume that message structure is discernible but the leaf-types are not, so a nonce paired with a nonce cannot be interpreted as a single nonce, but it may be interpreted as a nonce paired with a key. Our analysis is similar in spirit but differs in the notion of message overlap: we assume that message shapes can be encoded reliably. Optimization. The problem of optimizing the amount of trust given to a dispatcher is very similar in spirit to ordering of pattern-matching clauses [13] and ordering rules in a firewall or router [1], which are both similar to the decision tree reduction problem. But, our domain is much simpler than the general domain of these problems and the constants are much smaller (|Vm| is rarely greater than 3 for most protocols), so we are not afflicted with many of the motivating concerns in those areas. Even so, these problems serve only as guidelines for the actual optimization process, not the formulation of the solution.
8 Conclusion
We have presented an analysis (match) that determines if there is an overlap in the message space of different protocols (or sessions of the same protocol). We have shown how important it is to look at real protocols in the development of this analysis (in our case, the spore repository [16]). By looking at real protocols, we learned that it was necessary to (1) refine protocol specifications (foldm), (2) incorporate cryptographic assumptions about unique values (matchδ), and (3) take special consideration of the initial messages of a protocol (matchι(δ)). We have shown how this analysis and the message space overlap property can be used to provide the correctness proof of a dispatching algorithm. We have discussed the performance implications of this algorithm and pointed toward the essential features of a better algorithm. We have developed a formalization (↓σ) of the "view" that a partially trusted dispatcher has of messages. We have presented an optimization routine that minimizes the amount of trust necessary for match to succeed on a protocol pair. We have presented the results of this analysis for the spore repository. We have also formalized the modifications (↑k) that must be made to a protocol in order to enable trust of a load-balancer. Lastly, we have discussed how this optimization explains why there is no overlap between two message spaces. The entire work was formally verified in the Coq proof assistant to increase confidence in our results.
Acknowledgments. This work is partially supported by the NSF (CCF-0447509, CNS-0627310, and a Graduate Research Fellowship), Cisco, and Google. We are grateful for the advice of Joshua Guttman and John Ramsdell.
References

1. Begel, A., McCanne, S., Graham, S.L.: BPF+: exploiting global data-flow optimization in a generalized packet filter architecture. In: Symposium on Communications, Architectures and Protocols (1999)
2. Bodei, C., Degano, P., Gao, H., Brodo, L.: Detecting and preventing type flaws: a control flow analysis with tags. Electronic Notes in Theoretical Computer Science 194(1), 3–22 (2007)
3. Cortier, V., Delaitre, J., Delaune, S.: Safely Composing Security Protocols. In: Conference on Foundations of Software Technology and Theoretical Computer Science (2007)
4. Doghmi, S.F., Guttman, J.D., Thayer, F.J.: Skeletons, homomorphisms, and shapes: Characterizing protocol executions. Electronic Notes in Theoretical Computer Science, vol. 173, pp. 85–102 (2007)
5. Dolev, D., Yao, A.: On the security of public-key protocols. IEEE Transactions on Information Theory 29, 198–208 (1983)
6. Fábrega, F.J.T., Herzog, J.C., Guttman, J.D.: Strand spaces: Why is a security protocol correct? In: IEEE Symposium on Security and Privacy (1998)
7. Guttman, J.D.: Authentication tests and disjoint encryption: a design method for security protocols. Journal of Computer Security 12(3/4), 409–433 (2004)
8. Guttman, J.D., Herzog, J.C., Ramsdell, J.D., Sniffen, B.T.: Programming cryptographic protocols. In: Trust in Global Computing (2005)
9. Guttman, J.D., Thayer, F.J.: Protocol independence through disjoint encryption. In: Computer Security Foundations Workshop (2000)
10. Guttman, J.D., Thayer, F.J.: Authentication tests and the structure of bundles. Theoretical Computer Science 283(2), 333–380 (2002)
11. Heather, J., Lowe, G., Schneider, S.: How to prevent type flaw attacks on security protocols. In: Computer Security Foundations Workshop (2000)
12. Hui, M.L., Lowe, G.: Fault-preserving simplifying transformations for security protocols. Journal of Computer Security 9(1-2), 3–46 (2001)
13. Lee, P., Leone, M.: Optimizing ML with run-time code generation. In: Programming Language Design and Implementation (1996)
14. McCarthy, J., Guttman, J.D., Ramsdell, J.D., Krishnamurthi, S.: Compiling cryptographic protocols for deployment on the Web. In: World Wide Web, pp. 687–696 (2007)
15. Meadows, C.: Identifying potential type confusion in authenticated messages. In: Computer Security Foundations Workshop (2002)
16. Project EVA: Security protocols open repository (2007), http://www.lsv.ens-cachan.fr/spore/
17. Song, D.X.: Athena: a new efficient automated checker for security protocol analysis. In: Computer Security Foundations Workshop (1999)
18. Thayer, F.J., Herzog, J.C., Guttman, J.D.: Strand spaces: Proving security protocols correct. Journal of Computer Security 7(2/3), 191–230 (1999)
19. The Coq development team: The Coq proof assistant reference manual, version 8.1 (2007)
Specifying and Modelling Secure Channels in Strand Spaces

Allaa Kamil and Gavin Lowe

Oxford University Computing Laboratory, Wolfson Building, Parks Road, Oxford, OX1 3QD, UK
{allaa.kamil,gavin.lowe}@comlab.ox.ac.uk
Abstract. We adapt the Strand Spaces model to reason abstractly about layered security protocols, where an Application Layer protocol is layered on top of a secure transport protocol. The model abstracts away from the implementation of the secure transport protocol and just captures the properties that it provides to the Application Layer. We illustrate the usefulness of the model by using it to verify a small single sign-on protocol.
1 Introduction
Many security architectures make use of layering of protocols: a special-purpose Application Layer protocol is layered on top of a general-purpose Secure Transport Layer protocol, such as SSL/TLS [Tho00]. The secure transport protocol provides a secure channel to the Application Layer, i.e., it provides a communication channel with some extra security services such as authentication and confidentiality. The Application Layer protocol builds on this to provide extra functionality and security guarantees.

As an example, one common use of such layered architectures is in Single Sign-On (SSO) protocols. In such protocols, a User seeks to access services provided by a Service Provider; the User is authenticated by a trusted Identity Provider. Typically, the User can open a unilateral TLS connection to the Service Provider, which authenticates the Service Provider but not the User. Further, the User can open a unilateral TLS connection to the Identity Provider, which authenticates the Identity Provider; the User then provides a password to authenticate herself. The SSO protocol builds upon these secure channels to allow the User to authenticate herself to the Service Provider. The SAML SSO Protocol [OAS05] is one such protocol. However, the use of secure channels is not enough to ensure the security of the application protocol. For example, Google adapted the SAML SSO for use with Google Apps [Goo08]. Unfortunately, this adaptation introduced a flaw, reported in [ACC+08].

The aim of our research programme is to investigate how to analyse such layered protocols. In this paper, we extend the Strand Spaces model [THG98] in order to specify and model layered protocols.
One way to analyse such layered protocols would be to explicitly model both layers; this is the approach taken in [HSN05]. We take the view that it is better to abstract away from the implementation of the Secure Transport Layer and simply to model the services it provides to the Application Layer. This greatly simplifies the analysis of the architecture. Further, such an analysis produces more general results: it allows us to deduce the security of the Application Layer protocol when layered on top of an arbitrary secure transport protocol that provides (at least) the assumed services. Of course, this approach introduces a proof obligation that the secure transport protocol does indeed provide the services assumed of it. However, such a proof only needs to be done once per transport protocol. An example such proof, for bilateral TLS, appears in [Kam09, KL09]. Such proofs tend to assume that the two layers are independent, so that no message can be replayed from one layer to the other.

Different secure transport protocols will allow or prevent different actions by a dishonest penetrator. Any reasonable transport protocol will allow the penetrator to take part in sessions, to send messages using his own identity, and to receive messages intended for him. Some transport protocols will keep Application Layer messages confidential, such as the transport protocol that encodes Application Layer message m from A to B as TL1_{A,B}(m) = A, B, {m}PK(B) (where PK(B) is B's public key). But others will allow the penetrator to learn them, such as the transport protocol that encodes m from A to B as TL2_{A,B}(m) = A, B, {m}SK(A) [CGRZ03] (where SK(A) is A's secret key). Some transport protocols will allow the penetrator to fake messages, causing a regular (i.e. honest) agent to receive an arbitrary Application Layer message known to the penetrator, apparently from some third party; this is the case with encoding TL1 but not TL2. Finally, some transport protocols will allow the penetrator to hijack messages, changing either the intended recipient or the apparent sender of the message; for example, with encoding TL2, the penetrator may transform the Transport Layer message into A, C, {m}SK(A) and redirect it to C; alternatively, with encoding TL1, the penetrator may transform the Transport Layer message into C, B, {m}PK(B) and re-ascribe it to C.

Our approach is to build an abstract model that describes each of these potential penetrator actions —sending, receiving, learning, faking and hijacking— as a high-level penetrator strand. Of course, the penetrator may also build Application Layer messages himself, pick them apart, or otherwise transform them; we capture these abilities using (slightly adapted versions of) standard penetrator strands. We assume that different transport protocols deployed in the same system are independent, in the sense that the penetrator cannot directly transform a message sent over one transport protocol into a message over another protocol, other than by performing a receive or learn followed by a send or fake.
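To make the contrast between the two example encodings concrete, the following Haskell sketch models them as functions over a symbolic message algebra. This is only an illustration of the discussion above, not part of the authors' development; the type and function names are ours.

```haskell
-- Illustrative sketch: a symbolic message algebra and the two example
-- transport encodings TL1 and TL2 discussed in the text.
data Msg
  = Name String          -- agent names A, B, ...
  | Pair Msg Msg         -- concatenation
  | AEnc String Msg      -- encryption under an agent's public key PK(X)
  | Sign String Msg      -- signature under an agent's secret key SK(X)
  deriving (Eq, Show)

-- TL1_{A,B}(m) = A, B, {m}PK(B): payload hidden, but anyone can build it.
tl1 :: String -> String -> Msg -> Msg
tl1 a b m = Pair (Name a) (Pair (Name b) (AEnc b m))

-- TL2_{A,B}(m) = A, B, {m}SK(A): only A can build it, but the payload is readable.
tl2 :: String -> String -> Msg -> Msg
tl2 a b m = Pair (Name a) (Pair (Name b) (Sign a m))

-- A purely eavesdropping penetrator can read the payload of TL2 but not of TL1.
payloadVisible :: Msg -> Bool
payloadVisible (Pair _ (Pair _ (Sign _ _))) = True
payloadVisible _                            = False
```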
In the next section we present the foundations of the model, describing the way we abstractly represent Transport Layer messages and the penetrator's possible actions in high-level bundles. In Section 3 we describe how to specify the properties of secure channels by disallowing appropriate high-level penetrator strands. In Section 4 we prove a normal form lemma that subsequently allows us to restrict our attention to bundles in a particular form. We illustrate our model in Section 5 by using it to analyse a small single sign-on protocol. We sum up and discuss forthcoming work in Section 6.

Related work. The work closest to the current paper is [DL08, Dil08]. That work uses a CSP-style formalism [Ros98], with a view towards analysing protocols using model checking. That work, like this, defines potential capabilities of the penetrator, and then specifies secure channels by limiting those capabilities. We see the two approaches as complementary: model checking is good for finding attacks; the Strand Spaces approach is good for building the theoretical foundations, and producing proofs of protocols that reveal why the protocol is correct. Armando et al. [ACC07] use LTL to specify security properties of channels, and then use SATMC, a model checker for security protocols, to analyse a fair exchange protocol. In [ACC+08] they analyse SAML SSO and the Google Apps variant using the same techniques. Bella et al. [BLP03] adapt the inductive approach to model authenticated and confidential channels, and use these ideas to verify a certified e-mail delivery protocol. Bugliesi and Focardi [BF08] model secure channels within a variant of the asynchronous pi-calculus. Each of these works captures the properties of secure channels by limiting the penetrator's abilities regarding messages on such channels, although each considers fewer variants of authenticated channels than the current paper.

As noted above, our work is mainly targeted at layered protocol architectures. However, it can also be used to model empirical channels, where some messages of a protocol are implemented by human mediation, and so satisfy extra security properties. Creese et al. [CGRZ03] capture such empirical channels, within the context of CSP model checking, again by restricting the penetrator's abilities.
2 The Abstract Model: High-Level Bundles

In this section we present high-level bundles, which abstractly model secure transport protocols. We present high-level terms, which capture Transport Layer messages. We then adapt the notion of a strand space [THG98] to such high-level terms. We then describe how we model the penetrator's ability to manipulate both Transport Layer and Application Layer messages. Finally, we define high-level bundles.
2.1 High-Level Terms and Nodes
Let A be the set of possible messages that can be exchanged between principals in a protocol. The elements of A are usually referred to as terms. As in the
original Strand Spaces model [THG98], A is freely generated from two disjoint sets, T (representing tags, texts, nonces, and principals) and K (representing keys) by means of concatenation and encryption.

Definition 1. Compound terms are built by two constructors:
– encr : K × A → A, representing encryption;
– join : A × A → A, representing concatenation.

Conventionally, {t}k is used to indicate that a term t is encrypted with a key k, and t0ˆt1 to denote the concatenation of t0 and t1. The set K of keys is equipped with a unary injective symmetric operator inv : K → K; inv(k) is usually denoted k⁻¹. Let Tname ⊆ T be the set of agent names, ranged over by X, Y; let Tpname ⊆ Tname, ranged over by P, be the set of names the penetrator uses when actively participating in a protocol as himself; we let A, B range over names of regular agents.

As in the original Strand Spaces model, a strand is a sequence of message transmissions and receptions. A node is the basic element of a strand. Each node n is associated with a message, or high-level term, denoted msg(n). A positive node is used to represent a transmission, while a negative node is used to denote reception. Each node communicates over a channel, which may provide some security services to the message; a channel that does not provide any security services is called the bottom channel, denoted ⊥. In our abstract model, messages are modelled as follows.

Definition 2. Every node n in strand st is associated with a high-level term of the form (X, Y, m, c) where:
– m ∈ A is the Application Layer message.
– X ∈ Tname: if n is positive and st is a regular strand, then X is the name of the regular agent who is running st. Otherwise, X refers to the agent that is claimed to have sent m.
– Y ∈ Tname: if n is negative and st is a regular strand, then Y is the name of the regular agent who is running st. Otherwise, Y refers to the agent that is intended to receive m.
– c is the identifier of the secure channel over which n communicates.

We write Â for the set of high-level terms. We may use an underscore (_) in the first or second position of the tuple to indicate that the term is not associated with a particular sender or receiver, respectively. If n is a regular node then we assume that its term must be associated with a specified sender and receiver.

The following definition is a straightforward adaptation from [THG98]. The relation n → n′ represents inter-strand communication, while n ⇒ n′ represents flow of control within a strand.

Definition 3. A directed term is a pair σ, a with σ ∈ {+, −} and a ∈ Â; we write it as +t or −t. (±Â)* is the set of finite sequences of directed terms. A typical element of (±Â)* is denoted by σ1, a1, . . . , σn, an. A strand space over Â is a set Σ with a trace mapping tr : Σ → (±Â)*. Fix a strand space Σ.
1. A node n is a pair (st, i), with st ∈ Σ and i an integer satisfying 1 ≤ i ≤ length(tr(st)). The set of nodes is denoted by N. We define msg(n) = tr(st)(i). We will say the node n belongs to the strand st.
2. There is an edge n1 → n2 if and only if msg(n1) = +a and msg(n2) = −a for some a ∈ Â. The edge means that node n1 sends the message a, which is received by n2, recording a potential causal link between those strands.
3. When n1 = (st, i) and n2 = (st, i + 1) are members of N, there is an edge n1 ⇒ n2. The edge expresses that n1 is an immediate causal predecessor of n2 on the strand st. n ⇒+ n′ is used to denote that n precedes n′ (not necessarily immediately) on the same strand.

We now define the notions of origination and unique origination in the context of a high-level strand space.

Definition 4. Let Σ be a high-level strand space.
1. Let I be a set of undirected terms. The node n ∈ Σ is an entry point for I iff n is positive and associated with a high-level term (A, B, m, c) for some m ∈ I, and whenever n′ ⇒+ n and n′ is associated with a high-level term (A′, B′, m′, c′), m′ ∉ I.
2. An undirected term t originates on a node n iff n is an entry point for the set of messages that contain t as a subterm.
3. An undirected term t is uniquely originating in a set of nodes S ⊂ N iff there is a unique n ∈ S such that t originates on n.
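The structure of high-level terms and directed terms can also be rendered as a small Haskell sketch. This is our own illustration, not part of the paper's formal development; the type and field names are assumptions made for readability.

```haskell
-- Illustrative sketch of high-level terms and directed terms (Sect. 2.1).
type Agent   = String            -- elements of Tname
type Channel = String            -- channel identifiers; the bottom channel is one of them

data AppMsg                       -- the underlying Application Layer message algebra A
  = Atom String
  | Join AppMsg AppMsg            -- concatenation t0^t1
  | Encr String AppMsg            -- encryption {t}_k under a named key
  deriving (Eq, Show)

-- A high-level term (X, Y, m, c): claimed sender, intended receiver,
-- Application Layer payload, and channel.  Nothing models the underscore "_".
data HighTerm = HighTerm
  { sender   :: Maybe Agent
  , receiver :: Maybe Agent
  , payload  :: AppMsg
  , channel  :: Channel
  } deriving (Eq, Show)

-- A directed term is a signed high-level term; a strand's trace is a list of them.
data Directed = Send HighTerm | Recv HighTerm
  deriving (Eq, Show)

type Trace = [Directed]
```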
2.2 The Penetrator
We can classify the activities of the penetrator according to their effects on Application Layer messages:
– actions that are used to construct or pick apart Application Layer messages;
– actions that are used to handle high-level terms, affecting the Transport Layer "packaging" without modifying the corresponding Application Layer messages.

The first type of actions is used to transform and create messages of the form (_, _, m, ⊥), i.e. messages that are sent on the bottom channel without being associated with a particular sender or receiver. To model them, we adapt the standard penetrator strands from [THG98] to handle high-level terms.

Definition 5. A standard penetrator strand in a high-level bundle is one of the following:
M. Text message: +(_, _, r, ⊥) where r ∈ TP;
K. Key: +(_, _, k, ⊥) where k ∈ KP;
C. Concatenation: −(_, _, t0, ⊥), −(_, _, t1, ⊥), +(_, _, t0ˆt1, ⊥);
S. Separation into components: −(_, _, t0ˆt1, ⊥), +(_, _, t0, ⊥), +(_, _, t1, ⊥);
E. Encryption: −(_, _, k, ⊥), −(_, _, t, ⊥), +(_, _, {t}k, ⊥) where k ∈ K;
D. Decryption: −(_, _, k⁻¹, ⊥), −(_, _, {t}k, ⊥), +(_, _, t, ⊥) where k ∈ K.
The second type of penetrator actions only affects the "packaging" of the Application Layer message, i.e. it only affects the first, second, and fourth components of a high-level term. These paths are used to perform the following activities:
1. Send: the penetrator may send an Application Layer message m by creating a Transport Layer message with payload m, and inserting it in the network using a penetrator's identity.
2. Receive: the penetrator may receive an Application Layer message m as the payload of a Transport Layer message that was sent for him by a regular agent.
3. Learn: the penetrator may intercept and learn an Application Layer message m from a Transport Layer message with a payload m that was exchanged between regular agents.
4. Fake: the penetrator may fake an Application Layer message by creating a Transport Layer message with payload m, and inserting it in the network dishonestly (i.e. using another agent's identity).
5. Hijack: the penetrator may change the sender and/or receiver field in a previously sent Transport Layer message without changing the Application Layer message; the penetrator can perform hijacking in three ways [DL08]:
   (a) Re-ascribe: the penetrator may re-ascribe a previously sent message by intercepting and sending it using another agent's identity.
   (b) Redirect: the penetrator may redirect a previously sent message by intercepting it and sending it to a different agent.
   (c) Re-ascribe/redirect: the penetrator may re-ascribe and redirect a previously sent message at the same time.

We abstractly model each of the penetrator paths defined above as a high-level penetrator strand that sends and receives terms of the form (A, B, m, c).

Definition 6. A penetrator strand in a high-level bundle is either a standard penetrator strand or a high-level penetrator strand of one of the following forms:
SD. Sending: −(_, _, m, ⊥), +(P, B, m, c) where P ∈ Tpname and B ∉ Tpname;
RV. Receiving: −(A, P, m, c), +(_, _, m, ⊥) where P ∈ Tpname and A ∉ Tpname;
LN. Learning: −(A, B, m, c), +(_, _, m, ⊥) where A, B ∉ Tpname;
FK. Faking: −(_, _, m, ⊥), +(A, B, m, c) where A, B ∉ Tpname;
HJ. Hijacking: −(X, Y, m, c), +(X′, Y′, m, c) such that X ≠ X′ or Y ≠ Y′.
As an example, Figure 1 illustrates part of a bundle, where the penetrator uses several different strands to transform the high-level message (S, P, AˆN, c) into (S, B, PˆN, c).
2.3 High-Level Bundles
A high-level bundle is a finite subgraph of N , (→ ∪ ⇒) for which the edges express the causal dependencies of the nodes.
Fig. 1. Transforming a high-level term
Definition 7 [THG98]. Suppose →B ⊂ →, ⇒B ⊂ ⇒, and B = ⟨NB, →B ∪ ⇒B⟩ is a subgraph of ⟨N, → ∪ ⇒⟩. B is a bundle if: (1) NB and →B ∪ ⇒B are finite; (2) if n2 ∈ NB and msg(n2) is negative, then there is a unique n1 such that n1 →B n2; (3) if n2 ∈ NB and n1 ⇒ n2 then n1 ⇒B n2; and (4) B is acyclic. We write ⪯B for (→B ∪ ⇒B)*. The relation ⪯B expresses the causal relationship in the high-level bundle B.

Proposition 8. Let B be a high-level bundle. Then ⪯B is a partial order, i.e. a reflexive, antisymmetric, transitive relation. Every non-empty subset of the nodes in B has a ⪯B-minimal member.

Definition 9. Bundles B and B′ in a strand space Σ are equivalent iff they have the same regular strands.
3 Modelling Secure Channels
So far we allow high-level bundles with arbitrary penetrator strands. In this section we restrict these strands to capture properties of secure channels. Our approach follows [DL08]. We begin with confidential channels, and then provide the building blocks of authenticated channels. Each channel is associated with a specification that states which of these properties it satisfies.

Confidential channels protect the confidentiality of the messages sent on them. If message (A, B, m, c) is sent over a confidential channel c, the penetrator cannot deduce m if the message was not intended for him. However, he can still see the high-level message. We can define a secure channel c to satisfy the confidentiality property C in terms of the penetrator's activity as follows.

Definition 10 (Confidential Channels). Let channel c satisfy C. Then there is no LN strand of the form −(A, B, m, c), +(_, _, m, ⊥) where A, B ∉ Tpname in any high-level bundle.

For example, if the Transport Layer protocol encodes the high-level term (A, B, m, c) as Aˆ{m}PK(B), where PK(B) is B's public key, then it provides a confidential channel. We write C(c) to indicate that c satisfies C, and similarly for the properties we define below.
If a channel is non-fakable, then the penetrator cannot create and send an application message using another agent's identity.

Definition 11 (No faking). Let channel c satisfy NF. Then there is no FK strand of the form −(_, _, m, ⊥), +(A, B, m, c) where A, B ∉ Tpname in any high-level bundle.

For example, if the Transport Layer protocol encodes the high-level term (A, B, m, c) as Bˆ{m}SK(A), where SK(A) is A's secret key, then it provides a non-fakable channel.

We now consider various restrictions on hijacking. In each case we do not want to prevent HJ strands of the form −(X, Y, m, c), +(X′, Y′, m, c) where (i) if C(c) then Y ∈ Tpname, and (ii) if NF(c) then X′ ∈ Tpname: in such cases the penetrator could learn m (via a RV or LN strand) and then produce +(X′, Y′, m, c) (via a SD or FK strand) to produce the same effect.

If a channel is non-re-ascribable, then the penetrator cannot intercept a previously sent message and send it using a different sender's identity. Following [DL08], we distinguish between two notions of no re-ascribing:
– no re-ascribing, where the penetrator cannot re-ascribe messages using any identity;
– no honest re-ascribing, where the penetrator cannot re-ascribe messages using an honest identity, but can still re-ascribe messages using a penetrator's identity.

For example, if the Transport Layer protocol encodes the high-level term (A, B, m, c) as {{m}PK(B)}SK(A), then the penetrator P may replace the signature using SK(A) with his own signature using SK(P), so as to re-ascribe the message to himself; however, he cannot re-ascribe the message to an honest agent. On the other hand, if (A, B, m, c) is encoded as {{m, A}PK(B)}SK(A), then he can no longer re-ascribe the message to himself. We define non-re-ascribable channels as follows.¹

Definition 12 (No honest re-ascribing). Let channel c satisfy NRA−. Then for every HJ strand of the form −(X, Y, m, c), +(X′, Y′, m, c) in a high-level bundle, one of the following holds: (a) X′ = X, i.e. no re-ascribing takes place; (b) X′ ∈ Tpname, i.e. the message is re-ascribed with a penetrator's identity; or (c) if C(c) then Y ∈ Tpname, and if NF(c) then X′ ∈ Tpname, i.e., as discussed above, the penetrator can learn the underlying Application Layer message and then send or fake the message.

Definition 13 (No re-ascribing). Let channel c satisfy NRA. Then for every HJ strand of the form −(X, Y, m, c), +(X′, Y′, m, c) in a high-level bundle, one of the following holds: (a) X′ = X, i.e. no re-ascribing takes place; (b) X, X′ ∈ Tpname, i.e. the message is re-ascribed from one penetrator identity to another; or (c) if C(c) then Y ∈ Tpname, and if NF(c) then X′ ∈ Tpname.
¹ Some of the details of these definitions are a little delicate, and are necessary for some of the future work discussed in Section 6.
If a channel is non-redirectable, the penetrator cannot intercept a previously sent message and send it to a different receiver. As with re-ascribing, we distinguish between two notions of no redirecting:
– no redirecting, where the penetrator cannot redirect any message;
– no honest redirecting, where the penetrator cannot redirect messages sent to honest participants, but can redirect messages sent to himself.

For example, if the Transport Layer protocol encodes the high-level term (A, Y, m, c) as {{m}SK(A)}PK(Y), then the penetrator P can transform a message for himself, i.e. {{m}SK(A)}PK(P), into one for B, i.e. {{m}SK(A)}PK(B), and so redirect it to B; however he cannot redirect a message sent to an honest agent. On the other hand, if (A, Y, m, c) is encoded as {{m, Y}SK(A)}PK(Y), then he can no longer redirect a message sent to himself. We define non-redirectable channels as follows.

Definition 14 (No honest redirecting). Let channel c satisfy NRD−. Then for every HJ strand of the form −(X, Y, m, c), +(X′, Y′, m, c) in a high-level bundle, one of the following holds: (a) Y′ = Y, i.e. no redirecting takes place; (b) Y ∈ Tpname, i.e. the original message was sent to the penetrator; or (c) if C(c) then Y ∈ Tpname, and if NF(c) then X′ ∈ Tpname.

Definition 15 (No redirecting). Let channel c satisfy NRD. Then for every HJ strand of the form −(X, Y, m, c), +(X′, Y′, m, c) in a high-level bundle, one of the following holds: (a) Y′ = Y, i.e. no redirecting takes place; (b) Y, Y′ ∈ Tpname, i.e. the message is redirected from one penetrator identity to another; or (c) if C(c) then Y ∈ Tpname, and if NF(c) then X′ ∈ Tpname.
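The channel properties of this section form a small specification vocabulary. The Haskell sketch below is our own illustration of how such a specification might be recorded; the record fields and the example assignment are assumptions for illustration, with True meaning "the property is assumed of the channel" and False meaning only "not assumed here".

```haskell
-- Illustrative sketch: channel properties (Definitions 10-15) as flags.
data ChannelSpec = ChannelSpec
  { confidential      :: Bool  -- C
  , noFaking          :: Bool  -- NF
  , noReascribe       :: Bool  -- NRA
  , noHonestReascribe :: Bool  -- NRA-
  , noRedirect        :: Bool  -- NRD
  , noHonestRedirect  :: Bool  -- NRD-
  } deriving Show

nothingAssumed :: ChannelSpec
nothingAssumed = ChannelSpec False False False False False False

-- The channel used for the single sign-on example in Sect. 5, which the
-- authors believe to satisfy C and NRD-.
unilateralTLS :: ChannelSpec
unilateralTLS = nothingAssumed { confidential = True, noHonestRedirect = True }
```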
4 Normal and Abstractly Efficient Bundles
Bundles can contain various types of redundancy. For example, an encryption edge immediately followed by a decryption edge just reproduces the original term: this redundancy can be removed to produce an equivalent bundle. It is clearly simpler if we can restrict our attention to bundles without such redundancies. This is the question we consider in this section.

Definition 16. In a high-level bundle, a ⇒+ edge is constructive if it is part of an E, C, SD or FK strand. It is destructive if it is part of a D, S, LN or RV strand. An edge is non-destructive if it is constructive or part of an HJ strand. Similarly, an edge is non-constructive if it is destructive or part of an HJ strand.

Definition 17. A high-level bundle B is normal iff for any penetrator path of B, no non-destructive edge precedes a non-constructive edge.

Proposition 18. For every high-level bundle B, there exists an equivalent high-level normal bundle B′. Moreover, the penetrator nodes of B′ form a subset of the penetrator nodes of B, and the ordering ⪯B′ is a restriction of the ordering ⪯B.
Proof (Sketch). The proof proceeds by showing that whenever a non-destructive edge precedes a non-constructive edge, an equivalent bundle can be found without this redundancy. For standard penetrator strands, the proof is as in [GT01]. Figure 2 (a)–(d) gives some examples of how to replace redundancies arising from high-level penetrator strands. A simple case analysis shows that no redundancy involves both a standard penetrator strand and a high-level penetrator strand.

Normal bundles may still contain redundancies. For example, an LN strand followed by an SD strand may be replaced by an HJ strand (or a transmission edge if the identities match).

Definition 19. A high-level bundle B is abstractly efficient if every penetrator path p that starts at n1 such that msg(n1) = +(X, Y, m, c), and ends at n2 such that msg(n2) = −(X′, Y′, m, c), consists of a single HJ strand or else there is a transmission edge between n1 and n2.

Proposition 20. For every high-level bundle, there exists an equivalent high-level efficient bundle that is also normal.

Proof (Sketch). Figure 2 (e)–(f) gives examples of how some of the remaining redundancies can be removed.
5 Example: A Single Sign-On Protocol
In this section we illustrate our definitions of high-level bundles and secure channels via a small example. We consider a single sign-on protocol that authenticates a User U to a Service Provider SP, with the help of an Identity Provider IdP. We will use a secure channel c that satisfies C ∧ NRD−. It is reasonable to suppose that the Service Provider and Identity Provider each has a public key certificate, so unilateral TLS can be used to establish an authenticated channel to them; we believe that this channel satisfies C ∧ NRD−. For messages sent from the Identity Provider to the User, the channel could be implemented using unilateral TLS to authenticate the Identity Provider, combined with the User sending a password to authenticate herself; we believe that this channel satisfies C ∧ NF ∧ NRD ∧ NRA, which is stronger than is required. We will consider the following protocol, where →c indicates messages sent using channel c, and → indicates messages sent on the bottom channel.

0. U  → SP   : UˆIdP
1. SP →c IdP : 1ˆSPˆUˆN
2. IdP →c U  : 2ˆIdPˆSPˆN
3. U  →c SP  : 3ˆUˆIdPˆN
Here N is a fresh unpredictable value; "1", "2" and "3" are distinct tags used to ensure unique readability of the messages. Message 0 is sent across the bottom channel to initiate the protocol. SP then creates a fresh nonce which is passed via IdP to U, and then back to SP in order to authenticate U to SP.
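For concreteness, the four protocol messages can be written down as a tiny Haskell sketch. This is purely illustrative (not the authors' formalisation): an Application Layer message is modelled as a list of fields, with '^' in the narration corresponding to list construction.

```haskell
-- Illustrative rendering of messages 0-3 of the single sign-on protocol.
type Field  = String
type AppMsg = [Field]

msg0, msg1, msg2, msg3 :: String -> String -> String -> String -> AppMsg
msg0 u _  idp _ = [u, idp]            -- 0. U   -> SP   (bottom channel)
msg1 u sp _   n = ["1", sp, u, n]     -- 1. SP  ->c IdP
msg2 _ sp idp n = ["2", idp, sp, n]   -- 2. IdP ->c U
msg3 u _  idp n = ["3", u, idp, n]    -- 3. U   ->c SP
```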
Fig. 2. Redundancies and how to eliminate them. ♣ indicates a discarded message. (a) A path containing an FK-LN redundancy, replaced by a transmission edge. (b) A path containing an FK-HJ redundancy, replaced by an FK strand. (c) A path containing an HJ-LN redundancy, replaced by an LN strand. (d) A path containing an HJ-HJ redundancy, replaced by an HJ strand. (e) An inefficient LN-SD path and the corresponding efficient path. (f) An inefficient RV-FK path and the corresponding efficient path.
In order to model this protocol, we start by defining regular strands for each of the three roles.
– Strands of the form User(U, SP, IdP, N) have trace +(U, SP, UˆIdP, ⊥), −(IdP, U, 2ˆIdPˆSPˆN, c), +(U, SP, 3ˆUˆIdPˆN, c).
– Strands of the form ServProv(SP, U, IdP, N) have trace −(U, SP, UˆIdP, ⊥), +(SP, IdP, 1ˆSPˆUˆN, c), −(U, SP, 3ˆUˆIdPˆN, c).
– Strands of the form IdProv(IdP, U, SP, N) have trace −(SP, IdP, 1ˆSPˆUˆN, c), +(IdP, U, 2ˆIdPˆSPˆN, c).

We will therefore consider bundles B containing strands of the above form (and no other regular strands). Further, since each nonce N is freshly generated, we assume that for every ServProv(SP, U, IdP, N) strand st containing at least two nodes in the bundle, N originates uniquely at (st, 2).

Consider a bundle B containing all three nodes of a Service Provider strand st = ServProv(SP, U, IdP, N), and such that SP, U, IdP ∉ Tpname. We aim to show that there is a corresponding User strand in B, i.e., the User is authenticated to the Service Provider. By Proposition 20, we may, without loss of generality, assume that B is normal and abstractly efficient.

The reason the protocol works is that only SP, U and IdP can obtain N. The lemma below captures this. Let X be the set containing the terms (SP, IdP, 1ˆSPˆUˆN, c), (IdP, U, 2ˆIdPˆSPˆN, c), and (U, SP, 3ˆUˆIdPˆN, c).

Lemma 21. Every occurrence of N on a regular node is within a high-level term from X; N does not occur within any high-level term of the form (_, _, m, ⊥).

Proof. Suppose for a contradiction that the result does not hold. Let n be a
⪯B-minimal node where this occurs (there must be a minimal such node by Proposition 8). Clearly n ≠ (st, 2), since msg(st, 2) is in X. Since N originates uniquely at (st, 2), N does not originate at n. Hence one of the following holds.
– n is a positive regular node. Then there must be some node n′ such that n′ ⇒+ n and N occurs within msg(n′). Then, by the assumed ⪯B-minimality of n, msg(n′) ∈ X; hence the strand containing n and n′ transforms a message from X into a message not in X. But no regular strand can do this; for example:
  • If an Identity Provider strand receives a message from X, it is necessarily of the form (SP, IdP, 1ˆSPˆUˆN, c); it will then send (IdP, U, 2ˆIdPˆSPˆN, c), which is also in X: the presence of the SP and U fields within message 1 is important here.
  • If a User strand receives a message from X, it is necessarily of the form (IdP, U, 2ˆIdPˆSPˆN, c); it will then send (U, SP, 3ˆUˆIdPˆN, c), which is also in X: the presence of the SP field within message 2 is important here.
– n is either a negative regular node containing N outside X, or a penetrator node containing N in a term of the form (_, _, m, ⊥). In each case, the term is produced by a penetrator path that starts at a regular node n′ containing a term from X. Since B is normal and abstractly efficient, every such penetrator path must start with an RV or LN strand, or comprise a single HJ strand. Clearly no RV strand can operate on terms from X. Since c is a confidential channel, no LN strand can operate on messages from X. Since c satisfies C ∧ NRD−, every HJ strand either: (case (a) of Definition 14) changes only the first field of high-level messages, but no honest strand will accept the result of transforming a term from X in this way; or (cases (b) and (c) of Definition 14) operates on high-level messages whose second field is an element of Tpname, so cannot operate on messages from X.

Now consider the term (U, SP, 3ˆUˆIdPˆN, c) received at (st, 3). From the above lemma, the term could not have been produced by an FK strand. Using this and the fact that the bundle is normal and abstractly efficient, the term must result from either a transmission edge, or an HJ strand, from a term from X. The latter case cannot occur (since each HJ strand changes either the sender or receiver field). An analysis of the honest strands then shows that the message is transmitted from the final node of a User(U, SP, IdP, N) strand.

A similar analysis could be used to show the presence of a corresponding Identity Provider strand; we omit the details.

It is possible to simplify the protocol slightly, by removing some fields. However, we do need the fields U and SP in message 1, and SP in message 2, to ensure that Lemma 21 is satisfied, in particular by honest Identity Provider and User strands. Further (but arguably less importantly), the IdP field in message 3 is needed to ensure the User and Service Provider agree upon the identity of the Identity Provider. Finally, the presence of the U field in message 3 simplifies the proof slightly.
6 Conclusion
In this paper we have described how to model secure channels within the Strand Spaces formalism. We represent messages sent over the network using high-level terms, which abstract away from the implementation of the secure transport protocol. We then abstractly modelled ways in which the penetrator can operate upon such high-level terms: to obtain the underlying Application Layer message (either honestly or dishonestly); to have an honest agent receive the Application Layer message (either apparently from the penetrator or some third party); or to hijack the message, to change either the recipient or the apparent sender. We specified properties of secure channels by restricting the capabilities available to the penetrator. Finally, we illustrated the model by using it to verify a property
of a simple single sign-on protocol: we believe that the proof helps to explain why the protocol is correct.

This is the first of a planned series of papers reporting work from [Kam09]. We briefly discuss some of the results here. Many secure transport protocols group messages together into sessions, so that the recipient of messages receives an assurance that the sender sent those messages as part of the same session. For example, a single sign-on protocol is normally used as a prelude to some session: the Service Provider wants to be sure that all the messages in that session came from the same User who was authenticated by the single sign-on. Further, some transport protocols give the recipient a guarantee that the messages were received in the same order in which they were sent. These properties are captured in [Kam09, Chapter 5].

In this paper we have presented high-level bundles, which abstract away from the implementation of the secure transport protocol. As mentioned in the Introduction, one could also model layered architectures by explicitly modelling the transport protocol, in low-level bundles. In [Kam09, Chapter 6] the relationship between these models is described, and it is shown that —subject to certain independence assumptions— for every low-level bundle, there is a high-level bundle that abstracts it. Hence the abstraction is sound: by verifying a protocol in a high-level Strand Space, one can deduce that the implementation of the protocol, as modelled in the low-level Strand Space, is also correct.

In [DL08] it is shown that not all combinations of the properties from Section 3 are distinct, and a hierarchy of different properties is —informally— derived. In [Kam09, Chapter 7] the same result is —more formally— obtained for our Strand Spaces definitions.

In Section 5, we performed a direct verification of the example protocol. In [Kam09, Chapter 7], a number of verification-oriented rules are presented. Some rules concern when an honest agent receives a message over a particular secure channel, and allow one to deduce facts about how that message was produced. Further rules allow one to verify the secrecy of certain terms, while others adapt the Authentication Tests of [GT00] to high-level bundles. In [Kam09, Chapter 8], these rules are used in a number of examples, concerning both layered protocol architectures and empirical channels.
References

[ACC07] Armando, A., Carbone, R., Compagna, L.: LTL model checking for security protocols. In: 20th IEEE Computer Security Foundations Symposium (2007)
[ACC+08] Armando, A., Carbone, R., Compagna, L., Cuellar, J., Tobarra, L.: Formal analysis of SAML 2.0 web browser single sign-on: Breaking the SAML-based single sign-on for Google Apps. In: The 6th ACM Workshop on Formal Methods in Security Engineering, FMSE 2008 (2008)
[BF08] Bugliesi, M., Focardi, R.: Language based secure communication. In: Proceedings of the 21st IEEE Computer Security Foundations Symposium (2008)
[BLP03] Bella, G., Longo, C., Paulson, L.: Verifying second-level security protocols. In: Basin, D., Wolff, B. (eds.) TPHOLs 2003. LNCS, vol. 2758, pp. 352–366. Springer, Heidelberg (2003)
[CGRZ03] Creese, S.J., Goldsmith, M.H., Roscoe, A.W., Zakiuddin, I.: The attacker in ubiquitous computing environments: formalising the threat model. In: Proceedings of the 1st International Workshop on Formal Aspects in Security and Trust, FAST (2003)
[Dil08] Dilloway, C.: On the Specification and Analysis of Secure Transport Layers. DPhil thesis, Oxford University (2008)
[DL08] Dilloway, C., Lowe, G.: Specifying secure transport layers. In: 21st IEEE Computer Security Foundations Symposium, CSF 21 (2008)
[Goo08] Google: Web-based reference implementation of SAML-based SSO for Google Apps (2008), http://code.google.com/apis/apps/sso/saml reference implementation web.html
[GT00] Guttman, J.D., Thayer, F.J.: Authentication tests. In: IEEE Symposium on Security and Privacy, pp. 96–109 (2000)
[GT01] Guttman, J.D., Thayer, F.J.: Authentication tests and the structure of bundles. Theoretical Computer Science (2001)
[HSN05] Hansen, S.M., Skriver, J., Nielson, H.R.: Using static analysis to validate the SAML single sign-on protocol. In: Proceedings of the 2005 Workshop on Issues in the Theory of Security, WITS 2005 (2005)
[Kam09] Kamil, A.: The Modelling and Analysis of Layered Security Architectures in Strand Spaces. DPhil thesis, Oxford University, forthcoming (2009)
[KL09] Kamil, A., Lowe, G.: Analysing TLS in the Strand Spaces model (2009) (submitted for publication)
[OAS05] OASIS Security Services Technical Committee: Security assertion markup language (SAML) v2.0 technical overview (2005), http://www.oasis-open.org/committees/security/
[Ros98] Roscoe, A.W.: The Theory and Practice of Concurrency. Prentice Hall, Englewood Cliffs (1998)
[THG98] Javier Thayer, F., Herzog, J.C., Guttman, J.D.: Strand spaces: Why is a security protocol correct? In: IEEE Symposium on Research in Security and Privacy, pp. 160–171. IEEE Computer Society Press, Los Alamitos (1998)
[Tho00] Thomas, S.: SSL and TLS: Securing the Web. Wiley, Chichester (2000)
Integrating Automated and Interactive Protocol Verification

Achim D. Brucker and Sebastian A. Mödersheim
Abstract. A number of current automated protocol verification tools are based on abstract interpretation techniques and other over-approximations of the set of reachable states or traces. The protocol models that these tools employ are shaped by the needs of automated verification and require subtle assumptions. Also, a complex verification tool may suffer from implementation bugs so that in the worst case the tool could accept some incorrect protocols as being correct. These risks of errors are also present, but considerably smaller, when using an LCF-style theorem prover like Isabelle. The interactive security proof, however, requires a lot of expertise and time. We combine the advantages of both worlds by using the representation of the over-approximated search space computed by the automated tools as a “proof idea” in Isabelle. Thus, we devise proof tactics for Isabelle that generate the correctness proof of the protocol from the output of the automated tools. In the worst case, these tactics fail to construct a proof, namely when the representation of the search space is for some reason incorrect. However, when they succeed, the correctness only relies on the basic model and the Isabelle core.
1 Introduction
Over the last decade, a number of automated tools for security protocol verification have been developed, such as AVISPA [1] and ProVerif [4]. They allow engineers to find problems in their security protocols before deployment. Indeed, several attacks on security protocols have been detected using automated tools. The focus of this work is the positive case—when no attack is found: to obtain a proof of security. Many automated tools employ over-approximation and abstraction techniques to cope with the infinite search spaces that are caused, e.g., by an unbounded number of protocol sessions. This means checking the protocol in a finite abstract model that subsumes the original model. Thus, if the protocol is correct in the abstract model, then so it is in the original model. However, the soundness of such abstractions depends on subtle assumptions, and it is often hard to keep track of them, even for experts. Moreover, it is often hard to formalize protocols
correctly in such over-approximated models. Finally, tools may also have bugs. For all these reasons, it is not unlikely that insecure protocols are accidentally verified by automated verification tools. There are semi-automated methods such as the Isabelle theorem prover which offer high reliability: if we trust in a small core (the proof checking and some basic logical axioms), we can rely on the correctness of proved statements. However, conducting proofs in Isabelle requires considerable experience in both formal logic and proof tactics, as well as a proof idea for the statement to show.

In this work, we combine the best of both worlds: reliability and full automation. The idea is that abstraction-based tools supposedly compute a finite representation of an over-approximated search space, i.e., of what can happen in a given protocol, and that this representation can be used as the basis to automatically generate a security proof in Isabelle. This proof is w.r.t. a clean standard protocol model without over-approximation. If anything goes wrong, e.g., the abstraction is not sound, this proof generation fails. However, if we succeed in generating the proof, we only need to trust in the standard protocol model and the Isabelle core. Our vision is that such automatically generated verifiable proofs can be the basis for reaching the highest assurance level EAL7 of a Common Criteria certification at a low cost.

Fig. 1. The workflow of a protocol verification approach combining automated (e.g., OFMC) and interactive (e.g., Isabelle) techniques

We have realized the integration of automated and interactive protocol verification in a prototype tool that is summarized in Fig. 1. The protocol and the proof goals (e.g., secrecy, authentication) are specified in the reference model (Sect. 2 and Sect. 3). This reference model is not driven by technical needs of the automated verification and is close to other high-level protocol models. The description is fed into the automated tool, where we consider (a novel module of) the Open-source Fixed-point Model-Checker OFMC [24], formerly called On-the-Fly Model-Checker. OFMC first chooses an initial abstraction and produces an abstracted version of the protocol description as a set of Horn clauses. From this, the fixed-point (FP) module computes a least fixed-point of derivable events. If this fixed-point contains an attack, this can either be a real attack (that similarly works in the reference model) or a false attack that was caused by the abstraction. By default, OFMC will assume that the attack is false, refine the abstraction based on the attack details, and restart the verification. If the computed fixed-point does not contain an attack, it is handed to the proof generator of Isabelle/OFMC, our extension of the interactive theorem prover Isabelle [25]. The proof generator translates the fixed-point into the terms of the reference model (using annotations about the
abstraction OFMC considered) and generates an Isabelle proof with respect to the protocol and goal description in the reference model. This proof is then fed into the Isabelle core for checking.

We emphasize two points. First, the entire approach is completely automatic: after the specification of the protocol and goals in the reference model, no further user interaction is required. Second, we need to trust in only two points—marked with a dark-gray background in Fig. 1: the reference model and the Isabelle core. Bugs in any other part, namely OFMC or the proof generation, can in the worst case result in a failure to verify a (possibly correct) protocol, but they cannot make us falsely accept a flawed protocol as being correct.

Our approach currently has two limitations: we consider a typed model and, based on this, we limit the intruder composition to well-typed messages of the protocol. The first limitation can be justified by implementation discipline (see [19]). The second limitation is not a restriction, as we show in Theorem 1.

Contributions. An increasing number of works consider the combination of automated methods with interactive theorem proving to obtain both highly reliable and fully automated verification. In this paper, we contribute to this line of work with a novel approach for security protocols. The first novel aspect is that our approach automatically generates a proof from the representation of an over-approximation of the search space computed by an automated protocol verifier. Techniques based on over-approximation, similar to the ones we consider, have turned out to be very successful in protocol verification [4,6,7,10], and our approach is thus the first step towards employing a whole class of established tools for automated proof generation. The second novel aspect is that the proof is entirely based on a standard protocol model without over-approximation, close to the model employed for instance in [26]. Our approach thus relates over-approximated representations with standard protocol models. Practically, we have implemented the integration between Isabelle on the interactive side and the novel FP module of OFMC on the automated side. The result is a completely automated protocol verifier for an unbounded number of sessions and agents that produces Isabelle-verifiable proofs with respect to a standard protocol model.
2 The Reference Protocol Model
We begin with a reference protocol model, which is used in the Isabelle theorem prover and is thus the basis of this work. The model is inspired by the formalization of several security protocols in Isabelle by Paulson and others in [26,3] and is close to the persistent IF model in [23].

Messages. We follow the common black-box cryptography model where messages are modeled as symbolic terms. We define the set of all messages in the style of an inductive datatype in a functional programming language. Here, all sans-serif symbols and the symbols of F are constructors. The inductively defined datatype is interpreted like a term in the free term algebra, i.e. syntactically different terms are interpreted as being different.
Definition 1. Let F, LA, LN, LS, LP, VA, VN, VS, VP, and VU be pairwise disjoint sets of symbols where F, LA, LN, LS, and LP are finite, and VA, VN, VS, VP, and VU are countable. We define the sets of messages, agents, nonces, symmetric keys, and public keys, respectively, to be the following sets of terms:

M = agent A | nonce N | symkey S | pubkey P | VU | crypt M M | inv M | scrypt M M | cat [M] | F M
A = LA × N | VA
N = LN × N | VN
S = LS × N | VS
P = LP × N | VP
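Since the definition is phrased as an inductive datatype, it can be rendered almost literally in a functional language. The following Haskell sketch is only an illustration of Definition 1 (constructor and type names are ours); labels are kept abstract as strings.

```haskell
-- Illustrative Haskell rendering of the message datatype of Definition 1.
type Label = String                 -- elements of L_A, L_N, L_S, L_P

data Agent  = AgentC  Label Int | AgentVar  String deriving (Eq, Show)
data Nonce  = NonceC  Label Int | NonceVar  String deriving (Eq, Show)
data SymKey = SymKeyC Label Int | SymKeyVar String deriving (Eq, Show)
data PubKey = PubKeyC Label Int | PubKeyVar String deriving (Eq, Show)

data Msg
  = MAgent Agent | MNonce Nonce | MSymKey SymKey | MPubKey PubKey
  | MVar String                    -- untyped variables V_U
  | Crypt Msg Msg                  -- asymmetric encryption: crypt k m, written {m}k
  | Inv Msg                        -- private key belonging to a public key
  | Scrypt Msg Msg                 -- symmetric encryption: {|m|}k
  | Cat [Msg]                      -- concatenation
  | Fun String Msg                 -- application of a function symbol from F
  deriving (Eq, Show)
```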
The set A contains both the concrete agent names (LA × N) and the variables for agent names (VA). The concrete agent names consist of a label and a natural number. The labels are for the interplay of Isabelle and the automated methods; for now it is sufficient to think just of an infinite set of agents, indexed by natural numbers. Similarly, N, S, and P define infinite reservoirs of concrete constants and variables for nonces, symmetric keys, and public keys. For convenience, we write in the examples of this paper simply a, b, or i for concrete agent names, A, B for agent variables, n, n1, etc. for concrete nonces, and NA, NB, etc. for nonce variables. In general, we use lower-case letters for constants and function symbols and upper-case letters for variables.

We distinguish atomic messages and composed messages (first and second line in the definition of M). Except for the untyped variables of VU, all atomic messages are of a particular type, namely from one of the sets A, N, S, or P. The constructors like agent ensure that the respective subsets of the message space are disjoint; for instance, no agent name can be a nonce. We discuss the details of typing below. In examples, we will omit the constructors for convenience when the type is clear from the context.

Messages can be composed with one of the following operations: crypt and scrypt represent asymmetric and symmetric encryption, respectively. We also simply write {m}k for crypt k m, and {|m|}k for scrypt k m. inv(M) represents the private key belonging to a public key. cat denotes concatenation. For readability, we omit the cat and the list brackets. For instance, the term {na3, b}pk(a) is convenient notation for the following more technical message (for given labeling and numbering): crypt (pk (agent (honest, 1))) (cat [nonce (na, 3), agent (dishonest, 2)]). Here we have used two labels, honest and dishonest, for agents. This represents the default abstraction for agents in the abstract model. We require that the abstraction for the agents is a refinement of the default abstraction. We use this labeling to distinguish honest and dishonest agents also in the reference model.

We use the standard notion of matching messages. The constructors like agent here enforce a typing regime: typed variables can only be matched with atomic messages of the same type. Only untyped variables can be matched with composed messages. In rules for honest agents, we only use typed variables. Such a typed model, which is standard in protocol verification, even in interactive verification with Isabelle [26,3], considerably simplifies the verification task. The typing can be justified by tagging as in [19].

Events and Traces. We define the set of events also as an inductive datatype, based on messages:
Fig. 2. The protocol independent rules of the reference model: the first four are composition rules (C), the next four are decomposition rules (D), and the last two are attack rules (A)
Definition 2. The set of events is defined as follows:

E ::= iknows M | state R [M] | secret A M | witness A A I M | request A A I M | attack M

where I is a finite set of identifiers disjoint from all other symbols so far and R ⊂ I. A trace is a finite sequence e1 # · · · # en of events ei. The identifier set I contains constant symbols for the protocol variables, allowing us to describe which protocol variable an agent interprets a particular message as. We use Gothic fonts for identifiers, e.g. A and B. The event iknows m means that the intruder just learned the message m. The event state R msgs means that an honest agent playing role R has reached a state of its protocol execution that is characterized by the list msgs of messages. We need the other four events for expressing the goals in a protocol-independent form when we introduce attack rules below.

Rules and Protocols. Based on these definitions, we formalize protocols by a set of inductive rules on traces that have the following form:

  t ∈ T    φ(t, e1, . . . , en)
  -----------------------------
    e1 # . . . # en # t ∈ T
i.e. whenever t is a valid trace of the set T of traces and e1, . . . , en are events that fulfill a certain condition φ with t, then the extension of t with these events is also part of T. Also, we have the rule that the empty trace is part of T. Note that we require that all transition rules of honest agents contain only typed variables, i.e. no variables of VU. Fig. 2 shows the protocol-independent rules for the intruder (C) and (D), following the standard Dolev-Yao style intruder deduction, as well as the attack rules (A), which follow the standard definitions of attacks in AVISPA [1]; here [t] denotes the set of events in the trace t and Fpub ⊆ F is the set of functions that are accessible to the intruder. Before we explain the attack rules, we describe the transition rules of role Alice of the standard example protocol, NSL [20] (more interesting examples are found in Sect. 6):
Example 1

t ∈ T    NA ∉ used(t)
⟹ iknows {NA, A}pk(B) # state A [A, B, NA] # witness A B NA NA # secret B NA # t ∈ T

t ∈ T    state A [A, B, NA] ∈ [t]    iknows {NA, NB, B}pk(A) ∈ [t]
⟹ iknows {NB}pk(B) # request A B NB NB # t ∈ T
Here, used(t) is the set of all atomic messages that occur in t; it allows for the fresh generation of nonces. The goals of a protocol are described negatively by what counts as an attack. This is done by attack rules that have the event attack on the right-hand side. We now explain the events that we use in attack rules. First, secret A M means that some honest agent (not specified) requires that the message M is a secret with agent A. Thus, it counts as an attack if a trace contains both iknows M and secret A M for an honest agent A (first attack rule in Fig. 2). For authentication, the event witness A B id M means that for a particular purpose id, the honest agent A wants to transmit the message M to agent B. Correspondingly, when B believes it has received message M for purpose id from agent A, the event request B A id M occurs. It thus counts as an attack if request B A id M occurs in a trace for an honest agent A and the trace does not contain the corresponding event witness A B id M (second attack rule in Fig. 2). We call a trace an attack trace if it contains the attack event. Definition 3. Let a protocol be described by an inductive set R of rules. The protocol is said to be safe if the least set T of traces that is closed under R contains no attack trace. Despite some differences, our model is similar to the one of Paulson [26], as discussed in detail in the extended version of this paper [8].
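As an illustration of Definition 2 and of the two attack rules, here is a small OCaml sketch (reusing the msg type from the earlier sketch) that represents events and traces and checks a finished trace for a secrecy or authentication violation. It is a simplified reading of Fig. 2, not the paper's Isabelle formalization, and the names are assumptions made for illustration.

type event =
  | Iknows  of msg
  | State   of string * msg list          (* role and its message list  *)
  | Secret  of msg * msg                  (* secret A M                 *)
  | Witness of msg * msg * string * msg   (* witness A B id M           *)
  | Request of msg * msg * string * msg   (* request B A id M           *)
  | Attack  of msg

type trace = event list                   (* e1 # ... # en              *)

let honest = function Atom (Agent ("honest", _)) -> true | _ -> false

(* Does the trace violate a secrecy or an authentication goal? *)
let has_attack (t : trace) : bool =
  let mem e = List.mem e t in
  List.exists (function
    | Secret (a, m) -> honest a && mem (Iknows m)
    | Request (b, a, id, m) -> honest a && not (mem (Witness (a, b, id, m)))
    | _ -> false) t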
3 Limiting Intruder Composition
The closure of the intruder knowledge under the composition rules of the intruder is generally infinite, e.g. the intruder can concatenate known messages arbitrarily.
However, many of these messages are useless to the intruder since, due to typing, no honest agent accepts them. It is therefore intuitive that we do not lose any attacks if we limit intruder composition to terms, and subterms thereof, that some honest agent can actually receive. (A similar idea has been considered e.g. in [28].) This limitation on intruder composition makes our approach significantly simpler, and we have therefore chosen to integrate this simplification into our reference model for this first version, and leave the generalization to an unlimited intruder for future versions. We formally define the transformation that limits intruder composition as follows: Definition 4. For a set of rules R that contain no untyped variables VU, let MR be the set of all messages that occur in an iknows or secret event of any rule of R, along with their sub-messages. We say that an intruder composition rule r can compose terms for R if the resulting term of r can be unified with a term in MR. In this case we call rσ an r-instance for compositions of MR if σ is the most general unifier between the resulting term of r and a message in MR. We say that R is saturated if it contains all r-instances for composition in MR. Since we excluded untyped variables, atomic messages in MR are typed, i.e. of the form type(·). Also, due to typing, every finite R has a finite saturation. Theorem 1. Given an attack against a protocol described by a set of rules R ∪ C ∪ D ∪ A, where C and D are the intruder composition and decomposition rules and A are the attack rules. Let R′ be a saturated superset of R. Then there is an attack against R′ ∪ D ∪ A. The proof is found in the extended version of this paper [8].
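The following hedged OCaml sketch shows the sub-message closure used to build the set MR of Definition 4; the unification of composition rules against MR, which yields the r-instances, is omitted, and the function names are illustrative.

(* Sub-message closure, reusing the msg type sketched earlier. *)
let rec submsgs (m : msg) : msg list =
  m :: (match m with
        | Crypt (k, p) | Scrypt (k, p) -> submsgs k @ submsgs p
        | Inv k                        -> submsgs k
        | Cat ms | Fn (_, ms)          -> List.concat_map submsgs ms
        | Atom _ | Var _ | VarU _      -> [])

(* MR for a rule set, given the messages of its iknows/secret events. *)
let m_r (msgs_of_rules : msg list) : msg list =
  List.sort_uniq compare (List.concat_map submsgs msgs_of_rules)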
4 The Abstract Protocol Model
We now summarize two kinds of over-approximations of our model that are used in our automated analysis tool to cope with the infinite set of traces induced by the reference model. These techniques are quite common in protocol verification and a more detailed description can be found in [23]; we discuss them here only as far as they are relevant for our generation of Isabelle proofs. The first technique is a data abstraction that maps the infinite set of ground atomic messages (that can be created in an unbounded number of sessions) to a finite set of equivalence classes in the style of abstract interpretation approaches. The second is a control abstraction: we forget about the structure of traces and just consider reachable events in the sense that they are contained in some trace of the reference model. Neither of these abstractions is safe: each may introduce false attacks (that are not possible in the reference model). Also, it is not guaranteed in general that the model allows only for a finite number of reachable events, i.e. the approach may still run into non-termination. Data Abstraction. In the style of abstract interpretation, we first partition the set of all ground atomic messages into finitely many equivalence classes and
then work on the basis of these equivalence classes. Recall that atomic ground messages are defined as a pair (l, n) where l is a label and n is a natural number. We use the label to denote the equivalence class of the message in the abstraction. The abstract model thus identifies different atoms with the same label, and hence we just omit the second component in all messages of the abstract model. There is a large variety of such data abstractions. For the proof generation the concrete way of abstraction is actually irrelevant, and we just give one example for illustration: Example 2. The initial abstraction that OFMC uses for freshly created data is the following. If agent a creates a nonce n for agent b, we characterize the equivalence class for that nonce in the abstract model by the triple (n, a, b). This abstraction can be rephrased as “in all sessions where a wants to talk to b, a uses the same constant for n (instead of a really fresh one)”. For a large number of cases, this simple abstraction is sufficient; in the experiments of Sect. 6, only two examples (NSL and Non-reversible Functions) require a more fine-grained abstraction. Note that OFMC automatically refines abstractions when the verification fails, but this mechanism is irrelevant for the proof generation. The rules for A of the protocol from Example 1 then look as follows:

t ∈ T
⟹ iknows {(NA, A, B), A}pk(B) # state A [A, B, (NA, A, B)] # witness A B NA (NA, A, B) # secret B (NA, A, B) # t ∈ T

t ∈ T    state A [A, B, NA] ∈ [t]    iknows {NA, NB, B}pk(A) ∈ [t]
⟹ iknows {NB}pk(B) # request A B NB NB # t ∈ T

Here, we only abstract NA in the first rule, where it is created by A. It is crucial that the nonce NB is not abstracted in the second rule: since NB is not generated by A, A cannot be sure a priori that it was indeed generated by B. In fact, if we also abstract NB here, the proof generation fails, because the resulting fixed-point no longer over-approximates the traces of the reference model. More generally, fresh data are abstracted only in the rule where they are created. Finally, observe that the condition that the freshly created NA never occurred in the trace before is gone, because now agents may actually use the same value several times in place of the fresh nonce. The key idea to relate our reference model with the abstract one is the use of labels in the definition of concrete data. Recall that each concrete atomic message in the reference model is a pair of a label and a natural number. The finite set of labels is determined by the abstraction we use in the abstracted model; in the above example, we use LN = {NA, NB} × LA × LA where LA is the abstraction of the agents (for instance LA = {honest, dishonest}). As the atomic messages consist of both such a label and a natural number, the reference model is thus endowed with an infinite supply of constants for each equivalence class of the abstract model. The relationship between data in the reference and abstract models is straightforward: the abstraction of the concrete constant (l, n)
is simply l. Vice versa, each equivalence class l in the abstract model represents the set of data {(l, n) | n ∈ N} in the reference model. It is crucial that in the reference model, the labels are merely annotations to the data and the rules do not care about these annotations, except for the distinction of honest and dishonest agents as discussed before. The labels however later allow us to form a security proof in the reference model based on the reachable events in the concrete model. We need to take the abstraction into account in the reference model when creating fresh data. In particular, we need to enforce the labeling that reflects exactly the abstraction. We extend the assumptions of the first rule from Example 1 by a condition on the label of the freshly created NA:

t ∈ T    NA ∉ used(t)    label(NA) = (NA, A, B)
⟹ iknows {NA, A}pk(B) # state A [A, B, NA] # witness A B NA NA # secret B NA # t ∈ T

where label(l, n) = l. (Recall that every concrete value is a pair of a label and a natural number.) Control Abstraction. We now come to the second part of the abstraction. Even with the first abstraction on data, the model gives us an infinite number of traces (that are of finite but unbounded length). The idea for simplification is that under the data-abstraction, the trace structure is usually not relevant anymore. In the reference model, we need the trace structure for the creation of fresh data and for distinguishing potentially different handling of the same constant in different traces. Under the data-abstraction, however, all these occurrences fall together. In the abstract model we thus abandon the notion of traces and consider only the set E of events that can ever occur. Example 3. Our running example now has the following form:

iknows {(NA, A, B), A}pk(B), state A [A, B, (NA, A, B)], witness A B NA (NA, A, B), secret B (NA, A, B) ∈ E

state A [A, B, NA] ∈ E    iknows {NA, NB, B}pk(A) ∈ E
⟹ iknows {NB}pk(B), request A B NB NB ∈ E
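The control abstraction can be read as a least fixed-point computation over sets of abstract events. The sketch below (reusing the event type from the earlier sketch) iterates a list of abstract rules until no new events appear; this only mirrors the idea and is not OFMC's fixed-point module, and, as noted above, such an iteration need not terminate in general.

module EventSet = Set.Make (struct type t = event let compare = compare end)

(* An abstract rule maps the events reached so far to new events. *)
type abs_rule = EventSet.t -> event list

(* Naive least fixed-point iteration; may diverge if the rules keep
   producing fresh events. *)
let lfp (rules : abs_rule list) : EventSet.t =
  let step e =
    List.fold_left
      (fun acc r -> List.fold_left (fun a ev -> EventSet.add ev a) acc (r e))
      e rules
  in
  let rec iterate e =
    let e' = step e in
    if EventSet.equal e' e then e else iterate e'
  in
  iterate EventSet.empty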
5 Turning Fixed-Points into Proofs
We now turn to the proof generator itself (see Fig. 1), putting the pieces together to obtain the security proof with respect to the reference model. Let RG denote in the following the given set of rules of the reference model that describes the protocol, its goals, and the intruder behavior, as described in Sect. 2 and 3. Recall that OFMC chooses an abstraction for the data that honest agents freshly create, and refines these abstractions if the verification fails. As described
in Sect. 4, the connection between the data abstraction and the reference model is made by annotating each freshly created message with a label expressing the abstraction. Since this annotation is never referred to in the conditions of any rule, the set of traces remains the same modulo the annotation. We denote by RM the variant of the rules with the annotation of the freshly created data. The next step is the inductive definition of the set of traces T in Isabelle, representing the least fixed-point of RM. For such inductive definitions, Isabelle proves automatically various properties (e.g., monotonicity) and derives an induction scheme, i.e. if a property holds for the empty trace and is preserved by every rule of RM, then it holds for all traces of T. This induction scheme is fundamental for the security proof. We define in Isabelle a set of traces T′ that represents the over-approximated fixed-point FP computed by OFMC, expanding all abstractions. We define this via a concretization function ⌈·⌉:

⌈l⌉ = {(l, n) | n ∈ N}
⌈F⌉ = ∪f∈F ⌈f⌉
⌈f t1 . . . tn⌉ = {f s1 . . . sn | si ∈ ⌈ti⌉}
T′ = {e1 # . . . # en | ei ∈ ⌈FP⌉}

This replaces each occurrence of an abstract datum with an element of the equivalence class it represents, and then builds all traces composed of events from the fixed-point. Note that while FP is finite, T′ is infinite. As the next step, the proof generation module proves several auxiliary theorems using a set of specialized tactics we designed for this purpose. Each proved auxiliary theorem can be used as a proof rule in subsequent proofs. We thus use Isabelle as a framework for constructing a formal tool in a logically sound way (see also [31]). The first auxiliary theorem is that T′ does not contain any attacks. The theorem is proved by unfolding the definition of the fixed-point and applying Isabelle's simplifier. The main part of the proof generation is an auxiliary theorem for each rule r ∈ RM that T′ is closed under r. In a nutshell, these theorems are also shown by unfolding the definition of T′ and applying the simplifier. In this case, however, on a more technical level, we need to convert the set comprehensions of the definition into a predicate notation so that the simplifier can recognize the necessary proof steps. This is actually also the point where the labels that annotate the abstraction silently fulfill their purpose: the rules are closed under any concretization of the abstract data with the elements from the equivalence class they represent. By our construction, the proof generation does not need to take care of the abstraction at all. Using the theorems that T′ is closed under all rules, we can now show the last auxiliary theorem, namely that T ⊆ T′, i.e., that OFMC indeed computed an over-approximation of what can happen according to the reference model. This theorem is proved by induction, using the induction scheme we have automatically obtained from the definition of T above, i.e. we show that the subset relation is preserved for each rule of RM, using the set of the auxiliary theorems.
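A small OCaml sketch of the concretization relation used above: an abstract atom is identified with its label and concretizes to every concrete atom carrying that label, and composed messages concretize componentwise. The infinite sets ⌈l⌉ are represented as membership predicates, the msg type is reused from the earlier sketch, and variables are left out; all names are illustrative.

let same_label (a_abs : atom) (a : atom) : bool =
  match a_abs, a with
  | Agent  (l, _), Agent  (l', _)
  | Nonce  (l, _), Nonce  (l', _)
  | SymKey (l, _), SymKey (l', _)
  | PubKey (l, _), PubKey (l', _) -> l = l'
  | _ -> false

(* Is the concrete message m an element of the concretization of m_abs? *)
let rec in_gamma (m_abs : msg) (m : msg) : bool =
  match m_abs, m with
  | Atom a_abs, Atom a -> same_label a_abs a
  | Crypt (k1, p1), Crypt (k2, p2)
  | Scrypt (k1, p1), Scrypt (k2, p2) -> in_gamma k1 k2 && in_gamma p1 p2
  | Inv k1, Inv k2 -> in_gamma k1 k2
  | Cat ms1, Cat ms2 ->
      List.length ms1 = List.length ms2 && List.for_all2 in_gamma ms1 ms2
  | Fn (f1, ms1), Fn (f2, ms2) ->
      f1 = f2 && List.length ms1 = List.length ms2
      && List.for_all2 in_gamma ms1 ms2
  | _ -> false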
Table 1. Analyzing security protocols using Isabelle/OFMC

Protocol                  FP    time [s]    Protocol               FP    time [s]
ISO 1-pass (sk)           40                ISO 2-pass mutual
ISO 2-pass (sk)           56                NSCK
NSL                       75                TLS (simplified)
DenningSacco              76                ISO 2-pass (pk)
ISO 1-pass (pk)           82                ISO 3-pass mutual
Bilateral Key Exchange    87                ISO 2-pass mutual
Andrew Secure RPC        104                ISO 2-pass mutual
DenningSacco             117                ISO 1-pass (ccf)
ISO 2-pass (ccf)         124                ISO 3-pass mutual
NSL (w. key server)      127
Finally, we derive our main theorem that T contains no attack, which immediately follows from T ⊆ T′ and T′ containing no attack. We have thus automatically derived the proof that the protocol is safe from the given reference model description and OFMC's output.
6 Experimental Results
For first experiments, we have considered several protocols from the Clark-Jacob library [11] and a simplified version of TLS. Tab. 1 shows the results in detail, namely the size of the fixed-point and the time to generate and check the Isabelle proof. Here, we have considered for each protocol all those secrecy and authentication goals that do actually hold (we do not report on the well-known attacks on some of these protocols that can be detected with OFMC). The runtime for generating the fixed-point in OFMC is negligible (< 10 s for each example), while the runtime for Isabelle/OFMC is significantly larger. We hope to improve on the proof generator performance by fine-tuning and specializing the low-level proof tactics where we currently use generic ones of Isabelle. We suggest, however, that the proof generation time does not affect the experimentation with OFMC such as testing different designs and variants of a newly designed protocol, because the proof generation is meant only as a final step when the protocol design has been fixed and verified with OFMC.
7 Related and Future Work
There is a large number of automated tools for protocol verification. [4,6,7,10] in particular are close to the method that is implemented in the new fixed-point module of OFMC: they are all based on an over-approximation of the search space as described in Sect. 4: the first over-approximation concerns the fresh data, following the abstract interpretation approach of [14] and the second concerns the control structure, i.e., considering a set of reachable events rather than
traces. We propose that the other over-approximation-based tools can similarly be connected to Isabelle as we did it for OFMC. The work most closely related to ours is a recent paper by Goubault-Larrecq who similarly considers generating proofs for an interactive theorem prover from the output of automated tools [18]. He considers a setting where the protocol and goal are given as a set S of Horn clauses; the tool output is a set S∞ of Horn clauses that are in some sense saturated and such that the protocol has an attack iff a contradiction is derivable. He briefly discusses two principal approaches to the task of generating a proof from S∞. First, showing that the notion of saturation implies consistency of the formula and that the formula is indeed saturated. Second, showing the consistency by finding a finite model. He suggests that the first approach is unlikely to give a practically feasible procedure, and rather follows the second approach using tools for finding models of a formula. In contrast, our work, which is closer to the first kind of approach, shows that this proof generation procedure does indeed work in practice for many protocols, comparable to the results of [18]. We see the main benefit of our approach in the fact that we can indeed use the output of established verification tools dedicated to the domain of security protocols. However, note that the work of [18] and ours have some major differences which make the results hard to compare. First, we consider a reference model where the protocol is modeled as a set of traces; the generated proofs are with respect to this reference model and all abstractions are merely part of the automatic tools. In contrast, [18] considers only one protocol model based on Horn clauses, close to the abstract model in our paper. Taking the soundness of all these abstractions for granted is a weakness of [18]. However, also our approach takes some things for granted, namely a strictly typed model and, based on this, specialized composition rules for the intruder. The typing is common in protocol verification and can be justified by a reasonable protocol implementation discipline [19]. The second assumption is justified by Theorem 1. Our next steps are concerned with lifting these two assumptions from our reference model, i.e. allowing for an untyped reference model with unbounded intruder composition. First experiments suggest that at least the unbounded intruder composition can be feasibly integrated into the proof generation procedure. Similarly, [32] performs static analysis of security protocols using the tool Rewrite which can generate proofs for the theorem prover Coq. As in the case of [18], the resulting proof is with respect to an over-approximated model only and takes the soundness of all abstractions for granted. A completely different approach to achieve the same goal is currently followed by Meier. Based on [21], he considers an embedding of the Scyther tool [15] into Isabelle in order to generate proofs automatically when they fall into the scope of the Scyther method. The advantage here is that one does not rely on a typed model, while at the time of this writing the proof generation is not in all cases completely automated. Several automated verification tools are based on, or related to, automated theorem provers, e.g. SATMC [2] generates Boolean formulae that are fed into a SAT-solver, and ProVerif [4] can generate formulae for the first-order theorem
prover SPASS [30]. While there is some similarity with our approach, namely connecting to other tools including the subtle modeling issues, this goes in a different direction. In fact, these approaches additionally rely on both the correctness of the translation to formulae, and the correctness of the automated theorem prover that proves them. In contrast, we generate the proof ourselves and let Isabelle check that proof. Several papers such as [9,5,23,13] have studied the relationships between protocol models and the soundness of certain abstractions and simplifications in particular. For instance, [13] shows that for a large class of protocols, two agents (an honest and a dishonest one) are sufficient. Recall that we have used this as a standard abstraction of honest agents. While such arguments have thus played an important role in the design of the automated tool and its connection to the reference model, the correctness of our approach does not rely on such arguments and the question whether a given protocol indeed satisfies the assumptions. Rather, it is part of the Isabelle proof we automatically construct that the abstract model indeed covers everything that can happen in the reference model. The automated verifier may thus try out whatever abstraction it wants, even if it is not sound. Once again, in the worst case, the proof in Isabelle simply fails if the abstraction is indeed unsound for the given protocol, e.g., when some separation-of-duty constraints invalidate the assumptions of the two-agents abstraction. This was indeed one of the main motivations of our work: our system does not rely on the subtle assumptions and tricks of automated verification. This also allows for a heuristic technique that extends the classical abstraction refinement approaches such as [12]. There, the idea is to start with the simplest abstraction, and when the automatic verification fails, to refine the abstraction based on the counter-example obtained. This accounts for the effect that the abstraction may lead to incompleteness (i.e., failure to verify a correct system), but it is essential that one uses only sound abstractions (i.e., if the abstract model is flawless then so is the concrete model). With our approach, we are now even able to try out potentially unsound abstractions in a heuristic way, i.e. start with abstractions that usually work (like the two-agent abstraction). If they are unsound, i.e. the Isabelle proof generation fails, then we repeat the verification with a more refined abstraction. Isabelle has been successfully used for interactive verification in various areas, including protocol verification [26,3]. These works are based on a protocol model that is quite close to our reference model (see Sect. 2). There are several works on increasing the degree of automation in interactive theorem provers by integrating external automated tools [17,29,16,27,22]. These works have in common that they integrate generic tools like SAT or SMT solvers. In contrast, we integrate a domain-specific tool, OFMC, into Isabelle. As further future work, we plan the development of additional Isabelle tactics improving the performance of the verification in Isabelle. Currently, the main bottleneck is a proof step in which a large number of existentially quantified variables need to be instantiated with witnesses. While, in general, such satisfying candidates cannot be found efficiently, we plan to provide domain-specific
tactics that should be able to infer witnesses based on domain-specific knowledge about the protocol model. Finally, we plan to eliminate the limitation on intruder composition from the model explained in Sect. 3. While this limitation can be reasonably justified in many cases, the fact that our approach relies on it is a drawback, both in terms of efficiency and also theoretically. In fact, tools like ProVerif instead employ a more advanced approach of rule saturation that allows them to work without this limitation. We plan to extend our approach to such representations of the fixed-point. Acknowledgments. The work presented in this paper was partially supported by the FP7-ICT-2007-1 Project no. 216471, “AVANTSSAR: Automated Validation of Trust and Security of Service-oriented Architectures” (www.avantssar.eu). We thank Luca Viganò for helpful comments.
References
1. Armando, A., Basin, D., Boichut, Y., Chevalier, Y., Compagna, L., Cuellar, J., Hankes Drielsma, P., Héam, P.C., Mantovani, J., Mödersheim, S., von Oheimb, D., Rusinowitch, M., Santiago, J., Turuani, M., Viganò, L., Vigneron, L.: The AVISPA Tool for the Automated Validation of Internet Security Protocols and Applications. In: Etessami, K., Rajamani, S.K. (eds.) CAV 2005. LNCS, vol. 3576, pp. 281–285. Springer, Heidelberg (2005), http://www.avispa-project.org
2. Armando, A., Compagna, L.: SAT-based Model-Checking for Security Protocols Analysis. Int. J. of Information Security 6(1), 3–32 (2007)
3. Bella, G.: Formal Correctness of Security Protocols. Springer, Heidelberg (2007)
4. Blanchet, B.: An efficient cryptographic protocol verifier based on prolog rules. In: CSFW 2001, pp. 82–96. IEEE Computer Society Press, Los Alamitos (2001)
5. Blanchet, B.: Security protocols: from linear to classical logic by abstract interpretation. Information Processing Letters 95(5), 473–479 (2005)
6. Boichut, Y., Héam, P.C., Kouchnarenko, O., Oehl, F.: Improvements on the Genet and Klay technique to automatically verify security protocols. In: AVIS 2004, pp. 1–11 (2004)
7. Bozga, L., Lakhnech, Y., Perin, M.: Pattern-based abstraction for verifying secrecy in protocols. Int. J. on Software Tools for Technology Transfer 8(1), 57–76 (2006)
8. Brucker, A., Mödersheim, S.: Integrating Automated and Interactive Protocol Verification (extended version). Tech. Rep. RZ3750, IBM Zurich Research Lab (2009), http://domino.research.ibm.com/library/cyberdig.nsf
9. Cervesato, I., Durgin, N., Lincoln, P.D., Mitchell, J.C., Scedrov, A.: A Comparison between Strand Spaces and Multiset Rewriting for Security Protocol Analysis. In: Okada, M., Pierce, B.C., Scedrov, A., Tokuda, H., Yonezawa, A. (eds.) ISSS 2002. LNCS, vol. 2609, pp. 356–383. Springer, Heidelberg (2003)
10. Chevalier, Y., Vigneron, L.: Automated Unbounded Verification of Security Protocols. In: Brinksma, E., Larsen, K.G. (eds.) CAV 2002. LNCS, vol. 2404, pp. 324–337. Springer, Heidelberg (2002)
11. Clark, J., Jacob, J.: A survey of authentication protocol literature: Version 1.0 (1997), http://www.cs.york.ac.uk/~jac/papers/drareview.ps.gz
12. Clarke, E., Fehnker, A., Han, Z., Krogh, B., Ouaknine, J., Stursberg, O., Theobald, M.: Abstraction and counterexample-guided refinement in model checking of hybrid systems. Int. J. of Foundations of Computer Science 14(4), 583–604 (2003)
13. Comon-Lundh, H., Cortier, V.: Security properties: two agents are sufficient. In: Degano, P. (ed.) ESOP 2003. LNCS, vol. 2618, pp. 99–113. Springer, Heidelberg (2003)
14. Cousot, P.: Abstract interpretation. Symposium on Models of Programming Languages and Computation, ACM Computing Surveys 28(2), 324–328 (1996)
15. Cremers, C.: Scyther. Semantics and Verification of Security Protocols. PhD thesis, University Eindhoven (2006)
16. Erkök, L., Matthews, J.: Using Yices as an automated solver in Isabelle/HOL. In: AFM 2008 (2008)
17. Fontaine, P., Marion, J.Y., Merz, S., Nieto, L.P., Tiu, A.F.: Expressiveness + automation + soundness: Towards combining SMT solvers and interactive proof assistants. In: Hermanns, H., Palsberg, J. (eds.) TACAS 2006. LNCS, vol. 3920, pp. 167–181. Springer, Heidelberg (2006)
18. Goubault-Larrecq, J.: Towards producing formally checkable security proofs, automatically. In: CSF 2008, pp. 224–238. IEEE Computer Society, Los Alamitos (2008)
19. Heather, J., Lowe, G., Schneider, S.: How to prevent type flaw attacks on security protocols. In: CSFW 2000. IEEE Computer Society Press, Los Alamitos (2000)
20. Lowe, G.: Breaking and fixing the Needham-Schroeder public-key protocol using FDR. In: Margaria, T., Steffen, B. (eds.) TACAS 1996. LNCS, vol. 1055, pp. 147–166. Springer, Heidelberg (1996)
21. Meier, S.: A formalization of an operational semantics of security protocols. Diploma thesis, ETH Zurich (2007), http://people.inf.ethz.ch/meiersi/fossp
22. Meng, J., Quigley, C., Paulson, L.C.: Automation for interactive proof: First prototype. Information and Computation 204(10), 1575–1596 (2006)
23. Mödersheim, S.: On the Relationships between Models in Protocol Verification. J. of Information and Computation 206(2–4), 291–311 (2008)
24. Mödersheim, S., Viganò, L.: The open-source fixed-point model checker for symbolic analysis of security protocols. In: Aldini, A., Barthe, G., Gorrieri, R. (eds.) FOSAD 2007. LNCS, vol. 4677, pp. 166–194. Springer, Heidelberg (2007)
25. Nipkow, T., Paulson, L.C., Wenzel, M.: Isabelle/HOL: A Proof Assistant for Higher-Order Logic. LNCS, vol. 2283. Springer, Heidelberg (2002)
26. Paulson, L.C.: The inductive approach to verifying cryptographic protocols. J. of Computer Security 6(1-2), 85–128 (1998)
27. Paulson, L.C., Susanto, K.W.: Source-level proof reconstruction for interactive theorem proving. In: Schneider, K., Brandt, J. (eds.) TPHOLs 2007. LNCS, vol. 4732, pp. 232–245. Springer, Heidelberg (2007)
28. Roscoe, A.W., Goldsmith, M.: The perfect spy for model-checking crypto-protocols. In: DIMACS (1997)
29. Weber, T., Amjad, H.: Efficiently checking propositional refutations in HOL theorem provers. J. of Applied Logic 7(1), 26–40 (2009)
30. Weidenbach, C., Schmidt, R.A., Hillenbrand, T., Rusev, R., Topic, D.: System description: Spass version 3.0. In: Pfenning, F. (ed.) CADE 2007. LNCS (LNAI), vol. 4603, pp. 514–520. Springer, Heidelberg (2007)
31. Wenzel, M., Wolff, B.: Building formal method tools in the Isabelle/Isar framework. In: Schneider, K., Brandt, J. (eds.) TPHOLs 2007. LNCS, vol. 4732, pp. 352–367. Springer, Heidelberg (2007)
32. Zunino, R., Degano, P.: Handling exp, × (and Timestamps) in Protocol Analysis. In: Aceto, L., Ingólfsdóttir, A. (eds.) FOSSACS 2006. LNCS, vol. 3921, pp. 413–427. Springer, Heidelberg (2006)
A User Interface for a Game-Based Protocol Verification Tool
Peeter Laud and Ilja Tšahhirov
Abstract. We present a platform that allows a protocol researcher to specify the sequence of games from an initial protocol to a protocol where the security property under consideration can be shown to hold using “conventional” means. Our tool represents the protocol in the form of a program dependency graph. A step in the sequence corresponds to replacing a local fragment in the current graph. The researcher interacts with the tool by pointing out the location of this fragment and choosing the applied transformation from a list. The tool guarantees the error-freeness of the sequence. To our knowledge, this is the first time that the aspects of user interaction have been seriously considered for a sequence-of-games-based protocol analyzer.
1 Introduction
The sequence-of-games-based approach is a method for giving security proofs for cryptographic protocols that is at the same time computationally sound and sufficiently organized for keeping track of all the details about the probabilities and conditional probabilities of various events. It is based on the fact that most cryptographic primitives have their security definitions stated as two experiments (or cryptographic games) that an adversary can interact with. A primitive is secure if the adversary cannot tell those two experiments apart. In this approach, the security proof of a cryptographic protocol (or a primitive) consists of two steps (which may take place simultaneously). The first step is the construction of a sequence of cryptographic games, the first of which is the original protocol and the last is a game that obviously fulfills the security property we want to prove (e.g. if the goal is the confidentiality of some value, the final game should contain no references to that value). The second step is the verification that to a resource-bounded adversary, each protocol in that sequence is indistinguishable from the one that immediately precedes it. To make such verification easy, the neighboring
This research has been supported by Estonian Science Foundation, grant #6944, by the European Regional Development Fund through the Estonian Center of Excellence in Computer Science, EXCS, and by EU Integrated Project AEOLUS (contract no. IST-15964).
protocols in that sequence should syntactically differ only a little. For example, a protocol in that sequence may have been obtained from the previous one by locating one of the experiments from the definition of a cryptographic primitive in the code of this protocol, and replacing that part of the code with the code of the other experiment. Alternatively, the change from one protocol to the next could be a simple program transformation/optimization (e.g. copy propagation), done in order to make locating one of the aforementioned experiments easier. A protocol researcher needs tool support for both steps of the proof done in the style of sequences of games. As a protocol in the sequence is constructed by applying a rather small change to the previous protocol, it makes sense to constrain the researcher in constructing the next protocol, thereby avoiding transcription errors. The verification of the proof (the second step) is also better left to an automated theorem prover. The most recent results in this area mostly tackle the second problem — verifying the given sequence of games. Languages for cryptographic games have been proposed and certain program transformations have been proven (using proof assistants, such as Coq or Isabelle/HOL) to keep the games indistinguishable to an adversary [5,9]. In contrast, we consider the first problem in this paper. We present a tool that helps a protocol researcher to interactively construct that sequence. So far, similar tools (Blanchet's CryptoVerif [12,13] and the analyzer of Tšahhirov and Laud [37]) have worked almost fully automatically. An automatic generation of a game sequence is convenient, but not necessarily scalable. There are no guarantees that the set of transformations that the analyzer applies is convergent. Hence the analyzer may get stuck in a game that is not yet obviously secure but also cannot be transformed any longer, while a different order of transformations could have led to a complete sequence. One may try to come up with heuristics for choosing the order of transformations, but this approach is certainly not complete and may not be worth the effort. Instead, one should rely on the knowledge of the protocol designer — he/she should have some idea why the protocol is secure, and be able to guide the analyzer. Our tool is an extension of our protocol analyzer [37,36]. The protocol is presented to the protocol researcher in a form (a dependency graph) that we believe is relatively easy to comprehend and where, importantly, the location where one wishes to apply a certain transformation can be easily indicated. The researcher starts with the initial protocol representation (translated from a language similar to applied π-calculus) and applies one transformation after another until the protocol is easy to analyze. The tool makes sure that the researcher will not make invalid transformations. The form in which the protocols are represented has a well-defined semantics, hence it should not be too difficult to combine our tool with some of the verifiers of game sequences we have mentioned above to create a complete tool-chain for producing computationally sound proofs of protocols.
2 Related Work
The task of tool-supported computationally sound proving of security properties of cryptographic protocols has received closer attention for almost a decade
now. Starting from Abadi and Rogaway [4], a line of work [3,29,16,22,15] has attempted to show that the security of a protocol in the formal model implies its security in the computational model, thereby leveraging the body of work on protocol analysis in the formal model. In parallel to that, program analyses have been devised that are correct with respect to the computational security definitions of cryptographic primitives [39,23,24,27,33,20]. A somewhat similar line of work tries to axiomatize the computational semantics of protocols [28,17,18]. A somewhat different “formal model” with full computational justification was offered by Backes et al. [7] in the form of a universally composable cryptographic library. Various methods of protocol analysis (in the formal model) have been successfully carried over to this model, including type systems [26,1], abstract interpretation [6] and theorem-proving [34,35]. The consideration of the sequence of code transformations as a universally and automatically applicable method for protocol analysis first appeared in [25]. The method was generally popularized by Bellare and Rogaway [11] as the game-based method. It was quickly recognized as allowing automated or computer-assisted analysis of protocols [32]. By now, the underlying principles of the method have been formalized, also in proof assistants [14,30,5,9], and automatic analyzers have appeared [12,37]. Active research is going on in this area.
3 Game-Based Protocol Analysis
A cryptographic game is the interaction between the adversary and its environment containing the protocol we want to analyze. A game is specified by describing the operations that the environment performs and values it makes available to, or receives from the adversary. The adversary’s goal is to bring the game to a state that is considered as winning for it. For example, the adversary may win a game if it correctly guesses a bit generated by the environment. To formally argue about a game, and to locate a game (or an experiment) in a larger game, it has to be expressed in a programming language with formal semantics. In the sequence-of-games-based protocol analysis, the initial game is transformed to a final game that is obviously secure. Each transformation step changes the game in a way that makes the adversary’s winning probability larger or only negligibly smaller. In the latter case we have assumed that the adversary’s running time is constrained to be polynomial; in the following we only consider probabilistic polynomial-time (PPT) adversaries. The obviousness of the security of the final game just means that it is easy to analyze and bound the adversary’s probability of winning by using some conventional means. For example, if the adversary’s goal is to guess a randomly generated bit, and the final game makes no references to that bit, then the adversary’s winning probability is definitely no more than 1/2.
4 Protocol Representation
We use dependency graphs as our intermediate representation of protocols [37,36]. They have advantages with respect to abstract syntax trees / control flow graphs
(used by CryptoVerif) in naturally allowing certain transformations one would like to invoke after applying a cryptographic transformation. Also, the dependency graph emphasizes the producers and consumers of different data items and hence appears to be a natural way to specify cryptographic games (despite the tendency to use imperative languages for that purpose in cryptographic literature). The dependency graph is a directed graph, where each node corresponds to a computation, producing a value (either a bit-string or a Boolean). The edges of the graph indicate which nodes use values produced at other nodes. A computation happening at a node could be the execution of a cryptographic algorithm, an arithmetic or a boolean operation. The values produced are either bit strings or boolean values. The values produced outside of the graph (for example, random coin tosses, incoming messages, secret payloads) are brought into it via special nodes, having no incoming edges. Additionally, certain nodes (modeling the sending of messages) explicitly make their input values available to the adversary. Program dependency graphs have originated as a program analysis and optimization tool [19], systematically recording the computational relationships between different parts of a program. Since then, several flavors of dependency graphs have been proposed, some of them admitting a formal semantics [8,31], thus being suitable as intermediate program representations in a compiler. Programs represented as dependency graphs are amenable to aggressive optimizations as all program transformations we may want to apply are incremental on dependency graphs. The translation from an optimized dependency graph back to a sequence of instructions executable on an actual processor may be tricky as the optimizations may have introduced patterns that are not easily serializable. This is not an issue for us because we do not have to translate the optimized / simplified / analyzed protocol back to a more conventional form. The formal definition and semantics of dependency graphs (DGs) can be found in [36]. Informally, a DG is a directed, possibly infinite graph where each node v contains an operation λ(v) and edges carry the values produced by their source node to be used in the computation at the target node. The nodes have input ports to distinguish the roles of incoming values. For each port of each node, there is exactly one incoming edge. The “normal” nodes of a DG are functional — the same inputs cause them to produce the same output. Special nodes are used for inputs from and outputs to the outside world. To represent scheduling information, most computational nodes of a DG have a special boolean input — the control dependency. A node can execute only if the value of its control dependency is true (initially, the value of all nodes is either ⊥ (for nodes producing bit-strings) or false). During the execution of the dependency graph, the adversary can set the values of certain Boolean-valued input-nodes labeled Req to true and thereby initiate the execution of (certain parts of) the DG. The execution of a DG proceeds in alternation with the adversary. First the adversary sets some Req-nodes and/or the values of some Receive-nodes (these
nodes bring bit-string inputs to the DG). The setting of these nodes causes certain nodes of the DG to compute their values. If a value reaches some Send-node then this value is reported back to the adversary. The adversary can then again set some Req- and Receive-nodes and the process repeats, until the adversary decides to stop. The adversary then tries to output something related to secret values in the environment, made available to the DG through Secret-nodes. Two dependency graphs G1 and G2 with the same set of input/output nodes (labeled Req, Receive or Send) are indistinguishable if for all PPT adversaries A, the output of A running in parallel with G1 is indistinguishable from its output if it runs in parallel with G2. A dependency graph is polynomial if at any time the number of its nodes with values different from ⊥ or false is polynomial in the number of its Receive- and Req-nodes that the adversary has set. A game transformation is given by two dependency graph fragments (DGFs). A DGF is basically a DG without the input/output nodes of a regular DG, but having some input/output nodes of its own (in principle: edges with one end inside and the other end outside of the DGF), for both Booleans and bit-strings. A DGF can be executed by the adversary, similarly to a regular DG. Again, the adversary can (iteratively) set the inputs to the DGF and learn the outputs. The indistinguishability and polynomiality for DGFs is defined in the same way as for DGs. Definition. An occurrence of a DGF H in a DG G is a mapping ϕ from the input and internal nodes of H to the nodes of G, such that
– if v and w are input or internal nodes of H, then there is an edge from v to the port π of w iff there is an edge from ϕ(v) to the port π of ϕ(w);
– if there is an edge from ϕ(v) to some node u in G, such that u is not the image of some internal node of H under ϕ, then there must be an edge from v to an output node in H.
If H and H′ have the same inputs and outputs, and ϕ is an occurrence of H in G, then we can replace this occurrence by H′ by removing from G all nodes ϕ(v), where v is an internal node of H, and introducing the internal nodes and edges of H′ in their stead. Theorem 1. If polynomial DGFs H and H′ are indistinguishable, and DG G′ is obtained from polynomial G by replacing an occurrence of H with H′, then G and G′ are indistinguishable and G′ is polynomial, too [36]. Each node of the DG corresponds to a single operation that the system may perform. To model that some role of some protocol may be executed up to n times, we have to analyze a DG containing n copies of that role. To model that some role of some protocol may be executed an unbounded number of times, or that a party can take part in an unbounded number of protocol sessions, requires infinite dependency graphs. In infinite dependency graphs, the set of nodes is countably infinite. Also, certain nodes (conjunction and disjunction) may have a countable number of predecessors. A dependency graph fragment can similarly be infinite.
As the infiniteness of a dependency graph is typically caused by the infinite repetition of certain finite constructs, the graph is regular enough to be finitely represented. Details can be found in [36]. Here we mention only that we are actually working with dependency graph representations (DGRs) where each node may represent either a single node, or countably many nodes (identified with the elements of NX for a certain finite set X, recorded in the DGR node) in the actual DG. Similarly, DGFs generalize to DGFRs — dependency graph fragment representations. On the choice of protocol representation. We believe that the representation based on dependency graphs will be more convenient to use than the one based on abstract syntax trees as used by CryptoVerif. There are several reasons for that. First, the enabling conditions for transformations are more often locally represented in dependency graphs. Hence they should be easier for the protocol researcher to notice (but importantly, the visualizer also has to make it easy to locate interesting vertices and to explore their neighborhoods). Second, pointing at the to-be-transformed part of the protocol is very simple using a graph representation, while it may require doing a complex selection in a textual representation. Third and most importantly, all information is easily available in a dependency graph of the protocol, possibly annotated with nodes carrying the results of its static analysis. CryptoVerif contains not just the language for representing protocols, but also a language for the true facts and rewrite rules it has collected for a protocol [13, App. C.2–C.5]. The user cannot control which facts are derived. In an interactive tool, these facts might be added to the textual representation of the program as annotations, but there does not necessarily exist an obvious location in the text for them. Fourth, the graphical representation allows certain natural transformations for which there is no equivalent in the syntax-tree-based representation.
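To give a concrete feel for the data structure the tool manipulates, here is a rough OCaml sketch of a dependency-graph node; the actual DGR structures of the analyzer are richer (ports, dimension maps, summary nodes), and all names below are illustrative assumptions.

type value = Bottom | Bits of string | Bool of bool

type operation =
  | Receive | Send | Req | RS               (* interface and coin-toss nodes *)
  | SymEnc | SymDec | Tuple | Proj of int   (* a few computational nodes     *)
  | BAnd | BOr | Mux

type node = {
  id        : int;
  op        : operation;
  args      : int list;      (* ids of the nodes feeding the input ports *)
  ctrl      : int option;    (* the boolean control-dependency input     *)
  mutable v : value;         (* current value, initially Bottom or false *)
}

(* A (finite) dependency graph: nodes indexed by their id. *)
type dg = (int, node) Hashtbl.t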
5 The Tool
Our tool takes as input a protocol specified in a language reminiscent of the applied π-calculus [2], translates it into a dependency graph representation, presents it on the screen and allows the researcher to pick a particular transformation and the occurrence of the first DGFR. This occurrence is validated and then replaced by the second DGFR specified by the transformation; the result is again displayed and the researcher can choose the next transformation to apply. There is no obvious end-point to the analysis; at some moment the researcher can decide that the transformed protocol is now obviously secure. We use the graph visualizer uDraw(Graph) [21,38] as the front-end of our tool. It receives the commands to change the displayed graph from our tool, and sends back the actions of the user — the selected nodes and edges, as well as the names of the chosen transformations. The visualizer allows the user to explore the graph and to change its layout. Fig. 1 shows a screenshot of the visualizer after loading a protocol that has just been translated from the π-calculus-like language to a
Fig. 1. A screenshot of the visualizer with a loaded DGR
DGR. We see that the translation procedure itself is straightforward and does not attempt to optimize the DGR. In the visualizer, the first row of a node shows its ID and label, and the second row shows the elements of the (multi)set X of its replication dimensions. This node of the DGR corresponds to NX nodes in the actual DG. The tool has been implemented in OCaml. The components of the tool are its main loop (driving the interaction), the graph transformer and the various transformations. A transformation is specified as an OCaml module describing the initial and the final DGFR. Additionally, it contains a method for helping the user to choose the occurrence of the initial DGFR in the DGR. Instead of selecting all nodes and edges comprising a DGFR, as well as specifying the embedding of the DGFR in the DGR, the user has to select only a couple of fixed nodes/edges of the DGFR and this method will reconstruct the whole DGFR. The graph transformer is an OCaml functor receiving a graph transformation as an input and returning a module containing a function that takes as arguments a DGR and the names of the selected nodes / edges, and returns the transformed DGR (or an error message). The main loop receives the node and edge selection commands from the visualizer, as well as the name of the transformation menu element that the user has selected. It calls the correct graph transformation function, finds the difference between the original and the modified DGR, and sends that difference back to the visualizer.
5.1 Specifying a Family of Pairs of DGFRs
It makes sense to parameterize certain parts of DGFRs. For example, to express that tupling followed by projection just selects one of the inputs (modulo
control dependencies and ⊥-s): πi((x1, . . . , xn)) → xi, then there should not be a separate transformation for each n and i, but those should be the parameters of the transformation. The DGFR pairs are specified as OCaml modules with a certain signature. In effect, our approach can be described as a shallow embedding of DGFRs into OCaml. A module conforming to that signature has to first define an OCaml data type for variable names. The variable names are used as a part in the type for variables; the variables map to the nodes and edges of DGFRs, but also to other values. Next, the module has to declare a mapping from variable names to their types. There are 5+1 possible types for a variable or a variable name — it can denote either an integer, a node, an edge, a set of dimensions, a map of dimensions (attached to edges going from one summary node in the DGR to another one; describing how the edges of the DGR must be mapped to edges in the DG), or an array where all elements have the same type. The type “node” has three subtypes — a node can be either an input node, an internal node or an output node. Given the datatype X of variable names, the variables of type V are defined either as scalars of type X or elements (v, i), where v ∈ V and i is a natural number. The module then has to define a number of functions that map the variables to their values, in effect describing a DGFR. The following functions have to be defined:
– a map from variables of type “array” to their length;
– maps from variables of type “edge” to their source and target (variables of type “node”), dimension map (variables of type “dimension map”) and the input port at the target node;
– maps from the variables of types “integer”, “dimension”, and “dimension map”, giving their actual values;
– maps from variables of type “node” giving their label and their dimension (and also the input dimension if the node is a contracting node).
Importantly, all those functions can call each other if necessary. Circular dependencies will be detected. The module has to specify two lists of variable names. The elements of these lists correspond to the nodes and edges in the initial and final DGFR, respectively. During the transformation, the nodes and edges only in the first list will be removed and those only in the second list will be added. In the code of the functions that the module has to define, it is possible to ask for the actual parameters of the nodes and edges that are the values of the variables with the names in the list defining the initial DGFR. One can ask for the actual labels and dimensions of the variables of type “node”, and dimension maps of the variables of type “edge”. The module has to define a validation function that tells whether the values of initial variables (variables whose names are in the first list) define a valid DGFR. The function can assume that the nodes and edges are connected in the way given by the functions we described before, but it still has to verify that the nodes have the correct labels. If the transformation depends on it, this function
also has to verify that the dimensions and dimension maps of nodes and edges are suitable. We see that our definition of DGFRs abstracts away from the actual DGR. Indeed, both the validation of the initial DGFR and the construction of the final DGFR are made in terms of DGFR variables. Only the expansion function that the module also has to define has access to the actual DGR. The task of this function is to assign values to the variables whose names are in the list of variable names for the initial DGFR. For variables of type “array”, it also has to define the length of the array. The inputs to this function are the DGR, and the identities of certain nodes and edges of the DGR, which the expansion function will treat as the values of certain fixed DGFR variables.
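A rough OCaml sketch of what such a transformation module signature could look like, following the description above; the helper module Dgr and all type and value names are placeholders assumed for illustration, not the tool's actual interface.

(* Placeholders standing in for the tool's actual graph types. *)
module Dgr = struct
  type element = Node of int | Edge of int
  type t = unit          (* stands for the dependency graph representation *)
end

module type TRANSFORMATION = sig
  type var_name                                       (* datatype of variable names *)
  type var = Scalar of var_name | Elem of var * int   (* scalars or indexed elements *)

  type var_type =
    | TInt | TNode | TEdge | TDimSet | TDimMap | TArray of var_type

  val type_of      : var_name -> var_type

  (* functions describing the two DGFRs in terms of variables *)
  val array_length : var -> int
  val edge_source  : var -> var
  val edge_target  : var -> var
  val edge_port    : var -> int
  val node_label   : var -> string
  val node_dims    : var -> var

  val initial_vars : var_name list    (* nodes/edges that will be removed *)
  val final_vars   : var_name list    (* nodes/edges that will be added   *)

  (* checks that the matched nodes/edges really form the initial DGFR *)
  val validate : (var -> Dgr.element) -> bool

  (* from a few user-selected nodes/edges, reconstruct the whole occurrence *)
  val expand : Dgr.t -> Dgr.element list -> (var -> Dgr.element)
end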
5.2 The Graph Transformer
A graph transformer will be defined for each of the transformations specified as the pair of two DGFRs, but the definition is through an OCaml functor. Given the transformation module TrM, the transformer function will take a DGR and the node identities as arguments. First, it passes the arguments to the expansion function of TrM and receives a mapping ϕ from initial variables to nodes and edges of the graph. Second, it verifies that ϕ indeed constitutes an occurrence of a DGFR in the given DGR — the internal nodes must have all their predecessor and successor nodes also as elements of the DGFR. Third, it invokes the validation function of TrM on the received DGFR. Fourth, it performs the actual change of the DGFR — it deletes the nodes and edges that occur in the initial DGFR, but not in the final. Then it adds new nodes and edges (corresponding to the variable names that occur in the list for the final DGFR, but not in the list for the initial DGFR). It calls the functions of TrM to find the parameters of those nodes and edges. The graph transformer also makes sure that the values computed by the functions of TrM are memoized and that the computation of a certain function on a certain variable does not (possibly indirectly) invoke the same function on the same variable again. Additionally, the graph transformer verifies the outputs of the functions of TrM. For example, if the result of the function returning the dimension of a variable of type “node” is not a variable of type “dimension” then the transformation is immediately halted. Hence the typing of variables is enforced, albeit dynamically.
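A minimal sketch of such a functor, assuming the placeholder TRANSFORMATION signature and Dgr module from the previous sketch; the actual graph surgery, the memoization and the dynamic type checks described above are elided.

module MakeTransformer (TrM : TRANSFORMATION) = struct
  let apply (g : Dgr.t) (selection : Dgr.element list)
      : (Dgr.t, string) result =
    (* 1. expand the user's selection to a value for every initial variable *)
    let phi = TrM.expand g selection in
    (* 2. check that phi is an occurrence and that labels/dimensions fit *)
    if not (TrM.validate phi) then
      Error "not an occurrence of the initial DGFR"
    else begin
      (* 3. delete the nodes/edges only in the initial DGFR, then add those
            only in the final DGFR (graph surgery elided in this sketch) *)
      Ok g
    end
end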
5.3 Example Analysis
Let us consider a situation where A and B have a long-term shared key KAB, but whenever B wants to send a secret M to A, A first generates a short-term key k and sends it, encrypted under KAB, to B, who then uses k to protect M. In the conventional "arrow notation" this protocol can be specified as follows:

    A −→ B : {k}KAB
    B −→ A : {M}k
    A −→   : OK                                            (1)
The initial protocol (game), directly corresponding to (1), but simplified from the output of the translator, is depicted in Fig. 2. Here solid edges carry bit-strings and dashed edges booleans. The node 43 generates the key KAB. The nodes 60–179 represent a session of A; those nodes have the (multi)set of dimensions {A}, i.e. we are modeling an unbounded number of sessions. The first message is constructed by nodes 60 and 96 (the random coins for these operations are provided by the special RS-nodes 63 and 99) and sent away by node 118; the adversary can request it to be sent by setting (an instance of) node 119 to true. The second message is received in node 136, decrypted in node 142, and if it decrypts successfully (node 154) then the third message is sent in nodes 171 and 178. Similarly, nodes 196–245 represent a session of B — receiving the first message and constructing and sending the second one.

As node 43 is only used for encryption and decryption, we can apply a transformation corresponding to the IND-CCA- and INT-CTXT-security of symmetric encryption [10] to it. We select node 43 and choose "Replace a secret-key decryption" from the menu. The resulting graph is depicted in Fig. 3. The transformation introduced the nodes 674–680. At first, it replaced the encryption node 96 with node 676, labeled SymencZ. This operation encrypts a fixed bit-string ZERO, using the random coins and the key that are given to it. The string ZERO cannot be the output of any node in the DG. The SymencZ-node does not use the plaintext argument (60) of the original node. Still, it should not produce output if the original plaintext has not been computed; hence the test (node 674) of whether it has been computed is part of the control dependency of node 676. At the decryption side, the ciphertext 196 is compared to all computed encryptions of ZERO in node 677 (note its set of dimensions). If one of them matches, then the corresponding plaintext (node 60) is selected as the result of decryption by nodes 679 and 680 ("multiplexers").

The MUX-nodes have an arbitrary (finite) number of inputs; their output is the least upper bound of their inputs. I.e., if all inputs are ⊥ then the output is ⊥, and if exactly one input is different from ⊥ then the output is equal to that input. If more than one input is different from ⊥ (in this transformation, the number of inputs to MUX-nodes is equal to the number of SymEnc-operations), then the result of this operation is ⊤, denoting an inconsistency in the DG. An inconsistency means an immediate termination of the computation; this is visible to the adversary (i.e., if the transformations are correct, then this can happen only with negligible probability). Similarly, an lMUX is a "long MUX" — in a DG it is a node with an infinite number of pairs of inputs (a bit-string and a boolean). If no boolean input is true, it returns ⊥; if exactly one of the boolean inputs is true, then it returns the corresponding bit-string input. In a DGR, an lMUX is a contracting node — in our example, node 679 contracts the dimension A.

We would like to apply the symmetric encryption transformation also to node 60, but it has two uses forbidding that. We get rid of the use by node 674 by noting that SymKey succeeds if there are random coins incoming from RS and if the control dependency is true. The RS-node 63 always produces coins because its control dependency is true.
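As a small illustration of this MUX semantics, consider the following toy model over a flat lattice; it is an exposition aid only, not the analyser's code, and the names are chosen here for illustration.

(* Bot stands for a value that was not computed, Top for an inconsistency. *)
type 'a lifted = Bot | Val of 'a | Top

(* least upper bound in the flat lattice *)
let lub x y =
  match x, y with
  | Bot, z | z, Bot -> z
  | Top, _ | _, Top -> Top
  | Val a, Val b -> if a = b then Val a else Top

(* MUX: the least upper bound of all inputs *)
let mux inputs = List.fold_left lub Bot inputs

(* lMUX over (guard, value) pairs: Bot if no guard holds, the selected
   value if exactly one holds, Top if several do *)
let lmux pairs =
  mux (List.filter_map (fun (g, v) -> if g then Some (Val v) else None) pairs)

For instance, mux [Bot; Val "m"; Bot] evaluates to Val "m", while mux [Val "m1"; Val "m2"] evaluates to Top.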
Fig. 2. Initial protocol
Hence the value of node 674 equals the value of node 384, and we get rid of this use of node 60. The other use, by node 679, eventually ends up in the SymEnc-node 220. As MUX- and lMUX-nodes do not change the values passing through them, we can swap them with the operations following them. Hence we can move the SymDec-node first to the other side of node 680 and then of node 679, resulting in the graph depicted in Fig. 4. We see that the encryption node (with new ID 704) is now right next to the SymKey-node 60. The ability to perform such swaps of operations with multiplexers is one of the main advantages of dependency graphs.

It is instructive to consider how the second message {M}k (sent by node 244) is computed in this graph. In the b-th round of B, this round's secret message Mb is encrypted with the keys ka for all rounds a of A. The correct round a is then chosen by nodes 705 and 697, by comparing the first message received by B (node 196) in this round with the first messages sent by A (node 118, sending the result of node 676) in all rounds.

The next transformation steps should be obvious. After getting rid of node 674 (described above), we apply the symmetric encryption transformation to the key generation node 60. This will get rid of the use of the Secret-node by node 704.
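The soundness of such swaps can also be seen in the toy model above: an operation lifted strictly over the domain (⊥ and ⊤ pass through unchanged) commutes with a multiplexer whenever at most one multiplexer input is defined, which is the intended use of the MUX-nodes described above. A sketch, reusing the lifted type and mux from the toy model:

(* Strict lifting of an operation: Bot and Top pass through unchanged. *)
let lift f = function
  | Bot -> Bot
  | Top -> Top
  | Val x -> Val (f x)

(* With at most one defined input, the operation and the MUX commute:
   lift f (mux inputs) = mux (List.map (lift f) inputs). *)
let _commutes f inputs =
  lift f (mux inputs) = mux (List.map (lift f) inputs)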
Fig. 3. Applying encryption transformation to KAB
The Secret-node will then be used by the nodes replacing the decryption node 142, as it has to be returned as the plaintext. It will be an input to a multiplexer whose output is only used by node 154. The position of node 154 will be swapped with the multiplexers, making the OK?-node an immediate successor of the Secret-node. The use of a Secret-node by an OK?-node can be transformed away, and the Secret-node then has no other uses. We are left with a dead Secret-node that can be removed. Thus the confidentiality of the secrets is preserved.
Fig. 4. Moving Symenc over MUX-s
6 Conclusions and Future Work
The presented example was very simple. More complex examples require the introduction of more and different kinds of nodes, as well as an extension of the definition of the semantics of the dependency graph (while still keeping it mostly functional, i.e. without side effects) [36]. The extension serves to bring back arguments about the order of execution. We introduce nodes labeled Before, with two Boolean inputs u and v. The label of such a node changes to either true or false during the execution of the protocol, depending on the order in which u and v
become true. Such nodes require us to spend more effort on proving the transformations correct — it is not sufficient to prove that identical inputs to two graph fragments lead to identical (or indistinguishable) outputs; it is also necessary to consider increasing sequences of inputs. See [36] for details.

The work on the analyser continues — the planned extensions include additional arithmetic operations, cryptographic primitives (with different security properties) and equivalences, as well as different control structures. Adding loops as a control structure would allow the analyser to be applied to various cryptographic primitives constructed from simpler ones, e.g. to block cipher modes of operation. Expressing loops in dependency graphs has been a researched topic [31], but in our formalization, where we already have infinite dependency graphs, the unrolling of loops seems to be the easiest way to express them. A multiplexer will be used to select the value of a variable from the last (in the current run) iteration of the loop. In the dependency graph representation, a loop will introduce an extra dimension.

The analyser should also leave a trail of its work that can later be used as a proof that the initial and final protocols have indistinguishable semantics. The proof should be verifiable using a proof assistant, e.g. Coq. The proof would consist of the following parts:
– the statement and proof of Theorem 1, as well as the statement and proof of the transitivity of indistinguishability;
– for each of the defined transformations: the proof that the two DGFRs they specify are indistinguishable;
– for each transformation step: the proof that the first DGFR occurs in the graph before the transformation, and that the transformation replaces it with the second DGFR.
Only the last part has to be constructed separately for each of the analysed protocols.

While our analysis procedure starts by translating a protocol specified in an applied-π-calculus-like language into a DGR, the transformations we apply can change this DGR to a shape that cannot be naturally expressed in a typical process calculus, where parallel composition and replication define the structure of the concurrently executing threads. For example, in Fig. 4, the node 704 represents a computation done with values defined in different threads and thus can be placed naturally neither with the threads "A" nor with the threads "B". If one argues that presenting the intermediate protocols as processes in some process calculus makes them more comprehensible to researchers than the presentation as DGRs, then it makes sense to search for calculi that could naturally express such computations. CryptoVerif's find ... suchthat construction is an attempt in that direction, but it only allows matching, not arbitrary computations, and it is also highly asymmetric. We desire a calculus that would represent the computation of node 704 in a way that relates it equally with the threads "A" and the threads "B". At the same time, the calculus should still fix the order of execution inside a thread.
References

1. Abadi, M., Corin, R., Fournet, C.: Computational secrecy by typing for the pi calculus. In: Kobayashi, N. (ed.) APLAS 2006. LNCS, vol. 4279, pp. 253–269. Springer, Heidelberg (2006)
2. Abadi, M., Fournet, C.: Mobile values, new names, and secure communication. In: POPL 2001, pp. 104–115 (2001)
3. Abadi, M., Jürjens, J.: Formal eavesdropping and its computational interpretation. In: Kobayashi, N., Pierce, B.C. (eds.) TACS 2001. LNCS, vol. 2215, pp. 82–94. Springer, Heidelberg (2001)
4. Abadi, M., Rogaway, P.: Reconciling two views of cryptography (the computational soundness of formal encryption). J. Cryptology 15(2), 103–127 (2002)
5. Backes, M., Berg, M., Unruh, D.: A formal language for cryptographic pseudocode. In: Cervesato, I., Veith, H., Voronkov, A. (eds.) LPAR 2008. LNCS (LNAI), vol. 5330, pp. 353–376. Springer, Heidelberg (2008)
6. Backes, M., Laud, P.: Computationally sound secrecy proofs by mechanized flow analysis. In: ACM CCS 2006, pp. 370–379 (2006)
7. Backes, M., Pfitzmann, B., Waidner, M.: A composable cryptographic library with nested operations. In: ACM CCS 2003, pp. 220–230 (2003)
8. Ballance, R.A., Maccabe, A.B., Ottenstein, K.J.: The program dependence web: A representation supporting control, data, and demand-driven interpretation of imperative languages. In: PLDI 1990, pp. 257–271 (1990)
9. Barthe, G., Grégoire, B., Béguelin, S.Z.: Formal certification of code-based cryptographic proofs. In: POPL 2009, pp. 90–101 (2009)
10. Bellare, M., Namprempre, C.: Authenticated encryption: Relations among notions and analysis of the generic composition paradigm. In: Okamoto, T. (ed.) ASIACRYPT 2000. LNCS, vol. 1976, pp. 531–535. Springer, Heidelberg (2000)
11. Bellare, M., Rogaway, P.: The security of triple encryption and a framework for code-based game-playing proofs. In: Vaudenay, S. (ed.) EUROCRYPT 2006. LNCS, vol. 4004, pp. 409–426. Springer, Heidelberg (2006)
12. Blanchet, B.: A computationally sound mechanized prover for security protocols. In: IEEE S&P 2006, pp. 140–154 (2006)
13. Blanchet, B.: A Computationally Sound Mechanized Prover for Security Protocols. Cryptology ePrint Archive, Report 2005/401 (February 2, 2007)
14. Corin, R., den Hartog, J.: A probabilistic Hoare-style logic for game-based cryptographic proofs. In: Bugliesi, M., Preneel, B., Sassone, V., Wegener, I. (eds.) ICALP 2006. LNCS, vol. 4052, pp. 252–263. Springer, Heidelberg (2006)
15. Cortier, V., Kremer, S., Küsters, R., Warinschi, B.: Computationally sound symbolic secrecy in the presence of hash functions. In: Arun-Kumar, S., Garg, N. (eds.) FSTTCS 2006. LNCS, vol. 4337, pp. 176–187. Springer, Heidelberg (2006)
16. Cortier, V., Warinschi, B.: Computationally sound, automated proofs for security protocols. In: Sagiv, M. (ed.) ESOP 2005. LNCS, vol. 3444, pp. 157–171. Springer, Heidelberg (2005)
17. Datta, A., Derek, A., Mitchell, J.C., Shmatikov, V., Turuani, M.: Probabilistic polynomial-time semantics for a protocol security logic. In: Caires, L., Italiano, G.F., Monteiro, L., Palamidessi, C., Yung, M. (eds.) ICALP 2005. LNCS, vol. 3580, pp. 16–29. Springer, Heidelberg (2005)
18. Datta, A., Derek, A., Mitchell, J.C., Warinschi, B.: Computationally sound compositional logic for key exchange protocols. In: CSFW 2006, pp. 321–334 (2006)
19. Ferrante, J., Ottenstein, K.J., Warren, J.D.: The program dependence graph and its use in optimization. ACM Trans. Program. Lang. Syst. 9(3), 319–349 (1987)
20. Fournet, C., Rezk, T.: Cryptographically sound implementations for typed information-flow security. In: POPL 2008, pp. 323–335 (2008)
21. Fröhlich, M., Werner, M.: Demonstration of the interactive graph-visualization system daVinci. In: Tamassia, R., Tollis, I.G. (eds.) GD 1994. LNCS, vol. 894, pp. 266–269. Springer, Heidelberg (1995)
22. Janvier, R., Lakhnech, Y., Mazaré, L.: Completing the picture: Soundness of formal encryption in the presence of active adversaries. In: Sagiv, M. (ed.) ESOP 2005. LNCS, vol. 3444, pp. 172–185. Springer, Heidelberg (2005)
23. Laud, P.: Semantics and program analysis of computationally secure information flow. In: Sands, D. (ed.) ESOP 2001. LNCS, vol. 2028, pp. 77–91. Springer, Heidelberg (2001)
24. Laud, P.: Handling encryption in an analysis for secure information flow. In: Degano, P. (ed.) ESOP 2003. LNCS, vol. 2618, pp. 159–173. Springer, Heidelberg (2003)
25. Laud, P.: Symmetric encryption in automatic analyses for confidentiality against active adversaries. In: IEEE S&P 2004, pp. 71–85 (2004)
26. Laud, P.: Secrecy types for a simulatable cryptographic library. In: ACM CCS 2005, pp. 26–35 (2005)
27. Laud, P., Vene, V.: A type system for computationally secure information flow. In: Liśkiewicz, M., Reischuk, R. (eds.) FCT 2005. LNCS, vol. 3623, pp. 365–377. Springer, Heidelberg (2005)
28. Lincoln, P., Mitchell, J.C., Mitchell, M., Scedrov, A.: A probabilistic poly-time framework for protocol analysis. In: ACM CCS 1998, pp. 112–121 (1998)
29. Micciancio, D., Warinschi, B.: Soundness of formal encryption in the presence of active adversaries. In: Naor, M. (ed.) TCC 2004. LNCS, vol. 2951, pp. 133–151. Springer, Heidelberg (2004)
30. Nowak, D.: A framework for game-based security proofs. In: Qing, S., Imai, H., Wang, G. (eds.) ICICS 2007. LNCS, vol. 4861, pp. 319–333. Springer, Heidelberg (2007)
31. Pingali, K., Beck, M., Johnson, R., Moudgill, M., Stodghill, P.: Dependence flow graphs: An algebraic approach to program dependencies. In: POPL 1991, pp. 67–78 (1991)
32. Shoup, V.: Sequences of games: a tool for taming complexity in security proofs. Cryptology ePrint Archive, Report 2004/332 (2004), http://eprint.iacr.org/
33. Smith, G.: Secure information flow with random assignment and encryption. In: FMSE 2006, pp. 33–44 (2006)
34. Sprenger, C., Backes, M., Basin, D.A., Pfitzmann, B., Waidner, M.: Cryptographically sound theorem proving. In: CSFW 2006, pp. 153–166 (2006)
35. Sprenger, C., Basin, D.A.: Cryptographically-sound protocol-model abstractions. In: CSF 2008, pp. 115–129 (2008)
36. Tšahhirov, I.: Security Protocols Analysis in the Computational Model — Dependency Flow Graphs-Based Approach. PhD thesis, Tallinn University of Technology (2008)
37. Tšahhirov, I., Laud, P.: Application of dependency graphs to security protocol analysis. In: Barthe, G., Fournet, C. (eds.) TGC 2007 and FODO 2008. LNCS, vol. 4912, pp. 294–311. Springer, Heidelberg (2008)
38. uDraw(Graph) graph visualizer (2005), http://www.informatik.uni-bremen.de/uDrawGraph/en/index.html
39. Volpano, D.M.: Secure introduction of one-way functions. In: CSFW 2000, pp. 246–254 (2000)