Lecture Notes in Computer Science Commenced Publication in 1973 Founding and Former Series Editors: Gerhard Goos, Juris Hartmanis, and Jan van Leeuwen
Editorial Board David Hutchison Lancaster University, UK Takeo Kanade Carnegie Mellon University, Pittsburgh, PA, USA Josef Kittler University of Surrey, Guildford, UK Jon M. Kleinberg Cornell University, Ithaca, NY, USA Alfred Kobsa University of California, Irvine, CA, USA Friedemann Mattern ETH Zurich, Switzerland John C. Mitchell Stanford University, CA, USA Moni Naor Weizmann Institute of Science, Rehovot, Israel Oscar Nierstrasz University of Bern, Switzerland C. Pandu Rangan Indian Institute of Technology, Madras, India Bernhard Steffen TU Dortmund University, Germany Madhu Sudan Microsoft Research, Cambridge, MA, USA Demetri Terzopoulos University of California, Los Angeles, CA, USA Doug Tygar University of California, Berkeley, CA, USA Gerhard Weikum Max Planck Institute for Informatics, Saarbruecken, Germany
7025
Bart De Decker Jorn Lapon Vincent Naessens Andreas Uhl
(Eds.)
Communications and Multimedia Security 12th IFIP TC 6 / TC 11 International Conference, CMS 2011 Ghent, Belgium, October 19-21, 2011 Proceedings
Volume Editors Bart De Decker K.U. Leuven, Department of Computer Science - DistriNet Celestijnenlaan 200A, 3001 Leuven, Belgium E-mail:
[email protected] Jorn Lapon Vincent Naessens KAHO Sint-Lieven - MSEC Gebroeders De Smetstraat 1, 9000 Gent, Belgium E-mail:{jorn.lapon,vincent.naessens}@kahosl.be Andreas Uhl University of Salzburg Visual Computing and Multimedia Jakob Haringer Str.2, A 5020 Salzburg, Austria E-mail:
[email protected]
ISSN 0302-9743 e-ISSN 1611-3349 ISBN 978-3-642-24711-8 e-ISBN 978-3-642-24712-5 DOI 10.1007/978-3-642-24712-5 Springer Heidelberg Dordrecht London New York Library of Congress Control Number: 2011937966 CR Subject Classification (1998): C.2, K.6.5, E.3, D.4.6, J.1, H.4 LNCS Sublibrary: SL 4 – Security and Cryptology
© Springer-Verlag Berlin Heidelberg 2011 This work is subject to copyright. All rights are reserved, whether the whole or part of the material is concerned, specifically the rights of translation, reprinting, re-use of illustrations, recitation, broadcasting, reproduction on microfilms or in any other way, and storage in data banks. Duplication of this publication or parts thereof is permitted only under the provisions of the German Copyright Law of September 9, 1965, in its current version, and permission for use must always be obtained from Springer. Violations are liable to prosecution under the German Copyright Law. The use of general descriptive names, registered names, trademarks, etc. in this publication does not imply, even in the absence of a specific statement, that such names are exempt from the relevant protective laws and regulations and therefore free for general use. Typesetting: Camera-ready by author, data conversion by Scientific Publishing Services, Chennai, India Printed on acid-free paper Springer is part of Springer Science+Business Media (www.springer.com)
Preface
It is with great pleasure that we present the proceedings of the 12th IFIP TC-6 and TC-11 Conference on Communications and Multimedia Security (CMS 2011), which was held in Ghent, Belgium, on October 19–21, 2011. The meeting continued the tradition of previous CMS conferences, which were held in Linz, Austria (2010) and Heraklion, Crete, Greece (2006).

The CMS series combines two specific topics in IT security that are rarely discussed together elsewhere in the scientific community. At first sight, communication security and multimedia security may not seem closely related, yet there are applications in which both aspects are clearly involved, such as the streaming of protected video or privacy questions in social networks. On the one hand, there are specialized meetings on multimedia security, such as the ACM Multimedia and Security Workshop or the Information Hiding Conference; on the other hand, there are specialized meetings on communication and network security, such as the ACM Conference on Computer and Communications Security. The only meetings somewhat closer to CMS are the IFIP Information Security Conference (IFIP SEC), the Information Security Conference (ISC), and the Conference on Information and Communications Security (ICICS). However, these much more general conferences lack an explicit focus on multimedia security and usually feature very few papers on the topic.

The program committee (PC) received 52 submissions, of which only 11 were accepted as full papers. In this edition, we have also included 10 short papers, which describe valuable work in progress, and five extended abstracts reflecting the posters discussed at the conference. We would like to thank all the authors who submitted papers. Each paper was anonymously reviewed by three to five reviewers. In addition to the PC members, several external reviewers joined the review process in their particular areas of expertise. We are grateful for their sincere and hard work. We tried to compile a balanced program covering various topics of communications and multimedia security: cryptanalysis, covert channels, biometrics, and watermarking, just to name a few.

We are also grateful to Moti Yung (Google Inc. and Columbia University), Ronald Leenes (University of Tilburg), and Jaap-Henk Hoepman (TNO and Radboud University Nijmegen) for accepting our invitation to deliver keynote talks. We appreciate the contributions of our sponsors: Luciad, IBM, Google, and BelSpo (Belgian State, Belgian Science Policy). Without their financial support, it would not have been possible to organize this conference or to attract as many young researchers.

Finally, special thanks go to the organizing committee, who handled all local organizational issues and provided us with a comfortable and inspiring location
and a terrific social program. For us, it was a distinct pleasure to serve as program chairs of CMS 2011. We hope that you will enjoy reading these proceedings and that they may inspire you in your future research in communications and multimedia security.

October 2011
Bart De Decker Andreas Uhl
Organization
CMS 2011 was the 12th Joint IFIP TC6 and TC11 Conference on Communications and Multimedia Security. It was organized by KAHO Sint-Lieven in cooperation with K.U.Leuven.
Executive Committee

Conference Chair: Bart De Decker (K.U. Leuven, Belgium)
Program Co-chairs: Bart De Decker (K.U. Leuven, Belgium) and Andreas Uhl (University of Salzburg, Austria)
Organizing Chair: Vincent Naessens (KAHO Sint-Lieven, Belgium)
Program Committee

Anas Abou El Kalam, IRIT - INP, France
Patrick Bas, CNRS-Lagis, Lille, France
David W. Chadwick, University of Kent, UK
Howard Chivers, Cranfield University, UK
Isabelle Chrisment, LORIA-University of Nancy, France
Gabriela Cretu-Ciocarlie, Real-Time Innovations, Inc., USA
Frédéric Cuppens, Télécom Bretagne, France
Hervé Debar, Télécom SudParis, France
Sabrina De Capitani di Vimercati, Università degli Studi di Milano, Italy
Bart De Decker, K.U.Leuven, Belgium
Yvo G. Desmedt, University College London, UK
Lieven Desmet, K.U.Leuven, Belgium
Lieven De Strycker, KAHO Sint-Lieven, Belgium
Yves Deswarte, LAAS-CNRS, France
Jana Dittmann, University of Magdeburg, Germany
Stelios Dritsas, Athens University of Economics and Business, Greece
Taher Elgamal, Axway Inc., USA
Gerhard Eschelbeck, Webroot, USA
Simone Fischer-Hübner, Karlstad University, Sweden
Jürgen Fuß, Upper Austria University of Applied Sciences, Austria
Teddy Furon, INRIA Rennes - Bretagne Atlantique, France
Sébastien Gambs, Université de Rennes 1 - INRIA / IRISA, France
Christian Geuer-Pollmann, Microsoft Research, Germany
Dieter Gollmann, Hamburg University of Technology, Germany
Mohamed Gouda, National Science Foundation, USA
Rüdiger Grimm, University of Koblenz, Germany
Jean Hennebert, University of Applied Sciences, HES-SO, Switzerland
Eckehard Hermann, Upper Austria University of Applied Sciences, Austria
Jaap-Henk Hoepman, TNO / Radboud University Nijmegen, The Netherlands
Andreas Humm, University of Fribourg, Switzerland
Edward Humphreys, XiSEC, UK
Christophe Huygens, K.U.Leuven, Belgium
Witold Jacak, Upper Austria University of Applied Sciences, Austria
Sushil Jajodia, George Mason University, USA
Lech Janczewski, University of Auckland, New Zealand
Günter Karjoth, IBM Research - Zurich, Switzerland
Stefan Katzenbeisser, TU Darmstadt, Germany
Markulf Kohlweiss, Microsoft Research Cambridge, UK
Herbert Leitold, Secure Information Technology Center (A-SIT), Austria
Javier Lopez, University of Malaga, Spain
Louis Marinos, ENISA, Greece
Keith Martin, Royal Holloway, University of London, UK
Fabio Massacci, University of Trento, Italy
Chris Mitchell, Royal Holloway, University of London, UK
Refik Molva, Eurécom, France
Jörg R. Mühlbacher, Johannes Kepler Universität Linz, Austria
Yuko Murayama, Iwate Prefectural University, Japan
Vincent Naessens, KAHO Sint-Lieven, Belgium
Chandrasekaran Pandurangan, Indian Institute of Technology, Madras, India
Günther Pernul, University of Regensburg, Germany
Alessandro Piva, University of Florence, Italy
Hartmut Pohl, University of Applied Sciences Bonn-Rhein-Sieg, Germany
Jean-Jacques Quisquater, Université catholique de Louvain, Belgium
Kai Rannenberg, Goethe University Frankfurt, Germany
Vincent Rijmen, K.U.Leuven, Belgium and Graz University of Technology, Austria
Pierangela Samarati, Università degli Studi di Milano, Italy
Riccardo Scandariato, K.U.Leuven, Belgium
Ingrid Schaumüller-Bichl, Upper Austria University of Applied Sciences, Austria
Jörg Schwenk, Ruhr-Universität Bochum, Germany
Andreas Uhl, University of Salzburg, Austria
Umut Uludag, Scientific and Technological Research Council (TUBITAK), Turkey
Vijay Varadharajan, Macquarie University, Australia
Pedro Veiga, University of Lisbon, Portugal
Tatjana Welzer, University of Maribor, Slovenia
Andreas Westfeld, University of Applied Sciences, Dresden, Germany
Ted Wobber, Microsoft Research Silicon Valley, USA
Shouhuai Xu, University of Texas at San Antonio, USA
Moti Yung, Google & Columbia University, USA
Referees

Gergely Alpár, University of Nijmegen, The Netherlands
Haitham Al-Sinani, Royal Holloway, University of London, UK
Goekhan Bal, Goethe University Frankfurt, Germany
Nataliia Bielova, University of Trento, Italy
Christian Broser, University of Regensburg, Germany
Gerardo Fernandez, University of Malaga, Spain
Christoph Fritsch, University of Regensburg, Germany
Joaquin Garcia-Alfaro, Télécom Bretagne, France
Mohamed Maachaoui, IRIT - INP, France
Jef Maerien, K.U.Leuven, Belgium
Sascha Müller, TU Darmstadt, Germany
Khalid Salih Nasr, IRIT - INP, France
Adam O'Neill, University of Texas, Austin, USA
Tobias Pulls, Karlstad University, Sweden
Andreas Reisser, University of Regensburg, Germany
Boyeon Song, National Institute for Mathematical Sciences, Daejeon, Korea
Borislav Tadic, Deutsche Telekom AG, Germany
Peter Teufl, IAIK, TU Graz, Austria
Marianthi Theoharidou, Athens University of Economics and Business, Greece
T.T. Tun, University of Trento, Italy
Subhashini Venugopalan, Indian Institute of Technology, Madras, India
Ge Zhang, Karlstad University, Sweden
Zhenxin Zhan, University of Texas at San Antonio, USA
Bernd Zwattendorfer, IAIK, TU Graz, Austria
Sponsoring Institutions/Companies Belgian State (Belgian Science Policy), IAP Programme, P6/26, “BCRYPT” Luciad IBM Google
Table of Contents
Part I: Research Papers

Applicability and Interoperability

Analysis of Revocation Strategies for Anonymous Idemix Credentials ..... 3
   Jorn Lapon, Markulf Kohlweiss, Bart De Decker, and Vincent Naessens

Architecture and Framework Security

A Secure Key Management Framework for Heterogeneous Wireless Sensor Networks ..... 18
   Mahdi R. Alagheband and Mohammad Reza Aref

Twin Clouds: Secure Cloud Computing with Low Latency (Full Version) ..... 32
   Sven Bugiel, Stefan Nürnberger, Ahmad-Reza Sadeghi, and Thomas Schneider

Secure Hardware Platforms

Implementation Aspects of Anonymous Credential Systems for Mobile Trusted Platforms ..... 45
   Kurt Dietrich, Johannes Winter, Granit Luzhnica, and Siegfried Podesser

Biometrics

Approximation of a Mathematical Aging Function for Latent Fingerprint Traces Based on First Experiments Using a Chromatic White Light (CWL) Sensor and the Binary Pixel Aging Feature ..... 59
   Ronny Merkel, Jana Dittmann, and Claus Vielhauer

Two-Factor Biometric Recognition with Integrated Tamper-Protection Watermarking ..... 72
   Reinhard Huber, Herbert Stögner, and Andreas Uhl

Feature Selection by User Specific Feature Mask on a Biometric Hash Algorithm for Dynamic Handwriting ..... 85
   Karl Kümmel, Tobias Scheidat, Christian Arndt, and Claus Vielhauer

Multimedia Security

Dynamic Software Birthmark for Java Based on Heap Memory Analysis ..... 94
   Patrick P.F. Chan, Lucas C.K. Hui, and S.M. Yiu

A Secure Perceptual Hash Algorithm for Image Content Authentication ..... 108
   Li Weng and Bart Preneel

Network Security

Low-Attention Forwarding for Mobile Network Covert Channels ..... 122
   Steffen Wendzel and Jörg Keller

Authentication

Cryptanalysis of a SIP Authentication Scheme ..... 134
   Fuwen Liu and Hartmut Koenig

Part II: Work in Progress

Applicability and Interoperability

Mapping between Classical Risk Management and Game Theoretical Approaches ..... 147
   Lisa Rajbhandari and Einar Arthur Snekkenes

Digital Signatures: How Close Is Europe to Truly Interoperable Solutions? ..... 155
   Konstantinos Rantos

Architecture and Framework Security

A Generic Architecture for Integrating Health Monitoring and Advanced Care Provisioning ..... 163
   Koen Decroix, Milica Milutinovic, Bart De Decker, and Vincent Naessens

Secure Hardware Platforms

A Modular Test Platform for Evaluation of Security Protocols in NFC Applications ..... 171
   Geoffrey Ottoy, Jeroen Martens, Nick Saeys, Bart Preneel, Lieven De Strycker, Jean-Pierre Goemaere, and Tom Hamelinckx

GPU-Assisted AES Encryption Using GCM ..... 178
   Georg Schönberger and Jürgen Fuß

Multimedia Security

Radon Transform-Based Secure Image Hashing ..... 186
   Dung Q. Nguyen, Li Weng, and Bart Preneel

Network Security

On Detecting Abrupt Changes in Network Entropy Time Series ..... 194
   Philipp Winter, Harald Lampesberger, Markus Zeilinger, and Eckehard Hermann

Motif-Based Attack Detection in Network Communication Graphs ..... 206
   Krzysztof Juszczyszyn and Grzegorz Kolaczek

Authentication

Secure Negotiation for Manual Authentication Protocols ..... 214
   Milica Milutinovic, Roel Peeters, and Bart De Decker

A Secure One-Way Authentication Protocol in IMS Context ..... 222
   Mohamed Maachaoui, Anas Abou El Kalam, and Christian Fraboul

Part III: Posters

High Capacity FFT-Based Audio Watermarking ..... 235
   Mehdi Fallahpour and David Megías

Efficient Prevention of Credit Card Leakage from Enterprise Networks ..... 238
   Matthew Hall, Reinoud Koornstra, and Miranda Mowbray

Security Warnings for Children's Smart Phones: A First Design Approach ..... 241
   Jana Fruth, Ronny Merkel, and Jana Dittmann

Ciphertext-Policy Attribute-Based Broadcast Encryption Scheme ..... 244
   Muhammad Asim, Luan Ibraimi, and Milan Petković

Anonymous Authentication from Public-Key Encryption Revisited (Extended Abstract) ..... 247
   Daniel Slamanig

Part IV: Keynotes

Mobile Identity Management ..... 253
   Jaap-Henk Hoepman

Who Needs Facebook Anyway - Privacy and Sociality in Social Network Sites ..... 254
   Ronald E. Leenes

From Protecting a System to Protecting a Global Ecosystem ..... 255
   Moti Yung

Author Index ..... 257
Part I
Analysis of Revocation Strategies for Anonymous Idemix Credentials

Jorn Lapon (1), Markulf Kohlweiss (3), Bart De Decker (2), and Vincent Naessens (1)

(1) Katholieke Hogeschool Sint-Lieven, Industrial Engineering
(2) Katholieke Universiteit Leuven, CS-DISTRINET
(3) Microsoft Research, Cambridge
Abstract. In an increasingly information-driven society, preserving privacy is essential. Anonymous credentials promise a solution to protect the user's privacy. However, to ensure accountability, efficient revocation mechanisms are essential. Having classified existing revocation strategies, we implemented one variant of each. In this paper we describe our classification and compare our implementations. Finally, we present a detailed analysis and pragmatic evaluation of the strategies. Keywords: Anonymous Credentials, Revocation, Privacy, Performance.
1 Introduction

Individuals release a lot of personal information to all kinds of service providers. Users have no control over what is done with these data once they are released. Service providers can process them to build detailed profiles. Such profiles can be privacy sensitive (depending on their content) and may lead to discrimination, extortion, blackmail, or other illegal activities. For instance, a bookshop can detect a user's interest in books related to certain diseases and sell that information to insurance companies, which may then charge higher life insurance premiums. Similarly, many governments collect information about people who post controversial political statements on forums and discriminate against these citizens.

During the last decades, many privacy enhancing technologies have been proposed and developed. They aim at offering a higher level of privacy (or anonymity) in the digital world. Examples are anonymous communication channels, anonymous e-mail and publication systems, privacy policy evaluation tools and anonymous credential systems. Anonymous credentials allow for anonymous authentication: only the attributes – or properties thereof – that are required to access a service are proved to a service provider. For instance, users can prove that they belong to a certain age category to get discounts on public transport tickets. Similarly, they only need to prove that they live in the city in order to get access to the waste recycling center.

Although these technologies are more privacy-friendly than traditional certificate technology, revocation becomes more complex. Multiple revocation strategies have already been proposed in the literature, often with a theoretical security and performance analysis. However, a pragmatic assessment of revocation schemes for anonymous credentials is still lacking. Hence, it
is very difficult to compare results due to varying security parameters and alternative software implementations. However, a critical and pragmatic comparison is crucial to bring these technologies to practice.

Contribution. The contribution of this paper is threefold. First, existing revocation schemes are classified into six categories or strategies. Second, one variant of each strategy has been implemented with comparable security parameters and added to an existing library, namely the Identity Mixer library [1]. Third, the paper gives a detailed analysis and pragmatic evaluation of the implemented strategies. Amongst others, the security and anonymity properties, the connectivity requirements and the performance of the schemes are compared. Usable performance results are presented in the sense that all schemes were implemented within the same library and run on the same platform. From this evaluation, guidelines are derived that can be applied by software developers who have to select a revocation strategy – or a combination of them – in a particular setting.

The rest of this paper is structured as follows. Section 2 introduces basic building blocks that are used throughout the rest of this paper. Thereafter, Section 3 classifies revocation schemes into six categories. Section 4 gives a pragmatic complexity analysis and compares multiple functional properties of the schemes. Section 5 focuses on implementation details and results extracted from a practical realization, after which guidelines are formulated in Section 6. Finally, Section 7 discusses further work and general conclusions.
2 Preliminaries

We briefly discuss some key elements that are important for a good understanding of the rest of the paper.

Anonymous Credentials. Anonymous credential systems [2–5] allow for anonymous yet accountable transactions between users and organizations. Moreover, selective disclosure allows the user to reveal only a subset of possible properties of the attributes embedded in the credential: e.g. a credential with the user's date of birth as an attribute can be used to prove that the owner is over 18 without disclosing the exact date of birth or other attributes. We focus on anonymous credentials in which multiple shows are unlinkable [2, 3, 6]. All these schemes originate from existing group signature and identity escrow schemes.

Proofs of Knowledge. Anonymous credential systems heavily rely on proofs of knowledge. These allow a prover to convince a verifier of its knowledge of certain values, without leaking any useful information. For instance, when showing a credential, the prover only wants to prove knowledge of a valid credential, and possibly disclose properties of attributes in the credential. To address revocation, these proofs are further extended to prove that the shown credential is indeed not revoked.

Cryptographic Accumulators. An important but more complicated class of revocation mechanisms discussed below is based on so-called cryptographic accumulators. A cryptographic accumulator, first introduced by Benaloh and de Mare [7], is a construction
which allows the accumulation of a number of elements into one value. The size of this value is independent of the number of elements incorporated. For each accumulated element there is a witness that makes it possible to prove that the element is contained in the accumulator, and it must be infeasible for an adversary to find a witness for an element that is not included in the accumulator. Camenisch and Lysyanskaya [8] further extended this notion to dynamic accumulators, in which values can be added and removed and individual witnesses can be updated dynamically [4].
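To make the mechanics concrete, the following simplified Java sketch illustrates an RSA-based dynamic accumulator in the spirit of [8]. The class and method names are ours, parameter generation and primality handling are omitted, and in a real credential show the pair (element, witness) is never revealed but proven in zero knowledge.

    import java.math.BigInteger;

    // Simplified sketch of an RSA-based dynamic accumulator (cf. [8]).
    // n is an RSA modulus whose factorization only the issuer knows;
    // accumulated elements are assumed to be distinct primes.
    public class AccumulatorSketch {
        final BigInteger n;   // public modulus
        BigInteger acc;       // current accumulator value, initially a random quadratic residue

        AccumulatorSketch(BigInteger n, BigInteger initial) { this.n = n; this.acc = initial; }

        // Adding an element raises the accumulator to that element.
        void add(BigInteger e) { acc = acc.modPow(e, n); }

        // The witness of e is the base raised to the product of all OTHER elements,
        // so that w^e mod n equals the accumulator value.
        boolean verify(BigInteger witness, BigInteger e) {
            return witness.modPow(e, n).equals(acc);
        }

        // When another element f joins, a holder of witness w for e updates locally.
        BigInteger updateWitnessOnAdd(BigInteger w, BigInteger f) { return w.modPow(f, n); }
    }

Removing an element requires the issuer's trapdoor (the factorization of n), and the resulting witness updates are exactly the per-revocation cost that reappears in the complexity analysis of Section 4.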
3 Revocation Strategies

In traditional credential systems, verifying the revocation status of a credential is straightforward and involves a simple lookup of a revealed credential-specific identifier in a list. Well-known examples are OCSP [9] and CRL [10] based schemes. This strategy can be used for both local and global revocation: a revocation authority controls the validity of the credential globally, while services can use the identifier for local access control. We can distinguish two types of lists: blacklists, in which only revoked credentials are listed, and whitelists, in which only valid credentials are listed. Moreover, the time between the revocation of a credential and the moment a service stops accepting shows of that credential (the latency) can be limited.

In anonymous credential systems, on the other hand, the credential-specific identifier is no longer revealed. In fact, this is one of the key requirements of anonymous credentials. In the literature, several revocation strategies for anonymous credentials have been developed, each with its own advantages and drawbacks. Although some revocation mechanisms perform well for small groups, we focus on revocation schemes suitable for large-scale settings such as electronic identity cards and e-passports, where efficiency in processing and communication is crucial.

We distinguish three parties: the issuer I handles the issuance of credentials; the user U obtains credentials; and the service provider SP is the party to whom users authenticate. Although the ultimate goal is to keep the overhead caused by the revocation strategy as small as possible, most strategies assign a substantial workload to one of these parties, and for some strategies there may be an additional overhead for other parties as well. Based on this, we identify four classes. In the first class, Limited Overhead, none of the parties bears a significant load to handle revocation; in fact, we will see that none of those solutions is satisfactory for anonymous credential revocation. For the other three classes, handling revocation assigns a substantial load to at least one of the parties (i.e. issuer, user or service provider) in the scheme.

3.1 Limited Overhead

Pseudonymous Access [Nym]. Though more related to service usage [11], a simple and efficient solution requires the owner to provably disclose a domain-specific pseudonym [12, 13]. The service provider or a trusted party of that domain is in charge of creating and modifying the list of accepted or revoked pseudonyms. Although the domain-specific pseudonym can be used for local access control, it cannot be used for global revocation of the credential. Moreover, the user is no longer anonymous, as all his transactions in the same domain are linkable.
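As an illustration only (the exact Idemix construction differs in its group setup and accompanying proof), a domain pseudonym can be thought of as a deterministic function of the user's master secret and the domain string, as in the hypothetical Java sketch below; the user then proves in zero knowledge that the pseudonym was formed from the same secret that underlies the credential.

    import java.math.BigInteger;
    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;

    // Illustrative sketch of a domain-specific pseudonym (not the Idemix wire format).
    public class DomainPseudonym {

        // Derive a per-domain base by hashing the domain string into the group;
        // a real scheme maps into a prime-order subgroup rather than taking a plain mod.
        static BigInteger domainBase(String domain, BigInteger p) throws Exception {
            byte[] h = MessageDigest.getInstance("SHA-256")
                                    .digest(domain.getBytes(StandardCharsets.UTF_8));
            return new BigInteger(1, h).mod(p);
        }

        // The same (master secret, domain) pair always yields the same pseudonym, so the
        // domain can keep a local allow/deny list; different domains remain unlinkable.
        static BigInteger pseudonym(BigInteger masterSecret, String domain, BigInteger p)
                throws Exception {
            return domainBase(domain, p).modPow(masterSecret, p);
        }
    }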
Verifiable Encryption [VE]. Although verifiable encryption is often cited in anonymous credential schemes in relation to anonymity revocation [14, 15], it can be used for revocation as well. Here, the user verifiably encrypts the credential's identifier with the public key of the issuer. To verify the revocation status, the service provider sends the ciphertext to the issuer, which decrypts it. The issuer can then use the obtained identifier to do a simple lookup of the revocation status of the corresponding credential and report it to the service provider. This solution is closely related to the OCSP protocol in traditional credential schemes and has little overhead. However, the user must place a lot of trust in the issuer, since the issuer is able to monitor the usage of the credential (i.e. to which service providers the credential is shown). A possible mitigation is to require the service provider to make this request over an anonymous channel. Furthermore, replacing the public key of the issuer with that of another trusted third party allows a separate authority to be put in charge of the revocation tasks. Moreover, if the encrypted identifier is replaced with a domain-specific pseudonym, a domain-specific revocation authority may take care of access control in a certain domain.

Despite the Nym and VE strategies, a practical and privacy-friendly revocation strategy with limited (constant) overhead is thus not yet available.

3.2 Issuer

In the most naive solution, both the group public key and the credentials of each user are reissued whenever a party is revoked or added to the group. This solution offers high security thanks to its zero latency, but results in an unacceptable overhead for both users and issuers in large-scale settings and is hence impractical. The Limited Lifetime and Signature Lists schemes, discussed below, instead require the issuer to frequently generate updates for users.

Limited Lifetime [LL]. In this scheme, an attribute expressing the lifetime of the credential is embedded in the credential. During each authentication, the user proves that the credential has not expired. The lifetime of a credential largely determines the usability of the revocation scheme: a short lifetime requires the user to frequently re-validate the credential, while a long lifetime makes the scheme insecure. Instead of reissuing new credentials, Camenisch et al. [16] pointed out that non-interactive credential updates are a useful replacement. The issuer generates credential update information for all valid credentials before the end of the credential's lifetime is reached. Before the user can authenticate, he has to download this information and update his credential.

Signature Lists [RL]. Similar to CRLs in traditional schemes, it is possible to maintain revocation lists in anonymous credential schemes. However, the verification is more complicated: instead of the service provider performing the verification, the user has to prove that the credential is not revoked. In the case of whitelists, the list consists of signatures on the identifiers of each valid credential and a list identifier. The user selects the signature in the whitelist containing the identifier of his credential and then proves knowledge of the identifier, together with a proof that the credential identifier in the
signature is the same as the one contained in the credential being validated. Additionally, the list identifier is revealed, so that the service provider can verify that the latest list was used.

For blacklists, proving non-membership is more complex. Nakanishi et al. [17] propose an elegant solution based on ordering the list of revoked identifiers. For each consecutive pair of identifiers, the issuer publishes a signature on the pair, together with an identifier of the list. During a credential show, the user then proves knowledge of his credential and of a signature from the blacklist such that the identifier in the credential lies between the two revoked identifiers in the ordered blacklist. As in the case of whitelists, the disclosed list identifier shows that the latest revocation list was used. If this proof verifies successfully, the service provider is ensured that the credential is valid with respect to the latest blacklist.

In the latter two schemes, the effort of the issuer is significant: for every change that requires the removal of a signature from a whitelist or an addition to the blacklist, the issuer has to rebuild the entire revocation list with a new list identifier. In case of a join, it is sufficient to add only one signature to the latest whitelist; likewise, re-approving a previously revoked credential can be done by replacing two consecutive signatures in the blacklist by one new signature. Nevertheless, proving (non-)membership during authentication results in a non-negligible but constant overhead in both schemes.

3.3 User

Accumulators [Acc]. A more complex, but possibly more efficient, solution for credential revocation is based on so-called dynamic accumulators [18–20]. During authentication, the user proves membership (whitelist revocation) or non-membership (blacklist revocation) of his credential in the accumulator. The service provider fetches the latest accumulator value from the revocation authority, and if the proof of the credential show verifies correctly w.r.t. that accumulator value, the service provider is ensured that the credential has not been revoked. Except for the verification of a more elaborate proof, the service provider has no additional overhead. On the other hand, although building this proof can be done quite efficiently, it requires the user to first update his witness, which is time-consuming. The witness is what enables proving (non-)membership in the accumulator, and since revoking (and possibly also adding) credentials changes the value of the accumulator, a witness update is required. These updates require resources (e.g. exponentiations [18, 19], storage [20]) linear in the number of credentials added to or revoked from the accumulator.

3.4 Service Provider

Verifier Local Revocation [VLR]. For many applications, the resources available to users to perform these witness updates are very limited. In this case verifier local revocation [21, 22], first introduced by Brickell et al. [33], may come to the rescue. Service providers download a list of items, each linked to a revoked credential. During authentication, the user provably reveals a token allowing the verifier to check that the token is not related to any of the items in the list. Since the service provider has to check each item in the list, verification takes a (maximum) number of resources
linear in the number of revoked credentials. Batch verification techniques try to tackle this [23]. Note that in some VLR schemes [33, 21], all signatures made with the same credential become linkable after its revocation. More recent schemes therefore ensure backward unlinkability [22], such that former credential shows remain unlinkable. This strategy has been adopted by the Trusted Computing Group for use in trusted platform modules (TPMs) [24]. Note that in this case, revocation is only possible if the private key is revealed to the public: as long as the corrupted private key is kept secret by the adversary, revocation of the corrupted TPM is not possible.
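The verifier-side check can be illustrated with the following simplified Java sketch (class and method names are ours, and the group is reduced to plain modular exponentiation): the prover reveals a randomized base zeta and a token Nv = zeta^id, and the verifier rejects if any revoked identity reproduces the token.

    import java.math.BigInteger;
    import java.util.List;

    // Sketch of verifier-local revocation checking: cost is linear in the
    // number of revoked identities, and nothing is learned about valid users.
    public class VlrCheck {
        static boolean notRevoked(BigInteger zeta, BigInteger nv,
                                  List<BigInteger> revokedIds, BigInteger p) {
            for (BigInteger id : revokedIds) {
                if (zeta.modPow(id, p).equals(nv)) {
                    return false;   // the shown token matches a revoked identity
                }
            }
            return true;            // no revoked identity matches: accept
        }
    }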
4 Discussion

As we focus on strategies rather than on specific revocation schemes, the analysis abstracts from scheme-specific details. Nevertheless, we do not hesitate to pinpoint the advantages of some specific schemes.

Complexity. All strategies try to tackle the same problem in a different way. For some strategies the complexity analysis is obvious, for others it is rather subtle. Table 1 shows the complexity of the most expensive computations for each scheme; we assume that the average number of valid users (#Ũ) is constant. The last column illustrates the frequency of occurrence of these complex computations.

Table 1. Total complexity of the most computationally intensive processing during an interval Δ

       Complexity             Description                                                            Frequency
Nym    Ord(1)                 -                                                                      -
VE     Ord(1)                 -                                                                      -
LL     I: Ord(#Ũ)             creation of credential updates for each valid credential               1/Δt
RLw    I: Ord(#Ũ)             creation of signatures for each valid credential                       1/max(Δt, Δc)
RLb    I: Ord(#R)             creation of signatures for each ordered pair of revoked credentials    1/max(Δt, Δc)
Acc    U: Ord(#RΔ [+ #JΔ])    update of the user's witness                                           1/Δc
VLR    V: Ord(#R)             verifying the list of revoked credentials                              every verification

#Ũ: average number of members; Δt: time between list updates; Δc: time between revocations/joins; #R(Δ): revoked members (since the last update); #J(Δ): joined members (since the last update).

The table confirms the classification in Section 3. For both Nym and VE the workload is constant for every party. Further, the LL and RL strategies require the issuer to frequently compute updates, resp. signatures, for valid or revoked credentials. As mentioned before, updating the list in the RL strategies is not required as long as no identifiers are removed from the list; in LL, by contrast, the issuer computes a new credential update for every valid credential after each time interval.
Accumulator-based strategies (Acc), on the other hand, alleviate the work of the issuer by moving part of the computation to the users. Accumulator updates themselves can be done quite efficiently and in batch by the issuer (e.g. one multi-base exponentiation in the case of [8]). However, the user now has to perform a number of complex computations (i.e. exponentiations in [8, 25]) linear in the number of added and removed credentials. The accumulator scheme by Camenisch et al. [26] is in this sense quite efficient: using so-called state information, users can update their witness with a number of multiplications. However, in large-scale settings the information required to perform the update is considerably large, so special update servers that keep the state information in memory are needed to perform the updates efficiently. To keep the number of accumulator changes in whitelist-based accumulators to a minimum, the issuer can accumulate a large set of unused identifiers during setup. When the issuer issues a credential, it fetches a free identifier from this set and includes it in the credential; as such, the accumulator does not change when new users join the group. Instead of updating the accumulator after each addition or removal, it is also possible to update the accumulator value only after a certain time, as in the RL schemes. To increase flexibility and decrease latency, a list of the latest accumulators can be published, allowing the service provider to decide which accumulator values are still acceptable; the service provider may thus decide to accept proofs with older accumulators. Note also that the issuer can often perform the witness updates more efficiently [8]; in that case, however, the user is subject to timing attacks if the issuer and service provider collude.

Finally, in the VLR strategy, the verifier carries the burden: in case of a valid credential, the verifier has to perform a computation for every item in the revocation list. There exist VLR schemes [27] that improve the efficiency of this verification, but for large-scale settings the complexity of the credential show and the memory load become significant. Although the validity of signatures (i.e. equality) can be verified in batch [28], batch verification of revocation lists (i.e. inequality) has not been described in detail in the literature.

Functional Properties. Table 2 gives an overview of some functional properties of the different strategies with respect to the basic scheme without revocation. It illustrates that there is no straightforward winner: schemes that score clearly better on certain properties perform worse on others, and vice versa. For instance, the Nym and VE strategies are clearly less privacy-friendly; all other strategies allow full anonymity. However, to obtain full anonymity in LL, RL and Acc, the user should download the entire set of update information, since otherwise timing attacks are possible. Alternatively, a private information retrieval scheme may allow the user to download the required data more efficiently while maintaining anonymity. Of course, in large-scale settings with many service providers and users, and since the download may be done well before the actual credential show, the danger of timing attacks may be negligible. The security of the LL and RL strategies, determined by their latency, is worse than that of the other strategies, which allow zero latency.
Note that to decrease communication overhead, Acc and VLR can accept a non-zero latency, by accepting older accumulators resp. revocation lists.
Table 2. Functional properties (↓ and ↑ show the relation w.r.t. the basic credential scheme without revocation)

                  Nym    VE     LL     RL        Acc       VLR
Anonymity         ↓      ↓
Latency                         ↑      ↑
Netw. Conn.                     U      U (SP)    U (SP)    SP
Download (U/SP)   -/-    -/-    ↑/-    ↑/-       ↑/-       -/↑
Global/Local      L      G[L]   G      G         G         G
To decrease latency in the case of LL and RL, the frequency of issuing update information, resp. revocation lists, should be higher than the frequency of revoking credentials. This is computationally expensive, especially in the large-scale settings that we envision. Nevertheless, both LL and RL can be useful in environments with lower security requirements.

VLR schemes use blacklisting. RL and Acc, on the other hand, allow for both black- and whitelisting. For RL schemes, a proof of membership may be more efficient in the case of whitelists, but some settings call for blacklist-based schemes with possibly more efficient updates; in particular, the ratio of valid to revoked credentials determines which variant is the better choice. In the case of accumulator-based revocation, the difference between white- and blacklists is rather subtle.

The table further shows that the user is required to be online for LL, RL and Acc. The service provider may have to download information for RL, Acc and VLR. However, for both RL and Acc it is possible to avoid downloads: in the case of RL, the service provider can simply verify the revealed validity time of the shown signature; if it lies in an acceptable (small) time interval, it accepts the credential show, otherwise it requires the use of a newer revocation list. In the case of Acc, the user could provide the signed accumulator to the service provider. Note that the amount of data to be downloaded in the case of Acc (by the user) and VLR (by the service provider) may be substantial. For some VLR schemes, such as the one of Ateniese et al. [22], the revocation list requires frequent updates to obtain high security, resulting in even more data traffic.

Combining strategies. As already discussed, the different schemes exhibit different properties. To maximize the advantage of those properties, multiple strategies can be combined in the same credential scheme. For instance, an updatable lifetime may be used in parallel with accumulators: the lifetime may be sufficient in low-security environments, while a high-security service may require the same user to prove membership in the latest accumulator. As another example, Nym could be used for local access control, while another strategy is used for verifying the global revocation status. In fact, all strategies discussed are compatible and only require the issuer to include the appropriate attributes in the credential.
5 Implementation

5.1 Implementation Notes

One of the most versatile anonymous credential systems available to date is the Identity Mixer system [1], Idemix for short. Some of the schemes (i.e. LL, Nym and VE) are readily available in this library. We extended the library with the other revocation strategies mentioned: for RL and Acc both a whitelist and a blacklist scheme were implemented, as well as a VLR scheme. More details are given below. Note that our choice of schemes was restricted by the cryptographic schemes used in Identity Mixer; for instance, the library does not implement pairings, which heavily limits the number of possible schemes. The implementation respects the architecture and design of the library as much as possible, and all extensions can be optionally activated depending on the proof specification. Most of the implementation effort went into the extended proofs of the credential shows. Except for the declaration and parsing of the appropriate attributes in the credential specifications, there are no major additions to the issuance of the credentials.
An optional element has been added to the proof specification, whose child elements declare which revocation scheme is applied during the credential show. Like the existing calls in the library to the extensions for proving, for instance, inequality and commitments, we added calls to the appropriate extensions (i.e. VLR-Prover and -Verifier, Acc-Prover and -Verifier, and RL-Prover and -Verifier) in the Prover and Verifier classes. These handle the revocation-scheme-specific proofs. The credential shows in Idemix are implemented as common three-move zero-knowledge protocols (so-called sigma protocols), made non-interactive using the Fiat-Shamir heuristic [29]. In a first round, the prover computes the scheme-specific t-values (i.e. the values in the first flow of the sigma protocol) and adds them to the computation of the challenge c. The prover then computes the s-values (i.e. the responses of the third flow of the sigma protocol) and adds them to the message sent to the verifier. Finally, the verifier recomputes the t̂-values from the s-values and checks them against the received common values; a simplified sketch of this structure is shown below. The extensions use the security parameters of the original Idemix library for the construction of the proofs.

Signature Lists. The signature lists for both white- and blacklists are instantiated with CL signatures, which are also used in the library. These allow proving knowledge of a signature and its attributes without leaking any useful information, and they support proving relations such as the equality of the identifier in the signature and the identifier in the credential (as needed for whitelists). For blacklist revocation, an implementation was made based on the scheme of Nakanishi et al. [17]. As mentioned before, the revocation list consists of an ordered list of revoked identifiers, which are pair-wise signed by the revocation authority together with a list identifier; additionally, an unused minimum and maximum identifier is included in the list. To prove that the credential is not revoked, the user proves knowledge of a signature in the revocation list such that the identifier in the credential lies in the interval formed by the two identifiers in that signature. For this, the implementation reuses the inequality provers available in Idemix.
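The following simplified Java sketch (with hypothetical class and method names, not taken from the Idemix API) illustrates the t-value/challenge/s-value structure for the simplest possible statement, a Fiat-Shamir proof of knowledge of a discrete logarithm x with y = g^x mod p; the actual Idemix provers prove far richer statements but follow the same three steps.

    import java.math.BigInteger;
    import java.security.MessageDigest;
    import java.security.SecureRandom;

    // Minimal Fiat-Shamir (non-interactive sigma protocol) sketch: prove knowledge of x
    // such that y = g^x mod p, where q is the order of the subgroup generated by g.
    public class SigmaSketch {

        static BigInteger[] prove(BigInteger x, BigInteger g, BigInteger y,
                                  BigInteger p, BigInteger q) throws Exception {
            BigInteger r = new BigInteger(q.bitLength(), new SecureRandom()).mod(q);
            BigInteger t = g.modPow(r, p);                    // 1) t-value (commitment)
            BigInteger c = hash(g, y, t).mod(q);              // 2) challenge via Fiat-Shamir
            BigInteger s = r.subtract(c.multiply(x)).mod(q);  // 3) s-value (response)
            return new BigInteger[] { c, s };
        }

        static boolean verify(BigInteger[] proof, BigInteger g, BigInteger y,
                              BigInteger p, BigInteger q) throws Exception {
            BigInteger c = proof[0], s = proof[1];
            BigInteger tHat = g.modPow(s, p).multiply(y.modPow(c, p)).mod(p); // recompute t-hat
            return c.equals(hash(g, y, tHat).mod(q));         // accept iff the challenge matches
        }

        static BigInteger hash(BigInteger... values) throws Exception {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            for (BigInteger v : values) md.update(v.toByteArray());
            return new BigInteger(1, md.digest());
        }
    }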
CL-Accumulator scheme. Several accumulator-based revocation schemes exist; an implementation in C++ comparing three of them is available in [30]. The schemes implemented there are all whitelist revocation schemes. One of the schemes compatible with the Idemix library is the construction by Camenisch et al. [18]. Building on this construction, J. Li et al. [31] extended the scheme with a non-membership proof, allowing the same accumulator construction to be used for blacklisting as well. Our schemes have been implemented based on the membership proof in Section 3.3, "Efficient Proof That a Committed Value Was Accumulated", of [18] and the non-membership proof defined in Protocol 1 in Section 5, "Efficient Proof That a Committed Value Was Not Accumulated", of [31].

DAA-VLR scheme. Finally, the VLR scheme adopted by the TCG [24] has been implemented. In contrast to what is implemented in TPMs, in which the private key is required for revocation, a separate random identity attribute id is enclosed in the credential and used to perform the verification. This allows the issuer to revoke the credential based on this identity and does not require the private key of the credential to be compromised. The protocol presented in [24] defines the issuance and proof of an entire DAA anonymous credential. Our implementation extends the credential show of Idemix with the proof of knowledge PK{(id) : Nv = ζ^id ∧ ζ ∈R ⟨γ⟩}, with id the identity of the user and ζ a randomly chosen base, and with the verification of the list of revoked values, i.e., checking that ζ^idi ≠ Nv for each idi in the revocation list.

5.2 Results

This section reports the results of two experiments. The first experiment deals with the issuance and showing of a single credential; the second analyses the time required for the complex computations identified in the complexity analysis. The experiments use the default security parameters (i.e. k = 160 bits) proposed in Appendix A, Table 2 of the Idemix library specification [1], and are executed on a DELL Latitude P9600 @ 2.53 GHz with 4 GB RAM. Note that since most algorithms are probabilistic, large variations in timings are possible. To make the measurements as realistic as possible and to minimize overhead caused, for instance, by class loading, the given numbers are averages over a large number of runs. Communication overhead is not included.

Table 3 presents, for each implemented scheme, the total time required to issue and show a credential; the credential show includes the verification of the revocation status. Since issuing a credential does not require complex calculations beyond those of the Basic scheme, the issuance time is about the same for all schemes; the small difference for all but the Nym scheme is caused by the additional attribute required by the revocation strategy. As could be expected, there is more variance in showing a credential. Only the time for a credential show in the Nym and VLR schemes lies close to the Basic scheme; for these schemes, the small overhead is caused by the computation and disclosure of a pseudonym (randomized in the case of VLR). For the whitelist-based RLw scheme, the time is roughly doubled w.r.t. the Basic scheme: showing a credential involves two proofs, namely one proving knowledge of the credential and an additional one proving knowledge of a signature from the revocation list with the same identifier as in the credential.
Table 3. Timing analysis for issuing and showing a single credential (average over 200 rounds, in sec.)

         Basic   Nym   VLR   Accb   Accw   LL    RLw   RLb   VE
Issue    2.5     2.5   2.6   2.6    2.6    2.6   2.6   2.6   2.6
Show     0.8     0.9   1.0   3.5    3.8    4.2   1.8   8.6   16.0
The overhead for the credential show in the whitelist and blacklist accumulator-based schemes is induced by the complex membership, resp. non-membership, proof; a more detailed analysis may be found in [30]. It is somewhat surprising that showing a credential in the LL scheme takes even more time. The reason is that the scheme (as implemented in Idemix) requires an expensive range proof to show that the credential's expiration time is larger than or equal to the current time. This makes the epoch strategy very flexible, as not all users have to update as frequently as others. However, if the lifetime attribute is synchronized and the same for all credentials, it is possible to simply disclose the lifetime value, in which case the credential show takes about as much time as in the Basic scheme. Similarly, showing a credential in the RLb scheme requires an additional signature proof and two range proofs: the signature proof proves knowledge of a signature in the revocation list, and the range proofs prove that the identifier in the credential lies between the revoked identifiers in the proven signature. The worst scheme is the one based on verifiable encryption, which may not be practical for revocation; this result also shows that using verifiable encryption for anonymity revocation implies a very large overhead.

In the second experiment, summarized in Table 4, the most complex computations discussed in Section 4 have been verified in practice. Since the total number of valid users in our setting is much larger than the number of revoked users, it is clear that LL and RLw require a lot of computation by the issuer. Hence, RLb might be more interesting; however, as noted in the previous experiment, showing a credential in the RLb scheme is expensive and may be impractical. The accumulator-based schemes have practically no overhead at the issuer's side, but before showing his credential, a user has to update his witness, which takes approximately 20 ms per credential revoked since the previous update. As stated before, it is possible to avoid witness updates as a result of joining new credentials. If the user can perform witness updates frequently, this overhead is spread over time and may be acceptable for applications. Finally, the VLR solution takes only approximately 3 ms per revoked credential to verify the validity of a credential.
Table 4. Time analysis of the most complex computations (sec.)

            LL          RLw          RLb          Accb                   Accw                   VLR
Issuer      1.4 ∗ #Ũ    1.31 ∗ #Ũ    1.50 ∗ #R    0.16                   0.16
User                                              0.02 ∗ #RΔ + 0 ∗ #J    0.02 ∗ #RΔ + 0 ∗ #J
Verifier                                                                                        0.003 ∗ #R
For the Belgian eID card¹, there are about ten million users and about 375,000 revocations a year. We have to note, though, that the certificates of youngsters and kids in Belgium are automatically revoked, which distorts the picture of the number of revocations actually resulting from lost or stolen credentials; moreover, Belgian citizens may opt to revoke their digital certificates themselves. Applying the schemes to this large-scale setting yields the following results. Generating update information in the LL scheme would take about 160 days. For the RLb scheme with 375,000 revocations, building the revocation list takes about 6.5 days. Similarly, for the VLR scheme, verifying a credential show takes about 18 minutes. For the latter, batch verification may increase the speed; note, however, that the literature offers no batch verification scheme tuned to verifying that a credential is not in the VLR list. For VLR, batch verification should allow verifying that none of the tokens in the list matches the one being shown, whereas the literature typically addresses batch verification of signatures, i.e. verifying that all signatures are valid. Although great improvements can be achieved with faster implementations and processors (e.g. a C++ implementation of the accumulator takes only 1.5 ms instead of 20 ms in Java, in which Idemix is implemented), these numbers show that for large-scale settings the RL, LL and VLR schemes are impractical.
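For reference, these estimates follow directly from the per-item costs in Table 4 and the eID figures above:

    LL:   1.4 s   × 10,000,000 valid credentials  ≈ 1.4 × 10^7 s  ≈ 162 days per update interval
    RLb:  1.50 s  × 375,000 revoked credentials   ≈ 5.6 × 10^5 s  ≈ 6.5 days per list rebuild
    VLR:  0.003 s × 375,000 revoked credentials   ≈ 1,125 s       ≈ 18.75 minutes per credential show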
6 Applying Revocation Schemes - Guidelines

It is clear that no single strategy is superior to all the others. Therefore, we end the analysis with an overview of which strategies are useful in which settings; sometimes a combination of multiple strategies offers the best trade-off. The guidelines are summarized in Table 5 and discussed below.

Security. For high-security environments (i.e. requiring low latency), accumulator-based revocation is the most secure and privacy-friendly strategy, closely followed by some verifier-local revocation schemes; for the latter, one has to carefully select a VLR scheme that provides adequate anonymity. For lower-security environments, LL provides a reasonable trade-off. RL offers a similar solution but does not require the issuer to act as the revocation manager.
¹ Results obtained from http://godot.be/eidgraphs
Table 5. Feasibility of the schemes w.r.t. connectivity and resources (•: positive; ◦: neutral; else: negative)

                       Nym    VE    LL    RL    Acc    VLR
Security                      •     ◦     ◦     •      (•)
Offline         U      •      •                        •
                SP     •            •     •     •
Low Resources   U      •      ◦     ◦     ◦            •
                SP     •      ◦     •     •     •
Processing Environments. Often a user's credential is kept in a resource-constrained environment (e.g. a smart card). In this case, VLR schemes require the least computational overhead for the user, and LL is a possible alternative; RL and Acc, however, require complex computations, making them less effective in resource-constrained environments. In some settings, the verifier has limited resources (e.g. a door lock); in this case VLR is not an option.

Connectivity. In the case of RL and Acc, the user requires frequent communication with the issuer. On the other hand, for the service provider in the case of LL and RL, it is sufficient to keep track of time to be able to verify the revocation status. This is especially important for offline service providers. When computing power is not an issue, the more secure accumulators may provide an alternative for offline service providers as well: the user then provides the latest accumulator, signed by the revocation authority, and the verifier simply checks its validity time. Online environments offer more freedom, as computation may be outsourced to other, possibly trusted, environments. For instance, verification in the VLR setting may be done by an external, more powerful party; when the verifier outsources this verification to a more powerful trusted party, it actually implements a kind of OCSP scenario. Similarly, some accumulator schemes [20] take advantage of remote witness updates.
7 Conclusion

This paper classifies existing revocation strategies for anonymous credential systems into six categories. The analysis shows that there is no straightforward winner: the effectiveness and efficiency of a specific strategy heavily depend on the setting in which the mechanism is used. To maximise the applicability of anonymous credentials, only a combination of multiple strategies may provide relief. Therefore, guidelines are proposed that show which strategies should be applied in which settings. The practicality and applicability of anonymous credential schemes in real-life settings is an ongoing discussion and remains an important aspect to analyse. In the implementation and comparison presented in this paper, we focussed on schemes that are suitable within the Identity Mixer library. Nevertheless, revocation schemes, for
instance, those based on pairing-based cryptography or suitable within the U-Prove credential scheme [32], may prove to be better alternatives. For future work, we plan to compare the results of this paper with implementations of more revocation schemes for anonymous credentials.

Acknowledgements. This research is partially funded by the Interuniversity Attraction Poles Programme of the Belgian State (Belgian Science Policy), the Research Fund K.U.Leuven, and the IWT-SBO projects DiCoMas and MobCom.
References 1. Specification of the Identity Mixer Cryptographic Library – Version 2.3.2. Technical report, IBM Research – Zurich (2010) 2. Chaum, D.: Security Without Identification: Transaction Systems to Make Big Brother Obsolete. Commun. ACM 28(10), 1030–1044 (1985) 3. Camenisch, J.L., Lysyanskaya, A.: An Efficient System for Non-transferable Anonymous Credentials with Optional Anonymity Revocation. In: Pfitzmann, B. (ed.) EUROCRYPT 2001. LNCS, vol. 2045, pp. 93–118. Springer, Heidelberg (2001) 4. Camenisch, J., Herreweghen, E.V.: Design and implementation of the idemix anonymous credential system. In: Atluri, V. (ed.) ACM Conference on Computer and Communications Security, pp. 21–30. ACM, New York (2002) 5. Brands, S.: A Technical Overview of Digital Credentials (2002) 6. Bangerter, E., Camenisch, J.L., Lysyanskaya, A.: A Cryptographic Framework for the Controlled Release of Certified Data. In: Christianson, B., Crispo, B., Malcolm, J.A., Roe, M. (eds.) Security Protocols 2004. LNCS, vol. 3957, pp. 20–42. Springer, Heidelberg (2006) 7. Benaloh, J.C., de Mare, M.: One-Way Accumulators: A Decentralized Alternative to Digital Sinatures (Extended Abstract). In: Helleseth, T. (ed.) EUROCRYPT 1993. LNCS, vol. 765, pp. 274–285. Springer, Heidelberg (1994) 8. Camenisch, J.L., Lysyanskaya, A.: Dynamic Accumulators and Application to Efficient Revocation of Anonymous Credentials. In: Yung, M. (ed.) CRYPTO 2002. LNCS, vol. 2442, pp. 61–76. Springer, Heidelberg (2002) 9. Myers, M., Ankney, R., Malpani, A., Galperin, S., Adams, C.: X.509 internet public key infrastructure online certificate status protocol - ocsp (1999) 10. Housley, R., Polk, W., Ford, W., Solo, D.: Internet x.509 public key infrastructure certificate and certificate revocation list (crl) profile (2002) 11. Brands, S., Demuynck, L., De Decker, B.: A practical system for globally revoking the unlinkable pseudonyms of unknown users. In: Pieprzyk, J., Ghodosi, H., Dawson, E. (eds.) ACISP 2007. LNCS, vol. 4586, pp. 400–415. Springer, Heidelberg (2007) 12. Camenisch, J., M¨odersheim, S., Sommer, D.: A formal model of identity mixer. Formal Methods for Industrial Critical Systems, 198–214 (2010) 13. Bichsel, P., Camenisch, J.: Mixing identities with ease. In: de Leeuw, E., Fischer-H¨ubner, S., Fritsch, L. (eds.) IDMAN 2010. IFIP AICT, vol. 343, pp. 1–17. Springer, Heidelberg (to apppear, 2010) 14. Camenisch, J.L., Shoup, V.: Practical verifiable encryption and decryption of discrete logarithms. In: Boneh, D. (ed.) CRYPTO 2003. LNCS, vol. 2729, pp. 126–144. Springer, Heidelberg (2003) 15. Backes, M., Camenisch, J., Sommer, D.: Anonymous yet accountable access control. In: Proceedings of the 2005 ACM Workshop on Privacy in the Electronic Society, pp. 40–46. ACM, New York (2005)
16. Camenisch, J., Kohlweiss, M., Soriente, C.: Solving revocation with efficient update of anonymous credentials. In: Security and Cryptography for Networks, pp. 454–471 (2011) 17. Nakanishi, T., Fujii, H., Hira, Y., Funabiki, N.: Revocable group signature schemes with constant costs for signing and verifying. In: Jarecki, S., Tsudik, G. (eds.) PKC 2009. LNCS, vol. 5443, pp. 463–480. Springer, Heidelberg (2009) 18. Camenisch, J.L., Lysyanskaya, A.: Dynamic accumulators and application to efficient revocation of anonymous credentials. In: Yung, M. (ed.) CRYPTO 2002. LNCS, vol. 2442, pp. 61–76. Springer, Heidelberg (2002) 19. Nguyen, L.: Accumulators from bilinear pairings and applications. In: Menezes, A. (ed.) CT-RSA 2005. LNCS, vol. 3376, pp. 275–292. Springer, Heidelberg (2005) 20. Camenisch, J., Kohlweiss, M., Soriente, C.: An accumulator based on bilinear maps and efficient revocation for anonymous credentials. In: Jarecki, S., Tsudik, G. (eds.) PKC 2009. LNCS, vol. 5443, pp. 481–500. Springer, Heidelberg (2009) 21. Boneh, D., Shacham, H.: Group signatures with verifier-local revocation. In: Proceedings of the 11th ACM Conference on Computer and Communications Security, pp. 168–177. ACM, New York (2004) 22. Ateniese, G., Song, D., Tsudik, G.: Quasi-efficient revocation of group signatures. In: Blaze, M. (ed.) FC 2002. LNCS, vol. 2357, pp. 183–197. Springer, Heidelberg (2003) 23. Zaverucha, G.M., Stinson, D.R.: Group testing and batch verification. In: Kurosawa, K. (ed.) Information Theoretic Security. LNCS, vol. 5973, pp. 140–157. Springer, Heidelberg (2010) 24. Brickell, E., Camenisch, J., Chen, L.: Direct anonymous attestation. In: Proceedings of the 11th ACM Conference on Computer and Communications Security, pp. 132–145. ACM, New York (2004) 25. Nguyen, L.: Accumulators from Bilinear Pairings and Applications. In: Menezes, A. (ed.) CT-RSA 2005. LNCS, vol. 3376, pp. 275–292. Springer, Heidelberg (2005) 26. Camenisch, J., Kohlweiss, M., Soriente, C.: An Accumulator Based on Bilinear Maps and Efficient Revocation for Anonymous Credentials. In: Jarecki, S., Tsudik, G. (eds.) PKC 2009. LNCS, vol. 5443, pp. 481–500. Springer, Heidelberg (2009) 27. Demuynck, L., De Decker, B.: How to prove list membership in logarithmic time. CW Reports, KU Leuven, Department of Computer Science, vol. CW470 (2006) 28. Bellare, M., Garay, J.A., Rabin, T.: Fast batch verification for modular exponentiation and digital signatures. In: Nyberg, K. (ed.) EUROCRYPT 1998. LNCS, vol. 1403, pp. 236–250. Springer, Heidelberg (1998) 29. Fiat, A., Shamir, A.: How to Prove Yourself: Practical Solutions to Identification and Signature Problems. In: Odlyzko, A.M. (ed.) CRYPTO 1986. LNCS, vol. 263, pp. 186–194. Springer, Heidelberg (1987) 30. Lapon, J., Kohlweiss, M., De Decker, B., Naessens, V.: Performance analysis of accumulatorbased revocation mechanisms. In: Rannenberg, K., Varadharajan, V., Weber, C. (eds.) Security and Privacy - Silver Linings in the Cloud. IFIP AICT, vol. 330, pp. 289–301. Springer, Boston (2010) 31. Li, J., Li, N., Xue, R.: Universal Accumulators with Efficient Nonmembership Proofs. In: Katz, J., Yung, M. (eds.) ACNS 2007. LNCS, vol. 4521, pp. 253–269. Springer, Heidelberg (2007) 32. Stefan Brands, C.P.: U-Prove Cryptographic Specification V1.0. Technical report, Microsoft Corporation (2010) 33. Brickell, E., Camenisch, J., Chen, L.: The DAA scheme in context. Trusted Computing, 143–174
A Secure Key Management Framework for Heterogeneous Wireless Sensor Networks

Mahdi R. Alagheband¹ and Mohammad Reza Aref²

¹ EE Department, Science and Research Branch, Islamic Azad University, Tehran, Iran
[email protected]
² EE Department, ISSL Laboratory, Sharif University of Technology, Tehran, Iran
[email protected]
Abstract. A wireless sensor network (WSN) is composed of numerous sensor nodes with limited, physically insecure hardware and restricted communication capabilities; WSNs therefore suffer from some inherent weaknesses. Key management is a central subject in WSNs because it is the fundamental element for all security operations. A few key management models for heterogeneous sensor networks have been proposed in recent years. In this paper, we propose a new key management scheme based on elliptic curve cryptography and a signcryption method for hierarchical heterogeneous WSNs. As a secure infrastructure, our scheme offers superior sensor node mobility and network scalability. Furthermore, we propose both a periodic authentication and a new registration mechanism to prevent sensor node compromise. The proposed scheme does not increase the number of keys stored in sensor nodes and has a reasonable communication and computation overhead compared with other schemes.

Keywords: Key management, Heterogeneous sensor network, Signcryption, Elliptic curve cryptography, Authentication.
1 Introduction
A wireless sensor network (WSN) has the ability to monitor and control events in a specified environment with the aid of numerous sensor devices. These sensor nodes (SNs), however, have noticeable constraints on energy, computation and bandwidth resources. Despite the cited restrictions, WSNs have unique characteristics such as SN mobility, large scalability, limited resources, special traffic patterns and susceptibility to many types of attacks. WSN architectures broadly divide into two kinds: homogeneous and heterogeneous. In homogeneous WSNs all SNs are similar to each other and are deployed in a flat architecture, while in heterogeneous WSNs two or more kinds of sensors are defined and the SNs are partitioned into clusters. Hence, in a heterogeneous WSN not only do the average communication overhead and energy consumption decrease, but the network scalability and performance also increase [1].
This work was supported in part by the Iran National Science Fund (INSF) cryptography chair, and in part by the Iran Telecommunication Research Center (ITRC).
Due to the fact that WSNs are susceptible to many attacks and have widespread constraints, the design of security mechanisms is highly important. Key management is the first crucial function for achieving security objectives, because sensor nodes and cluster leaders need valid common keys to employ cryptographic mechanisms. Following SN technology development, key management protocols are classified by encryption technique into three categories: symmetric, asymmetric and hybrid key management models [1].

Symmetric schemes, also called pre-distribution schemes, load some keys into the sensor nodes prior to the deployment phase, based on either their physical or wireless interfaces. These schemes suffer from problems such as probabilistic key distribution between SNs, non-scalability after deployment, weakness against node compromise, lack of mobility and high communication overhead [2, 3].

Asymmetric schemes have in recent years used both elliptic curve cryptography (ECC) and identity-based cryptography (IBC) [6]. Asymmetric models are more flexible but very heavyweight in sensor networks. The recent progress in ECC and IBC offers new opportunities to apply public key cryptography in WSNs. Since ECC keys are defined over an additive group with 160-bit length, this family of public key cryptography is as secure as RSA with 1024-bit keys [4]. Also, recent implementations on MICA2 and MICAz motes have confirmed the feasibility of ECC in WSNs [4, 5].

Hybrid schemes have been designed for heterogeneous WSNs with different kinds of nodes. Reflecting the distinction among the base station, cluster leaders and SNs, each element performs a distinct responsibility in the hybrid hierarchical architecture. As cluster leaders have more computational capacity than SNs, they usually have more obligations, such as aggregation, routing, control and cluster leading.

In this paper, we present a secure hybrid key management infrastructure for hierarchical heterogeneous WSNs (HHWSNs). ECC is used among the cluster leaders and the base station in the proposed scheme. Moreover, a special mechanism is used within the clusters for periodic authentication and for SN mobility among the clusters. The contributions of this paper are fourfold: i) in order to achieve complete security, a specific signcryption method with the forward-security characteristic is utilized in inter-cluster communication; ii) our scheme supports SN mobility among the clusters; iii) we design a periodic authentication to prevent SN compromise; iv) a new registration model is designed for SN enrollment after network deployment.

The rest of the paper is organized as follows: Section 2 describes the preliminaries needed to understand the proposed protocol and the related works. In Section 3, related works are analyzed. In Section 4, we propose the new key management scheme. In Section 5, we compare the scheme with a few related schemes. Section 6 gives the comparison results. Finally, the conclusion is presented.
2 Preliminaries
In this section, we describe some essential points used in this paper. The BS selects some primitive parameters in the initialization phase. F is the selected elliptic curve over the finite field GF(q): y^2 = x^3 + ax + b (mod q). G is a base point of the elliptic curve F and O is the point of F at infinity. n is the order of the point G, where n is prime, n × G = O and n > 2^160. (The symbol '×' denotes elliptic curve point multiplication [6].) For simplicity, a list of the notations used in the paper is shown in Table 1.

Table 1. List of notations

BS             Base station
CL             Cluster leader
SN             Sensor node
Adjacent CL    Neighbour leaders of a CL
Pbs / Ubs      BS's private / public key
Pcli / Ucli    CLi's private / public key
KN             Network key [128 bit] (just for SN registration)
KSNi           Sensor node key
Kcl            Cluster key
IDcl or IDSN   Identity of a CL or SN
Sgn            Signcryption algorithm
t.s.           Timestamp
tcomp          Least time duration for node compromising
tmove          Maximum movement time for an SN
meta           A public and fixed message
H              A lightweight and secure one-way hash function
Ek(.)/Dk(.)    Lightweight symmetric encryption/decryption algorithm with key k
The security of the asymmetric and hybrid key management, especially on the BS-CL links, is based on the ECDLP (Elliptic Curve Discrete Logarithm Problem), which is still a hard problem [6]. Furthermore, the security of the SN-SN links is generally supported by lightweight symmetric cryptography [7]. The BS generates public-private key pairs based on the ECDLP. These keys are assigned to all nodes in the asymmetric key management schemes, or just to the CLs in the hybrid key management schemes. The BS performs the following steps for key generation:
- Choose a random number P as the private key, P ∈ [1, q − 1].
- Compute U = P × G as the public key.
- Embed (P, U) securely in the node after deployment and save it in its database.
After this phase, every CL in the heterogeneous WSN has a unique key pair.

Key generation aside, signcryption is also used in this paper, particularly on the CL-BS links. Not only does the signcryption technique combine the digital signature and encryption algorithms to achieve authentication and confidentiality, but it also has lower computation and communication overhead; thus the utilization of signcryption in WSNs is highly beneficial. Besides, we utilize a signcryption scheme (Sgn) with extra characteristics such as public verifiability and forward secrecy in our proposed scheme [8, 9].
Fig. 1. A sample hierarchical heterogeneous sensor network model: a BS connected to cluster leaders CL1, CL2, ..., CLn (equipped with tamper-proof hardware), each leading a cluster of sensor nodes (SNs)
If a cluster leader is revealed, the authenticity of the messages previously transmitted from the compromised CL to the BS remains intact because of the forward-security attribute: it protects the authenticity of messages even though the private key of the sender is disclosed [8, 10]. Every CL has the public keys of both the other CLs and the BS. A typical signcryption model with the cited attributes is used in our scheme; the details of suitable signcryption schemes for WSNs have been explained in [8, 10].

A HHWSN is composed of a BS acting as a sink node, a small number of CLs, and numerous SNs organized in clusters (Fig. 1). The number of CLs is small compared with the density of SNs. The following assumptions are made in our network model:
1. SNs are not equipped with tamper-proof hardware, due to their inherent constraints.
2. CLs have better resources and more responsibility than SNs, since the ability to perform asymmetric cryptographic computation is absolutely essential for them. Therefore, every CL has a unique public-private key pair and is equipped with tamper-proof hardware.
3. Each SN and CL has a unique ID (IDcli or IDSNi).
4. The BS has no restriction on computation, storage or power supply. The BS knows all CLs' public keys (Ucli) and all SN keys (KSNi).
5. CLs are static but SNs are mobile.
All CLs and SNs are usually deployed in uncontrolled regions without strict supervision. Every cluster of SNs senses the environment and sends raw data to the corresponding CL. Each CL aggregates the information and routes it to the BS using the respective protocols.
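As a concrete illustration of the BS key-generation step described above (choose a private key P and compute U = P × G), the following Python sketch uses a small textbook curve; the curve parameters, key lengths and the use of a toy field are illustrative assumptions only and would be replaced by a standardized ~160-bit curve in practice.

# Minimal sketch of the key-generation step: pick P, compute U = P x G.
# Toy curve y^2 = x^3 + 2x + 2 over F_17 with base point G = (5, 1) of
# prime order n = 19 (a classroom example, NOT a secure parameter set).
import secrets

q = 17                     # prime field modulus
a, b = 2, 2                # curve coefficients
G = (5, 1)                 # base point
n = 19                     # order of G (n x G = O)

def ec_add(P1, P2):
    """Point addition; None represents the point at infinity O."""
    if P1 is None: return P2
    if P2 is None: return P1
    (x1, y1), (x2, y2) = P1, P2
    if x1 == x2 and (y1 + y2) % q == 0:
        return None                                      # P + (-P) = O
    if P1 == P2:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, q) % q # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, q) % q        # chord slope
    x3 = (lam * lam - x1 - x2) % q
    y3 = (lam * (x1 - x3) - y1) % q
    return (x3, y3)

def ec_mul(k, P):
    """Double-and-add scalar multiplication k x P."""
    R = None
    while k:
        if k & 1:
            R = ec_add(R, P)
        P = ec_add(P, P)
        k >>= 1
    return R

P_priv = secrets.randbelow(n - 1) + 1   # private key P
U_pub = ec_mul(P_priv, G)               # public key U = P x G
print("private:", P_priv, "public:", U_pub)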
3 The Analysis of Related Works
In this section, we review some notable hierarchical heterogeneous key management schemes proposed so far and analyse their advantages and disadvantages [11–14].

Riaz et al. have proposed SACK [11] as a secure key management framework for a HHWSN. Every SN has a unique key shared with the BS, and CLs have an extra key to communicate with the BS and the other CLs. Besides, all SNs in every cluster share a distinctive common key for secure intra-cluster communication. In SACK, one master key of 1024 bits is stored in each SN and CL after deployment; CLs and SNs use it to compute shared keys after cluster formation. Furthermore, SACK has a revocation mechanism for compromised nodes. However, an intruder can abuse the master key to penetrate the network: since the initial seeds for key generation are sent in the clear after cluster formation in the key assignment phase, the adversary can simply eavesdrop on them. A newcomer malicious adversary holding both the compromised master key and an eavesdropped seed can subsequently compute the intra-cluster key. Indeed, the security of the whole WSN fails seriously if just one SN is compromised. Besides, SACK has some other damaging problems in its key generation algorithm. As the authors point out, a single polynomial can generate only 895 distinct keys; after 895 uses, a re-keying algorithm must be employed to overcome this weakness, and SACK then suffers substantial communication and computation overhead. Moreover, a 1024-bit master key is a rather heavy burden for the sensor nodes.

X. Du et al. [12] proposed a routing-driven key management scheme based on ECC (RDEC) for HHWSNs. Although every SN and CL has a private-public key pair based on ECC in RDEC, SNs do not share keys with all neighbors in intra-cluster communication: each SN shares a key with just those neighbor SNs on the specific routes that the routing protocol has already defined for sending data to the BS. Each SN first sends a Key-Request message to the CL; the CL then computes a distinct shared key for every two neighboring SNs and distributes it along the defined route. RDEC has some damaging features. i) Every CL requires enormous storage space to save all SN public keys for common key generation, because SNs are clustered after the deployment phase; this amount of storage is infeasible for WSNs. ii) All SNs have a certain time window to send the unencrypted Key-Request message to the CL. Because the Key-Request message is sent unencrypted, an adversary can replace parts of it and deceive the CL within the defined time. iii) KH is a pre-loaded symmetric key that is embedded in the newly deployed SNs and CLs. An adversary can reveal KH because the hardware of SNs is not tamper-proof.
However, even if the compromised SN's key is revoked, the adversary can still damage the network as a newcomer SN. Furthermore, after KH is revealed, RDEC has no mechanism to detect this catastrophe. iv) Every CL holds the keys of all SNs after the pre-deployment phase; therefore, apart from the pre-loaded SNs, no new SN can register its public key in the WSN under the RDEC scheme after the deployment phase.

Mizanur and Khalil [13] have proposed another key management framework (PKAS) based on pairing-based cryptography. PKAS tries to improve on the RDEC scheme using IBC. Every CL or SN has an ID and two distinct random numbers embedded in the pre-deployment phase. Each CL has the IDs and random numbers of all SNs and authenticates the SNs in its cluster; thus, clustering information is a prerequisite in PKAS. Although the random numbers of the SNs are periodically updated by the BS and distributed to the SNs via the CLs, the WSN must bear an enormous amount of communication overhead. SACK's solution to this challenge looks better than that of PKAS, because the cost of transmission is much higher than the cost of computation. Moreover, each SN requires the nearest CL's ID for mutual authentication, so either every SN should store all CL IDs, or authentication should run after cluster formation in the PKAS scheme. Not only is storing all CL IDs very heavy for the feeble SNs, but the required clustering information also reduces the network scalability and flexibility.

PIBK is another identity-based key management protocol for HHWSNs [14]. PIBK has been designed for a static network with fixed, location-aware SNs that use IBC to establish pairwise keys. Each SN receives three keys (a network key, a cluster key and an SN key) in the pre-deployment phase. Then, every SN should exchange IDs with its neighbors within a restricted time duration (the bootstrapping time). After the bootstrapping time, every SN stores its neighbors' IDs so that every two nodes can establish a shared secure key in their cluster.
4 The Proposed Framework
In this section, we describe our proposed key management infrastructure for HHWSNs in six parts.

4.1 Key Assignment in Pre-deployment Phase
Prior to the initialization and cluster formation phases, some symmetric and asymmetric keys are embedded in all SNs in the pre-deployment phase. We use stricter security policies for the CL-BS links because of the high emphasis on communication between the CLs and the BS; therefore, public key cryptography is employed to achieve a higher level of security in the WSN. As pointed out in Table 1, Ucl is the public key and Pcl the private key of a CL (Ucl = Pcl × G); Pcl is the discrete logarithm of Ucl to the base G. The CLs also share a common symmetric key as a group key (Kcl) for secure communication among themselves; this key will be useful in the periodic authentication.
Likewise, the BS has two keys Ubs and Pbs (Ubs = Pbs × G). Pbs is the BS's secret key, which the CLs and SNs never learn. Ubs is embedded in the CLs to execute the signcryption algorithm after the deployment phase: a CL computes the signcryption of its messages with Ubs and Pcl, sends them securely to the BS, and verifies the authenticity of the BS with the aid of Ubs. On the other hand, all SNs share a common network key (KN). This key is used only in the registration procedure after network deployment, which is explained in Section 4.3. In order to perform the periodic authentication, every SN has an exclusive key (KSNi) shared with the BS, which knows both KSNi and IDSNi.

4.2 Inter-cluster Communication
The structure of a heterogeneous WSN emphasizes the importance of security on the CL-BS and CL-CL links. The network needs a method for secure communication between the BS and the CLs prior to SN registration; if an adversary compromises either a CL-BS or a CL-CL link, the network security is severely damaged. Hence, every CL, as well as the BS, has a distinct public-private key pair. Since message confidentiality and sender authentication on the CL-BS links are particularly important, digital signatures and ECC have been used in many key management schemes to provide confidentiality, integrity and authenticity [11–16]. In contrast, given the computational and memory constraints of WSNs, it is not acceptable to permanently use the signature-then-encryption method to keep messages confidential and authentic among the WSN's nodes.

4.3 SN's Registration
After WSN deployment, every SN should find the nearest CL and register in its cluster. Fig. 2 illustrates the registration procedure among the SN, CL and BS. An SN is enrolled with the nearest CL by the following steps:
1. The SN sends α = IDSNi and β = HKN(IDSNi) to the nearest CL, using the keyed one-way hash function H.
2. The CL verifies whether HKN(α) equals β. If so, it proceeds to step 3; otherwise it rejects the message and alerts the BS.
3. The CL computes Sgn(IDSNi, t.s.) with its private key and sends it to the BS (Sgn is the signcryption algorithm).
4. As soon as the unsigncryption and verification phase is done, the BS responds to the CL with Sgn(IDSNi, KSNi, t.s.).
5. The CL saves IDSNi and KSNi after verification.
6. The CL uses a lightweight symmetric encryption algorithm to generate the ciphertext γ = EKSNi(meta, Kclj), where meta is a public, fixed message known to all nodes.
7. The SN computes DKSNi(γ), where the secret key KSNi was embedded in the SN in the pre-deployment phase. The SN verifies whether the first part of DKSNi(γ) equals meta. If so, the SN generates K'N from KN with a lightweight one-way hash function; thus, computing KN from K'N is impossible.
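The hash-based checks of steps 1-2 and the one-way key update of step 7 can be sketched as follows, assuming HMAC-SHA256 stands in for the keyed one-way hash H and SHA-256 for the lightweight hash used to derive K'N; the key lengths and the node identifier format are illustrative assumptions.

# Sketch of the SN-side registration message (steps 1-2) and the one-way
# network-key update of step 7. Parameters are illustrative only.
import hmac, hashlib, os

K_N = os.urandom(16)          # 128-bit network key embedded before deployment
id_sn = b"SN-042"             # sensor node identity IDSNi (hypothetical format)

# Step 1: SN -> CL: alpha = IDSNi, beta = H_KN(IDSNi)
alpha = id_sn
beta = hmac.new(K_N, alpha, hashlib.sha256).digest()

# Step 2: the CL, which also holds K_N, recomputes the MAC and compares
expected = hmac.new(K_N, alpha, hashlib.sha256).digest()
assert hmac.compare_digest(beta, expected), "reject message and alert the BS"

# Step 7: after receiving the cluster key, the SN derives K'_N one-way from
# K_N and erases K_N, so a later compromise cannot recover the registration key.
K_N_prime = hashlib.sha256(K_N).digest()[:16]
del K_N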
The registration procedure cannot begin without KN. Therefore, in order to prevent the disclosure of KN, each SN must change KN into K'N immediately after membership in a cluster. Since KN is erased entirely after registration, the adversary cannot exploit a subsequently compromised SN: even if a registered SN were compromised, the adversary could not take part in the registration procedure as a legal node, because he would have obtained only K'N, while KN was completely deleted. Every newcomer SN can use KN to run the registration procedure within a defined range of time after WSN deployment; this time duration is not enough for the adversary to compromise a newcomer SN. Moreover, the transformation clearly imposes no constraint on network scalability, and new SNs can be added during the WSN's life, as all nodes derive K'N from KN. The adversary may obtain Kcl, but, with the aid of the periodic authentication explained in the next section, he is unable to disrupt the secure connections between SNs and CLs.

4.4 Periodic Authentication and SN Mobility
One of the crucial parts of the proposed key management infrastructure, usually ignored in heterogeneous WSNs, is periodic authentication [11–14]. Since SNs, in contrast to CLs, are not equipped with tamper-proof hardware, it is quite possible that an SN is compromised after deployment. Although KN, the unique parameter for registration, has been deleted, the adversary can readily grab Kcl and KSNi and disrupt the SN-CL and SN-SN links. Thus, the proposed key management scheme includes a periodic authentication to protect SNs against compromise as well as to support SN mobility among clusters, especially in fluid environments. Fig. 3 illustrates the periodic authentication mechanism between an SN and its CL in every cluster.

Every CL should regularly authenticate the SNs registered in its cluster. The period of this mechanism (tcomp) depends on the time needed to compromise a node. WSNs usually use the ZigBee or IEEE 802.15.4 platform for communication; since this duration is negligible compared with the period of ZigBee's MAC layer, the periodic authentication does not impose extra overhead [17]. Furthermore, the overhead of periodic authentication is reasonable compared with the overhead of other policies, such as the key updating in SACK, RDEC, PKAS and PIBK.

According to Fig. 3, CLj periodically sends the query to all SNs registered in its cluster. SNi checks the validity of the query and sends flow 2 if flow 1 is valid. As soon as the CL receives flow 2, it computes HKSNi(K'N) and compares it with α, since only the CL knows both KSNi and K'N after SN registration. The SN is confirmed for the next tcomp period provided that flow 2 is verified; otherwise, the CL alerts the BS that the SN in question is suspect.

Under normal conditions, only phases 1 and 2 (Fig. 4) are performed, but if the CL does not receive any message within the defined time, the SN has presumably moved to another cluster. In that case the CL sends Sgn(KN, ID, ProbeRequest) to the adjacent CLs to track the SN (phase 5). Since every CL has the Ucl of the other CLs, they can perform the unsigncryption algorithm.
Fig. 2. SN registration procedure with CL and BS cooperation:
1. SNi → CLj : α = IDSNi, β = HKN(IDSNi)
2. CLj : check HKN(α) =? β
3. CLj → BS : Sgn(IDSNi, t.s.)
4. BS → CLj : Sgn(IDSNi, KSNi, t.s.)
5. CLj : register IDSNi and KSNi
6. CLj → SNi : γ = EKSNi(meta, Kclj)
7. SNi : if DKSNi(γ) =? (meta, Kclj), then derive K'N from KN, delete KN, and admit Kclj as the cluster key
Fig. 3. The periodic authentication mechanism between every SN and CL inside clusters:
1. CLj → SNi : query = [t.s., IDSNi, λ = HKSNi(IDSNi, t.s.)]
2. SNi → CLj : α = HKSNi(K'N, t.s.'), IDSNi, t.s.'
If the SN has moved to another cluster, one of the adjacent CLs will find it within the defined time (tmov). All adjacent CLs perform the authentication mechanism again to find the moved SN. If an adjacent CL finds the moved SN and t < tmov, it sends a report to the prime CL. Otherwise, when t > tmov, the prime CL assumes that the lost SN has been compromised; in this condition, the lost SN must be revoked from the whole WSN. The prime CL announces to the adjacent CLs that the node registered with IDSNi is revoked, and also sends the revocation message, accompanied by IDSNi, to the BS and to the other SNs registered in its cluster. Under this model, an adversary cannot enter the WSN by compromising a node, because KN, the only registration key, has been deleted and IDSNi has been revoked in the WSN. Although each node must share a key with the BS, all authentication processes are conducted without the aid of the BS: the CL only informs the BS in phase 8 (Fig. 4) for revocation, provided the SN does not respond in time. On the other hand, SN mobility is one of the most striking features of the periodic authentication, and the WSN can thus be deployed easily in fluid and unsteady environments. The moved SN can communicate with the new CL after authentication, because the new CL has received KSNi from the prime CL.
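A minimal sketch of the challenge-response exchange of Fig. 3 follows, assuming HMAC-SHA256 as the keyed hash H and Unix timestamps for t.s.; the key sizes, identifier format and the freshness window T are illustrative assumptions and not taken from the paper.

# Sketch of the CL/SN challenge-response of the periodic authentication.
import hmac, hashlib, time, os

K_SN = os.urandom(16)        # per-node key KSNi shared by SN, CL and BS
K_N_prime = os.urandom(16)   # K'_N held by the SN and learned by the CL
ID_SN = b"SN-042"
T = 5.0                      # accepted response window in seconds (assumed)

def mac(key, *parts):
    return hmac.new(key, b"|".join(parts), hashlib.sha256).digest()

# 1. CL -> SN: query = [t.s., IDSNi, lambda = H_KSNi(IDSNi, t.s.)]
ts = str(time.time()).encode()
query = (ts, ID_SN, mac(K_SN, ID_SN, ts))

# SN checks the query before answering
assert hmac.compare_digest(query[2], mac(K_SN, ID_SN, query[0]))

# 2. SN -> CL: alpha = H_KSNi(K'_N, t.s.'), IDSNi, t.s.'
ts2 = str(time.time()).encode()
response = (mac(K_SN, K_N_prime, ts2), ID_SN, ts2)

# 3. CL recomputes alpha and checks freshness; otherwise it alerts the BS
fresh = abs(float(response[2].decode()) - float(ts.decode())) <= T
valid = hmac.compare_digest(response[0], mac(K_SN, K_N_prime, response[2]))
print("SN confirmed for next period" if (fresh and valid)
      else "alert BS / probe adjacent CLs")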
Fig. 4. The flowchart of the periodic authentication mechanism for preventing sensor node compromise and supporting mobility among clusters:
1. CL → SN : query = [t.s., IDSNi, HKSNi(IDSNi, t.s.)]
2. SN → CL : α = HKSNi(K'N, t.s.'), IDSNi, t.s.'
3. CL : compute HKSNi(K'N) and check equality with α
4. If step 3 succeeds, SNi is lawful and safe for the next period
5. If step 3 fails, CL → adjacent CLs : Sgn(KSNi, ID, probe request)
6. The adjacent CLs send the query into their clusters
7. If one of the adjacent CLs finds the SN and t < tcomp, that CL sends Sgn(IDSNi, discovery message)
8. If the adjacent CLs do not find the moved SN and t ≥ tcomp, CL → BS : Sgn(revocation request)
Fig. 4 depicts the mechanism. Although the process seems complicated, it is lightweight and straightforward, because its period is reasonable and profitable compared with similar policies (e.g., key updating) in other key management frameworks for heterogeneous WSNs.

4.5 Intra-cluster Communication between SNs
In this section, the model of intra-cluster communication between SNs in every cluster is described. After cluster formation, every SN holds three embedded keys (K'N, KSNi, Kclj) as well as IDSNi; K'N is used in the periodic authentication. All SNs registered in a cluster share a common cluster key (Kclj), which changes whenever SNi moves to another cluster; hence they have mutually secure communication. Although an adversary can eavesdrop on intra-cluster links and compromise Kcl, he can neither disclose any message nor disrupt intra-cluster transactions, since the ID of a revealed SN is revoked with the aid of the periodic authentication mechanism and mutual intra-cluster communication without a valid ID is impossible.
Intra-cluster links also need a mechanism to achieve authenticity. In contrast to the inter-cluster links, the computation and communication overhead of digital signatures or signcryption would be unreasonable on intra-cluster links among the limited SNs. Hence, each SN accompanies every encrypted message with its ID, so that the receiver recognizes the identity of the sender (Eq. 1). Clearly, every SN can discover its neighbors after some transactions.

SNi → SNj : IDi, EKcl(IDi, m)    (1)
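A minimal sketch of the message format of Eq. (1) follows, with AES-GCM (from the cryptography package) standing in for the unspecified lightweight symmetric encryption; the nonce handling and identifier encoding are illustrative assumptions.

# Sketch of the intra-cluster packet of Eq. (1): the sender prepends its ID in
# the clear and also binds it inside the ciphertext under the cluster key Kcl.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

K_cl = AESGCM.generate_key(bit_length=128)   # cluster key Kcl
ID_i = b"SN-007"
m = b"temperature=21.5"

nonce = os.urandom(12)
ciphertext = AESGCM(K_cl).encrypt(nonce, ID_i + b"|" + m, ID_i)  # E_Kcl(IDi, m)
packet = (ID_i, nonce, ciphertext)                               # IDi, E_Kcl(IDi, m)

# Receiver SNj: decrypt with Kcl and check that the inner ID matches the outer one
inner = AESGCM(K_cl).decrypt(nonce, ciphertext, ID_i)
assert inner.split(b"|", 1)[0] == packet[0]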
As indicated in Section 4.4, our proposed scheme can detect compromised SNs while the attacker is compromising them. Although it is undeniable that the adversary can obtain Kcl, the periodic authentication mechanism detects such a malicious node at once.
5 Security Analysis and Comparison
In this section, we compare our scheme with the recent schemes for heterogeneous WSNs and demonstrate how it resists important attacks. First, we define well-known attacks on WSNs and explain how our proposed scheme prevents them.

Node Capture Attack: In a node capture attack, an adversary gains full control over sensor nodes through direct physical access [14]. Given the importance of the CL-BS and CL-CL links, not only is public key cryptography (a signcryption method with forward secrecy) used on the BS-CL and CL-CL links, but the hardware of the CLs is also defined as tamper-proof in our scheme. An adversary cannot compromise a CL and cannot mount manipulation, replay or impersonation attacks, since he would have to solve the ECDLP. Furthermore, even if a CL's private key were hypothetically compromised, the adversary still could not recover previous plaintexts from the signcrypted messages, because of forward secrecy. On the other hand, SN compromise is highly probable because the hardware of SNs is not tamper-proof. In order to increase resilience against this defect, the registration mechanism and the periodic authentication were designed to prevent the penetration of an adversary into the WSN; hence the intra-cluster links remain secure. In the worst case, if an adversary compromises an SN after deployment with t > tcomp, the adversary cannot impersonate a legal SN with the compromised ID, because the CL has already revoked it via the periodic authentication mechanism. Although the adversary grabs Kcl, IDSNi and K'N, he does not have enough time to send a correct response to the CL in the authentication protocol, so the CL revokes the compromised SN immediately. Moreover, if the adversary chooses a random ID, the CL reveals it in the next periodic authentication as well. If the adversary obstructs flow 1 or 6 in Fig. 2, he gains enough time to compromise the SN, but he cannot mount a desynchronization attack, because KN has been changed into K'N in the last stage of the SN registration mechanism and the adversary cannot take part in the registration procedure with only K'N. On the other hand, the maximum time duration for registration into a cluster, which prevents
this disturbance, is the bootstrapping time (tboot). When tboot has elapsed, all SNs should have been registered; otherwise, unregistered SNs delete KN and are effectively excluded from the WSN. Since the time required for registration is much shorter than tboot, this policy does not decrease the throughput of the network. Moreover, our scheme is extensible, and it is possible to add new SNs during the life of the WSN: although the registered SNs no longer have KN, the new SNs join a cluster with the aid of KN, compute K'N and then delete KN.

Replay Attack: An adversary can record IDi and HKN(IDi) (flow 1 in Fig. 2) at one location and send it again, either there or at another location. Since the BS has verified IDi previously, the adversary cannot introduce himself as a trusted SN to the CL and BS. Also, in the authentication protocol, upon receiving the response of the SN at t.s.' (flow 2 in Fig. 4), the CL verifies whether t.s.' − t.s. ≤ T to prevent a replay attack; if it holds, SNi is considered safe and valid for the next period. If an adversary reveals KSNi, the SN with KSNi is revoked immediately based on the periodic authentication.

Message Manipulation Attack: In this attack, an adversary may drop, change, or even forge exchanged messages in order to interrupt the communication process, but he cannot manipulate messages in our proposed scheme because he is not a valid node at all. This attack can take three forms. i) An adversary might manipulate the query flow in the periodic authentication (Fig. 3), but the SN checks the received keyed hash against HKSNi(IDSNi, t.s.) and thus realizes the disturbance immediately, because the adversary does not have the SN's key and cannot forge the query without KSNi. ii) Even though the adversary knows meta, if he modifies flow 6 (Fig. 2), the SN will not accept the received Kclj as the cluster key; the adversary cannot reveal KSNi within the tboot duration. iii) All CL-CL and CL-BS links are resistant to every kind of manipulation or impersonation attack, as they are based on the signcryption method.

Masquerade Attack: In this attack, an adversary pretends to be a valid node and participates in the network communication. In our proposed scheme, all the nodes in the network are authenticated to each other along the way; thus, the adversary can neither pretend to be a valid node nor exchange wrong information with the valid nodes. Therefore, a masquerade attack is not applicable to our proposed protocol.

To sum up, we compare our proposed key management infrastructure with the SACK, RDEC, PKAS and PIBK schemes, which have been designed for HHWSNs. Our scheme has some unique predominant features, including SN mobility, periodic authentication, a preventative mechanism against SN compromise, and the utilization of signcryption rather than signature plus encryption (Table 2).
6 Conclusion
A few key management frameworks have been designed for HHWSNs in recent years. In this paper we proposed a novel and secure key management infrastructure for HHWSNs. Our proposed scheme has a number of striking features, including the utilization of ECC only between the CLs and the BS, the use of signcryption (with forward security and public verifiability) rather than encryption with a signature, SN mobility, periodic authentication to prevent SN compromise, and a unique SN registration model within clusters.
Table 2. The comparison of five schemes (Enc. = Encryption, Sig. = Signature, Key Agr. = Key Agreement). Values are listed in the order SACK [11] / RDEC [12] / PKAS [13] / PIBK [14] / our scheme.

Mobility: No / No / No / No / Yes
Number of saved keys in every SN: 2+1 (1024 bit) / 2 / 3 / 4 / 3
Situation of network after one node compromising: whole WSN fails / more than one SN fails / just the SN fails / whole WSN fails / just the SN fails
Difference among SN and CL: No / Yes / Yes / No / Yes
Authentication: once / once / once / - / periodic
Type of used PKC: Enc. + Sig. / Enc. / Enc. / Key Agr. / Sgn
Position of PKC: CL-BS / SN-CL & CL-BS / SN-CL & CL-BS / SN-CL & CL-BS / CL-BS
Scalability after network deployment: Yes / Yes / Yes / Yes / Yes
Clustering as a prerequisite for key management: Yes / No / Yes / Yes / No
Furthermore, the SNs undergo only light computation and power consumption.
References 1. Zhang, J., Varadharajan, V.: Wireless sensor network key management survey and taxonomy. Journal of Network and Computer Applications 33, 63–75 (2010) 2. Eschenauer, L., Gligor, V.D.: A key management scheme for distributed sensor networks. In: Proceeding of the 9th ACM Conference on Computer and Communication Security, pp. 41–47 (November 2002) 3. Perrig, A., Szewczyk, R., Wen, V., Cullar, D., Tygar, J.D.: SPINS: security protocols for sensor networks. In: Proceedings of the 7th Annual ACM/IEEE International Conference on Mobile Computing and Networking, pp. 189–199 (2001) 4. Gura, N., Patel, A., Wander, A., Eberle, H., Shantz, S.C.: Comparing elliptic curve cryptography and RSA on 8-bit CPUs. In: Joye, M., Quisquater, J.-J. (eds.) CHES 2004. LNCS, vol. 3156, pp. 119–132. Springer, Heidelberg (2004) 5. Malan, D.J., Welsh, M., Smith, M.D.: A public-key infrastructure for key distribution in Tinyos based on elliptic curve cryptography. In: First Annual IEEE Communications Society Conference on Sensor and Ad Hoc Communications and Networks (2004)
6. Hankerson, D., Menezes, A., Vanstone, S.: Guide to elliptic curve cryptography. Springer, Heidelberg (2004) 7. Lee, J., Kapitanova, K., Son, S.H.: The price of security in wireless sensor networks. Computer Networks Journal (2010) 8. Hwang, R.-J., Lai, C.-H., Su, F.-F.: An efficient signcryption scheme with forward secrecy based on elliptic curve. Journal of Applied Mathematics and Computation 167(2), 870–881 (2005) 9. Zheng, Y., Imai, H.: How to construct efficient signcryption schemes on elliptic curves. Information Processing Letters 68, 227–233 (1998) 10. Alaghband, M., Soleimanipour, M., Aref, M.: A new signcryption scheme with forward security. In: Fourth Information Security and Cryptology International Conference, ISCISC (2007) 11. Du, X., Guizani, M., Xiao, Y., Chen, H.-H.: A Routing-Driven Elliptic Curve Cryptography Based Key Management Scheme for Heterogeneous Sensor Networks. IEEE Transaction on Wireless Communications 8(3) (2009) 12. Riaz, R., Naureen, A., Akram, A., Hammad, A., Hyung Kim, K., Farooq, H.: A unified security framework with three key management schemes for wireless sensor networks. International Journal Computer Communications 31, 4269–4280 (2008) 13. Mizanur Rahman, S., El-Khatib, K.: Private key agreement and secure communication for heterogeneous sensor networks. Journal of Parallel and Distributed Computing 70, 858–870 (2010) 14. Boujelben, M., Cheikhrouhou, O., Abid, M., Youssef, H.: A Pairing Identity based Key Management Protocol for Heterogeneous Wireless Sensor Networks. IEEE Transaction on Wireless Communications Conference (2009) 15. Collins, M., Dobson, S., Nixon, P.: A Secure Lightweight Architecture for Wireless Sensor Networks. In: The Second International Conference on Mobile Ubiquitous Computing, Systems, Services and Technologies. IEEE Computer Society, Los Alamitos (2008) 16. Pei, Q., Wang, L., Yin, H., Pang, L., Tang, H.: Layer Key Management Scheme on Wireless Sensor Networks. In: Fifth International Conference on Information Assurance and Security. IEEE Computer Society, Los Alamitos (2009) 17. Baronti, P., Pillai, P., Chook, V.W.C., Chessa, S., Gotta, A., Fun Hu, Y.: Wireless sensor networks: A survey on the state of the art and the 802.15.4 and ZigBee standards. Computer Communications 30, 1655–1695 (2007)
Twin Clouds: Secure Cloud Computing with Low Latency (Full Version)
Sven Bugiel, Stefan Nürnberger, Ahmad-Reza Sadeghi, and Thomas Schneider

Center for Advanced Security Research Darmstadt, Technische Universität Darmstadt, Germany
{sven.bugiel,stefan.nuernberger,ahmad.sadeghi,thomas.schneider}@trust.cased.de
Abstract. Cloud computing promises a cost-effective enabling technology for outsourcing storage and massively parallel computations. However, existing approaches for provably secure outsourcing of data and arbitrary computations are either based on tamper-proof hardware or on fully homomorphic encryption. The former approaches are not scalable, while the latter are currently not efficient enough to be used in practice. We propose an architecture and protocols that accumulate slow secure computations over time and provide the possibility to query them in parallel on demand by leveraging the benefits of cloud computing. In our approach, the user communicates with a resource-constrained Trusted Cloud (either a private cloud or one built from multiple secure hardware modules) which encrypts algorithms and data to be stored and later queried in the powerful but untrusted Commodity Cloud. We split our protocols such that the Trusted Cloud performs security-critical precomputations in the setup phase, while the Commodity Cloud computes the time-critical query in parallel under encryption in the query phase.

Keywords: Secure Cloud Computing, Cryptographic Protocols, Verifiable Outsourcing, Secure Computation.
1 Introduction
Many enterprises and other organizations need to store and compute on large amounts of data. Cloud computing aims at renting such resources on demand. Today's cloud providers offer both highly available storage (e.g., Amazon's Elastic Block Store [2]) and massively parallel computing resources (e.g., Amazon's Elastic Compute Cloud (EC2) with High Performance Computing (HPC) Clusters [3]) at low cost, as they can share resources among multiple clients. On the other hand, sharing resources poses the risk of information leakage. Currently, there is no guarantee that security objectives stated in Service Level Agreements (SLA) are indeed fulfilled. Consequently, when using the cloud, the client is forced to blindly trust the provider's mechanisms and configuration [9].
A preliminary version of this paper was published as an extended abstract in [7].
However, this is accompanied by the risk of data leakage and industrial espionage due to a malicious insider at the provider or due to other customers with whom they share physical resources in the cloud [32]. Example applications that need to protect sensitive data include, but are not limited to, the processing of personal health records or payroll databases. Accesses usually do not occur very frequently, but they need to be processed very fast while the privacy of the data is preserved. Due to regulatory reasons, contractual obligations, or the protection of intellectual property, cloud clients require confidentiality of their outsourced data, assurance that computations on their data were processed correctly (verifiability), and assurance that no tampering happened (integrity). Secure outsourcing of arbitrary computations on data is particularly difficult to fulfill if the client does not trust the cloud provider at all. Some cryptographic methods allow specific computations on encrypted data [4,18], or allow storage to be outsourced securely and verifiably [24]. Secure computation of arbitrary functions, e.g., arbitrary statistics or queries, on confidential data can be achieved based on fully homomorphic encryption, as shown in [10,8]. However, these schemes are not yet usable in practice due to their poor efficiency. Furthermore, in a multi-client scenario, cryptography alone is not sufficient and additional assumptions have to be made, such as using tamper-proof hardware [42]. Still, secure hardware, which provides a shielded execution environment, does not scale well as it is expensive and relatively slow.

Our Approach. We propose a model for secure computation of arbitrary functions with low latency using two clouds (twins). The resource-constrained Trusted Cloud is used for pre-computations, whereas the untrusted but powerful Commodity Cloud is used to achieve low latency (cf. Fig. 1). Our approach allows us to separate the computations into their security and performance aspects: security-critical operations are performed by the Trusted Cloud in the Setup Phase, whereas performance-critical operations are performed on encrypted data in parallel by the Commodity Cloud in the Query Phase. Analogous to electricity, this can be seen as a battery that can be charged overnight with limited amperage and later provides energy rapidly during discharge.
Fig. 1. Twin Clouds model with Client, Trusted Cloud, and Commodity Cloud
In the Setup Phase, the Trusted Cloud encrypts the outsourced data and programs using Garbled Circuits (GC) [43], which requires only symmetric cryptographic operations and a constant amount of memory. In the time-critical Query Phase, the Trusted Cloud verifies the results computed by the Commodity Cloud under encryption.
Our proposed solution is transparent, as the Client uses the Trusted Cloud as a proxy that provides a clearly defined interface to manage the outsourced data, programs, and queries. We minimize the communication over the secure channel (e.g., SSL/TLS) between the Client and the Trusted Cloud.

Outline and Contribution. After summarizing related work in §2 and preliminaries in §3, we present the following contributions in the respective sections. In §4 we present our model for secure outsourcing of data and arbitrary computations with low latency using two clouds; the Trusted Cloud is mostly involved in the Setup Phase, while queries are evaluated under encryption and in parallel by the untrusted Commodity Cloud. In §5 we give an instantiation of our model based on GCs, the currently most efficient method for secure computation. Our proposed solution has several advantages over previous proposals (cf. §2):
1. Communication Efficiency. We minimize the communication between the client and the Trusted Cloud, as only a program, i.e., a very compact description of the function, is transferred and compiled on-the-fly into a circuit.
2. Transparency. The client communicates with the Trusted Cloud over a secure channel and clear interfaces that abstract from the underlying cryptography.
3. Scalability and Low Latency. Our approach is highly scalable, as both clouds can be composed from multiple nodes. In the Query Phase, the Trusted Cloud performs only few computations (independent of the function's size).
4. Multiple Clients. Our protocols can be extended to multiple clients such that the Commodity Cloud securely and non-interactively computes on the clients' input data.
2 Related Work
Here, we summarize related work for secure outsourcing of storage and arbitrary computations based on Trusted Computing (§2.1), Secure Hardware (§2.2), Secure Computation (§2.3), and Architectures for Secure Cloud Computing (§2.4).

2.1 Trusted Computing
The most prominent approach to Trusted Computing technology was specified by the Trusted Computing Group (TCG) [40]. The TCG proposes to extend common computing platforms with trusted components in software and hardware, which enable the integrity measurement of the platform’s software stack at boot-/load-time (authenticated boot, [35]) and the secure reporting of these measurements to a remote party (remote attestation, [14]). Thus, it provides the means to achieve verifiability and transparency of a trusted platform’s software state. Trusted Computing enables the establishment of trusted execution environments in commodity cloud infrastructures [37,36]. However, the reliable and efficient attestation of execution environments at run-time remains an open research problem. Trusted Computing is orthogonal to our approach and could be used to augment the Trusted Cloud with attestation capabilities.
2.2 Secure Hardware / HSMs
Cryptographic co-processors, such as the IBM 4765 or 4764 [19], provide a high-security, tamper-resistant execution environment for sensitive cryptographic operations. Such co-processors are usually certified, e.g., according to FIPS or Common Criteria. Hardware Security Modules (HSM) or Smartcards additionally provide a secure execution environment to execute custom programs. As secure hardware is usually expensive, relatively slow, and provides only a limited amount of secure memory and storage, it does not qualify as a building block for a cost-efficient, performant, and scalable cloud computing infrastructure.

2.3 Secure Computation
Secure computation allows mutually distrusting parties to securely perform computations on their private data without involving a trusted third party. Existing approaches for secure computation are either based on computing with encrypted functions (called Garbled Circuits), or computing on encrypted data (using homomorphic encryption) as summarized in the following.

Garbled Circuits. Yao's Garbled Circuits (GC) [43] allow secure computation with encrypted functions. On a high level, one party (called constructor) "encrypts" the function to be computed using symmetric cryptography and later, the other party (called evaluator) decrypts the function using keys that correspond to the input data (called "garbled values"). We give a detailed description of GCs later in §3.2. Although GCs are very efficient as they use only symmetric cryptographic primitives, their main disadvantage is that each GC can be evaluated only once and its size is linear in the size of the evaluated function. As used in several works (e.g., [28,1,13,23,17]), the trusted GC creator can generate GCs in a setup phase and subsequently GCs are evaluated by one or more untrusted parties. Afterwards, the GC creator can verify efficiently that the computations indeed have been performed correctly (verifiability). Our protocols in §5 use GCs in commodity clouds that are composed from off-the-shelf hardware. In particular our protocols do not require that the cloud is equipped with trusted hardware modules (as proposed in [20,21,34,26]), while they could benefit from hardware accelerators such as FPGAs or GPUs (cf. [23]).

Homomorphic Encryption. Homomorphic Encryption (HE) allows to compute on encrypted data without using additional helper information. Traditional HE schemes are restricted to specific operations (e.g., multiplications for RSA [33], additions for Paillier [29], or additions and up to one multiplication for [6]). They allow to outsource specific computations, e.g., encryption and signatures [18], to untrusted workers, but require interaction to compute arbitrary functions. Recently, Fully HE (FHE) schemes have been proposed for arbitrary computations on encrypted data [11,38,41]. When combined with GCs for verifiability (cf. above), FHE allows to securely outsource data and arbitrary computations [10,8]. However, FHE is not yet sufficiently efficient to be used in practical applications [38,12].
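To make the additively homomorphic property mentioned above concrete, the following is a didactic sketch of Paillier encryption with toy parameters; the small primes are illustrative assumptions only, and real deployments use moduli of 1024 bits or more.

# Didactic sketch of Paillier's additive homomorphism:
# Enc(m1) * Enc(m2) mod n^2 decrypts to m1 + m2.
import math, secrets

p, q = 101, 103                       # toy primes, illustration only
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)          # Carmichael's lambda(n)
g = n + 1                             # standard choice of generator
mu = pow(lam, -1, n)                  # valid because g = n + 1

def encrypt(m):
    r = secrets.randbelow(n - 1) + 1
    while math.gcd(r, n) != 1:
        r = secrets.randbelow(n - 1) + 1
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    u = pow(c, lam, n2)
    return ((u - 1) // n) * mu % n

c_sum = (encrypt(3) * encrypt(39)) % n2   # multiply ciphertexts ...
assert decrypt(c_sum) == 42               # ... to add plaintexts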
Multiple Data Owners. The setting of secure outsourcing of data and computations can be generalized to multiple parties who provide their encrypted inputs to an untrusted server that non-interactively computes the verifiably correct result under encryption. However, using cryptography alone, this is only possible for specific functions [15], but not arbitrary ones [42]. This impossibility result can be overcome by using a trusted third party, in our case the Trusted Cloud.

2.4 Architectures for Secure Cloud Computing
We combine advantages of the following architectures for secure cloud computing.

An architecture for Signal Processing in the Encrypted Domain (SPED) in commodity computing clouds is described in [39]. SPED is based on cryptographic concepts such as secure multiparty computation or homomorphic encryption, which enable the secure and verifiable outsourcing of the signal processing. The authors propose a middleware architecture on top of a commodity cloud which implements secure signal processing by using SPED technologies. The client communicates via a special API, provided by a client-side plugin, with the middleware in order to submit new inputs and retrieve results. However, the authors do not elaborate on how to instantiate their protocols efficiently and do not address open questions regarding the feasibility of their approach. For instance, if GCs are used, they need to be transferred between the client-side plugin and the middleware, which requires a large amount of communication. We parallelize the client plugin within the Trusted Cloud, provide a clear API that abstracts from cryptographic details, and give complete protocols.

Another architecture for secure cloud computing was proposed in [34]. The authors propose to use a tamper-proof hardware token which generates GCs in a setup phase that are afterwards evaluated in parallel by the cloud. The token receives the description of a boolean circuit and generates the corresponding GC using a constant amount of memory (using the protocol of [22]). The hardware token is integrated into the infrastructure of the cloud service provider either in form of a Smartcard provided by the client, or as a cryptographic co-processor. We overcome several restrictions of this architecture by transferring smaller program descriptions instead of boolean circuits, virtualizing the hardware token in the Trusted Cloud, and providing a clear API for the client.

This idea of secure outsourcing of data and computations based on a tamper-proof hardware token was extended to the multi-cloud scenario in [26]. In this scenario, multiple non-colluding cloud providers are equipped with a tamper-proof hardware token each. On a conceptual level, the protocol of [26] is similar to that of [34]: the token outputs helper information, i.e., multiplication tuples (resp. garbled tables in [34]), to the associated untrusted cloud provider who uses this information within a secure multi-party computation protocol executed among the cloud providers (resp. for non-interactive computation under encryption) based on additive secret-sharing (resp. garbled circuits). The tokens in both protocols need to implement only symmetric cryptographic primitives (e.g., AES or SHA) and require only a constant amount of memory. In contrast, our Twin Clouds protocol is executed between two clouds (one trusted and one untrusted) and does not require trusted hardware.
3 Preliminaries
Our constructions make use of the following building blocks.

3.1 Encryption and Authentication
Confidentiality and authenticity of data can be guaranteed with symmetric cryptography: either with a combination of symmetric encryption (e.g., AES) and a Message Authentication Code (MAC, e.g., HMAC), or by using authenticated encryption, a special mode of operation of a block cipher (e.g., EAX [5]).

Notation. $\bar{x} = \mathrm{AuthEnc}(x)$ denotes the authentication and encryption of data $x$; $x = \mathrm{DecVer}(\bar{x})$ denotes the corresponding verification and decryption process.
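As an illustration, the following is a minimal sketch of an AuthEnc/DecVer pair. It uses AES-GCM from the Python cryptography package instead of the EAX mode mentioned above, simply because GCM is the authenticated-encryption mode most readily available; the function names mirror the notation of this section and are not part of any existing API.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def auth_enc(key: bytes, data: bytes) -> bytes:
    """AuthEnc: encrypt and authenticate data under a symmetric key."""
    nonce = os.urandom(12)                      # 96-bit nonce, never reused per key
    return nonce + AESGCM(key).encrypt(nonce, data, None)

def dec_ver(key: bytes, blob: bytes) -> bytes:
    """DecVer: verify the authentication tag and decrypt; raises on tampering."""
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

key = AESGCM.generate_key(bit_length=128)
assert dec_ver(key, auth_enc(key, b"outsourced data D")) == b"outsourced data D"
```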
3.2 Garbled Circuits (GC)
Arbitrary functions can be computed securely based on Yao’s Garbled Circuits (GC) [43]. Compared to FHE (cf. §2.3), GCs are highly efficient as they use only symmetric cryptographic primitives but require helper information (cf. Fig. 2).
Fig. 2. Overview of Garbled Circuits: the constructor garbles the function f (CreateGC) and the data x (Garble); the evaluator computes the garbled output (EvaluateGC), which the constructor can Verify and decrypt into y = f(x).
The main idea of GCs is that the constructor generates an encrypted version of the function $f$ (represented as a boolean circuit), called garbled circuit $\tilde{f}$. For this, it assigns to each wire $W_i$ of $f$ two randomly chosen garbled values $\tilde{w}_i^0, \tilde{w}_i^1$ that correspond to the respective values 0 and 1. Note that $\tilde{w}_i^j$ does not reveal any information about its plain value $j$ as both keys look random. Then, for each gate of $f$, the constructor creates helper information in form of a garbled table $T_i$ that allows to decrypt only the output key from the gate's input keys (details below). The garbled circuit $\tilde{f}$ consists of the garbled tables of all gates. Later, the evaluator obtains the garbled values $\tilde{x}$ corresponding to the inputs $x$ of the function and evaluates the garbled circuit $\tilde{f}$ by evaluating the garbled gates one-by-one using their garbled tables. Finally, the evaluator obtains the corresponding garbled output values $\tilde{y}$ which allow the constructor to decrypt them into the corresponding plain output $y = f(x)$.
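The following toy sketch (our own simplification, not the construction of [25]) garbles a single AND gate: each of the four input-key combinations encrypts the corresponding output key by XORing it with a hash, and the evaluator recognizes the correct table row by a block of trailing zero bytes. Optimizations such as point-and-permute and free XOR are deliberately omitted.

```python
import os, hashlib, random

KEYLEN, PAD = 16, b"\x00" * 16

def H(*parts: bytes) -> bytes:
    return hashlib.sha256(b"|".join(parts)).digest()        # 32 bytes

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def garble_and_gate(gate_id: bytes):
    # Two random keys per wire, representing the bit values 0 and 1.
    wa = [os.urandom(KEYLEN) for _ in range(2)]
    wb = [os.urandom(KEYLEN) for _ in range(2)]
    wc = [os.urandom(KEYLEN) for _ in range(2)]
    table = []
    for a in (0, 1):
        for b in (0, 1):
            row = xor(H(gate_id, wa[a], wb[b]), wc[a & b] + PAD)
            table.append(row)
    random.shuffle(table)                                    # hide the row order
    return wa, wb, wc, table

def eval_and_gate(gate_id: bytes, ka: bytes, kb: bytes, table):
    for row in table:                                        # try all 4 rows
        cand = xor(H(gate_id, ka, kb), row)
        if cand.endswith(PAD):                               # correct row found
            return cand[:KEYLEN]
    raise ValueError("no valid row")

wa, wb, wc, tbl = garble_and_gate(b"G1")
out = eval_and_gate(b"G1", wa[1], wb[1], tbl)                # evaluate 1 AND 1
assert out == wc[1]
```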
Security and Verifiability. GCs are secure against a malicious evaluator (cf. [13]), and demonstration of valid output keys implicitly proves that the computation was performed correctly (cf. [10]). To guarantee security and verifiability, a GC can be evaluated only once, i.e., a new GC must be created for each evaluation.

Efficient GC Constructions. The efficient GC construction of [25], provably secure in the random oracle model, provides "free XOR" gates, i.e., XOR gates have no garbled table and negligible cost for evaluation. For each 2-input non-XOR gate the garbled table has size ≈ 4t bits, where t is the symmetric security parameter (e.g., t = 128); creation of the garbled table requires 4 invocations of a cryptographic hash function (e.g., SHA-256) and evaluation needs 1 invocation. As shown in [22], generation of GCs requires only a constant amount of memory (independent of the size of the evaluated function) and only symmetric cryptographic operations (e.g., SHA-256). The implementation results of [31] show that evaluation of GCs can be performed efficiently on today's hardware: GC evaluation of the reasonably large AES functionality (22,546 XOR; 11,334 non-XOR gates) took 2 s on a single core of an Intel Core 2 Duo with 3.0 GHz.

Notation. $\tilde{x}$ is the garbled value corresponding to $x$. $\tilde{C}$ is the GC for boolean circuit $C$ (with $|C|$ non-XOR gates). $\tilde{y} = \tilde{C}(\tilde{x})$ denotes evaluation of $\tilde{C}$ on $\tilde{x}$.
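To put these numbers into perspective, here is a back-of-the-envelope calculation (our own, based only on the ≈ 4t bits per non-XOR gate stated above) of the size of a single garbled AES circuit:

$$|\tilde{C}_{\mathrm{AES}}| \approx 4t \cdot |C| = 4 \cdot 128 \cdot 11{,}334 \approx 5.8 \cdot 10^{6}\ \text{bits} \approx 0.7\ \text{MB}.$$

Since a fresh GC is needed for every evaluation, this per-evaluation transfer cost is exactly what the high-bandwidth channel between Trusted Cloud and Commodity Cloud in §4 is meant to absorb.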
3.3 Circuit Compiler
The functions to be computed securely can be expressed in a compact way in a hardware description language and compiled automatically into a boolean circuit. A prominent example is Fairplay’s [27] Secure Function Description Language (SFDL) which resembles a simplified version of a hardware description language, e.g., Verilog or VHDL (Very high speed integrated circuit Hardware Description Language), and supports types, variables, functions, boolean operators (∧, ∨, ⊕, . . . ), arithmetic operators (+, −), comparison (<, ≥, =, . . . ), and control structures like if-then-else or for-loops with constant range. Other candidates for compact description and compilation into boolean circuits are the languages and tools provided by [30,16]. As shown in [16], the compilation into a circuit can be implemented with a low memory footprint. In principle, it would be possible to compile algorithms formulated in any standard programming language such as C or Java into a boolean circuit, as every computable function can be expressed as boolean circuit of polynomial size. Notation. C = Compile(P ) denotes compilation of program P into circuit C.
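As a very rough illustration of what such a compiler produces, the following sketch (our own, not Fairplay's SFDL or its compiler) turns the equality test of two k-bit inputs into a flat gate list of XOR, NOT and AND gates, i.e., the kind of boolean-circuit representation that is subsequently garbled:

```python
def compile_equality(k: int):
    """Compile 'a == b' for two k-bit inputs into a gate list.

    Wires 0..k-1 hold the bits of a, wires k..2k-1 the bits of b.
    Each gate is (operation, input wires, output wire); the last
    output wire carries the single result bit.
    """
    gates, next_wire = [], 2 * k
    eq_bits = []
    for i in range(k):
        x = next_wire; next_wire += 1
        gates.append(("XOR", (i, k + i), x))     # do a and b differ in bit i?
        y = next_wire; next_wire += 1
        gates.append(("NOT", (x,), y))           # equal in bit i
        eq_bits.append(y)
    acc = eq_bits[0]
    for y in eq_bits[1:]:                        # AND-tree over all bit equalities
        z = next_wire; next_wire += 1
        gates.append(("AND", (acc, y), z))
        acc = z
    return gates, acc

gates, out = compile_equality(4)
non_xor = sum(1 for op, *_ in gates if op == "AND")
print(len(gates), "gates,", non_xor, "non-free AND gates, output wire", out)
```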
4 Twin Clouds Model
Our Twin Clouds model, depicted in Fig. 1 on page 33, allows secure outsourcing of data and arbitrary computations with low latency to an untrusted commodity cloud. In our model, the Client makes use of the services offered by a cloud service provider to outsource its data and computations thereon into the provider’s
Commodity Cloud in a secure way. The confidentiality and the integrity of the outsourced data must be protected against a potentially malicious provider, and the correctness of the outsourced computations must be verifiable by the Client. Due to the assumed large size of the Client's data and/or the computational complexity of the computations thereon, it is not possible to securely outsource the data to the Commodity Cloud and let the Client execute its computations locally after retrieving the entire data. Instead, the computations must be performed by the Commodity Cloud without interaction with the Client.

To achieve these goals and satisfy the above-mentioned security requirements, the Twin Clouds model uses a Trusted Cloud as proxy between the Client and the Commodity Cloud. The Trusted Cloud provides a resource-restricted execution environment and infrastructure that is fully trusted by the Client. As the resources of the Trusted Cloud are restricted, relatively expensive, and potentially slow, the computations cannot be performed within the Trusted Cloud either. Instead, the Trusted Cloud is a transparent proxy that adds the needed security properties (integrity, confidentiality, verifiability) on top of the services provided by the fast but insecure Commodity Cloud. It provides an interface for secure storage and computations to the Client while abstracting from the service provider's cloud infrastructure. This interface (e.g., a web-frontend or API) allows data, programs, and queries to be securely submitted, stored, and computed. The low-bandwidth connection between Client and Trusted Cloud is protected by a secure channel (e.g., SSL/TLS).

The Trusted Cloud is used mostly during a Setup Phase, but performs only few computations during the time-critical Query Phase. It is assumed to have only a small amount of storage; if larger amounts of data need to be stored, they can be securely outsourced to the Commodity Cloud's untrusted storage. To allow this secure outsourcing of storage, the Trusted Cloud is connected to the Commodity Cloud over an unprotected high-bandwidth channel. A possible instantiation of the Trusted Cloud is a private cloud of the Client (e.g., his existing IT infrastructure). Alternatively, the Trusted Cloud could be a cluster of virtualized cryptographic co-processors (e.g., the IBM 4765 [19] or other Hardware Security Modules) which are offered as a service by a third party and which provide the necessary hardware-based security features to implement a secure remote execution environment trusted by the Client.
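The interface mentioned above is only described informally here; the sketch below is a hypothetical shape for it (all class, method, and parameter names, as well as the endpoint URL, are ours) illustrating how a Client might talk to the Trusted Cloud without seeing any cryptographic detail:

```python
from dataclasses import dataclass, field

@dataclass
class TrustedCloudClient:
    """Hypothetical client-side view of the Trusted Cloud interface."""
    endpoint: str                                  # e.g. "https://trusted-cloud.example"
    _store: dict = field(default_factory=dict)     # stands in for the remote state

    def put_data(self, name: str, data: bytes) -> None:
        # Sent over the SSL/TLS channel; the Trusted Cloud applies AuthEnc
        # and outsources the result to the Commodity Cloud.
        self._store[("data", name)] = data

    def put_program(self, name: str, source: str) -> None:
        # Program written in a compact hardware-description-like language (cf. §3.3).
        self._store[("program", name)] = source

    def query(self, program: str, data: str, q: bytes) -> bytes:
        # In the real system this triggers garbled evaluation in the
        # Commodity Cloud; here we only model the call signature.
        raise NotImplementedError("placeholder for r = P(q, D)")

client = TrustedCloudClient("https://trusted-cloud.example")
client.put_data("D", b"large outsourced database")
client.put_program("P", "function r = count_matches(q, D) { ... }")  # placeholder source
```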
5 Twin Clouds Protocols
To efficiently instantiate the Twin Clouds model of §4 we use a “battery” for secure computations: In the Setup Phase, the battery is charged by precomputing encrypted (garbled) data and functions within the resource-limited Trusted Cloud. Later, in the Query Phase, the battery is rapidly discharged by evaluating these encryptions in parallel within the Commodity Cloud. Simplification. To ease presentation, we assume a single client who outsources a single program P . However, our protocols naturally extend to multiple programs and clients. We also assume that the Trusted Cloud takes appropriate
measures to protect against replay attacks, e.g., an internal database of randomly chosen keys for each authenticated encryption and GC with associated garbled data.

Interface. The Client accesses the Trusted Cloud over a secure channel and the following interface which abstracts from all underlying cryptographic details: During the Setup Phase, the Client provides the data D to be outsourced and the program P (formulated in a Hardware Description Language, cf. §3.3) to be computed. Later, in the Query Phase, the Client issues a query q which should be processed as fast as possible, resulting in the response r = P(q, D) output to the Client. Additionally, the Client can update the stored data D or program P.

Protocol Overview. On a high level, our protocols work as follows: The Trusted Cloud stores the Client's data D and program P securely in the Commodity Cloud. Then, the Trusted Cloud retrieves D back, re-encrypts it into its garbled equivalent $\tilde{D}$, and generates GCs $\tilde{C}$ from P; both are stored in the Commodity Cloud. Later, the Client's query q is encrypted and sent to the Commodity Cloud, which computes the garbled result $\tilde{r} = \tilde{C}(\tilde{q}, \tilde{D})$ under encryption (using a pre-computed $\tilde{C}$ which is deleted afterwards). Finally, the Trusted Cloud verifies the garbled result and sends r = P(q, D) to the Client. We describe the details of the two phases next. Actions invoked by the Client are denoted by Latin letters and automatically triggered actions by Greek ones.
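To make the two phases concrete, here is a highly simplified end-to-end simulation of the protocol flow (our own sketch; auth_enc, garble_data, garble_program, evaluate, and verify are trivial stand-ins for the AuthEnc/Garble/EvaluateGC/Verify primitives of §3, not implementations of them, and provide no security whatsoever):

```python
# Trivial stand-ins so the sketch runs; they do NOT provide any security.
def auth_enc(keys, blob):            return ("enc", blob)
def garble_data(keys, blob):         return ("garbled", blob)
def garble_program(keys, program):   return ("gc", program)
def evaluate(gc, garbled_q, garbled_d):
    _, program = gc
    return ("garbled_result", program(garbled_q[1], garbled_d[1]))
def verify(keys, garbled_result):    return garbled_result[1]


class CommodityCloud:
    """Untrusted: stores blobs and evaluates garbled circuits in parallel."""
    def __init__(self):
        self.storage = {}

    def put(self, key, blob):
        self.storage[key] = blob

    def evaluate(self, gc, garbled_query):
        garbled_data = self.storage["D_garbled"]
        return evaluate(gc, garbled_query, garbled_data)   # r~ = C~(q~, D~)


class TrustedCloud:
    """Trusted proxy: small, holds the keys, garbles in the Setup Phase."""
    def __init__(self, commodity):
        self.commodity, self.keys = commodity, {}

    # --- Setup Phase --------------------------------------------------
    def setup(self, data, program):
        self.commodity.put("D_enc", auth_enc(self.keys, data))            # a)
        self.commodity.put("P_enc", auth_enc(self.keys, program))         # b)
        self.commodity.put("D_garbled", garble_data(self.keys, data))     # alpha)
        self.commodity.put("C_garbled", garble_program(self.keys, program))  # beta)

    # --- Query Phase --------------------------------------------------
    def query(self, q):
        gc = self.commodity.storage.pop("C_garbled")       # each GC is used only once
        garbled_result = self.commodity.evaluate(gc, garble_data(self.keys, q))
        return verify(self.keys, garbled_result)           # r = P(q, D), after Verify


commodity = CommodityCloud()
trusted = TrustedCloud(commodity)
trusted.setup(data=[3, 1, 4, 1, 5], program=lambda q, d: d.count(q))
print(trusted.query(1))   # -> 2, computed "in" the Commodity Cloud
```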
5.1 Setup Phase
The Setup Phase, depicted in Fig. 3, consists of the following use-cases.
Fig. 3. Setup Phase: a, b) Client registers data D and program P to be stored securely in the Commodity Cloud. α) Updates of D require re-generation of the garbled data $\tilde{D}$. β) Updates of P require re-generation of garbled circuits $\tilde{C}$.
a) Modify Data. When the Client provides new or modified data D to be outsourced (a1), D is stored securely as $\bar{D} = \mathrm{AuthEnc}(D)$ (cf. §3.1) in the Commodity Cloud (a2). Whenever D is modified, the garbled data $\tilde{D}$ is re-generated (cf. α below) and all pre-computed GCs $\tilde{C}$ can be deleted.
b) Modify Program. Whenever the Client provides a new or modified program P (b1), P is stored securely as $\bar{P} = \mathrm{AuthEnc}(P)$ (cf. §3.1) in the Commodity Cloud (b2). Whenever P is modified, all pre-computed GCs $\tilde{C}$ can be deleted.

α) Garble Data. Whenever D is changed, the garbled data $\tilde{D}$ must be re-generated. For this, the Trusted Cloud requests the securely stored data $\bar{D}$ from the Commodity Cloud (α1), recovers the data $D = \mathrm{DecVer}(\bar{D})$ (cf. §3.1), generates the corresponding garbled data $\tilde{D} = \mathrm{Garble}(D)$ (cf. §3.2), and stores this back into the Commodity Cloud (α2).

β) Garble Program. Whenever D or P is changed or the Trusted Cloud has capacities for pre-computations, new GCs are generated. For this, the Trusted Cloud requests the securely stored program $\bar{P}$ from the Commodity Cloud (β1), recovers the program $P = \mathrm{DecVer}(\bar{P})$ (cf. §3.1), compiles it into a boolean circuit $C = \mathrm{Compile}(P)$ (cf. §3.3), generates a new GC $\tilde{C} = \mathrm{Garble}(C)$ (cf. §3.2), and stores this back into the Commodity Cloud (β2).

5.2 Query Phase
The Query Phase, depicted in Fig. 4, consists of the following use-case:
Fig. 4. Query Phase: Client sends query q to the Trusted Cloud to be computed by the Commodity Cloud under encryption (c). The used GC $\tilde{C}$ is deleted afterwards.
c) Process Query. When the Client sends a query q for secure evaluation (c1), the Trusted Cloud converts q into its garbled equivalent $\tilde{q} = \mathrm{Garble}(q)$ (cf. §3.2) which is forwarded to the Commodity Cloud (c2). The Commodity Cloud computes the garbled response $\tilde{r} = \tilde{C}(\tilde{q}, \tilde{D})$ by evaluating a pre-computed GC $\tilde{C}$ (cf. §3.2) in parallel and deleting it afterwards. The garbled result $\tilde{r}$ is returned to the Trusted Cloud (c3) which verifies the correctness of the result $r = \mathrm{Verify}(\tilde{r})$ (cf. §3.2) and returns r = P(q, D) to the Client.

5.3 Analysis
In the following we analyze the security and efficiency properties of our protocols. Security Analysis. The security of our protocols stems from the fact that the Trusted Cloud is a secure execution environment, whereas the adversary can have full control over the Commodity Cloud and all communication channels.
More specifically, our protocols are secure against a malicious Commodity Cloud provider as well as external adversaries: The Commodity Cloud is neither able to successfully modify nor to learn the outsourced data $\bar{D}$ or program $\bar{P}$ as these are authenticated and encrypted (cf. §3.1). The security and verifiability properties of GCs (cf. §3.2) ensure that the Commodity Cloud also cannot successfully modify or learn $\tilde{q}$, $\tilde{D}$, $\tilde{C}$, $\tilde{r}$, or intermediate results of the computation. Clearly, the Commodity Cloud learns an upper bound on the size of all data, which can be circumvented by appropriate padding. The same holds true for external attackers, which also cannot interfere with the communication between Client and Trusted Cloud due to the usage of a secure channel (e.g., SSL/TLS).

Efficiency Analysis. The communication between Client and Trusted Cloud is minimized as only data and a compact program are transferred over the secure channel, while the communication between Trusted Cloud and Commodity Cloud is dominated by the transfer of $\tilde{C}$ of size ≈ 4t·|C| bits, where t is the symmetric security parameter, e.g., t = 128 (cf. §3.2). The Commodity Cloud's storage is dominated by t·(|D| + 4|C|) bits for $\tilde{D}$ and $\tilde{C}$, while the Trusted Cloud needs only low memory/storage. The dominating factors of the computation complexity are the 4|C| hash function evaluations by the Trusted Cloud in the Setup Phase and the |C| parallel hash function evaluations by the Commodity Cloud in the Query Phase. Note that many functionalities such as queries on or statistics over large databases naturally allow parallelization.

Finally, we would like to emphasize that our protocols can be used to securely outsource data and arbitrary computations thereon, use only symmetric-key cryptographic primitives, and do not rely on tamper-proof hardware. A prototype implementation to verify their practical efficiency is left as future work.

Acknowledgements. We thank Radu Sion for pointing out the analogy of our Twin Clouds model with a rechargeable battery that accumulates energy (computations) over some time and can then be discharged rapidly. This work was funded in part by the European Commission through the ICT program under contracts 257243 TClouds and 216676 ECRYPT II.
References
1. Algesheimer, J., Cachin, C., Camenisch, J., Karjoth, G.: Cryptographic security for mobile code. In: Security and Privacy, pp. 2–11. IEEE, Los Alamitos (2001)
2. Amazon: Elastic Block Store, EBS (2011), http://aws.amazon.com/ebs
3. Amazon: Elastic Compute Cloud, EC2 (2011), http://aws.amazon.com/ec2
4. Atallah, M., Pantazopoulos, K., Rice, J., Spafford, E.: Secure outsourcing of scientific computations. Advances in Computers 54, 216–272 (2001)
5. Bellare, M., Rogaway, P., Wagner, D.: The EAX mode of operation: A two-pass authenticated-encryption scheme optimized for simplicity and efficiency. In: Roy, B., Meier, W. (eds.) FSE 2004. LNCS, vol. 3017, pp. 389–407. Springer, Heidelberg (2004)
6. Boneh, D., Goh, E.-J., Nissim, K.: Evaluating 2-DNF formulas on ciphertexts. In: Kilian, J. (ed.) TCC 2005. LNCS, vol. 3378, pp. 325–341. Springer, Heidelberg (2005)
7. Bugiel, S., Nürnberger, S., Sadeghi, A.-R., Schneider, T.: Twin Clouds: An architecture for secure cloud computing (Extended Abstract). In: Workshop on Cryptography and Security in Clouds (WCSC 2011), March 15-16 (2011)
8. Chung, K.-M., Kalai, Y., Vadhan, S.: Improved delegation of computation using fully homomorphic encryption. In: Rabin, T. (ed.) CRYPTO 2010. LNCS, vol. 6223, pp. 483–501. Springer, Heidelberg (2010)
9. Cloud Security Alliance: Top threats to cloud computing, v. 1.0 (2010)
10. Gennaro, R., Gentry, C., Parno, B.: Non-interactive verifiable computing: outsourcing computation to untrusted workers. In: Rabin, T. (ed.) CRYPTO 2010. LNCS, vol. 6223, pp. 465–482. Springer, Heidelberg (2010)
11. Gentry, C.: Fully homomorphic encryption using ideal lattices. In: STOC 2009, pp. 169–178. ACM, New York (2009)
12. Gentry, C., Halevi, S.: Implementing Gentry's fully-homomorphic encryption scheme. In: Paterson, K.G. (ed.) EUROCRYPT 2011. LNCS, vol. 6632, pp. 129–148. Springer, Heidelberg (to appear, 2011)
13. Goldwasser, S., Kalai, Y.T., Rothblum, G.N.: One-time programs. In: Wagner, D. (ed.) CRYPTO 2008. LNCS, vol. 5157, pp. 39–56. Springer, Heidelberg (2008)
14. Trusted Computing Group: Trusted platform module (TPM) main specification (2007)
15. Halevi, S., Lindell, Y., Pinkas, B.: Secure computation on the web: Computing without simultaneous interaction. Cryptology ePrint Archive, Report 2011/157 (2011)
16. Henecka, W., Kögl, S., Sadeghi, A., Schneider, T., Wehrenberg, I.: TASTY: Tool for Automating Secure Two-partY computations. In: CCS, pp. 451–462. ACM, New York (2010)
17. Herzberg, A., Shulman, H.: Secure guaranteed computation. Cryptology ePrint Archive, Report 2010/449 (2010)
18. Hohenberger, S., Lysyanskaya, A.: How to securely outsource cryptographic computations. In: Kilian, J. (ed.) TCC 2005. LNCS, vol. 3378, pp. 264–282. Springer, Heidelberg (2005)
19. IBM: Cryptocards (2011), http://www-03.ibm.com/security/cryptocards/
20. Iliev, A.: Hardware-Assisted Secure Computation. PhD thesis, Dartmouth College, Hanover, NH, USA (2009)
21. Iliev, A., Smith, S.: Small, stupid, and scalable: secure computing with Faerieplay. In: Workshop on Scalable Trusted Computing (STC 2010), pp. 41–52. ACM, New York (2010)
22. Järvinen, K., Kolesnikov, V., Sadeghi, A.-R., Schneider, T.: Embedded SFE: Offloading server and network using hardware tokens. In: Sion, R. (ed.) FC 2010. LNCS, vol. 6052, pp. 207–221. Springer, Heidelberg (2010)
23. Järvinen, K., Kolesnikov, V., Sadeghi, A.-R., Schneider, T.: Garbled circuits for leakage-resilience: Hardware implementation and evaluation of one-time programs. In: Mangard, S., Standaert, F.-X. (eds.) CHES 2010. LNCS, vol. 6225, pp. 383–397. Springer, Heidelberg (2010)
24. Kamara, S., Lauter, K.: Cryptographic cloud storage. In: Sion, R., Curtmola, R., Dietrich, S., Kiayias, A., Miret, J.M., Sako, K., Sebé, F. (eds.) RLCPS, WECSR, and WLC 2010. LNCS, vol. 6054, pp. 136–149. Springer, Heidelberg (2010)
25. Kolesnikov, V., Schneider, T.: Improved garbled circuit: Free XOR gates and applications. In: Aceto, L., Damgård, I., Goldberg, L.A., Halldórsson, M.M., Ingólfsdóttir, A., Walukiewicz, I. (eds.) ICALP 2008, Part II. LNCS, vol. 5126, pp. 486–498. Springer, Heidelberg (2008)
26. Loftus, J., Smart, N.P.: Secure outsourced computation. In: Nitaj, A., Pointcheval, D. (eds.) AFRICACRYPT 2011. LNCS, vol. 6737, pp. 1–20. Springer, Heidelberg (to appear, 2011)
27. Malkhi, D., Nisan, N., Pinkas, B., Sella, Y.: Fairplay – a secure two-party computation system. In: Security, pp. 287–302. USENIX (2004)
28. Naor, M., Pinkas, B., Sumner, R.: Privacy preserving auctions and mechanism design. In: Electronic Commerce (EC 1999), pp. 129–139. ACM, New York (1999)
29. Paillier, P.: Public-key cryptosystems based on composite degree residuosity classes. In: Stern, J. (ed.) EUROCRYPT 1999. LNCS, vol. 1592, pp. 223–238. Springer, Heidelberg (1999)
30. Paus, A., Sadeghi, A.-R., Schneider, T.: Practical secure evaluation of semi-private functions. In: Abdalla, M., Pointcheval, D., Fouque, P.-A., Vergnaud, D. (eds.) ACNS 2009. LNCS, vol. 5536, pp. 89–106. Springer, Heidelberg (2009)
31. Pinkas, B., Schneider, T., Smart, N., Williams, S.: Secure two-party computation is practical. In: Matsui, M. (ed.) ASIACRYPT 2009. LNCS, vol. 5912, pp. 250–267. Springer, Heidelberg (2009)
32. Ristenpart, T., Tromer, E., Shacham, H., Savage, S.: Hey, you, get off of my cloud: exploring information leakage in third-party compute clouds. In: CCS 2009, pp. 199–212. ACM, New York (2009)
33. Rivest, R.L., Shamir, A., Adleman, L.: A method for obtaining digital signatures and public-key cryptosystems. Comm. ACM 21, 120–126 (1978)
34. Sadeghi, A.-R., Schneider, T., Winandy, M.: Token-based cloud computing: Secure outsourcing of data and arbitrary computations with lower latency. In: Acquisti, A., Smith, S.W., Sadeghi, A.-R. (eds.) TRUST 2010. LNCS, vol. 6101, pp. 417–429. Springer, Heidelberg (2010)
35. Sailer, R., Zhang, X., Jaeger, T., Van Doorn, L.: Design and implementation of a TCG-based integrity measurement architecture. In: Security. USENIX (2004)
36. Santos, N., Gummadi, K., Rodrigues, R.: Towards trusted cloud computing. In: Hot Topics in Cloud Computing (HotCloud 2009). USENIX (2009)
37. Schiffman, J., Moyer, T., Vijayakumar, H., Jaeger, T., McDaniel, P.: Seeding clouds with trust anchors. In: CCSW 2010, pp. 43–46. ACM, New York (2010)
38. Smart, N.P., Vercauteren, F.: Fully homomorphic encryption with relatively small key and ciphertext sizes. In: Nguyen, P.Q., Pointcheval, D. (eds.) PKC 2010. LNCS, vol. 6056, pp. 420–443. Springer, Heidelberg (2010)
39. Troncoso-Pastoriza, J.R., Pérez-González, F.: CryptoDSPs for cloud privacy. In: Workshop on Cloud Information System Engineering, CISE 2010 (2010)
40. Trusted Computing Group (2011), http://www.trustedcomputinggroup.org
41. van Dijk, M., Gentry, C., Halevi, S., Vaikuntanathan, V.: Fully homomorphic encryption over the integers. In: Gilbert, H. (ed.) EUROCRYPT 2010. LNCS, vol. 6110, pp. 24–43. Springer, Heidelberg (2010)
42. Van Dijk, M., Juels, A.: On the impossibility of cryptography alone for privacy-preserving cloud computing. In: HotSec 2010, pp. 1–8. USENIX (2010)
43. Yao, A.C.-C.: How to generate and exchange secrets. In: FOCS 1986, pp. 162–167. IEEE, Los Alamitos (1986)
Implementation Aspects of Anonymous Credential Systems for Mobile Trusted Platforms

Kurt Dietrich, Johannes Winter, Granit Luzhnica, and Siegfried Podesser

Institute for Applied Information Processing and Communications, Graz University of Technology, Inffeldgasse 16a, 8010 Graz, Austria
{Kurt.Dietrich,Johannes.Winter,Siegfried.Podesser}@iaik.tugraz.at
[email protected]
Abstract. Anonymity and privacy protection are very important issues for Trusted Computing enabled platforms. Protection mechanisms are required in order to hide the activities of trusted platforms when performing cryptography-based transactions over the Internet, which would otherwise compromise the platform's privacy and with it the user's anonymity. In order to address this problem, the Trusted Computing Group (TCG) has introduced two concepts addressing the question of how the anonymity of Trusted Platform Modules (TPMs) and their enclosing platforms can be protected. The more promising of these two concepts is the Direct Anonymous Attestation (DAA) scheme, which eliminates the requirement of a remote authority but involves complex mathematical computations. Moreover, DAA requires a comprehensive infrastructure consisting of various components in order to allow anonymous signatures to be used in real-world scenarios. In this paper, we discuss the results of our analysis of an infrastructure for anonymous credential systems which is focused on the Direct Anonymous Attestation (DAA) scheme as specified by the TCG. For the analysis, we especially focus on mobile trusted platforms and their requirements. We discuss our experiences and experimental results when designing and implementing the infrastructure, give suggestions for improvements, and propose concepts and models for - from our point of view - missing components.
1 Introduction
The anonymity of trusted platforms is a crucial topic in Trusted Computing. The use of common digital signature schemes requires complex public-key infrastructures and allows adversaries to track and identify certain signing platforms. In order to address this problem, the TCG has introduced two schemes to protect the anonymity of Trusted Platform Modules (TPMs) and with it the anonymity of their host platforms and users. One of these schemes - the Privacy CA (PCA) scheme - relies on standard public-key infrastructures and performs the anonymization step in collaboration with a trusted third party. The other,
from our point of view, more promising scheme is Direct Anonymous Attestation (DAA), which eliminates the requirement of a remote authority but requires complex mathematical computations. Although large effort has been put into researching the efficient generation of anonymous signatures, little attention has been paid to the supporting infrastructures which are essential components for deploying anonymous credential technology in the field.

When designing and developing infrastructures for the DAA scheme, one faces several obstacles. The first problem is that different protocols exist. Different DAA schemes have been proposed, for example in [3], [10] and [5]. These schemes are either variations of the original RSA-based DAA protocol or schemes based on different cryptographic primitives. Second, different clients have to be supported. DAA client platforms may range from server systems and desktop PCs down to mobile phones or smart-cards. Each of these platforms has different processing capabilities and, therefore, different requirements on the used DAA scheme. The third obstacle is credential revocation. It is easy to see that revoking an anonymous credential is a complex task. Common approaches either involve revocation authorities or rogue tagging mechanisms. However, rogue tagging typically implies drawbacks like high resource requirements that may not be available on mobile platforms. Suitable infrastructures that take all these issues into account are missing.

In order to address these problems, we have designed and implemented a DAA infrastructure. To this end, we introduce a Join protocol specification that supports the different variations of DAA schemes mentioned before. Moreover, we introduce an online revocation check mechanism that allows us to move the rogue tagging check from the client to a trusted third party. Although research has been done focusing on specific problems of DAA on mobile platforms ([7], [16]), investigations of the feasibility of DAA with all its aspects in the sense of a complete infrastructure with revocation etc. are missing. However, such infrastructures are especially interesting when taking new applications of DAA in the mobile world, such as anonymous authentication via NFC [1] or transport layer security (TLS) [8], into account. Therefore, we provide an investigation of the practicability and requirements of DAA and its infrastructure for mobile platforms. For our analysis, we have chosen the RSA-based DAA scheme as specified by the Trusted Computing Group (TCG) [14], as this is currently the only existing public version of a DAA scheme which is standardized by a public committee.

1.1 Related Work
Few publications can be found that address the topic of infrastructures for anonymous credentials. The most important one is the paper by Camenisch et al. [4], which deals with the design and the implementation of the idemix credential system prototype. The publication addresses the protocols and credentials used in the idemix system, which is related to DAA.

The remainder of this article is organized as follows. We discuss the overall architecture of our implementation and give detailed results of the server and client components, where we put special emphasis on the rogue tagging mechanism.
Moreover, we discuss our proposed protocol with support for different types of clients and different schemes, and we address the topic of credential revocation by proposing a modification of the Online Certificate Status Protocol (OCSP).
2 Overview
The Direct Anonymous Attestation (DAA) scheme is a mechanism to provide trusted information to a challenger about a platform's integrity without compromising the platform's privacy. This can be achieved by applying an anonymous group signature on the attestation information provided by the TPM. The DAA model involves three parties: the issuer or group manager, the signer (or client) and a verifier. The issuer manages the group, controls which platform may enter the group, and creates a group credential for the entering platforms. Moreover, the issuer defines and publishes a set of group parameters which allows verifiers to validate signatures that have been created by group members.

Briefly summarized, an anonymous attestation scenario works as follows: on startup, the client platform, which consists of a host (i.e., a PC platform) and a TPM, executes a TCG based authenticated boot and creates measurement values of the loaded software image, which are then stored in the TPM. The TPM now contains the current configuration of the platform. When the platform wants to prove this configuration to a remote platform, it generates a signature on these hash values with a so-called attestation key. The attestation key is a special RSA key which is created and used inside the TPM. The TPM ensures that the private parts of attestation keys cannot leave the TPM and that attestation keys are only used for signing configuration values. The platform sends the signed configuration values to the remote verifier which is then able to validate the local platform's configuration with the public part of the attestation key. Moreover, the remote platform can trust that the signature was generated in a genuine TPM by validating a DAA signature on the public attestation key.

For each attestation process, a new key has to be created and certified. This is where the DAA signature comes into play. Instead of sending the public key to a Privacy CA every time, the TPM can locally certify its attestation key using an anonymous signature on the key. When using anonymous signatures to certify attestation keys, the TPM first creates a temporary RSA key-pair. The public part of this temporary RSA key is locally signed with a DAA signature. Hence, the TPM is able to locally certify its temporary attestation key. This temporary attestation key is then used to create a signature on the configuration values. When validating a signature over the configuration values, the remote verifier first has to verify the DAA group signature on the temporary attestation key. Before a TPM can create DAA signatures, it has to generate a private DAA key f and execute the Join protocol with the group manager in order to receive a group credential for f. Once the TPM has joined a group, it is able to generate DAA signatures to locally certify public keys held by the TPM.

In order to analyze the efficiency of the scheme on mobile platforms, we developed an infrastructure that provides issuer, client and verifier components.
Our DAA infrastructure is based on a client-server architecture consisting of different components which include an issuer, a trusted third party (TTP) for long-term certification of the issuer's public key, a DAA status responder and a certificate authority (CA) that issues credentials for the TTP, the responder and the revocation authority. Moreover, the CA issues credentials for all players in the infrastructure to enable end-point authentication for the transport layer security (TLS) endpoints. Furthermore, it provides the corresponding revocation information for all issued certificates. All communication is established using the TLS protocol in order to provide authenticity of the endpoints and confidentiality and integrity of the transmitted data. Although our design and proposed protocol aim at supporting different clients, our current implementation focuses on desktop systems and mobile phones.

2.1 The Issuer
The issuer is responsible for generating the group credentials and for issuing the credentials to client platforms. Before a client can join the group, it has to be authenticated, so that only authorized clients can join the group. Moreover, the issuer has to perform a rogue and revocation check of the client's TPM. Therefore, the issuer maintains a list with the Endorsement Keys (EKs) of all clients which are allowed to join. However, there are some cases where the issuer would allow arbitrary clients to join. Therefore, we introduce two modes of operation for the issuer:

1. Public Access mode - Clients can join even if their EK is not on the issuer's list. Their key will be put on the list on the fly during execution of the Join protocol in order to be able to monitor who is in the group.
2. Private Access mode - Only clients that have an authorized EK on the issuer's list can join and request the corresponding group credentials from the issuer.

In our approach, we focus on the DAA protocol discussed in [10] which is available on many different platforms [2], [13], [7].

Issuer Setup Protocol. Before the issuer can start to operate, it has to generate its group credentials, which is done during the setup phase where all parameters required for the issuer are generated. The parameters include values for exclusive use by the issuer (e.g., the issuer's private key) and public values like the issuer's public key. The public values also include a non-interactive proof showing that the issuer's parameters have been generated correctly. The parameters are generated according to the following protocol:

1. The issuer chooses a modulus $n$ of length $l_n$ and primes $p, q, p', q'$ such that $n = pq$, $p = 2p' + 1$, $q = 2q' + 1$.
2. Next, it chooses random integers $x_0, x_1, x_z \in [1, p'q']$ and $x \in [1, n]$, which will be used to generate the proof, and computes $S = x^2 \bmod n$, $Z = S^{x_z} \bmod n$, $R_0 = S^{x_0} \bmod n$ and $R_1 = S^{x_1} \bmod n$.
3. The issuer produces a non-interactive proof that $S$, $Z$, $R_0$ and $R_1$ are computed correctly (see [3] for more details).
4. It generates rogue tagging parameters by choosing random primes $\rho$, $\Gamma$ and a random $\gamma' \in_R \mathbb{Z}_\Gamma^*$, satisfying $\Gamma = r\rho + 1$ such that $r$ is an integer, $\rho \nmid r$, $2^{l_\Gamma - 1} < \Gamma < 2^{l_\Gamma}$, $2^{l_\rho - 1} < \rho < 2^{l_\rho}$ and $\gamma'^{(\Gamma-1)/\rho} \not\equiv 1 \pmod{\Gamma}$. Finally, the issuer calculates $\gamma = \gamma'^{(\Gamma-1)/\rho} \bmod \Gamma$.
5. The public key of the issuer is the tuple $(n, S, Z, R_0, R_1, \gamma, \Gamma, \rho)$ and the private key is the tuple $(p', q')$.

In addition to the public key, the issuer computes a proof that the parameters of the public key are generated correctly. This proof can be used to verify that the parameters $Z, R_0, R_1 \in \langle S \rangle$ and $S \in QR_n$ are properly constructed. As the verification of the public-key proof is time and resource consuming, it may be delegated by a client to a trusted third party which verifies the proof and signs the group key, thereby attesting the authenticity and the correctness of the key.
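The following toy sketch (our own, with unrealistically small parameters and without the non-interactive correctness proof) shows the shape of the issuer setup: an RSA modulus built from safe primes, the bases S, Z, R_0, R_1, and rogue-tagging parameters (Γ, ρ, γ) where γ has order ρ.

```python
import random

def is_prime(m: int) -> bool:
    if m < 2:
        return False
    return all(m % d for d in range(2, int(m ** 0.5) + 1))

def issuer_setup():
    # Toy safe primes p = 2p'+1, q = 2q'+1 (real sizes: around 1024 bit each).
    p_, q_ = 1019, 1103                      # p', q'
    p, q = 2 * p_ + 1, 2 * q_ + 1
    assert is_prime(p_) and is_prime(q_) and is_prime(p) and is_prime(q)
    n = p * q
    x = random.randrange(2, n)
    S = pow(x, 2, n)                         # random quadratic residue
    x0, x1, xz = (random.randrange(1, p_ * q_) for _ in range(3))
    Z, R0, R1 = pow(S, xz, n), pow(S, x0, n), pow(S, x1, n)

    # Rogue-tagging parameters: Gamma = r*rho + 1, gamma of order rho.
    rho = 1013                               # toy prime
    r = 2
    while not is_prime(r * rho + 1) or r % rho == 0:
        r += 1
    Gamma = r * rho + 1
    while True:
        g = random.randrange(2, Gamma)
        gamma = pow(g, (Gamma - 1) // rho, Gamma)
        if gamma != 1:                       # ensures gamma really has order rho
            break
    public = (n, S, Z, R0, R1, gamma, Gamma, rho)
    private = (p_, q_)
    return public, private

public_key, private_key = issuer_setup()
```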
2.2 The Client
In our scenario, the client is a Java 2 Micro Edition (J2ME) application which runs on a mobile phone. These mobile platforms can be equipped with dedicated micro-controllers or software-based TPMs as discussed in [9], [6]. After computing the secret key $f$, the client may execute the Join protocol with the issuer in order to obtain its credentials. Once the client has received the credentials, it is able to create DAA signatures on behalf of the group. Any verifier can verify these signatures with the issuer's - respectively the group's - public key.

Prior to joining the group, the client has to verify the public key of the issuer. It is very important to perform this step, since improperly generated issuer parameters can cripple the anonymity protection of the DAA scheme [3]. Our client application delegates the computationally expensive parts of the issuer proof verification to a trusted third party (TTP). The steps done by the trusted third party are roughly the following: The TTP verifies the issuer's proof that $Z, R_0, R_1 \in \langle S \rangle$ and that $S$ is a quadratic residue mod $n$. Then the TTP verifies that rogue tagging is set up correctly by checking whether $\rho$ and $\Gamma$ are primes, $\rho \mid (\Gamma - 1)$, $\rho \nmid (\Gamma - 1)/\rho$ and $\gamma^\rho \equiv 1 \pmod{\Gamma}$. Finally, it checks whether all parameters of the issuer's public key have the required lengths. Following these steps, the client has proof that all issuer parameters are computed correctly, which implies that the security properties for the client still hold [3, page 9].
3 The Join Protocol
During the Join protocol, client and issuer exchange parameters. Some of these parameters are only temporary and are used to derive other parameters; others are credentials which are finally issued by the issuer and stored as client key and group credential. Moreover, we have modified the protocol in order to support different versions of the DAA scheme, i.e., future versions and devices (such as smart-cards) that cannot
rely on a trusted host platform. The Join protocol works as follows:

1. The client sends the join command, the protocol version and a hash of its public EK.
2. The issuer receives the join command and tests whether the requested version and DAA scheme are supported. When operating in Private Access mode (see Section 2.1), the issuer now tests whether the client's EK hash is on the list of allowed EK hashes. Regardless of the access mode, the issuer requests the client to transmit its public EK.
3. The issuer chooses a random $n_e$ of length $l_\Phi$ and encrypts it with the client's public EK. The encrypted $n_e$ and the issuer's basename $bsn$ are sent back to the client.
4. The client needs to show proof of possession of the public EK:
   - The client's TPM decrypts $n_e$, picks a random $\nu'$ of length $l_n + l_\Phi$ and computes $U = R_0^{f_0} R_1^{f_1} S^{\nu'} \bmod n$ as well as $a_U = H(U \| n_e)$.
   - The client sends $\zeta_I = (H_\Gamma(1 \| bsn_I))^{(\Gamma-1)/\rho}$ to its TPM.
   - The TPM verifies that $\zeta_I^\rho \equiv 1 \pmod{\Gamma}$ holds and computes $N_I = \zeta_I^{f_0 + f_1 2^{l_f}}$.
   - Finally the client sends $U$, $cnt$, $N_I$ and $a_U$ to the issuer.
5. Upon reception of $a_U$, the issuer computes $a_U' = H(U \| n_e)$ - the client has proved that it is the owner of the EK if and only if $a_U = a_U'$ holds. Next, the issuer checks for rogue tagging using $N_I$ (see Section 4).
6. The issuer generates a random nonce $n_i$ of length $l_H$ and forwards it to the client.
7. The client needs to prove knowledge of $f_0$, $f_1$ and $\nu'$ to the issuer:
   - The TPM generates random numbers $r_{f_0}, r_{f_1}$ of length $l_f + l_\Phi + l_H$ and $r_\nu$ of length $l_n + 2l_\Phi + l_H$. It then computes $\tilde{U} = R_0^{r_{f_0}} R_1^{r_{f_1}} S^{r_\nu} \bmod n$.
   - The client sends $c_h = H(n \| R_0 \| R_1 \| S \| U \| \tilde{U} \| n_i)$ to its TPM.
   - The TPM generates a random nonce $n_t$ of length $l_\Phi$ and computes the final hash $c$ and the values $s_{f_0}, s_{f_1}, s_\nu$ as $c = H(c_h \| n_t)$, $s_{f_0} = r_{f_0} + c \cdot f_0$, $s_{f_1} = r_{f_1} + c \cdot f_1$ and $s_\nu = r_\nu + c \cdot \nu'$.
   - The client forwards $c, n_t, s_{f_0}, s_{f_1}, s_\nu$ to the issuer.
8. The issuer verifies the computations done by the client and its TPM:
   - $s_{f_0}, s_{f_1}$ must be of length $l_f + l_\Phi + l_H + 1$; $s_\nu$ must have length $l_n + 2l_\Phi + l_H + 1$.
   - The issuer computes $\hat{U} = U^{-c} R_0^{s_{f_0}} R_1^{s_{f_1}} S^{s_\nu} \bmod n$ and accepts the proof if and only if $c = H(H(n \| R_0 \| R_1 \| S \| U \| \hat{U} \| n_i) \| n_t)$ holds.
9. After verification of the client proofs, the issuer constructs a Camenisch-Lysyanskaya (CL) credential:
   - The issuer first generates a random number $\hat{\nu}''$ of length $l_\nu$ and calculates $\nu'' = \hat{\nu}'' + 2^{l_\nu - 1}$. Then the issuer selects a random prime $e \in [2^{l_e - 1}, 2^{l_e - 1} + 2^{l'_e - 1}]$ and computes $\Phi(n) = (p-1)(q-1)$ and $d = e^{-1} \bmod \Phi(n)$.
10. The issuer now calculates $A = \left(\frac{Z}{U S^{\nu''}}\right)^d \bmod n$ and sends it to the client.
11. The client challenges the issuer to prove correct computation of $A$. In order to do so, the client sends a nonce $n_h$ of length $l_\Phi$.
12. The issuer generates a random $r_e \in [0, p'q']$ and computes $\tilde{A} = \left(\frac{Z}{U S^{\nu''}}\right)^{r_e} \bmod n$, $c' = H(n \| Z \| S \| U \| A \| \tilde{A} \| n_h)$ and $s_e = r_e - c' \cdot d$. The response $(c', s_e, A, e, \nu'')$ is forwarded to the client.
13. The client verifies the issuer's response and obtains its CL credential:
    - To verify the issuer's response, the client computes $\hat{A} = A^{c'} \left(\frac{Z}{U S^{\nu''}}\right)^{s_e} \bmod n$ and $\hat{c} = H(n \| Z \| S \| U \| A \| \hat{A} \| n_h)$. The response is valid if and only if $c' = \hat{c}$ holds and if $e$ is prime with $e \in [2^{l_e - 1}, 2^{l_e - 1} + 2^{l'_e - 1}]$.
    - Using $\nu''$ the TPM computes $\nu = \nu'' + \nu'$ and then stores $(f_0, f_1, \nu)$ as its private key.
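To illustrate the structure of steps 7 and 8 (the proof of knowledge of f_0, f_1 and ν'), the following sketch runs the commit-challenge-response exchange with toy parameters; the bit lengths and the interactive hashing with the TPM are simplified to a single Fiat-Shamir-style hash, so this is not a faithful or secure instantiation of the protocol above.

```python
import hashlib, random

def H(*vals) -> int:
    return int.from_bytes(hashlib.sha256(repr(vals).encode()).digest(), "big")

# Toy group parameters (fragment of the issuer public key): n, S, R0, R1.
n = 2039 * 2207
S = pow(random.randrange(2, n), 2, n)
x0, x1 = random.randrange(1, n), random.randrange(1, n)
R0, R1 = pow(S, x0, n), pow(S, x1, n)

# Client secrets and commitment U = R0^f0 * R1^f1 * S^nu' mod n.
f0, f1, nu = (random.randrange(1, n) for _ in range(3))
U = (pow(R0, f0, n) * pow(R1, f1, n) * pow(S, nu, n)) % n

# Step 7: commit with fresh randomness, receive a challenge, answer it.
r_f0, r_f1, r_nu = (random.randrange(1, n * n) for _ in range(3))
U_tilde = (pow(R0, r_f0, n) * pow(R1, r_f1, n) * pow(S, r_nu, n)) % n
n_i = random.randrange(1, n)                      # issuer nonce
c = H(n, R0, R1, S, U, U_tilde, n_i)              # challenge
s_f0, s_f1, s_nu = r_f0 + c * f0, r_f1 + c * f1, r_nu + c * nu

# Step 8: issuer recomputes U_hat = U^{-c} * R0^{s_f0} * R1^{s_f1} * S^{s_nu}.
U_hat = (pow(U, -c, n) * pow(R0, s_f0, n) * pow(R1, s_f1, n) * pow(S, s_nu, n)) % n
assert U_hat == U_tilde
assert H(n, R0, R1, S, U, U_hat, n_i) == c
```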
3.1 Prototype Implementation of the Join Protocol
No application protocol for the DAA join step has been defined, either by the authors of [3] or by the TCG. The TCG Trusted Software Stack specification includes provisions for basic support of the DAA join and sign commands of the TPM [14]. The basic DAA support specified in [14] only consists of an API interface without any kind of provisions for the application-level network protocols required to execute the DAA join sequence between an issuer and a client platform.

Trusted Channel between Issuer-Service and Clients. For our prototype implementation, we developed a simple application protocol which allows the client to communicate with a DAA issuer over a TLS-secured network connection. The application protocol discussed in this section describes a working prototype implementation which is intended to encourage a broader discussion of how a practical DAA issuer network service could look. The application protocol used by our prototype is a simple request-response protocol based upon a series of simple ASN.1 messages exchanged over a trusted channel. In our prototype system, this trusted channel is established using a TLS-protected TCP/IP network connection. Using TLS server authentication enables us to authenticate the DAA issuer service against the host willing to join the DAA group. The host can resort to standard PKI methods for verifying the identity of the DAA issuer based on its server certificate. On mobile platforms, the Online Certificate Status Protocol (OCSP) can be employed to delegate the possibly resource- and communication-intensive certificate validation step to a trusted third party.

Since different platforms have different strengths and computation power, and our implementation of the issuer should serve all of them, we support different versions of commands. As described in the join protocol above, the first message to be sent from the host to the issuer is the command message. This message includes the command, which in this case is "join", and also the version of the join protocol. In order to support multiple platforms and protocols, we implement different forms of ASN.1 messages and distinguish them based on the version used. The first version would be, e.g., "1.0" and the second one "1.1". Hence, whenever a client
which cannot handle the first option wants to join, it simply specifies which version it can handle, or rather which one it understands, and the issuer then interacts with it in an appropriate form. Additionally, the versioning scheme allows us to implement protocol changes in the future while maintaining compatibility with existing clients.
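As an illustration of such a trusted channel, the sketch below opens a TLS connection and sends a length-prefixed, versioned join request. The actual ASN.1 message definitions of the prototype are not reproduced here, so the JSON-like encoding, the field names, and the server address used below are purely hypothetical placeholders.

```python
import json, socket, ssl, struct, hashlib

ISSUER_HOST, ISSUER_PORT = "daa-issuer.example.org", 4433   # hypothetical

def send_join_request(ek_public: bytes, version: str = "1.0") -> bytes:
    # Stand-in for the ASN.1 "join" command message of the prototype protocol.
    message = json.dumps({
        "command": "join",
        "version": version,
        "ek_hash": hashlib.sha256(ek_public).hexdigest(),
    }).encode()

    context = ssl.create_default_context()        # verifies the issuer's certificate
    with socket.create_connection((ISSUER_HOST, ISSUER_PORT)) as raw:
        with context.wrap_socket(raw, server_hostname=ISSUER_HOST) as tls:
            tls.sendall(struct.pack("!I", len(message)) + message)
            length = struct.unpack("!I", tls.recv(4))[0]
            return tls.recv(length)               # issuer response, e.g. encrypted n_e
```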
4 Rogue Detection and Revocation
In order to detect and prevent compromised TPMs from joining or from signing messages, a rogue and revocation detection mechanism must be used. As mentioned in [3], if the secret of a TPM is revealed, it is tagged as rogue and the values $f_0$ and $f_1$ are put on a blacklist. In order to check whether a TPM is rogue or not, the following steps have to be performed: The host and the issuer separately compute $\zeta_I = (H_\Gamma(1 \| bsn_I))^{(\Gamma-1)/\rho}$. Then the client's TPM computes $N_I = \zeta_I^{f_0 + f_1 2^{l_f}}$ and sends $N_I$ to the verifier. Now the verifier checks for all pairs $(f_0, f_1)$ on its blacklist whether $N_I = \zeta_I^{f_0 + f_1 2^{l_f}}$ holds; if a pair satisfying the equation is found, the TPM is considered to be rogue. The same procedure is used during the sign and verification process of a signature, allowing the verifier to check whether the TPM is rogue or not.

Alternatively, $\zeta_V$ ($\zeta_I$) can be chosen at random. In this case, $\zeta$ serves only for rogue detection. But if we derive it from the $bsn$, we would have the same $\zeta$ for the same $bsn$, and it would be possible to link different actions of the platform. In case the issuer acts as verifier and uses the same $bsn$, the platform may be identified and its transactions may be linked, since $\zeta_V = \zeta_I$ and can be linked to the identity, which should not be the case. The authors of [12] address this problem and propose a simple solution by changing the calculation of $\zeta_V$ to $\zeta_V = (H_\Gamma(0 \| bsn_V))^{(\Gamma-1)/\rho}$. In this case, $\zeta_V \neq \zeta_I$, so it is not possible to link the identity of the platform to its actions.

The biggest problem here is how to gather information in order to assemble a blacklist. In [3, page 14] it is mentioned that when a certificate $(A, e, \nu)$ and the values $f_0$ and $f_1$ are found, they are tested whether $A^e R_0^{f_0} R_1^{f_1} S^\nu \equiv Z \pmod{n}$ holds. If so, they are put on the blacklist. However, there is no mechanism to obtain the parameters $f_0$ and $f_1$. If somebody is able to extract them out of a TPM and uses them to impersonate another party, there is no way to distinguish between the original platform and the impersonating platform. It is unlikely that the impersonator will publish the extracted secrets. Moreover, there is no way that a platform can check if its secrets have been extracted. Even if the owner of the platform somehow gets the information that the platform has been compromised, the secrets cannot be extracted from the TPM in order to perform a self-revocation. There is no command for that action because the secret is supposed to never leave the TPM, even if it is requested by the user [15] (for example, if the user wants to revoke the DAA credential because he
notices that the TPM keys are compromised). The only way a user can delete his credentials is by performing a take-ownership, where new secrets $(f_0, f_1)$ will be generated. However, if he did not extract and publish the old secrets, someone might use these secrets for signing on behalf of the group, and there is no way to detect it since the pair $(f_0, f_1)$ is not on the blacklist. Hence, the extracted credentials cannot be revoked and remain valid.
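The check that dominates the measurements in Section 5 is the per-entry exponentiation over the blacklist; the following sketch (our own, with toy parameters and a plain SHA-256 stand-in for H_Γ) shows why its cost grows linearly with the blacklist size.

```python
import hashlib

Gamma, rho, l_f = 2027, 1013, 16          # toy rogue-tagging parameters

def zeta_from_basename(bsn: bytes) -> int:
    counter = 0
    while True:                            # retry until zeta has order rho (zeta != 1)
        h = int.from_bytes(hashlib.sha256(bytes([counter]) + b"\x01" + bsn).digest(),
                           "big") % Gamma
        z = pow(h, (Gamma - 1) // rho, Gamma)
        if z != 1:
            return z
        counter += 1

def pseudonym(zeta: int, f0: int, f1: int) -> int:
    return pow(zeta, f0 + f1 * 2 ** l_f, Gamma)     # N_I = zeta^(f0 + f1*2^l_f)

def is_rogue(n_i: int, zeta: int, blacklist) -> bool:
    # One modular exponentiation per blacklisted (f0, f1) pair: O(len(blacklist)).
    return any(pseudonym(zeta, f0, f1) == n_i for f0, f1 in blacklist)

zeta = zeta_from_basename(b"issuer-basename")
blacklist = [(17, 23), (101, 7)]
honest_n_i = pseudonym(zeta, 5, 9)
assert not is_rogue(honest_n_i, zeta, blacklist)
assert is_rogue(pseudonym(zeta, 17, 23), zeta, blacklist)
```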
4.1 Online Credential Revocation Check
In order to shift the computational effort from the mobile client to a more resourceful platform, we introduce a modification of the Online Certificate Status Protocol (OCSP) [11]. OCSP allows retrieving the status of a certificate from an online source, the OCSP responder. To achieve this, the OCSP request structure contains the CertID field, consisting of a hash algorithm identifier, the hash of the issuer's name and of the issuer's key, and the serial number of the certificate whose status we want to obtain. A status may be good, revoked or unknown. We slightly modified the RFC 2560 CertID structure to include the basename, the pseudonym $\zeta$ and a reference to the issuer's public key, which is used to obtain the group parameters. Figure 1 illustrates the information flow in our modified variant of the OCSP protocol.
Fig. 1. Online Credential Status Reporting
Using the information provided in the modified CertID structure, the responder checks whether its blacklist contains $\zeta$ and returns an appropriate OCSP status response. The authenticity and integrity of the OCSP response can be verified using standard public-key signatures as discussed in RFC 2560 [11].
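The exact field layout of the modified CertID is not reproduced here; the sketch below is a hypothetical rendering of it (field names and encoding are ours, not RFC 2560 ASN.1 syntax) to show what the responder needs in order to run the blacklist check on behalf of the client.

```python
from dataclasses import dataclass
from enum import Enum

class CredentialStatus(Enum):              # mirrors the OCSP status values
    GOOD = "good"
    REVOKED = "revoked"
    UNKNOWN = "unknown"

@dataclass(frozen=True)
class DaaCertID:
    """Hypothetical modified CertID carried in the OCSP-like request."""
    hash_algorithm: str                    # e.g. "sha256"
    basename: bytes                        # bsn used to derive zeta
    pseudonym: int                         # zeta / N_V value to be checked
    issuer_key_reference: bytes            # locates the group public key / parameters

def respond(cert_id: DaaCertID, blacklist_pseudonyms: set) -> CredentialStatus:
    # The responder, not the phone, performs the expensive comparison; here the
    # blacklist is assumed to be pre-expanded to pseudonyms under this zeta.
    if cert_id.pseudonym in blacklist_pseudonyms:
        return CredentialStatus.REVOKED
    return CredentialStatus.GOOD

request = DaaCertID("sha256", b"issuer-basename", 123456789, b"group-key-1")
print(respond(request, blacklist_pseudonyms={987654321}))   # -> CredentialStatus.GOOD
```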
5 Experimental Results
In this section, we discuss the results and setup of our infrastructure implementation. We tested the performance of the server (issuer) on a Sony Vaio VGN-NR11Z/S notebook and the client on a Nokia 5800 Express Music cell phone, a Freescale i.MX51 development board and an MSI U135 netbook. Table 1 shows the main characteristics of the test devices.
Table 1. Testing devices (Roles: H = Host, I = Issuer)
Device | Role | CPU | Clock Freq. | RAM | OS
Nokia 5800 Express Music | H | ARM11 | 434 MHz | 128 MB | Symbian 60v5
Sony Vaio VGN-NR11Z/S | I | Intel Core 2 Duo | 2.0 GHz | 2 GB | Windows XP
i.MX51 EVK | H | ARM Cortex-A8 | 800 MHz | 512 MB | Linux (Debian)
MSI U135 netbook | H | Intel Atom | 1.2 GHz | 1 GB | Linux (SuSE)
Overall performance evaluation of the DAA setup was done with the Sony Vaio notebook configured as server (issuer) and the Nokia mobile phone configured as client (host). The Freescale development board and the MSI netbook did not take part in the overall client/server performance evaluation. Instead, the latter two devices were used to evaluate the performance impact of using different Java virtual machine configurations, as discussed later in Section 5.2.

5.1 Overall Client/Server System Performance
In the client/server performance test setup, both test devices were placed on the same local area network (LAN). Both the Nokia 5800 Express cell phone and the Sony Vaio notebook were connected to the test LAN using a wireless LAN access point. For this test, the issuer component was executed on a Sun Java 1.6 virtual machine configured with its default installation settings for Windows XP.
(a) Join times with empty blacklist
Modulus length | Join Time
1024 bit | 10.85 s
1536 bit | 19.56 s
2048 bit | 32.52 s

(b) Join times with different black-list sizes
Rogue TPMs | Time
100 | 33.57 s
1000 | 46.67 s
2000 | 59.76 s
5000 | 103.35 s
10000 | 187.20 s

(c) Rogue detection on a mobile phone
Rogue TPMs | Time
200 | 79.95 s
500 | 179.47 s
1000 | 352.11 s
2000 | 699.06 s
10000 | 187.20 s

Fig. 2. Join and rogue detection times
Join Performance Results. After the initial client and server setup has been performed, the client can perform the DAA join step. The performance results for joining a DAA group with an empty blacklist, shown in table 2a, include the network communication overhead. Since the communication is done over the network, the results may vary for different connections with different signal strengths. As these results were gained with an empty blacklist, there is no rogue TPM on the list.

Rogue Detection Performance. As discussed earlier, the issuer has to compute a rogue tagging value $N_I = \zeta_I^{f_0 + f_1 2^{l_f}}$ for each pair $(f_0, f_1)$ on its blacklist. A single $N_I$ computation is very expensive, and the overall cost increases linearly with
the size of the blacklist. The more TPMs are on the list, the longer it takes to prove for any TPM whether it is rogue or not, and the longer it takes to finish the Join protocol. We measured the join process performance considering the effects of the rogue detection process. The tests were performed in such a way that after each Join, a new pair $(f_0, f_1)$ was inserted into the blacklist for the next Join. In table 2b we show the time required for a join with rogue TPMs on the blacklist. The modulus length for these measurements was 2048 bit. As visible in figure 3a, there is a nearly linear dependency between the number of TPMs on the rogue list and the total time required for the joining process. Given those results, the amount of time required for each additional rogue TPM corresponds to approximately 14.7 milliseconds, or equivalently 1.47 seconds per 100 rogue TPMs.
Fig. 3. Join and verify performance: (a) Join performance over black-list size (total join time in seconds vs. number of rogue TPMs on the blacklist); (b) Verification with rogue detection (total verification time in seconds vs. number of rogue TPMs on the blacklist).
The biggest problem is that the rogue detection has to be performed when verifying signatures. Since a client should be able to verify signatures, it also should be able to check for rogue TPMs. We measured the execution time of the rogue detection depending on the number of TPMs on the blacklist on the Nokia mobile phone. The corresponding results can be found in table 2 and figure 3b. Assuming medium to large blacklist sizes, it clearly becomes evident that rogue detection is an expensive operation which might not be feasible on battery-powered devices like mobile phones. Moreover, the level of user acceptance for signature verification processes in the order of minutes is at least questionable.

Proof Verification. As previously discussed, a proof that the parameters of the issuer's public key are generated correctly should be generated by the issuer. In order to prove that $Z, R_0, R_1 \in \langle S \rangle$, a total of $3 \cdot 160 = 480$ relatively expensive calculations (verifications of bit-commitments) are required. Each of these calculations includes a modular exponentiation with an exponent $\in [1, p'q']$ and also a modular multiplication. The proof verification step requires significant computational resources and takes significant time (≈ 99.44 s) even on the Sony Vaio platform used as issuer in our other experiments. On the Nokia mobile phone platform the verification time is far beyond any acceptability bounds
(≈ 2399.24 s). The computing power of Java applications on mobile devices is clearly insufficient to verify this kind of proof. A viable solution for this case would be to delegate the verification step to a trusted third party.

5.2 Performance Impact of Different Java Virtual Machines
In the previous section we considered performance characteristics of a model realization of a DAA infrastructure based on a mobile phone acting as host and a notebook playing the part of the issuer. In this section, we briefly evaluate the impact of the Java virtual machine on the performance of the client-side join process. In contrast to Section 5.1, we ignore any server-side computations and network overhead. Measurements shown in this section were performed using a special version of the client application which just instruments the client-side computations.

Table 2. Average timing values for client-side computations of the join process
Platform | Java Virtual Machine | Average join time (client) | Deviation
Freescale i.MX51 | OpenJDK/Zero | 8.591 s | 0.019 s
Freescale i.MX51 | OpenJDK/Shark (mixed) | 6.087 s | 0.510 s
Freescale i.MX51 | OpenJDK/Shark (interpreted) | 142.742 s | 0.274 s
Intel Atom N450 | OpenJDK/Client (interpreted) | 40.312 s | 0.209 s
Intel Atom N450 | OpenJDK/Client (mixed) | 3.512 s | 0.025 s
We tested four combinations of Java virtual machines and just-in-time compiler settings which are intended to model typical JVMs found on mobile platforms. In order to produce comparable results for the tested platforms, we used OpenJDK 1.6.0 as Java runtime environment. All cryptography-related operations, including big-number support, were implemented with IAIK's JCE-ME library (see http://jce.iaik.tugraz.at/sic/Products/Mobile-Security/JCE-ME). We tested the mixed-mode and interpreted Java VM configurations given in table 2. Figure 4 plots client-side computation times for repeated execution of the join process from within the same virtual machine instance. Table 2 gives average values for the computation times. We decided to sample execution times for one particular VM configuration inside a loop from within a single VM invocation. This decision allows us to evaluate initial delays due to class loading and just-in-time compilation.

Three of the four VM configurations depicted in figure 4 exhibit relatively constant timing behavior with some minor jitter caused by other operating system processes interfering with our measurements. Only the OpenJDK "Shark" virtual machine exhibits great variations in join process execution times, caused by the relatively expensive just-in-time compilation steps done during the first few join process executions. As evident from figure 4, there is a huge performance gap, almost in the range of one order of magnitude, between the JIT (mixed mode) and the non-JIT (interpreted only) configurations on the Intel Atom platform. Interestingly, the performance gap between the non-JIT and the JIT virtual machines on the ARM platform is by far smaller.
Fig. 4. Join computation performance for different Java VM configurations (client-side join computation time in seconds per test run; configurations: ARM Cortex-A8 with OpenJDK Zero VM, ARM Cortex-A8 with OpenJDK Shark VM, Intel Atom 450 with OpenJDK Client VM interpreted, Intel Atom 450 with OpenJDK Client VM mixed mode).
6 Conclusion
Based on the overall performance results of our implementation shown in Section 5.1, we conclude that the most time-consuming part of the DAA protocol is the rogue detection. Even if rogue detection can be done relatively fast on desktop PCs and servers, it still negatively affects the verification process, which might be done on battery-powered mobile devices. Recalling the tests with rogue TPMs, the time required by the rogue checking process increases linearly with the size of the blacklist. For larger blacklists, the time spent for rogue detection easily exceeds the actual signature verification time by orders of magnitude. Worse, rogue detection has a big influence on mobile phones even for small blacklists with sizes in the order of 30 rogue TPMs. Therefore, we conclude that delegating the validation of the rogue status to a third party is an unavoidable requirement.

Another open problem with rogue detection are the mechanisms and protocols used to report corrupted TPMs. The current implementation of DAA in TPM 1.2 implicitly anticipates compromise of the DAA private key as the only revocation reason. Currently there is no method that enables a user to voluntarily report corruption of his platform without knowing the f value guarded by the TPM. Moreover, the proof verification process required to check the validity of an issuer's public key requires an impractical amount of storage and computational resources on mobile phones. This problem can be solved by off-loading the proof verification to a trusted third party. Finally, the results from Section 5.2 clearly show the impact of the choice of the Java virtual machine on the performance of our purely Java-based prototype implementation.

Acknowledgments. We thank the anonymous reviewers for their helpful comments. This work has been supported in part by the European Commission through the FP7 programme under contract 257433 SEPIA.
Approximation of a Mathematical Aging Function for Latent Fingerprint Traces Based on First Experiments Using a Chromatic White Light (CWL) Sensor and the Binary Pixel Aging Feature

Ronny Merkel1, Jana Dittmann1, and Claus Vielhauer2

1 Department of Computer Science, Research Group Multimedia and Security, Otto-von-Guericke-University of Magdeburg, Universitätsplatz 2, 39106 Magdeburg, Germany
2 Department of Informatics and Media, Brandenburg University of Applied Sciences, Magdeburger Straße 50, 14770 Brandenburg an der Havel, Germany
{Ronny.Merkel,Jana.Dittmann}@iti.cs.uni-magdeburg.de, [email protected]
Abstract. The age determination of latent fingerprint traces is a very important challenge for forensic investigations, which has not been solved satisfactorily so far. Based on prior work, we use the novel and very promising aging feature of counting binary pixels for the approximation of a mathematical aging function to be used for the age determination of latent fingerprint traces. We first show the feasibility of this feature in a test set of nine test series (each comprised of a fingerprint sample scanned continuously over four days) using three different optical sensors (CWL) of the same model and varying resolutions (3, 5 and 10µm). We then approximate the aging function for each test series, showing an average error of approximation between 13% and 40% for an optimal approximation. We discuss the prospects and restrictions of such a function for the age determination of latent fingerprint traces and identify future research challenges.

Keywords: latent fingerprint traces, crime scene investigation, age determination, binary pixel, mathematical aging function.
1 Introduction

The determination of the age of a latent fingerprint trace found at a crime scene has been a strong need in forensic investigations for many decades. Only very limited results have been achieved in this field so far. In recent years, contactless scanning techniques have been introduced to the field of fingerprint forensic research, often adapting surface-measurement devices or devices intended for other purposes for the non-invasive acquisition of latent fingerprint traces (see [1] for a summary of such techniques). These acquisition techniques offer new opportunities to the field of fingerprint age determination, since latent fingerprint traces can now be obtained with a very high resolution and accuracy (which could not fully be achieved by classical powdering and sticky-tape lifting or fuming with cyanoacrylate or other development techniques and subsequent photographing).
In [2], we suggested the measurement of ‘binary pixel’ as a first feature for a possible age determination, using a contact-less, non-invasive Chromatic White Light (CWL) sensor [3] as part of a surface measurement device for acquiring latent fingerprint traces from a hard disc platter. To the best of our knowledge, this is a novel approach, being the first to use high-resolution image sensors in combination with pattern recognition techniques, and it seems very promising for finally solving the important research challenge of the age determination of latent fingerprint traces. The technique normalises the captured fingerprint image, binarises it using a threshold and then counts the white pixels, which represent the background pixels not belonging to the residue. We show that the amount of these white pixels increases in a logarithmic way over time. This increase is assumed to happen due to decomposition processes of the fingerprint residue and the evaporation of water. In our design approach from [2] we scan a single fingerprint sample continuously over a time period of 10 hours, referred to as a test series throughout this paper. We examine only four of such test series in [2] using one CWL device (for each series, an individual left a fingerprint trace on a hard disc platter) and give no specific details on the parameters of the observed aging curve. Here, we extend these considerations by proposing a mathematical aging function which might be used for the age determination of latent fingerprint traces and evaluate the reproducibility of the characteristic tendency of the aging feature using different resolutions and different sensors of the same model as well as an increased test series length of four days. Our first test goal is to reproduce the results of [2] using an extended test set of nine test series, where each test series is comprised of a fingerprint left on a hard disc platter by an individual (randomly chosen from four different individuals providing their fingerprints for the tests) which is continuously scanned over four days (with a scan interval of 30 minutes or less, depending on the resolution). For these nine test series, we vary the resolution (3µm, 5µm and 10µm) and use three different CWL sensors of the same model (FRT MicroProf 200 CWL 600 [3], which is identical to the one used in [2]) to evaluate the reproducibility of the characteristic tendency of the feature under different scanning conditions. Our second test goal is to then approximate a mathematical aging function from our test results, which might later be used for the determination of the absolute age of a fingerprint trace. Based on the standard mathematical representation of a natural logarithmic curve f(t) = a · ln(t) + b, t > 0, we approximate the parameters a and b for each test series. We then evaluate the average error of approximation of such an approximated mathematical function, showing no significant tendency for the influence of different scanners of the same model or different resolutions within the test range, but significant variations between the results of the test series (average errors of approximation between 13% and 40%), possibly caused by different influences such as the sweat composition of the residue (e.g. type and duration of sweating before fingerprint application or consumption of medicine, certain food or other substances), the environmental conditions (e.g. UV-light, temperature or humidity) or fingerprint application related influences (such as smearing or blurring of the fingerprint or contact time and pressure when applying the print). We discuss the potential use of our approximated aging function in a forensic application scenario and identify future research challenges, which need to be solved for achieving a certain accuracy of the results.
The paper is structured as follows: in the next section, we give a short summary of current state-of-the-art approaches concerning the topic and present the sensor we use. We then introduce our design approach for a mathematical aging function in section 3, which might be used for the age determination of latent fingerprint traces. In section 4, we explain our assumptions, present our test setup and evaluate the test results for our two test goals. We afterwards discuss the potential application of an approximated mathematical aging function in a forensic investigation in section 5. Section 6 summarises the paper and identifies future work.
2 State of the Art

So far, only very limited progress has been made in the domain of age determination of latent fingerprints in forensics. A few approaches include the use of fingerprint ridge thickness and changes of pores [4], the healing stage of a wound on the finger [5], fluorescence properties of a fingerprint changing over time [6] or the use of a reference database of fingerprints of different aging states [7]. A good summary of current approaches for the age determination of latent fingerprints can be found in the German study of [6]. However, all of these approaches are either focused on very limited scenarios (e.g. a wound on the finger is required) or fail to deliver reliable results. They can therefore not satisfy the strong need for a comprehensive age determination of latent fingerprint traces. In prior work [2], we reported that a novel feature called ‘binary pixel’ shows a very promising, characteristic logarithmic aging curve for latent fingerprint traces left on a hard disc platter. The experiments are conducted using the FRT MicroProf 200 CWL 600 sensor [3] for acquiring the prints, and four different fingerprint samples are captured continuously over 10 hours. For determining the binary pixel value, we normalise a scanned fingerprint image and binarise it using a threshold. We then measure the number of white background pixels in relation to the number of overall pixels, which results in our feature value for the binary pixel feature. Since the feature of counting binary pixels seems to be very characteristic for the aging process of latent fingerprint traces, we want to use it for our idea of approximating a mathematical aging function, which might be used for the age determination of latent fingerprint traces. To reproduce the characteristic tendency in our extended test set, we use the same FRT MicroProf 200 CWL 600 measurement device, which originates from the field of quality control of materials and is transferred to the domain of fingerprint imaging. It uses a chromatic white light (CWL) sensor and can produce high-resolution 16-bit intensity and topography images of a fingerprint trace with a resolution of up to 1µm in the lateral domain and 20nm in the longitudinal domain. For our research, so far only the intensity images produced by the sensor are of interest, leaving the use of topographical values for future work. Potential other contactless sensors are summarised in [1].
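To make the feature computation concrete, the following minimal sketch (our own code, not the implementation used in [2]) computes the binary pixel value of an intensity image: the image is normalised to [0,1] (here by simple min-max scaling, which is an assumption), binarised with a threshold (the test setup in section 4.2 uses thresh = 0.8), and the relative amount of white background pixels is returned.

```python
import numpy as np

def binary_pixel_feature(intensity_image, thresh=0.8):
    """Relative amount of white (background) pixels after binarisation.

    intensity_image: 2-D array of raw CWL intensity values.
    thresh: binarisation threshold applied to the normalised image.
    """
    img = intensity_image.astype(np.float64)
    # Normalise the grey values to the interval [0, 1] (min-max scaling assumed).
    img = (img - img.min()) / (img.max() - img.min())
    # Pixels above the threshold count as white background pixels not covered by residue.
    white = np.count_nonzero(img > thresh)
    return white / img.size
```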
3 Our Proposed Design for an Aging Function

For the approximation of a mathematical aging function for a given aging feature, the general tendency of the course of the feature's aging curve needs to be known.
To the best of our knowledge, most natural processes have a logarithmic, exponential or (in some cases) linear course. We identified the course of the aging curve for the binary pixel feature as logarithmic. This tendency was confirmed by all nine test series examined in this paper. We therefore approximate the course of the aging curve by using the standard mathematical formula for a logarithmic function:

f(t) = a · log_c(t) + b,   t > 0.     (1)
The variables of the function (1) are represented by a point in time t and the relative amount of white background pixels f(t) present at this point. The constant parameters a, b and c describe specific aspects of the function and need to be determined for a specific aging curve. The parameter c describes the base of the logarithm and, since the aging of fingerprint traces is a natural process, is assumed to be Euler's number (e), leading to formula (2). However, the function could be approximated using other values for the base c in an analogous way.

f(t) = a · ln(t) + b,   t > 0.     (2)
Although the mathematical aging function for the given aging feature is to be approximated, it is trivially not identical to the experimentally determined aging curve described by the test results. Therefore, we have to distinguish between a tuple tupEi = (tEi, fE(tEi)), tEi > 0, which describes a specific point in time tEi and its corresponding amount of white background pixels fE(tEi) from the discrete experimental aging curve fE (where i describes the index number of the tuple within a test series), and the tuple tupMi = (tMi, fM(tMi)), tMi > 0, which represents the corresponding point in time tMi and its value fM(tMi) as part of the continuous mathematical function fM approximating the course of the aging curve. Considering this naming scheme, formula (2) can be rewritten as:

fM(tMi) = a · ln(tMi) + b,   tMi > 0.     (3)
Furthermore, the experimental results of a test series described by the discrete aging function fE consist of a set of tuples TE = {(tE1, fE(tE1)), (tE2, fE(tE2)), ..., (tEn, fE(tEn)); tEi > 0, n ∈ N, n > 0} representing all n measured samples of a fingerprint trace. Formula (3) can be transposed to calculate the point in time tMi of a sample from the amount of white background pixels fM(tMi) present:

tMi = e^((fM(tMi) − b) / a),   a ≠ 0.     (4)
By substituting fM(tMi) with its experimental equivalent fE(tEi), we derive the following formula:

tMi' = e^((fE(tEi) − b) / a),   a ≠ 0.     (5)
The derived formula (5) enables us to calculate the absolute amount of time tMi’ which has passed from the time a fingerprint trace was left on a surface until the acquisition of a binary pixel feature value fE(tEi), given that the constant parameters a and b are
known. However, since influences like environmental conditions (which are discussed in section 4.1) might change over time and also other influences (such as sensor noise) are present, there is an offset between the calculated theoretical age tMi' of a fingerprint trace and its real age tEi. This offset characterises the error of approximation erri for a specific tuple tupEi:

erri = |tEi − tMi'| / tEi,   tEi > 0.     (6)
To obtain an average error of approximation erravg for a complete test series, we determine the errors erri for all tuples tupEi and calculate their average value:

erravg = (1/n) · Σ_{i=1..n} erri = (1/n) · Σ_{i=1..n} |tEi − tMi'| / tEi = (1/n) · Σ_{i=1..n} |tEi − e^((fE(tEi) − b)/a)| / tEi,   tEi > 0, n > 0, a ≠ 0.     (7)
By using this formula, the average error of approximation erravg for an arbitrary test series can be calculated. In the following section, we evaluate the accuracy of such an approximation by determining the parameters a and b in a way to minimise the average error erravg.
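To illustrate how formulas (5)-(7) would be applied, the following sketch (our own code and helper names) estimates the age of a single sample from its binary pixel value and computes the average error of approximation for a whole test series; the numeric example uses the optimal parameters reported later for test series 1 (a = 0.0105, b = 0.5101) purely for illustration.

```python
import math

def estimated_age(f_value, a, b):
    """Formula (5): theoretical age tMi' from the relative amount of white pixels fE(tEi)."""
    return math.exp((f_value - b) / a)

def average_error(samples, a, b):
    """Formula (7): mean relative offset between real and estimated ages.

    samples: iterable of tuples (tEi, fE(tEi)) with tEi > 0.
    """
    errors = [abs(t - estimated_age(f, a, b)) / t for t, f in samples]
    return sum(errors) / len(errors)

# Illustration with the optimal parameters of test series 1 (a = 0.0105, b = 0.5101):
# a sample showing 55% white pixels is estimated as exp((0.55 - 0.5101) / 0.0105),
# i.e. roughly 45 hours old.
print(estimated_age(0.55, a=0.0105, b=0.5101))
```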
4 Experimental Investigations

Changing influences on the fingerprint aging process are supposed to be the biggest source of imprecision of our approximated aging curve. Furthermore, some influence factors might have a big impact resulting from only small changes, whereas other influence factors are rather insignificant. In this section, we therefore first introduce our main assumption concerning these influences, which is according to our knowledge a precondition for a reliable age determination. We then introduce our experimental test setup and evaluate the measured results regarding our two test goals: the reproduction of the characteristic aging tendency using an extended test set and the approximation of a mathematical aging function from our test results.

4.1 Assumptions

Many different influences exist on the aging process of a latent fingerprint trace (see [4], [5], [6], [7], [8], [9], [10], [11] and [12]). We summarise them here as: the sweat influence (namely all factors influencing the consistency of the sweat and therefore its decomposition process), the environmental influence (such as UV-light, humidity, temperature, wind or air pressure), the application influence (which is determined by the characteristic way a fingerprint trace is left, such as smearing or blurring, contact time, contact pressure or the application of cream or
dirt to a finger prior to applying the fingerprint) and the surface influence (where characteristic surface properties influence the aging process, such as absorptive or corroding surfaces). Furthermore, the scan influence of the fingerprint acquisition process, meaning the influence of the technical properties of the capturing device and the acquisition parameters (such as the resolution of a scan or the size of the measured area), can have an important influence on the determination of the aging function of a given feature, especially if chosen wrongly. In this paper, we assume that the sweat influence, the environmental influence and the application influence are kept constant. We are not interested in the explicit values of these influences (e.g. if the temperature is 20°C or 30°C) as long as they are constant. However, it might not be totally possible in practice to keep all these influences constant, since they are subject to changes over time, such as temperature and humidity changes or differences in the sweat composition of different fingerprint traces. Therefore, the resulting error of approximation of the aging function calculated in the next sections might be partially a result of such influences not being totally constant. It might be improved in future work by making additional arrangements to keep all influences which are not subject of the study as constant as possible, or by taking such influences into account when determining the age of a fingerprint. Other influences, such as the surface influence, can be kept constant very well by fixing the used material to a hard disc platter used for all tests. The scan influence of the fingerprint acquisition process is subject to investigation in the scope of our first test goal (see section 1) and is therefore systematically varied in the next section to experimentally determine the significance of its influence.

4.2 Test Setup

For our test setup we designed nine different test series (see table 1). Each test series is comprised of one fingerprint sample from an individual (randomly chosen from four individuals contributing their fingerprints to the experiments) left on a hard disc platter (constant surface influence, ideal surface for the used measurement device), of which a randomly selected part of the size 2.5x2.5 mm is continuously scanned over a total period of four days (with a scan interval of 30 minutes or less, depending on the resolution). The size of the measured area and the scan interval are chosen with respect to the performance of the CWL sensor. At a resolution of 3µm, a measured area of 2.5x2.5mm can be scanned in 30 minutes. If the scan interval is to be increased, the size of the measured area or the resolution needs to be decreased and vice versa. A measured area of 2.5x2.5mm furthermore seems to be sufficient for calculating the binary pixel feature (containing approximately between 5 and 8 ridge lines) and might even be decreased in future work. It is furthermore assumed that a total scan period of four days is sufficient for a first approximation of the aging function, since most changes to the fingerprint happen within the first hours (which is already mentioned in [2] and is confirmed in our test results). For a more precise approximation, longer scanning periods might also be of interest in future work. Figure 1 exemplarily depicts five different scan images of our test series 1, captured at different points in time t1=0days, t2=1day, t3=2days, t4=3days and t5=4days.
Fig. 1. Captured fingerprint images from test series 1 at five different points in time (from left to right): t1=0days, t2=1day, t3=2days, t4=3days and t5=4days
The used CWL sensor as well as the resolution of scans captured within a test series is varied to extract possible influences of these factors (scan influence). While the intensity images of three different CWL sensors of the same model (FRT MicroProf 200 CWL 600; see [3]) are used, the resolution of the test series for each of these sensors is varied between 3, 5 and 10µm. For all test series, the 16-bit grey scale intensity images acquired by the sensor are normalised to the interval [0;1] and are binarised using the threshold thresh = 0.8. Since we use only similar hard disc platters in our experiments, such a general threshold is applicable to all test series, which has been confirmed in preliminary tests. However, for using different types of platters or different surfaces in future work, the threshold might have to be adapted to different pixel grey value distributions.

Table 1. Overview of our experimental test setup for nine test series, each comprised of one fingerprint trace on a hard disc platter, of which a randomly selected area of 2.5x2.5mm is measured in intervals of 30 minutes or less (depending on the resolution) over four days
Test series number    1      2      3      4      5      6      7      8      9
Scanning device       CWL1   CWL1   CWL1   CWL2   CWL2   CWL2   CWL3   CWL3   CWL3
Scan resolution       3µm    5µm    10µm   3µm    5µm    10µm   3µm    5µm    10µm
Given the introduced scan interval (≤ 30 min) and scan period (4 days, i.e. 96 hours), a total of at least 192 tuples (tEi, fE(tEi)) of different times tEi and corresponding relative amounts of white background pixels fE(tEi) can be extracted for each test series, which can be used for the approximation of the mathematical aging function and the corresponding error of approximation described in section 3.

4.3 Experimental Results

The reproduction of the logarithmic course of a fingerprint aging curve using the binary pixel feature was introduced as our first test goal in section 1. For all nine test series, a clear logarithmic course of the aging curve can be seen in figure 2. Therefore, the basic assumption for the approximation of our aging function is shown to be valid. We also re-sketched the experimental aging curve from the investigations of [2] for comparison (see figure 2).
Fig. 2. The experimental aging curves for our nine test series depicting the relative amount of white background pixel in relation to the time passed; the experimental results from [2] are also re-sketched for comparison
The test results show that all experimental aging curves for the binary pixel feature have a logarithmic course and that the biggest changes happen within the first hours. They therefore confirm the results presented in [2] and furthermore show that there are differences in the slopes of the curves. Also, the curves are shifted along the y-axis. This is most likely caused by the different amounts of residue present for the different test series, since different latent fingerprint traces bear different amounts of residue (due to sweat composition, contact pressure when leaving a print, or other influences). Comparing the experimental aging curves of different resolutions and different CWL sensors from these first experiments, no clear dependency can be found on either the resolution or the used sensor. However, the distortion of some curves (such as the ones from test series 6 and 7 depicted in fig. 2) is much higher than that of others, which seems to be a result of environmental influences (such as temperature, humidity or wind) which cannot be totally controlled during our current experiments but should be taken into account in future research. In our second test goal from section 1 we want to approximate the parameters a and b of our formula (7) to calculate and minimise the average error of approximation erravg (see also section 3). For each test series, we have at least 192 tuples tupEi (as calculated in section 4.2), each consisting of a point in time tEi and the corresponding relative amount of white background pixels fE(tEi). The approximation of a and b can be seen as an optimisation problem. More precisely, we want to find the minimum value for the average error erravg described by function (7) for varying parameters a and b. We solve this minimisation problem with the help of a computer program, which systematically varies a and b in steps of 1/10000 in the interval [0,1] and determines the values of a and b for which erravg is minimal. With the optimised parameters a and b we can approximate the course of
the discrete experimental aging curve by our derived continuous mathematical curve (see figure 3). The optimised parameters a and b as well as their corresponding average error erravg are listed in table 2 for all test series.
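The brute-force parameter search described above can be reproduced with a sweep like the following sketch (our own code; the step size of 1/10000 on [0,1] follows the text). Note that a full sweep over all (a, b) pairs with about 192 samples per series is computationally heavy, so a coarse-to-fine sweep or a standard optimiser may be preferable in practice.

```python
import math

def average_error(samples, a, b):
    """Formula (7) for a candidate parameter pair (a, b); samples = [(tEi, fE(tEi)), ...]."""
    total = 0.0
    for t, f in samples:
        x = (f - b) / a
        estimate = math.exp(x) if x < 700 else float("inf")  # guard against overflow
        total += abs(t - estimate) / t
    return total / len(samples)

def optimise_parameters(samples, step=1e-4):
    """Systematically vary a and b in [0, 1] and keep the pair minimising err_avg."""
    best_a, best_b, best_err = None, None, float("inf")
    steps = int(round(1.0 / step))
    for i in range(1, steps + 1):            # a = 0 is excluded (it would divide by zero)
        a = i * step
        for j in range(0, steps + 1):
            b = j * step
            err = average_error(samples, a, b)
            if err < best_err:
                best_a, best_b, best_err = a, b, err
    return best_a, best_b, best_err
```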
Fig. 3. Exemplary illustration of the experimental aging curve and the approximated mathematical aging curve for the aging feature of counting binary pixel, shown for test series 1. The relative amount of white background pixel is shown in relation to the time passed.
In figure 3 we illustrate for test series 1 that the derived mathematical aging function seems to approximate the average course of the experimental curve very well, leaving offsets (which are probably due to the influence factors described in section 4.1 not being totally constant) of the practically measured white background pixels to both sides of the curve.

Table 2. Our experimental test results for the optimized values of the parameters a and b as well as the average error of approximation erravg for all test series

Test series number    1       2       3       4       5       6       7       8       9
Scanning device       CWL1    CWL1    CWL1    CWL2    CWL2    CWL2    CWL3    CWL3    CWL3
Scan resolution       3µm     5µm     10µm    3µm     5µm     10µm    3µm     5µm     10µm
Optimal a             0.0105  0.0093  0.0071  0.0036  0.0112  0.0059  0.0119  0.0079  0.0080
Optimal b             0.5101  0.8151  0.7504  0.9133  0.7489  0.8764  0.5752  0.7573  0.7454
erravg                0.1437  0.1548  0.2006  0.1689  0.1319  0.3988  0.2991  0.1652  0.2580
The test results in table 2 show three findings:

a) No clear dependency on either the resolution of a test series or the used sensor within the test range. Neither for the parameters a and b nor for the average error of approximation erravg can any clear tendency be extracted which might be related to the resolution or the used sensor. Therefore, we can assume that the differences in the test results are caused by influences other than the scan influence.
b) The values of the parameters a and b vary greatly. While the parameter b is dependent on the relative amount of residue in relation to the overall pixels (as mentioned earlier) and therefore is expected to vary in dependence of the fingerprint trace, the high variation in the values of the parameter a is very challenging to explain. Here, as mentioned already in section 4.1, changing influences such as the environmental influence (e.g. temperature, humidity or wind) or the sweat influence (e.g. the composition of the residue) might have a strong impact.

c) The average error of approximation erravg takes on values between 13-40% for all test series. This shows that under the given conditions, even if the optimal values for the parameters a and b can be determined, this error is the best we can get. The reason for this error might be fluctuations in the amount of white background pixels fE(tEi) measured for a certain point in time tEi, which can be a result of changing influences (such as temperature changes, humidity changes or wind) where slight changes might be enough to change the decomposition speed of the fingerprint sample or even affect the sensor's capturing characteristics. Therefore, the error might be improved in future work by taking additional measures to keep all influences more constant or to take their changes into account. Of course, if the age of a fingerprint trace is to be determined at a crime scene, these changes of influences might not always be controllable or reproducible.

Furthermore, examining the error of approximation erri between experimental points in time tEi and their calculated theoretical equivalents tMi' shows varying results. Figures 4 and 5 exemplarily represent two characteristic tendencies which occurred for the test series. In some cases (see figure 4), the error of approximation erri is very high in the beginning and decreases over time, whereas in other cases (see figure 5) it is very unpredictable. This confirms the assumption that the error might be a result of fluctuations in the external influences, since they are expected to occur randomly.
Fig. 4. Our experimental results for the error of approximation erri between the measured time tEi and the mathematically calculated time tMi' of a sample in relation to the absolute time passed, exemplarily shown for test series 1
Fig. 5. Our experimental results for the error of approximation erri between the measured time tEi and the mathematically calculated time tMi' of a sample in relation to the absolute time passed, exemplarily shown for test series 7.
The other test series show similar tendencies in the progression of their error of approximation if compared to the exemplarily depicted ones. Both tendencies occur nearly equally often amongst the test series.
5 Discussion

For our mathematical aging function to be usable in a forensic application scenario (e.g. at a crime scene), the constant parameters a and b play a major role. If they can be derived correctly, we can determine the age of a fingerprint with an average error of approximation erravg of about 13-40%. This can be done by scanning the fingerprint at the crime scene shortly after the crime investigation team arrives and within the next hours or even days (which is still practical, as our crime scene examiner contact told us) with the proposed surface measurement device using the CWL sensor. Having measured the relative amount of white background pixels fE(tEi), we can easily calculate the age of the fingerprint by using formula (5). Since erravg only applies to an average case and erri can be much higher for a single scan, several scans should be carried out and the results should be averaged for a more reliable result. However, the correct approximation of the parameters a and b remains a major challenge. To the best of our knowledge, it is not possible to obtain a and b directly from a crime scene without any additional information, regardless of how many scans are performed. This is due to the fact that neither the absolute time tEi of a scan is given, nor the parameter a or the parameter b. It is unlikely that tEi is given at a crime scene (since then the age determination would not be necessary). Therefore, either a or b needs to be determined in advance. If one of the two parameters is known, the other one can be calculated by conducting only two scans of the fingerprint trace at the crime scene and solving the following system of equations:
fM(tMi) = a · ln(tMi) + b,   tMi > 0.     (8)

fM(tMi + Δt) = a · ln(tMi + Δt) + b,   tMi > 0, Δt > 0.     (9)
Using this system of equations, Δt represents the time between the two scans, a or b are given, and tMi is to be substituted. If a is given, b can be calculated; if b is given, a can be calculated. Determining the parameter b from a given parameter a by solving the system of equations (8)(9) and inserting the corresponding values fE(tE2) for fM(tMi+Δt) and fE(tE1) for fM(tMi), whereas Δt describes the time difference between the two scans, Δt = tE2 − tE1, results in the following formula:

b = a · ln( (1/Δt) · (e^(fE(tE2)/a) − e^(fE(tE1)/a)) ),   Δt > 0, a ≠ 0.     (10)
With the derived formula (10), the parameter b can be calculated from a given parameter a using at least two scans of the fingerprint trace. Again, in this situation it is recommended to use more than two scans for a better approximation of the average value of the parameters and to minimise the influence of outliers. The biggest problem remains the determination of one of the parameters a or b prior to the age determination. Since b is strongly dependent on the amount of residue (as mentioned earlier), it is not possible to determine it in advance. However, it might be possible to determine the value of a by taking all significant influences on the aging process into account. In future work, a generalised table might be created, which assigns certain values to a, according to the temperature, humidity, presence of UV-light or even the chemical composition of the fingerprint residue.
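A direct implementation of this two-scan calibration, i.e. formula (10), could look as follows (a sketch with our own names; the illustrative values in the comment assume a trace that follows the optimal parameters of test series 1):

```python
import math

def parameter_b_from_a(a, f1, f2, delta_t):
    """Formula (10): derive b from a known parameter a and two scans of the same trace.

    f1, f2: binary pixel values of the first and second scan, i.e. fE(tE1) and fE(tE2).
    delta_t: time between the two scans, in the same time unit as the aging function.
    """
    return a * math.log((math.exp(f2 / a) - math.exp(f1 / a)) / delta_t)

# Illustration only: with a = 0.0105 and scans yielding f1 = 0.5435 and f2 = 0.5507
# taken 24 hours apart, the call below returns b ≈ 0.51, matching test series 1.
print(parameter_b_from_a(0.0105, 0.5435, 0.5507, 24.0))
```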
6 Conclusions and Future Work

In this paper, the novel and very promising aging feature of counting binary pixels was examined in a test set of nine test series and a logarithmic course of its experimental aging curves was reported. A mathematical aging function was approximated for each test series as a continuous representation of the discrete experimental aging curve from the test results, to be used for the age determination of latent fingerprint traces in forensic investigations. The average error of approximation of this function was shown to be 13-40% under the given conditions. No characteristic influence of different resolutions or different sensors of the same model could be extracted within the test range. Possible reasons for the observed errors of approximation as well as prospects and limitations of an application of this function for the age determination of forensic latent fingerprint traces were discussed. In future work, the average error of approximation of the function should further be decreased by taking additional measures to keep the influences on the aging process constant or to take changes of these influences into account. Here, additional studies need to be conducted to distinguish major influences from minor ones. Furthermore, a technique should be investigated and developed for the reliable determination of the parameter a of the proposed aging function prior to the age
determination. This might be done by assigning certain values to the parameter depending on the specific characteristics of the present influences. Again, for this task, the significance of these influences needs to be studied further. Acknowledgments. The work in this paper has been funded in part by the German Federal Ministry of Education and Science (BMBF) through the Research Programme under Contract No. FKZ: 13N10816 and FKZ: 13N10818. We also want to thank Anja Bräutigam from the Federal State Police Head Quarters of Saxony-Anhalt and Stefan Gruhn from the Brandenburg University of Applied Sciences for their support in conducting the scans.
References
1. Leich, M., Kiltz, S., Dittmann, J., Vielhauer, C.: Non-destructive forensic latent fingerprint acquisition with chromatic white light sensors. In: Proc. SPIE 7880, 78800S (2011), doi:10.1117/12.872331
2. Hildebrandt, M., Dittmann, J., Pocs, M., Ulrich, M., Merkel, R., Fries, T.: Privacy Preserving Challenges: New Design Aspects for Latent Fingerprint Detection Systems with Contact-Less Sensors for Future Preventive Applications in Airport Luggage Handling. In: Vielhauer, C., Dittmann, J., Drygajlo, A., Juul, N.C., Fairhurst, M.C. (eds.) BioID 2011. LNCS, vol. 6583, pp. 286–298. Springer, Heidelberg (2011)
3. Fries Research Technology (January 19, 2011), http://www.frt-gmbh.com/en/
4. Popa, G., Potorac, R., Preda, N.: Method for Fingerprints Age Determination (December 6, 2010), http://www.interpol.int/Public/Forensic/fingerprints/research/AgeDetermination.pdf
5. Stamm, V.: Methoden zur Altersbestimmung daktyloskopischer Spuren. Wiesbaden: 9. Grundlehrgang für daktyloskopische Sachverständige, Hausarbeit (1997)
6. Aehnlich, J.: Altersbestimmung von daktyloskopischen Spuren mit Hilfe der Laser-Fluoreszenzspektroskopie. Diplomarbeit, Universität Hannover (2001)
7. Baniuk, K.: Determination of Age of Fingerprints. Forensic Science International (46), 133–137 (1990)
8. Stüttgen, G., Ippen, H.: Die normale und pathologische Physiologie der Haut. G. Fischer Verlag, Stuttgart (1965)
9. Liappis, N., Jakel, A.: Free Amino Acids in Human Eccrine Sweat. Arch. Dermatol. Res. 254, 185–203 (1975)
10. Wertheim, K.: Fingerprint Age Determination: Is There Any Hope? Journal of Forensic Identification 53(1), 42–49 (2003)
11. Holyst, B.: Kriminalistische Abschätzung des Spurenalters bei Fingerpapillarlinien. Archiv für Kriminologie, 94–103 (1983)
12. Sampson, M.: Lifetime of a Latent Print on Glazed Ceramic Tile. Journal of Forensic Identification 44(4), 379–386 (1994)
Two-Factor Biometric Recognition with Integrated Tamper-Protection Watermarking

Reinhard Huber1, Herbert Stögner1, and Andreas Uhl1,2

1 School of CEIT, Carinthia University of Applied Sciences, Austria
2 Department of Computer Sciences, University of Salzburg, Austria
[email protected]
Abstract. Two-factor authentication with biometrics and smart-cards enabled by semi-fragile watermarking is proposed. Several advantages of the scheme as compared to earlier approaches are discussed and experiments for an iris-based recognition system demonstrate that semi-fragile integrity verification can be provided by the system. This is achieved without impact on recognition performance, since the slight degradation in terms of ROC behavior which is observed on the watermarked sample data is more than compensated by the additionally available template that is transferred from the smart-card to the matching site via watermarking technology.
1 Introduction
Biometric recognition applications are becoming more and more popular. Biometric authentication systems can resolve most security issues of traditional token-based or knowledge-based authentication systems, since a biometric feature belongs only to one person and cannot be lost or forgotten. But eventually, biometric features can be stolen or adopted and there exist various other ways to circumvent the integrity of a biometric authentication system (see e.g. a corresponding collection of security issues compiled by the UK Government Biometrics Working Group1). Recent work systematically identifies security threats against biometric systems and possible countermeasures [15, 16] and e.g. discusses man-in-the-middle attacks and BioPhishing against a web-based biometric authentication system [21]. Among other suggestions to cope with security threats like applying liveness detection or classical cryptographic encryption and authentication techniques, watermarking has been suggested to solve security issues in biometric systems in various ways [5]. Dong et al. [2] try to give a systematic view of how to integrate watermarking into biometric systems in the case of iris recognition by distinguishing whether biometric template data are embedded into some host
This work has been partially supported by the Austrian Science Fund, project no. L554-N15.
1 http://www.cesg.gov.uk/policy technologies/biometrics/media/biometricsecurityconcerns.pdf
data (“template embedding”), or biometric sample data is watermarked by embedding some data into them (“sample watermarking”). One of the application scenarios described in the literature involving watermarks actually represents a two-factor authentication scheme [8]: biometric data is stored on a smart-card and the actual biometric data acquired at the sensor is used to verify if the user at the sensor is the legitimate owner of the smart-card. Watermarking is used to embed data of a second modality into the data which is stored on the card (in the reference given, facial data is embedded into fingerprint images and at the access control site, a fingerprint sensor is installed). Therefore, watermarking is employed as a simple means of transporting data of two different modalities in an integrated manner. Obviously, this application scenario also represents a case of enabling the application of multibiometric techniques by using watermarking techniques, where biometric template data is embedded into biometric sample data (of different modalities) to enable multibiometric fusion. Traditionally, in these schemes two different sensors are used to acquire the data and again, watermarking is used as a transportation tool only. For an overwhelming majority of watermarking-based techniques for these two scenarios (i.e. two-factor authentication with smart-cards and multibiometrics), robust watermarks have been suggested. However, the motivations for applying this specific type of watermark are not made clear and are discussed only superficially, if at all, in most papers. The usage of robust embedding schemes seems to indicate that both data need to be tightly coupled and that the entire transmitted data might be subject to various manipulations, since robust embedding is meant to make the embedded data robust against changes of the host data. Therefore it seems that an insecure channel between sensor and processing module is assumed in this context. In such an environment host data manipulations are to be expected, including even malicious tampering like cropping. In recent work [4] it has been shown that most robust watermarks cannot prevent even a massive tampering attack (i.e. exchanging the entire iris texture) in the context of an iris recognition system with embedded template data. This comes as no surprise since these watermarks are actually designed to be robust against this type of attack. Obviously, robust watermarking is not the best-suited watermarking technology for the purpose it is suggested for in this context. While this specific attack is targeted against the security of robust embedding (and can be resolved by using different types of watermarks as shown in this work), robust watermarking additionally introduces distortions into the sample data, impacting on recognition performance [3]. Also for this problem, different types of watermarks may represent better solutions, as also covered here. In this paper, we introduce the application of semi-fragile watermarking in the context of a two-factor authentication scheme using iris recognition and smart-cards. In contrast to most multibiometric approaches, we do not embed template data from a different modality; instead, the modalities of sample data and template data match. We demonstrate that in addition to enabling tightly coupled transport, semi-fragile watermarking can also provide sensitivity against
tampering and an almost negligible impact on recognition performance. An additional advantage of the proposed scheme is the improved recognition accuracy due to the use of two templates in the matching process and the increased security due to the two-factor approach in general. Section 2 provides an overview of several techniques for incorporating watermarking into biometric systems. Emphasis is given to the discussion of several examples of two-factor authentication and multibiometric techniques, which are enabled by embedding template data into sample data using watermarking. In Section 3, we explain and discuss the proposed scheme and present experimental results in Section 4. Section 5 concludes the paper.
2 Watermarking in Biometric Systems
A recent overview of the topic and an extensive literature review is given in [5]. One of the first ideas to somehow combine biometric technologies and watermarking is “biometric watermarking” [18]. The aim of watermarking in this approach is not to improve any biometric system, but to employ biometric templates as “message” to be embedded in classical robust watermarking applications like copyright protection in order to enable biometric recognition after the extraction of the watermark (WM). A second application case for robust WMs is to prevent the use of sniffed sample data to fool the sensor in order to complement or replace liveness detection techniques. During data acquisition, the sensor (i.e. camera) embeds a WM into the acquired sample image before transmitting it to the feature extraction module. In case an intruder interferes with the communication channel, sniffs the image data and presents the fake biometric trait (i.e. the image) to the sensor, it can detect the WM, will deduce non-liveness and will refuse to process the data further (see e.g. [1] embedding voice templates into iris sample data). A steganographic approach is to transmit biometric data (i.e. template data) hidden in some arbitrary carrier / host data or biometric samples of different biometric modalities. The idea is to conceal the fact that biometric data transfer takes place; e.g. Jain et al. [6] propose to embed fingerprint minutiae data into an arbitrary host image while Khan et al. [9] suggest to embed fingerprint templates into audio signals. Questions of sensor and sample authentication using watermarks have also been discussed. During data acquisition, the sensor (i.e. camera) embeds a watermark into the acquired sample image before transmitting it to the feature extraction module. The feature extraction module only proceeds with its tasks if the WM can be extracted correctly. For example, fragile watermarking has been suggested to serve that purpose, either embedding image-independent [20] or image-dependent data as WM [19]. A significant amount of work has also been published in the area of using WMs to enable a multibiometric approach by embedding a biometric template into a biometric sample of a different biometric modality. There are two variants:
First, there are two different sensors acquiring two biometric traits. Since for one modality template data is embedded, these data need to be generated at the sensor site, which makes this approach somewhat unrealistic, at least for low-power sensor devices. In addition to that, besides the increased recognition performance of multimodal systems in general, there is no further specific gain in security (see for example: Jain et al. [7] embed face data into fingerprint images, as do Chung et al. [11] and Noore et al. [12]; Park et al. [13] suggest to use robust embedding of iris templates into face image data, etc.). The second variant is to store the template on a smart-card which has to be submitted by the holder at the access control site. The smart-card embeds the template into the host sample data. This in fact represents a two-factor authentication system which increases security by introducing an additional token-based scheme and also leads to higher recognition accuracy as compared to a single biometric modality. With respect to general two-factor authentication schemes, [17] propose to embed additional classical authentication data with robust watermarking into sample data, where the embedded signature is used as an additional security token like a password. Jain and Uludag [8] propose to embed face template data in fingerprint images stored on a smart-card (called scenario 2 in the paper, while scenario 1 is a steganographic one). Instead of embedding an additional security token, biometric template data from a second sensor can also be embedded – in [14] an encrypted palmprint template is embedded into a fingerprint image, where the key is derived from palmprint classes. Since these additional data are not used in multibiometric fusion but serve as an independent second token coming from a second sensor, this approach can be interpreted as being both a multibiometric recognition scheme and a two-factor authentication scheme. The impact of watermarking on the recognition performance of biometric systems has been investigated most thoroughly in the context of iris recognition. While Dong et al. [2] do not report on performance degradations when investigating a single watermark embedding algorithm and one iris recognition technique only, Hämmerle et al. [3] find partially significant reductions in recognition accuracy (especially in case of high capacity) when assessing two iris recognition schemes and a couple of robust watermarking algorithms. Similar to the latter results, recognition impact has been observed as well for speech recognition [10] and fingerprint recognition [14].
3 Two-Factor Biometric Recognition: Semi-fragile Template Embedding
We focus on a two-factor authentication scheme based on biometrics and a token, i.e. a smart-card. When a user is enrolled into the system, sample data are acquired, corresponding template data is extracted and stored in two different ways: first, in the centralized biometric database required for the actual recognition process, and second, on the smart-card. In the authentication phase, the
smart-card is submitted by the user to the access control site and the sensor acquires “new” sample data. The following actions are performed (Fig. 1 illustrates the scenario):

1. From the acquired sample data, a template is extracted and compared to the template on the smart-card. Only if there is sufficient correspondence are the following stages conducted. Note that this is done at the sensor site, so there is no necessity to contact the centralized database.
2. The smart-card embeds its template into the sample data employing a semi-fragile embedding technique (this template is referred to as “template watermark” subsequently).
3. The data is sent to the feature extraction and matching module.
4. At the feature extraction module, the watermark template is extracted and is compared to the template extracted from the sample (denoted simply as “template” in the following). In this way, the integrity of the transmitted sample data is ensured when there is sufficient correspondence between the two templates. In case of a biometric system operating in verification mode, the template watermark can also be compared to the template in the database corresponding to the claimed identity (denoted “database template” in the following). Note that in the latter case, the correspondence is expected to be higher since the template generated during enrollment has been extracted as template watermark – coming from the smart-card – and is also extracted from the database.
5. Finally, in case the integrity of the data has been proven, the template watermark and the template are used in the matching process, granting access if the similarity to the database template is high enough.

When comparing this approach to previous techniques proposed in the literature, we notice the following differences / advantages: As opposed to techniques employing robust watermarking, the proposed scheme can ensure sample data integrity in addition to enabling tightly coupled transport. As opposed to techniques employing arbitrary (semi-)fragile watermarks for integrity protection (instead of the template watermark used here), there is no need to transmit or store the watermarks at the receiving site for integrity verification. Additionally, the recognition performance is better since two templates can be used in the matching process, one of which is eventually identical to the database template. When compared to integrity protection enabled by (robust) digital signatures, our approach offers the advantage of disclosing the location of an eventual modification, which enables the assessment of the modification's significance. Also, the verification data is embedded and does not have to be taken care of separately. Besides, a signature-based scheme cannot provide the functionality of transporting the authentication data stored on the card; it is intrinsically restricted to integrity verification and cannot support the two-factor aspect of the scheme we have introduced here. However, some issues need to be investigated with respect to the proposed scheme (which will be done in the experiments):
Fig. 1. Considered application scenario
– How can we construct an actual semi-fragile watermarking technique capable of embedding template data?
– What is the impact of the embedded template watermark on the recognition performance using the template for matching only?
– What is the amount of robustness we can support with a scheme like this (as opposed to a fragile scheme)?
– Does integrity verification indeed work in a robust manner?
– Can biometric matching take advantage of the two different templates available for matching?
4 Experiments in the Case of Iris Recognition

4.1 Iris Recognition and Iris Databases
The employed iris recognition system is Libor Masek's Matlab implementation2 of a 1-D version of the Daugman iris recognition algorithm. First, this algorithm
2 http://www.csse.uwa.edu.au/~pk/studentprojects/libor/sourcecode.html
segments the eye image into the iris and the remainder of the image. Iris image texture is mapped to polar coordinates resulting in a rectangular patch which is denoted “polar image”. For feature extraction, a row-wise convolution with a complex Log-Gabor filter is performed on the polar image pixels. The phase angle of the resulting complex value for each pixel is discretized into 2 bits. These 2 bits of phase information are used to generate a binary code. After extracting the features of the iris, considering translation, rotations, and disturbed regions in the iris (a noise mask is generated), the algorithm outputs the similarity score by giving the Hamming distance between two extracted templates. The sensible range of the Hamming distance reaches from zero (ideal matching of two iris images of the same person) to 0.5 (ideal mismatch between two iris images of different persons).

The following three datasets are used in the experiments:

CASIAv3 Interval database3 consists of 2639 images with 320×280 pixels in 8 bit grayscale .jpeg format, out of which 500 images have been used in the experiments.
MMU database4 consists of 450 images with 320×240 pixels in 24 bit grayscale .bmp format; all images have been used in the experiments.
UBIRIS database5 consists of 1876 images with 200×150 pixels in 24 bit colour .jpeg format, out of which 318 images have been used in the experiments.

All intra-class and inter-class matches possible with the selected respective image sets have been conducted to generate the experimental results shown.
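The similarity score is the usual masked (fractional) Hamming distance over the binary iris codes. A minimal sketch of this matching step is given below (our own notation; we assume masks mark usable bits as True, and rotation compensation by circularly shifting one code is omitted):

```python
import numpy as np

def masked_hamming_distance(code_a, mask_a, code_b, mask_b):
    """Fractional Hamming distance over bits that are valid in both noise masks.

    code_*: boolean arrays holding the 2-bit-per-pixel phase codes (flattened).
    mask_*: boolean arrays, True where the iris texture is usable (not disturbed).
    """
    valid = mask_a & mask_b
    if not valid.any():
        return 0.5  # no comparable bits: treat as an ideal mismatch
    disagreements = np.logical_xor(code_a, code_b) & valid
    return disagreements.sum() / valid.sum()
```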
4.2 The Watermarking Scheme
As the baseline system, we employ the fragile watermarking scheme developed by Yeung et al. and investigated in the context of fingerprint recognition [20]. For this algorithm, the embedded watermark is binary and padded to the size of the host image. Subsequently, the WM is embedded into each pixel according to some key information. As a consequence, the WM capacity is 89600, 76800, and 30000 bits for CASIAv3, MMU, and UBIRIS, respectively. Table 1 shows PSNR values averaged over all images when embedding 10 randomly generated WMs into each image. Obviously, the quality of the images remains very high after embedding, especially with error diffusion enabled, which is therefore used in all subsequent experiments. In Figure 2 we display tampering localization examples of the original fragile scheme. Fig. 2.b shows a doctored image corresponding to the images used in the attack in [4] – the attack is clearly revealed and its location is indicated exactly. As expected, when applying compression to the image with JPEG quality 75%, the WM indicates errors across the entire image (except for the pupil area, which is not affected by compression due to its uniform grayscale) as shown in Fig. 2.f.
Table 1. PSNR without and with error diffusion
                     CASIAv3   MMU     UBIRIS
PSNR [dB]            48.07     43.22   47.69
PSNR with ED [dB]    49.57     44.57   49.16
Fig. 2. Tamper localization of the original Yeung scheme: (a) original, (b) replaced iris, (c) compressed image, (d) original watermark, (e) WM of (b), (f) WM of (c)
Since this technique is a fragile WM scheme, no robustness against any image manipulation can be expected, of course. Table 2 demonstrates this property by displaying averaged bit error rates (BER) computed between original and extracted WMs for a subset of 100 images with randomly generated WMs. As can be observed, there is a certain amount of robustness against noise and JPEG compression with quality 100. For the other attacks, the BER of 0.5 indicates that the extracted WMs are purely random and therefore entirely destroyed by the attack. So far, randomly generated WMs with a size identical to the images have been embedded. The usually smaller size of biometric templates can be exploited to embed the template in a redundant manner, i.e. we embed the template several times as shown in Fig. 3.a. After the extraction process, all template watermarks are used in a majority voting scheme which constructs a “master” template watermark as shown in Fig. 3.b; a sketch of this voting step is given below. We expect this to result in higher robustness, leading to an overall semi-fragile WM scheme for the template WMs. In our implementation, the iris code consists of 9600 bits; therefore, we can embed 9, 8, and 3 templates into images from the CASIAv3, MMU, and UBIRIS databases, respectively.
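The voting step can be sketched as follows; class and method names are ours, and the copies are assumed to have already been extracted from the watermarked image.

import java.util.BitSet;
import java.util.List;

// Majority voting over the redundantly embedded template watermarks (Fig. 3.b):
// each bit of the "master" template takes the value found in the majority of
// the extracted copies.
public final class MajorityVote {

    public static BitSet merge(List<BitSet> extractedCopies, int templateBits) {
        BitSet master = new BitSet(templateBits);
        for (int i = 0; i < templateBits; i++) {
            int ones = 0;
            for (BitSet copy : extractedCopies) {
                if (copy.get(i)) ones++;
            }
            if (2 * ones > extractedCopies.size()) { // strict majority decides the bit
                master.set(i);
            }
        }
        return master;
    }
}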
Table 2. BER for six different attacks

Attack                       CASIAv3      MMU          UBIRIS
Mean filtering               0.50         0.50         0.50
Gaussian Noise N = 0.0005    4.6 · 10^-5  5.6 · 10^-5  6.1 · 10^-5
Gaussian Noise N = 0.001     0.03         0.03         0.03
JPEG Q100                    0.05         0.06         0.05
JPEG Q95                     0.43         0.45         0.45
JPEG Q75                     0.49         0.50         0.50
Fig. 3. The semi-fragile Yeung scheme: (a) redundant embedding, (b) majority voting
4.3 Experimental Results
In Table 3 we show results of the robustness tests when applied to the database images with redundantly embedded template watermarks. When compared to Table 2, we clearly observe improved robustness against noise insertion and moderate JPEG compression. It can be clearly seen that with an increasing amount of redundancy, robustness is improved, which is to be expected due to the more robust majority decoding (recall that for CASIAv3 the redundancy is maximal among the three datasets). An interesting question is the extent of the influence an embedded watermark has on the recognition performance of the system. In Fig. 4 we compare ROC curves of the original data and ROC curves of sample data with embedded WMs; in the latter case, the average over ten embedded WMs is shown. While for CASIAv3 and MMU there is hardly a noticeable impact, we notice significant result degradation in the case of the UBIRIS dataset. A possible explanation for this effect is the already low quality of this dataset: in case of additional degradation, results get worse quickly, while for the other datasets there is still room for slight quality reduction since the original quality is very high. The situation changes when it comes to additional distortions: as shown in Table 4, also in the case of CASIAv3 we notice some impact on recognition performance with embedded WMs as compared to the original sample data without WMs embedded. Besides the EER, we show the FRR (for FAR = 10^-3) and the FAR (for FRR = 5 · 10^-3). It is interesting to see that mean filtering and moderate JPEG compression can even improve the recognition results of the data
Table 3. BER for seven different attacks

Attack                       CASIAv3   MMU    UBIRIS
Mean filtering               0.50      0.50   0.50
Gaussian Noise N = 0.0005    0         0      0
Gaussian Noise N = 0.001     0         0      0.003
JPEG Q100                    0         0      0.01
JPEG Q99                     0         0.01   0.05
JPEG Q98                     0.08      0.14   0.22
JPEG Q95                     0.35      0.40   0.43
Fig. 4. ROC curves for the sample data with random embedded WMs and without: (a) CASIAv3, (b) MMU, (c) UBIRIS
without WMs embedded – this effect is due to the denoising capabilities of mean filtering and compression. In any case, we notice a slight result degradation for the variant with embedded WMs. Finally, we want to consider the question to what extent matching between the template WM and the database template(s) is influenced by attacks, i.e. we investigate the robustness of the embedded template WM. The corresponding information can be used to assess the integrity of the data, i.e. in case a sufficiently high degree of correspondence between those templates is observed, the integrity of the sample data is proven. We consider the case that 5 different templates are stored in the database, out of which a single database template is generated by majority coding as explained before in the case of the template WM (compare Figure 3.b). Table 5 shows the BER for the different attacks considered. A typical decision threshold for the iris recognition system in use is at a BER in the range [0.3, 0.35]. Taking this into account, we realize that integrity verification in our technique is indeed robust against moderate JPEG compression and noise. On the other hand, mean filtering and JPEG compression at quality 95% destroy the template WM and indicate modification. The distribution of incorrect bits can be used to differentiate between malicious attacks (where an accumulation of incorrect bits can be observed in certain regions, compare Fig. 2.e) and significant global distortions like compression (compare Fig. 2.f).
Table 4. ROC behavior under different attacks

                                                EER     FRR     FAR
CASIAv3
  no attack                  original           0.045   0.091   0.650
                             template watermark 0.048   0.081   0.742
  mean filter                original           0.035   0.061   0.644
                             template watermark 0.044   0.063   0.669
  JPEG Q98                   original           0.037   0.074   0.626
                             template watermark 0.049   0.086   0.617
UBIRIS
  no attack                  original           0.032   0.062   0.764
                             template watermark 0.046   0.071   0.865
  Gaussian Noise N = 0.001   original           0.038   0.068   0.871
                             template watermark 0.049   0.073   0.868
  JPEG Q95                   original           0.036   0.066   0.838
                             template watermark 0.045   0.070   0.975
Table 5. BER for seven different attacks

Attack                       CASIAv3   MMU    UBIRIS
No attack                    0.21      0.23   0.19
Mean filtering               0.49      0.50   0.50
Gaussian Noise N = 0.0005    0.21      0.23   0.19
Gaussian Noise N = 0.001     0.21      0.23   0.19
JPEG Q100                    0.21      0.23   0.19
JPEG Q99                     0.21      0.24   0.22
JPEG Q98                     0.25      0.30   0.32
JPEG Q95                     0.41      0.45   0.45
5 Conclusion
In this paper we have introduced a two-factor authentication system using biometrics and a token-based scheme, e.g. a smart-card. Semi-fragile WM is used to embed the template data stored on the smart-card into the sample data acquired at the authentication site. We have discussed certain advantages of the approach as compared to earlier work and have shown experimentally, in the case of an iris recognition system, that semi-fragile integrity verification is indeed achieved using the proposed approach. Care has to be taken in the actual biometric matching process since, in contrast to claims in the literature, the recognition performance of the templates extracted from watermarked sample data suffers degradation to some minor extent. However, this can be more than compensated by the additional template watermark, which should be involved in matching as well.
References [1] Bartlow, N., Kalka, N., Cukic, B., Ross, A.: Protecting iris images through asymmetric digital watermarking. In: IEEE Workshop on Automatic Identification Advanced Technologies, vol. 4432, pp. 192–197. West Virginia University, Morgantown (2007) [2] Dong, J., Tan, T.: Effects of watermarking on iris recognition performance. In: Proceedings of the 10th International Conference on Control, Automation, Robotics and Vision (ICARCV 2008), pp. 1156–1161 (2008) [3] H¨ ammerle-Uhl, J., Raab, K., Uhl, A.: Experimental study on the impact of robust watermarking on iris recognition accuracy (best paper award, applications track). In: Proceedings of the 25th ACM Symposium on Applied Computing, pp. 1479– 1484 (2010) [4] H¨ ammerle-Uhl, J., Raab, K., Uhl, A.: Attack against robust watermarking-based multimodal biometric recognition systems. In: Vielhauer, C., Dittmann, J., Drygajlo, A., Juul, N.C., Fairhurst, M.C. (eds.) BioID 2011. LNCS, vol. 6583, pp. 25–36. Springer, Heidelberg (2011) [5] H¨ ammerle-Uhl, J., Raab, K., Uhl, A.: Watermarking as a means to enhance biometric systems: A critical survey. In: Ker, A., Craver, S., Filler, T. (eds.) IH 2011. LNCS, vol. 6958, pp. 238–254. Springer, Heidelberg (2011) [6] Jain, A.K., Uludag, U.: Hiding fingerprint minutiae in images. In: Proceedings of AutoID 2002, 3rd Workshop on Automatic Identification Advanced Technologies, Tarrytown, New York, USA, pp. 97–102 (March 2002) [7] Jain, A.K., Uludag, U., Hsu, R.L.: Hiding a face in a fingerprint image. In: Proceedings of the International Conference on Pattern Recognition (ICPR 2002), Quebec City, Canada, pp. 756–759 (August 2002) [8] Jain, A.K., Uludag, U.: Hiding biometric data. IEEE Transactions on Pattern Analysis and Machine Intelligence 25(11), 1494–1498 (2003) [9] Khan, M.K., Xie, L., Zhang, J.S.: Robust hiding of fingerprint-biometric data into audio signals. In: Lee, S.-W., Li, S.Z. (eds.) ICB 2007. LNCS, vol. 4642, pp. 702–712. Springer, Heidelberg (2007) [10] Lang, A., Dittmann, J.: Digital watermarking of biometric speech references: impact to the eer system performance. In: Delp, E.J., Wong, P.W. (eds.) Proceedings of SPIE, Security, Steganography, and Watermarking of Multimedia Contents IX, vol. 6505, p. 650513 (2007) [11] Moon, D.-s., Kim, T., Jung, S.-H., Chung, Y., Moon, K., Ahn, D., Kim, S.-K.: Performance evaluation of watermarking techniques for secure multimodal biometric systems. In: Hao, Y., Liu, J., Wang, Y.-P., Cheung, Y.-m., Yin, H., Jiao, L., Ma, J., Jiao, Y.-C. (eds.) CIS 2005, Part II. LNCS (LNAI), vol. 3802, pp. 635–642. Springer, Heidelberg (2005) [12] Noore, A., Singh, R., Vatsa, M., Houck, M.M.: Enhancing security of fingerprints through contextual biometric watermarking. Forensic Science International 169, 188–194 (2007) [13] Park, K.R., Jeong, D.S., Kang, B.J., Lee, E.C.: A Study on Iris Feature Watermarking on Face Data. In: Beliczynski, B., Dzielinski, A., Iwanowski, M., Ribeiro, B. (eds.) ICANNGA 2007. LNCS, vol. 4432, pp. 415–423. Springer, Heidelberg (2007) [14] Rajibul, M.I., Shohel, M.S., Andrews, S.: Biometric template protection using watermarking with hidden password encryption. In: Proceedings of the International Symposium on Information Technology 2008 (ITSIM 2008), pp. 296–303 (2008)
[15] Ratha, N.K., Connell, J.H., Bolle, R.M.: Enhancing security and privacy in biometrics-based authentication systems. IBM Systems Journal 40(3), 614–634 (2001) [16] Roberts, C.: Biometric attack vectors and defenses. Computers & Security 26, 14–25 (2007) [17] Satonaka, T.: Biometric watermark authentication with multiple verification rule. In: Proceedings of the 12th IEEE Workshop on Neural Networks in Signal Processing, pp. 597–606 (2002) [18] Vielhauer, C., Steinmetz, R.: Approaches to biometric watermarks for owner authentification. In: Proceedings of SPIE, Security and Watermarking of Multimedia Contents III, San Jose, CA, USA, vol. 4314 (January 2001) [19] Wang, D.-S., Li, J.-P., Hu, D.-K., Yan, Y.-H.: A novel biometric image integrity authentication using fragile watermarking and Arnold transform. In: Li, J.P., Bloshanskii, I., Ni, L.M., Pandey, S.S., Yang, S.X. (eds.) Proceedings of the International Conference on Information Computing and Automatation, pp. 799–802 (2007) [20] Yeung, M.M., Pankanti, S.: Verification watermarks on fingerprint recognition and retrieval. Journal of Electronal Imaging, Special Issue on Image Security and Digital Watermarking 9(4), 468–476 (2000) [21] Zeitz, C., Scheidat, T., Dittmann, J., Vielhauer, C.: Security issues of internetbased biometric authentication systems: risks of man-in-the-middle and BioPhishing on the example of BioWebAuth. In: Delp, E.J., Wong, P.W., Dittmann, J., Nemon, N.D. (eds.) Proceedings of SPIE, Security, Forensics, Steganography, and Watermarking of Multimedia Contents X, vol. 6819, pp. 0R-1–0R12 (2008)
Feature Selection by User Specific Feature Mask on a Biometric Hash Algorithm for Dynamic Handwriting Karl Kümmel, Tobias Scheidat, Christian Arndt, and Claus Vielhauer Brandenburg University of Applied Sciences, P.O. Box 2132, 14737 Brandenburg, Germany {kuemmel,scheidat,arndtch,claus.vielhauer}@fh-brandenburg.de
Abstract. One of the most important requirements on a biometric verification system, besides others (e.g. biometric template protection), is a high user authentication performance. During the last years a lot of research has been done in different domains to improve user authentication performance. In this work we suggest a user specific feature mask vector MV applied to a biometric hash algorithm for dynamic handwriting to improve user authentication and hash generation performance. MV is generated using an additional set of reference data in order to select/deselect certain features used during the verification process. Therefore, this method can be considered a simple feature selection strategy and is applied for every user within the system. In our first experiments we evaluate 5850 raw data samples captured from 39 users for five different semantics. Semantics are alternative written content to conceal the real identity of a user. First results show a noticeable decrease of the equal error rate by approximately three percentage points for each semantic. The lowest equal error rate (5.77%) is achieved by the semantic symbol. In the context of biometric hash generation, the reproduction rate (RR) increases by an average of approx. 26%, whereas the highest RR (88.46%) is obtained by the semantic symbol along with a collision rate (CR) of 5.11%. The minimal amount of selected features during the evaluation is 81 and the maximum amount is 131 (all available features). Keywords: biometrics, dynamic handwriting, biometric hashing, user bitmask, feature selection.
1 Introduction

Today, biometric user authentication is an important field in IT security. It relies on individual biological or behavioral characteristics of a person. The purpose of a generic biometric system is to identify and/or verify a person’s identity based on at least one biometric modality (e.g. fingerprint, iris, voice). For all biometric systems, it is crucial to protect the biometric reference data (template) in order to avoid misuse of individual and personal data. However, biometric templates cannot be easily protected by common cryptographic hash algorithms in the way they are used in ordinary password authentication systems. The biometric intra-class variability has to be taken into account to ensure reproducibility and protection of a template. The problem of biometric template protection is a frequently discussed issue in biometrics [1]. One possibility to ensure reproducibility and simple template protection is, for example, the
Biometric Hash algorithm for dynamic handwriting introduced in [2]. The aim of this method is to transform intra-subject biometric data into stable and individual hash vector values; an overview is given in section 2. In this work we focus on the authentication performance and robustness of this particular biometric hash algorithm. During the last years a lot of research has been done on almost every biometric authentication algorithm and modality to improve user authentication performance. Hollingsworth et al. introduce in [4] a method where potentially fragile iris code bits are masked to increase the separation between the match and non-match distributions in iris based authentication systems. Fratric et al. propose in [5] a method of feature extraction from face images to improve recognition accuracy. They use a so-called local binary linear discriminant analysis (LBLDA), which combines the good characteristics of both LDA and local feature extraction methods. Biometric fusion is another technique to improve user authentication performance. Rathgeb et al. describe in [6] a generic fusion technique for iris recognition at bit-level (called Selective Bit Fusion) to improve accuracy and processing time. Another method, besides many others, is the improvement of the authentication performance by determination of useful features during a feature selection process. In this context, useful features are features which positively affect the user authentication and biometric hash generation performance. Kumar et al. show in [7] that an evaluation and selection of useful biometric features can improve the recognition accuracy. They used a correlation based feature selection (CFS) for bimodal biometric systems and analyzed the classification performance. Makrushin et al. compare in [8] different feature selection strategies to determine sophisticated features. It has been shown that forward and backward selection algorithms always give better results than the considered heuristics. In this work we suggest a much simpler way of feature selection than those described in [8]: we apply a user specific feature mask to a biometric hash algorithm for dynamic handwriting to select and/or deselect specific features in order to improve the authentication performance as well as the generation of stable individual biometric hashes. The paper is structured as follows. In section 2 we give an overview of the Biometric Hash algorithm for dynamic handwriting. A simple user specific feature mask generation method is introduced in section 3. Experimental results are shown and discussed in section 4. In the last section we present a conclusion and our future work based on the findings.
2 Biometric Hash Algorithm

The Biometric Hash algorithm for dynamic handwriting (hereafter BioHash) was initially introduced by Vielhauer et al. in [2] and enhanced in [3]. During the enrollment process the BioHash algorithm generates a so-called Interval Matrix IM for each user. The IM is based on raw data of the writer and several parameters. The raw data of each dynamic handwriting sample consists of a time dependent sequence of physical values derived from a digitizer device (e.g. Tablet PC, signature tablet). Generally, there are five values per sample point: pen tip positions x(t) and y(t), pen tip pressure p(t) and pen orientation angles altitude Φ(t) and azimuth Θ(t). From each raw data sample derived from a person during the enrollment process, a statistical
feature vector (static and dynamic features) is calculated with a dimensionality of k (k=131 in the implementation used in this paper). The IM consists of a vector containing the length of a mapping interval for each feature and an offset vector. Both vectors are calculated based on an analysis of the intra-class variability of the user using his/her statistical feature vectors. There are two possibilities to parameterize the hash generation by scaling the mapping intervals stored in the IM: Tolerance Vector TV and Tolerance Factor TF. The aim of the TV is to provide a scaling of the mapping interval of each statistical feature separately. Thus, the dimensionality of TV is also k. TV can be calculated individually for each user or globally for a group of users, e.g. either based on all or a selection of enrolled persons, but also on a disjoint group. In contrast to the Tolerance Vector, the Tolerance Factor TF is a global hash generation parameter, which is a scalar value. Using the TF, it is possible to scale the mapping intervals for all features globally by the same factor. Based on one statistical feature vector derived from the enrollment data and the user’s individual IM, the so-called interval mapping function determines the reference hash vector bref of a user. Therefore, the feature dependent interval lengths and offsets provided by IM are used to map each of the k statistical features to the corresponding hash value. Each further biometric hash is calculated in the same manner, independently of whether it is used for biometric verification or hash generation. For verification, the hash vector b derived from the currently presented handwriting sample is compared against the reference hash vector bref by some distance measurement. For more details of the single calculation steps, the interested reader is referred to reference [2].
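To illustrate the interval mapping step only, the following sketch maps a statistical feature vector to a BioHash using per-feature interval lengths and offsets taken from the IM. The exact rounding and parameter handling of [2] may differ, so this should not be read as the reference implementation.

// Simplified interval mapping: each of the k statistical features is mapped to
// an integer hash value via the feature-dependent offset and interval length.
public final class IntervalMapping {

    public static int[] toBioHash(double[] features, double[] intervalLength, double[] offset) {
        int k = features.length;   // k = 131 in the implementation used in the paper
        int[] hash = new int[k];
        for (int i = 0; i < k; i++) {
            hash[i] = (int) Math.floor((features[i] - offset[i]) / intervalLength[i]);
        }
        return hash;
    }
}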
3 User Specific Feature Mask Generation

In addition to the reference BioHash vector bref and the corresponding Interval Matrix IM, we generate a k-dimensional (k=131) feature mask vector MV for each user. MV is created during the feature selection process after the enrollment. The main idea of creating a feature mask vector is to select or deselect specific features. If a bit is set to 1, the represented feature is considered during the verification process; if it is set to 0, it is not taken into account. This method allows an uncomplicated user specific enabling or disabling of the used features.

3.1 Selection Strategy

A user specific feature mask vector MV is generated using the reference BioHash vector bref and the corresponding IM. Furthermore, raw data samples s0, s1, …, sn, which are not used during the enrollment process, are required. The identifier n represents the maximum number of used samples. The feature selection (MV generation) is done in three steps. Firstly, the k-dimensional feature vectors fv0, fv1, …, fvn are determined from all raw data samples s0, s1, …, sn. Secondly, the feature vectors of each user are mapped to the biometric hash vectors b0, b1, …, bn using the reference Interval Matrix IM of this specific user. In the last step the feature mask vector MV is generated for each user. Therefore an element-wise comparison is done using the reference BioHash bref and the BioHashes b0, b1, …, bn of the specific user. If a certain number of values at
position i is equal, the corresponding i-th bit of MV is set to 1; otherwise it is set to 0. We define a so-called similarity threshold ths as the maximum number of allowed differences between the i-th elements of the hashes. If ths is set to 0, all values have to be equal; if ths is set to 1, only one different value is allowed, and so on. In the end, the result of the MV generation is a k-dimensional feature mask vector MV. The MV is a new part of the reference data (template) and is therefore stored together with the corresponding Interval Matrix IM and BioHash bref, for example in a database or on a Smart Card. In our first implementation we only generate the MV once for each user during the enrollment process. Figure 1 exemplarily shows the MV generation using only three short BioHashes (b1, b2 and b3) to demonstrate the procedure. In this example we use a threshold ths of 0 and the character “∧” (logical conjunction) represents the element-wise comparison between the three vectors.
b1 = (24, 13, 113, 309, 5, 710, 81, 28)
b2 = (26, 13, 117, 309, 5, 710, 83, 28)
b3 = (24, 13, 117, 309, 5, 710, 84, 28)
MV = ( 0,  1,   0,   1, 1,   1,  0,  1)
Fig. 1. Example of MV generation during the enrollment process
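A compact sketch of this selection step is given below. It follows one plausible reading of the description above, counting at each position how many of the additional BioHashes deviate from the reference hash bref; the class and method names are ours.

// Feature mask vector generation: bit i of MV is set to 1 if at most ths of
// the BioHashes b0..bn differ from the reference hash bref at position i
// (ths = 0 in the experiments, i.e. all values have to agree).
public final class MaskGeneration {

    public static int[] generateMV(int[] bref, int[][] additionalHashes, int ths) {
        int k = bref.length;
        int[] mv = new int[k];
        for (int i = 0; i < k; i++) {
            int differences = 0;
            for (int[] b : additionalHashes) {
                if (b[i] != bref[i]) differences++;
            }
            mv[i] = (differences <= ths) ? 1 : 0;  // select feature i only if it is stable
        }
        return mv;
    }
}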
3.2 Feature Mask Vector and Verification

During a regular verification without feature mask vector consideration, the hash vector bcur derived from the currently presented handwriting sample is compared against the reference hash vector bref by some distance measurement. In the current configuration, we use the Hamming Distance combined with the feature mask vector as measurement. If two k-dimensional BioHashes bref and bcur are compared using the Hamming Distance, an intermediate result in terms of a k-dimensional bit vector HV is calculated. The number of all ones inside this vector is equal to the Hamming Distance of the two k-dimensional vectors bref and bcur. In order to include the feature selection results stored inside the feature mask vector MV, a k-dimensional vector HVMV is calculated by determining the result of the AND (logical conjunction) operation of the two vectors MV and HV. In the end, the Hamming Distance of bref and bcur, considering the feature mask vector MV, is the sum of all ones of the HVMV vector. Therefore, the maximum Hamming Distance value depends not only on the
dimensionality of a vector but also on the number of ones of the feature mask vector. Figure 2 shows the effect of the MV during the verification process; only short BioHashes are used to demonstrate the procedure. The result of this simple example, calculating the Hamming Distance of bref and bcur using the MV, is HDMV = 1.
Fig. 2. Example of MV used during the verification process
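The masked distance itself is compact; the following sketch (our own naming) computes HV, HVMV and HDMV in a single pass.

// Hamming distance between two BioHashes under a feature mask: HV marks the
// positions where bref and bcur differ, HVMV = HV AND MV, and HDMV is the
// number of ones in HVMV (cf. Fig. 2).
public final class MaskedVerification {

    public static int maskedHammingDistance(int[] bref, int[] bcur, int[] mv) {
        int hdMV = 0;
        for (int i = 0; i < bref.length; i++) {
            int hv = (bref[i] != bcur[i]) ? 1 : 0; // element-wise difference indicator
            hdMV += hv & mv[i];                    // only selected features contribute
        }
        return hdMV;
    }
}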
4 Experimental Evaluation

In this section we show our first experiments on applying a user specific feature mask vector to the Biometric Hash algorithm for dynamic handwriting. Our goal is to compare the performance of user authentication and biometric hash generation of the algorithm with and without a feature mask vector. First we define the experimental settings. Secondly, we introduce our methodology to provide a comparative study of the achieved results. Thirdly, experimental results are presented and discussed.

4.1 Experimental Settings

The biometric database of our initial tests consists of 39 subjects, who donated 30 handwriting samples in three sessions with an interval of at least one month between two sessions. Within a session a user provides 10 handwritten samples for five different semantics (5850 test samples overall). These semantics are “Free chosen Pseudonym” (pseudonym), “Free chosen Symbol” (symbol), “Answer to the Question: Where are you from?” (place), “Fixed 5 digit PIN: 77993” (public PIN) and “Free chosen 5 digit PIN” (secret PIN). It has been observed in [2] that semantics produce similar recognition accuracy compared to handwritten signatures, without disclosing the true identity of the writer. All samples were captured under laboratory conditions using a Toshiba M200 Portege tablet PC. The handwriting samples acquired during the first session are used as enrollment data in order to determine the reference
BioHash bref as well as to generate the Interval Matrix IM. The samples of the second session are used for tuning of the Tolerance Factor TF and for feature selection in terms of feature mask vector calculation. Finally, the data collected within the third session are used for evaluation. An attempt of one user to be verified as another one is considered an imposter trial. Each test implies 390 genuine trials, where the reference data of a user is matched against its own verification data (39 users times 10 test samples), and 14,820 imposter trials, where the reference data of a user is matched against all other verification data except its own (38 user claims times 39 actual users times 10 test samples). Within the feature extraction process of the BioHash algorithm 131 features are calculated based on the handwritten samples.

4.2 Methodology

In our first test we compare the performance of the BioHash algorithm with and without a user specific feature mask vector. Therefore, the biometric error rates FRR/FAR and the EER are calculated for both settings. The false rejection rate (FRR) describes the ratio between the number of false rejections of authentic persons and the total number of tests. The FAR (false acceptance rate) is the ratio between the number of false acceptances of non-authentic persons and the entire number of authentication attempts. For a comparative analysis of the verification performance, the equal error rate (EER) is a common measurement in biometrics. EER denotes the point in the error characteristics where FRR and FAR yield an identical value. We also evaluate the reproducibility rate (RR) and collision rate (CR) for both settings; these rates relate the sums of identically reproduced hashes in genuine and imposter trials (see [9]). Because of the reciprocal effect of RR and CR, a tuning of the system to improve RR leads to a degradation of CR and vice versa. Therefore, the collision reproduction rate (CRR, [8]) is selected as a hash generation quality criterion. The CRR is defined in the following equation, where CR and RR are weighted equally:

CRR = 1/2 (CR + (1 − RR))    (1)
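For readers less familiar with these measures, the following generic sketch (not the authors' evaluation code) estimates FAR, FRR and the EER from genuine and imposter distance scores by sweeping the decision threshold over all observed scores.

import java.util.Arrays;

// Estimate the EER: for every candidate threshold, FRR is the fraction of
// genuine comparisons rejected and FAR the fraction of imposter comparisons
// accepted; the EER is reported where the two rates (nearly) coincide.
public final class ErrorRates {

    public static double equalErrorRate(double[] genuineDist, double[] imposterDist) {
        double[] thresholds = new double[genuineDist.length + imposterDist.length];
        System.arraycopy(genuineDist, 0, thresholds, 0, genuineDist.length);
        System.arraycopy(imposterDist, 0, thresholds, genuineDist.length, imposterDist.length);
        Arrays.sort(thresholds);

        double eer = 1.0;
        double smallestGap = Double.MAX_VALUE;
        for (double t : thresholds) {
            double frr = fractionAbove(genuineDist, t);         // authentic users rejected
            double far = 1.0 - fractionAbove(imposterDist, t);  // imposters accepted
            double gap = Math.abs(far - frr);
            if (gap < smallestGap) {
                smallestGap = gap;
                eer = (far + frr) / 2.0;
            }
        }
        return eer;
    }

    private static double fractionAbove(double[] scores, double threshold) {
        int count = 0;
        for (double s : scores) {
            if (s > threshold) count++;
        }
        return (double) count / scores.length;
    }
}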
The tolerance vector TV is set to (1, …, 1) since all features are considered equally. Thus, the tolerance factor (TF) is the main parameter for controlling CR and RR. In previous work [8] we already determined tolerance factor values of the same evaluation data for two scenarios, lowest EER (EER mode) and highest RR (CRR mode), in all semantics. According to these results of the previous test, based on all 131 features, the TF values are set as shown in Table 1.

Table 1. Tolerance factor (TF) values used during the evaluation

Semantic     TF in CRR mode   TF in EER mode
Public PIN   1.50             1.00
Secret PIN   1.75             1.00
Pseudonym    2.50             1.25
Symbol       3.50             1.50
Place        2.50             1.25
Feature mask vectors are generated for each user in all semantic classes separately, as described in section 3.1, using the evaluation data of the second session. During the MV generation, the i-th bit of MV is set to 1 only if all values at position i are equal; the similarity threshold ths is therefore set to 0. The minimal, average and maximal amounts of selected features are determined to show how many features are actually used during the verification or hash generation process. Note that the evaluation protocol leads to a realistic scenario since the reference data has already undergone an aging of at least 2 months compared to the evaluation data. In our first evaluation we do not consider the slightly increased computational effort which is caused by the MV calculation during the enrollment (MV creation) and verification. Compared to the feature extraction processing time it is negligible.

4.3 Experimental Results

Table 2 shows the EERs of all semantics with and without use of the user specific feature mask vector MV. The results point out that the EER decreases by approximately three percentage points in all semantic classes when the user specific feature mask vector MV is applied. The lowest EER is obtained by the semantic symbol (5.77%). The semantic public PIN yields the highest EER (12.86%), which might be caused by the same written content being used by all users.

Table 2. Equal error rates (EER) of all semantic classes with and without applied MV

         Public PIN   Secret PIN   Pseudonym   Symbol   Place
No MV    16.55 %      13.42 %      10.96 %     8.30 %   9.79 %
MV       12.86 %      10.86 %      6.58 %      5.77 %   7.09 %
Collision reproduction rates (CRR), reproduction rates (RR) and collision rates (CR) for all semantics (with and without MV) are shown in Table 3. First results indicate that for all semantics the CRR decreases when an MV is applied: the reproduction rate increases, but so does the collision rate. A maximum reproduction rate (RR) of 88.46% is obtained by the semantic symbol with a collision rate (CR) of 5.11%. An average RR increase of approx. 26% is observed over all semantics, whereas the largest increase is obtained by the semantic public PIN (from 48.72% up to 71.03%).

Table 3. Collision reproduction rates (CRR), reproduction rates (RR) and collision rates (CR) of all semantic classes with and without user specific feature mask vector MV

                       No MV                        MV
Semantic      CRR      RR       CR        CRR      RR       CR
Public PIN    27.81%   48.72%   4.33%     18.15%   71.03%   7.33%
Secret PIN    24.27%   55.64%   4.18%     16.76%   73.33%   6.84%
Pseudonym     18.57%   66.67%   3.79%     10.59%   83.85%   5.03%
Symbol        11.19%   82.31%   4.70%     8.33%    88.46%   5.11%
Place         19.15%   65.38%   3.68%     10.22%   84.87%   5.31%
Table 4 shows the minimal, average and maximal amount of selected features represented by the feature mask vector in each semantic class for both scenarios (verification and hash generation mode). The minimal amount (81) of features used during a verification process is obtained by semantic secret PIN within the EER mode. In CRR mode the number of used features is always higher than in EER mode. The average amount of selected features over all semantics in EER mode is 122 and in CRR mode 128.

Table 4. Minimal, average and maximal amount of selected features for each semantic in both scenarios (verification and hash generation mode)

         Public PIN    Secret PIN    Pseudonym     Symbol        Place
Mode     EER    CRR    EER    CRR    EER    CRR    EER    CRR    EER    CRR
Min.     86     96     100    81     103    116    103    116    103    121
Avg.     120    125    120    128    122    128    125    129    122    128
Max.     130    131    130    131    131    131    131    131    131    131
5 Conclusion and Future Work

In this work we introduce a simple feature selection method applied to a biometric hash algorithm for dynamic handwriting. A generated user specific feature mask vector MV is used to switch on or off specific features which are used during the verification or hash generation process. By analyzing the results, we come to a first conclusion that the application of the feature mask vector MV leads to improved recognition accuracy. In our tests, the equal error rates (EER) decrease noticeably, by approximately three percentage points, in all semantics. Furthermore, the reproducibility of generated biometric hashes increases considerably in all tests. The average increase of the reproduction rate (RR) is approx. 26%, whereas the highest RR was achieved by the semantic symbol (88.46%) and the highest rise of the RR (from 48.72% up to 71.03%) was reached by the semantic public PIN. These results show that a simple feature selection strategy is able to substantially increase the biometric hash generation as well as the user authentication performance. In future work we will verify our first results by using additional test subjects and study the effects of a non-binary MV. A dynamic adaptation of the MV is also considered in future work, where the MV is adapted after each successful verification attempt. Due to the reduction of relevant features to a specific user, caused by the feature mask vector MV, we will investigate the security implications of this side effect. Especially the advantages an attacker gains if he/she is in possession of an MV and reference data will be studied. The side effect also leads to reduced entropy and therefore to a potential reduction of a cryptographic key length, if the generated biometric hash is used as a basis for cryptographic key generation. Acknowledgments. This work is supported by the German Federal Ministry of Education and Research (BMBF), project “OptiBioHashEmbedded” under grant number 17N3109. The content of this document is under the sole responsibility of the authors. We would also like to thank Prof. Jana Dittmann of the Otto-von-Guericke University Magdeburg and the StepOver GmbH for supporting the project “OptiBioHashEmbedded”.
References 1. Jain, A.K., Nandakumar, K., Nagar, A.: Biometric Template Security. In EURASIP Journal on Advances in Signal Processing, Article ID 579416 (2008) 2. Vielhauer, C.: Biometric User Authentication for IT Security: From Fundamentals to Handwriting. Springer, New York (2006) 3. Vielhauer, C., Steinmetz, R., Mayerhöfer, A.: Biometric Hash based on Statistical Features of Online Signature. In: Proc. of the Intern. Conf. on Pattern Recognition (ICPR), Quebec City, Canada, vol. 1 (2002) 4. Hollingsworth, K.P., Bowyer, K.W., Flynn, P.J.: The best bits in an iris code. IEEE Trans. on Pattern Analysis and Machine Intelligence 31(6), 964–973 (2009) 5. Fratric, I., Ribaric, S.: Local binary LDA for face recognition. In: Vielhauer, C., Dittmann, J., Drygajlo, A., Juul, N.C., Fairhurst, M.C. (eds.) BioID 2011. LNCS, vol. 6583, pp. 144– 155. Springer, Heidelberg (2011) 6. Rathgeb, C., Uhl, A., Wild, P.: Combining Selective Best Bits of Iris-Codes. In: Vielhauer, C., Dittmann, J., Drygajlo, A., Juul, N.C., Fairhurst, M.C. (eds.) BioID 2011. LNCS, vol. 6583, pp. 127–137. Springer, Heidelberg (2011) 7. Kumar, A., Zhang, D.: Biometric recognition using feature selection and combination. In: Kanade, T., Jain, A., Ratha, N.K. (eds.) AVBPA 2005. LNCS, vol. 3546, pp. 813–822. Springer, Heidelberg (2005) 8. Makrushin, A., Scheidat, T., Vielhauer, C.: Handwriting biometrics: Feature selection based improvements in authentication and hash generation accuracy. In: Vielhauer, C., Dittmann, J., Drygajlo, A., Juul, N.C., Fairhurst, M.C. (eds.) BioID 2011. LNCS, vol. 6583, pp. 37–48. Springer, Heidelberg (2011) 9. Scheidat, T., Vielhauer, C., Dittmann, J.: Advanced studies on reproducibility of biometric hashes. In: Schouten, B., Juul, N.C., Drygajlo, A., Tistarelli, M. (eds.) BioID 2008. LNCS, vol. 5372, pp. 150–159. Springer, Heidelberg (2008)
Dynamic Software Birthmark for Java Based on Heap Memory Analysis Patrick P.F. Chan, Lucas C.K. Hui, and S.M. Yiu Department of Computer Science The University of Hong Kong, Pokfulam, Hong Kong {pfchan,hui,smyiu}@cs.hku.hk
Abstract. Code theft has been a serious threat to the survival of the software industry. A dynamic software birthmark can help detect code theft by comparing the intrinsic characteristics of two programs extracted during their execution. We propose a dynamic birthmark system for Java based on the object reference graph. To the best of our knowledge, it is the first dynamic software birthmark making use of the heap memory. We evaluated our birthmark using 25 large-scale programs, most of them tens of megabytes in size. Our results show that it is effective in detecting partial code theft. No false positives or false negatives were found. More importantly, the birthmark remained intact even after the testing programs were obfuscated by the state-of-the-art Allatori obfuscator. These promising results reflect that our birthmark is ready for practical use. Keywords: software birthmark, software protection, code theft detection, Java.
1 Introduction
Over the years, code theft has been an issue that keeps threatening the software industry. From time to time, cases about software license violation are brought to court. For example, a former Goldman Sachs programmer was found guilty of code theft recently [19]. The stolen software was used for making fast trades to exploit tiny discrepancies in price. Such trading was the core source of revenue of that firm. Various software protection techniques have been proposed in the literature. Watermarking is one of the well-known and earliest approaches to detect software piracy, in which a watermark is incorporated into a program by the owner to prove ownership of it [9,7]. Although it cannot prevent software theft, it provides proof when legal action against the thief is needed. However, it is believed that “a sufficiently determined attacker will eventually be able to defeat any watermark” [8]. Watermarking also requires the owner to take extra action (embed the watermark into the software) prior to releasing the software. Thus, some Java developers do not use watermarking, but try to obfuscate their source code before publishing. Code obfuscation is a semantics-preserving
transformation of the source code that makes it more difficult to understand and reverse engineer [10]. However, code obfuscation only prevents others from learning the logic of the source code but does not hinder direct copying of it. On the other hand, the thief may further obfuscate the source code, rendering code theft detection difficult. Thus, code obfuscation may not be a good means to prevent software copying. A relatively new but less popular software theft detection technique is the software birthmark. A software birthmark does not require any code to be added to the software. It depends solely on the intrinsic characteristics of two programs to determine the similarity between them [23,17,20,15,22,14,10,18]. It was shown in [17] that a birthmark could be used to identify software theft even when the embedded watermark had been destroyed by code transformation. According to Wang et al. [23], a birthmark is a unique characteristic a program possesses that can be used to identify the program. There are two categories of software birthmarks, static birthmarks and dynamic birthmarks. Static birthmarks are extracted from the syntactic structure of programs [22,18,13]. Dynamic birthmarks are extracted from the dynamic behavior of programs at run-time [23,17,20,15,14]. The usual method to destroy the birthmark and prevent discovery of code theft is to obfuscate the program. Since semantics-preserving transformations like code obfuscation only modify the syntactic structure of a program but not its dynamic behavior, dynamic birthmarks are more robust against them. Existing dynamic birthmarks make use of the complete control flow trace or API call trace obtained during the execution of a program [23,17,20,15,14]. Birthmarks based on the control flow trace are still vulnerable to obfuscation attacks such as loop transformation. The ones based on the API call trace may suffer from not having enough API calls to make the birthmark unique. In this paper, we propose a novel dynamic birthmark, which we call the object reference graph (ORG) birthmark, based on the unique characteristics of a program extracted from the heap memory at run-time. The heap memory is a location in memory in which dynamically created objects are stored. The core idea of the proposed ORG birthmark is that the referencing structure represented by the object reference graph reflects the unique behavior of a program. An object reference graph is a directed graph. The nodes represent objects and the edges represent the referencing between the objects. They are independent of the syntactic structure of the program code and hence are not changed by semantics-preserving code transformations. Although software developed for the same purpose is likely to have similar dynamic behavior, it may not have the same object referencing structure. For example, a programmer may decide to put the file I/O instructions in a separate class for easier maintenance while others may not. We implemented a library theft detection system exploiting the ORG birthmark to justify this idea. The goal of the system is to detect if a library L is used by a software S. The first phase of the system dumps out the heap during the execution of the software S. The second phase of the system builds ORGs out of
the heap dumps. Finally, the system searches the ORGs to see if the ORG birthmark for library L, ORGBL, can be found by exploiting a subgraph isomorphism algorithm. Note that to extract the birthmark for library L, we need a program that is known to be using library L. From it, we build the object reference graph with respect to L by focusing only on those objects defined in that library. As classes from the same library often have the same prefix in their names, we can identify them by a prefix match of their names. We evaluated our ORG birthmark system using 25 large-scale Java programs, most of them tens of megabytes in size. During the evaluation, our birthmark system successfully detected 2 libraries in the testing programs. This shows that our system is effective in identifying library theft and is able to distinguish programs developed for the same purpose. To test the robustness of the system against semantics-preserving code transformation, we obfuscated the programs with the state-of-the-art Allatori obfuscator. After that, the system could still successfully detect the 2 libraries in the obfuscated programs. This shows that our system is robust against semantics-preserving code transformation. The rest of the paper is structured as follows. In section 2, we explore the existing work in the literature. The definitions are given in section 3. In section 4, we formulate the threat model under which our system is designed. We provide the design details and evaluation results in sections 5 and 6. Further discussion is covered in section 7 and section 8 concludes.
2 Related Work
Software birthmarks differ from software watermarking in two ways. First, a birthmark is solely a characteristic of a program, not an identifier purposely embedded into the program. Therefore, even if the author of a program was not aware of software piracy when he released the program, a software birthmark can still be extracted from the program to help identify the copying of his program. Second, a birthmark cannot prove the authorship of a program. It can only suggest that a program is a copy of another program. In practice, it is used to collect initial evidence before taking further investigation. Software birthmarks are further divided into static birthmarks and dynamic birthmarks. Static birthmarks (e.g. [22,18,13]) are extracted from the syntactic structure of a program and can be destroyed by semantics-preserving transformations. The trend of software birthmark research is moving towards dynamic birthmarks. The rest of this section will discuss a few pieces of recent work on dynamic birthmarks. The first dynamic birthmark was proposed by G. Myles and C. Collberg [17]. They exploited the complete control flow trace of a program execution to identify the program. They showed that their technique was more resilient to attacks by semantics-preserving transformations than published static techniques. However, their work is still susceptible to various loop transformations. Moreover, the whole program path traces are large and make the technique not scalable.
Tamada et al. proposed two kinds of dynamic software birthmarks based on API calls [15]. Their approach was based on the insights that it was difficult for adversaries to replace the API calls with other equivalent ones and that the compilers did not optimize the APIs themselves. Through analyzing the execution order and the frequency distribution of the API calls, they extracted dynamic birthmarks that could distinguish individually developed same-purpose applications and were resilient to different compiler options. Schuler et al. proposed a dynamic birthmark for Java that relies on how a program uses objects provided by the Java Standard API [20]. They observed short sequences of method calls received by individual objects from the Java Platform Standard API. By chopping up the call trace into a set of short call sequences received by API objects, it was easier to compare the more compact call sequences. Evaluation performed by the authors showed that their dynamic birthmark solution could accurately identify programs that were identical to each other and differentiate distinct programs. Their API birthmark is more scalable and more resilient than the WPP birthmark proposed by Myles and Collberg [17]. Wang et al. proposed a system call dependence graph (SCDG) based software birthmark called the SCDG birthmark [23]. An SCDG is a graph representation of the dynamic behavior of a program, where system calls are represented by vertices, and data and control dependences between system calls are represented by edges. The SCDG birthmark is a subgraph of the SCDG that can identify the whole program. They implemented a prototype of an SCDG birthmark based software theft detection system. Evaluation of their system showed that it was robust against attacks based on different compiler options, different compilers and different obfuscation techniques.
3 Problem Definitions
This section first provides the definition of dynamic birthmarks to ease further discussion. We borrow part of the definition from Tamada et al. [15]. Theirs is the first formal definition appearing in the literature and has been restated in subsequent papers related to dynamic software birthmarks. After that, the formal definition of an ORG birthmark is introduced.
3.1 Software Birthmarks
A software birthmark is a group of unique characteristics extracted from a program that can uniquely identify the program. There are two categories of software birthmarks: static birthmarks and dynamic birthmarks. We focus on dynamic birthmarks in this research. Dynamic Birthmarks. A dynamic birthmark is one that is extracted when the program is executing. It relies on the run-time behavior of the program. Therefore, semantics-preserving transformations of the code like obfuscation cannot defeat dynamic birthmarks. Dynamic birthmarks are more robust compared with static birthmarks.
Definition 1. (Dynamic Birthmark) Let p, q be two programs or program components. Let I be an input to p and q. Let f(p, I) be a set of characteristics extracted from p when executing p with input I. f(p, I) is a dynamic birthmark of p only if both of the following criteria are satisfied:

1. f(p, I) is obtained only from p itself when executing p with input I
2. program q is a copy of p ⇒ f(p, I) = f(q, I)

This definition is basically the same as that of static birthmarks except that the birthmark is extracted with respect to a particular input I.
3.2 ORG Birthmark
Before we give the definition of the ORG birthmark, we need to define what an object reference graph (ORG) is. An ORG is a directed graph representation of the structure formed between objects through object referencing. A node represents an object while an edge represents a field of one object referring to another. Objects instantiating the same class are grouped together and are denoted by one node. All the referencing made by this group of objects is represented by the out-going edges from that node. Multiple references to the same class of objects by this group of objects are represented by a single edge. We ignore any self-referencing as that can be exploited by an attacker to defeat the birthmark easily. We now give the formal definition of an ORG.

Definition 2. (ORG: Object Reference Graph) The object reference graph of a program run is a 2-tuple graph ORG = (N, E), where

– N is a set of nodes, and a node n ∈ N corresponds to a class with a non-zero number of instantiations
– E ⊆ N × N is the set of references between objects, and each edge n1 → n2 ∈ E corresponds to one or more references from any field of any object instantiating the class represented by node n1 to any other object instantiating the class represented by n2. There is no duplicated edge between two nodes.

Figure 1 shows an example ORG for 4 objects instantiating 3 classes. In Figure 1 (a), there are 4 objects, namely Tom, Jack, Peter, and John, with Tom and Jack instantiating the same class Cat. Figure 1 (b) shows an ORG with three nodes corresponding to the 3 classes in Figure 1 (a). Note that in a real ORG, the class name is not denoted by the node name; the node names in this figure are for illustration purposes only. Although Tom and Jack reference each other through the field Brother, this is not captured in the ORG as they belong to the same class Cat. Both of them reference the object Peter, which belongs to the class Dog, via the field Friend. This is represented by one edge in the ORG from node Cat to node Dog. The reference from Peter to John via the field Master and the reference from John to Peter via the field Pet are represented by the edge from node Dog to node Human and the edge from node Human to node Dog in the ORG, respectively.
Fig. 1. An Example ORG
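For concreteness, the object constellation described above can be written out in Java as follows; the class and field names are taken from the figure description, and the toy code itself is ours, not the paper's.

// Tom and Jack reference each other via brother (same class Cat, hence no ORG
// edge) and both reference Peter via friend; Peter and John reference each
// other via master and pet. The resulting ORG edges are Cat -> Dog,
// Dog -> Human and Human -> Dog.
class Cat   { Cat brother; Dog friend; }
class Dog   { Human master; }
class Human { Dog pet; }

public final class OrgExample {
    public static void main(String[] args) {
        Cat tom = new Cat(), jack = new Cat();
        Dog peter = new Dog();
        Human john = new Human();

        tom.brother = jack;  jack.brother = tom;   // intra-class: ignored in the ORG
        tom.friend  = peter; jack.friend  = peter; // collapses to a single edge Cat -> Dog
        peter.master = john;                       // edge Dog -> Human
        john.pet     = peter;                      // edge Human -> Dog
    }
}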
Next, we state the definition of γ-isomorphism [12], which serves the purpose of comparing ORG birthmarks.

Definition 3. (Graph Isomorphism) A graph isomorphism from a graph G = (N, E) to a graph G' = (N', E') is a bijective function f : N → N' such that (u, v) ∈ E ⇔ (f(u), f(v)) ∈ E'.

Definition 4. (Subgraph Isomorphism) A subgraph isomorphism from a graph G = (N, E) to a graph G' = (N', E') is a bijective function f : N → N' such that f is a graph isomorphism from G to a subgraph S ⊂ G'.

Definition 5. (γ-Isomorphism) A graph G is γ-isomorphic to G' if there exists a subgraph S ⊆ G such that S is subgraph isomorphic to G', and |S| ≥ γ|G|, γ ∈ (0, 1].

Based on the γ-isomorphism definition, the ORG birthmark can be defined.

Definition 6. (ORGB: Object Reference Graph Birthmark) Let p, q be two programs or program components. Let I be an input to p and q, and ORGp, ORGq be the object reference graphs of the program runs with input I for p, q respectively. A subgraph of the graph ORGp is an ORG birthmark of p, ORGBp, if both of the following criteria are satisfied:
– program or program component q is in a copy relation with p ⇒ ORGBp is subgraph isomorphic to ORGq.
– program or program component q is not in a copy relation with p ⇒ ORGBp is not subgraph isomorphic to ORGq.

Although our experiment showed that ORGB is robust to state-of-the-art obfuscation techniques, we relax subgraph isomorphism to γ-isomorphism in our detection for robustness to unobserved and unexpected attacks. Hence, a program p is regarded as a copy of another program q if the ORGB of p is γ-isomorphic to ORGB of q. We set γ = 0.9 in experiments since we believe that overhauling 10% of an ORGB is almost equivalent to changing the overall architecture of a program component.
4 Threat Model
In the attack scenario, Bob is the owner of a program P . The core part of it is a library L which is also developed by him. Alice wants to write another program Q which has similar functionalities as P . Obtaining a copy of program P , Alice reverse engineers it and gets the source code. She extracts the library L from program P and uses it in her own program Q. In order to escape from code theft detection, she obfuscates the source code before compilation. Later, Bob discovers that the program Q developed by Alice functions similarly to his own program P . He wants to find out if program Q uses the library L developed by him. Since the source code of program Q is obfuscated and illegible, he cannot justify it by reverse engineering program Q and looking at the source code. He then gets help from our dynamic birthmark system. He executes program P and gets the birthmark with respect to library L. After that, he executes program Q and gets the birthmark of the whole program Q. Obtaining the birthmark with respect to library L, ORGBL , and the birthmark of the whole program Q, ORGQ , he then finds out whether ORGBL is γ-isomorphic to ORGQ or not to identify code theft of library L.
5 System Design
In this section, we give details of the design of our dynamic birthmark system. Figure 2 shows an overview of our system. The plaintiff program is the original program owned by the program owner. The defendant program is a program developed by someone else that is suspected of containing partial code taken from the plaintiff program. The processes that the plaintiff and the defendant program undergo are the same except that there is an extra process, the classes refiner, for the plaintiff program. In this section, these processes will be introduced one by one.
Fig. 2. System Overview
5.1 Heap Dumping
The heap is dumped using jmap [5] from the J2SE SDK at an interval of 2 seconds, and the dumps are later merged. This is to avoid information loss due to garbage collection. In our experiment, we kept dumping for 1 minute.
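For illustration, the periodic dumping can be scripted around jmap as in the minimal Python sketch below; the process ID handling, file naming, and interval values are assumptions and not part of the paper's implementation.

```python
import subprocess, time

def dump_heap_periodically(pid, interval=2, duration=60, prefix="heap"):
    """Dump the heap of a running JVM every `interval` seconds for `duration` seconds.

    Each dump is written to a separate binary file so that objects reclaimed by the
    garbage collector between snapshots still appear in at least one dump; the dumps
    are merged later.
    """
    dumps = []
    for i in range(duration // interval):
        out = f"{prefix}_{i}.hprof"
        # jmap ships with the JDK; '-dump:format=b,file=...' writes a binary heap dump.
        subprocess.run(["jmap", f"-dump:format=b,file={out}", str(pid)], check=True)
        dumps.append(out)
        time.sleep(interval)
    return dumps
```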
5.2 Classes Extractor and Filter
We make use of the jhat library from the JDK [4] to parse the dump files generated by jmap [5]. The comprehensive list of classes appearing in the dumps is first extracted. However, not all classes represent the unique behavior of the program. Hence, we perform further filtering on this list of classes. The first group of classes to be pruned out are classes provided by Java or Sun, since they do not represent the unique characteristics of the program. Their names start with java, javax, and sun. Thus, all classes with these prefixes in their names are removed from the class list. Attackers may try to escape detection by changing class names into names with such prefixes. However, we can avoid that by further checking the addresses or hash values of the classes that are actually referenced. Next, we need to filter out classes that have no instantiation at all. This is because such classes would become standalone nodes with no outgoing or incoming edges in the resulting ORG. They do not represent any unique characteristic of the program.
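A minimal sketch of this filtering step is given below; the class list and the per-class instance counts are assumed to have been parsed from the dumps already (e.g. via jhat), so the data structures here are purely illustrative.

```python
JDK_PREFIXES = ("java.", "javax.", "sun.")

def filter_classes(classes, instance_counts):
    """Prune classes that do not characterize the program.

    `classes` is the list of class names found in the heap dumps;
    `instance_counts` maps a class name to its number of live instances.
    JDK-provided classes and classes without any instantiation are removed.
    """
    kept = []
    for name in classes:
        if name.startswith(JDK_PREFIXES):       # provided by Java/Sun
            continue
        if instance_counts.get(name, 0) == 0:   # would become an isolated ORG node
            continue
        kept.append(name)
    return kept
```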
5.3 Classes Refiner
The next two refining steps are performed only when extracting a birthmark for a library, that is, by Bob when extracting the birthmark for library L (refer to the threat model discussed in Section 4). In order to extract the birthmark for a specific library only, we have to filter out classes which do not
belong to that library. To achieve this, we have to know the package name of the library, which must be available to Bob as he is the developer of the library. The names of the classes in the library all start with the package name. Therefore, classes whose names do not start with that prefix are filtered out from the class list, as they do not belong to the library. The second refining step is to filter out classes that are usage dependent. A library may create different objects for different use cases. We need to avoid such discrepancies by observing different applications that are known to use that library. By comparing the heap dumps of these applications, we can learn which classes commonly appear in them with the same object reference structure. After the above four filtering and refining steps, a list of classes that can represent the unique behavior of the application or library is obtained. We can then proceed to build the ORG based on this list.
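The two refining steps can be sketched as a simple set computation; the inputs below (a filtered class list, the library's package prefix, and class sets observed in other applications using the library) are assumptions used only to illustrate the idea.

```python
def refine_for_library(classes, package_prefix, classes_in_other_apps):
    """Keep only classes that belong to the library and are not usage dependent.

    `classes` is the filtered class list from the plaintiff program's dumps,
    `package_prefix` is the library's package name (known to its owner), and
    `classes_in_other_apps` is a list of class sets observed in other
    applications known to use the same library.
    """
    # Step 1: keep only classes of the library package.
    lib_classes = {c for c in classes if c.startswith(package_prefix)}
    # Step 2: drop usage-dependent classes by intersecting with the class sets
    # observed in independent applications that use the library.
    for observed in classes_in_other_apps:
        lib_classes &= observed
    return lib_classes
```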
5.4 Building the ORG/ORGB
For each of the dumps, we build the ORG as follows. Nodes are first created to represent the classes on the class list. After that, for each class on the class list, we traverse all the objects in the heap that instantiate that class. For each such object, we check the objects referenced by it one by one. For referenced objects which are also on the class list, we add an edge on the ORG between the two nodes corresponding to the two classes to which the two objects (the referenced object and the referrer object) belong, if no such edge exists yet. After this process, an ORG is built with nodes representing classes on the class list and edges representing references between objects instantiating the classes represented by the nodes. Note that there is only one edge even if there is more than one reference between objects from the same pair of classes. Also, self-references and references between objects of the same class are ignored and not captured in the ORG. Finally, the ORGs from the dumps are merged together to form a graph that embraces all the nodes appearing in the ORGs. The process of building the ORGB is the same.
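The construction can be sketched as follows; the heap-dump access API used here (objects(), references(), class_name) is hypothetical and only stands in for whatever the dump parser provides, and modelling the ORG as a directed graph from referrer class to referenced class is an assumption made for the sketch.

```python
import networkx as nx

def build_org(class_list, dumps):
    """Build the object reference graph (ORG) over the classes in `class_list`.

    Each dump is assumed to expose, for every heap object, its class name and
    the objects it references.  Edges connect the classes of a referrer and a
    referenced object; duplicate edges and same-class references are collapsed
    or ignored, and the per-dump graphs are merged into one ORG.
    """
    keep = set(class_list)
    org = nx.DiGraph()
    org.add_nodes_from(keep)
    for dump in dumps:                       # merging the per-dump ORGs
        for obj in dump.objects():           # assumed dump-parser API
            if obj.class_name not in keep:
                continue
            for ref in obj.references():     # objects referenced by obj
                if ref.class_name in keep and ref.class_name != obj.class_name:
                    org.add_edge(obj.class_name, ref.class_name)
    return org
```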
5.5 Birthmark Comparison
We make use of a library implementing the VF graph isomorphism algorithm [16,11], called VFLib [6]. To test whether a library L is used in a program P, we extract the ORG of the whole program P, ORGP, and the birthmark of library L, ORGBL, as described earlier in this section. Note that the same input must be used, particularly when the library L is input dependent, because the structure of the heap may then depend on the input. We then check whether ORGBL is γ-isomorphic to ORGP. If it is, we conclude that library L is used in program P; otherwise, we conclude that library L is not used in program P.
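For illustration only, the sketch below uses the VF2 matcher of the Python networkx package as a stand-in for VFLib and approximates the γ-isomorphism test with a greedy node-removal heuristic instead of an exhaustive subset search; it is a sketch under these assumptions, not the implementation used in the experiments.

```python
from networkx.algorithms import isomorphism

def gamma_isomorphic(birthmark, org, gamma=0.9):
    """Heuristic test of whether `birthmark` is gamma-isomorphic to `org`.

    VF2 subgraph isomorphism is tested repeatedly; on failure the lowest-degree
    birthmark node is dropped, until a match is found or fewer than
    gamma * |birthmark| nodes remain.  Exhaustively trying all node subsets
    would be exponential, so this is only a greedy approximation.
    """
    g = birthmark.copy()
    min_nodes = gamma * birthmark.number_of_nodes()
    while g.number_of_nodes() >= min_nodes:
        matcher = isomorphism.DiGraphMatcher(org, g)
        if matcher.subgraph_is_isomorphic():   # g appears as a subgraph of org
            return True
        # drop the node contributing the least structure and retry
        victim = min(g.nodes, key=lambda n: g.degree(n))
        g.remove_node(victim)
    return False
```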
6 Evaluation
In this section, we report the evaluation results on the effectiveness, the ability to distinguish same-purpose programs, and the robustness of the prototype of our system.
6.1 Experiment Setup
We evaluated our birthmark system using 25 large-scale programs, most of them tens of megabytes in size. The 25 programs were divided into 4 groups. The first group consisted of 6 programs, all of which use the JAudiotagger library. JAudiotagger is a third-party Java library for reading the ID3 tags in MP3 files [2]. The second group consisted of 5 programs, all of which use the JCommon library. JCommon is a third-party Java library containing miscellaneous classes that are commonly used in many Java applications [3]. The third group consisted of 11 programs using neither the JAudiotagger library nor the JCommon library. The fourth group consisted of 3 programs which also read MP3 tags but without using the JAudiotagger library.
6.2 Effectiveness
The birthmarks for the JAudioTagger and JCommon libraries were first extracted. To extract the birthmark of a library, two programs were used to extract the common birthmark, as mentioned in Section 5.3. In our experiment, Jaikoz and Rapid Evolution 3 were used to extract the birthmark for JAudioTagger, ORGBJAT, while iSNS and JStock were used to extract the birthmark for JCommon, ORGBJC. For the 6 programs using the JAudiotagger library, a common MP3 file was used as the input file. For the programs using the JCommon library, it was impossible to control the input to the library without looking at the source code to learn how the library was used by them. However, the final filtering step mentioned in Section 5.3 helped filter out the classes that were usage dependent. During the experiment, the applications were launched and a few actions were performed on them before the heaps were dumped. We tested for the presence of ORGBJAT in the ORGs of Simpletag, Filerename, Jajuk, and MusicBox. All tests gave positive results. ORGBJAT was not found in the ORGs of any other programs in our set of testing programs. For the JCommon library, ORGBJC was found in the ORGs of Paralog, SportsTracker, and Zeptoscope. Again, it was not found in the ORGs of any other programs in our set of testing programs. This part of the evaluation shows that the birthmark is effective in detecting library theft. During the experiment, no false positives or false negatives were found.
6.3 Distinguishing Same Purpose Programs
In this part, we try to find out if programs developed for the same purpose can be distinguished by our system. During the experiment, the same MP3 file
used for extracting the birthmark of the JAudiotagger library was used as input to the fourth group of testing programs. We tested whether the library birthmark of JAudiotagger, ORGBJAT, could be found in their ORGs. Our experiment results showed that ORGBJAT was not found in any of their 3 ORGs. We conclude that our system can distinguish same-purpose programs.
6.4 Robustness
In this final part of the evaluation, the robustness of the system against semantics-preserving obfuscation is evaluated. Obfuscation means transforming a program P into a program P′ such that it functions the same as P but its source code becomes difficult to understand, mainly to deter reverse engineering [10]. We obfuscated all 11 programs in the first two groups using the state-of-the-art Allatori Java obfuscator [1]. We then tested for the presence of ORGBJAT and ORGBJC in the ORGs of the obfuscated programs. Our system could still detect the birthmark of the corresponding library in all of them. This shows that our birthmark system is robust against state-of-the-art obfuscation.
7 Discussion
In this section, we first discuss the situations in which our birthmark system is not applicable. After that, we discuss possible attacks to deface the birthmark.
7.1 Limitations
Since our birthmark extracts information from the heap to identify the program, the heap memory plays a major role in providing enough unique characteristics of the program. There are two main requirements for our birthmark to be effective. First, there must be enough heap objects. For large-scale applications, in which intellectual property rights are a critical issue, there are usually many classes that are strongly connected by referencing. In practice, this requirement is satisfied by most libraries and applications. Second, the input to the library or application must be controllable. In some cases, it may be difficult to achieve that. For instance, it is hard to feed the same input to a library that reports the current market values of stocks, as it is time-critical. In that case, we can only take into account objects that are not input dependent and filter out other objects on the heap when extracting the birthmark.
7.2 Attacks
The most feasible attacks are class splitting and class coalescing, as suggested by Sosonkin et al. in [21]. Figure 3 shows how these two techniques can affect our birthmark. It illustrates how the birthmark of a program is altered if class splitting or class coalescing is applied to the class represented by the black node in the middle.
Fig. 3. Class splitting and class coalescing
For class splitting, Sosonkin et al. stated in their paper that they believed that, in practice, splitting a class into two classes not related by inheritance or aggregation is possible only in situations where the original design is flawed and there should have been several different classes. In other words, all references between the original class and the other classes now go through the inheriting class. Therefore, the change to the heap structure is not significant and the original birthmark can still be found. For class coalescing, the structure is drastically changed, and the original birthmark can no longer be found in the new heap structure. However, the evaluation done by Sosonkin et al. showed that, unlike class splitting, class coalescing introduces a tremendous amount of overhead, proportional to the number of classes coalesced. Therefore, intensive class coalescing is not practical. For small amounts of class coalescing, we can loosen our birthmark detection scheme and allow partial matching of the birthmark to be sufficient for concluding a copy relation.
8 Conclusion
We have described the design details, implementation, and evaluation of our novel dynamic birthmark system. We implemented and evaluated the birthmark system using 25 testing programs. The evaluation showed that it is reliable and robust against semantics-preserving obfuscation. This research provides a novel dynamic birthmark and supplements the existing dynamic birthmarks. Future work includes combining the heap approach with the system call approach and looking into low-level object-oriented languages like C++.
Acknowledgements. The work described in this paper was partially supported by the General Research Fund from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project No. RGC GRF HKU 713009E), the NSFC/RGC Joint Research Scheme (Project No. N HKU 722/09), and HKU Seed Fundings for Basic Research 200811159155 and 200911159149.
References
1. Allatori, http://www.allatori.com/
2. Jaudiotagger, http://www.jthink.net/jaudiotagger/
3. Jcommon, http://www.jfree.org/jcommon/
4. jhat, http://download.oracle.com/javase/6/docs/technotes/tools/share/jhat.html
5. jmap, http://download.oracle.com/javase/1.5.0/docs/tooldocs/share/jmap.html
6. Vflib, http://www-masu.ist.osaka-u.ac.jp/~kakugawa/VFlib/
7. Monden, A., Iida, H., Matsumoto, K.-i., Inoue, K., Torii, K.: Watermarking java programs. In: Proceedings of International Symposium on Future Software Technology (1999)
8. Collberg, C., Carter, E., Debray, S., Huntwork, A., Kececioglu, J., Linn, C., Stepp, M.: Dynamic path-based software watermarking. In: Proceedings of the ACM SIGPLAN 2004 Conference on Programming Language Design and Implementation, PLDI 2004, pp. 107–118. ACM, New York (2004)
9. Collberg, C., Thomborson, C.: Software watermarking: Models and dynamic embeddings. In: Proceedings of Symposium on Principles of Programming Languages, POPL 1999, pp. 311–324 (1999)
10. Collberg, C., Thomborson, C., Low, D.: A taxonomy of obfuscating transformations. Tech. Rep. 148 (July 1997), http://www.cs.auckland.ac.nz/~collberg/Research/Publications/CollbergThomborsonLow97a/index.html
11. Cordella, L.P., Foggia, P., Sansone, C., Vento, M.: Performance evaluation of the vf graph matching algorithm. In: Proceedings of the 10th International Conference on Image Analysis and Processing, ICIAP 1999, pp. 1172–1177. IEEE Computer Society, Washington, DC (1999)
12. Eppstein, D.: Subgraph isomorphism in planar graphs and related problems. In: Proceedings of the Sixth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 1995, pp. 632–640. Society for Industrial and Applied Mathematics, Philadelphia (1995)
13. Tamada, H., Okamoto, K., Nakamura, M., Monden, A., Matsumoto, K.-i.: Detecting the theft of programs using birthmarks. Tech. rep., Graduate School of Information Science, Nara Institute of Science and Technology (2003)
14. Tamada, H., Okamoto, K., Nakamura, M., Monden, A.: Dynamic software birthmarks to detect the theft of windows applications. In: Proc. International Symposium on Future Software Technology (2004)
15. Tamada, H., Okamoto, K., Nakamura, M., Monden, A., Matsumoto, K.-i.: Design and evaluation of dynamic software birthmarks based on api calls. Tech. rep., Nara Institute of Science and Technology (2007)
16. Cordella, L.P., Foggia, P., Sansone, C., Vento, M.: Subgraph transformations for the inexact matching of attributed relational graphs. Computing (1998)
17. Myles, G., Collberg, C.S.: Detecting Software Theft via Whole Program Path Birthmarks. In: Zhang, K., Zheng, Y. (eds.) ISC 2004. LNCS, vol. 3225, pp. 404–415. Springer, Heidelberg (2004)
18. Myles, G., Collberg, C.: K-gram based software birthmarks. In: Proceedings of the 2005 ACM Symposium on Applied Computing, SAC 2005, pp. 314–318. ACM, New York (2005)
19. NYTimes: Former goldman programmer found guilty of code theft (December 2010), http://dealbook.nytimes.com/2010/12/10/ex-goldman-programmer-is-convicted/
20. Schuler, D., Dallmeier, V., Lindig, C.: A dynamic birthmark for java. In: Proceedings of the Twenty-Second IEEE/ACM International Conference on Automated Software Engineering, ASE 2007, pp. 274–283. ACM, New York (2007)
21. Sosonkin, M., Naumovich, G., Memon, N.: Obfuscation of design intent in object-oriented applications. In: Proceedings of the 3rd ACM Workshop on Digital Rights Management, DRM 2003, pp. 142–153. ACM, New York (2003)
22. Tamada, H., Nakamura, M., Monden, A.: Design and evaluation of birthmarks for detecting theft of java programs. In: Proc. IASTED International Conference on Software Engineering, pp. 569–575 (2004)
23. Wang, X., Jhi, Y.-C., Zhu, S., Liu, P.: Behavior based software theft detection. In: Proceedings of the 16th ACM Conference on Computer and Communications Security, CCS 2009, pp. 280–290. ACM, New York (2009)
A Secure Perceptual Hash Algorithm for Image Content Authentication Li Weng and Bart Preneel Katholieke Universiteit Leuven, ESAT/COSIC-IBBT {li.weng,bart.preneel}@esat.kuleuven.be
Abstract. Perceptual hashing is a promising solution to image content authentication. However, conventional image hash algorithms only offer a limited authentication level for the protection of overall content. In this work, we propose an image hash algorithm with block level content protection. It extracts features from DFT coefficients of image blocks. Experiments show that the hash has strong robustness against JPEG compression, scaling, additive white Gaussian noise, and Gaussian smoothing. The hash value is compact, and highly dependent on a key. It has very efficient trade-offs between the false positive rate and the true positive rate.
1 Introduction
In the Internet era, images are massively produced and distributed in digital form. Although digital images are easy to store and process, they are also susceptible to malicious modification. Due to widely available image editing software, even non-professionals can perform content modification. Consequently, people begin to suspect what they see from digital images. Sometimes, public incidents happen, due to fake images. Therefore, the need for protecting content authenticity is emerging. Among various techniques, perceptual hashing is a promising solution. Hashing means to compute a digest value from data. This digest value, typically a short binary string, is called a hash value. Perceptual hash algorithms are a particular kind of hash algorithms for multimedia data. They have the special property that the hash value is dependent on the multimedia content, and it remains approximately the same if the content is not significantly modified. Since a
This work was supported in part by the Concerted Research Action (GOA) AMBioRICS 2005/11 of the Flemish Government and by the IAP Programme P6/26 BCRYPT of the Belgian State (Belgian Science Policy). The first author was supported by the IBBT/AQUA project. IBBT (Interdisciplinary Institute for BroadBand Technology) is a research institute founded in 2004 by the Flemish Government, and the involved companies and institutions (Philips, IPGlobalnet, Vitalsys, Landsbond onafhankelijke ziekenfondsen, UZ-Gent). Additional support was provided by the FWO (Fonds Wetenschappelijk Onderzoek) within the project G.0206.08 Perceptual Hashing and Semi-fragile Watermarking.
perceptual hash value is a compact representation of the original content, it can be used for robust content authentication. Compared with conventional cryptographic hash algorithms [1], perceptual hash algorithms have the advantage that they can tolerate the difference in quality and format – the binary representation no longer matters; the same content always maps to the same hash value. This is particularly useful for the multimedia domain. In this work, we focus on image content authentication by perceptual hash algorithms. In a typical application scenario, the authentic hash value is available; anyone who suspects the image can compute the hash value and compare it with the authentic one (Fig. 1b). For example, the authentic hash value can be published online, or electronically signed by digital signature techniques [1]. Although this application is known, there are some unsolved issues. In particular, there is the minor modification problem: when malicious modification is perceptually insignificant, the hash algorithm is unable to distinguish it from legitimate distortion. Most image hash algorithms compute the hash value from an image’s global features. Since global features are not sensitive to local modification, these algorithms are generally vulnerable to the minor modification problem, thus are not suitable for content authentication applications with high security demand. In this work, a potential solution is provided. We propose an image hash algorithm with the ability of authenticating image blocks. The rest of the work is organized as follows: Section 2 introduces image hashing and its limitation; Section 3 describes the proposed image hash algorithm; Section 4 shows some experiment results; Section 5 concludes the work.
Fig. 1. Diagrams of perceptual hash generation (a) and comparison (b): in (a), an image and a key pass through feature extraction, feature reduction, and randomization to produce the hash; in (b), two hash values undergo a similarity comparison that yields the decision
2 Perceptual Hashing and Its Limitation
The basic components of a perceptual hash algorithm are feature extraction, feature reduction, and randomization (Fig. 1a). During feature extraction, robust features are extracted from the input signal. Typically, these features are
insensitive to moderate distortion, such as compression, noise addition, etc. Feature reduction, similar to a quantization procedure, includes means to efficiently represent extracted features. Randomization is a critical component for security and other performance aspects. It includes means to achieve the key-dependent property (explained later). Many recent designs begin to support this property. Besides the above major components, there are also pre-processing and post-processing.

The requirements for perceptual hashing come from two aspects. From a generic security point of view, a perceptual hash algorithm should possess the following properties:
– One-way: it is hard to reveal the input from the hash value;
– Collision-resistant: it is hard to find different inputs that have similar hash values; given an input and its hash value, it is hard to find another input which has a similar hash value;
– Key-dependent: the hash value is highly dependent on a key.
The first property is useful for protecting the confidentiality of the input. The second property ensures that the chance of collision is negligible, so that the hash value can be considered only as a fair representation of the corresponding input. The last property is used for entity authentication, i.e., only the entity that knows the key can generate the correct hash value, see [1, message authentication code]. Additionally, from a multimedia security point of view, there is a more demanding requirement:
– The hash value is insensitive to legitimate media content distortion, but sensitive to malicious modification.
Unfortunately, these requirements are not all well fulfilled in practice. This is due to the intrinsic limitation of perceptual hash algorithms. A limit of perceptual hashing is that perceptually insignificant but malicious distortion cannot be distinguished from legitimate distortion. We define this here as the minor modification problem. Since a perceptual hash value is computed from robust features, it can be used for content authentication. However, the effect is limited. A perceptual hash value is only sensitive to significant content modification, while malicious attacks can be perceptually insignificant [2, 3]. Considering an image as a point in a vector space, the essence of perceptual hashing is dimension reduction [4]. A hash value is typically computed from low dimensional features. It is naturally resistant to distortion that only affects higher dimensions. However, the distortion brought by malicious attacks can be as small as legitimate distortion. As long as the distortion only affects high dimensions, no matter whether it is legitimate or malicious, it will be tolerated. For example, Fig. 2 shows two Lena images: a) a maliciously modified version; b) a compressed version. Existing image hash algorithms may not be able to distinguish the two images. For example, the ART-based image hash algorithm in [5] is used to compute hash values for these images. The hash values are compared to the original one. The resulting hash distances (normalized Hamming distance) are 0.0091 for the modified version and 0.0195 for the compressed version. Such small distances
normally indicate that the inputs are legitimate. The distances even imply that the modified version looks more authentic than the compressed version. In order to exclude insignificant but malicious distortion, a tight threshold can be used for hash comparison. However, that will also exclude some legitimate distortion, thus decreasing the robustness. Therefore, conventional image hash algorithms are not suitable for applications with high security requirements.
Fig. 2. The minor modification problem: (a) modified Lena; (b) compressed Lena
The research on perceptual image hashing used to focus on robustness. The earliest work was probably proposed by Schneider and Chang in 1996 [6], based on the image histogram. Later, in 1999, Fridrich proposed another algorithm based on randomized image block projection, which for the first time introduced the use of a secret key during hash generation [7, 8]. New algorithms come up with novel ways of feature extraction, such as [5, 9–11], and they typically strive for better robustness. For example, the Radon transform based algorithm by Lefèbvre et al. [12] and the Fourier-Mellin transform based algorithm by Swaminathan et al. [13] claim to have relatively good resistance to rotation and some other geometric distortion. Another research topic of interest is the security of the key, see e.g. [14, 15]. Nevertheless, the issue considered in this work has never been specifically addressed.
3 A Secure Image Hash Algorithm
In order to alleviate the minor modification problem, we propose to design an image hash algorithm by a block-based approach. That means, we consider an image block as the basic unit for authenticity protection. We evenly divide an image into blocks, and apply a “block” hash algorithm to each block. The final hash value is the concatenation of all block hash values. In this way, malicious modification is restricted up to the scale of a block.
A straightforward way to construct a block-based image hash algorithm is to apply an existing image hash algorithm to image blocks instead of the whole image. However, this approach might dramatically increase the hash size and the computational cost. A conventional image hash algorithm may have a hash size of up to 1 kb. If an image has 64 blocks, it costs 64 kb to store the whole hash value. Besides, a large hash size also influences the speed of analysis for large-scale applications. Therefore, it is necessary to design a block-based image hash algorithm specifically. The goal of this work is to design such an algorithm with a good balance between performance and cost.

Before we describe the proposed algorithm, we need to define the performance of such block-based algorithms in general. Since the algorithm is applied to image blocks, the performance of such an algorithm is defined as how well it protects the content of a block unit. Therefore, the size of the block plays an important role in the design. If the block is too small, the hash size becomes extremely large. Another observation is that a block begins to lose perceptual meaning if the size is too small. In the extreme case, a block shrinks to a point and has no perceptual meaning. Therefore, the block size must be carefully chosen. On the other hand, the perceptual importance of a block is also relative to the size of the image. For example, a 64 × 64 block may not be significant in a 2048 × 2048 image, but it is significant in a 512 × 512 image. This means we need to fix the block size and the image size when defining the authentication performance. Based on these considerations, we define the protection level as the ratio between the block dimension and the default image dimension. The default image dimension is not the dimension of the original input image, but the maximum dimension before feature extraction. The protection level of our proposed algorithm is 64/512. We use a block size of 64 × 64 pixels. Before feature extraction, the image is resized so that the maximum dimension equals 512 pixels.

The basic idea of the proposed algorithm is to generate a hash value from low frequency phases of the two-dimensional discrete Fourier transform (DFT) coefficients of an image block. It begins with a pre-processing stage. An input image is first converted to gray scale and resized by bicubic interpolation to make the maximum dimension equal to 512 pixels. The resulting image is smoothed by a 3 × 3 average filter and processed by histogram equalization. These steps have several effects: 1) reduce the dimensionality of the feature space; 2) limit the hash length and the computation time; 3) remove insignificant noise and increase robustness; 4) synchronize the image size. The preprocessed image is padded with zeros to make its size a multiple of 64. The rest of the algorithm is applied to image blocks of 64 × 64 pixels. The individual steps are explained in detail below. The block hash values are concatenated to form the final hash value.
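A minimal sketch of this pre-processing stage is shown below; OpenCV is used here only as an assumed stand-in for the actual implementation, and the function name and return format are illustrative.

```python
import cv2
import numpy as np

def preprocess(path, max_dim=512, block=64):
    """Pre-process an image and split it into 64x64 blocks.

    Steps follow the description above: convert to gray, resize (bicubic) so the
    larger dimension equals `max_dim`, smooth with a 3x3 average filter, apply
    histogram equalization, and zero-pad so both dimensions are multiples of `block`.
    """
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    scale = max_dim / max(img.shape)
    img = cv2.resize(img, None, fx=scale, fy=scale, interpolation=cv2.INTER_CUBIC)
    img = cv2.blur(img, (3, 3))                      # 3x3 average filter
    img = cv2.equalizeHist(img)
    pad_h = (-img.shape[0]) % block
    pad_w = (-img.shape[1]) % block
    img = np.pad(img, ((0, pad_h), (0, pad_w)), constant_values=0)
    # return the 64x64 blocks in row-major order
    return [img[r:r + block, c:c + block]
            for r in range(0, img.shape[0], block)
            for c in range(0, img.shape[1], block)]
```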
3.1 Feature Extraction from Image Blocks
The feature extraction is applied to image blocks. It works in the DFT domain. The DFT is an orthogonal transform. Since the coefficients are uncorrelated,
there is low redundancy in the extracted features. The two-dimensional DFT of an M × N image block x_{m,n} is defined as

X_{k,l} = Σ_{m=0}^{M−1} Σ_{n=0}^{N−1} x_{m,n} exp( −j2π (nk/N + ml/M) ),  k = 0, 1, …, N−1, l = 0, 1, …, M−1.

The extracted feature is the phase information of the coefficients, ∠X_{k,l} ∈ [0, 2π). It is well known that the phase is critical in the representation of natural images, see e.g. [16–18]. After an image block is transformed by a 2D-DFT, the coefficient matrix is organized to have the zero frequency (DC) coefficient in the center. Since low frequency coefficients are less likely to be affected by incidental distortion, an initial feature vector is formed by low frequency phases. For implementation simplicity, the phases within a central square of width 2l + 1 are extracted, where l is an integral parameter that controls the length of the feature vector. This is illustrated in Fig. 3.
Fig. 3. Selection of low frequency phases: (a) diagram of the DFT phase matrix, showing the selected phases (l = 8) and the discarded part; (b) the phase map of an image block
3.2 Feature Reduction and Randomization
The phase matrix in the frequency range specified by l is processed to compose the final feature vector. In our algorithm we set l = 8. Since pixel values are real numbers, the DFT coefficient matrix is conjugate-symmetric. Therefore, about half of the selected phases are redundant. The phase matrix is divided into two parts and the redundant part is discarded, as shown in Figure 3. The zero phase of the DC coefficient is also discarded. There are 144 phase values left. They will be randomized and compressed. The randomization part requires a cryptographically
secure pseudo-random bit generator (PRBG). It generates uniformly distributed pseudo-random data from a secret key. It can be implemented, e.g., by running the block cipher AES in counter mode [1, 19]. Specifically, there are two randomization steps and two compression steps (Fig. 4). First, the 144 phase values are combined into a column vector v. This vector is subjected to a secret permutation p generated by the secure PRNG. The second step is compression. A new feature vector v′ is generated from the permuted one p(v) by computing the mean of every 2 elements:

v′_i = ( p(v)_{2i} + p(v)_{2i+1} ) / 2,  i = 0, …, 71.    (1)
This step not only makes the final hash value more compact, but also increases robustness and security. The third step is dithering. The final feature vector f is derived by adding a dither sequence s to v′; this step is motivated by Johnson and Ramchandran's work [20]. The dither sequence is generated by the secure PRNG. The elements of the dither sequence are uniformly distributed between 0 and 2π, and the addition is computed modulo 2π:

f_i = ( v′_i + s_i ) mod 2π,  i = 0, …, 71.    (2)
These steps make the hash value highly dependent on the secret key. The last step is quantization of the feature vector f . Because legitimate distortion is likely to cause slight changes in DFT phases, coarse quantization can be applied to gain robustness. For implementation simplicity, an n-bit uniform quantizer with Gray coding is used. The parameter n controls the quantization accuracy. We use n = 2.
Fig. 4. Feature vector processing: the initial feature vector (144 phase values) is permuted, compressed by mean value computation, dithered, and quantized into the 144-bit block hash value; the permutation and the dither sequence are derived from the key by the secure PRNG
3.3 Hash Comparison
A metric must be defined to measure the distance between hash values. The hash distance metric used in the proposed scheme is the bit error rate (BER), or the normalized Hamming distance. It is defined as
d_{xy} = (1/N) Σ_{i=0}^{N−1} |x_i − y_i| ,
where x and y are two binary vectors of length N . The (block) hash distance is compared with a threshold. The two images (or blocks) are considered similar if the distance is below the threshold. In this work, we mainly consider the similarity between image blocks.
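A small sketch of the comparison step, assuming the 0/1 bit vectors produced above; the default threshold used here is the equal error rate point reported in Section 4.3 and is only a reasonable default, not a prescription.

```python
import numpy as np

def ber(x, y):
    """Bit error rate (normalized Hamming distance) between two bit vectors."""
    x, y = np.asarray(x), np.asarray(y)
    return np.count_nonzero(x != y) / x.size

def similar_blocks(hash1, hash2, threshold=0.344):
    """Decide whether two block hashes correspond to similar content."""
    return ber(hash1, hash2) <= threshold
```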
4 Experiment Results
The proposed algorithm has been extensively tested. The results are shown in this section. We consider the performance in terms of robustness, discrimination, and key dependence. A database of natural scene photos (www.imageafter.com) is used in the tests. It consists of different genres such as architecture, art, humanoids, landscape, objects, vehicles, etc. The image resolutions are larger than 1280 × 1280 pixels. All tests are performed on the image block level, except for the key dependence test.
4.1 Robustness Test
A good algorithm is robust against legitimate distortion. We consider a few kinds of distortion as legitimate – JPEG compression, isotropic down-scaling, Gaussian smoothing, and additive white Gaussian noise (AWGN). They are commonly encountered in practice. The hash value is expected to be insensitive to these operations. In this test, we generate distorted versions of 900 original images in the database according to Table 1, and compute all the hash values. For each pair of an original block and its distorted version, we compute the average block hash distance. The results are listed in Tables 2–5.

Table 1. Legitimate distortion
Distortion name      Distortion level (step)
JPEG compression     Quality factor: 5 – 25 (5), 30 – 90 (10)
Down-scaling         Scale ratio: 0.3 – 0.9 (0.1)
AWGN                 Signal to noise ratio: 10 – 40 (5) dB
Gaussian smoothing   Window size: 3 – 13 (2)
The distortion levels are selected to slightly exceed the normal ranges in practice. The results show that, except for some extreme cases, e.g., AWGN with 10 dB signal to noise ratio (SNR) or JPEG with quality factor 5, all the average hash distances are quite small and generally increase with the distortion level. Gaussian smoothing has the least influence on the hash value – the distance is 0.031 for all distortion levels. This demonstrates the good robustness of
Table 2. Robustness test for JPEG compression
Quality factor                5      10     15     20     25     30
Average block hash distance   0.218  0.168  0.144  0.128  0.117  0.108
Quality factor                40     50     60     70     80     90
Average block hash distance   0.097  0.088  0.081  0.072  0.056  0.039

Table 3. Robustness test for down-scaling
Scale ratio                   0.3    0.4    0.5    0.6    0.7    0.8    0.9
Average block hash distance   0.096  0.065  0.042  0.050  0.058  0.067  0.042
the extracted features. For down-scaling, the hash distance does not monotonically increase with the distortion level. This is possibly because the scaling operation and the resizing operation (in the pre-processing stage) involve pixel sub-sampling and interpolation, where noise is not always proportional to the scale ratio; moreover, they may lead to slight changes of the aspect ratio and displacement of image blocks, thus introducing some randomness into the final results. Nevertheless, the distances are small compared to JPEG and AWGN.
4.2 Discrimination Test
In this test, we compare image blocks of different content. A pair of blocks is randomly chosen from two images in the same category, and their block hash values are compared. The purpose of this test is to see if the hash value is able to distinguish perceptually different blocks. It also shows the algorithm's potential to detect malicious modification on the block level. Although randomly selected blocks are likely to be different, sometimes we meet similar blocks. Therefore, a metric is needed to decide whether two blocks are really different. We use the well-known structural similarity (SSIM) [21] for judging the ground truth. The SSIM is a widely used similarity metric for images. It compares two images, and returns a score between 0 (no similarity) and 1 (full similarity). We apply SSIM to image blocks, and compare the similarity score with a predefined threshold t. In our experiment, we set t = 0.7. Those block pairs whose SSIM scores are below the threshold are considered perceptually different. A large hash distance is expected for different blocks. We compute the hash distance for about 800 thousand pairs of different blocks. The average hash distances for some image types are listed in Table 6. The overall average hash distance is 0.437.

Table 4. Robustness test for additive white Gaussian noise
Signal to noise ratio (dB)    10     15     20     25     30     35     40
Average block hash distance   0.214  0.171  0.135  0.106  0.083  0.064  0.048
Table 5. Robustness test for Gaussian smoothing
Window size                   3      5      7      9      11     13
Average block hash distance   0.031  0.031  0.031  0.031  0.031  0.031

Table 6. Average hash distance between different blocks
Image type     Average block hash distance (standard deviation)
Architecture   0.432 (0.040)
Art            0.441 (0.042)
Landscape      0.435 (0.046)
Objects        0.447 (0.042)
Humanoids      0.440 (0.042)
Vehicles       0.441 (0.043)
...            ...
Overall average (standard deviation): 0.437 (0.043)
The discrimination performance is measured by the average block hash distance. Intuitively, if two image blocks are randomly chosen, they are most likely to be "half similar". If the hash distance uniformly represents the similarity, on average it is about half of the full distance, i.e., 0.5. From this point of view, the proposed hash achieves good discrimination. The deviation from the ideal situation can be due to several reasons. First, the small size of the block hash limits the discrimination power. Second, since the test is carried out for the same type of images, the bias can be understood as the essential similarity among images of the same kind. The results show that it is unlikely to find different blocks with similar hash values. Therefore, attempts to replace a block with another one are unlikely to succeed without being detected.
4.3 Hypothesis Test
In a typical application scenario, after the block hash distance d is computed, it is compared with a threshold T. The decision is made between two hypotheses:
– H0 – the blocks correspond to different content;
– H1 – the blocks correspond to similar content.
If d ≤ T, we choose H1; otherwise we choose H0. The overall performance of the algorithm can be characterized by the true positive rate Pd and the false positive rate Pf. They are defined as:
– Pd = Probability {d ≤ T | H1};
– Pf = Probability {d ≤ T | H0}.
When the threshold T is increased, the hash is likely to tolerate more distortion, but that also increases the chance of false positives. A good algorithm should suppress the false positive rate while maintaining a high true positive rate. Previously, we have generated about 1.4 million positive cases (in the robustness test)
and 0.8 million negative cases (in the discrimination test) on the block level. By adjusting the value of T, we compute Pd and Pf and plot their trends, as shown in Fig. 5. One can see that Pd increases with T, while Pf is negligible until T = 0.3. In order to choose the most suitable threshold value, we also take into account the false negative rate Pm, which is defined as
– Pm = Probability {d > T | H1}.
By definition, Pm = 1 − Pd. We can see that Pf and Pm are contradicting requirements. Depending on the severity of false positives and false negatives, different applications give their own bias towards Pf or Pm. By default, we can choose the equal error rate point (EERP), where Pf = Pm, as the working point. In our case, T = 0.344 is the EERP, where Pd = 0.977 and Pf = Pm = 0.023. We also plot the relationship between Pd and Pf, known as the receiver operating characteristic (ROC) curve, in Fig. 6. The ROC curve illustrates the good overall performance from another angle – the proposed algorithm offers efficient trade-offs between Pd and Pf.

Fig. 5. True positive rate, false positive rate, and false negative rate vs. the threshold T
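The threshold analysis can be reproduced from two sets of block hash distances as sketched below; the threshold grid and the simple scan for the equal error rate point are illustrative choices, not the exact procedure used for the reported figures.

```python
import numpy as np

def rates(pos_distances, neg_distances, T):
    """True positive, false positive, and false negative rates at threshold T.

    `pos_distances` are hash distances of genuine pairs (hypothesis H1),
    `neg_distances` are those of different-content pairs (hypothesis H0).
    """
    Pd = np.mean(np.asarray(pos_distances) <= T)   # true positive rate
    Pf = np.mean(np.asarray(neg_distances) <= T)   # false positive rate
    return Pd, Pf, 1.0 - Pd                        # Pm = 1 - Pd

def equal_error_point(pos_distances, neg_distances, steps=1000):
    """Scan thresholds and return the one where Pf and Pm are closest."""
    best_T, best_gap = 0.0, float("inf")
    for T in np.linspace(0.0, 0.5, steps):
        Pd, Pf, Pm = rates(pos_distances, neg_distances, T)
        if abs(Pf - Pm) < best_gap:
            best_T, best_gap = T, abs(Pf - Pm)
    return best_T
```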
4.4 Key Dependence Test
All the above tests use a default key for generating hash values. When the algorithm is used as a message authentication code, the hash value must be highly dependent on the key. In this test, we use 900 images to validate the key dependence property. For each image, we generate 100 hash values using different keys. They are pair-wise compared. There are 4950 hash comparisons for each image, and about 5 million comparisons in total. If two different keys are used for the same image, the corresponding hash values should be totally different, as if they correspond to different content. The average hash distances for all images
Fig. 6. The receiver operating characteristics: (a) the complete curve; (b) a close-up
are plotted in Fig. 7. All the average distances are centered around 0.5, within a very small dynamic range of 0.4997 – 0.5004. This demonstrates the good randomization mechanism of the proposed scheme.
Fig. 7. Key dependence test: average hash distance per image between hash values generated with different keys (overall average = 0.5001)
5 Conclusion and Discussion
In the multimedia domain, a challenge to content authentication is that the same content may have different digital representations. A potential solution is perceptual hashing, because it provides robustness against incidental distortion, such as compression. However, due to the minor modification problem, conventional image hash algorithms only protect the overall content of an image. In this work, we propose an image hash algorithm with a much higher security level. The algorithm aims at protecting the authenticity of image blocks. We
define the protection level as 64/512, which typically corresponds to 1/48 of the area of a 4:3 image. For each image block, features are computed from the phase values after the discrete Fourier transform. Experiments show that the hash has strong robustness against JPEG compression, scaling, additive white Gaussian noise, and Gaussian smoothing. The hash algorithm is key dependent, thus it can be used as a message authentication code. Experiments confirm that the hash value is highly dependent on the key. The hash size is 144 bits per 64 × 64 block (6912 bits per 4:3 image) after pre-processing. In spite of such a compact size, the hypothesis test shows that we achieve very efficient trade-offs between the false positive rate and the true positive rate. In our experiments, distortions such as rotation and translation are not taken into account, because it is questionable to consider them authentic in the content protection context. They typically generate much higher distortion than other non-geometric manipulations, thus giving a chance to malicious modification. In general, the performance of content authentication degrades significantly if geometric distortion is allowed. What is not discussed in this work is the security of the key, see [22]. Given some known image/hash pairs, it is not obvious how much effort is needed to derive the key of our algorithm. In practice, we advise not to use the same permutation and dithering for all blocks in an image. An in-depth security analysis will be given in the future.
References
1. Menezes, A., van Oorschot, P., Vanstone, S.: Handbook of Applied Cryptography. CRC Press, Boca Raton (1996)
2. Coskun, B., Memon, N.: Confusion/diffusion capabilities of some robust hash functions. In: Proc. of 40th Annual Conference on Information Sciences and Systems, Princeton, USA (March 2006)
3. Weng, L., Preneel, B.: Attacking some perceptual image hash algorithms. In: Proc. of IEEE International Conference on Multimedia & Expo, pp. 879–882 (2007)
4. Voloshynovskiy, S., Koval, O., Beekhof, F., Pun, T.: Conception and limits of robust perceptual hashing: towards side information assisted hash functions. In: Proc. of SPIE, vol. 7254 (February 2009)
5. Weng, L., Preneel, B.: Shape-based features for image hashing. In: Proc. of IEEE International Conference on Multimedia & Expo (2009)
6. Schneider, M., Chang, S.F.: A robust content based digital signature for image authentication. In: Proc. of International Conference on Image Processing (ICIP 1996), vol. 3, pp. 227–230 (1996)
7. Fridrich, J.: Robust bit extraction from images. In: Proc. of IEEE International Conference on Multimedia Computing and Systems, vol. 2, pp. 536–540 (1999)
8. Fridrich, J., Goljan, M.: Robust hash functions for digital watermarking. In: Proc. of International Conference on Information Technology: Coding and Computing (2000)
9. Venkatesan, R., Koon, S.M., Jakubowski, M., Moulin, P.: Robust image hashing. In: Proc. of IEEE International Conference on Image Processing, Vancouver, CA, vol. 3, pp. 664–666 (2000)
10. Mihçak, M.K., Venkatesan, R.: New iterative geometric methods for robust perceptual image hashing. In: Proceedings of ACM Workshop on Security and Privacy in Digital Rights Management, Philadelphia, PA, USA (November 2001)
11. Monga, V., Evans, B.: Robust perceptual image hashing using feature points. In: Proc. of IEEE International Conference on Image Processing, vol. 1, pp. 677–680 (2004)
12. Lefèbvre, F., Macq, B., Legat, J.D.: RASH: RAdon Soft Hash algorithm. In: Proc. of the 11th European Signal Processing Conference, Toulouse, France, vol. 1, pp. 299–302 (September 2002)
13. Swaminathan, A., Mao, Y., Wu, M.: Robust and secure image hashing. IEEE Transactions on Information Forensics and Security 1(2), 215–230 (2006)
14. Swaminathan, A., Mao, Y., Wu, M.: Security of feature extraction in image hashing. In: Proc. of 2005 IEEE International Conference on Acoustics, Speech, and Signal Processing, Philadelphia, PA, USA (March 2005)
15. Radhakrishnan, R., Xiong, Z., Memon, N.: On the security of the visual hash function. Journal of Electronic Imaging 14, 10 (2005)
16. Oppenheim, A., Lim, J.: The importance of phase in signals. Proceedings of the IEEE 69(5), 529–541 (1981)
17. Gegenfurtner, K., Braun, D., Wichmann, F.: The importance of phase information for recognizing natural images. Journal of Vision 3(9), 519a (2003)
18. Ni, X., Huo, X.: Statistical interpretation of the importance of phase information in signal and image reconstruction. Statistics & Probability Letters 77(4), 447–454 (2007)
19. Barker, E., Kelsey, J.: Recommendation for random number generation using deterministic random bit generators. Technical report, NIST (2007)
20. Johnson, M., Ramchandran, K.: Dither-based secure image hashing using distributed coding. In: Proc. of IEEE International Conference on Image Processing, vol. 2, pp. 751–754 (2003)
21. Wang, Z., Bovik, A., Sheikh, H., Simoncelli, E.: Image quality assessment: From error visibility to structural similarity. IEEE Transactions on Image Processing 13(4), 600–612 (2004)
22. Mao, Y., Wu, M.: Unicity distance of robust image hashing. IEEE Transactions on Information Forensics and Security 2(3), 462–467 (2007)
Low-Attention Forwarding for Mobile Network Covert Channels Steffen Wendzel and Jörg Keller University of Hagen Faculty of Mathematics and Computer Science 58084 Hagen, Germany
Abstract. In a real-world network, different hosts involved in covert channel communication run different covert channel software as well as different versions of such software, i.e. these systems use different network protocols for a covert channel. A program that implements a network covert channel for mobile usage thus must be capable of utilizing multiple network protocols to deal with a number of different covert networks and hosts. We present calculation methods for utilizable header areas in network protocols, calculations for channel optimization, an algorithm to minimize a covert channel's overhead traffic, as well as implementation-related solutions for such a mobile environment. By minimizing the channel's overhead depending on the set of supported protocols between mobile hosts, we also minimize the attention raised through the channel's traffic. We also show how existing covert network channel infrastructure can be modified without replacing all existing infrastructure elements by proposing the handling of backward-compatible software versions. Keywords: network covert channel, covert channel protocols, covert proxy, mobile security.
1 Introduction
Covert channels are hidden communication channels which are not intended for information transfer at all [5]. The intention of a covert channel is to hide the existence of an information flow that possibly violates a system's security policy [6, 7]. Covert channels contribute to the free expression of opinion since they are useful to bypass censorship [28]. Network covert storage channels are covert channels that transfer information through a network by altering an attribute within the channel, while network timing channels communicate via modifications of packet timings and packet ordering [14]. The remainder of this paper focuses on covert storage channels. For optimal auto-configuration of a program implementing a covert storage channel, it is necessary to provide means to minimize the raised attention and to select network protocols depending on the situation. Such an auto-configuration of a program is a requirement for mobile implementations which need to be able to add new storage channel types on demand (e.g. in the form of an extension). If
such a program is able to deal with different network protocols, it can communicate with a number of different covert channel systems, i.e. different covert systems utilizing different network protocols. Fig. 1 depicts the scenario of a mobile covert channel user Bob communicating with Alice by utilizing different covert channels from different locations. Our proposed extendable implementation is able to provide such scenarios.
Fig. 1. Mobile communication from Bob to Alice using a single covert channel software able to deal with multiple networks with a number of covert channel systems utilizing different protocols
In a large number of network protocols, such as TCP, IPv4, IPv6, or ICMP, header areas usable for covert channels are known [14–17]. At the same time, methods to prevent, limit and detect covert channels have been developed as well, such as the shared resource matrix methodology, the pump, covert flow trees, fuzzy timings and others [18–26]. Implementations capable of covert channel protocol switching or proxy chaining exist, e.g. LOKI2 [4], phcct [11, 13], and cctt [1]. Yet, they lack auto-configuration and dynamic extensibility. The idea of interoperability between covert channel hosts using different network protocols was discussed in [9], where a two-phase protocol was presented which contained a so-called "network environment learning phase" (NEL, where the protocols used between two covert hosts were determined), as well as a "communication phase". This paper describes a solution for the missing aspects of such covert network channels and improves on the related work: We discuss the handling of micro protocol implementations in mobile covert channels. Furthermore, we motivate their forensic usefulness in the context of multiprotocol channels, and we also focus on the optimization of case-dependent covert channels (e.g. optimized performance and optimized packet sizes). This optimization is necessary as the available covert space per packet and the packet or header sizes may vary considerably between protocols. Our approach uses a linear optimization. Additionally, we present a simple but effective forwarding algorithm for covert proxies, able to reduce a channel's raised attention in such mobile environments.
The remainder of this paper is organized as follows. Section 2 serves as an introduction to the topic of utilizable header areas in network protocols, while Section 3 extends the topic of the previous section to multiprotocol channels with different probabilities and additionally motivates multiprotocol channels from a forensic point of view. Section 4 deepens the knowledge on utilizable areas in network packets, and Section 5 focuses on the attribute-dependent optimization of protocol usage. Section 6 discusses implementation-related topics and deals with the question of how a smart sharing of utilizable space for both covert channel internal protocols and payload is feasible. The calculations of Sections 2 and 3 are extended to the context of covert channel proxy chains in Section 7, where a forwarding algorithm for covert data is presented. The paper closes with a conclusion in Section 8.
2 Calculating Basic Space Requirements
It is well-known that many hiding options exist in popular network protocols. A classification of these network protocols from a covert channel user's point of view enables the user to choose protocols according to his requirements. For example, one classification attribute would be the detectability of a specific hiding method in a given protocol; another classification attribute is the amount of space that can be utilized for covert data in a specific protocol. In case a covert channel user needs to transfer a large amount of data, it is required to place as much data as possible within a single network packet, since too many unexpected or abnormal network packets can alert a monitoring authority.
Fig. 2. Sum of utilizable areas of a network protocol’s header
To calculate the available space spkt within a network packet’s header, it is required to sum up the sizes of all utilizable areas of a header, which is illustrated
in Fig. 2. Given a unidirectional covert network channel and assuming that the amount of covert data soverall required to be sent is known a priori, the number of packets N needed to transfer all data is

N = ⌈ soverall / spkt ⌉    (1)
since spkt · N ≥ soverall . If the channel is able to let a receiver acknowledge a network packet with another network packet containing only an ACK flag and the ID of the acknowledged packet, the number of transmitted packets is 2N . Here, we do not count re-transmitted packets. We further assume that the ACK flag and the received packet’s ID fit into a single acknowledgement packet and that each such packet acknowledges only one received packet.
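A small sketch of equation (1), including the doubling for acknowledged transmission; the function name and the assumption that both quantities are given in bits are illustrative.

```python
import math

def packets_needed(s_overall, s_pkt, acknowledged=False):
    """Number of covert packets needed to transfer s_overall bits.

    s_pkt is the utilizable space per packet; with per-packet acknowledgements
    the count doubles (retransmissions are not counted).
    """
    n = math.ceil(s_overall / s_pkt)   # smallest N with s_pkt * N >= s_overall
    return 2 * n if acknowledged else n

# e.g. packets_needed(10_000, 24) -> 417, packets_needed(10_000, 24, True) -> 834
```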
Fig. 3. spkt value for a multi-layer covert channel
When a multi-layered network environment is used (as in the case of the TCP/IP model), more than one layer can be used to place hidden data (as shown in Fig. 3). In such cases, the value of spkt is the sum of all utilizable header areas in every header. If data transport over routers is mandatory, the utilizable areas of non-routable network headers (e.g. Ethernet frames) cannot be used for the covert channel since they are replaced on every hop of the routing path; thus spkt may not contain these areas. When a plaintext protocol is utilized, it is usually possible to dynamically extend a protocol's header by a number of attributes (e.g. by adding different HTTP attributes such as "User-Agent"). Also, it is possible to dynamically extend the size of a single attribute up to a limited size (e.g. the HTTP URL parameter can usually grow to a length of up to 1024 bytes¹). Such dynamic values can be considered in our calculation when they are used with a static length or if the average length is used.
1 According to [27], HTTP servers should be able to handle an unlimited number of bytes within the URL parameter, but in practice many servers' URLs, especially in embedded systems such as Cisco VoIP telephones, are limited to a length of 1024 bytes or less.
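To make the layered case of Fig. 3 concrete, the following sketch (our own illustration; layer names, bit counts and the routable flags are assumed example values) sums the utilizable areas over all headers and skips non-routable layers when the path crosses routers:

# Each entry: (layer, utilizable bits, survives routing?). Example values only.
layers = [
    ("Ethernet", 48, False),   # replaced hop by hop, unusable over routed paths
    ("IPv4",     19, True),    # e.g. 3 TTL bits + 16-bit Identification field
    ("ICMP",     32, True),    # e.g. the 32-bit address mask field
]

def s_pkt(layers, routed=True):
    # sum only those utilizable areas that survive the intended path
    return sum(bits for _, bits, routable in layers if routable or not routed)

print(s_pkt(layers))                # 51 bits over a routed path
print(s_pkt(layers, routed=False))  # 99 bits inside a single LAN segment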
3 Multiprotocol Channels
Multiprotocol network covert channels were introduced by a proof of concept tool called "LOKI2" [4] and enhanced through transparent protocol switching in [11]. Such channels utilize a number of protocols to increase the effort needed to detect and analyze the traffic they carry. Multiprotocol channels operate as follows: Before a new covert network packet is sent, the sender chooses one of the available protocol hiding methods (e.g. in HTTP [2] or in DNS [3]), embeds the covert data into the new network packet, and sends it. In case more than one network protocol is used by a covert channel, the amount of available space per packet spkt can vary. Assume a set of network protocols P = {P1, P2, ..., Pn} used by the covert channel with the available spaces s1, s2, ..., sn, where si = spkt(Pi); the average amount of available space s̄pkt is

s̄pkt := Σ_{i=1}^{n} pi si    (2)
if protocol Pi is chosen with probability pi. In the simplest case, a network protocol header, e.g. the DNS header, contains a combined sum of all spaces, spkt, as mentioned above. Due to defined rules (e.g. protocol standards) for such protocols, it is apparent that it is not always possible to combine all utilizable parts of a header at the same time, e.g. an HTTP POST request is not possible together with an HTTP OPTIONS request. To achieve a correct calculation of spkt, all non-combinable header parts must be treated as different protocols. If we assume that a network covert channel uses HTTP with the three request types GET, POST, and OPTIONS with different utilizable areas, the set of network protocols will be P = {HTTP_GET, HTTP_POST, HTTP_OPTIONS}. In case ICMP types 0 and 8 (echo response and echo request) are used as well, both ICMP types can be represented by one element P_ICMP-echo in P since the utilizable header areas are identical [8]. If both protocol types are represented as one element, the probability of occurrence p_ICMP-echo of this element must be the sum of p_ICMP-8 and p_ICMP-0. The utilizable area of a network packet's header is not always used completely by a covert channel, since attributes such as the raised attention per bit of a packet can be high for some bits. For example, if only the 3 least significant bits of the IPv4 TTL value are used for a covert channel, their raised attention is much lower compared to changing the 3 least significant bits of the IPv4 source address, since it is usual for network packets to take different routes, but it is (at least for internet servers outside of a LAN) not usual to have connections to the same host from a small range of 2³ different IP addresses. Thus, protocol utilizations with different attributes should be separated in the calculations by representing them as distinguishable elements of the set P. From a forensic point of view, such multiprotocol covert channels result in a positive side-effect: In case one of the protocols used could be detected as used for a covert channel and was successfully analyzed, i.e. an attacker was able to
understand the hidden content, the non-analyzed data packets of a communication are still hidden since they are transferred within other protocols using different hiding techniques, as described in [12]. Additionally, it is feasible to send highly sensitive data in fragments to make a forensic analysis harder: As shown in [29], TCP/IP normalizers face a problem called inconsistent TCP retransmissions, i.e. by sending packets that have identical TCP sequence numbers but different content and different TTL values, a normalizer cannot easily determine a connection's real content. By fragmenting covert channel payload using a multiprotocol channel, a forensic analysis of covert channel traffic faces a similar problem: For a forensic analysis, it is hard to determine the real covert channel's content, as well as the relevance of the content. This can be achieved by sending key information in at least two pieces and over different protocols: (Part1, P1) and (Part2, P2). For example: A covert channel user sends "secret" using ("se", P1), ("cr", P2), ("et", P1). If an attacker is capable of detecting and analyzing covert traffic of protocol P1, the attacker will only see the traffic "seet" instead of "secret". Thus, the hiding of critical keywords can be improved and a forensic analysis can result in presumably useless data.
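A toy sketch of this fragmentation idea follows (our own illustration; the chunk size and the round-robin protocol schedule are assumptions that both endpoints would have to agree on in advance):

def fragment(secret, protocols, chunk=2):
    # split the keyword into chunks and assign protocols round-robin
    chunks = [secret[i:i + chunk] for i in range(0, len(secret), chunk)]
    return [(chunks[i], protocols[i % len(protocols)]) for i in range(len(chunks))]

plan = fragment("secret", ["P1", "P2"])
print(plan)                                       # [('se','P1'), ('cr','P2'), ('et','P1')]
print("".join(c for c, p in plan if p == "P1"))   # an observer of P1 only sees 'seet'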
4 Utilizable Areas in Well-Known Protocols
This section exemplifies spkt values for known covert channel utilizations of network protocols. Each implementation of a covert channel can cover different areas of a header, thus spkt values are implementation-dependent. Therefore, it is necessary for the covert communication between two hosts to modify exactly the same bits of a header. To achieve this goal, covert channel software versions should be backward-compatible, i.e. when a transaction begins, one or two version bits should be exchanged and the lowest covert channel software version available on both hosts may be used to prevent errors. Taking the ICMPv4 address mask request as an example to determine spkt, one can find the information to utilize the 32 bit address mask in [14], i.e. spkt(ICMP_AddrMaskReq) = 32 bits. The same source shows a number of IPv4 header hiding options; in case one chooses the 3 lowest bits of the TTL as well as the "Identification" value (16 bits) for the implementation of a covert channel, then spkt is 19 bits, but it can also have lower or higher values if a different number of header bits is used. If the ICMP address mask request as well as the IP header are utilized together (as shown in Fig. 3), and assuming that they are used as described before, then spkt will be the sum of both spkt values: spkt = 19 + 32 = 51 bits. Different bits of a header correspond to different detection probabilities, e.g. the modification of the least significant bit (LSB) of the IPv4 TTL value is harder to detect than a modification of the most significant bit (MSB) of the TTL. We do not believe that it is possible to define concrete values for the detectability of each bit, since such values always depend on the given monitoring situation of a network. E.g., the probability of detecting a covert channel that uses the ISN
bits varies depending on whether the TCP ISN field is checked for covert data for all connections in a network or not [26]. To quantify such detectability values, we propose to link a protocol Pi to an element of a small classification set (e.g. low, medium and high detectability, C = {Low, Medium, High}). This may require an additional separation of elements of the set P (Sect. 3 already separated incompatible protocol header utilizations like HTTP_OPTIONS and HTTP_GET). We assume as an example that an HTTP covert channel is used to tunnel data through a requested URL as well as through the "Accept-Language" value via the GET method (cf. [27]). Requested URLs are usually written to a logfile and displayed in monitoring software like "Webalizer"². The value of the field Accept-Language is unlikely to be monitored. Most small websites do not even handle the language settings sent by a client. Thus, the detection of covert data within the URL is usually more likely than the detection within the Accept-Language field. Therefore, a developer should define one element in P containing only the field with the low (or medium) detectability (HTTP_GET_AccLang) as well as one element that contains the field with the higher detectability as well, i.e. P = {HTTP_GET_AccLang, HTTP_GET_All-Fields}, which enables a covert channel to be configured as desired (more space per packet vs. keeping a lower profile). Additionally, it must be kept in mind that a protocol's classification should also depend on the protocol's probability of occurrence, since exotic protocols can raise higher attention.
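One possible way to represent such a split of P in an implementation is sketched below (our own illustration; the element names, bit counts and class assignments are assumptions chosen for this example, not measured values):

# Elements of P annotated with utilizable bits and a detectability class from C.
P = {
    "HTTP_GET_AccLang":    {"bits": 64,  "detectability": "low"},
    "HTTP_GET_All_Fields": {"bits": 512, "detectability": "high"},
    "ICMP_Echo":           {"bits": 32,  "detectability": "medium"},
}

def usable(P, max_class):
    # keep only elements whose class does not exceed the desired profile
    order = ["low", "medium", "high"]
    return {n: e for n, e in P.items() if order.index(e["detectability"]) <= order.index(max_class)}

print(sorted(usable(P, "medium")))   # low profile: HTTP_GET_All_Fields is dropped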
5 Optimizing Channels on Demand
A covert channel might be used in different situations and with different intentions, e.g.:
– When an automatic password cracking program needs to transfer short password information out of a network (e.g. one password per hour), the throughput of the covert channel is not important, while it is highly important to keep a low profile.
– When a blogger wants to upload a set of pictures of harmed protesters as soon as possible, a good covert channel throughput is required since the upload has to be done fast because of the relevance of that data in spite of its large size. Still, a certain level of coverage from detection is necessary.
To develop a covert channel program capable of dealing with such dynamic situations, it is required to make its behavior dependent on the attributes of the protocols used. First, we introduce a value qi that indicates how many bits are transferred to send a single covert bit using network protocol Pi with an (average) size sizeof(Pi) of that protocol's complete header:

qi := sizeof(Pi) / spkt(Pi)    (3)

2 Webalizer is an open source logfile monitoring software available at http://www.mrunix.net/webalizer/
Depending on the given situation, we use linear optimization to calculate optimal probabilities p1, ..., pn for all protocols P1, ..., Pn utilized between two systems. The set of constraints is that Σ_i pi = 1 and that 0 < m ≤ pi ≤ 1 for all protocols Pi, where m is a minimum threshold to ensure that each protocol is used at all, even if it is not really desirable in the current situation. A possible value for m is c/n, where c < 1 is a suitable constant. For example, with n = 20 and c = 0.2, each protocol will be chosen with at least 1% probability. Using such a threshold complicates forensic reverse engineering because it prevents concentration on a small number of protocols. If a high throughput of covert data for a fixed number of packets is required, then the target function to be maximized can be chosen as

f1 = Σ_{i=1}^{n} pi · si .
If the goal is a small amount of header information sent for a given number of covert data bits, then the target function to be minimized is

f2 = Σ_{i=1}^{n} pi · qi .
The latter function can also be maximized by subtracting it from a large fixed value such as Σ_i qi. In a similar manner, the number of packets used can be minimized. By using a weighted sum of those target functions, one can optimize for any desired compromise. Furthermore, other optimization criteria may be possible which can be used to construct further target functions. The choice of the target function, i.e. the intended situation, can be communicated between sender and receiver in a manner similar to the choice of the software version.
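With generic LP tooling, the optimization can be sketched as follows (our own illustration using SciPy; the si and sizeof(Pi) values, the threshold m and the weighting between f1 and f2 are assumed example numbers):

import numpy as np
from scipy.optimize import linprog

s = np.array([64.0, 32.0, 512.0])        # covert bits per packet s_i (example values)
size = np.array([400.0, 512.0, 4096.0])  # average header sizes sizeof(P_i) (example values)
q = size / s                             # overhead per covert bit, equation (3)

n, m, w = len(s), 0.2 / len(s), 0.5      # minimum share m = c/n and weight w
# maximize w*f1 - (1-w)*f2  <=>  minimize -(w*s) + (1-w)*q
res = linprog(-(w * s) + (1.0 - w) * q,
              A_eq=np.ones((1, n)), b_eq=[1.0],
              bounds=[(m, 1.0)] * n, method="highs")
print(res.x)                             # protocol probabilities p_1..p_n
print(float(res.x @ s))                  # resulting average space per packet, cf. equation (2)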
6 Implementations Based on Micro Protocols
Micro protocols are small protocols placed within the hidden data of a covert channel and used to control the channel [12]. The usage of micro protocols is mandatory for the implementation of the features described in earlier sections³. Within a micro protocol's data, information about the supported protocols of a client, as well as covert channel version information, can be exchanged between covert channel hops. E.g., one can define two bits representing version information as described in Sect. 4. Other bits can determine supported elements of P. As mentioned in Sect. 1, Yarochkin et al.
3 As long as the set of deployed protocols between two hosts is static, a micro protocol is not mandatory. To define a sequence of protocols to use, identical procedures and pseudo-random number generators with identical seeds for random choices can be used on both hosts. Other approaches leading to equal protocol sequences on both hosts are possible as well.
presented a simple technique for the determination of available protocols between covert channel hosts in [9], but did not focus on optimization, forwarding, protocol classification or covert channel versioning, as we do. An important point mentioned by Yarochkin et al. is of course the need to automatically filter administratively blocked protocols. Extensibility of existing covert channel software requires protocol information to be version dependent, i.e., in a new software version, a new protocol Pi could be supported which requires a new representation of Pi within the micro protocol. As mentioned in Sect. 1, the dynamic extensibility of the software requires on-demand support for new protocols. Given the explained techniques as well as a modular covert channel software design, all these features can be implemented. Such modular designs are well known from the Apache webserver module API (www.apache.org) as well as from Linux kernel modules (www.kernel.org). In case a micro protocol is used, it should be unified for all utilized network protocols to increase the stability of a covert channel software implementation. To find the maximum size such a micro protocol can use, it is required to find the minimal available size over all protocols used, i.e. smin = min_i spkt(Pi). In case a constant or minimum required payload size is used, smin must also contain this payload, which decreases the header size: spayload + sheader = smin, as presented in the left half of Fig. 4. If the payload is not of a constant size, e.g. if it requires only a defined minimum of space, the remaining space si,remain = si − smin can be used for additional payload.
Fig. 4. Minimum space required for a minimum payload size and a constant micro protocol header size visualized for a set of two network protocols
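A minimal sketch of this space bookkeeping follows (our own illustration; the micro protocol header size and the spkt values are assumed example numbers):

s = {"HTTP_GET_AccLang": 64, "ICMP_Echo": 32}    # spkt(P_i) in bits, example values
s_min = min(s.values())                          # bound for micro protocol header + minimum payload

s_header = 8                                     # assumed fixed micro protocol header size
s_payload = s_min - s_header                     # payload guaranteed in every packet
remaining = {name: si - s_min for name, si in s.items()}   # extra payload room per protocol

print(s_min, s_payload, remaining)   # 32 24 {'HTTP_GET_AccLang': 32, 'ICMP_Echo': 0}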
7 Network Covert Channels with Proxy Chains
Covert communication paths based on covert proxy servers are a means to implement anti-traceability into a network covert channel (see Fig. 5). The value of spkt becomes useful in this context too, if multiple protocols as well as multiple covert proxy hosts are used.
Fig. 5. A sample proxy network containing links with different spkt values (Si = spkt (Pi ), where i = 1, . . . , x), S=sender, R=receiver, Q1 ... Qn represent covert proxies
Each host Qi of a given covert proxy chain from the sender (S) to the receiver (R) chooses one network protocol to transfer data to the next host Qi+1 on the proxy chain. The intersection of the protocol sets on Qi and the next hop Qi+1 can be used for the communication between both hosts as described in [9]. In many cases, it is not necessary for a network covert channel to be fast, as only small amounts of data such as passwords may be transferred. On the other hand, it is important to keep the raised attention of a channel as small as possible. To achieve this goal, the number of network packets per transaction must be limited (e.g. fragmentation must be prevented). A solution for this problem is possible as follows: Let SPi be the intersection of the set P(Qi) of protocols available on host Qi and the set P(Qi+1) of protocols available on host Qi+1, i.e. SPi = P(Qi) ∩ P(Qi+1). The set Si represents the sizes spkt for all elements Pj of SPi. Now let smax(i) be the maximum over all elements of Si, i.e. smax(i) = max Si. To prevent fragmentation and to send as few packets as possible, host Qi sends network packets with the maximum data size smax(i) to host Qi+1. When Qi+1 receives the network packet from Qi that needs to get forwarded to Qi+2, it acts as follows (a sketch of this forwarding rule is given below):
– If smax(i) = smax(i+1), then forward the data.
– If the transaction ended (i.e. if there is no remaining data), then forward the data, too. Whether a transaction ends (or begins) can be determined by covert channel-internal protocols as presented in [10].
– Otherwise: Send as many complete data packets of size smax(i+1) as possible. To avoid bursts in case that smax(i+1) ≪ smax(i), one may use a leaky bucket approach to regulate packet frequency. If the data left to send is smaller than smax(i+1), then wait for a time t for further data to arrive from Qi and then send as many complete data packets as possible. Repeat this step until no data is left or no new data has arrived for time t.
Also here, a linear optimization might be useful that tries to balance the number of packets (by preferring protocols in Si with large spkt) and detectability (by using a multitude of protocols). Also, one might desire in some scenarios to combine this approach with the one from Sect. 5. If more than one covert channel software version is in use, the newest software version between two hosts must be defined before the intersecting set of all protocols is calculated, since each local set P depends on the software version, as described in Sect. 4. Such functionality can be implemented in a micro protocol.
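A compact Python sketch of this forwarding rule at hop Qi+1 (our own simplified illustration; the wait time t and the leaky bucket pacing are abstracted away, and send() stands for the hop's covert transmission routine):

def forward_hop(buffer, s_max_in, s_max_out, transaction_ended, send):
    # returns the data that stays buffered while waiting for more input from Q_i
    if s_max_in == s_max_out or transaction_ended:
        send(buffer)                      # rules 1 and 2: simply forward the data
        return b""
    while len(buffer) >= s_max_out:       # rule 3: emit only complete packets of the
        send(buffer[:s_max_out])          # outgoing size; a leaky bucket could pace
        buffer = buffer[s_max_out:]       # these sends if s_max_out is much smaller
    return buffer                         # remainder waits (up to time t) for more data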
8 Conclusions
We presented techniques for the dynamic implementation of network covert storage channels, i.e. it is possible to build a program that is able to dynamically load extension information for the utilization of new network protocol headers. Such dynamic covert channel implementations enable their users to act in mobile covert channel networks. Additionally, our concept enables implementers to continue the development of existing covert channel network infrastructure without dealing with the need to replace all distributed existing software versions. This paper also introduced calculation methods as well as a forwarding algorithm which are able to deal with different protocol types, micro protocols, and the utilization of covert proxy systems, while minimizing the channel's raised attention to keep a low profile. A drawback of the described techniques is that they – in comparison to existing implementations – result in more complex algorithms, since additional control capabilities must be implemented using micro protocols. A proof of concept implementation is under development but, since it is part of a larger project, unfinished at the moment. Yet previous work, such as [9, 11, 12], indicates that protocol switching using a (micro) protocol is feasible. Future work will include the design of a description structure for the dynamic extension of covert channel programs.
References 1. Castro, S.: cctt (covert channel testing tool) v0.1.8 (2003), http://gray-world.net/it/pr_cctt.shtml 2. Castro, S.: Covert Channel and tunneling over the HTTP protocol detection: GW implementation theoretical design (November 2003), http://www.gray-world.net 3. Born, K.: Browser-Based Covert Data Exfiltration. In: Proc. 9th Annual Security Conference, Las Vegas, NV, April 7–8 (2010) 4. Daemon9: LOKI2 (the implementation), Phrack Magazine, vol. 7(5) (September 1997), http://www.phrack.com/issues.html?issue=51&id=6&mode=txt 5. Lampson, B.W.: A Note on the Confinement Problem. Commun. ACM 16(10), 613–615 (1973) 6. Murdoch, S.J.: Covert channel vulnerabilities in anonymity systems, PhD thesis, University of Cambridge (Computer Laboratory) (2007) 7. Wonnemann, C., Accorsi, R., M¨ uller, G.: On Information Flow Forensics in Business Application Scenarios. In: Proc. IEEE COMPSAC Workshop on Security, Trust, and Privacy for Software Applications (2009) 8. Postel, J.: Internet Control Message Protocol, DARPA Internet Program Protocol Specification, RFC 793 (September 1983) 9. Yarochkin, F.V., Dai, S.-Y., Lin, C.-H., et al.: Towards Adaptive Covert Communication System. In: 14th IEEE Pacific Rim International Symposium on Dependable Computing (PRDC 2008), pp. 153–159 (2008) 10. Ray, B., Mishra, S.: A Protocol for Building Secure and Reliable Covert Channel. In: Sixth Annual Conference on Privacy, Security and Trust (PST), pp. 246–253 (2008)
11. Wendzel, S.: Protocol Hopping Covert Channel Tool v.0.1 (2007), http://packetstormsecurity.org/files/60882/phcct-0.1.tgz.html 12. Wendzel, S.: Protocol Hopping Covert Channels. Hakin9 Magazine 1/08, 20–21 (2008) (in German) 13. Bejtlich, R.: Analyzing Protocol Hopping Covert Channel Tool (November 2007), http://taosecurity.blogspot.com/2007/11/analyzing-protocolhopping-covert.html 14. Ahsan, K.: Covert Channel Analysis and Data Hiding in TCP/IP, M.Sc. thesis, University of Toronto (2002) 15. Rowland, C.H.: Covert Channels in the TCP/IP Protocol Suite, First Monday, vol. 2(5) (May 1997) 16. Scott, C.: Network Covert Channels: Review of Current State and Analysis of Viability of the use of X.509 Certificates for Covert Communications, Technical Report RHUL-MA-2008-11, Department of Mathematics, Roal Holloway, University of London (January 2008) 17. Hintz, D.: Covert Channels in TCP and IP Headers. Presentation Slides of the DEFCON 10 Conference (2002), http://www.thedarktangent.com/images/defcon-10/dc-10-presentations/ dc10-hintz-covert.pdf 18. Berk, V., Giani, A., Cybenko, G.: Detection of Covert Channel Encoding in Network Packet Delays, Technical Report TR536, Rev. 1, Dep. of Computer Science, Dartmouth College (November 2005) 19. Cabuk, S., Brodley, C.E., Shields, C.: IP Covert Timing Channels: Design and Detection. In: Proc. 11th ACM Conference on Computer and Communications Security (CCS 2004), pp. 178–187 (2004) 20. Fadlalla, Y.A.H.: Approaches to Resolving Covert Storage Channels in Multilevel Secure Systems, Ph.D. Thesis, University of New Brunswick (1996) 21. Fisk, G., Fisk, M., Papadopoulos, C., Neil, J.: Eliminating Steganography in Internet Traffic with Active Wardens. In: Petitcolas, F.A.P. (ed.) IH 2002. LNCS, vol. 2578, pp. 18–35. Springer, Heidelberg (2003) 22. Hu, W.-M.: Reducing Timing Channels with Fuzzy Time. In: 1991 Symposium on Security and Privacy, pp. 8–20. IEEE Computer Society, Los Alamitos (1991) 23. Kang, M.H., Moskowitz, I.S.: A Pump for Rapid, Reliable, Secure Communication. In: Proceedings of the 1st ACM Conference on Computer and Communication Security, pp. 119–129 (November 1993) 24. Kemmerer, R.A.: Shared resource matrix methodology: an approach to identifying storage and timing channels. ACM Transactions on Computer Systems (TOCS) 1(3), 256–277 (1983) 25. Kemmerer, R.A., Porras, P.A.: Covert Flow Trees: A Visual Approach to Analyzing Covert Storage Channels. IEEE Transactions on Software Engineering 17(II), 1166–1185 (1991) 26. Murdoch, S.J., Lewis, S.: Embedding covert channels into TCP/IP. In: Barni, M., Herrera-Joancomart´ı, J., Katzenbeisser, S., P´erez-Gonz´ alez, F. (eds.) IH 2005. LNCS, vol. 3727, pp. 247–261. Springer, Heidelberg (2005) 27. Fielding, R., Gettys, J., Mogul, J., et al.: Hypertext Transfer Protocol – HTTP/1.1, RFC 2616 (June 1999) 28. Zander, S., Armitage, G., Branch, P.: Covert Channels and Countermeasures in Computer Networks. IEEE Communications Magazine, 136–142 (December 2007) 29. Handley, M., Paxson, V., Kreibich, C.: Network Intrusion Detection: Evasion, Traffic Normalization, and End-to-End Protocol Semantics. In: Proc. 10th USENIX Security Symposium, vol. 10, pp. 115–131 (2001)
Cryptanalysis of a SIP Authentication Scheme Fuwen Liu and Hartmut Koenig Brandenburg University of Technology Cottbus, Department of Computer Science PF 10 33 44, 03013 Cottbus, Germany {lfw,Koenig}@informatik.tu-cottbus.de
Abstract. SIP (Session Initiation Protocol) is becoming the most widely deployed signaling protocol for VoIP (Voice over IP). Security is of utmost importance for its usage due to the open architecture of the Internet. Recently, Yoon et al. proposed a SIP authentication scheme based on elliptic curve cryptography (ECC) that claimed to provide higher security than other schemes. However, as demonstrated in this paper, it is still vulnerable to off-line dictionary and partition attacks. Keywords: SIP, VoIP, ECC, authentication, cryptanalysis.
1 Introduction

VoIP, which delivers voice and multimedia over the Internet, has gained large popularity nowadays. SIP is the dominant standard signaling protocol used for VoIP. It is defined by the IETF (Internet Engineering Task Force) for the establishment, handling, and release of multimedia sessions among participants over the Internet [1]. Because of its flexibility and extensibility, SIP has also been adopted by 3GPP (Third Generation Partnership Project) as the signaling protocol for multimedia applications in 3G mobile networks [2]. Thus, SIP has become a key technology to support multimedia communications spanning wired and wireless networks. Like any other services or protocols running in the hostile Internet environment, the SIP protocol is exposed to a wide range of security threats and attacks. Therefore, appropriate security measures have to be taken to protect SIP. The SIP protocol is based on a request/response communication model like HTTP. This implies that a SIP client initiates a request on a SIP server and then waits for a response from the server. Mutual authentication between the SIP client and server is required to ensure the communication partner's identity is legitimate. SIP applies HTTP digest authentication [3] by default to perform authentication. Unfortunately, it is a weak authentication scheme that provides only one-way authentication (i.e. the client is authenticated to the server). This makes server spoofing attacks possible. Moreover, the scheme is vulnerable to off-line dictionary attacks. Several SIP authentication schemes have been proposed in order to overcome the security weaknesses of the original SIP authentication scheme. Yang et al. developed
a secure SIP authentication scheme whose security relies on the difficulty of the discrete logarithm problem (DLP) [4]. However, this scheme is not well suited to wireless network settings, where the computational power of devices and the bandwidth of the wireless links are limited. Durlanik et al. [5] presented an efficient SIP authentication scheme using elliptic curve cryptography (ECC) technologies, whose security is based on the elliptic curve discrete logarithm problem (ECDLP). Durlanik's scheme requires less computing power and lower network bandwidth than Yang's scheme, since ECC can achieve the same security level as conventional cryptosystems by using significantly smaller keys. In 2009, Wu et al. [6] introduced another SIP authentication scheme based on ECC, and proved its security using the Canetti-Krawczyk (CK) security model [7]. The authors stated that the scheme is resilient to various known attacks, such as off-line dictionary attacks, man-in-the-middle attacks, and server spoofing attacks. However, Durlanik's and Wu's schemes have meanwhile been broken. Yoon et al. showed that both schemes are prone to off-line dictionary attacks, Denning-Sacco attacks, and stolen-verifier attacks [8]. They proposed another SIP authentication scheme that is supposed to withstand the aforementioned attacks. In this paper we reveal serious security vulnerabilities of Yoon's scheme, and demonstrate that it is actually insecure against off-line dictionary and partition attacks. The remainder of the paper is organized as follows. Section 2 forms the preliminary that gives a short overview of the original SIP authentication procedure and the basic knowledge of elliptic curve cryptography (ECC). Next, Yoon's scheme is briefly presented in Section 3. A detailed cryptanalysis of Yoon's scheme is performed in Section 4. Some final remarks conclude the paper.
2 Preliminaries This section first introduces the SIP authentication procedure. Then the elliptic curve cryptography (ECC) is briefly reviewed. 2.1 SIP Authentication Procedure SIP uses a challenge-response based mechanism for authentication that is identical to the HTTP digest authentication. Figure 1 illustrates the SIP authentication mechanism. Prerequisite for the SIP authentication is that password and username of each client in the domain have been configured securely in the SIP server in advance, i.e. the SIP server knows the password and the username of each client. Once a SIP server receives a request, it challenges the client with a nonce to determine the client’s identity. The client acknowledges the SIP server with a response which is computed by using his/her password and the nonce. The server validates the legitimacy of the client by verifying the received response. The SIP authentication procedure is detailed as follows.
Fig. 1. SIP Authentication Procedure
1. Client → Server: REQUEST
The client invokes a REQUEST (e.g. SIP REGISTER message) and sends it to the server (e.g. SIP Registrar).
2. Server → Client: CHALLENGE (nonce, realm)
The CHALLENGE generated by the server contains a nonce and realm. The former offers replay protection, since it is fresh for each session. The latter is the host or domain name of the server, and used to remind the client which username and password should be applied.
3. Client → Server: RESPONSE (nonce, realm, username, response)
The client acknowledges the server with the message RESPONSE containing nonce, realm, username, and response. The response is computed as follows: response = H(nonce, realm, username, password), where H(·) is a one-way hash function.
4. After receiving the RESPONSE message, the server retrieves the password of the client by referring to the username. The server computes H(nonce, realm, username, password) and compares it with the received response. If they match, the client is authenticated.
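The response computation and the resulting off-line guessing loop can be sketched as follows in Python (our own illustration; the ':'-joined MD5 input follows the simplified notation H(nonce, realm, username, password) used above and is an assumption, whereas the real HTTP digest of RFC 2617 [3] uses a nested MD5 construction over A1/A2):

import hashlib

def response(nonce, realm, username, password):
    # simplified digest as written above, not the exact RFC 2617 algorithm
    return hashlib.md5(":".join([nonce, realm, username, password]).encode()).hexdigest()

def dictionary_attack(nonce, realm, username, sniffed_response, dictionary):
    # an eavesdropper holding (nonce, realm, username, response) tests candidates off-line
    for pw in dictionary:
        if response(nonce, realm, username, pw) == sniffed_response:
            return pw
    return None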
In the SIP authentication scheme the password is never sent across the network in clear text. So an attacker cannot obtain the password directly from the captured messages. But the scheme is insecure against off-line dictionary attacks, where an attacker searches for a password matching the recorded message from the password dictionary. This kind of attack is difficult to detect and to foil because the adversary only needs to eavesdrop on the protocol messages. With a network sniffer tool, such as Wireshark [9], an attacker may easily capture the message RESPONSE (nonce, realm, username, response), when the protocol is running over the Internet. He/she can randomly select a password pw′ from the password dictionary and calculate the hash function H(nonce, realm, username, pw′). If the hash value does not match the
response, the attacker can select another password and repeat the procedure till the correct password is found. This attack can be completed in a reasonable time because a human-chosen password has low entropy. For an 8-character password, its entropy is less than 30 bits (i.e. 2³⁰ possible passwords) [10]. In the SIP authentication scheme the client is authenticated to the server, whereas the client does not validate the identity of the server. Thus, the scheme is prone to server spoofing attacks. An attacker can forge the identity of the server and send a CHALLENGE to the client. The honest client always acknowledges the server with a correct message RESPONSE when receiving the message CHALLENGE. After receiving the RESPONSE message, the attacker can launch an off-line dictionary attack as stated above to crack the password of the client.

2.2 Elliptic Curve Cryptography

Elliptic curve cryptography (ECC) [11] uses elliptic curves over a finite field Fp to construct the cryptosystem, where p is a large prime that is usually larger than 224 bits considering security requirements. Let E be an elliptic curve over Fp, so E can be expressed using the Weierstrass equation as follows:

y² = x³ + ax + b (mod p)    (1)

where a, b ∈ Fp and 4a³ + 27b² ≠ 0 (mod p). A pair (x, y), where x, y ∈ Fp, is a point on the curve if (x, y) satisfies equation (1). The point at infinity, denoted by O, is on the curve. The set of points in E over the finite field Fp, denoted as E(Fp), can be expressed as follows:

E(Fp) = {(x, y) ∈ Fp²: y² = x³ + ax + b} ∪ {O}    (2)
The number of points in E(Fp) is defined as the order of the curve E over Fp. It is denoted by #E(Fp), which can be estimated by the following formula using Hasse's theorem:

p + 1 − 2√p ≤ #E(Fp) ≤ p + 1 + 2√p    (3)
Key Generation in ECC
The elliptic curve domain parameters (p,a,b,P,n,h) are public, where p is a large prime number, a and b specify an elliptic curve over Fp, P is a base point in E(Fp) with prime order n, and h is the cofactor, which is equivalent to #E(Fp)/n. A private key is an integer k which is randomly selected from the interval [1, n-1], and the corresponding public key is Q = kP. Given the domain parameters and Q, it is hard to determine k. This is called the elliptic curve discrete logarithm problem (ECDLP). ECC systems usually employ the standardized domain parameters which can be found in [13].
Encoding Elliptic Curve Points
The public key Q = kP is a point on the curve E which satisfies equation (1). The point Q can be represented by a pair of field elements (xQ, yQ), where xQ is the x-coordinate and yQ is the y-coordinate. To reduce the bandwidth required for transmission, point Q can be encoded in a compressed format which is composed of the x-coordinate xQ and an additional bit β to uniquely determine the y-coordinate yQ,
i.e. (xQ, β), where β = 0 or 1. Therefore, one x-coordinate xQ corresponds to exactly two y-coordinates yQ when the compressed point format is used. The number of xQ, denoted as #xQ, can be determined by the following equation, again using Hasse's theorem:

(p+1)/2 − √p ≤ #xQ ≤ (p+1)/2 + √p    (4)
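The test that both attacks in Section 4 rely on, namely whether a candidate x-coordinate (or a full point) fits equation (1), can be sketched as follows (our own illustration; the tiny curve over F17 is a toy example, not one of the standardized parameter sets of [13]):

def is_valid_x(x, a, b, p):
    # Euler's criterion: alpha is a square mod the odd prime p iff alpha^((p-1)/2) mod p is 0 or 1
    alpha = (x * x * x + a * x + b) % p
    return pow(alpha, (p - 1) // 2, p) in (0, 1)

def is_on_curve(x, y, a, b, p):
    return (y * y - (x * x * x + a * x + b)) % p == 0

# toy curve y^2 = x^3 + 2x + 2 over F_17; (5, 1) lies on it
print(is_valid_x(5, 2, 2, 17), is_on_curve(5, 1, 2, 2, 17))   # True True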
3 Yoon's Authentication Scheme

This section briefly reviews Yoon's scheme used for SIP authentication. It consists of three phases: system setup phase, registration phase, and authentication phase.
A) System Setup Phase
In the setup phase, client and server determine the common elliptic curve domain parameters (p,a,b,P,n,h) to be used for the system.
B) Registration Phase
Client and server execute the following steps over a secure channel to complete the registration procedure.
1. Client → Server: username, H(pw)
The client hashes the password pw. The result H(pw) together with the username are sent to the server over a secure channel.
2. The server computes V = H(pw) ⊕ H(username,se), where se is a secret key of the server. Then username and V are stored in the verification database table. Here, the purpose of V is to prevent stolen-verifier attacks.
C) Authentication Phase
Fig. 2 illustrates Yoon's SIP authentication scheme. It proceeds as follows.
1. Client → Server: REQUEST (username, cP⊕H(pw))
The client chooses a random integer c from the interval [1, n-1], computes the public key cP, and encrypts it with the hash value of the password by using the bit-wise exclusive-or (XOR) operation ⊕. The result cP⊕H(pw) and username are sent in the message REQUEST to the server.
2. Server → Client: CHALLENGE (realm, sP, H(username,sk))
After receiving the REQUEST message the server derives the public key cP of the client by computing cP⊕H(pw)⊕H(pw). Then it generates a random integer s ∈ [1, n-1], and computes a common secret session key sk = scP and a message authentication code H(username,sk). Finally the server responds to the client with the CHALLENGE message (realm, sP, H(username,sk)).
3. Client → Server: RESPONSE (username, realm, H(username,realm,sk))
After receiving the CHALLENGE message the client computes the secret session key sk = scP. Thereafter it calculates the hash value H(username,sk) and verifies it with the received one. If both values are unequal, the client aborts
the protocol. Otherwise, the server is authenticated to the client. The client computes the message authentication code H(username,realm,sk), and sends it in the RESPONSE message to the server.
4. Server: Client authentication
When receiving the RESPONSE message the server computes the message authentication code H(username,realm,sk) and verifies whether it is equal to the received one. If they are not equal, the server rejects the request. Otherwise, the client is authenticated to the server. The server accepts his/her request.
Fig. 2. Yoon’s SIP authentication scheme
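The session key agreement sk = s(cP) = c(sP) that the scheme builds on can be illustrated on a toy curve (our own sketch; the curve y² = x³ + 2x + 2 over F17, the base point and the secrets are illustrative values only, far too small for real use; the code needs Python 3.8+ for the modular inverse):

A_COEF, P_MOD = 2, 17                     # curve y^2 = x^3 + 2x + 2 over F_17
O = None                                  # point at infinity

def add(p1, p2):
    if p1 is O: return p2
    if p2 is O: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P_MOD == 0:
        return O
    if p1 == p2:
        m = (3 * x1 * x1 + A_COEF) * pow(2 * y1, -1, P_MOD) % P_MOD
    else:
        m = (y2 - y1) * pow(x2 - x1, -1, P_MOD) % P_MOD
    x3 = (m * m - x1 - x2) % P_MOD
    return (x3, (m * (x1 - x3) - y1) % P_MOD)

def mult(k, point):                       # double-and-add scalar multiplication
    result = O
    while k:
        if k & 1:
            result = add(result, point)
        point = add(point, point)
        k >>= 1
    return result

P, c, s = (5, 1), 7, 11                   # base point and example secrets
print(mult(s, mult(c, P)) == mult(c, mult(s, P)))   # True: both sides derive the same sk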
Yoon's scheme provides protection against server spoofing attacks due to the mutual authentication between client and server. The authors claim that the proposed scheme is secure against a variety of attacks, including replay attacks, off-line dictionary attacks, man-in-the-middle attacks, modification attacks, Denning-Sacco attacks, and stolen-verifier attacks [8]. It should be noted that the prevention of off-line dictionary attacks is the central security requirement for a password-based authentication scheme. If the scheme is vulnerable to off-line dictionary attacks, this implies that the password of a client is disclosed to attackers and the whole scheme is broken.
4 Security Vulnerabilities of Yoon's Scheme

This section shows that an adversary can launch off-line dictionary attacks or partition attacks on Yoon's scheme by using the captured REQUEST message. The partition
attack [12] is a special variant of the off-line dictionary attack in which an attacker can partition the password space into a valid and an invalid part by analyzing the eavesdropped protocol messages. Yoon's scheme is vulnerable to off-line dictionary attacks when the public key cP is encoded in an uncompressed format and to partition attacks when the public key cP is encoded in a compressed format. We analyze them separately in the following.

4.1 Off-Line Dictionary Attacks

It is trivial for an attacker to capture the REQUEST message which contains the username of the client and the masked public key cP⊕H(pw). We assume that the public key cP is encoded in an uncompressed format (x,y). Accordingly, the masked public key is represented by the format (x⊕H(pw), y⊕H(pw)). An attacker can perform off-line dictionary attacks as follows.
1. The attacker chooses a candidate password pwi from the password dictionary D and computes H(pwi). The password space of the dictionary D is expressed as |D|.
2. The attacker computes xi = x⊕H(pw)⊕H(pwi) and yi = y⊕H(pw)⊕H(pwi) in order to get a guessed point (xi,yi).
3. The attacker examines whether the point (xi,yi) is on the elliptic curve E. In other words: he/she checks whether the pair (xi,yi) satisfies equation (1) as described in Section 2.2. If the point (xi,yi) is not on the elliptic curve E, the attacker can eliminate the guessed password pwi from the dictionary D and return to step 1. He/she repeats the above steps until a point on curve E is found. Finally, the attacker outputs the correct password pw.
Note that the attacker can always verify whether a guessed public key is on the curve because the elliptic curve domain parameters (p,a,b,P,n,h) are publicly available due to their standardization. The program to perform an off-line dictionary attack can be described as follows.

Program off-line dictionary attack
Input: D, (x⊕H(pw), y⊕H(pw));
Output: pw;
begin
1:  for i=0 to |D|
2:  {
3:    pwi ← D;   // select password pwi from D
4:    Compute H(pwi);
5:    xi = x⊕H(pw)⊕H(pwi);
6:    yi = y⊕H(pw)⊕H(pwi);
7:    if (yi² != xi³ + a·xi + b) then goto 3;
8:    else output pw = pwi;
9:  }
end.
An attacker can identify the correct password pw in a reasonable time by running the program because the password space |D| is usually less than 2³⁰ as stated in [10].

4.2 Partition Attacks

Alternatively, the public key cP can be encoded in a compressed format (x,β), where β is 0 or 1. Accordingly, the masked public key cP⊕H(pw) is represented by (x⊕H(pw),β). Note that the masked public key cannot be interpreted as (x, β⊕H(pw)) because β is just one bit long. An adversary can launch a partition attack as follows:
1. Choose a candidate password pwi from the password dictionary D and compute H(pwi).
2. Compute xi = x⊕H(pw)⊕H(pwi) and α = xi³ + a·xi + b in order to check whether xi is a valid x-coordinate on the curve E in the next step.
3. Check whether α is a square in Fp. If so, xi is a valid x-coordinate on the curve E, and pwi is put into the valid password set VP. Otherwise, pwi is put into the invalid password set UVP.
4. Return to step 1.
The program to run a partition attack can be described as follows.

Program partition attack
Input: D, (x⊕H(pw), β);
Output: VP, UVP;
begin
1:  for i=0 to |D|
2:  {
3:    pwi ← D;   // select password pwi from D
4:    compute H(pwi);
5:    xi = x⊕H(pw)⊕H(pwi);
6:    α = xi³ + a·xi + b;
7:    if (α is a square) VP ← pwi;
8:    else UVP ← pwi;
9:  }
end.

As stated in equation (4) of Section 2.2, the number of xi is in the interval [(p+1)/2 − √p, (p+1)/2 + √p]. So the probability that xi is a valid x-coordinate on the curve E over the finite field Fp is in the range [1/2 − O(1/√p), 1/2 + O(1/√p)] because the total number of points in E(Fp) is nearly p. As a result, the adversary can reduce the possible password space by roughly half by running the above program. This means |VP| ≈ ½|D|, where |·| denotes the password space. After capturing log₂|D| SIP communication sessions and executing the above program, the adversary can recover the correct password. Such an attack is feasible in practice. For example, it needs only 30 SIP communication sessions to crack the password for an 8-character password whose password space is |D| ≈ 2³⁰.
4.3 Lessons Learned

The aforementioned cryptanalysis demonstrates that Yoon's authentication scheme is insecure no matter how the public key is encoded (uncompressed or compressed format). The main reason is that the ECC public key is encrypted directly with the hash value of the password. This inherently provides an attacker with the chance to rule invalid passwords out. He/she can use a guessed password to decrypt the masked public key, and check whether the decrypted result satisfies the elliptic curve equation. Thus, the public key cannot be directly encrypted with the password in a secure password-based authentication scheme in the ECC domain. There is a need to revise Yoon's scheme to eliminate the security vulnerabilities. Certainly this means developing a new SIP authentication scheme. Designing a password-based authentication protocol is a challenging task since such protocols are easily vulnerable to off-line dictionary attacks. After many years of research, IEEE has standardized several password-based authentication protocols [14]. Although they cannot be directly applied to SIP authentication, their authentication framework can be adapted to SIP authentication.
5 Final Remarks

User authentication is an essential function in the SIP framework. Several schemes have been proposed to overcome the shortcomings of the original SIP authentication scheme. Yoon's authentication scheme is the most recent one and claims to be secure against various attacks. In this paper, we have demonstrated that Yoon's scheme is vulnerable to off-line dictionary and partition attacks. An attacker can recover the correct password in a reasonable time. Further research efforts are needed for developing a secure and efficient SIP authentication scheme.
References 1. Rosenberg, J., Schulzrinne, H., Camarillo, G., Johnston, A., Peterson, J., Sparks, R., Handley, M., Schooler, E.: SIP - Session Initiation Protocol. IETF RFC3261 (June 2002) 2. Garcia-Martin, M., Henrikson, E., Mills, D.: Private Header (P-Header) Extensions to the Session Initiation Protocol (SIP) for the 3rd Generation Partnership Project (3GPP), IETF RFC3455 (2003) 3. Franks, J., Hallam-Baker, P., Hostetler, J., Lawrence, S., Leach, P., Luotonen, A., Stewart, L.: HTTP Authentication: Basic and Digest Access Authentication. IETF RFC 2617 (June 1999) 4. Yang, C.C., Wang, R.C., Liu, W.T.: Secure Authentication Scheme for Session Initiation Protocol. Computers and Security 24, 381–386 (2005) 5. Durlanik, A., Sogukpinar, I.: SIP Authentication Scheme Using ECDH. World Informatika Society Transaction on Engineering Computing and Technology 8, 350–353 (2005) 6. Wu, L., Zhang, Y., Wang, F.: A New Provably Secure Authentication and Key Agreement Protocol for SIP Using ECC. Computer Standards and Interfaces 31(2), 286–291 (2009)
7. Canetti, R., Krawczyk, H.: Analysis of Key-Exchange Protocols and Their Use for Building Secure Channels. In: Pfitzmann, B. (ed.) EUROCRYPT 2001. LNCS, vol. 2045, pp. 453–474. Springer, Heidelberg (2001) 8. Yoon, E.J., Yoo, K.Y., Kim, C., Hong, Y.S., Jo, M., Chen, H.H.: A Secure and Efficient SIP Authentication Scheme for Converged VOIP Networks. Computer Communications 33, 1674–1681 (2010) 9. Wireshark, http://www.wireshark.org/ 10. Burr, W.E., Dodson, D.F., Polk, W.T.: Electronic Authentication Guideline. NIST Special Publication 800-63 (April 2006) 11. Hankerson, D., Menezes, A., Vanstone, S.: Guide to Elliptic Curve Cryptography. Springer, Heidelberg (2003) 12. Bellovin, S., Merritt, M.: Encrypted Key Exchange: Password-based Protocols Secure against Dictionary Attacks. In: Proceedings of the IEEE Symposium on Research in Security and Privacy (May 1992) 13. Certicom Research: SEC 2: Recommended Elliptic Curve Domain Parameters, http://www.secg.org/collateral/sec2_final.pdf 14. IEEE P1363.2: Password-Based Public-Key Cryptography (September 2008)
Part II
Mapping between Classical Risk Management and Game Theoretical Approaches Lisa Rajbhandari and Einar Arthur Snekkenes Norwegian Information Security Lab, Gjøvik University College, Norway {lisa.rajbhandari,einar.snekkenes}@hig.no
Abstract. In a typical classical risk assessment approach, the probabilities are usually guessed and not much guidance is provided on how to get the probabilities right. When coming up with probabilities, people are generally not well calibrated. History may not always be a very good teacher. Hence, in this paper, we explain how game theory can be integrated into classical risk management. Game theory puts emphasis on collecting representative data on how stakeholders assess the values of the outcomes of incidents rather than collecting the likelihood or probability of incident scenarios for future events that may not be stochastic. We describe how it can be mapped and utilized for risk management by relating a game theoretically inspired risk management process to ISO/IEC 27005. This shows how all the steps of classical risk management can be mapped to steps in the game theoretical model; however, some of the game theoretical steps at best have a very limited existence in ISO/IEC 27005. Keywords: Game theory, Risk management, Equilibrium, Strategies.
1 Introduction
There are many classical risk management approaches and standards [2], [19] like NIST 800-30 [17], RiskIT [7], ISO/IEC 27005 [8] and CORAS [12]. For this paper, we consider the ISO/IEC 27005 [8] standard as it provides a clear description of the stages and terminologies of the risk management process. In a typical classical risk assessment approach, the probabilities are usually guessed and not much guidance is provided on how to get the probabilities right. When coming up with probabilities, people are generally not well calibrated. Besides, history may not always be a very good teacher. The hypothesis of the paper is: 'Gathering representative probabilities for future events that may not be stochastic, is difficult. We claim it is a lot easier to obtain representative data on how stakeholders assess the values of the outcomes of events/incidents.' In a game theoretic approach, probabilities are obtained from the actual computation and analysis. Moreover, the strategy (mitigation measure to reduce risk) can be determined with respect to the opponent's strategy. When the risks are estimated more accurately, the effectiveness of the overall risk management approach increases.
The main contribution of this paper is to show that game theory can be integrated into classical risk management. For this, we provide a clear structure of both the classical risk management and game theoretical approaches. The intention is to enable the readers to have a better understanding of both methods. We then describe how it can be mapped by relating a game theoretically inspired risk management process to ISO/IEC 27005. This shows how all the steps of ISO/IEC 27005 can be mapped to the steps in the game theoretical model, although some of the game theoretical steps at best have a very limited existence in ISO/IEC 27005. The remainder of this paper is structured as follows. In Sect. 2, we present the state of the art and a summary of contributions. In Sect. 3, we first compare the top level perspectives of classical risk management and game theory. We then provide a more detailed mapping between the two approaches, identifying issues where a correspondence is missing. In Sect. 4, we discuss our findings. Conclusion and future work are given in Sect. 5.
2 State of the Art
The classical risk management approaches take the perspective of the single player (individual, system, etc.) for which the risk analysis is being carried out. For example, in Probabilistic Risk Analysis (PRA), people and their actions and reactions are not given much importance [6]. Thus, Hausken [6] puts forward a way of merging PRA and game theory, taking into account that, in risk assessment, the actions of the people affect each other. In addition, most of the classical risk assessment approaches are inclined to be rather subjective as the values of the probabilities of threats are either assumed or based on historical data. Taleb [18] has provided examples of Black Swan incidents that cannot be predicted accurately based on historical data. In game theory, the incentives of the players are taken into consideration, which is important in understanding the underlying motives for their actions. Liu and Zang [11] put forward the incentive-based modeling approach in order to understand attacker intent, objectives and strategies. Anderson and Moore [1] also state the importance of incentives, as misaligned or bad incentives usually cause security failure. Game theory helps to explore the behavior of real-world adversaries [14]. Cox has stated that, by using game theory, adversarial risk analysis can be improved [5], as the actions of the attacker, which were regarded as random variables and judged from the defender's perspective, can be computed. QuERIES, a quantitative cybersecurity risk assessment approach, uses game theory for constructing and evaluating the attack/protect model [3], [4]. While there are many papers discussing the use of game theory for specific application areas [4], [10], [16], we are aware of no works that integrate a risk management framework such as ISO/IEC 27005 and game theory.
3 Mapping between ISO/IEC 27005 and Game Theoretic Approach
In this section, we first compare the top level perspectives of classical risk management and game theory. We then provide a more detailed mapping between the two approaches, identifying issues where a correspondence is missing.

3.1 A Top Level Comparison
As stated above, there are many classical risk management approaches. To apply these approaches, a clear understanding of the terminology and the overall process flow is necessary. We consider the risk management steps of the ISO/IEC 27005 [8] standard which is depicted in Fig. 1 (a). These steps can be iterated until the results are satisfactory. The input and output for each of these steps are given in ISO/IEC 27005 [8].

Fig. 1. (a) Information Security Risk Management Process (taken from [8]) (b) Game Theoretical Steps
Game theory helps us to understand how the strategic interactions and interdependence among the rational players influence the outcomes they gain [20], [15]. The steps that we have identified are given in Fig. 1 (b). For each of the steps, we provide a short description. In addition, Fig. 2 depicts the input and output for each of the game theoretical steps.
1. The scope of interest is defined and the assets that need to be protected are identified by investigating the scenario.
Fig. 2. Input and Output for Game Theoretical Steps
2. Players whose actions affect each other are identified. The players are inherently good or bad, and who is 'good' or 'bad' depends on the perspective of the risk analyst. If the players show seemingly irrational behavior, this can be explained by at least two alternatives: (1) given the analyst's (or objective) valuation of utility, it is the players' (irrational) reasoning that explains the irrational behavior; (2) the players have a different notion of utility than the risk analyst, but this notion of utility is (partially) unknown to the analyst. For the purpose of this paper, we choose the second alternative.
3. Once the players are identified, for each player we need to determine:
3.1 Information they have when they make a decision.
3.2 Strategies or options related to the actions of the players to overcome the threats or to gain opportunities.
3.3 Preferences of the players, which can be obtained by asking how they value the outcomes, as the choice of each option results in an outcome. It is conceivable that players value multiple orthogonal aspects of outcome (e.g. cash, trust, reputation and legal compliance). Thus, in many cases, it may be desirable to model outcomes as vectors.
3.4 Scale and weight should be defined so that the various outcomes can be compared. We can then rank the order of the preferences.
3.5 These preferences can then be represented by numbers which are known as payoffs/utilities. Higher payoffs represent more preferred outcomes. The values are assigned considering the players' motivation, capabilities
(e.g. resources to implement or defend the attack) and experiences. The players in general have the incentive to maximize their payoff.
4. The scenario can then be formulated in the normal (strategic) form.
5. The optimum strategies for each player can be identified. The combination of optimum or best strategies chosen by the players is the equilibrium, and this specifies the outcome of the game to the players [13].
The process is repeated as the players' options and their outcome valuation may change. Moreover, in the long run, the entire process should be repeated for effective risk management.
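As a minimal illustration of steps 4 and 5, the following Python sketch computes the mixed-strategy equilibrium of a 2×2 defender-attacker game (the payoff numbers are our own toy values, not taken from the paper, and the closed form assumes that a fully mixed equilibrium exists):

def mixed_equilibrium_2x2(A, B):
    # A[i][j], B[i][j]: payoffs of the row player (defender) and column player (attacker)
    # the defender mixes so that the attacker is indifferent between her two columns
    p = (B[1][1] - B[1][0]) / (B[0][0] - B[0][1] - B[1][0] + B[1][1])
    # the attacker mixes so that the defender is indifferent between his two rows
    q = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])
    return p, q          # probabilities of playing the first row / first column

A = [[ 3, -1], [-2, 0]]  # defender: monitor / do not monitor (toy values)
B = [[-3,  1], [ 2, 0]]  # attacker: attack / do not attack (toy values)
print(mixed_equilibrium_2x2(A, B))   # (0.333..., 0.166...): monitor 1/3, attack 1/6 of the time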
3.2 Mapping Individual Steps
Table 1 shows the result of the mapping between the risk management process of the ISO/IEC 27005 standard and the game theoretical steps. For each process of the ISO/IEC 27005 standard, the corresponding game theoretical steps are stated. The comparison is solely based on what is provided (process steps and terminologies) in the ISO/IEC 27005 standard. Both approaches are iterated until the result of the assessment is satisfactory. The mapping shows that all the steps of ISO/IEC 27005 can be mapped to game theory. On the other hand, we have identified that some of the game theoretical steps, like information gained, beliefs and incentives of the opposing players, and optimization of the strategies by the players, are not included in ISO/IEC 27005.
4 Discussion
In classical risk management, risk is calculated as a ‘combination of the likelihood of an event and its consequence’ [9]. The limitations of this approach are: (1) Probability is difficult to assess as the underlying process may not be stochastic. Even if the process is stochastic, the lack of historical data makes the parameters of the distribution difficult to estimate. Moreover, using historical data is inappropriate and rather subjective in some situations, for example when estimating the risk of a terrorist attack, war or extreme events (Black Swan events). (2) Probability also depends largely on the risk analyst’s perception or expert elicitation. People are generally not well calibrated. Thus, it is subjective in most cases. (3) The beliefs and incentives of the opponent are not considered. These limitations might result in inappropriate choices and decisions, which can be overcome by using game theory. The benefits of using game theory for risk management are: (1) The quality of the collected data is likely to be better as no actuarial data is needed. The approach focuses on incentives, capabilities and experiences of the players rather than asking an expert for historically based probabilities. (2) Expert judgment on collected data can be audited, as we can determine and investigate how the players assess the values of the outcomes, what information is available to them, and whether they
Table 1. Mapping between ISO/IEC 27005 and Game Theoretic Approach

ISO/IEC 27005 Process/Terminology | Game Theoretic Step/Terminology
Context establishment: Setting the basic criteria; Defining the scope & boundaries; Organization for information security risk management (ISRM) | Scenario investigation (scope definition & asset identification); Player identification (good & bad guys)
Risk identification: Identification of assets | Included in scenario investigation
Risk identification: Identification of threats | Determine the strategies for the bad guys
Risk identification: Identification of existing controls | Identify implemented controls, i.e. the ‘do nothing’ option for the good guys
Risk identification: Identification of vulnerabilities | Options that can be exploited by threats; included while determining the strategies for the bad guys
Risk identification: Identification of consequences | Identify how the players value multiple orthogonal aspects of outcomes; identify the preferences
Risk estimation: Assessment of consequences | Define scale & weight for comparing outcomes, & ranking preferences; represent by payoff/utility (assign values in each cell of the matrix)
Risk estimation: Assessment of incident likelihood | Computed probabilities for each of the strategies of both players
Risk estimation: Level of risk estimation (list of risks with value levels assigned) | Expected outcome for each strategy of the bad guy is the risk for the good guy & vice versa
Risk evaluation: List of risks prioritized | Prioritize the expected outcome for both players
Risk treatment: Risk treatment options (risk reduction, retention, avoidance & transfer) | Strategies (control measures) for the good guys; can be categorized into different options based on the computed probabilities
Risk treatment: Residual risks | Expected outcome of the game
Risk acceptance: List of accepted risks based on the organization criteria | Strategies of the good guy (based on the organization criteria)
Risk communication: Continual understanding of the organization’s ISRM process & results | Strategies of the good guy
Risk monitoring & review: Monitoring & review of risk factors; risk management monitoring, reviewing & improving | Process is repeated as the players’ options and their outcome valuation may change
Not included | Information gained by the opponent
Not included | Beliefs & incentives of the opponent
Not included | Optimization of the strategies
are utility optimizing or not, taking into account the strategies of the opponent. (3) Probabilities are obtained from actual computation and analysis. However, some of the limitations of this approach are the players’ limited knowledge about their own outcome(s) and the outcomes of others, and strategic uncertainty. ISO/IEC 27005 takes the perspective of the organization for which the risk assessment is being carried out; thus, the information gained, beliefs and incentives of the adversaries and the optimization of the strategies by the players are not included. Game theory is compatible with classical risk management and can be integrated into ISO/IEC 27005. This integration will provide the risk analyst with additional guidance on what issues to address in his analysis and how more auditable probability estimates can be obtained. It also shows that a game-theoretic framework can be used for the entire risk management process and not just for risk analysis.
5 Conclusion and Future Work
A clear structure for both the classical risk management and game theoretical approaches has been presented. The mapping shows that a game theoretically inspired risk management process can be integrated into ISO/IEC 27005. With game theory, we can obtain representative data on how stakeholders assess the value of outcomes of incidents rather than collecting the probability of incident scenarios for future events that may not be stochastic. Moreover, game theory is a rigorous method for computing probability, and the risk analyst can obtain additional guidance on how more auditable probability estimates can be obtained. However, some steps of game theory are not included in the current version of ISO/IEC 27005. For future work, the above approach will be explored with a comprehensive case study and extended to the iterative aspect of risk management. Moreover, we will investigate the feasibility of adopting our ideas in the context of ISO 31000.
Acknowledgment. The work reported in this paper is part of the PETweb II project sponsored by The Research Council of Norway under grant 193030/S10. We would also like to thank the anonymous reviewers for their valuable comments and suggestions.
References
[1] Anderson, R., Moore, T.: Information security economics – and beyond. In: Menezes, A. (ed.) CRYPTO 2007. LNCS, vol. 4622, pp. 68–91. Springer, Heidelberg (2007)
[2] Campbell, P.L., Stamp, J.E.: A Classification Scheme for Risk Assessment Methods. Sandia National Laboratories, Sandia Report (August 2004)
[3] Carin, L., Cybenko, G., Hughes, J.: Quantitative Evaluation of Risk for Investment Efficient Strategies in Cybersecurity: The QuERIES Methodology. Approved for Public Release: AFRL/WS-07-2145 (September 2007)
[4] Carin, L., Cybenko, G., Hughes, J.: Cybersecurity Strategies: The QuERIES Methodology. Computer 41, 20–26 (2008)
[5] Cox Jr., L.A.: Game Theory and Risk Analysis. Risk Analysis 29, 1062–1068 (2009)
[6] Hausken, K.: Probabilistic Risk Analysis and Game Theory. Risk Analysis 22(1) (2002)
[7] ISACA: The Risk IT Framework (2009), http://www.isaca.org
[8] ISO/IEC 27005: Information technology - Security techniques - Information security risk management. International Organization for Standardization, 1st edn. (2008)
[9] ISO/IEC Guide 73: Risk management - Vocabulary - Guidelines for use in standards (2002)
[10] Jormakka, J., Mölsä, J.V.E.: Modelling Information Warfare as a Game. Journal of Information Warfare 4(2), 12–25 (2005)
[11] Liu, P., Zang, W.: Incentive-based modeling and inference of attacker intent, objectives, and strategies. In: Proceedings of the 10th ACM Conference on Computer and Communications Security, CCS 2003, pp. 179–189. ACM, New York (2003)
[12] Lund, M.S., Solhaug, B., Stølen, K.: A Guided Tour of the CORAS Method. In: Model-Driven Risk Analysis, pp. 23–43. Springer, Heidelberg (2011)
[13] Rasmusen, E.: Games and information: An introduction to game theory, 4th edn. Blackwell Publishers, Malden (2006)
[14] Fricker Jr., R.D.: Game theory in an age of terrorism: How can statisticians contribute. Springer, Heidelberg (2006)
[15] Ross, D.: Game theory. The Stanford Encyclopedia of Philosophy (2010), http://plato.stanford.edu/archives/fall2010/entries/game-theory/
[16] Roy, S., Ellis, C., Shiva, S., Dasgupta, D., Shandilya, V., Wu, Q.: A Survey of Game Theory as Applied to Network Security. In: 43rd Hawaii International Conference on System Sciences (HICSS), pp. 1–10 (January 2010)
[17] Stoneburner, G., Goguen, A., Feringa, A.: Risk Management Guide for Information Technology Systems. NIST Special Publication 800-30 (July 2002)
[18] Taleb, N.N.: The Black Swan: The Impact of the Highly Improbable. Random House Trade Paperbacks, 2nd edn. (May 2010)
[19] Vorster, A., Labuschagne, L.: A framework for comparing different information security risk analysis methodologies. In: Proceedings of the 2005 Annual Research Conference of the South African Institute of Computer Scientists and Information Technologists on IT Research in Developing Countries, SAICSIT 2005, pp. 95–103. South African Institute for Computer Scientists and Information Technologists (2005)
[20] Watson, J.: Strategy: An Introduction to Game Theory, 2nd edn. W. W. Norton & Company, New York (2008)
Digital Signatures: How Close Is Europe to Truly Interoperable Solutions?
Konstantinos Rantos
Kavala Institute of Technology, Kavala GR-65404, Greece
[email protected]
Abstract. Digital signatures have been a hot topic in the e-government era as a key enabler for e-services provided to businesses and citizens, and for the secure exchange of e-documents. When this exchange crosses the borders of closed systems or of the EU’s Member States, several interoperability issues arise. In the EU, many schemes and solutions have been proposed to overcome some of these problems, yet there is still more to be done. This paper provides a survey of the actions taken to promote interoperable use of digital signatures and identifies areas where the EU has to invest in order to achieve the desired level of interoperability.
Keywords: Digital Signatures, interoperability, e-documents, standardisation, eGovernment services.
1 Introduction
The penetration of digital signatures across the EU has been noticeable over the last few years, with more and more digital services adopting them to secure data authentication, integrity and non-repudiation. This flourishing period came after a long “recession” during which public key cryptography and its derivatives, such as digital signatures, had been struggling to survive as a technological solution for securing e-services. European initiatives, together with the work programmes IDABC [15] and ISA [2] and the corresponding action plans, have been the turning point that triggered the EU’s Member States (MSs) to consider them as one of the key infrastructures in their eGovernment action plans. Nowadays, most of the countries across the EU have already deployed digital signature schemes for their two main application fields:
– entity authentication, i.e. electronic identity (eID) schemes, and
– data authentication and integrity, i.e. digitally signed documents.
Digital signatures have proven to be the robust solution for secure exchange of e-documents in the context of (cross-border) e-service provision. Although currently provided mainly by governments, they are anticipated to be adopted and widely deployed by other critical sectors, such as banking, to satisfy similar needs. Digital certificates and signatures are expected to be the user’s e-passport to e-services.
Digital signatures are well studied and standardised by ETSI (European Telecommunications Standards Institute), CEN (European Committee for Standardisation) and other organisations and initiatives such as the EESSI (European Electronic Signature Standardisation Initiative). These standards have solved technological issues, and viable and secure yet, more or less, closed solutions are already deployed and have long been tested. However, the plethora of technical standards has resulted in diversified environments related to secure signature creation devices (SSCDs), certificates, and signatures, which bring more obstacles to the road towards interoperable solutions. As a result, many activities have been taking place in the EU aiming to promote digital signature interoperability at the legal, policy and technical levels. Such activities include signature-related studies and projects like CROBIES (Cross-Border Interoperability of eSignatures), cross-border demonstrators and other projects that are less signature-related but have a significant impact on the EU’s policy formulation on digital signatures, such as PEPPOL (Pan-European Public Procurement Online), STORK (Secure idenTity acrOss boRders linked), and SPOCS (Simple Procedures Online for Cross-Border Services), as well as solutions specific to secure document exchange like the Business Document Exchange Network (BUSDOX). This paper looks at digital signature interoperability issues and the work that is carried out in the eGovernment services field, emphasizing e-document signing, yet omitting application-related issues. In this context, this paper provides an up-to-date mapping of the standards used by solutions deployed across the EU and identifies the points that remain problematic and need our attention in order to strengthen weak links in the interoperability chain. It also aims to assist those who design digital signature enabled applications and have to make important interoperability decisions, and those planning to develop signature generation and validation tools as the ISA programme requires. The paper is organized as follows. Section 2 provides a quick overview of the activities that take place in the EU regarding digital signature interoperability and a thorough mapping of signature and e-document related standards. Section 3 sets the cornerstones of interoperability and defines the three levels of interoperability that seek viable solutions, while section 4 identifies those areas where the EU has to invest for truly interoperable solutions.
2 Current Activities and Standards
The EU’s intentions to promote secure exchange of e-documents in the context of eGovernment services and to use electronic identities (eIDs) across all MSs have boosted the use of digital signatures across the EU. This cross-border vision, however, has brought up many interoperability problems for digital signature enabled applications. With a plan on e-signatures in 2008 [1], Europe set the cornerstones for the actions that have to be taken towards e-signatures that are acceptable to, and interoperable among, MSs. These plans are reinforced by the Commission’s decision to define “interoperability of public key infrastructures (PKI)” as one of the key activities of the ISA programme [2], defined in actions 1.8 and 1.9 regarding e-document exchange and verification of e-signatures, respectively.
Prior to implementing the EU’s policy, most of the MSs had justifiably designed their eID and e-services solutions considering merely their own needs and based on the applicable legal framework, usually overlooking other MSs’ needs and specifications. Figure 1 depicts the current status of standardisation in the EU regarding digital signatures and e-documents. It is an update to the mapping published by the EU [16], enhanced with the strong dependencies between those standards. The already complicated map, coupled with the wide variety of options that each of these standards offers and the diversities among the adopted solutions, reveals why these e-signature related standards hinder the provision of interoperable solutions.

Fig. 1. E-signatures standards map
Recent activities in the EU have resulted in the adoption of two significant solutions that aim to promote interoperability (both are the result of work carried out in the context of Directive 2006/123/EC):
– Publication of a list, namely the Trust-service Status List (TSL), of supervised/accredited certification service providers (CSPs) issuing qualified certificates, to facilitate trust establishment among parties participating in a transaction (Decision 2009/767/EC [3] amended by Decision 2010/425/EU [4]). The plethora of CSPs across the EU which operate under the umbrella of Directive 1999/93/EC [7] results in a chaotic situation in terms of trust among participating entities. The TSL helps verifiers establish the necessary trust relationships with foreign CSPs operating in other MSs. TSLs also contain mandatory information one should otherwise find on a certificate according to the corresponding standards, e.g. whether the certificate resides on an SSCD or not (ETSI 101 862 [9]), when the CSP did not include it in the certificate in a standardized and commonly accepted manner. Moreover, they complement the digital certificate mechanism and the trust model it offers in a way that raises some questions, as dependence on a list implies that the user’s trust circle has to be expanded to also cover the mechanisms and procedures that the supervising authority employs for managing and publishing this list.
– The EC has also adopted Decision 2011/130/EU [5], which defines common specifications regarding the format of signed documents to facilitate cross-border exchange within the context of service provision. These specifications promote the standardised CAdES [8], XAdES [10], and the recently ETSI-adopted PAdES [14] formats for documents signed by competent authorities. Although these standards offer many options to cover a wide range of security concerns, only the *-BES (Basic Electronic Signature) and *-EPES (Explicit Policy Electronic Signature) profiles are currently adopted by the EU. These flavours have limited features and do not secure the provision of additional information required for critical long-term archiving and the time of signature generation.
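The trust decision sketched in the first point can be illustrated as follows; this is a simplified sketch with hypothetical list entries, not the actual TSL structure defined in ETSI 102 231.

```python
# Illustrative sketch (not the TSL schema): a verifier consults trusted-list
# entries before accepting a qualified signature from a foreign CSP.
from dataclasses import dataclass

@dataclass
class TslEntry:
    csp_name: str
    service_status: str   # e.g. "granted" (supervised/accredited) or "withdrawn"
    qc_with_sscd: bool    # qualified certificates reported to reside on an SSCD

# Hypothetical entries, as a national supervisory body might publish them.
trusted_list = {
    "CSP-Alpha": TslEntry("CSP-Alpha", "granted", True),
    "CSP-Beta":  TslEntry("CSP-Beta", "withdrawn", False),
}

def accept_qualified_signature(issuer: str, require_sscd: bool) -> bool:
    """Accept only if the issuing CSP is on the TSL with status 'granted' and,
    when required, its certificates are reported to reside on an SSCD."""
    entry = trusted_list.get(issuer)
    if entry is None or entry.service_status != "granted":
        return False
    return entry.qc_with_sscd or not require_sscd

print(accept_qualified_signature("CSP-Alpha", require_sscd=True))   # True
print(accept_qualified_signature("CSP-Beta", require_sscd=False))   # False
```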
3 Defining Interoperability
When implementing e-services that entail secure exchange of e-documents, or when signing documents, there are several questions that someone has to answer, including:
– What should the signed document format be?
– What types of electronic signatures (advanced, advanced based on qualified certificate, qualified) is the relying party willing to accept? In practice, what are the means that the signer has in possession and what is the risk the relying party is willing to accept in terms of signer’s commitment and authorization?
– What are the acceptable key sizes, signature algorithms or hash functions?
– What additional information does the verifier need to verify and accept the provided signature, including timestamps and validation data?
The answers to these questions disclose the interoperability problems that digital signature deploying services are likely to face and can be mapped to the following interoperability levels.
3.1 Legal Interoperability
The foundations of the legal framework at European level were laid by Directive 1999/93/EC [7], which defined the types of digital signatures and the requirements for the provision of certification services. Although it provided a solid basis for the use of digital signatures, it was only the basis on top of which many other structural components had to be built by MSs. Since Directive 1999/93/EC, several enhancements have taken place through corresponding decisions and plans for the wider deployment of e-services supported by the use of electronic signatures. Many studies have been conducted within the IDABC programme regarding legal interoperability [15], the details of which are out of the scope of this paper.
3.2 Interoperability at Policy Level
Although law is one of the factors that will affect the validity of a signature, policy restrictions are another; they relate to the security considerations of the service for which signatures are deployed. For instance, while an advanced signature based on a qualified certificate might suffice for a document used within a specific service, the same document for a different type of service might require a qualified signature, i.e. one based on an SSCD [7]. Policy restrictions should be considered by the verifier prior to accepting a digital signature and could include issues related to certificate validity, certificate suitability (e.g. requirements for trust establishment, use of SSCDs, acceptable SSCDs, CSP’s auditing), signature validity (e.g. algorithms and key sizes, timestamps) and document signature properties (e.g. signature format, signature placement). Current practices and EU-adopted mechanisms do not secure the satisfaction of policy requirements, leaving verifiers with the difficult task of collecting the necessary information or deciding ad hoc about the acceptance or not of a signed document (assuming that the verifier has policy requirements).
3.3 Technical Interoperability
Legal and policy restrictions might affect technical decisions on document, signature and certificate formats and characteristics. If signer and verifier cannot support the same algorithms, protocols and mechanisms, signature verification is bound to fail. Technical interoperability problems typically can be bypassed if solutions, algorithms and mechanisms adhere to specific standards and alternatives are narrowed down to a limited set of options.
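A minimal sketch of this point, using the Python cryptography package (an illustrative tool choice, not one mandated by the standards discussed here): verification succeeds only when the verifier applies the same padding scheme and hash function that the signer used, even though the key, the document and the signature are all valid.

```python
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
document = b"signed e-document"

# Signer commits to RSA with PKCS#1 v1.5 padding and SHA-256.
signature = key.sign(document, padding.PKCS1v15(), hashes.SHA256())

# Verifier using the same parameters: succeeds (no exception raised).
key.public_key().verify(signature, document, padding.PKCS1v15(), hashes.SHA256())

# Verifier assuming a different hash function: fails, even though the
# document and the signature are both authentic.
try:
    key.public_key().verify(signature, document, padding.PKCS1v15(), hashes.SHA512())
except InvalidSignature:
    print("verification failed: signer and verifier disagree on algorithms")
```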
4 Areas Where EU Should Invest
In this section key areas that are anticipated to play an important role in future e-services seeking maximum interoperability are defined.
4.1 Time Stamping
Time stamps are the means to provide assurances about the existence of data before a particular time [11]. While in some transactions it is important to mark the time of data receipt, e.g. in e-procurement services, there are other examples where time-stamps have to be provided by the signer, e.g. for a certificate issued by a competent authority some time in the past, in which case the verifier has to establish a trust relationship with the time-stamping authority (TSA) to accept both the time-stamp and the signature. According to Directive 1999/93/EC and ETSI 102 023 [11], a TSA can be considered a CSP which issues time stamp tokens. Based on the EU’s trust model, solid trust relationships between TSA and verifier can only be established if the TSA is a qualified one and therefore supervised/accredited by the MS’s competent authority, and if its services are listed in the MS’s TSL, given that these are included on a voluntary basis [3]. Trust establishment is essential for the verifier to be assured about the procedures, practices and the security measures taken by the authority to provide this service. Otherwise the validity of the time stamp is jeopardized. Time-stamping is a service that has to be brought into the picture to enhance digital signature enabled services.
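As a rough sketch of the request side of time-stamping (the token format, TSA policies and trust checks of RFC 3161 and ETSI 101 861 are not modelled here), the requester sends the TSA only a hash of the data, so the TSA never sees the signed document itself:

```python
import hashlib, secrets

signature_bytes = b"...DER-encoded signature value..."  # placeholder content

# RFC 3161-style request: a message imprint (hash of the data to be
# time-stamped) plus a nonce that lets the requester link the TSA's
# response back to this request.
message_imprint = hashlib.sha256(signature_bytes).digest()
nonce = secrets.randbits(64)

timestamp_request = {
    "hash_algorithm": "sha256",
    "message_imprint": message_imprint.hex(),
    "nonce": nonce,
}
# The TSA would bind this imprint to the current time and sign the result,
# producing a time-stamp token the verifier can check against its trusted TSAs.
print(timestamp_request)
```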
4.2 Signatures with Validation Data and Long Term Archiving
Although the currently adopted *-BES and *-EPES formats seem satisfactory for a very basic exchange of signed e-documents, they do not suffice for stronger requirements set by the receiving entity in a transaction. Verification information, all or part of it, as previously mentioned, can be provided by the signer, which bears the risk of deploying non-interoperable solutions. To avoid this unpleasant situation, various solutions adopt alternatives for secure signature validation, such as the validation points proposed by Decision 2011/130/EU [5] and used by large-scale pilots such as PEPPOL and SPOCS. Such ad hoc approaches, however, do not provide robust solutions to signed document validation. Moreover, long-term archiving data are needed for the verification of the signer’s signature at a time when the signer’s certificate might be revoked or will have expired. The aforementioned reliable timestamping is closely related to the information that supplements the signature for long-term archiving. Such data will prove essential in the long run when governments go all digital, hence all archives are in digital form. In that case keeping all the necessary verification information with the signed document will be vital.
4.3 Policy Restrictions
As the number of e-services is anticipated to grow in the years to come, more diversified and open environments will come into the picture, with their own requirements and restrictions regarding signature generation and validation. Currently accepted standards allow the signer only to disseminate his/her signature policy using the *-EPES format. This approach, although straightforward for the signer, requires the verifier to adapt his/her requirements to what the signer provides and to decide about the validity on an ad-hoc basis. It does not consider the verifier’s requirements regarding signature and, as a result, document acceptance. A more appropriate solution would also allow the verifier to provide his/her own requirements and establish a kind of agreement with the signer on the corresponding needs prior to the signer’s commitment. The EU should elaborate on the formulation of an EU-wide policy format, and MSs must formulate their policies on e-signatures based on it for all cross-border services, considering the following.
– The signer’s policy format should be unambiguously interpretable by the verifier, and automatically processed to relieve the end user from this very complicated task. The ETSI 102 038 [12] and ETSI 102 045 [13] standards can form the basis towards this achievement.
– Promote policy recording and mapping based on commonly accepted standards to achieve the much desirable interoperability at policy level.
– Work on schemes that will facilitate policy agreements between signer and verifier prior to the signer’s commitment.
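A minimal sketch of what automatic processing of a verifier policy could look like; the field names below are hypothetical and are not taken from ETSI 102 038 or ETSI 102 045.

```python
# Hypothetical, machine-readable verifier policy; field names are illustrative.
verifier_policy = {
    "accepted_formats": {"XAdES-EPES", "CAdES-EPES", "PAdES-EPES"},
    "accepted_hash_algorithms": {"SHA-256", "SHA-512"},
    "min_rsa_key_bits": 2048,
    "require_qualified_certificate": True,
}

def signature_acceptable(sig: dict) -> bool:
    """Check the properties of a received signature against the verifier policy."""
    p = verifier_policy
    return (sig["format"] in p["accepted_formats"]
            and sig["hash_algorithm"] in p["accepted_hash_algorithms"]
            and sig["rsa_key_bits"] >= p["min_rsa_key_bits"]
            and (sig["qualified_certificate"] or not p["require_qualified_certificate"]))

received = {"format": "XAdES-EPES", "hash_algorithm": "SHA-256",
            "rsa_key_bits": 2048, "qualified_certificate": True}
print(signature_acceptable(received))  # True
```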
4.4 Commonly Adopted Standards for Certificate Formats
Current standards provide robust solutions for certificates and facilitate unambiguous interpretation of the type of certificate, its properties and security characteristics. However, not all of them are adopted by CSPs who issue qualified certificates, leaving critical information out of the certificate or including it in a non-standardized manner. The adoption of commonly accepted profiles based on specific standards, such as those included in Figure 1, would allow certificates to carry all the necessary information that is now “delegated” to other schemes. Such an approach would help overcome the aforementioned problem caused by the issuance of certificates by different CSPs under different procedures and specifications, and with a different perspective on the format that a certificate should have. CROBIES has already proposed interoperable certificate profiles that can be adopted by the EU [6]. Although the adoption of common profiles is not an easy task, it will secure the wide acceptance of certificates and convert them to the e-passport that the EU envisioned and tries to promote through corresponding plans and actions.
5 Conclusions
The wide variety of standards on digital signatures, complemented by the large number of options they offer, has resulted in many cross-border interoperability
problems in the EU. This paper provided an up-to-date mapping of digital signature and e-document related standards and identified the key areas that still need to be considered by the EU in order to achieve truly interoperable solutions. In contrast to the EU’s approach of introducing new ad-hoc mechanisms and solutions to old problems, the paper suggests the use of existing standards, assuming that the options they offer are narrowed to a commonly accepted subset, an approach that is also adopted by Decision 2011/130/EU [5]. Adopting a stricter set of rules will simplify this complicated environment and will pave the way to a wider deployment of digital signatures, even for digitally illiterate users.
References
1. Action Plan on e-signatures and e-identification to facilitate the provision of cross-border public services in the Single Market, COM, 798 (2008)
2. Decision 922/2009/EC on interoperability solutions for European public administrations (ISA)
3. Commission Decision 2009/767/EC, Setting out measures facilitating the use of procedures by electronic means through the points of single contact under Directive 2006/123/EC. Corrigendum (November 2009)
4. Commission Decision 2010/425/EU: Amending Decision 2009/767/EC as regards the establishment, maintenance and publication of trusted lists of certification service providers supervised/accredited by Member States
5. Commission Decision 2011/130/EU, Establishing minimum requirements for the cross-border processing of documents signed electronically by competent authorities under Directive 2006/123/EC (February 2011)
6. CROBIES: Interoperable Qualified Certificate Profiles, Final Report (2010)
7. Directive 1999/93/EC of the European Parliament and of the Council of 13 December 1999 on a Community framework for electronic signatures
8. ETSI TS 101 733. Electronic Signatures and Infrastructures (ESI); CMS Advanced Electronic Signatures (CAdES), http://www.etsi.org
9. ETSI TS 101 862. Qualified Certificate profile, http://www.etsi.org
10. ETSI TS 101 903. XML Advanced Electronic Signatures (XAdES), http://www.etsi.org
11. ETSI TR 102 023. Electronic Signatures and Infrastructures (ESI); Policy requirements for time-stamping authorities, http://www.etsi.org
12. ETSI TR 102 038. TC Security - Electronic Signatures and Infrastructures (ESI); XML format for signature policies, http://www.etsi.org
13. ETSI TR 102 045. Electronic Signatures and Infrastructures (ESI); Signature policy for extended business model, http://www.etsi.org
14. ETSI TS 102 778-3. Electronic Signatures and Infrastructures (ESI); PDF Advanced Electronic Signatures (PAdES); PAdES Enhanced – PAdES-BES and PAdES-EPES Profiles, http://www.etsi.org
15. IDABC Work Programme 2005-2009, http://ec.europa.eu/idabc/
16. Mandate M460, Standardisation Mandate to the European Standardisation Organisations CEN, CENELEC and ETSI in the field of Information and Communication Technologies applied to electronic signatures (January 7, 2010)
A Generic Architecture for Integrating Health Monitoring and Advanced Care Provisioning
Koen Decroix (1), Milica Milutinovic (2), Bart De Decker (2), and Vincent Naessens (1)
(1) Katholieke Hogeschool Sint-Lieven, Department of Industrial Engineering, Gebroeders Desmetstraat 1, 9000 Ghent, Belgium, [email protected]
(2) K.U. Leuven, Department of Computer Science, DistriNet, Celestijnenlaan 200A, 3001 Heverlee, Belgium, [email protected]
Abstract. This paper presents a novel approach for advanced personalized care and health services. It consists of four tiers and presents a high level of openness, privacy and manageability compared to existing systems. Moreover, the architecture is driven by realistic underlying business opportunities and is validated through the design of multiple scenarios.
1 Introduction
The average age of individuals is increasing significantly. Homes for the elderly are overcrowded and the government has trouble financing the increasing costs and distributing the work load in those institutions. Moreover, studies show that elderly people are often reluctant to leave their houses. They prefer to stay at home as long as possible. Therefore, many European research initiatives have been bootstrapped in the past decade, resulting in systems to monitor and assist the elderly remotely. A first flavor focuses on the development of monitoring technologies. For instance, body area networks consist of a set of wireless nodes that monitor health parameters of the elderly. Examples are heartbeat and blood pressure sensors, fall detectors, etc. Also, systems can be installed in the patient’s house to control access to his medical cabinet, to encourage him to exercise, etc. A second flavor focuses on remote health or care services. Many existing architectures consist of three tiers: a set of sensors that generate sensor values, a base station that collects and eventually filters the data, and a care center that takes health related decisions and offers health services based on the received inputs. However, very high trust is required in care centers as they receive and store a lot of sensitive medical information about their users. Hence, care centers are often controlled by the government and lack openness. They typically offer a limited and fixed set of services based on sensor data. For instance, a fall detection camera can trigger an alert in the care center. However, the same camera could also be used for remote care services such as preserving social contacts, remote checks by doctors, etc. Hence, current approaches are
not user centric and the full potential of home monitoring equipment is not exploited. Moreover, they put up a barrier to the entrance of commercial service providers in the eHealth domain. This paper presents a novel approach for advanced personalized care and health services based on both remote user input and sensor data. The key contribution is that our architecture - which consists of four tiers - presents a high level of openness, privacy and manageability. Moreover, we show that our approach is driven by realistic underlying business opportunities. It allows commercial service providers to penetrate the eHealth domain. The rest of this paper is structured as follows. Section 2 points to related work. The general approach is described in section 3. Section 4 focuses on architectural details. Section 5 evaluates the strengths and weaknesses of the current architecture and validates our approach through the development of three scenarios. This paper ends with general conclusions.
2 Related Work
Many research initiatives have been bootstrapped during the last decades in the domain of remote health monitoring. Much of this research only covers a subproblem, and the results are typically applied in a three-tier architecture which consists of a set of sensors, a base station and a health center. At the sensor side, wireless sensor network protocols are embedded in health sensors. Many practical case studies mainly aim to guarantee a minimal level of reliability and performance. For instance, [5] proposes a remote system for health care that consists of heterogeneous wireless sensors. The system applies an optimized IDL to enable communication between low-resource platforms. In [9], the authors propose to use a wireless Personal Area Network (PAN) of intelligent sensors as a medical monitoring system architecture. The network is organized hierarchically and individual sensors monitor specific physiological signals (such as EEG, ECG, GSR, etc.). [15] describes a monitoring system for the elderly that provides alarm functionality. It consists of a wrist device detecting user-indicated alarms. Other research focuses on the development of gateways between sensors (or sensor networks) and a service provider. For instance, MobiCare [4] implements a body sensor network manager (BSNM) on a mobile client. Its major task consists of aggregating data from sensors and forwarding them to a health center or another predefined service provider. [12] describes a home server that offers services as OSGi bundles. This approach centralizes data processing and service provisioning in the home server. A similar architecture is applied for patient rehabilitation [8] and emergency situations [16]. Those solutions often lack flexibility and reliability. It is difficult to ensure a service level if the base station crashes or if a service provider is not available. TeleCARE [3] is a more flexible agent-based approach for building virtual communities around elderly people. Agents migrate to the sensors and base station to offer services. The authors claim that security and privacy are important concerns, but these are not really tackled in the approach.
In other architectures, many tasks are performed at the care center. For instance, in [10], coordination, data analysis and service provisioning are performed at a central monitoring station. Hence, the user has no substantial control over the services that are deployed and over the information that is released to the care center. Therefore, very high trust is required in those care centers. [14] proposes an architecture for secure central storage of electronic health records (EHRs). The data are pseudonymized. Many research initiatives focus on privacy-friendly storage of data. Our architecture does not explicitly focus on EHR storage. It merely focuses on enabling services based on technical means installed in the home environment of elderly people. However, anonymous storage technologies like [6][7] can be foreseen in our platform. Another research flavor that is complementary to our contribution are interoperability initiatives. [11] shows the possibility of interoperability between two standards, namely HL7 and IEEE 1451. HL7 is a messaging standard for exchanging medical information. IEEE 1451 deals with various aspects of sensors, the format of data sheets and how to connect and disconnect the sensors from a system. This work is complementary to our research. These standards can be used to exchange information between entities in our architecture. Multiple European initiatives also focus on remote eHealth provisioning. epSOS [1] is a European project for patient summaries and electronic prescriptions that interconnects national solutions. It develops an eHealth framework and ICT infrastructure that enables secure access to patients’ health data. In contrast to this project, our approach is more oriented towards health and care services. MobiHealth [13] develops an architecture for patient monitoring. The framework consists of a Body Area Network (BAN) with sensors that collect data from a patient. Data is sent along with video and audio via a cellular network (2.5G and 3G technology) to a healthcare center. This raises major privacy concerns. BraveHealth [2] proposes a patient-centric vision of CVD management and treatment, providing people already diagnosed as subjects at risk with a sound solution for continuous and remote monitoring and real-time prevention of malignant events.
3 General Approach
Flexibility and reliability are key concerns of our architecture. To increase flexibility and reliability compared to existing systems, our architecture introduces a dispatch center as an essential component. The dispatch center mediates between the infrastructure installed at the elderly’s homes (i.e. a base station and sensors/actuators) and the service providers, and closes contracts with both. A contract between a patient and a dispatch center defines a set of services that can be consumed by the patient at a (recurrent) fee. Some services are handled by the dispatch center. For instance, the dispatch center detects and handles failures in the user’s base station. Other services are forwarded to external service providers. The dispatch center therefore closes contracts with companies.
For instance, the former can negotiate and fix contracts with multiple catering companies. They are responsible for delivering food to elderly people that registered for that service. Similarly, the dispatch center collaborates with hospitals. An emergency call sent out by a base station is forwarded by the dispatch center to a hospital that can handle the call. Privacy is another major concern in our approach. This is realized by a clear separation of duties. The dispatch center is responsible for discovering and assigning service providers upon a user’s request. However, registered users can submit service requests anonymously (if they paid for that particular service). Hence, the dispatch center does not know which user requested a particular service at a given time (and cannot even link multiple anonymous requests of the same individual). The base station only sends minimal information that is required to select an acceptable service provider. For instance, when an elderly person wants to use a catering service, he passes his city (or region) to the dispatch center. Based on that information, the dispatch center can contact a nearby catering provider. Similarly, if the patient wants to be remotely consulted by a specialist, he only needs to submit the type of specialist he wants to the dispatch center. When closing a contract with an individual, the dispatch center defines what personal information is required for selecting an acceptable service provider. Moreover, the patient can optionally release personal preferences. For instance, he may submit a set of unwanted specialists together with a remote consultation request. Multiple mechanisms are foreseen to control access to personal attributes. First, the user can define personal privacy preferences in his base station. If a dispatch center or service provider requests sensitive information, the user’s consent is required (except in case of an emergency). Second, the dispatch center issues a certificate to each service provider he relies on. It contains the set of information that the user’s base station may release for that service. For instance, a general practitioner may inspect values generated by body sensors during a remote consultation while a catering provider can only request the user’s address. The dispatch center can also intervene in case of disputes. For instance, a user can argue that the service provider does not meet a predefined service level. To solve disputes, the dispatch center stores the contracts he has made with users and service providers. Moreover, base stations and service providers can store a secure anonymous backup of evidence at the dispatch center. It is also clear that this approach presents a higher degree of openness than existing three tier solutions. The base station does not need to be updated if new service providers (f.i. catering providers, cardiologists, etc.) are connected to the dispatch center. Similarly, the dispatch center does not need to know if new sensors or actuators are installed in the user’s house. Moreover, our approach is holistic in the sense that both occasional health and recurrent care services can be supported. Finally, strong privacy guarantees imply that commercial companies can enter the eHealth domain. The dispatch center is mainly involved in discovery and coordination. Hence, sensitive personal information is hidden from them.
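The following sketch illustrates this attribute-minimal matching; the provider records and request fields are hypothetical and not part of the paper.

```python
# Illustrative sketch (names and fields are hypothetical): the dispatch center
# matches an anonymous service request to a provider using only the minimal
# attributes the user chose to release (e.g. service type and region).
providers = [
    {"name": "Catering-East", "service": "catering", "regions": {"Ghent", "Aalst"}},
    {"name": "Catering-West", "service": "catering", "regions": {"Bruges"}},
    {"name": "Cardio-Clinic", "service": "remote_consultation", "regions": {"Ghent"}},
]

def select_provider(request: dict):
    """Pick an acceptable provider; the request carries no user identity."""
    for p in providers:
        if p["service"] != request["service"]:
            continue
        if request.get("region") and request["region"] not in p["regions"]:
            continue
        if p["name"] in request.get("unwanted_providers", set()):
            continue
        return p
    return None

# Anonymous request: only the region and (optional) preferences are revealed.
print(select_provider({"service": "catering", "region": "Ghent"}))
```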
4 Architectural Details
This section gives an overview of major actions in the system. It shows that the architecture presents a high degree of flexibility and openness to support new elderly users and service providers. Figure 1 depicts the registration and service consumption actions.
Fig. 1. Registration action and service consumption action
Registration. Both elderly people and service providers need to enroll with the dispatch center. The service provider defines the types of services it can offer and defines constraints on the services. Examples are capacity and region constraints. For instance, a catering service can offer up to 500 meals a day within one or more regions. Moreover, it defines the set of attributes that are required from the elderly to fulfill the service request. These refer to personal information that needs to be released by the elderly. For instance, catering services need to know the user’s address and diet constraints. Similarly, an emergency team needs to know the user’s address and a (temporary) code to enter the house. The negotiation results in a contract between the dispatch center and the service provider. At this phase, the service provider receives a certificate that contains – besides service-specific information – the set of personal information the service provider is entitled to request from the user. Different types of contracts are possible. Either the service provider pays a fee to the dispatch center each time a service request is handled, or vice versa. The former implies that the patient pays the service provider. The latter results in a payment between the user and the dispatch center. Similarly, users enter a contract with the dispatch center and receive a base station at registration. The contract defines a set of services that can be requested from the dispatch center. Some services are handled by the dispatch center itself. A typical example is failure detection of base stations and support to fix failures within an agreed upon time. The dispatch center acts as mediator for other services. The contract defines (a) the data that must be released to the dispatch center to select an appropriate service provider and (b) the data that must be released to the service provider. During enrollment, the user receives an anonymous credential with capabilities to consume the requested services.
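A small sketch of the attribute-release check that such a certificate enables; the structures and field names are hypothetical, not the paper's data model.

```python
# Illustrative sketch: before releasing attributes, the base station checks the
# provider's dispatch-center-issued certificate (which lists what may be
# requested) together with the user's own privacy preferences.
provider_certificate = {
    "provider": "Catering-East",
    "service": "catering",
    "allowed_attributes": {"address", "diet_constraints"},
}

user_privacy_preferences = {
    "address": "release",
    "diet_constraints": "release",
    "heart_rate": "ask_consent",   # sensitive: require explicit user consent
}

def releasable(requested: set, certificate: dict, prefs: dict) -> set:
    """Return the requested attributes the base station may release automatically."""
    allowed = requested & certificate["allowed_attributes"]
    return {a for a in allowed if prefs.get(a) == "release"}

print(releasable({"address", "diet_constraints", "heart_rate"},
                 provider_certificate, user_privacy_preferences))
# -> {'address', 'diet_constraints'}  (heart_rate is filtered out)
```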
Service Consumption. The user’s base station keeps track of a set of events that can trigger a service request. Multiple simultaneous requests are handled in priority order. Both (aberrant) sensor data and user input can lead to a service request. For instance, if a heart rate threshold value is exceeded, an alert is initiated. A user can also input a set of dates at which he wants to receive meals from a catering provider. It is clear that the former request will have a higher priority. The base station first sets up an anonymous connection to the dispatch center. It uses the anonymous credential to prove that it has the right to consume the service and releases the personal information that is required to select an acceptable service provider. The dispatch center establishes a secure and mutually authenticated channel with an acceptable service provider and returns the server certificate to the base station. Moreover, it generates a secure random R that is sent to the service provider and the base station. Next, the base station establishes a mutually authenticated anonymous channel with the service provider. Both entities prove to have knowledge of R. The base station further needs to prove the elderly person’s identity and/or release a set of information required to consume the service. Moreover, it can provide access to a set of sensors and/or actuators that are bound to the base station. Logging and Dispute Handling. Users can log both history (such as values generated by sensors) and evidence of interactions with service providers at the dispatch center. Central logging increases reliability. Moreover, the storage component allows users to store encrypted data that can be disclosed conditionally by third parties. For instance, values generated by sensors can be consulted by doctors while evidence can be decrypted by juridical instances to solve disputes.
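A minimal sketch of the binding via the random value R, assuming R reaches both parties over their authenticated channels to the dispatch center (the anonymous credential proof and the channel setup itself are out of scope here):

```python
import hmac, hashlib, secrets

# Dispatch center: generate a fresh secure random R for this service request
# and hand it to both the base station and the selected service provider.
R = secrets.token_bytes(32)

def proof_of_R(R: bytes, transcript: bytes) -> bytes:
    """Bind knowledge of R to this particular channel transcript."""
    return hmac.new(R, transcript, hashlib.sha256).digest()

# Base station and service provider each compute a proof over the transcript
# of their direct (anonymous, mutually authenticated) connection.
transcript = b"channel-parameters-and-nonces"   # placeholder
tag_base_station = proof_of_R(R, transcript + b"|base-station")
tag_provider = proof_of_R(R, transcript + b"|service-provider")

# Each side verifies the other's tag with a constant-time comparison, so both
# are assured they were matched by the same dispatch-center request.
assert hmac.compare_digest(tag_provider, proof_of_R(R, transcript + b"|service-provider"))
```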
5 Evaluation and Validation
The proposed architecture meets the privacy requirements of many users. Service requests initiated by the same user remain anonymous and even unlinkable towards the dispatch center. At communication level, anonymous channels are used. At application level, anonymous credentials are used to access services. Moreover, the user only releases the minimal information necessary to consume the service. This implies that users can even remain anonymous (or pseudonymous) for some services. For instance, an external health center can continuously analyze data generated by sensors. The sensor data can be sent by the base station over an anonymous channel. However, to increase the business potential of our approach, multiple payment schemes must be supported. Either the user can (pre-)pay for a set of services to the dispatch center or pay per service request to the dispatch center or service provider. The strategies can also be combined. For instance, a user can pay a monthly fee to the dispatch center for a service package. Possibly, an additional fee must be paid per emergency request to the dispatch center and to the involved entities (e.g. general practitioner, hospital). Therefore, anonymous payment methods must be supported. For the validation, three services are prototyped in the first iteration. They show that a wide variety of services can be supported by the architecture.
The sensor functionality is implemented on SUNSPOT sensors. A Java Virtual Machine runs on the sensors. A middleware layer is implemented to support more advanced communication between the sensors and the base station. The communication middleware consists – amongst others – of a security component to enable secure communication. The Bouncy Castle crypto libraries are instantiated on the SUNSPOT sensors for that purpose. Two sensors are currently configured, namely (1) a fall detection sensor that uses the built-in accelerometer and (2) a sensor that contains buttons to initiate an alert (or emergency) procedure. Moreover, the base station can switch on a video camera and forward video streams to a general practitioner if certain conditions are fulfilled. Finally, the patient can communicate with service providers using a terminal. Three types of services are currently implemented, namely (1) daily care provisioning services, (2) infrequent health provisioning and (3) statistical analysis. We only give a short overview of each of these services due to space limitations. Daily care provisioning services are related to tasks for which elderly people need support. Examples are washing, cleaning the house, cooking meals, etc. The patient uses his terminal to specify one-time or recurrent tasks. In that scenario, the dispatch center discovers appropriate service providers based on the user’s preferences. The patient only proves that he may request the service while remaining anonymous towards the dispatch center. Moreover, the dispatch center cannot link multiple service requests to the same individual. The second scenario initiates an alert (or emergency procedure) if a fall is detected. The patient can cancel the procedure by pushing the right sensor button. Otherwise, a remote health care provider can switch on the user’s camera and take appropriate steps. In the third scenario, researchers can retrieve anonymized data from the base station to enable statistical analysis. The owner of a base station can configure which data he is willing to release. He may get discount vouchers for some services if he enables that service.
6 Conclusion
This paper presents an architecture for advanced home care provisioning. Privacy, security and openness were key properties during the design phase. Moreover, the architecture enables commercial entities to enter the eHealth domain. In the first iteration, three services with alternative security and privacy requirements have been developed. Future work will evolve in many directions. First, more advanced scenarios will be designed and included. Second, advanced payment and contracting schemes will be added. A final challenge targets compatibility with the Belgian eHealth platform. It gives access to medical records (mainly stored by non-commercial health providers). Commercial service providers can benefit from controlled release of such data to increase the quality of service. Acknowledgement. This research is partially funded by the Interuniversity Attraction Poles Programme Belgian State, Belgian Science Policy, and by the IWT-SBO project (DiCoMas) “Distributed Collaboration using Multi-Agent System Architectures”.
References
1. epSOS - European patients - smart open services (2008-2011), http://www.epsos.eu/
2. BraveHealth (2011), http://www.ctit.utwente.nl/research/projects/international/fp7-ip/bravehealth.doc
3. Camarinha-Matos, L.M., Afsarmanesh, H.: Design of a virtual community infrastructure for elderly care. In: Collaborative Business Ecosystems and Virtual Enterprises, pp. 439–450. Kluwer Academic Publishers, Dordrecht (2002)
4. Chakravorty, R.: A programmable service architecture for mobile medical care. In: Proceedings of the 4th Annual IEEE International Conference on Pervasive Computing and Communications Workshops, PERCOMW 2006, pp. 532–536. IEEE Computer Society, Washington, DC (2006)
5. Corchado, J.M., Bajo, J., Tapia, D.I., Abraham, A.: Using heterogeneous wireless sensor networks in a telemonitoring system for healthcare. IEEE Transactions on Information Technology in Biomedicine 14(2), 234–240 (2010)
6. di Vimercati, S.D.C., Foresti, S., Jajodia, S., Paraboschi, S., Pelosi, G., Samarati, P.: Preserving confidentiality of security policies in data outsourcing. In: Proceedings of the 7th ACM Workshop on Privacy in the Electronic Society, WPES 2008, pp. 75–84. ACM, New York (2008)
7. Demuynck, L., De Decker, B.: Privacy-preserving electronic health records. In: Dittmann, J., Katzenbeisser, S., Uhl, A. (eds.) CMS 2005. LNCS, vol. 3677, pp. 150–159. Springer, Heidelberg (2005)
8. Jovanov, E., Milenkovic, A., Otto, C., de Groen, P.: A wireless body area network of intelligent motion sensors for computer assisted physical rehabilitation. Journal of NeuroEngineering and Rehabilitation 2(1), 6 (2005)
9. Jovanov, E., Raskovic, D., Price, J., Chapman, J., Moore, A., Krishnamurthy, A.: Patient monitoring using personal area networks of wireless intelligent sensors. Biomedical Sciences Instrumentation 37 (2001)
10. Kim, H.-J., Jarochowski, B., Ryu, D.-H.: A proposal for a home-based health monitoring system for the elderly or disabled. In: Miesenberger, K., Klaus, J., Zagler, W.L., Karshmer, A.I. (eds.) ICCHP 2006. LNCS, vol. 4061, pp. 473–479. Springer, Heidelberg (2006)
11. Kim, W., Lim, S., Ahn, J., Nah, J., Kim, N.: Integration of IEEE 1451 and HL7 exchanging information for patients' sensor data. Journal of Medical Systems 34, 1033–1041 (2010), doi:10.1007/s10916-009-9322-5
12. Korhonen, I., Parkka, J., Van Gils, M.: Health monitoring in the home of the future. IEEE Engineering in Medicine and Biology Magazine 22(3), 66–73 (2003)
13. European MobiHealth Project 2002-2004, http://www.mobihealth.org
14. Riedl, B., Grascher, V., Neubauer, T.: A secure e-health architecture based on the appliance of pseudonymization. Journal of Software 3(2), 23–32 (2008)
15. Sarela, A., Korhonen, I., Lotjonen, J., Sola, M., Myllymaki, M.: IST Vivago - an intelligent social and remote wellness monitoring system for the elderly. In: 4th International IEEE EMBS Special Topic Conference on Information Technology Applications in Biomedicine, pp. 362–365 (April 2003)
16. Wood, A., Stankovic, J., Virone, G., Selavo, L., He, Z., Cao, Q., Doan, T., Wu, Y., Fang, L., Stoleru, R.: Context-aware wireless sensor networks for assisted living and residential monitoring. IEEE Network 22(4), 26–33 (2008)
A Modular Test Platform for Evaluation of Security Protocols in NFC Applications Geoffrey Ottoy1,2 , Jeroen Martens1 , Nick Saeys1 , Bart Preneel2 , Lieven De Strycker1,3 , Jean-Pierre Goemaere1,3 , and Tom Hamelinckx1,3 1 KAHO Sint-Lieven, DraMCo research group, Gebroeders de Smetstraat 1, 9000 Gent, Belgium {geoffrey.ottoy,dramco}@kahosl.be http://www.dramco.be/ 2 K.U. Leuven, COSIC research group, Kasteelpark Arenberg 10, bus 2446, 3001 Leuven-Heverlee, Belgium http://www.esat.kuleuven.be/cosic 3 K.U. Leuven, TELEMIC research group, Kasteelpark Arenberg 10, bus 2444, 3001 Leuven-Heverlee, Belgium http://www.esat.kuleuven.be/telemic/
Abstract. In this paper we present the advantages and possibilities of a modular test platform for the evaluation of security protocols in NFC applications, together with some practical implementation results of this modular system. The aim of the platform is to provide a highly modular system: certain functionality can be added or removed without rebuilding the entire system. Security measures in hardware as well as in software can be tested and evaluated with this platform. It can serve as a basis for a broad range of security-related applications, NFC being our domain of interest, but also in other domains. Keywords: NFC, Smart Card, FPGA, Test Platform, Security.
1 Introduction
Near Field Communication (NFC) is a rapidly emerging technology that enables the setup of a wireless connection between electronic devices over a short distance, in a secure and intuitive way. This technology is gaining more and more popularity every day [1]. Newly developed applications, ranging from mobile payments and ticketing [2,3] to advanced access control [4,5], require enhanced security measures. To quickly develop hardware support for security and to test and evaluate the impact of these security measures, a test platform can be of great value. We developed a platform that offers flexibility, modularity, reliability and reduced development time. With the system described in this paper, a wide scope of hardware and software to support security protocols for NFC applications can
be tested and evaluated. Though primarily developed for NFC applications, the platform can easily be modified to cope with other communication technologies. At this moment, the core components of an FPGA-based test platform have been built. A kernel is running on the FPGA’s embedded processor, and multiple threads can run within this kernel. Currently, three system services are finished: a logging service is used to monitor all system and application events, another service is responsible for the NFC communication, and finally a web server is used for remote connections to the system. With respect to the security side of the test platform, an AES [6,7] cryptographic core (hardware) has been implemented and has proven its reliability. Further development and integration of security and interface modules on the platform is ongoing. In the following sections we discuss the technical aspects of a modular test platform. First, in Section 2, we provide some background about NFC and security within NFC. Then, in Section 3, we discuss the advantages and possibilities of a modular test platform and clarify the choices made in the development of our system. In Section 4, a more thorough discussion of the current status of our platform is presented, whereas Section 5 highlights some notable results. Finally, in Section 6, we state some conclusions and present our objectives for the future.
2 NFC and Security
Near Field Communication is a short-range, wireless connectivity technology that allows fast, bidirectional communication between two electronic devices. The technology is based on RFID (Radio Frequency Identification) and works on the same communication frequency of 13.56 MHz (HF). NFC is standardized under the ISO/IEC 18092 (NFCIP-1) and ISO/IEC 21481 (NFCIP-2) standards and is backward compatible with other wireless communication protocols such as Mifare (NXP, ISO 14443-A) and Felica (Sony, ISO 14443-B). Using NFC technology, consumers can perform contactless payments and transactions, access digital content and share data between NFC-enabled devices, all with just one simple touch (in NFC jargon, a “touch” means that two devices are brought into each other’s proximity). NFC is also typically used to set up communication of other wireless protocols with a higher bandwidth, such as WiFi and Bluetooth [8]. Typical security threats for NFC, as described in [8] and [9], are eavesdropping, data modification, relay attacks or the more general man-in-the-middle attack, even though the chances for the latter are low due to the RF characteristics and short distance of NFC communication [10]. This implies that the setup of a secured channel is quite easy. Ensuring entity and data authentication further decreases the risk of most attacks. Once the secured channel is established, dedicated cryptographic algorithms should be used to further fulfill the needs of the application (e.g. a ticketing or payment application). Security algorithms can be built in hardware or software, depending on the requirements.
3 The Modular Test Platform
If we want to develop a test platform for secure NFC applications, we can state several system requirements: modularity in hardware as well as in software, support for security, and several debugging and interfacing options. As an extra feature for our platform we also opted for a network connection. The main reason is that a network connection (like Ethernet) adds great value for a lot of applications (e.g. access control). Another advantage is that the platform can be accessed and monitored remotely (e.g. for field testing). With these requirements in mind, we can give a general description of the platform. First of all, the required security policies need to be supported by the lowest level in the design, i.e. the hardware [11]. This means that proper countermeasures against hardware attacks need to be taken. Side-Channel Analysis (SCA) attacks are especially dangerous because they exploit the characteristics of the hardware design to extract the secret parameters. In [12], a number of possible attacks on hardware implementations are described. Today’s secure hardware should be resistant to (Differential) Power Analysis (DPA) [13,15], Timing Analysis (TA) [14] and electromagnetic attacks [15]. We will not go into detail about this, but it should be taken into account when designing the security-related hardware and software. For easy (and fast) prototyping, modularity and expandability of the base design are required. We are designing a test platform, so if we want to add or remove a certain functionality, it is preferable that this can be done without the need to redesign the entire system. By building the system out of several blocks, adding or removing functionality, in hardware as well as in software, can be done easily. In hardware we implement a bus structure (Fig. 1). The bus forms the backbone to which all the other hardware blocks are attached. Each hardware block is responsible for a designated task, e.g. memory management, interrupt handling, cryptographic operations, etc. A controller or processor will be used to control the hardware. Instead of working with a single application or thread, we choose to deploy an Operating System (OS). This further increases the flexibility and makes it easier to add applications (in the form of threads) to the system. Furthermore, it enables the software designers to write code on a more abstract level, rather than having to know the underlying hardware. A last topic is the need for I/O functionality. In our case we need at least an NFC connection and an Internet connection, but several other interfaces can prove useful when using the test platform: USB or Compact Flash for logging, a display, I2C, etc. It is clear that this architecture can be used in several other applications where testing of security measures is involved. A change of interface (e.g. a GPRS connection instead of NFC) can be done easily because of the modularity of the design.
4 The Platform in Detail
In figure 1, the block diagram of the modular test platform is shown. The heart of the system is an FPGA. We have opted to work with an FPGA because of its
flexibility and the rapid development results. In practice, we use a Virtex II Pro Development System (documentation at http://www.xilinx.com/products/devkits/XUPV2P.htm). The Virtex II Pro FPGA has two embedded PowerPCs and enough configurable logic to implement the crypto cores, memory controllers and I/O controllers. The development board provides the necessary hardware to implement a complete system (RAM, Flash, Ethernet PHY, RS-232, etc.) and to expand it using the several I/O connectors. By using a Hardware Description Language (HDL), we can customize the system to our needs. Every component can be described separately and added to or removed from the system when necessary. Because of the bus structure, adding hardware that has been previously described in HDL (e.g. a cryptographic core) is quite straightforward. As an example, we implemented an AES-128 (ECB) core.
Fig. 1. Block diagram of the modular test platform
The two PowerPC’s of the Virtex II Pro are hard core processors. Of course, a soft core processor (described in HDL) could be used as well. As stated in section 3, we choose to work with an operating system. On top of the embedded processor, a kernel is running. We use the Xilkernel [16] on our system. It is fully functional on our platform, outfitted with the necessary hardware drivers and provides threading support under the form of POSIX threads. For the NFC communication, we use an NFC development kit [17]. This kit has been chosen because it supports several protocols (e.g. Mifare, Felica), but it also supports both modes of NFC operation: initiator mode and target mode. The NFC development kit communicates with the FPGA over RS-232. 3
Documentation at: http://www.xilinx.com/products/devkits/XUPV2P.htm.
Several other interface controllers have been implemented, most notably a JTAG debug module for programming and debugging the processor over a USB connection and an I2C controller. The latter is used to interface with a Real-Time Clock (RTC). Another important interface controller is the Ethernet controller (depicted separately in Fig. 1). A special thread is responsible for implementing the web server. In this way, remote connections to the test platform are supported. The web server is able to handle HTTP requests as well as manage single TCP connections. TLS [18] is used to protect the TCP connections. In a test platform as presented here, it is a great asset if there is a service that logs all system and application events that occur. Therefore, a so-called logging service has been written. This service makes a chronological registration of all events that occur, together with the date and time (by using the RTC). The logging records are directly written to a Compact Flash memory. The records on the Compact Flash card can be read on a PC, so the system design engineer can track which events did or did not occur, where an application crashes, how long it takes to complete a certain operation, etc.
5 Results
A first practical result is the fact that the TCP/IP stack consumes a lot of processing time. We even noticed that increasing the activity of other processes results in timing failures of TCP/IP-related threads. Moreover, the TCP/IP stack (we use lwIP, an open-source lightweight TCP/IP stack) intertwines with the kernel in such a manner that the POSIX API is lost. Therefore we have chosen to reserve one processor for the web server and another processor for the rest of the functionality. In this way, the web server threads are not interrupted by other processes or interrupt signals, which increases the stability of the network application. Communication between the two processors can be done by using shared memory or a hardware mailbox (both are supported by the Xilinx design tools). It is clear that dividing the functionality over the two processors helps to control the complexity of the system. Furthermore, it increases the modularity of the system, because the web part can now easily be omitted. This can be interesting when developing an off-line terminal or stand-alone platform. As a next result we would like to present the time needed to develop an NFC application. We have chosen to implement an existing protocol, developed by the MSEC research group (www.msec.be). It uses an NFC-enabled smart card and a terminal. The smart card can log on to the terminal if its ID is in the database. As a protection against eavesdropping, the ID is encrypted using AES-128. To protect the Master Secret Key (MK) and to randomize the encrypted ID, a Session Key (SK) is generated based on two random challenges. The protocol is depicted in Fig. 2. One can see that the protocol involves some NFC data transactions and some AES operations.
Fig. 2. Secure NFC ID interchange protocol developed by MSEC
Because of the available APIs for both the NFC hardware and the AES core, and with the use of a library of NFC commands, the development of this application took only a few hours (one working day). It is clear that minor changes to the protocol can easily be implemented and evaluated in a short period of time.
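Since the exact message flow and key-derivation step of the MSEC protocol are not spelled out in this paper, the following Python sketch is only a rough illustration of the idea described above, not the actual protocol: the session-key derivation SK = AES_MK(challenge_card XOR challenge_terminal), the 16-byte ID padding and all identifier names are assumptions made for this sketch.

import os
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def aes128_ecb(key: bytes, block: bytes, decrypt: bool = False) -> bytes:
    # ECB on a single 16-byte block; used here as a raw AES block operation.
    cipher = Cipher(algorithms.AES(key), modes.ECB(), backend=default_backend())
    op = cipher.decryptor() if decrypt else cipher.encryptor()
    return op.update(block) + op.finalize()

def xor16(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

MK = bytes(16)  # master secret key shared by card and terminal (dummy value)

def session_key(c_card: bytes, c_term: bytes) -> bytes:
    # Assumed derivation for this sketch: SK = AES_MK(c_card XOR c_term).
    return aes128_ecb(MK, xor16(c_card, c_term))

def card_response(card_id: bytes, c_card: bytes, c_term: bytes) -> bytes:
    # The card sends its ID encrypted under the (randomized) session key.
    return aes128_ecb(session_key(c_card, c_term), card_id.ljust(16, b"\x00"))

def terminal_accepts(enc_id: bytes, c_card: bytes, c_term: bytes, database) -> bool:
    card_id = aes128_ecb(session_key(c_card, c_term), enc_id, decrypt=True)
    return card_id.rstrip(b"\x00") in database

c_card, c_term = os.urandom(16), os.urandom(16)   # one fresh challenge per side
print(terminal_accepts(card_response(b"CARD-0042", c_card, c_term),
                       c_card, c_term, {b"CARD-0042"}))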
6 Conclusions
In this article, we described the development of a test and development platform for NFC applications. We highlighted the design choices and commented on the elaborated parts. As explained in the previous paragraphs, the modularity of our platform has several advantages. From a hardware point of view, it is straightforward to add new cores. Moreover, they can be compared with other cores or with an approach in software. Using an OS further eases the work of the software design engineer. This makes the platform well suited for testing the performance of different approaches to implementing a protocol, as well as an ideal tool for quick prototyping and evaluating the feasibility of new protocols. In the next period, we will test and implement new cryptographic cores (e.g. a cryptographic hash function) to further expand the hardware capabilities of the system. Another focus will be on the evaluation of new protocols and applications.
References
1. Madlmayr, G., Langer, J., Scharinger, J.: Managing an NFC Ecosystem. In: 7th International Conference on Mobile Business, ICMB 2008, pp. 95–101 (2008)
2. Van Damme, G., Wouters, K., Karahan, H., Preneel, B.: Offline NFC Payments with Electronic Vouchers. In: Proceedings of the 1st ACM Workshop on Networking, Systems, and Applications for Mobile Handhelds (MobiHeld 2009), 6 pages. ACM, New York (2009)
3. Juntunen, A., Luukkainen, S., Tuunainen, V.K.: Deploying NFC Technology for Mobile Ticketing Services - Identification of Critical Business Model Issues. In: 2010 Ninth International Conference on Mobile Business and 2010 Ninth Global Mobility Roundtable (ICMB-GMR), June 13-15, pp. 82–90 (2010)
4. Peeters, R., Singelée, D., Preneel, B.: Threshold-Based Location-Aware Access Control. International Journal of Handheld Computing Research 2(2), 17 pages (2011)
5. Madlmayr, G., Langer, J., Kantner, C., Scharinger, J.: NFC Devices: Security and Privacy. In: Third International Conference on Availability, Reliability and Security, ARES 2008, March 4-7, pp. 642–647 (2008)
6. Daemen, J., Rijmen, V.: The Design of Rijndael: AES - The Advanced Encryption Standard. Springer-Verlag New York, Inc., Secaucus (2002)
7. Federal Information Processing Standards Publication 197: Specification for the Advanced Encryption Standard, AES (2001)
8. Breitfuß, K., Haselsteiner, E.: Security in Near Field Communication, NFC (2006)
9. Van Damme, G., Wouters, K.: Practical Experiences with NFC Security on Mobile Phones (2009)
10. ECMA International: NFC-SEC NFCIP-1 Security Services and Protocol - Cryptography Standard using ECDH and AES (white paper) (December 2008), http://www.ecma-international.org/activities/Communications/tc47-2008-089.pdf
11. Manninger, M.: Smart Card Technology. In: Sklavos, N., Zhang, X. (eds.) Wireless Security and Cryptography - Specifications and Implementations, ch. 13, p. 364. CRC Press, Boca Raton (2007)
12. Örs, S.B., Preneel, B., Verbauwhede, I.: Side-Channel Analysis Attacks on Hardware Implementations of Cryptographic Algorithms. In: Sklavos, N., Zhang, X. (eds.) Wireless Security and Cryptography - Specifications and Implementations, ch. 7, pp. 213–247. CRC Press, Boca Raton (2007)
13. Coron, J.-S.: Resistance against Differential Power Analysis for Elliptic Curve Cryptosystems. In: Koç, Ç.K., Paar, C. (eds.) CHES 1999. LNCS, vol. 1717, p. 292. Springer, Heidelberg (1999)
14. Kocher, P.C.: Timing Attacks on Implementations of Diffie-Hellman, RSA, DSS, and Other Systems. In: Koblitz, N. (ed.) CRYPTO 1996. LNCS, vol. 1109, pp. 104–113. Springer, Heidelberg (1996)
15. De Mulder, E., Örs, S.B., Preneel, B., Verbauwhede, I.: Differential Power and Electromagnetic Attacks on a FPGA Implementation of Elliptic Curve Cryptosystems. Computers and Electrical Engineering 33(5-6), 367–382 (2007)
16. Xilinx Inc.: Xilkernel (December 2006), http://www.xilinx.com/ise/embedded/edk91i_docs/xilkernel_v3_00_a.pdf
17. NXP: UM0701-02 PN532 User Manual, Rev. 02 (2007), http://www.nxp.com/documents/user_manual/141520.pdf
18. Stallings, W.: Transport Layer Security. In: Cryptography and Network Security - Principles and Practice, 5th edn., ch. 16. Pearson Prentice Hall, London (2003)
GPU-Assisted AES Encryption Using GCM
Georg Schönberger and Jürgen Fuß
Upper Austria University of Applied Sciences, Dept. of Secure Information Systems, Softwarepark 11, 4232 Hagenberg
[email protected], [email protected]
http://www.fh-ooe.at/sim
Funded through private research by Barracuda Networks, Inc. Funded by the KIRAS programme of the Austrian Federal Ministry for Transport, Innovation and Technology.
Abstract. In this paper we present an implementation of the Galois/Counter Mode (GCM) for the Advanced Encryption Standard (AES) in IPsec. GCM is a so-called “authenticated encryption” mode, as it can ensure confidentiality, integrity and authentication. It uses the Counter Mode for encryption, i.e. counters are encrypted and then exclusive-ORed with the plaintext. We describe a technique where these encryptions are precomputed on a Graphics Processing Unit (GPU) and can later be used to encrypt the plaintext, whereupon only the exclusive-OR and authentication part of GCM are left to be computed. With this technique, performance is no longer limited primarily by the speed of the AES implementation; it allows Gigabit throughput while at the same time minimizing the CPU load. Keywords: AES, Galois/Counter Mode (GCM), IPsec, GPU, CUDA, Gbit/s, high-performance.
1 Introduction
Today’s need for high-performance cryptography implementations is rapidly increasing. As the most computationally intensive part often belongs to an encryption or hashing algorithm, the overall performance of a system highly depends on the speed of the underlying cryptography. To increase performance, developers optimize the algorithms for specific CPU architectures. Alternatively, one can implement parallel versions of the algorithms using multiple CPU cores or GPUs. Especially with the “Compute Unified Device Architecture” (CUDA) and the programming language “C for CUDA”, GPUs gained a boost in popularity [1]. An extremely important area concerning the speed of cryptographic operations is the protection of network traffic. In protocol suites like IP Security (IPsec) [6], AES has become a favourite encryption scheme to ensure data confidentiality. In terms of the encryption mode, the Counter mode (CTR) of operation is
preferred for high-speed connections as it can be implemented in hardware and allows pipelining and parallelism in software [8]. In this paper we focus on GCM (cf. [2] and [17]) and its usage as a mode for the Encapsulating Security Payload (ESP) in IPsec (standardised in RFC 4106 [5]) and present a new way to compute it:
– We show that, accepting slight modifications of the standards, it is possible to precompute the AES-CTR part of AES-GCM without reducing security.
– As a practical example we show a prototype using GPUs for the precomputation that puts our ideas into practice. First of all this prototype shows operational reliability; moreover, we present some benchmarks.
– The resulting challenges posed by the introduced architecture are analysed. At the same time, proposals for solutions are discussed, and their practicability and impact are considered.
1.1 Related Work
The implementation of AES-GCM on GPUs itself has not yet been a common topic in research. GCM is defined in the NIST Special Publication 800-38D [2] and used as the default cipher mode in the IEEE Standard 802.1AE - Media Access Control (MAC) Security [3], mainly due to its good performance. Another reason why GCM in combination with AES can be of value is the fact that the Intel Corporation provides its own instruction set for implementing AES on the Westmere architecture. The most famous set is called “AES-NI” and consists of six new instructions to realize the AES algorithm [9]. Moreover, there is an instruction called “PCLMULQDQ” for carry-less multiplication, which can be used to increase the performance of the universal hash function “GHASH” in GCM [10]. In an AES-NI GCM implementation within the Linux kernel, Intel has also carefully examined the performance gains of these instructions compared to a traditional implementation [11]. The fast development and enhancements of CUDA pushed new and innovative applications on GPUs. One of the first AES implementations using CUDA was by Svetlin A. Manavski, who also parallelised AES on instruction level using four threads to produce the 16 encrypted bytes [12]. Other papers followed, analysing how to realise AES with CUDA efficiently. A good example of how to examine how well AES can perform on a GPU is the master’s thesis of Alexander Ottesen [13]. He analysed the differences of AES with CUDA by first using the traditional processing approach and then comparing it to a lookup-table version. He also tried to optimize these applications by fully utilising the different memory spaces of a GPU. There have also been some recent approaches to use AES on GPUs for tasks in networking environments. The researchers of the “Networked & Distributed Computing Systems Lab” in Korea released papers about how to speed up SSL [15] or how to accelerate a software router, also in connection with IPsec [16], with GPUs.
2 Our Approach
In our work we combine the potential of GPUs to accelerate AES with the benefits of AES-GCM as authenticated encryption. (The precomputation described here need not be done on a GPU; free cores on a multi-core CPU may be used for this purpose as well. Nevertheless, in this paper we refer to the processor that does the precomputation as the GPU.) In contrast to recent works on AES-CTR alone using CUDA ([13], [14]), we split AES-GCM into two separate stages. The first realizes the encryption part on the GPU, where we can use the results of the recent papers on AES with CUDA, and the second stage is responsible for the authentication part. Moreover, stage one need not be executed at the same time as stage two, as the plaintext is only indispensable at authentication time. This separation is a completely new way to implement AES-GCM’s authenticated encryption.
Fig. 1. Schematic procedure of en-/decrypting a packet
In Fig. 1 we show how we en-/decrypt a packet using AES-GCM with precomputation:
1. As soon as the secret key and the initialization vector (“nonce”) for AES-GCM are negotiated, we start encrypting consecutive counters with the key on the GPU.
2. As soon as packets arrive, the CPU can start en-/decrypting them. Additional Authentication Data (AAD) can be added as well (q.v. [17, Sec. 2.3]).
3. The CPU uses the precomputed counters generated by the GPU to XOR with the plaintext for encryption, and GHASH (universal hashing over a binary Galois field [2, Chap. 5]) for authentication.
Fig. 2. Modifications for the construction of the nonce and the usage of GHASH for non-12-byte IVs (standard nonce: 4-byte salt, 8-byte ESP-IV and 4-byte counter padding; modified nonce: 4-byte salt and a 12-byte ESP-IV used directly as the 16-byte counter)
4. After a packet is encrypted and the tag is computed, the packet can be composed. In case of decryption, the computed tag must be compared with the encrypted packet’s original tag.
To use the counters generated by the GPU efficiently we have to adapt the format of the initialization vectors (IVs) for AES-GCM. A nonce for AES-GCM consists of the salt (4 bytes) generated by the Internet Key Exchange Protocol (IKE) [4] and the ESP-IV (8 bytes) [5, p. 4]. To form the initial counter for AES-GCM, this 12-byte sequence is padded with the 32-bit sequence “00 . . . 1” to become 16 bytes long [17, p. 5] (see Fig. 2). For the subsequent packet the ESP-IV is incremented by one, the salt prefixed and then again padded. This usage of the ESP-IV and the padding is not suited for precomputation, as we cannot use a continuous stream of encrypted counters: we only need one counter getting incremented, not two (the ESP-IV per packet and the padded nonce within a packet). Therefore we suggest the following changes for the construction of the nonce (cf. Fig. 2):
– Extension of the ESP-IV to 12 bytes so that padding is not necessary. Otherwise we would have to estimate the size of a packet and increase the initial counter to generate enough encrypted data for the XOR. This would go along with a large number of encrypted counters that must be discarded, or missing counters for long packets. That is why we use the salt and the 12-byte ESP-IV as the initial counter for AES-GCM, so that there is only one counter and no estimation of the packet sizes is needed.
– Normally, GHASH is applied for nonces not equal to 12 bytes. Due to the new ESP-IV length, GHASH would have to be used to form the 16-byte initial counter. Again, this usage of GHASH does not allow us to predict what the initial counter for the next packet will be. For this reason we propose to skip GHASH for nonces of 16 bytes. As long as one counter is not encrypted with the same key more than once, this has no security impact.
A modification of the format of the IV in a counter mode may have a negative impact on the overall security of the encryption scheme. We claim that our
modifications do not reduce the security of the encryption method. Two issues must be considered.
– Firstly, the original IV format makes sure that IVs will not repeat, neither within one packet (using a 4-byte per-packet counter) nor in different packets (using a packet-dependent 12-byte ESP-IV). The modified format can also guarantee unique IVs for AES-GCM, inside a packet as well as in different packets, with the same 16-byte counter. Finally, regular rekeying in the IPsec protocol will prevent this 16-byte counter from ever repeating.
– Secondly, we do not use GHASH on a non-12-byte IV. With respect to this modification it can be said that the purpose of GHASH in this case is to have a simple method to guarantee as an output an IV that can be used for AES-GCM. The standard also skips the GHASH in the case of a 12-byte IV (for higher speed).
A sketch of the resulting counter construction and its use for encrypting a packet is given below.
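To make the modified construction concrete, the following Python sketch (our own illustration, not the authors’ implementation) builds the 16-byte initial counter directly from the 4-byte salt and the 12-byte ESP-IV, precomputes a keystream by encrypting consecutive counter values with AES (the role played by the GPU in our architecture), and encrypts a packet with a plain XOR. The GHASH/tag computation is deliberately omitted, and the cryptography package as well as all names used here are our own choices.

from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def initial_counter(salt: bytes, esp_iv: bytes) -> int:
    # Modified nonce: 4-byte salt || 12-byte ESP-IV used directly as the
    # 16-byte initial counter (no "00...1" padding, no GHASH on the IV).
    assert len(salt) == 4 and len(esp_iv) == 12
    return int.from_bytes(salt + esp_iv, "big")

def precompute_keystream(key: bytes, first_ctr: int, n_blocks: int) -> bytes:
    # Encrypt n_blocks consecutive counter values (the GPU's job in the paper).
    enc = Cipher(algorithms.AES(key), modes.ECB(), backend=default_backend()).encryptor()
    blocks = b"".join(((first_ctr + i) % (1 << 128)).to_bytes(16, "big")
                      for i in range(n_blocks))
    return enc.update(blocks) + enc.finalize()

def ctr_xor(keystream: bytes, data: bytes) -> bytes:
    # CPU side: only the XOR (and, in the real system, GHASH) is left to do.
    return bytes(d ^ k for d, k in zip(data, keystream))

# Example: one packet drawn from a continuous, precomputed counter stream.
key, salt, esp_iv = bytes(16), b"\x00" * 4, b"\x00" * 11 + b"\x01"
ctr0 = initial_counter(salt, esp_iv)
stream = precompute_keystream(key, ctr0, 64)          # 1 KiB of keystream
packet = b"example ESP payload"
ciphertext = ctr_xor(stream[:len(packet)], packet)
assert ctr_xor(stream[:len(packet)], ciphertext) == packet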
3 Challenges
As a result of precomputation, it is essential to schedule encryption (in terms of XOR and GHASH) asynchronously to the encryption of counters. This means the need for a parallel implementation of encrypting and authenticating packets while new counters for the subsequent packets are encrypted. From this it follows that the overall performance of our AES-GCM depends primarily on the speed of GHASH and the memcpy operations to fetch the precomputed encrypted counters. If we reach the point where we can process a packet as fast as or even faster than we can precompute, then again the limiting factor is the AES encryption of the counters. In conjunction with GPUs as coprocessors, the challenging task is the transfer of the encrypted counters from the GPU to RAM. One might also consider that one GPU could precompute counters for several IPsec connections. Then the management of the GPU as a shared resource is as important as handling packet processing and precomputation in parallel. As a solution for the management of precomputed counters, the CPU can provide double buffers. So, if one buffer runs empty, a switch to the second buffer can be performed and the empty buffer gets refilled asynchronously by the GPU. Sometimes network packets can get lost or mixed up on their way to the receiver. In this case, counters from the previous buffer are needed. To handle this, we can insert some sort of “history” at the beginning of each buffer that contains a certain amount of old counters from the buffer that has just been overwritten.
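The double-buffering scheme with a history prefix could look as follows; this Python sketch is only a rough model under our own assumptions (buffer size, history length, a synchronous refill callback and all names are invented for illustration), whereas in the real system the refill would be performed asynchronously by the GPU.

class CounterPool:
    """Double-buffered pool of precomputed encrypted counters (illustrative sketch).

    `refill(start_ctr, n)` stands in for the GPU precomputation. Each fresh buffer
    is prefixed with a short 'history' of blocks from the buffer being replaced,
    so late or reordered packets can still find their counters."""
    def __init__(self, refill, blocks_per_buffer=4096, history=64):
        self.refill = refill
        self.n = blocks_per_buffer
        self.history = history
        self.base = 0                                  # counter index of buf[0]
        self.buf = refill(0, self.n)                   # list of 16-byte blocks
        self.spare = None                              # refilled while buf is drained

    def _prepare_spare(self):
        # In the real system this runs on the GPU while the CPU drains `buf`.
        tail = self.buf[-self.history:]
        self.spare = tail + self.refill(self.base + len(self.buf), self.n)

    def get(self, ctr_index):
        if self.spare is None:
            self._prepare_spare()
        if ctr_index >= self.base + len(self.buf):     # active buffer exhausted
            self.base += len(self.buf) - self.history  # switch to the spare buffer
            self.buf, self.spare = self.spare, None
        return self.buf[ctr_index - self.base]

# Dummy refill for illustration: "keystream" block i is just the counter value i.
pool = CounterPool(lambda s, n: [i.to_bytes(16, "big") for i in range(s, s + n)],
                   blocks_per_buffer=8, history=2)
assert pool.get(0) == (0).to_bytes(16, "big")
assert pool.get(9) == (9).to_bytes(16, "big")   # forces a buffer switch
assert pool.get(7) == (7).to_bytes(16, "big")   # still available via the history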
4 Benchmarks
We used our implementations on the GPU to test whether the precomputation can speed up an existing AES-GCM application. As current AES implementations with CUDA nearly reach 10 Gbit/s [13,16], we focused in the first place
on the interaction of CPU and GPU. For our benchmarking we patched the AES-GCM algorithm of the cryptographic toolkit LibTomCrypt (available in the public domain at http://libtom.org/). The patch consists of replacing the encryption of counters with fetching counters from precomputed memory. As precomputation can be done in parallel to encryption, the throughput depends on the time taken by copying encrypted counters and performing GHASH. In Fig. 3 we compare the runtime of encrypting 1, 10 and 100 Megabyte (MB) of random data. To simulate the encryption of network packets we process the data in chunks of 1024 bytes, as this could be a suitable packet size.
Fig. 3. Comparison of the encryption throughputs for 1 MB, 10 MB and 100 MB of data (in Gbit/s): (i) LibTomCrypt on the CPU and (ii) LibTomCrypt with our GPU patch (both with and without LTC_FAST and SSE2 instructions)
Fig. 3 shows that our patch is faster in every combination of used instruction sets and for all sizes of data. Also note that with activated LTC_FAST and SSE2 our version has more than twice the throughput of the traditional LibTomCrypt library. We also examined the statistical spread of the runtimes by conducting the tests a thousand times. The variation is negligible: only sporadic outliers were detected, with a variation around one millisecond. We used an Intel Core i7 940, 12 GB 1333 MHz RAM and a GTX480 on an Intel DX58SO motherboard for testing. The operating system was Ubuntu 10.04, and for compiling LibTomCrypt in version 1.17 we used gcc-4.4.
5 Conclusion
Our implementation of AES-GCM separated into stages shows that this mode of operation has benefits from a cryptographic point of view and also solves performance issues. The fact that coprocessors can perform encryption in parallel to processing data offers new challenges to high-performance network applications. Current standards for the use of this mode in IPsec are not perfectly suitable for an implementation with precomputation. However, only small modifications are necessary, and they do not affect security. We look forward to seeing how well an implementation running in kernel mode will perform. Finally, a comparison of AES-GCM with classical combinations of a block cipher and a hash function will be an important next step.
References
1. NVIDIA Corporation: NVIDIA CUDA C Programming Guide, Developer Manual (2010), http://developer.nvidia.com/object/gpucomputing.html
2. Dworkin, M.: Recommendation for Block Cipher Modes of Operation: Galois/Counter Mode (GCM) and GMAC. NIST Special Publication 800-38D (2007)
3. IEEE Computer Society: Standard for Local and Metropolitan Area Networks: Media Access Control (MAC) Security, New York (2006)
4. Kaufman, C.: Internet Key Exchange (IKEv2) Protocol, RFC 4306 (2005)
5. Viega, J., McGrew, D.: The Use of Galois/Counter Mode (GCM) in IPsec Encapsulating Security Payload (ESP), RFC 4106 (2005)
6. Kent, S., Seo, K.: Security Architecture for the Internet Protocol, RFC 4301 (2005)
7. Kent, S.: IP Encapsulating Security Payload (ESP), RFC 4303 (2005)
8. Dworkin, M.: Recommendation for Block Cipher Modes of Operation: Methods and Techniques. NIST Special Publication 800-38A (2001)
9. Akdemir, K., et al.: Breakthrough AES Performance with Intel AES New Instructions. Intel Whitepaper (2010), http://software.intel.com/file/27067
10. Gopal, V., et al.: Optimized Galois-Counter-Mode Implementation on Intel Architecture Processors. Intel Whitepaper (2010), http://download.intel.com/design/intarch/PAPERS/324194.pdf
11. Hoban, A.: Using Intel AES New Instructions and PCLMULQDQ to Significantly Improve IPSec Performance on Linux. Intel Whitepaper (2010), http://download.intel.com/design/intarch/papers/324238.pdf
12. Manavski, S.A.: CUDA Compatible GPU as an Efficient Hardware Accelerator for AES Cryptography. In: Proceedings IEEE International Conference on Signal Processing and Communication, ICSPC (2007)
13. Ottesen, A.: Efficient Parallelisation Techniques for Applications Running on GPUs Using the CUDA Framework. University of Oslo (2009), http://www.duo.uio.no/sok/work.html?WORKID=91432
14. Di Biagio, A., Barenghi, A., Agosta, G.: Design of a Parallel AES for Graphics Hardware Using the CUDA Framework. In: International Parallel and Distributed Processing Symposium (2009)
15. Jang, K., et al.: SSLShader: Cheap SSL Acceleration with Commodity Processors. In: Proceedings of the USENIX Symposium on Networked Systems Design and Implementation (2011)
16. Han, S., et al.: PacketShader: A GPU-Accelerated Software Router. In: Proceedings of ACM SIGCOMM (2010)
17. McGrew, D.A., Viega, J.: The Galois/Counter Mode of Operation (GCM), Revised. Technical Report (2005), http://www.csrc.nist.gov/groups/ST/toolkit/BCM/documents/proposedmodes/gcm/gcm-revised-spec.pdf
Radon Transform-Based Secure Image Hashing Dung Q. Nguyen, Li Weng, and Bart Preneel Katholieke Universiteit Leuven, ESAT/COSIC-IBBT [email protected], {li.weng,bart.preneel}@esat.kuleuven.be
Abstract. This paper presents a robust and secure image hash algorithm. The algorithm extracts robust image features in the Radon transform domain. A randomization mechanism is designed to achieve good discrimination and security. The hash value is dependent on a secret key. We evaluate the performance of the proposed algorithm and compare the results with those of one existing Radon transform-based algorithm. We show that the proposed algorithm has good robustness against content-preserving distortion. It withstands JPEG compression, filtering, noise addition as well as moderate geometrical distortions. Additionally, we achieve improved performance in terms of discrimination, sensitivity to malicious tampering and receiver operating characteristics. We also analyze the security of the proposed algorithm using differential entropy and confusion/diffusion capabilities. Simulation shows that the proposed algorithm satisfies these metrics well.
1 Introduction
In order to efficiently identify digital images, perceptual hash techniques have been used [1–3]. A hash value, typically a short binary string, is generated to act as a unique identifier of the corresponding image. Since an image can be stored under different digital representations, a perceptual hash value is expected to be resilient to content-preserving manipulations, such as JPEG compression, filtering, etc. Additionally, perceptual hash algorithms are also useful for secure applications, e.g., image content authentication. Currently, many effective signal processing tools are available to modify image content. Therefore, an image hash algorithm is also required to make the hash output dependent on a secret key [2]. Only the entity knowing the key can generate the correct hash value. This helps to ensure that image information is not tampered with during transmission.
This work was supported in part by the Concerted Research Action (GOA) AMBioRICS 2005/11 of the Flemish Government and by the IAP Programme P6/26 BCRYPT of the Belgian State (Belgian Science Policy). The second author was supported by the IBBT/AQUA project. IBBT (Interdisciplinary Institute for BroadBand Technology) is a research institute founded in 2004 by the Flemish Government, and the involved companies and institutions (Philips, IPGlobalnet, Vitalsys, Landsbond onafhankelijke ziekenfondsen, UZ-Gent). Additional support was provided by the FWO (Fonds Wetenschappelijk Onderzoek) within the project G.0206.08 Perceptual Hashing and Semi-fragile Watermarking.
The performance of a perceptual image hash algorithm primarily consists of robustness, discrimination, and security. Robustness means the algorithm always generates the same (or similar) hash values for similar image contents. Discrimination means different image inputs must result in independent (different) hash values. The security of a perceptual image hash algorithm has two aspects. The first is the ability to detect malicious tampering. Another aspect is the difficulty of deriving a hash value without knowing the key. In this paper, an image hash algorithm is proposed. It exploits the invariance of the Radon transform to rotation and scaling. Our work is inspired by the RAdon Soft Hash algorithm (RASH) [4]. This algorithm has good robustness, but its discrimination capability is worth improving. Moreover, it does not incorporate a secret key. In the proposed new algorithm, we strengthen the capability of the original algorithm by improving its discrimination and security properties. A special randomization scheme is introduced to maintain the robustness and meanwhile improve the overall performance. The rest of this paper is structured as follows: Section 2 presents the Radon transform properties and the proposed algorithm in detail. Section 3 provides a performance evaluation of the proposed algorithm in comparison with the RASH algorithm. Section 4 discusses the security of the proposed algorithm. Section 5 concludes the work.
2 The Proposed Algorithm
The proposed algorithm aims to be robust against content-preserving manipulations. It is also expected to improve discrimination in comparison with the RASH algorithm.
2.1 Radon Transform and Properties
The Radon transform is computed by taking the line integrals of a two-dimensional image f(x, y) along a set of directions. The line integral along a particular direction θ is called a projection. The line integral of the function f(x, y) along the line L defined by the direction θ and the distance x' from the origin in the coordinates (x', y') [4] is given by
$$R_\theta(x') = \int_L f(x'\cos\theta - y'\sin\theta,\; x'\sin\theta + y'\cos\theta)\, dy' . \qquad (1)$$
Expression (1) leads to two noticeable properties.
Scaling: $f(ax, ay) \leftrightarrow \frac{1}{a} R_\theta(ax')$, where $a > 0$. The Radon transform of a scaled image f(ax, ay) is proportional to the Radon transform of the image f(x, y) with the same scaling factor a, i.e. $\frac{R_\theta(ax')}{R_\theta(x')} = a$.
Rotation: if an image f(x, y) is rotated by ω degrees, the Radon transform of the rotated image is
$$f(x\cos\omega - y\sin\omega,\; x\sin\omega + y\cos\omega) \leftrightarrow R_{\theta+\omega}(x') , \qquad (2)$$
i.e., it can be obtained by circularly shifting the transform coefficients of the image f(x, y) according to ω.
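As a quick illustration of this rotation property (our own sketch, not part of the original algorithm description), the following Python snippet compares the sinogram of an image with that of a rotated copy and recovers the rotation angle as the best circular shift along the angle axis. scikit-image is assumed to be available, and the recovered shift may show up as ω or 180 − ω depending on the library's angle and rotation-direction conventions.

import numpy as np
from skimage.data import camera
from skimage.transform import radon, rotate

image = camera() / 255.0
r = image.shape[0] // 2
yy, xx = np.ogrid[:image.shape[0], :image.shape[1]]
image = image * ((yy - r) ** 2 + (xx - r) ** 2 <= r * r)   # keep support inside the circle

angles = np.arange(180.0)                                  # theta = 0, 1, ..., 179
omega = 30
sinogram = radon(image, theta=angles, circle=True)         # columns are projections R_theta
sinogram_rot = radon(rotate(image, omega), theta=angles, circle=True)

# The circular shift that best aligns the two sinograms recovers the rotation angle,
# which is also the idea behind the rotation detection in Scheme 1 below.
errors = [np.mean(np.abs(np.roll(sinogram, s, axis=1) - sinogram_rot)) for s in range(180)]
print("estimated rotation (mod 180):", int(np.argmin(errors)))   # omega or 180 - omega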
2.2 Proposed Algorithm
The proposed algorithm has a hash generation part and a hash verification part. The former consists of three stages: image preprocessing, feature extraction and quantization. In the latter, the hash values may first undergo a rotation detection; then the normalized Hamming distance is computed to measure their similarity. The resultant distance is compared with a threshold to decide if the two values correspond to the same content. The rotation detection stage enhances robustness against rotation. However, it also decreases overall discrimination. Therefore, we come up with two schemes for practice. Scheme 1 involves the rotation detection stage. Scheme 2 skips the rotation detection stage.
A. Hash Generation
Stage 1: Image Preprocessing. The input image I(x, y) is first converted to gray scale and down-sampled to the canonical size 512 × 512 pixels. Next it is smoothed by a low-pass filter. Histogram equalization is applied to the filtered image.
Stage 2: Feature Extraction. We introduce a new approach to extract invariant image features in the Radon transform domain. In detail, this stage includes two steps:
a. Radon transform. We apply the Radon transform to the preprocessed image for the projection angles θ = 0, 1, ..., 179 to obtain a set of projections {R_θ(x_i)}. The projection along each angle θ is a vector of line integrals along the lines L_i (projection paths) defined by the distance x_i to the origin.
b. Feature randomization. We next calculate a weighted sum of selected projection paths along each angle θ. An intermediate hash vector of 180 elements is obtained:
$$h_\theta = \sum_{i=1}^{N_p} \alpha_i R_\theta(x_i), \qquad \theta = 0, 1, ..., 179 \qquad (3)$$
where N_p is the number of selected projection paths and {α_i} are normally distributed pseudorandom numbers with mean m and variance σ².
Stage 3: Quantization. We uniformly quantize the 180-element intermediate vector to generate a 360-bit hash value.
B. Hash Verification
The input hash values first go through rotation detection. This stage is only applied in Scheme 1. The hash value of a possibly rotated image (h_2) is compared with that of the original image (h_1) to estimate the rotation angle by means of the maximum cross-covariance
$$R_{h_1,h_2}(m) = \sum_{n=0}^{N-m-1} \left(h_1(n) - \overline{h_1}\right)\left(h_2(n+m) - \overline{h_2}\right) \qquad (4)$$
where $\overline{h_1} = \frac{1}{N}\sum_{i=0}^{N-1} h_1(i)$ and $\overline{h_2} = \frac{1}{N}\sum_{i=0}^{N-1} h_2(i)$ are the means of the hash values h_1, h_2 respectively; N = 360 is the hash length; and m = 0, 1, ..., 359. The rotation angle is determined by
$$\varphi = 360 - \underset{m}{\arg\max}\,\big(R_{h_1,h_2}(m)\big) . \qquad (5)$$
After the hash values are aligned by φ, their normalized Hamming distance (NHD) is computed as
$$d_{h_1,h_2} = \frac{1}{N} \sum_{i=1}^{N} |h_1(i) - h_2(i)| . \qquad (6)$$
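To make Eqs. (3)-(6) concrete, the following Python/NumPy sketch implements the key-dependent feature extraction, the quantization, the NHD and a simple cross-covariance alignment. It is only an illustrative reconstruction: the exact choice of the selected projection paths, the way the secret key seeds the pseudorandom weights and the 2-bit uniform quantizer are assumptions of ours, and the alignment uses a circular variant of Eq. (4) for brevity.

import numpy as np

def intermediate_hash(sinogram, key, n_paths=5, spacing=50, mean=1.0, std=2.0):
    # Eq. (3): per angle, a key-dependent weighted sum of selected projection paths.
    # sinogram[x, theta] holds R_theta(x) for theta = 0..179 (512x512 preprocessed image).
    rng = np.random.default_rng(key)                  # integer seed stands in for the key
    alpha = rng.normal(mean, std, size=n_paths)
    center = sinogram.shape[0] // 2
    rows = center + spacing * (np.arange(n_paths) - n_paths // 2)   # assumed path selection
    return alpha @ sinogram[rows, :]                  # 180-element vector h_theta

def quantize(h, bits_per_element=2):
    # Uniform quantization of the 180 elements into a 360-bit hash value.
    levels = 2 ** bits_per_element
    q = np.floor((h - h.min()) / (np.ptp(h) + 1e-12) * levels).clip(0, levels - 1).astype(int)
    return np.unpackbits(q.astype(np.uint8)[:, None], axis=1)[:, -bits_per_element:].ravel()

def nhd(h1, h2):
    # Eq. (6): normalized Hamming distance between two binary hash values.
    return np.mean(h1 != h2)

def detect_rotation(h1, h2):
    # Eqs. (4)-(5), circular variant: align h2 to h1 by the shift maximizing the cross-covariance.
    a, b = h1 - h1.mean(), h2 - h2.mean()
    cov = [np.sum(a * np.roll(b, -m)) for m in range(len(h1))]
    return (len(h1) - int(np.argmax(cov))) % len(h1)

# Example usage (sinograms as produced by skimage.transform.radon with theta = 0..179):
# h_a = quantize(intermediate_hash(sinogram_a, key=1234))
# h_b = quantize(intermediate_hash(sinogram_b, key=1234))
# print("NHD:", nhd(h_a, h_b))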
3 Performance Evaluation
The proposed algorithm is evaluated on a database of 618 different natural scene images. The types of images include architecture, sculpture, humanoid, landscape, food, and vehicle. The image sizes vary from 640 × 480 to approximately 3000 × 2000. In the feature extraction stage, the number of selected projection paths is set to 5 and the distance between them is set to 50. The pseudorandom numbers are normally distributed with mean 1 and standard deviation 2, controlled by a secret key. The robustness of RASH and the proposed algorithm is verified under various manipulations listed in Table 1.

Table 1. Set of manipulations
Type of manipulation          Manipulation parameter
Legitimate manipulations
  Gaussian filtering          Filter size: 11×11, 21×21
  Median filtering            Filter size: 3×3, 5×5
  JPEG compression            Quality factor: 20, 10
  Gaussian noise              Standard deviation: 0.04, 0.08
  Salt and pepper noise       Noise density: 0.04, 0.08
  Rotation                    Angle: 2, 4 degrees
  Cropping                    Percentage: 2%, 4%
Malicious manipulation
  Object insertion            Object size after preprocessing: 32×32, 64×64

We apply legitimate manipulations to each original image and generate 14 perceptually similar images. The hash value of each manipulated image is computed and compared with that of the original image. The NHD between two hash values is expected to be close to zero. The algorithms are also tested under object insertion to measure their sensitivity to malicious local modifications. In total, 16 manipulated images are generated for each original image.
Table 2. Normalized Hamming distances for manipulations
Manipulation          Parameter   RASH     Scheme 1   Scheme 2
Gaussian filtering    11×11       0.0031   0.0033     0.0029
                      21×21       0.0031   0.0032     0.0029
Median filtering      3×3         0.0094   0.0095     0.0086
                      5×5         0.0198   0.0195     0.0197
JPEG compression      20          0.0079   0.0085     0.0084
                      10          0.014    0.0144     0.0146
Gaussian noise        0.04        0.0531   0.054      0.055
                      0.08        0.0818   0.0793     0.0798
Salt & pepper noise   0.04        0.027    0.0273     0.0298
                      0.08        0.0446   0.0442     0.0445
Rotation              2°          0.047    0.0427     0.1091
                      4°          0.0781   0.0722     0.1762
Cropping              2%          0.0233   0.0456     0.0435
                      4%          0.0376   0.0797     0.0788
Object insertion      32×32       0.0149   0.0232     0.0233
                      64×64       0.0515   0.0652     0.0661
Table 2 shows the average NHDs for the manipulations on the image database. All the algorithms are strongly robust to Gaussian filtering, with NHDs on the order of $10^{-3}$. They also perform well under median filtering and JPEG compression, with NHDs on the order of $10^{-2}$. For Gaussian noise as well as salt and pepper noise, the NHDs of the algorithms are smaller than 0.1 and comparable to each other. This is partially due to the low-pass filtering in the preprocessing stage. For rotation, the NHDs of Scheme 1 are smaller than those of RASH. This is attributed to the rotation detection. On the other hand, Scheme 2, without the rotation detection, is less robust to rotation angles larger than 3°. For cropping, the proposed methods show lower performance than RASH. This is because after the cropped image is re-scaled to the canonical size, the selected projection paths at the same distances to the origin have changed, while in the RASH algorithm the medium projection path remains unchanged. Regarding object insertion, the proposed methods have higher NHDs than the RASH algorithm. Therefore, the proposed algorithm has stronger sensitivity to malicious modifications, meaning better ability for content authentication. This is because in the proposed algorithm the selected projection paths cover a larger area of the image (an advantage of the proposed algorithm), while in the RASH algorithm only the medium path is selected. In our experiment, the NHD for the proposed methods is higher than 0.1 when the object size is larger than 96 × 96 pixels. In order to test the discrimination capability, each original image and its manipulated images are put into a group. Hash values from different groups are pair-wise compared. There are $\binom{618}{2} \times 17^2 = 55{,}098{,}717$ hash pairs in the test. The
resultant NHD is expected to be close to 0.5, because hash values of different image contents are independent. The average NHDs between different image contents for RASH, the proposed Scheme 1 and Scheme 2 are 0.317, 0.329 and 0.477, respectively. The proposed methods show better discrimination than the RASH algorithm. This is due to the randomization procedure. Scheme 2 has better discrimination than Scheme 1. This is because in Scheme 1 the rotation detection reduces the randomness achieved in the feature extraction stage. In Scheme 2 the NHD is computed directly from two random hash values. Hence there is a tradeoff between robustness and discrimination in the proposed methods. We use the receiver operating characteristics (ROC) to compare the overall performance. The ROC curves (in enlarged scale) are shown in Fig. 1. Given a false positive rate (Pf), the proposed methods have a higher probability of correct detection (Pd) than the RASH algorithm. Hence the proposed methods achieve better overall performance. It is also observed that Scheme 2 achieves the best ROC curve.
Fig. 1. Receiver operating characteristics of the algorithms (Pd versus Pf for RASH and the two proposed schemes)
4 Security Analysis
The security of image hashing is still an open area. A well-known security metric is the differential entropy of the hash value, proposed by Swaminathan et al. [2]. It measures the effort of an attacker to estimate the hash value without knowing the secret key. Larger entropy means increasing amount of randomness. In our algorithm, the differential entropy of the hash value increases when the variance of normally distributed pseudorandom numbers becomes larger or when the number of sample points is larger. Following the approach in [2], we derive the differential entropy expression of each hash element for the proposed algorithm
$$H(h_\theta) = \frac{1}{2} \log_2\!\left( (2\pi e)\, \sigma^2 \sum_{i=1}^{N_p} R_\theta^2(x_i) \right) . \qquad (7)$$
Table 3 shows the differential entropy of some image hash algorithms (cf. [2]). The differential entropy of the proposed algorithm is in the range 13.89 - 14.92. It is quite stable, compared with those of Swaminathan's Scheme-1 and Venkatesan's algorithm. It is greater than Fridrich's, Venkatesan's and Mihcak's algorithms, and smaller than Swaminathan's Scheme-2.

Table 3. Differential entropy of different hash algorithms [2]
Hash algorithm               Lena            Baboon          Peppers
Proposed algorithm           14.57 - 14.76   13.89 - 14.19   14.72 - 14.92
Swaminathan's scheme-1 [2]   8.2 - 15.6      13.58 - 16.18   8.76 - 15.46
Swaminathan's scheme-2 [2]   16.28           16.39           16.18
Fridrich's algorithm [5]     8.31            8.32            8.14
Venkatesan's algorithm [1]   5.74 - 11.48    5.96 - 11.70    5.65 - 11.39
Mihcak's algorithm B [3]     8               8               8
Coskun et al. [6] defined diffusion and confusion capabilities as security metrics. They measure the difficulty of revealing the relationship between the secret key and the hash value (confusion), and the relationship between the input image and the hash value (diffusion). A hash algorithm with good confusion generates statistically independent hash values using different keys for the same image input. In our test, 100 hash values are generated for each image using 100 different keys. The average NHD of the $\binom{100}{2} = 4950$ hash pairs is computed for some images and shown in Table 4.

Table 4. Confusion capability of the two proposed methods
Proposed algorithm   Lena     Baboon   Boat     Peppers
Proposed scheme 1    0.3382   0.3047   0.3228   0.3350
Proposed scheme 2    0.4575   0.4442   0.4489   0.4214

The proposed scheme 2 has a higher NHD than the proposed scheme 1 for all the tested images. This means that Scheme 2 shows better confusion capability. A hash algorithm with good diffusion generates different hash values for different image contents, corresponding to the discriminative capability. The discrimination test before implies that Scheme 2 has better diffusion capability than Scheme 1.
5 Conclusion and Discussion
In this work, we propose a robust and secure image hash algorithm. The algorithm extracts image features in the Radon transform domain. A randomization mechanism is incorporated to make the hash output dependent on a secret key. It is resilient to filtering, JPEG compression, and noise addition. It is also robust to moderate geometrical distortions including rotation and cropping. The proposed
algorithm achieves a significant improvement over the well-known RASH algorithm. It has better discrimination and higher sensitivity to malicious tampering than RASH, which leads to a better operating characteristic. The key-dependent feature also makes it suitable for a wider range of applications. The security of the algorithm is evaluated in terms of differential entropy and confusion/diffusion capabilities. Good security is confirmed by both metrics. There is a tradeoff between discrimination and robustness in the proposed methods. Scheme 1 takes advantage of rotation detection to improve its robustness against rotation. However, this decreases its discrimination and subsequently lowers the overall performance. Since Scheme 2 achieves better results in the security evaluation than Scheme 1, there is also a tradeoff between robustness and security. In the future, we plan to improve the proposed algorithm by detecting several geometric distortions (e.g. scaling and cropping) before computing the hash distance. This will further enhance robustness. More security metrics will be taken into account. It is interesting to evaluate the maximum number of key re-uses, see [7, unicity distance].
References
1. Venkatesan, R., Koon, S., Jakubowski, M., Moulin, P.: Robust Image Hashing. In: Proceedings of IEEE International Conference on Image Processing, vol. 3, pp. 664–666 (2000)
2. Swaminathan, A., Mao, Y., Wu, M.: Robust and Secure Image Hashing. IEEE Transactions on Information Forensics and Security 1(2) (June 2006)
3. Mihçak, M.K., Venkatesan, R.: New Iterative Geometric Methods for Robust Perceptual Image Hashing. In: Sander, T. (ed.) DRM 2001. LNCS, vol. 2320, pp. 13–21. Springer, Heidelberg (2002)
4. Lefebvre, F., Macq, B., Legat, J.: RASH: Radon Soft Hash Algorithm. In: Proceedings of the European Signal Processing Conference, Toulouse, France (September 2002)
5. Fridrich, J., Goljan, M.: Robust Hash Functions for Digital Watermarking. In: Proceedings of the International Conference on Information Technology: Coding and Computing (2000)
6. Coskun, B., Memon, N.: Confusion/Diffusion Capabilities of Some Robust Hash Functions. In: Proceedings of the 40th Annual Conference on Information Sciences and Systems (2006)
7. Mao, Y., Wu, M.: Unicity Distance of Robust Image Hashing. IEEE Transactions on Information Forensics and Security 2(3) (September 2007)
On Detecting Abrupt Changes in Network Entropy Time Series Philipp Winter, Harald Lampesberger, Markus Zeilinger, and Eckehard Hermann Upper Austria University of Applied Sciences Department of Secure Information Systems 4232 Hagenberg / Softwarepark 11, Austria {philipp.winter,harald.lampesberger,markus.zeilinger, eckehard.hermann}@fh-hagenberg.at
Abstract. In recent years, much research focused on entropy as a metric describing the “chaos” inherent to network traffic. In particular, network entropy time series turned out to be a scalable technique to detect unexpected behavior in network traffic. In this paper, we propose an algorithm capable of detecting abrupt changes in network entropy time series. Abrupt changes indicate that the underlying frequency distribution of network traffic has changed significantly. Empirical evidence suggests that abrupt changes are often caused by malicious activity such as (D)DoS, network scans and worm activity, just to name a few. Our experiments indicate that the proposed algorithm is able to reliably identify significant changes in network entropy time series. We believe that our approach helps operators of large-scale computer networks in identifying anomalies which are not visible in flow statistics. Keywords: entropy, anomaly detection, time series analysis, network flows.
1 Introduction
Large-scale computer networks (i.e. gigabit and above) pose unique challenges to intrusion and anomaly detection, mostly due to the tremendous amounts of data. Network entropy time series were proposed to reduce high-dimensional network traffic to a single metric describing the dispersion or “chaos” inherent to network traffic [11,18]. Our research builds upon these entropy time series. The main contribution of this paper is a detection algorithm based on information theory and statistics. The algorithm is capable of detecting abrupt changes in entropy time series built out of network flows. We define an abrupt change as an unexpectedly high difference between two measurement intervals m_{t-1} and m_t, where the term “unexpectedly high” depends on a configurable threshold. There is the underlying assumption that abrupt changes are caused by malicious and abnormal network events. This assumption is based on empirical evidence as shown in previous research [18,13,11]. Overall, our proposed
algorithm achieves satisfying results and allows practical and real-time anomaly detection in large-scale computer networks. The paper is structured as follows: Section 2 provides a short overview of similar research. Section 3 introduces the basic concepts upon which our research rests. The actual algorithm is proposed in Section 4. Section 5 contains the evaluation while Section 6 provides a conclusion as well as future work.
2 Related Work
In [12], Lee and Xiang propose the use of information theoretic measures to conduct anomaly detection. Their proposed measures include entropy, relative entropy and conditional entropy, just to name a few. Lakhina et al. made use of entropy to sum up the feature distribution of network flows [11]. By using unsupervised learning, they show that anomalies can be clustered to “anomaly classes”. Wagner and Plattner make use of the Kolmogorov complexity in order to detect worms in flow data [18]. Their work mostly focuses on implementation aspects and scalability and does not propose any specific analysis techniques. In [17], Tellenbach et al. go further by exploring generalized entropy metrics for the purpose of network anomaly detection. Similar work was done by Gu et al. who made use of maximum entropy estimation to reach the same goal. Nychis et al. conducted a comprehensive evaluation of entropy-based anomaly detection metrics [13]. In [15], Sommer and Paxson point out why anomaly detection systems are hardly used in operational networks. Barford et al. made use of signal analysis techniques in order to detect anomalies [1]. In [7], Feinstein et al. explored statistical approaches for the detection of DDoS attacks. In [3], Brutlag proposed the use of a statistical algorithm to detect abnormal behavior in time series. Finally, Sperotto et al. provide a comprehensive overview about flow-based intrusion detection in general [16].
3 Preliminaries
3.1 Network Flows
Our proposed anomaly detection system (ADS) analyzes network flows rather than entire network packets. In their original definition, network flows (in short: flows) provide unidirectional meta information about network packets which share the following characteristics: source and destination IP address, source and destination port, and IP protocol number. It is important to note that all network activity on OSI layer 3 and above results in flows; this includes not only TCP connections but also stateless protocols such as UDP and ICMP. We decided in favor of network flows since they are highly lightweight (a TCP connection which might have transferred several gigabytes accounts for only a few dozen bytes in a flow) and ease the analysis of large-scale computer networks. Also, they raise fewer privacy concerns since no payload is present. Finally, network flows are widely available on network devices, e.g., in the form of Cisco NetFlow [4] or the upcoming standardized IPFIX [5].
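To make this concrete, here is a minimal sketch of such a flow record (a simplified, hypothetical structure; the field names are illustrative and do not follow the actual NetFlow/IPFIX schemas):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FlowRecord:
    """Unidirectional flow key plus the counters used later for entropy analysis."""
    src_ip: str      # source IP address
    dst_ip: str      # destination IP address
    src_port: int    # source port (0 for protocols without ports, e.g. ICMP)
    dst_port: int    # destination port
    protocol: int    # IP protocol number (6 = TCP, 17 = UDP, 1 = ICMP)
    packets: int     # packets per flow
    bytes: int       # bytes per flow
```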
3.2 Entropy Time Series
The collected flows are analyzed by calculating the entropy over a sliding window. This technique is not new and has been the topic of extensive research before [12,13,18,11]. We make use of the Shannon entropy, which is defined as H(X) = − Σ_{i=1}^{n} p(x_i) · log2(p(x_i)). The variables x_1, ..., x_n represent the possible realizations of the random variable X and p(x_i) the probabilities of the respective realizations. In order to make the result of the entropy analysis easier to interpret, we normalize it to the interval [0, 1] by using the normalized entropy H_0(X) = H(X) / log2(n), where n represents the number of different observations, i.e., x_i values. We calculate the entropy over five flow attributes which can all be found directly in the flow record:
– Source and destination IP
– Source and destination port
– Packets per flow
Aside from the fact that previous research already achieved promising results with these attributes [13,18], we believe that they complement each other in a natural way. In total, we are dealing with five entropy time series – one for every flow attribute. The entropy time series are calculated by using an overlapping sliding window comprising all flows observed within the last 5 minutes. The overlapping delta is 4 minutes. So every minute, five entropy values are determined and added to their respective time series. Figure 1 depicts such a time series spanning a total of 24 hours. The data stems from the uplink of our university.
Fig. 1. An entropy time series of our university's uplink spanning 24 hours
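The normalized-entropy computation described above can be sketched as follows (a simplified illustration; collecting the flows and sliding the window are assumed to happen elsewhere):

```python
import math
from collections import Counter

def normalized_entropy(observations):
    """Shannon entropy of the observed value distribution, normalized to [0, 1]."""
    counts = Counter(observations)
    total = sum(counts.values())
    n = len(counts)
    if n <= 1:
        return 0.0  # a single distinct value carries no dispersion
    h = -sum((c / total) * math.log2(c / total) for c in counts.values())
    return h / math.log2(n)

# e.g. entropy of destination ports seen in the current 5-minute window:
# h_dport = normalized_entropy(flow.dst_port for flow in window)
```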
3.3 What Can We Detect?
As mentioned earlier, packet payload is inevitably lost in network flows. For this reason, all attacks which manifest solely in packet payload (e.g. remote exploits) are invisible on the network layer. Our system focuses on detecting attacks on the
network layer. Some of these attacks, such as DDoS, are quite obvious; others, such as botnets, are often highly camouflaged. A comprehensive survey of current research in this domain and of the potential of network flows in general is provided by Sperotto et al. [16]. We claim that our approach is basically capable of detecting the following malicious activity:
– (D)DoS attacks
– Large-scale scanning activity
– Worm outbreaks
Botnet detection is beyond the scope of this contribution. However, our algorithm might be of help if the activity of a certain botnet leads to abrupt changes in entropy time series.
4 The Anomaly Detection Algorithm
The most straightforward approach to anomaly detection, a static threshold, is not sufficient due to the high variability of the time series. Hence, we need an algorithm which is 1) robust against noise and 2) sensitive enough so that attacks are still visible.
4.1 Detecting Abrupt Changes
The basic idea for detecting abrupt changes is to continuously conduct short-term predictions and determine the difference between the prediction and the actual measurement. The higher the difference, the more unexpected and hence abrupt the change is. Many time series prediction algorithms have been proposed in the past. There are algorithms which account for trends, seasonal components or none of these. We chose to use simple exponential smoothing (SES). The SES algorithm is used to smooth time series as well as to conduct short-term predictions. The algorithm is very elementary and accounts neither for seasonal nor for trend components. We chose SES because it turned out to be very robust against the noise inherent to our data set. Furthermore, there is no need for a more sophisticated prediction algorithm since our measurement interval of 1 minute is so fine-grained (over a seasonal cycle of 24 hours we have 1,440 measurements) that seasonal and trend components are not of relevance. For a time series consisting of the observations x_1, ..., x_n, the SES is defined as shown in Equation 1:

x̂_t = x_0                                    if t ≤ 1
x̂_t = α · x_{t−1} + (1 − α) · x̂_{t−1}        otherwise        (1)

The series x̂_1, ..., x̂_n represents the predictions, where x̂_i is the predicted value for the (later) observed value x_i. The parameter α, where 0 ≤ α ≤ 1, is referred
to as the smoothing factor. The closer α is to 0, the smoother the predictions are. If α = 1, no smoothing is performed and the “smoothed” time series equals the observed time series. For each of the five time series, we now determine the prediction error, which is the difference between a predicted and an observed value. The error function is defined as err(x_i, x̂_i) = |x̂_i − x_i|. However, the determined prediction errors are not equal in their significance because the underlying time series feature different levels of dispersion, as can be seen in Figure 2. For this reason, we normalize the prediction errors with respect to the dispersion of the respective time series. We measure the dispersion by calculating the sample standard deviation over a sliding window spanning the last 24 hours. We chose 24 hours so that the sliding window captures an entire seasonal cycle. An alternative would be a sliding window of 7 days, which covers an entire week including possible deviations on the weekend.
Fig. 2. Box plot depicting the distribution of the five entropy time series over our main flow trace. The whiskers represent the maximum and minimum values, respectively.
We normalize the prediction error of every time series by multiplying it with a weight factor. For each of the five time series and its respective sample standard deviation s_i, this weight factor is defined as shown in Equation 2:

ω_i = (1 / s_i) · max(s_1, ..., s_5)        (2)
Accordingly, the time series exhibiting the highest sample standard deviation is assigned a weight factor of 1. To make the proposed ADS easier to configure for network operators, we aggregate all five prediction errors into a single anomaly score referred to as δ:

δ = Σ_{i=1}^{5} err(x̂_i, x_i) · ω_i        (3)
A single threshold can now be configured. As soon as the current aggregated anomaly score exceeds this threshold, an alert is triggered.
Algorithm 1. Abrupt change detection algorithm
1: Time series T = {{x, x̂, s}_1, ..., {x, x̂, s}_5}
2: while True do
3:   for all {x, x̂, s}_i ∈ T do
4:     Normalized entropy x_t := H_0(flow attributes for i within Δt)
5:     Predicted entropy x̂_{t+1} := α · x_t + (1 − α) · x̂_t
6:     Prediction error e_i := |x̂_t − x_t|
7:     Standard deviation s_t := s(flow attributes for i within last 24 hours)
8:   end for
9:   for all {x, x̂, s}_i ∈ T do
10:    Calculate weight factor ω_i := (1/s_t) · max({s_t : {x, x̂, s} ∈ T})
11:  end for
12:  Calculate anomaly score δ := Σ_{i=1}^{5} ω_i · e_i
13:  if δ ≥ predefined threshold then
14:    Trigger alert
15:  end if
16:  Sleep Δt
17: end while
Algorithm 1 shows the entire algorithm summed up in pseudo code. The first line defines our five time series. Each time series consists of a set of three variables: x is the actual entropy time series over a specific flow attribute, x̂ represents the entropy predictions and s stands for the sample standard deviation over the last 24 hours. The endless while-loop spanning lines 2–17 triggers an analysis every Δt minutes. Then, for each time series, the current entropy, the predicted entropy, the prediction error and the standard deviation are computed (lines 3–8). Afterwards, the respective weight factors, one for each of the five time series, are determined (lines 9–11). Finally, the overall anomaly score is computed (line 12) by summing up all the weighted prediction errors. If the anomaly score exceeds the threshold defined by the network operator (lines 13–15), an alert is triggered.
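A compact Python sketch of this detection loop is given below. The helper functions entropy_window() and stddev_24h(), which return the current normalized entropy and the 24-hour sample standard deviation for one flow attribute, are hypothetical placeholders, not part of the authors' implementation:

```python
ALPHA = 0.05          # SES smoothing factor
THRESHOLD = 0.4       # anomaly score threshold chosen by the operator
ATTRIBUTES = ["src_ip", "dst_ip", "src_port", "dst_port", "packets"]

predictions = {a: None for a in ATTRIBUTES}   # x̂ per time series

def detection_step(entropy_window, stddev_24h):
    """One iteration of Algorithm 1: returns the aggregated anomaly score δ."""
    errors, stddevs = {}, {}
    for attr in ATTRIBUTES:
        x_t = entropy_window(attr)                               # line 4
        x_hat = predictions[attr] if predictions[attr] is not None else x_t
        errors[attr] = abs(x_hat - x_t)                          # line 6
        predictions[attr] = ALPHA * x_t + (1 - ALPHA) * x_hat    # line 5
        stddevs[attr] = stddev_24h(attr)                         # line 7
    s_max = max(stddevs.values())
    weights = {a: s_max / s for a, s in stddevs.items() if s > 0}       # line 10
    delta = sum(weights.get(a, 0.0) * errors[a] for a in ATTRIBUTES)    # line 12
    if delta >= THRESHOLD:
        print("ALERT: abrupt change, anomaly score =", round(delta, 3))
    return delta
```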
4.2 Choosing Parameters
Several parameters must be set for our algorithm. Each combination of parameters is associated with a trade-off: high detection rates at the cost of many false alarms, or vice versa. The following enumeration lists the configurable parameters and points out strategies to set them.
Sliding window size for entropy analysis: Small windows are very sensitive: on the one hand, this will increase the detection rate; on the other hand, the false alarm rate will increase too. Large sliding windows are comparably insensitive, which leads to the contrary effect. Our sliding window spans 5 minutes, which seems to be an acceptable trade-off.
Sliding window overlapping size for entropy analysis: An overlapping sliding window results in more fine-grained time series. We want our system to react as quickly as possible to abrupt changes. For this reason, we chose to overlap the sliding window by 4 minutes. So every minute, a measurement “looking back” over the last 5 minutes is added to our time series.
Sliding window size for standard deviation: We decided to set the sliding window size for calculating the standard deviation to 24 hours. That way, the sliding window covers an entire seasonal cycle. Basically, we recommend multiples of 24 hours. The next natural choice would be 7 days, which covers an entire week including possible deviations on the weekend.
Smoothing parameter α: In order to use the SES algorithm, one has to come up with the smoothing factor α. As pointed out in more detail in [3], the expression 1 − (1 − α)^n tells how much weight is given to the most recent n observations. One can rearrange this formula to an equation which provides a natural approach to choosing α: α = 1 − (1 − ω)^(1/n). In this form, one can obtain α by specifying how much weight 0 < ω < 1 should be given to the most recent n observations. A small α (i.e., a smoother time series) is less sensitive to high amounts of noise. We set α to 0.05, which means that the last 60 observations (i.e., the last hour in the time series) are assigned a weight of 95%. However, as our experiments suggested, the choice of α should depend on the respective network. So a sound approach is to first have a look at the entropy time series and then decide which α makes sense in practice.
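As a quick illustration of this last choice (plain arithmetic, not part of the authors' tooling):

```python
def smoothing_factor(weight, n):
    """α such that the most recent n observations receive the given total weight."""
    return 1 - (1 - weight) ** (1.0 / n)

alpha = smoothing_factor(0.95, 60)
print(round(alpha, 3))  # ≈ 0.049, which the authors round to 0.05
```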
5 Evaluation
The following sections cover the evaluation of our proposed algorithm. In short, due to the lack of ground truth, we injected handmade anomalies into our flow trace in order to evaluate our algorithm.
5.1 The Data Set
Our primary data set stems from the server network of an ISP. It contains 5 days of unsampled unidirectional network flows: from 25th October 2010 until 30th October 2010. The set holds a total of ∼260 million flows which means that on average, ∼36,000 flows were received every minute. The network was more active during day than during night times, though. The data set was anonymized [6] but preserves a one-to-one mapping of anonymized to real IP addresses. So the traffic distributions are not affected by the anonymization. Figure 3 contains an entropy analysis over one day of our flow trace. The overall anomaly score is shown as black time series at the bottom. The higher the anomaly score, the more abrupt are the aggregated changes in the five time series. One can see that there are a handful of events with a particularly high anomaly score, i.e. 0.4 or higher.
Fig. 3. Entropy time series over one day including anomaly score
5.2 Injection of Synthetic Anomalies
A sound evaluation of an ADS requires data sets for which ground truth is available or can be established. Since we did not have any ground truth per se, we modeled and injected synthetic anomalies into our original data set. For this purpose, we used a modified version of the tool FLAME [2]. FLAME facilitates the injection of hand-crafted anomalies into flow traces. Anomalies can be described by Python scripts which serve as “flow generators”. We implemented two such flow generators:
HTTP DDoS attack: The generated flows represent a synthetic middle-scale DDoS attack launched over HTTP. The DDoS attack lasted for 11 minutes and a total of 500 distributed hosts formed up as attackers. The victim of the attack was a single webserver. The number of flows generated by the attackers follows a normal distribution, since not all attackers have the same bandwidth and can attack at the same scale. As a result of the attack, the webserver was under very high load and could respond only to a small fraction of the HTTP requests. The DDoS attack resulted in ∼220,000 flows which were superimposed on the original data set.
Horizontal network scan: The purpose of this generator was to yield flows which represent a large-scale horizontal network scan. The attacker used a single IP address for scanning and scanned an entire /16-network, which consists of 65,534 valid IP addresses. The purpose of the scan was to find open TCP port 21, i.e., running FTP daemons. The attacker did not use any obfuscation methods such as inter-scan delays. The scan resulted in ∼67,000 flows. Note that worm outbreaks are very similar to network scans when looked at on the network layer: infected computers scan for other vulnerable hosts [8,19].
We injected the two described anomalies into our original data set, which resulted in a new data set containing our handmade anomalies. The anomalies were inserted at 13:00 (DDoS) and at 14:00 (Scan) on October 28th. Both dates were randomly determined.
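To make the idea of such a flow generator concrete, here is a hedged, stand-alone sketch of a horizontal-scan generator; it does not use FLAME's actual interface, and the record layout simply mirrors the flow attributes from Section 3.1:

```python
import random
from ipaddress import IPv4Network

def scan_flows(scanner_ip, target_net="10.20.0.0/16", port=21, start_ts=0, rate=2000):
    """Yield one single-packet TCP flow per scanned address (no inter-scan delay)."""
    for i, host in enumerate(IPv4Network(target_net).hosts()):
        yield {
            "start": start_ts + i / rate,   # seconds; rate = probes per second
            "src_ip": scanner_ip,
            "dst_ip": str(host),
            "src_port": random.randint(1024, 65535),
            "dst_port": port,
            "protocol": 6,                  # TCP
            "packets": 1,
            "bytes": 40,                    # bare SYN probe
        }

# flows = list(scan_flows("192.0.2.1"))   # ~65,534 flows for a /16 target network
```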
5.3 Analysis Results
Figure 4 shows the entropy analysis for the entire day of October 28th. The upper diagram covers the original data set whereas the lower one holds our injected anomalies. Again, both diagrams contain time series for all of our five flow attributes. The black time series holding the anomaly score is the most interesting one. In both diagrams, we highlighted particularly high anomaly scores and labeled them from A to D. Anomaly A and B represent “natural” anomalies which were already present in the original data set.
Fig. 4. Entropy time series over the original (upper) and the modified (lower) flow trace. The marked spots represent anomalies.
It turns out that anomaly A was caused predominantly by two hosts, entitled H1 and H2 . Apparently, H1 initiated thousands of HTTP connections over a longer period of time to H2 . After this communication stopped, the port time series converged and the entropy of the IP distributions increased significantly. Anomaly B was again caused by HTTP communication. A set of multiple distributed hosts sent TCP segments to a single host H3 . The source port of all distributed hosts is 80 and the destination port of H3 a random high port. This phenomenon only lasted for 5 minutes. During this time period, around 30,000 flows were sent to H3 . We presume that the anomaly was caused by a spoofed DoS attack or a misbehaving network device. Anomaly C and D represent our handmade DDoS attack (C) as well as the network scan (D). We also conducted experiments with synthetic anomalies of smaller scale. A network scan covering only a /20 network (i.e. 4,094 addresses) is almost invisible
in our data set and disappears in noise. So depending on the network link, attacks have to reach a certain minimum scale in order to be detectable. Eventually, if the scale of an attack gets large enough, anomalies will show up in widely used flow statistics such as “bytes per minute”. However, this is not the case with the four anomalies we analyzed. Figure 5 illustrates three popular flow statistics: bytes, packets and flows per minute. The diagram was built out of the same flow trace as Figure 4. The anomalies (again, labeled from A to D) disappear in noise.
Fig. 5. Time series of three popular flow statistics; the natural and injected anomalies disappear in noise
5.4 Evasion and Counteractive Measures
For an attacker, the obvious way to evade our detection algorithm is to launch an attack in a slow but continuously increasing way to “stay under the radar” of our algorithm. For example, a large-scale DDoS attack against a web server could be started by sending only a dozen HTTP requests per second. Then, the scale can be increased more and more, up to millions of requests per second. The algorithm will not detect the attack if the observed changes between the single measurement intervals are not significant enough. Our algorithm is capable of detecting such stealth attacks too if applied to a larger time scale. What might be invisible when looking at a time scale of minutes can become obvious on a time scale of hours or days.
5.5 Scalability
Finally, we want to verify our claim that the proposed approach is suitable for analyzing large-scale networks. We implemented a prototype in the form of a patch for the NetFlow analysis tool nfdump [9] (for entropy calculation) and as a plugin for the corresponding frontend NfSen [10] (for visualization). It turned out that our implementation is able to analyze up to 940,000 flows per second.¹ On average, our ISP flow trace contains 600 flows per second (with peaks of up to 2,000 flows/s) whereas the uplink of our university produces around 70 flows/s (with peaks of up to 250 flows/s). Accordingly, our implementation is not even close to its limits.
¹ The measurement was done on a 2.66 GHz Intel Xeon CPU using the ISP flow trace.
6 Conclusion
In this paper, we proposed an algorithm for detecting abrupt changes in network entropy time series. Attacks such as DDoS, worms and scans often lead to such abrupt changes. The main idea of the algorithm is to continuously conduct short-term predictions and determine the difference between the predictions and the actually observed entropy value. The higher the difference, the more abrupt the change is. Our evaluation suggests that the algorithm performs well and is robust against background noise. However, it is important to note that attacks have to reach a certain scale in order to be detectable. Small-scale DDoS attacks might be invisible on a high-speed network link. We believe that our proposed algorithm is a valuable tool for network operators. It is straightforward to configure, fast to deploy and does not need training data. There is much room for future work. First of all, we noticed that the root cause identification of anomalies is often a nontrivial task. The cause (e.g., an SSH brute-force attack) can be quite subtle and difficult to find. It would be of great help if network operators were assisted by automated tools [14]. In addition, future research could cover the adaptation and testing of the algorithm on a larger time scale in order to detect slowly but continuously emerging attacks which would otherwise evade our system. Acknowledgements. We would like to thank Peter Haag for providing us with flow traces. Also, we want to thank the anonymous reviewers for providing us with helpful comments which improved the quality of this paper.
References 1. Barford, P., Kline, J., Plonka, D., Ron, A.: A Signal Analysis of Network Traffic Anomalies. In: Proc. of the 2nd ACM SIGCOMM Workshop on Internet Measurement, IMW 2002, pp. 71–82. ACM, New York (2002) 2. Brauckhoff, D., Wagner, A., May, M.: FLAME: A Flow-Level Anomaly Modeling Engine. In: Proc. of the Conference on Cyber Security Experimentation and Test, pp. 1–6. USENIX Association, Berkeley (2008) 3. Brutlag, J.D.: Aberrant Behavior Detection in Time Series for Network Monitoring. In: Proc. of the 14th USENIX Conference on System Administration, pp. 139–146. USENIX Association, Berkeley (2000) 4. Cisco Systems, http://www.cisco.com/web/go/netflow 5. Claise, B.: Specification of the IP Flow Information Export (IPFIX) Protocol for the Exchange of IP Traffic Flow Information. RFC 5101 (Proposed Standard) (January 2008), http://www.ietf.org/rfc/rfc5101.txt 6. Fan, J., Xu, J., Ammar, M.H., Moon, S.B.: Prefix-Preserving IP Address Anonymization: Measurement-based Security Evaluation and a New Cryptographybased Scheme. Computer Networks 46(2), 253–272 (2004)
7. Feinstein, L., Schnackenberg, D., Balupari, R., Kindred, D.: Statistical Approaches to DDoS Attack Detection and Response. In: DARPA Information Survivability Conference and Exposition, vol. 1, pp. 303–314 (2003) 8. Fitzgibbon, N., Wood, M.: Conficker.C – A Technical Analysis. Tech. rep., Sophos Inc. (2009) 9. Haag, P.: NFDUMP, http://nfdump.sourceforge.net 10. Haag, P.: NfSen, http://nfsen.sourceforge.net 11. Lakhina, A., Crovella, M., Diot, C.: Mining Anomalies Using Traffic Feature Distributions. In: Proc. of the 2005 Conference on Applications, Technologies, Architectures, and Protocols for Computer Communications, SIGCOMM 2005, pp. 217–228. ACM, New York (2005) 12. Lee, W., Xiang, D.: Information-Theoretic Measures for Anomaly Detection. In: Proc. of the 2001 IEEE Symposium on Security and Privacy, pp. 130–143. IEEE Computer Society, Washington, DC (2001) 13. Nychis, G., Sekar, V., Andersen, D.G., Kim, H., Zhang, H.: An Empirical Evaluation of Entropy-based Traffic Anomaly Detection. In: Proc. of the 8th ACM SIGCOMM Conference on Internet Measurement, IMC 2008, pp. 151–156. ACM, New York (2008) 14. Silveira, F., Diot, C.: URCA: Pulling out Anomalies by their Root Causes. In: Proc. of the 29th Conference on Computer Communications, INFOCOM 2010, pp. 722–730. IEEE Press, Piscataway (2010) 15. Sommer, R., Paxson, V.: Outside the Closed World: On Using Machine Learning for Network Intrusion Detection. In: IEEE Symposium on Security and Privacy, pp. 305–316 (2010) 16. Sperotto, A., Schaffrath, G., Sadre, R., Morariu, C., Pras, A., Stiller, B.: An Overview of IP Flow-Based Intrusion Detection. IEEE Communications Surveys Tutorials 12(3), 343–356 (2010) 17. Tellenbach, B., Burkhart, M., Sornette, D., Maillart, T.: Beyond Shannon: Characterizing Internet Traffic with Generalized Entropy Metrics. In: Moon, S.B., Teixeira, R., Uhlig, S. (eds.) PAM 2009. LNCS, vol. 5448, pp. 239–248. Springer, Heidelberg (2009) 18. Wagner, A., Plattner, B.: Entropy Based Worm and Anomaly Detection in Fast IP Networks. In: Proc. of the 14th IEEE International Workshops on Enabling Technologies: Infrastructure for Collaborative Enterprise, pp. 172–177. IEEE Computer Society, Washington, DC (2005) 19. Weaver, N., Paxson, V., Staniford, S., Cunningham, R.: A Taxonomy of Computer Worms. In: Proc. of the 2003 ACM Workshop on Rapid Malcode, WORM 2003, pp. 11–18. ACM, New York (2003)
Motif-Based Attack Detection in Network Communication Graphs Krzysztof Juszczyszyn and Grzegorz Kołaczek Institute of Informatics, Faculty of Computer Science and Management, Wroclaw University of Technology, Wybrzeże Wyspiańskiego 27, 50-370 Wrocław, Poland {Krzysztof.Juszczyszyn,Grzegorz.Kolaczek}@pwr.wroc.pl
Abstract. We propose an original approach which allows the characterization of network communication graphs with network motifs. As an example, we verified our approach by applying network topology analysis methods to communication graphs. We have tested our approach on simulated attacks inside a scale-free network and on data gathered in real networks, showing that the motif distribution reflects changes in the communication pattern and may be used for the detection of ongoing attacks. We have also noticed that the communication graphs of the real networks show a distinctive motif profile.
1 Introduction
The most intensively explored approach to the detection of unknown threats is anomaly detection. We first give a brief overview of some earlier traffic anomaly detection methods. The earliest anomaly detection-based approach, proposed by Denning, employs statistics to construct a point of reference for system behavior. The training of an anomaly detection sensor is accomplished by observing specific events in the monitored environment, such as system calls, network traffic or application usage, over a designated time period [12]. The basic problem is what method should be used to measure deviation. Examples of statistical anomaly detection are Haystack [13], the Intrusion Detection Expert System (IDES) [14] and the Next-Generation Intrusion Detection Expert System (NIDES) [15]. Machine learning techniques focus on building a system that improves its performance based on previous results. This type of anomaly detection involves learning the behavior and recognizing significant deviations from the normal [16]. Other frequently used machine learning techniques are the sliding window method and Bayesian network-based methods, which are often used in conjunction with statistical techniques [17,18]. Another approach is principal components analysis (PCA), which aims to make the anomaly detection process more efficient; PCA allows the complexity of the analysis to be reduced [19]. To eliminate the manual and ad hoc elements, researchers are increasingly looking at using data mining techniques for anomaly detection [19,20]. This paper proposes an original approach of applying motif analysis to the characterization of network communication graphs. To the best of the authors’ knowledge, this is a novel approach to anomaly detection in Internet traffic.
2 Network Motifs as Local Topology Patterns
2.1 Network Motifs
A biased distribution of local network structures, a.k.a. network motifs, is widely observed in complex biological or technology-based networks [8]. The motifs are small (usually 3 to 7 nodes in size) subgraphs that occur far more (or less) often than in equivalent (in terms of the number of nodes and edges and node degree distribution) random networks [7]. The statistical significance of motif M is usually defined by its Z-score measure Z_M:
Z_M = (n_M − n̄_M^rand) / σ_M^rand        (1)

where n_M is the frequency of motif M in the given network, and n̄_M^rand and σ_M^rand are the
Fig. 1. All possible 3-node directed triads
There are 13 different motifs that consist of three nodes (Fig. 1). Their ID=1,2,…,13 are used in the further descriptions interchangeably with the corresponding abbreviations M1, M2,…, M13. The vector of 13 Z-scores of 3-node motifs measured for the network under consideration will be called (following [8]) a Triad Significance Profile (TSP). The TSP constitutes the individual profile of the network, showing its tendency to form triads of the given type [9]. Our experiments with motif detection were performed with FANMOD tool dedicated for motif detection in large networks [12][13]. If not stated otherwise, our sets of reference networks always consisted of the 1000 random networks used for structure comparison.
208
K. Juszczyszyn and G. Kołaczek
3 Experiment 1: Seq. Scanning (SS) and Hit-List (HL) Attacks 3.1 Attack Patterns Internet worms are programs that self-propagate across a network exploiting security or policy flaws in widely-used services. There are two major classes of them, scanbased worm” and email worms. Email worms propagate through emails and they do it relatively slowly; scan-based worms propagate by generating IP addresses to scan vulnerable target, in result the act much faster - Slammer worm in January 2003 infected 90% of vulnerable computers in the Internet within just 10 minutes [11]. In this paper, we concentrate on scan-based worms. Internet worms use many scanning strategies [18]. The basis of our Internet worm modelling is the classical epidemic model [12]. The experimental testbed is assumed to be a homogeneous network — any infectious host has the equal probability to infect. Once a host is infected by a disease, it is assumed to remain in this state. We simulate: 1. 2.
Sequential Scanning: This scenario lets each newly infected system choose a random addresses and, then scans sequentially from there. Hit-list worm: A hit-list worm first scans and infects all vulnerable hosts on the hit-list first, then randomly scans the entire network to infect others. It means that at first, worm tries to infect the hosts which previously communicated with the infected node.
The following parameters were assumed for our experiments: • • •
N=1000: The total number of host hosts in experimental network. V=30: The population of vulnerable hosts in experimental network, the number of vulnerable hosts is approximately 3% of all population [10]. I=100: Average scan rate. The average number of scans an infected host sends out per unit time (time window).
We have assumed relatively high scan rate, in order to track short time, massive attacks. The normal communication for experimental network has been modeled using Barabási scale-free network model [13] with γ = 3; 1000 nodes and 1971 edges. Communication is being observed in time windows. We have modelled the perturbations during normal network operation - 25% of links disappear and emerge between time windows. The network consisted of 1000 nodes and a subset of the initial set of 1971 edges, with average value 1210 edges. During experiments the worm related communication patterns has been added to these “normal” patterns according to the abovementioned Sequential-scanning and Hit-list scenarios. 3.2 The Results for SS and HL Attacks First, we have computed the Triad Significance Profiles (see sec. 2.1) for the test network working at normal conditions. As our test network is generated a random procedure (preferential attachment), all the Z-scores should be close to 0, which is exactly the result we have obtained: the absolute values of all of them were below 3 with the missing Z-scores for motifs M11-M13, especially M13 (if a subgraph is not found in the network, its Z-score is undefined). Typically, this concerns subgraphs
Motif-Based Attack Detection in Network Communication Graphs
209
which contain many edges, hardly found in networks generated by random schemes. Next, we have checked the TSP for the network under attack. We have assumed an infection of single host during T=1, so the first results of an attack are visible in T=2. The TSPs during an SS attack are shown on the Fig.2. Note the visible difference in the values of most Z-scores, especially the appearance of dominant value for M13. During a worm attack, the communication graph becomes more dense, as the worm communicates randomly with potential victims. The following characterize the network under attack: • •
Growing values of Z-scores of all the motifs (each edge added to the network may constitute several new subgraphs of the size 3, so we experience growing values in the entire TSP). The appearance of Z-scores of the motifs M12-M13, with the value for M13 being dominant in the TSP.
The simulation of an HL attack leads to the similar results (Fig.3). The result for M13 is still significant, with additional appearance of M11 (which may be associated with probing two communicating nodes by the worm). 80 60 T=2
40
T=3
20
T=4
0
T=5
-20 -40 Fig. 2. TSPs for 4 consecutive phases of SS attack
As the motifs M11-M13 show maximum link density, we may conclude that in sparse graphs an attack leads to the emergence of new links which are responsible for the high values of their Z-scores. Next we have checked the sampling strategy. For TSPs the sampling provides much faster way to obtain them. Network sampling procedure used the approach proposed in [5] and was based on checking the neighbourhood of randomly chosen network nodes. It [5] it was shown that it may be enough to check around 10% of existing triads to build a TSP with good accuracy. The results below show the TSPs for the two already discussed attacks (we have used exactly the same data) – only 10% of existing triads were checked. The results of assessing the TSPs with network sampling are presented on the Figs 4 and 5 and should be interpreted in the same way as those from Figs 2 and 3.
210
K. Juszczyszyn and G. Kołaczek
100 80
M13
M12
M11
M9
M10
-40
M8
T=5
-20
M7
0 M6
T=4 M5
20 M4
T=3
M3
40
M2
T=2
M1
60
Fig. 3. TSPs for 4 consecutive phases of HL attack
30 25 20 15 10 5 0 -5
T=2 T=3 T=4 T=5
Fig. 4. TSPs for 4 phases of SS attack. – estimation with network sampling algorithm
100 80 60
T=2
40
T=3
20
T=4
0
T=5
-20
Fig. 5. TSPs for 4 phases of HL attack. – estimation with network sampling algorithm
The consecutive phases of attacks are visible – the abnormal Z-scores of M13 and, partially, M12 show that the communication graphs are deformed. However, the measured Z-scores are lower, due to sampling accuracy. Additionally, in the case of SS attack, its profile is not detected for T=2 (Fig.4).
Motif-Based Attack Detection in Network Communication Graphs
211
4 Experiment 2: Network Logs 4.1 Experimental Data In this part we are dealing with the communication graph built from the real network data gathered in short periods of time. This has serious consequences in the context of structural analysis of these graphs. During short intervals we observe a broadcast-type communication, which results in graphs reach in hubs and isolated nodes. Additionally, the nodes often change their roles – the network hub may become isolated node in the next time window. In result, the typical structural network measures are not effective. For this experiment, traffic logs have been taken from MAWI (Measurement and Analysis on the WIDE Internet) database samplepoint-D [22]. The data include week-long record (from 25-31.01.2009) among more than 3000 IPv6 nodes. 4.2 The TSP Change during the DDoS Attack Contrary to the former experiment, the data for this one were collected during short periods of time, which resulted in extremely sparse graphs. Fig.6 presents the TSPs for the networks based on the communication data. The Z-scores are low, however, in the case of real network their values are not random-like (this happens only when the network is generated by randomly-driven algorithm) but show a distinctive pattern (always negative scores of M5 and M8, positive for M4 and M7 etc.) which feature will be later used in more detailed analysis during our research.
15
1 2
10
3
5
4 5
0 -5
M1 M2 M3 M4 M5 M6 M7 M8 M9 M10 M11 M12
6 7 ddos1
-10
ddos2
-15 Fig. 6. The TSPs of networks during normal operation and a DDoS attack
The attack data (series ddos1 and ddos2 on Fig.6) were analyzed for time windows of the same length as normal network data (series 1-7). Motif M13 was not detected, which can be explained by the sparsity of the network. The TSPs for attack networks are not distinguishable for M1-M10, but there is a difference for M11 and M12. The
212
K. Juszczyszyn and G. Kołaczek
occurrences of these motifs were not found in normal networks, they appear only in the case of the attacks. Moreover, the Z-score of M11 is negative, showing its frequency below average, the M12 Z-score is positive. Their values constitute a fingerprint of the attack, visible in the Z-score. The last experiment was to apply network sampling to the DDoS attack data, but in this case, due to the sparsity of the network and the number of links hardly exceeding the number of nodes, the sampling procedure did not return any significant results.
5 Conclusions and Future Work We have presented an original approach which allows the characterization of network communication graphs with the network motifs. We have tested our approach on a simulated attack inside a scale-free network showing that the TSPs reflect the changes in communication pattern and may be used for the detection of ongoing attacks. In the next step we have evaluated our method on the logs collected during real attacks, with additional restriction of very short time windows, for which the networks were created. The above results are preliminary and open a way for further development of our method: • • • • •
Discovering the attack type by the network TSP analysis. Characterizing the TSP of networks during normal operation. Checking the possibilities of applying sampling procedures to attack detection. Merge the sampling idea with a model of distributed multiagent system dedicated for attack detection. Checking the approach in various classes of networks (WAN, LAN, wireless…).
The above concepts will be also tested in real environments starting from the Wroclaw University of Technology network and its network security laboratory. Acknowledgements. This work was supported by the Polish Ministry of Science and Higher Education, grant no. N N516 518339.
References 1. Barabasi, A.-L., Albert, R.: Emergence of scaling in random networks. Science 286, 509– 512 (1999) 2. Chung-Yuan, H., Chuen-Tsai, S., Chia-Ying, C., Ji-Lung, H.: Bridge and brick motifs in complex networks. Physica A 377, 340–350 (2007) 3. Itzkovitz, S., Milo, R., Kashtan, N., Ziv, G., Alon, U.: Subgraphs in random networks. Physical Review E 68, 026127 (2003) 4. Kashtan, N., Itzkovitz, S., Milo, S., Alon, R.,, U.: Efficient sampling algorithm for estimating subgraph concentrations and detecting network motifs. Bioinformatics 20(11), 1746–1758 (2004) 5. Mangan, S., Alon, U.: Structure and function of the feedforward loop network motif. PNAC 100(21), 11980–11985 (2003) 6. Mangan, S., Zaslaver, A., Alon, U.: The coherent feedforward loop serves as a sign-sensitive delay element in transcription networks. J. Molecular Biology 334, 197–204 (2003)
Motif-Based Attack Detection in Network Communication Graphs
213
7. Milo, R., Itzkovitz, S., Kashtan, N., Levitt, R., Shen-Orr, S., Ayzenshtat, I., Sheffer, M., Alon, U.: Superfamilies of evolved and designed networks. Science 303(5663), 1538–1542 (2004) 8. Milo, R., et al.: Network motifs: simple building blocks of complex networks. Science 298, 824–827 (2002) 9. Shen-Orr, S., Milo, R., Mangan, S., Alon, U.: Network motifs in the transciptional regualtion network of Escherichia coli. Nat. Genet. 31, 64–68 (2002) 10. Wernicke, S.: Efficient detection of network motifs. IEEE/ACM Transactions on Computational Biology and Bioinformatics 3(4), 347–359 (2006) 11. Wernicke, S., Rasche, F.: FANMOD: a tool for fast network motif detection. Bioinformatics 22(9), 1152–1153 (2006) 12. Smaha, S.E.: Haystack: An intrusion detection system. In: IEEE Fourth Aerospace Computer Security Applications Conference, Orlando, FL, pp. 37–44 (1988) 13. Anderson, D., et al.: Next generation intrusion detection expert system (NIDES), SRI International, USA, TR SRI-CSL-95-0 (1994) 14. Kruegel, C., Mutz, D., Robertson, W., Valeur, F.: Bayesian event classification for intrusion detection. In: 19th CSA Conference, Las Vegas, NV (2003) 15. Cohen, W.W.: Fast effective rule induction. In: Proceedings of the 12th International Conference on Machine Learning, USA, pp. 115–123 (1995) 16. Warrender, C., Forrest, S., Pearlmutter, B.: Detecting intrusions using system calls: alternative data models. In: IEEE Symposium on Security and Privacy, Oakland, CA, USA, pp. 133–145 (1999) 17. Valdes, A., Skinner, K.: Adaptive model-based monitoring for cyber attack detection. In: Debar, H., Mé, L., Wu, S.F. (eds.) RAID 2000. LNCS, vol. 1907, pp. 80–92. Springer, Heidelberg (2000) 18. Shyu, M.L., Chen, S.C., Sarinnapakorn, K., Chang, L.: A novel anomaly detection scheme based on principal component classifier. In: IEEE Foundations and New Directions of Data Mining Workshop, Melbourne, FL, USA, pp. 172–179 (2003) 19. Lee, W., Stolfo, S.J., Mok, K.W.: A data mining framework for building intrusion detection models. In: IEEE Symposium on Security and Privacy, Oakland, CA, pp. 120– 132 (1999) 20. Ramaswamy, S., Rastogi, R., Shim, K.: Efficient algorithms for mining outliers from large data sets. In: ACM SIGMOD, Dallas, USA, pp. 427–438 (2000) 21. Breunig, M., Kriegel, H.-P., Ng, R.T., Sander, J.: LOF: identifying density-based local outliers. In: Proceedings of the ACM SIGMOD International Conference on Management of Data, Dallas, TX, pp. 93–104 (2000) 22. Juszczyszyn, K., Musial, K., Kazienko, P., Gabrys, B.: Temporal Changes in Local Topology of an Email-Based Social Network. Computing and Informatics 28(6), 763–779 (2009) 23. MAWIlab, http://mawi.wide.ad.jp/mawi/
Secure Negotiation for Manual Authentication Protocols Milica Milutinovic1 , Roel Peeters2 , and Bart De Decker1 1 2
K.U. Leuven, Dept. of Computer Science, DistriNet/SecAnon [email protected] K.U. Leuven, Dept. of Electrical Engineering - ESAT, COSIC [email protected]
Abstract. In this paper we propose a protocol that allows users operating mobile wireless personal devices to agree on a manual authentication protocol to use in a fair and secure way in order to bootstrap secure communication. Our protocol also has adjustable level of security and a variant of it is applicable to low-end devices with constrained user interfaces. Keywords: Ad hoc Association, Fair Negotiation, Manual Authentication.
1
Introduction
Mobile ad hoc networks, which are formed by spontaneous peer-to-peer associations between devices with no prior context, are brought into the spotlight as they provide an efficient solution for communication of mobile devices. However, the flexibility of ad hoc networks introduces a new dimension in securing this communication process. Since the associations are often established in a constrained environment, the existence of a Trusted Third Party (TTP), a Public Key Infrastructure (PKI) or on-line Certificate Authorities (CAs) cannot be assumed. In addition, the association is spontaneous, which means that devices do not have a prior security context. Therefore, the use of traditional means for establishing secure communication is very limited. A possible alternative is the deployment of an authenticated Out-Of-Band (OOB) channel for key exchange or establishment. The OOB channel is considered to have a low bandwidth and is used for transferring only a small amount of authenticated data. User involvement is unavoidable in establishing such an auxiliary channel. Protocols in which a user is considered to be an authenticated channel are denoted as manual authentication protocols. These OOB channels are established using human senses, such as sight, hearing or touch. By employing the usercontrolled channel, one can bootstrap secure communication between devices, assuming that the security can be reduced to authenticated key agreement. The OOB channel is therefore used to authenticate the data exchanged over the main, insecure channel or to establish an authenticated shared secret. During the last B. de Decker et al. (Eds.): CMS 2011, LNCS 7025, pp. 214–221, 2011. c IFIP International Federation for Information Processing 2011
Secure Negotiation for Manual Authentication Protocols
215
decade, many manual authentication techniques have been proposed, differing in the type of user involvement and required devices’ interfaces. To the best of our knowledge, this is a first work that focuses on secure and fair negotiation between previously unfamiliar devices for which manual authentication protocol to use. Since devices can differ in capabilities and available interfaces, it is necessary for them to agree on a manual authentication method prior to the association. In addition, users operating the associating devices can have different preferences regarding the type of their involvement and the desired level of security. Therefore, it is necessary to ensure fairness, i.e. that the preferences of both users are equally valued. Secure negotiation is also important, as an adversary could perform a MITM attack in order to persuade devices in agreeing on a less secure authentication method or a method that uses a specific medium as an OOB channel, which he has the ability to jam. This paper seeks to address these issues. It describes a protocol that allows associating devices to agree on a manual authentication method in a fair and secure way. An evaluation of the claimed properties is also provided.
2
Manual Authentication Protocols
The communication between personal mobile devices is usually performed over the wireless link since many devices are already equipped with appropriate transceivers and no additional equipment is required. However, wireless communication is inherently vulnerable to attacks. Since the receiving device cannot be assured of the sender’s identity, a Man-In-The-Middle (MITM) attack a serious security threat. In order to protect from eavesdropping and prevent an attacker from obtaining sensitive information exchanged over the wireless link, devices can exchange encrypted messages. Even though this ensures confidentiality of the exchanged data, security reduces to secure key establishment. There is no straightforward way to establish or exchange keys in a secure way, since devices cannot be assumed to have a prior trust relationship or share a common point of trust, such as a Trusted Third Party (TTP), a Public Key Infrastructure (PKI) or a Certificate Authority (CA). In order to set up secure key establishment, we have to consider MITM attacks, where the attacker does not only obtain sensitive data exchanged between the legitimate parties, but can also modify them unobserved. For example, let us assume that devices are using each other’s public keys for data encryption. Recalling that there are no TTPs and that the devices don’t have a prior context, it means that they cannot check whether or not they hold a legitimate public key. Furthermore, the alternative of establishing shared symmetric keys (i.e. Diffie-Hellman key exchange protocol [3]) is also prone to attacks, as the adversary can intercept the exchanged messages in such a way that it establishes a secret key with each of the devices. Since the attack can be unobservable by the legitimate communicating partners, complete communication can be performed over the adversary who can read, modify, create or drop messages and still remain undetected.
216
M. Milutinovic, R. Peeters, and B. De Decker
In order to tackle these problems, an auxiliary low-bandwidth channel is used for exchanging authenticated data. It is intended for authenticated exchange of key derivation parameters, public keys or secret data that is subsequently used for key authentication. However, the inherent properties of the ad hoc mobile networks require user involvement for establishing such a channel. The idea is to make use of human sensory capabilities as a means of authenticating transferred data. Therefore, those protocols are noted as manual authentication protocols. This idea was put forward by Stajano and Anderson [16]. They investigated secure association of devices with low computational capabilities with trusted devices, using physical contact, as a location-limited channel, for exchange of secret keys. Authenticity of data is provided as there is no ambiguity about which devices are associating. The first manual authentication protocol was proposed by Balfanz et al. [1]. In their protocol, devices exchange their public keys over the insecure wireless channel and hashes of those public keys over an authentic OOB channel, in order to bootstrap secure communication. Devices would check the authenticity of the received keys by checking whether the received hash values correspond to the received keys. With this approach, the wireless channel can be considered to be under complete control of an adversary and only messages sent over the OOB channel would need to be authentic. Their suggested candidates for locationlimited, OOB channels are contact, infrared signals and sound. Later approaches are based on these ideas. 2.1
Examples of Manual Authentication Protocols
Strings. Some of the early approaches introduced protocols based on short numerical strings. Examples are the four MANA protocols [4] and SAS-based protocols [17], [7]. Authentication of devices is achieved by manually transferring short data strings from one device to the other, entering strings on both devices or by manually comparing strings that are output of the two devices. Images. Values such as public keys or their hashes are coded into images compared by the user [10], or barcodes which devices with cameras can capture and verify [9]. Since these protocols require devices to have high resolution displays, other approaches employ simple LEDs for emitting visual patterns to be compared [12]. Audio. Audio is considered to be a pervasive interface and is therefore a good candidate for associating low end devices [14], [5], [11]. Authenticity of the channel lies in the fact that users can easily verify the source of the signal. Location. These protocols rely on using RF signals or ultrasound for measuring the distance of the communicating partner and therefore provide assurance that the communication is carried out between legitimate devices [6], [2], [13]. Movement. Some recent approaches introduce movement imposed by the user as a means to create a shared secret. Imposing movement on associating devices by shaking them together or simultaneous button presses in order to create shared secret data was discussed in [8] and [15].
Secure Negotiation for Manual Authentication Protocols
217
It is obvious that different techniques require different device interfaces and user interaction. In addition, they provide different security levels. Some of the techniques are also not applicable to all user groups such as users with impaired hearing or vision. The choice of the appropriate method depends on the context and the users controlling the devices at the time of association.
3
The Negotiation Protocol
Major differences between the auxiliary channels introduced in the previous section are in terms of required device interfaces, ranging from widespread interfaces like a single button or audio to very uncommon ultrasonic transceivers. Therefore, devices need to agree on a manual authentication method to use in order to bootstrap secure communication. In this section we present a protocol that provides secure and fair negotiation on which manual authentication protocol to use. When two devices want to associate securely they have to perform a pairing procedure which typically consists of three phases. In the first, discovery phase, devices exchange their identifiers. The second phase comprises of a pairing protocol where the devices establish keys that will be used to secure subsequent communication. In the final phase, the entities that exchanged messages in the pairing protocol are authenticated to each other. In mobile ad hoc networks, the devices authenticate each other using a manual authentication protocol. We propose an extension of the procedure above: an additional phase, executed before the authentication phase, in which the devices run the negotiation protocol. The negotiation protocol consists of several phases. In the first phase the user is prompted to create a list of preferred manual authentication methods. In the second phase both devices commit to these lists and subsequently reveal them. In the final phase the manual authentication method is selected and the users can verify the correctness of the protocol. An overview of the protocol is given in Fig. 1. In the next section, we evaluate the fairness and security properties of the proposed protocol. We will now discuss each phase in more detail. 3.1
List Creation
Each device prompts the user what actions he is willing to perform. Examples of the offered options are: ‘Compare Strings’, ‘Transfer Strings’, ‘Compare Audio Sequences’ or ‘Align Devices’. The user grades the actions he is willing to perform, according to his preferences. This will result in an ordered list of possible manual authentication methods (listA and listB , respectively). If a device has only limited capabilities and no display, it is assumed that it has a predetermined list of possible methods, as the user cannot be prompted about the desired methods. 3.2
List Exchange
Each device will first commit to its ordered list. After receiving the commitment from the other device, it will reveal its list. We will now discuss these two steps. Committing to these ordered lists prevents the party that receives the list first from adjusting its own according to the other party’s preferences. The ordered
218
M. Milutinovic, R. Peeters, and B. De Decker
"$%#
"$%# !
!
hA = h(listA || nonceA) hB = h(listB || nonceB) mA = listA || nonceA & ! mB = listB || nonceB & !
& !
& ! Verify eA = eB
Fig. 1. Negotiation protocol
lists are appended with a random nonce (nonceA and nonceB , respectively). These concatenated values are hashed using a hash function h and devices exchange the hash values, thereby committing to the lists without revealing them. Since the number of possible ordered lists is limited, appending a nonce to the list is important. This will prevent the other party from deriving the actual list from its hash value by creating look-up tables. Following the exchange of commitments, each device sends its list and nonce in clear. After receiving the list and the nonce from the other device, they check whether the received bitstrings indeed correspond to the received commitments. If this check fails, the device will abort the protocol and alert the user. 3.3
3.3 Method Selection
For each common method on the two lists, the grades are aggregated and the method with the greatest sum is selected. If there are multiple such methods,
the protocol providing the best security-usability properties is chosen. Therefore, there should be universal agreement on the prioritization of methods. Additionally, the user can verify the correctness of the protocol, as the complete exchange of messages is performed over the wireless, insecure channel. For this purpose, the devices calculate a bitwise sum of the received and the generated commitment. The resulting bitstring is encoded for the users to compare (eA and eB, respectively). If one of the devices does not have the means to perform this verification due to a lack of interfaces, this step is not performed. In this case, the benefit of the protocol lies in the fact that an adversary cannot guess the chosen methods before committing to his own, which reduces his success probability. As an additional security measure, a device that receives a list with only one method from the other device will alert the user and reveal the list. This discourages cheating parties from selecting only their most preferred method in order to persuade the communicating party to agree on it. If one method was indeed the only possibility, this will be confirmed by the sending user.
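A possible realisation of the selection and user-verification logic is sketched below; the grading convention (a higher grade means more preferred), the tie-breaking priority list and the length of the encoded check value are illustrative assumptions, not part of the paper's specification.

import hashlib

def select_method(graded_a, graded_b, priority):
    """graded_a / graded_b map method name -> grade (higher = more preferred).
    'priority' is the assumed universal ranking used to break ties."""
    common = set(graded_a) & set(graded_b)
    if not common:
        return None                                    # no common method: abort
    return max(common, key=lambda m: (graded_a[m] + graded_b[m],
                                      -priority.index(m)))

def check_value(h_own, h_received, length=4):
    """Bitwise sum (XOR) of the two commitments, truncated and encoded
    for the users to compare on both devices (eA and eB)."""
    xored = bytes(a ^ b for a, b in zip(h_own, h_received))
    return xored[:length].hex()

prefs_a = {"Compare Strings": 3, "Align Devices": 1}
prefs_b = {"Compare Strings": 2, "Align Devices": 2}
order = ["Compare Strings", "Align Devices"]           # assumed universal prioritization
method = select_method(prefs_a, prefs_b, order)        # -> "Compare Strings" (sum 5 vs 3)

# Both devices obtain the same check value if no commitment was tampered with:
h_A = hashlib.sha256(b"listA||nonceA").digest()
h_B = hashlib.sha256(b"listB||nonceB").digest()
assert check_value(h_A, h_B) == check_value(h_B, h_A)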
4 Evaluation
In this section we will discuss and evaluate the fairness and security properties of the proposed negotiation protocol.

4.1 Fairness
We define fairness as the property that the preferences of all parties are equally respected. Informally speaking, this makes it impossible for one party to persuade the other into accepting the authentication method it prefers. For example, to persuade the other device to accept its preferred method, a device could state that this is the only possible method. Since its list is revealed, this option can be ruled out as a possibility. Another option is to create a list of methods that would have only one match with the received list. In order to do so, the device would have to guess the list of the other device before creating its own. This is prevented by having the devices first commit to their lists before revealing them. As discussed before, appending a nonce to the list prevents a cheating party from creating a look-up table for the limited number of possible lists. Since the size of the random nonce, n, is chosen to be sufficiently large, a brute-force search is infeasible in this case. Another important parameter is the length of the hash function output. For an h-bit hash value, finding a collision (two different inputs having the same hash value) requires on the order of 2^(h/2) hash evaluations. In this protocol, if a cheating party manages to find two lists that hash to the same value, it would be able to choose which one to reveal after receiving the other party's choices. Therefore, the size of the hash value should be sufficiently large, so that a cheating party has only a negligible probability of finding a collision.
4.2 Security
We define security against an adversary that succeeds in having the two devices successfully complete the protocol on different inputs. More specifically, for listA the list of preferences sent by device A and listA' the list of preferences of device A as received by device B (and vice versa), the probability that devices A and B complete the protocol successfully while listA ≠ listA' or listB ≠ listB' should be negligible. Informally speaking, the protocol should be secure against Man-In-The-Middle (MITM) attacks. There are two motives for a MITM attack. Firstly, the adversary can try to persuade the devices to agree on a less secure authentication method, which would increase his probability of a successful attack. Secondly, if he has the means to jam one type of OOB channel, he could try to persuade the parties to agree on the authentication method that uses that specific channel. That would allow him to perform a Denial-of-Service (DoS) attack and disable the authentication. Assuming that the main wireless channel is under complete control of the adversary, we can evaluate the probability of a successful attack. We describe the scenario where the attacker replaces the ordered list of device A by his own list. Since he can modify exchanged messages unobserved, instead of forwarding hA to device B, he sends the hash value hA' of his own list and nonce. He then receives hB from device B and forwards a value hB' to device A. If the users compare the bitwise sums of the hash values, hB' = hB ⊕ hA ⊕ hA' needs to hold. This means that the attacker needs to find a message mB' that hashes to hB', i.e., he needs to find a pre-image for the hash function. For an h-bit hash value, the probability of finding a pre-image with a single guess is 2^(-h). Finally, an important feature of the protocol is the fact that every protocol execution is acknowledged by the user, making it impossible for an adversary to unobservably start multiple protocol instances in order to learn the user's preferences and succeed in the attack.
5 Conclusion and Future Work
In this paper, we proposed and analysed a fair and secure protocol for negotiating a manual authentication method as part of bootstrapping secure communication in mobile ad hoc networks. The proposed protocol does not have any special interface requirements and provides adjustable levels of security and usability. Users are allowed to choose the size of the encoded verification value to compare, thereby choosing the desired security level. As future work, a possible improvement of the protocol could provide more flexibility by allowing the devices to download from a server an XML description of the authentication protocols, specifying the required interfaces and the security level they provide, and to update it regularly. Each device would then adjust the list of actions offered to the user according to this file and its interface/hardware capabilities. Thereby, the devices would not need to agree on a predetermined logic by which authentication methods are chosen if several methods obtain the same aggregated grade.
References 1. Balfanz, D., Smetters, D.K., Stewart, P., Wong, H.C.: Talking to strangers: Authentication in ad-hoc wireless networks. In: NDSS (2002) 2. Cagalj, M., Capkun, S., Hubaux, J.-P.: Key agreement in peer-to-peer wireless networks. In: Proceedings of the IEEE Special Issue on Security and Cryptography, vol. 94 (2006) 3. Diffie, W., Hellman, M.E.: New directions in cryptography. IEEE Transactions on Information Theory 22, 644–654 (1976) 4. Gehrmann, C., Mitchell, C.J., Nyberg, K.: Manual authentication for wireless devices. RSA Cryptobytes 7(1), 29–37 (2004) 5. Goodrich, M.T., Sirivianos, M., Solis, J., Soriente, C., Tsudik, G., Uzun, E.: Using audio in secure device pairing. Int. J. Secur. Netw. 4, 57–68 (2009) 6. Kindberg, T., Zhang, K.: Validating and securing spontaneous associations between wireless devices. In: Boyd, C., Mao, W. (eds.) ISC 2003. LNCS, vol. 2851, pp. 44– 53. Springer, Heidelberg (2003) 7. Laur, S., Nyberg, K.: Efficient mutual data authentication using manually authenticated strings. In: Pointcheval, D., Mu, Y., Chen, K. (eds.) CANS 2006. LNCS, vol. 4301, pp. 90–107. Springer, Heidelberg (2006) 8. Mayrhofer, R., Gellersen, H.-W.: Shake well before use: Authentication based on accelerometer data. In: LaMarca, A., Langheinrich, M., Truong, K.N. (eds.) Pervasive 2007. LNCS, vol. 4480, pp. 144–161. Springer, Heidelberg (2007) 9. McCune, J.M., Perrig, A., Reiter, M.K.: Seeing-is-believing: Using camera phones for human-verifiable authentication. In: IEEE Symposium on Security and Privacy, pp. 110–124 (2005) 10. Perrig, A., Song, D.: Hash visualization: a new technique to improve real-world security. In: International Workshop on Cryptographic Techniques and E-Commerce, pp. 131–138 (1999) 11. Prasad, R., Saxena, N.: Efficient device pairing using “Human-comparable” synchronized audiovisual patterns. In: Bellovin, S.M., Gennaro, R., Keromytis, A.D., Yung, M. (eds.) ACNS 2008. LNCS, vol. 5037, pp. 328–345. Springer, Heidelberg (2008) 12. Saxena, N., Ekberg, J.-E., Kostiainen, K., Asokan, N.: Secure device pairing based on a visual channel. In: 2006 IEEE Symposium on Security and Privacy, pp. 306– 313 (2006) 13. Singelee, D., Preneel, B.: Location verification using secure distance bounding protocols. In: IEEE International Conference on Mobile Adhoc and Sensor Systems Conference (November 2005) 14. Soriente, C., Tsudik, G., Uzun, E.: Hapadep: Human-assisted pure audio device pairing. In: Wu, T.-C., Lei, C.-L., Rijmen, V., Lee, D.-T. (eds.) ISC 2008. LNCS, vol. 5222, pp. 385–400. Springer, Heidelberg (2008) 15. Soriente, C., Tsudik, G., Uzun, E.: Beda: Button-enabled device pairing. Cryptology ePrint Archive, Report 2007/246 (2007) 16. Stajano, F., Anderson, R.: The resurrecting duckling: Security issues for ad-hoc wireless networks. In: Malcolm, J.A., Christianson, B., Crispo, B., Roe, M. (eds.) Security Protocols 1999. LNCS, vol. 1796, pp. 172–194. Springer, Heidelberg (2000) 17. Vaudenay, S.: Secure communications over insecure channels based on short authenticated strings. In: Shoup, V. (ed.) CRYPTO 2005. LNCS, vol. 3621, pp. 309– 326. Springer, Heidelberg (2005)
A Secure One-Way Authentication Protocol in IMS Context Mohamed Maachaoui1,2, Anas Abou El Kalam1,2, and Christian Fraboul1 1
Université de Toulouse, IRIT-ENSEEIHT. Toulouse, France 2 Université Cadi-Ayyad, ENSA. Marrakesh, Morocco {mohamed.maachaoui,anas.abouelkalam, christian.fraboul}@enseeiht.fr
Abstract. The IMS (IP Multimedia Subsystem) architecture is the key control element for next generation networks (NGN). IMS gives network operators the opportunity to extend their services, including voice and multimedia communications, and to deliver them in new environments with new goals. Its security is paramount, especially authentication. In IMS, authentication is divided into two phases: a PS (Packet-Switch) domain-level authentication with the 3GPP-AKA protocol, and a second authentication at the IMS level using the IMS-AKA protocol. The latter is based on 3GPP-AKA, which leads to a large duplication of steps between the two phases. Some works have tried to reduce this duplication and increase the efficiency of IMS-AKA, but they add new vulnerabilities to IMS-AKA. The aim of this paper is to solve the security problems of IMS-AKA while maintaining good efficiency.

Keywords: Authentication, IMS, IMS-AKA, 3GPP-AKA, SIP, Diffie-Hellman.
1 Introduction

The move toward an all-IP architecture for service delivery appears to be a strong trend. In this context, customers seem to desire access to personalized, interactive multimedia services, on any device and anywhere. This trend introduces new requirements for network infrastructures. The IP Multimedia Subsystem (IMS) is seen as a promising solution for fulfilling these expectations. IMS refers to a functional architecture for multimedia service delivery based upon Internet protocols. Its aim is to merge the Internet and cellular worlds in order to enable rich multimedia communications [1, 2]. It is specified by the 3rd Generation Partnership Project (3GPP). IMS is intended to be “access agnostic”, which means that service delivery should be independent of the underlying access technology. Thus, the use of open Internet protocols is specified in IMS for better interoperability. In Next Generation Networks (NGN), IMS has become the core of control, converging multiple access modes. Based on IMS, ubiquitous services will be implemented easily. Therefore, IMS is expected to become the favorite solution for fixed and mobile multimedia providers, but also one of attackers' favorite targets. Consequently, strong and complex security services and mechanisms are needed to implement a robust security framework.
One of the important needs and requirements is to ensure mutual authentication between users and the network. To do this, IMS defines the authentication protocol AKA (Authentication and Key Agreement) [3], which is based on the 3GPP-AKA protocol [4] and has a similar security level. IMS-AKA is based on SIP (Session Initiation Protocol) [5] and Diameter [6, 7, 8]. When a User Equipment (UE) wants to access IMS services, it must pass two authentications: (1) Packet-Switch (PS) level authentication using the 3GPP-AKA protocol, called the packet-switch domain authentication, and (2) IMS level authentication using the IMS-AKA protocol. IMS-AKA reuses the same concept and principles as 3GPP-AKA. Both the PS and IMS authentications are necessary for an IMS subscriber: if only the PS domain authentication is used, an attacker can impersonate other IMS subscribers in IMS, the so-called fraudulent IMS usage [9]. However, since IMS-AKA is based on 3GPP-AKA, it is inefficient in that almost all steps involved in the two-pass authentication are duplicated. In this paper we propose a new IMS authentication mechanism that improves IMS-AKA in terms of security and efficiency. The proposed AKA does not need the duplicated AKA operations. Besides, it can withstand the security attacks on IMS-AKA and keeps the mutual authentication and key agreement capabilities. The remainder of this paper is organized as follows. We briefly introduce the IMS architecture and its security mechanisms in Section 2. Section 3 presents the related work and discusses its common security problems. Subsequently, in Section 4 we give details of the proposed protocol and provide our analysis in terms of security and performance. Finally, we draw our conclusions in Section 5.
2 Security in IMS

The main components of the IMS architecture are the CSCF (Call Session Control Function) and the HSS (Home Subscriber Server). The HSS contains the subscriber databases, e.g., user identity and registration information, and interacts with other network entities via the Diameter protocol. The CSCF, which is a SIP server, is an essential node in IMS and processes the SIP signaling. There are three types of CSCF: (1) the proxy-CSCF (P-CSCF), (2) the serving-CSCF (S-CSCF) and (3) the interrogating-CSCF (I-CSCF).
Fig. 1. Network domain Security in IMS
IMS security is divided into access security, specified in 3GPP TS 33.203 [10], and network security, specified in 3GPP TS 33.210 [11]. Network security deals with securing traffic within one security domain or between different security domains. A security domain is a network that is managed by a single administrative authority. Traffic between security gateways (SEGs) is protected using IPsec ESP (specified in RFC 2406 [12] and RFC 4303 [13]) running in tunnel mode. The interface used between entities in the same security domain is called the Zb interface; the interface between SEGs from different domains is called Za. Figure 1 illustrates this. Authentication, integrity protection, and encryption are mandatory on the Za interface. As the Zb interface only carries intra-operator traffic, it is up to the operator to decide whether to deploy it. Cx is a reference point for S-CSCF and I-CSCF to acquire subscriber information from HSS; the protocol employed on it is the Diameter protocol. Access security includes authentication of users and the network, and protection of the traffic between the IMS terminal and the network. In this article we are mainly interested in this type of security, in particular in IMS authentication.
3 Related Work

Assume that the user has been well authenticated at the PS domain. The IMS terminal can then begin registration/authentication at the IMS level. The IMS-level registration/authentication procedure uses the IMS-AKA (Authentication and Key Agreement) protocol [3] and is accomplished by a SIP Register request. Registration with the IMS is mandatory before the IMS terminal can establish a session. In this section we present the authentication process in the IMS network with the IMS-AKA protocol, as well as two proposals that reduce redundant steps in IMS-AKA and thereby increase the efficiency of the protocol in terms of the number of messages exchanged. After that, we provide our security analysis of the three mechanisms.

3.1 IMS Authentication: IMS-AKA Protocol

Assume that the user has been well authenticated at the PS domain. The process of IMS-AKA uses two successive SIP Register requests and responses. It can be divided, as shown in Figure 2, into the following steps:

I1: The UE sends a SIP Register message to S-CSCF (with the parameter) through the P-CSCF and I-CSCF.

I2: If S-CSCF does not have a valid authentication vector (AV) array for UE, S-CSCF sends a Multimedia Authentication Request (MAR) over the Cx interface to HSS to obtain an AV array. Otherwise, this step and step I3 can be skipped. Note that an AV contains (1) a random number RAND, (2) an expected response XRES, (3) a cipher key CK, (4) an integrity key IK, and (5) an authentication token AUTN.

I3: HSS generates an ordered array of n AVs and sends the AV array over the Cx interface to S-CSCF via a Multimedia Authentication Answer (MAA) message.

I4: S-CSCF selects the next unused authentication vector from the ordered AV array and sends the parameters RAND and AUTN (from the selected authentication vector)
to the UE through a SIP 401 Unauthorized message. This message also contains the keys CK and IK, which are kept by P-CSCF.

I5: The UE verifies the received AUTN. If the result is positive, UE derives RES, CK and IK. Both IK and CK are used for the IP security (IPsec) security association between UE and P-CSCF. Then, UE sends RES to S-CSCF through P-CSCF and I-CSCF. This response is carried in a new SIP Register request.

I6: S-CSCF verifies the user response against XRES. If the result is positive, the authentication and key agreement exchange is successfully completed. S-CSCF sends a Server Assignment Request (SAR) over the Cx interface to inform HSS about which S-CSCF will serve the UE.

I7: HSS stores the name of the S-CSCF and sends the user's profile through a Server Assignment Answer (SAA) message over the Cx interface.

I8: Finally, S-CSCF sends a SIP 200 OK message to UE to notify it of the success of the registration process.

Note that in the second phase (second SIP Register) the messages between the UE and P-CSCF are protected by an IPsec security association. This association is negotiated during the first phase through the two header fields Security-Client (message 1) and Security-Server (message 10). The terminal includes a Security-Verify header field (message 11) in this Register, mirroring the contents of the Security-Server header field received previously, to withstand a man-in-the-middle attack.
Fig. 2. IMS authentication/registration with IMS-AKA protocol
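The core of steps I2–I6 above is a challenge–response exchange on a secret key shared between UE and HSS. The following minimal Python sketch only illustrates the XRES/RES comparison performed by the S-CSCF; HMAC-SHA-256 is used here as a stand-in for the 3GPP response function f2 (MILENAGE in deployed systems), so the key and output sizes are illustrative.

import hmac, hashlib, os

def f2(k, rand):
    """Stand-in for the 3GPP response function f2 (not the real MILENAGE)."""
    return hmac.new(k, rand, hashlib.sha256).digest()[:8]

# HSS side: build one authentication vector entry for the challenge.
K = os.urandom(16)            # long-term secret shared by UE and HSS
RAND = os.urandom(16)         # random challenge placed in the AV
XRES = f2(K, RAND)            # expected response handed to the S-CSCF

# UE side: compute RES from the received RAND and the same secret K.
RES = f2(K, RAND)

# S-CSCF side (step I6): compare the response with the expected value.
assert hmac.compare_digest(RES, XRES)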
3.2 One-Pass GPRS and IMS Authentication Procedure for UMTS

This approach [9] proposes a one-pass authentication (performed at the PS level) that can authenticate an IMS user without explicitly performing the IMS-level authentication. To achieve this, it relies on the IMSI (International Mobile Subscriber Identity) parameter that the PS domain previously stored while authenticating the UE, by adding it to the SIP message. The authentication is successful if the IMSI stored by the HSS equals the one sent by the PS domain. This eliminates the use of authentication vectors (AVs).

3.3 One-Pass Authentication and Key Agreement Procedure in IP Multimedia Subsystem for UMTS

This proposal [14] analyzes security issues in the previous approach and improves the authentication process without losing efficiency (one-way). In contrast to IMS-AKA, the UE starts with the challenge by sending a “digest-response” message with an authentication request header and a timestamp. The S-CSCF then verifies the message; if it is correct, it authenticates the UE and replies with the authentication vector and a “response-auth” message. After that, the UE verifies the “response-auth” message; if it is valid, the UE assumes that the S-CSCF is legitimate and calculates the encryption and integrity keys that will be shared with the P-CSCF for data protection.

3.4 Security Analysis

The first approach, by Lin et al. [9], may save up to 50% of the IMS registration/authentication traffic compared with the 3GPP two-pass procedure. However, this proposal adds new security issues to the authentication procedure. In the Chung-Ming et al. approach (second solution) [14], the main objective is to keep the one-way property proposed in Lin et al.'s scheme with an improvement in security. Despite this, all three mechanisms still suffer from the following vulnerabilities:

- No mutual authentication: in the first approach, S-CSCF only verifies the UE's identities (private identities at the PS domain and IMS level). The UE does not authenticate S-CSCF and so cannot be sure with whom it is exchanging information. In this way, the UE may face security problems, e.g., a fake S-CSCF.

- Loss of key agreement capability: CK and IK are agreed for the IPsec security association, as explained in steps I4 and I5 in sub-section 3.1. Unfortunately, in Lin et al.'s one-pass authentication proposal the encryption key CK and integrity key IK are not used. As a result, confidentiality and integrity are not ensured for the messages exchanged between UE and P-CSCF.

- No guarantee of integrity and confidentiality: again, in Lin et al.'s scheme, information does not travel through a secure channel, since the protocol does not negotiate the IPsec security association. The IMS-AKA and Chung-Ming et al. approaches are similarly vulnerable, since the traffic transmitted during session initialization travels in plaintext, allowing an attacker to read or alter such information.
- In the core network, for the three schemes, there is no secure channel that prevents manipulation of the registration data, since the implementation of the Zb interface is optional and may not be deployed by the network administrator. Moreover, in the IMS-AKA and Chung-Ming et al. approaches, the keys CK and IK could be captured by attackers inside the network.
4 The Proposed Authentication Protocol

4.1 Principle

Assume that UE shares with the IMS network (HSS) the algorithms to use for encryption and hashing, and also that the CSCFs and the HSS share, in addition to encryption algorithms (such as AES [15]) and hash algorithms (such as SHA-1 [16]), a prime number p and a base g. The operations and mechanisms used are based on the cryptographic functions contained in the IMS-AKA specification, in order to minimize the changes to the system architecture. The aim of our proposal is to keep the one-way approach proposed by Lin et al. and at the same time to resolve the security issues discussed above. In order to transfer sensitive data, we propose to use a secure channel based on Diffie-Hellman key exchange. The proposed mechanism, illustrated in Figure 3, can be decomposed into the following steps:

S1: In the ith authentication with the proposed protocol, UE derives a random RANDi from a vector RAND as:

RANDi = RAND[i]    (1)
where RAND is a random vector that UE generated at the first authentication; if UE does not have a RAND vector, it generates a new one and in this case i = 0, i.e., RANDi = RAND[0]. After that, UE computes RESi and derives the keys CKi and IKi as follows:

RESi = f2k(RANDi),  CKi = f3k(RANDi),  IKi = f4k(RANDi)    (2)
where k is the shared secret key between UE and HSS and the fi are the cryptographic algorithms shared between UE and HSS: f2 is a message authentication code, while f3 and f4 are key generation functions. Next, UE sends a SIP Register request with the IMPI (IP Multimedia Private Identity), RANDi and RESi to S-CSCF via the P-CSCF. Moreover, in our protocol CKi and IKi are also used to secure the SIP Register request between UE and S-CSCF. Actually, to ensure data confidentiality and integrity, we propose to encrypt the critical information and to authenticate all information that will not be modified during the transmission.

S2: P-CSCF forwards the request to S-CSCF via I-CSCF. In addition, P-CSCF begins a Diffie-Hellman negotiation with S-CSCF: P-CSCF adds a value α = g^a mod p to the message before transmission to S-CSCF, where a is a random number.

S3: Assume that S-CSCF does not have the AVs for this UE; otherwise this step and step S4 can be skipped. S-CSCF invokes the authentication vector distribution
procedure by sending a MAR message to HSS over the Cx interface. In order to secure the HSS response containing the AV information, S-CSCF performs a key exchange with HSS. Therefore, S-CSCF adds a value β = g^b mod p, where b is a random value.

S4: HSS uses RAND and IMPI to find the pre-shared secret key k of this user and to derive the AV. Then, it calculates χ = g^c mod p, where c is a random number. Next, HSS encrypts the MAA message using the negotiated key DH(βχ) (DH for Diffie-Hellman [17]). The value χ is sent in the clear.

S5: Upon receipt of the MAA message, S-CSCF retrieves χ and calculates DH(βχ) to decrypt the message and extract the AV. Then, S-CSCF checks the hash value received in step S2 to verify the integrity of the request; if the result is positive, S-CSCF decrypts the message. After that, S-CSCF extracts RESi. RESi is compared with XRESi contained in AVi. If they are equal, it means that UE is a legitimate user.

S6: S-CSCF sends a SIP OK message with AUTNi, CKi and IKi to P-CSCF, together with the value β' = g^b' mod p. The value β' is used with the value α to build a shared key between S-CSCF and P-CSCF, and is sent in the clear.

S7: P-CSCF decrypts the message using the key DH(αβ'), stores CKi and IKi, and forwards the SIP OK message with AUTNi to UE. This message is encrypted and authenticated with CKi and IKi (as in step S1).

S8: UE calculates AUTNi and compares it with the one received from S-CSCF. S-CSCF is considered authenticated if the result is positive. To establish an IPsec security association between UE and P-CSCF, we use, as in IMS-AKA, the two fields “Security_Client” and “Security_Server”.

4.2 Security Analysis

In this section we analyze the proposed mechanism in terms of security. The security properties of the proposed mechanism are as follows:

- Mutual authentication between UE and S-CSCF (IMS network): The proposed protocol allows mutual authentication between UE and the S-CSCF. On the network side, S-CSCF retrieves the AV from the HSS and verifies UE by comparing XRESi with the RESi sent by UE. On the user side, to authenticate the network, UE compares the AUTNi received from S-CSCF with the one it calculates itself.

- Use of the keys CK and IK (key agreement property ensured): CK and IK are used to establish an IPsec session between UE and P-CSCF after authentication. Moreover, in our approach these two keys are used during the authentication/registration procedure between UE, S-CSCF and P-CSCF (steps S1 and S7).

- Replay attack: To prevent replay attacks, RAND is a vector of random values, and the RANDi values are used according to a specific order agreed between UE and HSS. Moreover, the response AUTNi is calculated using a sequence number SQN (as in IMS-AKA).

- Data confidentiality and integrity: Confidentiality and integrity of the data travelling between UE and the network are ensured by encrypting and hashing the SIP messages using the keys CK and IK with the pre-shared encryption/hash algorithms. In the core network, the exchanged information is encrypted and authenticated using the keys negotiated with Diffie-Hellman between the CSCFs.
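To make the key negotiation of steps S3–S5 concrete, the following minimal Python sketch shows how S-CSCF and HSS can derive the shared key DH(βχ) that protects the MAA. The group parameters below are toy values for illustration only; a real deployment would use the pre-shared large prime p and base g assumed by the protocol, and the derived key would then be used with the agreed cipher (e.g. AES, not shown here) to encrypt the MAA.

import hashlib
import secrets

# Demonstration-only group parameters (a 64-bit prime); the protocol assumes
# a much larger shared prime p and base g.
p = 0xFFFFFFFFFFFFFFC5
g = 5

def dh_keypair():
    x = secrets.randbelow(p - 2) + 1   # private exponent
    return x, pow(g, x, p)             # (private, public = g^x mod p)

# Step S3: S-CSCF picks b and sends beta = g^b mod p with the MAR.
b, beta = dh_keypair()
# Step S4: HSS picks c, sends chi = g^c mod p in the clear, derives DH(beta, chi).
c, chi = dh_keypair()
k_hss = pow(beta, c, p)
# Step S5: S-CSCF derives the same shared secret from chi.
k_scscf = pow(chi, b, p)
assert k_hss == k_scscf

# Both sides turn the shared secret into a symmetric key for the MAA encryption.
maa_key = hashlib.sha256(k_hss.to_bytes(16, "big")).digest()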
Fig. 3. The proposed one-way AKA protocol
4.3 Performance Analysis

In this section we evaluate our protocol in terms of performance and compare it with IMS-AKA and Lin et al.'s solution. The comparison is based on the number of exchanged messages as they are transmitted across an air interface. We adopt the assumptions of [9]: suppose that the expected SIP message delivery cost between UE and S-CSCF is one unit, and the expected Cx message delivery cost between the CSCFs and the HSS is α units. It is anticipated that α < 1 for the following reasons: (1) CSCFs and HSS exchange the Cx messages through an IP network; (2) besides the IP network overhead, SIP communication between UE and S-CSCF involves the PS-domain core network and the radio access network; (3) CSCFs and HSS are typically located at the same location, while UE is likely to reside at a remote location. Let C be the total cost of IMS-AKA and CP the total cost of the proposed protocol. If S-CSCF does not have valid AVs, the delivery cost of IMS-AKA is expressed by C1; otherwise, if the MAR/MAA messages are not executed, the delivery cost of IMS-AKA is expressed by C2:

C1 = 4 + 6α,   C2 = 4 + 4α.    (3)
IMS registration is performed periodically. In steps I2 and I3 of the IMS-AKA procedure, an AV array of size n is sent from HSS to S-CSCF. Assume that the number of authentication/registration operations that the same S-CSCF performs is m. Then only ceiling(m/n) MAR/MAA exchanges are executed, where ceiling(x) denotes rounding x up to the next integer. From (3), and letting x = ceiling(m/n), the average delivery cost C of IMS-AKA can be expressed as:
C = (x/m) C1 + ((m – x)/m) C2 = 4 + (2x/m + 4)α    (4)
Similarly to IMS-AKA, the average delivery cost CP of the proposed AKA is:

CP = (x/m) CP1 + ((m – x)/m) CP2 = 2 + (2x/m + 4)α    (5)

where CP1 and CP2 are the costs of the proposed protocol with and without the MAR/MAA exchange, respectively. From (4) and (5), the improvement I of the proposed protocol over IMS-AKA is expressed as:

I = (C - CP)/C = m/(xα + 2m(1 + α))    (6)

Similarly, the improvement I' of Lin et al.'s solution (with average delivery cost CL) over IMS-AKA is:

I' = (C - CL)/C = (m + xα)/(xα + 2m(1 + α))    (7)
Figures 4 and 5 plot I and I' as a function of α when m = 10.
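Equations (6) and (7) are easy to evaluate numerically; the short sketch below reproduces the behaviour plotted in Figures 4 and 5 (the values chosen for x and α are illustrative).

def improvement_proposed(m, x, a):
    # Equation (6): I = (C - CP) / C
    return m / (x * a + 2 * m * (1 + a))

def improvement_lin(m, x, a):
    # Equation (7): I' = (C - CL) / C
    return (m + x * a) / (x * a + 2 * m * (1 + a))

m, x = 10, 1                     # m registrations, x = ceiling(m/n) MAR/MAA exchanges
for a in (0.1, 0.5, 1.0):        # relative Cx delivery cost alpha
    print(a, improvement_proposed(m, x, a), improvement_lin(m, x, a))

As α approaches 0, the proposed improvement tends to m/(2m) = 0.5, which matches the "up to 50%" saving reported below.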
Fig. 4. Improvement of the proposed protocol over the IMS-AKA
Fig. 5. A comparison of the two improvements I (proposed solution) and I' (Lin et al. solution)
As Figure 4 illustrates, the proposed one-pass AKA can save up to 50% of the SIP/Cx traffic compared with IMS-AKA. When α approaches 1, i.e., all of the network elements are located within the same network, I is lower than I', but the difference is less than 10% (Figure 5). This difference is due to the fact that in our proposal S-CSCF still needs to retrieve AVs from HSS. However, according to the security analysis, Lin et al.'s solution has several security problems and loses the mutual authentication and key agreement capabilities.
5 Conclusion

In this paper we proposed a new protocol for authentication in IMS. This proposal resolves the security issues of the IMS-AKA protocol; in addition, our protocol keeps the one-way approach proposed by Lin et al. The performance analysis shows that the efficiency of the proposed protocol is comparable to that provided by the scheme of Lin et al. Table 1 shows a comparison between the different solutions discussed in this article.
Table 1. Comparison between the proposed protocol and the discussed solutions
                        IMS-AKA    One-Pass          Proposed          Proposed
                                   Authentication    One-Pass AKA      protocol
Mutual Authentication   Yes        No                Yes               Yes
One-way                 No         Yes               Yes               Yes
Key Agreement           Yes        No                Yes               Yes
Efficiency              ---        Very good         Good              Good
Confidentiality         Yes (*)    No                No                Yes
Integrity               Yes (*)    No                No                Yes
In future work we plan to specify our protocol with appropriate formalisms (UML, Petri nets, etc.) in order to check various aspects of its behavior. The main goal is to validate the security aspects of the proposed protocol based on a rigorous analysis of its vulnerabilities.
References

1. Camarillo, G.: Introduction to TISPAN NGN. Ericsson, Tech. Rep. (2005)
2. Tadault, M., Soormally, S., Thiebault, L.: Network evolution towards IP multimedia subsystem. Alcatel, Tech. Rep. (2003), http://www.alcatel.com/doctypes/articlepaperlibrary/pdf/ATR2003Q4/T0312-IP-Multimedia-EN.pdf
3. 3GPP TS 33.102: Security architecture. V8.4.0 (2009-10)
4. 3GPP TS 33.105: Cryptographic algorithm requirements. V8, ETSI (2009-02)
5. Rosenberg, J., Schulzrinne, H., Camarillo, G., Johnston, A., Peterson, J., Sparks, R., Handley, M., Schooler, E.: Session Initiation Protocol. RFC 3261 (June 2002)
6. Calhoun, P., Loughney, J., Guttman, E., Zorn, G., Arkko, J.: Diameter Base Protocol. RFC 3588, IETF (2003)
7. 3GPP TS 29.228: Technical Specification Core Network; IP Multimedia Subsystem Cx and Dx Interfaces; Signaling Flows and Message Contents (Release 5)
8. 3GPP TS 29.229: Technical Specification Core Network; Cx and Dx Interfaces Based on the Diameter Protocol; Protocol Details
9. Lin, Y.-B., Chang, M.-F., Hsu, M.-T., Wu, L.-Y.: One-pass GPRS and IMS authentication procedure for UMTS. IEEE Journal on Selected Areas in Communications 23(6), 1233–1239 (2005)
10. 3GPP TS 33.203: Access security for IP-based services. V8.6.0, ETSI (2009-07)
11. 3GPP TS 33.210: 3G security; Network Domain Security (NDS); IP network layer security. V8.3.0, ETSI (2009-07)
12. Kent, S., Atkinson, R.: IP Encapsulating Security Payload (ESP). RFC 2406, Internet Engineering Task Force (November 1998)
13. Kent, S.: IP Encapsulating Security Payload (ESP). RFC 4303, Internet Engineering Task Force (December 2005)
14. Huang, C.-M., Li, J.-W.: One-Pass Authentication and Key Agreement Procedure in IP Multimedia Subsystem for UMTS. IEEE (2007)
15. Frankel, S., Glenn, R., Kelly, S.: The AES-CBC cipher algorithm and its use with IPsec. RFC 3602, IETF (2003)
16. Madson, C., Glenn, R.: The use of HMAC-SHA-1 within ESP and AH. RFC 2404, IETF (1998)
17. Rescorla, E.: Diffie-Hellman Key Agreement Method. RFC 2631, IETF (June 1999)
Part III
High Capacity FFT-Based Audio Watermarking Mehdi Fallahpour and David Megías Estudis d'Informàtica, Multimèdia i Telecomunicació Internet Interdisciplinary Institute (IN3) Universitat Oberta de Catalunya, Barcelona, Spain {mfallahpour,dmegias}@uoc.edu
Abstract. This paper proposes a novel high capacity audio watermarking algorithm to embed data and extract it in a bit-exact manner based on changing the magnitudes of the FFT spectrum. The key idea is to divide the FFT spectrum into short frames and change the magnitude values of the FFT samples based on the average of the samples of each frame. Using the average of the FFT magnitudes makes it possible to improve the robustness, since the average is more stable against changes compared with single samples. In addition to good capacity, transparency and robustness, this scheme has three parameters which facilitate the regulation of these properties.

Considering the embedding domain, audio watermarking techniques can be classified into time domain and frequency domain methods. In frequency domain watermarking [1-7], after taking one of the usual transforms such as the Discrete/Fast Fourier Transform (DFT/FFT) [4-6], the Modified Discrete Cosine Transform (MDCT) or the Wavelet Transform (WT) of the signal [7], the hidden bits are embedded into the resulting transform coefficients. In [4-6], which were proposed by the authors of this paper, the FFT domain is selected to embed watermarks in order to make use of the translation-invariant property of the FFT coefficients and resist small distortions in the time domain. In fact, using methods based on transforms provides better perceptual quality and robustness against common attacks at the price of increased computational complexity.

In the algorithm suggested in this paper, we select the middle frequency band of the FFT spectrum (4–12 kHz) for embedding the secret bits. The selected band is divided into short frames and a single secret bit is embedded into each frame. Based on the corresponding secret bit, all samples in each frame are changed to the average of all samples, or to the average multiplied by a factor. If the secret bit is “0”, all FFT magnitudes are set to the average of all FFT magnitudes in the frame. If the secret bit is “1”, we divide the FFT samples of the frame into two groups based on the sequence and then change the magnitudes of the first group to a scale factor, α, times the average of all samples, and the magnitudes of the second group to (2 – α) times the average. These changes, both when embedding “0” and “1”, keep the average of the frame unchanged after embedding. Using the average of a frame is very useful to increase the robustness against attacks, whereas embedding a secret bit into a single sample is usually fragile. In addition, using the FFT magnitudes, sqrt(real^2 + imag^2), results in better robustness against attacks compared to using the real or the imaginary parts only.
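A minimal sketch of the per-frame embedding rule just described is given below; the frame values, the value of α, and the split of a frame into its first and second half are illustrative assumptions, and the detector is omitted.

import numpy as np

def embed_bit(frame_mag, bit, alpha=1.1):
    """Embed one secret bit into a frame of FFT magnitudes.
    bit 0: set every magnitude to the frame average.
    bit 1: scale the first group by alpha and the second by (2 - alpha),
    so that the frame average is preserved in both cases."""
    avg = frame_mag.mean()
    if bit == 0:
        return np.full_like(frame_mag, avg)
    half = len(frame_mag) // 2          # assumed split into two equal groups
    out = np.empty_like(frame_mag)
    out[:half] = alpha * avg
    out[half:] = (2 - alpha) * avg
    return out

frame = np.array([0.8, 1.2, 1.0, 1.0, 0.9, 1.1, 1.0, 1.0])
marked = embed_bit(frame, 1)
assert np.isclose(marked.mean(), frame.mean())   # average unchanged by embedding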
The Objective Difference Grade (ODG) has been used to evaluate the transparency of the proposed algorithm. The ODG is one of the output values of the ITU-R BS.1387 PEAQ standard, where ODG = 0 means no degradation and ODG = –4 means a very annoying distortion. Additionally, the OPERA software based on ITU-R BS.1387 has been used to compute this objective measure of quality. The experimental results show that this method achieves a high capacity (about 0.5 to 4 kbps), provides robustness against common signal processing attacks and entails very low perceptual distortion (the ODG is about –1). The proposed scheme is robust against several attacks such as AddDynNoise, ADDFFTNoise, Addnoise, AddSinus, Amplify, Invert, LSBZero, RC_HighPass, and RC_LowPass of the Stirmark Benchmark for Audio [9].

The method proposed in this paper has been compared with several recent audio watermarking strategies. Almost all audio data hiding schemes which produce very high capacity are fragile against signal processing attacks. Because of this, it is not possible to compare the proposed scheme with other audio watermarking schemes that are similar to it as far as capacity is concerned. Hence, we have chosen a few recent and relevant audio watermarking schemes from the literature that are robust against the MP3 attack. The schemes of [1], [8], [2], [4], [3] and the proposed scheme have capacities of 2, 2.3, 4.3, 2996, 689, and 506 to 4025 bits per second, respectively; their transparency in terms of the Objective Difference Grade (ODG) is (–1.66 to –1.88), not reported, not reported, –0.6, not reported and (–0.1 to –1.5), respectively. [1, 2, 8] have low capacity but are robust against common attacks. [3] evaluates distortion by using the mean opinion score (MOS), which is a subjective measurement, and achieves transparency between imperceptible and perceptible but not annoying (MOS = 4.7). Capacity, robustness and transparency are the three main properties of an audio watermarking scheme, and a trade-off between these properties is necessary. For example, [1] proposes a very robust but low-capacity and high-distortion scheme, whereas [3] and the proposed scheme lead to high capacity and low distortion but are not as robust as the low-capacity method described in [1]. The scheme presented in [4], which was also proposed by the authors of this paper, has good properties, but the scheme proposed in this paper can manage the required properties better, since there are three useful adjustable parameters. For example, in the proposed scheme, by using a frame size of d = 8, robustness against MP3-64 is straightforward to obtain; in [4], on the other hand, low bit rate MP3 compression was not considered.

In short, we present a high-capacity watermarking algorithm for digital audio which is robust against common audio signal processing. A scaling factor, the frame size and the selected frequency band are the three adjustable parameters of this method, which regulate the capacity, the perceptual distortion and the robustness of the scheme accurately. Furthermore, the suggested scheme is blind, since it does not need the original signal for extracting the hidden bits. The experimental results show that this scheme has a high capacity (0.5 to 4 kbps) without significant perceptual distortion and provides robustness against common signal processing attacks such as added noise, filtering or MPEG compression (MP3).
Acknowledgement. This work is partially supported by the Spanish Ministry of Science and Innovation and the FEDER funds under the grants TSI2007-65406-C03-03 E-AEGIS and CONSOLIDER-INGENIO 2010 CSD2007-00004 ARES.
References 1. Xiang, S., Kim, H.J., Huang, J.: Audio watermarking robust against time-scale modification and MP3 compression. Signal Processing 88(10), 2372–2387 (2008) 2. Mansour, M., Tewfik, A.: Data embedding in audio using time-scale modification. IEEE Trans. Speech Audio Process. 13(3), 432–440 (2005) 3. Garcia-Hernandez, J.J., Nakano-Miyatake, M., Perez-Meana, H.: Data hiding in audio signal using Rational Dither Modulation. IEICE Electron. Express 5(7), 217–222 (2008) 4. Fallahpour, M., Megías, D.: High capacity audio watermarking using FFTamplitude interpolation. IEICE Electron. Express 6(14), 1057–1063 (2009) 5. Fallahpour, M., Megías, D.: High capacity method for real-time audio data hiding using the FFT transform. In: Park, J.H., Zhan, J., Lee, C., Wang, G., Kim, T.-h., Yeo, S.-S. (eds.) ISA 2009. CCIS, vol. 36, pp. 91–97. Springer, Heidelberg (2009) 6. Fallahpour, M., Megías, D.: Robust high-capacity audio watermarking based on FFT amplitude modification. IEICE Trans. on Information and Systems E93-D(01), 87–93 (2010) 7. Fallahpour, M., Megías, D.: DWT–based high capacity audio watermarking. IEICE Trans. on Fundamentals of Electronics, Communications and Computer Sciences E93-A(01), 331– 335 (2010) 8. Li, W., Xue, X.: Content based localized robust audio watermarking robust against time scale modification. IEEE Trans. Multimedia 8(1), 60–69 (2006) 9. Stirmark Benchmark for Audio, http://wwwiti.cs.uni-magdeburg.de/~alang/smba.php
Efficient Prevention of Credit Card Leakage from Enterprise Networks Matthew Hall1 , Reinoud Koornstra2, and Miranda Mowbray3 1
No Institutional Affiliation [email protected] 2 HP Networking, USA [email protected] 3 HP Labs, UK [email protected]
Abstract. We have developed a new approach to the problem of preventing the leakage of credit card numbers in traffic on a large enterprise network. In contrast to a previously-used method, it has higher throughput, and it can be partly implemented in hardware without any additional libraries. Keywords: cloud security, privacy, data leak prevention.
1 Previous Approaches to Preventing Credit Card Leakage
The danger of credit card numbers leaking from enterprise networks onto the public Internet is exacerbated both by the rise of targeted phishing attacks, and by the increasing use of cloud computing and the consequent increase in data traffic between enterprises and the public cloud. Several companies (for example Symantec, Websense, Vericept, Mimecast and Code Green Networks) offer products and services that examine the data layer of a packet on an enterprise network and determine whether it contains credit card numbers, so as to prevent these from being leaked. These products and services use one of two approaches. The first approach is to store digital fingerprints of a set of card numbers and check the data for exact matches to these digital fingerprints. This, however, can only detect credit card numbers whose fingerprints are in the stored set. The second approach, which can detect any card numbers, begins by performing a first pass on the packet data to identify candidate numbers that fit a regular expression for potential card numbers. For example, all American Express (Amex) card numbers are 15-digit numbers beginning with 34 or 37. Then a second pass is performed to determine if any of the candidate numbers satisfy a check called the Luhn check. All valid credit card numbers satisfy this check, although the converse does not hold. One way to carry out the Luhn check on a number n is to double every alternate digit in the number including the penultimate digit, sum the digits of
the resulting numbers, set L(n) to the remainder mod 10 of this sum, and check whether L(n) = 0. For example, 932152 passes the Luhn check because the sum of the digits of the numbers 18 3 4 1 10 2 is 20, but 93215 does not because the sum of the digits of the numbers 9 6 2 2 5 is 24, which is not divisible by 10. The candidate numbers that are found to pass the Luhn check are sent to leakage inspectors for Amex, VISA etc. that have full information about the set of valid numbers issued by the provider. If a valid credit card number is identified, further transmission of the packet containing this number may be blocked. Unfortunately, the regular-expression pass is an expensive operation in terms of resource requirements. Our experience is that for the high packet volumes on modern enterprise networks, it is infeasible to run the regex pass on all packets. This pass might be speeded up by implementing it in hardware: however this would require the regex library, which has considerable size, to be stored in the hardware.
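For concreteness, the Luhn check described above can be written in a few lines of Python; this is only the classical per-number check used by the regex-based baseline, not the streaming algorithm introduced in Section 2.

def luhn_ok(number: str) -> bool:
    """Return True if the digit string passes the Luhn check (L(n) == 0)."""
    total = 0
    # Walk the digits from the right; double every second one (penultimate included).
    for pos, ch in enumerate(reversed(number)):
        d = int(ch)
        if pos % 2 == 1:           # penultimate, 4th-from-last, ...
            d *= 2
        total += d // 10 + d % 10  # sum of the digits of the (possibly doubled) value
    return total % 10 == 0

assert luhn_ok("932152")       # example from the text: digit sum 20
assert not luhn_ok("93215")    # example from the text: digit sum 24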
2 Our Approach
We have developed and prototyped a new approach to this problem. Instead of first performing a regex pass and then a Luhn check pass, we first use a novel high-speed streaming Luhn algorithm which identifies, in a single pass, all 14, 15 or 16-digit numerical substrings of the data packet that pass the Luhn check. (We have applied for a US patent for this streaming algorithm, application number PCT/US2011/022709.) Lightweight custom string check functions, in software, are then run on the set of numbers that are reported by the algorithm as passing the Luhn check. These string checks are designed so that the Luhn check algorithm and string checks between them carry out exactly the same set of checks as the first two passes of the approach that uses regular expressions. Any candidate card numbers passing these checks are forwarded to leakage inspectors, as before. In a benchmarking experiment (details of which are omitted from this extended abstract for lack of space), a software implementation using our approach achieved more than 4.7 times the throughput of an implementation of the approach using a regex pass. If even faster throughput is necessary to process high traffic volumes, our Luhn check algorithm is simple and easy to implement in hardware, without the need for additional libraries. The pseudocode for the algorithm is below. The notation sd(a,b) is shorthand for the string with entries d[a], d[a+1], ..., d[b], where a, b ∈ Z and 0 ≤ a ≤ b. The vector d stores the sequence of digits received from the stream since the beginning or the last non-digit character, and i records this sequence's length. When a new digit is read in from the string, i is updated and the variable x[i] is set such that x[i] is equivalent mod 10 to L(sd(1,i)). Then the algorithm determines whether the substrings of length 14, 15 or 16 ending at this new digit pass the Luhn check. To determine this, the algorithm uses the fact that if s1, s2 are digit strings and s2 is of even length, and s1 · s2 is the concatenation of s1 with
s2, it follows from the definition of L that L(s2) = (L(s1 · s2) − L(s1))%10. If i > 13, putting s1=sd(1,i-14), s2=sd(i-13,i) in this equation implies that sd(i-13,i) passes the Luhn check iff (L(sd(1,i)) − L(sd(1,i-14)))%10 = 0, which is equivalent to (x[i] − x[i-14])%10 = 0. The checks for sd(i-14,i) and sd(i-15,i) can be derived similarly by setting s1=d[i-14], s2=sd(i-13,i) and s1=sd(1,i-16), s2=sd(i-15,i) respectively.

Start by setting i=0, d[0]=0, x[0]=0.
While there are more entries in the string, repeat the following:
  Get the next entry, and set e to it
  if e is other than a base-10 digit
    set i = 0
  if e is a base-10 digit
    increase i by 1
    set d[i] = e
    if i == 1
      set x[1] = e
    if i > 1
      set x[i] = d[i] + 2d[i-1] + x[i-2]
      if d[i-1] > 4
        increase x[i] by 1
    if i > 13
      set c = (x[i] - x[i-14]) % 10
      if c == 0
        report sd(i-13,i) as passing the check
      if i > 14
        add d[i-14] to c
        if c % 10 == 0
          report sd(i-14,i) as passing the check
      if i > 15 and (x[i] - x[i-16]) % 10 == 0
        report sd(i-15,i) as passing the check

This algorithm could be further refined. For instance, lookup tables can further reduce computation requirements, and memory requirements can be reduced by over-writing all but the 17 most recent elements of the vectors, since only these are used. The number of strings processed by the string check can be reduced at the expense of slightly more computation during the Luhn check pass, by modifying the Luhn check algorithm to only report numbers beginning with 3, 4, 5 or 6. Using a streaming algorithm in place of a regex check might also speed up the detection of other types of personal data, for example IBAN numbers or numbers in some national ID schemes.

There is increasing use of protocols such as SSL which transmit data in encrypted form. This protects data in transit, but leaves open the possibility that personal data may be transmitted by mistake and misused after it has been decrypted by the recipient. Companies such as Symantec, Code Green Networks and Trend Micro offer products that can intercept data before transmission (they are known as Endpoint DLP products). If used in combination with some interception means, our method could be used to inspect data before transmission, and block the transmission where necessary.
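For readers who prefer runnable code, the following Python function is a direct transcription of the pseudocode above; the report callback and the use of dictionaries for d and x are the only additions.

def stream_luhn(data, report):
    """Scan 'data' once and call report(substring) for every 14-, 15- or
    16-digit run that passes the Luhn check, mirroring the pseudocode above."""
    d = {0: 0}      # d[i]: i-th digit of the current digit run (1-indexed)
    x = {0: 0}      # x[i]: running Luhn sum of d[1..i] (compared mod 10)
    i = 0
    for e in data:
        if not e.isdigit():
            i = 0
            continue
        i += 1
        d[i] = int(e)
        if i == 1:
            x[1] = d[1]
        else:
            x[i] = d[i] + 2 * d[i - 1] + x[i - 2]
            if d[i - 1] > 4:
                x[i] += 1          # doubling a digit > 4 carries a 1 into the digit sum
        if i > 13:
            c = (x[i] - x[i - 14]) % 10
            if c == 0:
                report(''.join(str(d[j]) for j in range(i - 13, i + 1)))
            if i > 14:
                c += d[i - 14]
                if c % 10 == 0:
                    report(''.join(str(d[j]) for j in range(i - 14, i + 1)))
            if i > 15 and (x[i] - x[i - 16]) % 10 == 0:
                report(''.join(str(d[j]) for j in range(i - 15, i + 1)))

found = []
stream_luhn("id=4111111111111111;", found.append)   # a well-known Luhn-valid test number
assert "4111111111111111" in found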
Security Warnings for Children’s Smart Phones: A First Design Approach Jana Fruth, Ronny Merkel, and Jana Dittmann Otto-von-Guericke University Magdeburg, P.O. Box 4120, 39016 Magdeburg, Germany {fruth,merkel,dittmann}@ovgu.de
Abstract. In this paper we introduce a first design approach for security warnings on children's smart phones, based on the recommendations of a paediatrician. To ease the understanding of malware threats and to facilitate adequate handling, child-friendly design principles and simple descriptions are used. In addition to visual information, acoustical and haptic information is used to raise children's attention to the warning. Currently, a first implemented prototype of the security warnings is being tested with primary school students.
1 Problem

Modern mobile phones, so-called “smart phones”, are a part of our daily lives. According to a German survey [1], nearly 82% of children between 10 and 12 years possess their own mobile phone. These personal devices are often in the focus of attackers because they store personal data, such as address books, which are worthwhile targets. To realise remote attacks on smart phones, attackers often use malicious code. Today, more than 100 different malicious codes for mobile phones are known [3]. Malware attacks specifically targeting children have not been published so far, but the above-mentioned malware could also be used to attack children's smart phones. Anti-malware programs could protect mobile phones against these malware threats. Today, these security applications inform users with security warnings about malware infections of the mobile phone. In our opinion, these security warnings are designed for standard users and are therefore not adequately adapted to specific user groups, which can differ in skills depending on the user's age, profession, or health. We think the adaptation of security warnings to children's skills could help to sensitise them to malware threats on their smart phones and to train them playfully in the correct handling of anti-malware programs. In this paper, we introduce a first design approach for multi-media security warnings on smart phones used by children. Currently, a first implementation on an iPhone is being tested with primary school students (aged between 8 and 9 years).
2 Solution

Our concept for the design of security warnings for smart phones is inspired by preliminary works from different application domains, such as the automotive domain [4] and the automated production domain [2]. Our warning message design approach should fulfil two main requirements: adaptation to children's skills and adaptation to the properties of smart phones. To realise child-friendly security warnings, we follow the
recommendations of a paediatrician and design the warnings using elements which children know from their experience and imaginary world. Thus, we apply characters known from computer games or comics to express threat levels, for example through the character's facial expression or colours (see Figure 1). Furthermore, security program functions like the deletion of malicious code are communicated playfully. For example, to quarantine a piece of malware, the child has to put a monster in prison (see Figure 2); feedback about the success of the action is given. Additionally, a simple action chain is used to communicate malware threats and countermeasures to children: first, warning the user about current malware on his/her smart phone; second, explaining the risk level of the malware; third, recommending security measures. Following the paediatrician's recommendation, multi-media stimuli are used to facilitate the children's learning. Therefore, the warnings include visual (different colours, simple textual descriptions), acoustical (warning signals, spoken descriptions, sounds) and tactile (vibration) information. Our security warnings are adapted to the properties of smart phones, such as the limited display size, limited audio quality and limited interaction possibilities.
Fig. 1. Example of security alert character and visualisation of security threat levels (Remark: in the implementation the hachured regions are replaced by colours)
Fig. 2. Exemplary function of a security program on children’s smart phones
3 Future Work

In the future, our first design approach has to be specified and realised on different mobile devices and evaluated with children as test persons. Currently, an implementation of
our security warning design approach on an iPhone is being tested and evaluated with primary school pupils aged between 8 and 9 years. The enhancement and evaluation of the sequence of warnings, information and instructions, of variations of threat levels, of the presented multi-media information, and of different characters for security guides or malicious codes are also necessary. Furthermore, it is very interesting to evaluate how inter- and intra-individual variability could influence children's perception. Also, our design approach for security warnings could be adapted and used on other embedded devices, e.g. game robots. Another interesting question is the design of security warnings and user instructions for disabled people using embedded devices, like smart phones or intelligent service robots.

Acknowledgement. We want to thank Prof. Dr. med. Jorch, PhD Hinz, PhD Herper, and Wiebke Menzel. The work of Jana Fruth is funded by the German Ministry of Education and Science (BMBF), project 01IM10002A. The presented work is part of the ViERforES1 project.
References

1. Bundesverband Informationswirtschaft, Telekommunikation und neue Medien e.V. (BITKOM): Jugend 2.0: Eine repräsentative Untersuchung zum Internetverhalten von 10- bis 18-Jährigen (2011)
2. Fruth, J., Krätzer, C., Dittmann, J.: Design and Evaluation of Multi-Media Security Warnings for the Interaction between Humans and Industrial Robots. In: IS&T / SPIE Electronic Imaging (2011)
3. German Federal Office for Information Security (BSI): Mobile Endgeräte und mobile Applikationen: Sicherheitsgefährdungen und Schutzmaßnahmen (2006)
4. Tuchscheerer, S., Dittmann, J., Hoppe, T., Krems, J.: Theoretical analysis of security warnings in vehicles and design challenges for the evaluation of security warnings in virtual environments. In: International Workshop on Digital Engineering (IWDE 2010), Magdeburg, pp. 33–37 (2010)
1 www.vierfores.de
Ciphertext-Policy Attribute-Based Broadcast Encryption Scheme

Muhammad Asim 1, Luan Ibraimi 2, and Milan Petković 1,3
1 Philips Research Eindhoven, The Netherlands
2 Faculty of EWI, University of Twente, The Netherlands
3 Faculty of Mathematics and Computer Science, Eindhoven University of Technology, The Netherlands
{muhammad.asim,milan.petkovic}@philips.com, [email protected]
Abstract. In this work, we design a new attribute-based encryption scheme with revocation capability. In the proposed scheme, the user (broadcaster) encrypts the data according to an access policy over a set of attributes and a list of the identities of revoked users. Only recipients who hold attributes that satisfy the access policy and whose identity is not in the list of revoked users are able to decrypt the message. The proposed scheme can be used to revoke up to t users, and its complexity depends on the number of revoked users r rather than on the total number n of users in the system. The security of the scheme is proved under the Decisional Bilinear Diffie-Hellman (DBDH) assumption.
Keywords: Attribute-Based Encryption, Revocation.
1 Introduction
In a broadcast encryption scheme with revocation capability, the sender (broadcaster) sends a ciphertext to a group of recipients such that only the non-revoked users in the group can decrypt the broadcasted content. Such a scheme allows the broadcaster to specify a list of revoked users who are not allowed to decrypt the digital content, whether it is broadcasted or placed in organizational or public databases. Efficient revocation of users in broadcast encryption schemes has received significant attention over the years, as revocation is necessary and inevitable in numerous use cases. For example, in the healthcare domain, privacy regulations such as the Health Insurance Portability and Accountability Act (HIPAA) [1] give patients the right to specify their consent policy. More specifically, individuals can request restrictions on the use and disclosure of their health information. As a consequence, a doctor who would otherwise be allowed to view the patient's data according to these regulations may no longer be able to view it. The revocation of a user also becomes important when an employee leaves an organization. Ideally, the system should be able to revoke a user without affecting (or while only minimally affecting) the non-revoked users.
Our Contributions. In this paper we propose a new ciphertext-policy attribute-based broadcast encryption scheme. In our proposed scheme, the encryptor encrypts the data according to an access policy τ and a list of the identities of revoked users. Only users whose attribute set satisfies the access policy τ and whose identity is not in the list of revoked users are able to decrypt the ciphertext. The proposed scheme is inspired by the scheme of Naor and Pinkas [2], which in turn uses a secret-sharing technique to revoke users. We use this idea in the context of CP-ABE: only non-revoked users who satisfy the access policy are able to reconstruct the secret in the exponent and thus decrypt the ciphertext.
2 The Construction
1. Setup($k$). The setup algorithm selects a bilinear group $\mathbb{G}_0$ of prime order $p$ with generator $g$. It also chooses a bilinear map $\hat{e}: \mathbb{G}_0 \times \mathbb{G}_0 \rightarrow \mathbb{G}_1$ and a hash function $H_1: \mathbb{G}_1 \rightarrow \{0,1\}^l$. Next, it picks $\alpha, \beta, x_1, x_2, \ldots, x_k \in \mathbb{Z}_p$. For a set of attributes $\Omega = \{a_1, a_2, \ldots, a_k\}$, it sets $T_j = g^{x_j}$ $(1 \le j \le k)$. For the purpose of revocation, it picks $R \in \mathbb{Z}_p$ and generates a random polynomial $P(z) = \tilde{a}_0 + \tilde{a}_1 z + \tilde{a}_2 z^2 + \cdots + \tilde{a}_t z^t$ of degree $t$ over $\mathbb{Z}_p$ such that $P(0) = R$. Next, the setup algorithm generates $N$ identifiers $\{I_{u_1}, I_{u_2}, \ldots, I_{u_N}\}$ and a share $P(i)$ for each $1 \le i \le N$; in this paper, $P(i)$ is alternatively denoted by $P(I_{u_i})$. The setup algorithm also computes $t$ extra dummy shares, which are used when the number of revoked users is less than $t$. (A plain-arithmetic sketch of this secret sharing, and of its reconstruction during decryption, is given after the construction.) The public key PK and the master secret key MK consist of the following components:
$$\mathrm{PK} = \big( \hat{e}(g,g)^{\alpha},\ \hat{e}(g,g)^{\beta},\ \{T_j\}_{j=1}^{k},\ \{g^{P(I_{u_i})}\}_{i=1}^{N},\ \{g^{P(I_{d_q})}\}_{q=1}^{t} \big), \qquad \mathrm{MK} = \big( \alpha,\ \beta,\ R = P(0),\ \{x_j\}_{j=1}^{k} \big)$$
2. KeyGeneration(MK, $\omega$, $I_{u_i}$). The key generation algorithm takes as input the attribute set $\omega$ of the user and the identifier $I_{u_i}$ assigned to the user. It first picks $x, y \in \mathbb{Z}_p$ at random and then computes the private key $\mathrm{SK}_{I_{u_i},\omega}$, which consists of the following components:
$$\mathrm{SK}_{I_{u_i},\omega} = \Big( D^{(1)} = g^{\alpha - x - yP(0)},\ \big\{D_j^{(2)} = g^{\frac{x-\beta}{x_j}}\big\}_{a_j \in \omega},\ D^{(3)} = g^{yP(I_{u_i})},\ D^{(4)} = g^{y} \Big)$$
3. Encryption($m, \tau, \mathcal{R}$, PK). To encrypt a message $m \in \{0,1\}^l$ under an access policy $\tau$ over a set of attributes and a set of revoked users $\mathcal{R} = \{I_{u_1}, I_{u_2}, \ldots, I_{u_t}\}$, the encryption algorithm picks $s \in \mathbb{Z}_p$ at random and assigns values $\hat{s}_{\hat{i}}$ (shares of $s$) to the attributes in $\tau$. For example, the attributes are transformed into an access tree in which the inner nodes represent AND or OR boolean operators and the leaf nodes are attributes. The algorithm assigns the value $s$ to the root node. For an AND node, it assigns a share to each child node such that the sum of all shares is $s$; for an OR node, it assigns the same value $s$ to each of its child nodes. For the sake of simplicity we mention only access policies consisting of AND and OR nodes; however, our scheme could also support threshold ("out-of") nodes. The resulting ciphertext consists of the following components:
$$\mathrm{CT}_{\tau,\mathcal{R}} = \Big( C^{(1)} = m \oplus H_1\big(\hat{e}(g,g)^{\alpha s}\big),\ C^{(2)} = \hat{e}(g,g)^{\beta s},\ C^{(3)} = g^{s},\ \big\{C_{j,\hat{i}}^{(4)} = g^{x_j \hat{s}_{\hat{i}}}\big\}_{a_{j,\hat{i}} \in \tau},\ \big\{C_i^{(5)} = g^{sP(I_{u_i})}\big\}_{I_{u_i} \in \mathcal{R}} \Big)$$
4. Decryption($\mathrm{CT}_{\tau,\mathcal{R}}$, $\mathrm{SK}_{I_{u_i},\omega}$). The decryption algorithm takes as input the ciphertext $\mathrm{CT}_{\tau,\mathcal{R}}$ and the decryption key $\mathrm{SK}_{I_{u_i},\omega}$. It checks whether the attribute set $\omega$ associated with $\mathrm{SK}_{I_{u_i},\omega}$ satisfies the access policy $\tau$. If so, the algorithm chooses the smallest subset $\omega' \subseteq \omega$ that satisfies $\tau$ and proceeds as follows:
$$Z^{(1)} = C^{(2)} \cdot \prod_{a_j \in \omega'} \hat{e}\big(D_j^{(2)}, C_{j,\hat{i}}^{(4)}\big) = \hat{e}(g,g)^{\beta s} \cdot \prod_{a_j \in \omega'} \hat{e}\big(g^{\frac{x-\beta}{x_j}}, g^{x_j \hat{s}_{\hat{i}}}\big) = \hat{e}(g,g)^{x s}$$
$$Z^{(2)} = \hat{e}\big(C^{(3)}, D^{(3)}\big)^{\lambda_{u_i}} \cdot \prod_{I_{u_j} \in \mathcal{R}} \hat{e}\big(C_j^{(5)}, D^{(4)}\big)^{\lambda_{u_j}} = \hat{e}\big(g^{s}, g^{yP(0)}\big)$$
$$Z^{(3)} = \hat{e}\big(D^{(1)}, C^{(3)}\big) \cdot Z^{(1)} \cdot Z^{(2)} = \hat{e}(g,g)^{\alpha s}$$
The decryption algorithm recovers the message $m$ as $m = C^{(1)} \oplus H_1\big(Z^{(3)}\big)$.
Note: the $\lambda_i$'s in $Z^{(2)}$ are Lagrange coefficients $\lambda_i = \prod_{j \ne i} \frac{-j}{i-j}$. A revoked user is not able to compute $Z^{(2)}$ and hence cannot recover the message $m$.
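To make the two secret-sharing layers of the construction concrete, the following toy Python sketch works with plain modular arithmetic instead of group elements and pairings, so it illustrates only the arithmetic, not the cryptography. It covers the degree-$t$ revocation polynomial and its shares from Setup, the AND/OR assignment of the exponent $s$ from Encryption, and the Lagrange reconstruction of $P(0)$ used in $Z^{(2)}$. All function names and parameters are ours and purely illustrative.

```python
import secrets

PRIME = 2**127 - 1   # a Mersenne prime, standing in for the group order p

# Setup: revocation polynomial P(z) of degree t with P(0) = R, plus user and dummy shares.
def make_revocation_shares(t, n_users, modulus=PRIME):
    R = secrets.randbelow(modulus)
    coeffs = [R] + [secrets.randbelow(modulus) for _ in range(t)]
    def poly(z):
        acc = 0
        for c in reversed(coeffs):          # Horner evaluation of P(z) mod p
            acc = (acc * z + c) % modulus
        return acc
    user_shares = {i: poly(i) for i in range(1, n_users + 1)}                 # P(I_ui)
    dummy_shares = {n_users + q: poly(n_users + q) for q in range(1, t + 1)}  # used if |revoked| < t
    return R, user_shares, dummy_shares

# Encryption: split the session exponent s over an AND/OR policy tree.
# A node is ('attr', name), ('and', [children]) or ('or', [children]).
# (Toy: duplicate attribute names in different branches would overwrite each other.)
def share_over_policy(node, s, modulus=PRIME):
    kind = node[0]
    if kind == 'attr':
        return {node[1]: s}
    shares = {}
    if kind == 'and':                        # children's shares sum to s
        children = node[1]
        parts = [secrets.randbelow(modulus) for _ in children[:-1]]
        parts.append((s - sum(parts)) % modulus)
        for child, part in zip(children, parts):
            shares.update(share_over_policy(child, part, modulus))
    else:                                    # 'or': every child receives s itself
        for child in node[1]:
            shares.update(share_over_policy(child, s, modulus))
    return shares

# Decryption: Lagrange reconstruction of P(0) from t+1 shares {i: P(i)},
# i.e. the interpolation a decryptor performs in the exponent to obtain Z^(2).
def lagrange_at_zero(points, modulus=PRIME):
    total = 0
    for i, y in points.items():
        num, den = 1, 1
        for j in points:
            if j != i:
                num = (num * -j) % modulus           # lambda_i = prod_{j != i} -j / (i - j)
                den = (den * (i - j)) % modulus
        total = (total + y * num * pow(den, -1, modulus)) % modulus
    return total

if __name__ == "__main__":
    t, n = 3, 10
    R, users, dummies = make_revocation_shares(t, n)
    revoked = {1: users[1], 2: users[2], 3: users[3]}
    # A non-revoked user (id 5) combines her own share with the t revoked shares.
    assert lagrange_at_zero({5: users[5], **revoked}) == R
    s = secrets.randbelow(PRIME)
    policy = ('and', [('attr', 'doctor'),
                      ('or', [('attr', 'cardiology'), ('attr', 'oncology')])])
    shares = share_over_policy(policy, s)
    assert (shares['doctor'] + shares['cardiology']) % PRIME == s
    print("secret sharing checks passed")
```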
2.1 Security Proof
Theorem 1. Suppose the DBDH assumption holds. Then no polynomial-time adversary can break the CP-ABBE scheme with non-negligible advantage. The complete security proof will be provided in the full version of this paper.
3 Conclusions
In this work, we present a ciphertext-policy attribute-based broadcast encryption scheme. The proposed scheme has the capability to revoke users from a broadcasted message and can be used to revoke a limited number of users with a fixed and small ciphertext size. Its security has been proved under the DBDH assumption.
References
[1] The US Department of Health and Human Services: Summary of the HIPAA Privacy Rule (2003)
[2] Naor, M., Pinkas, B.: Efficient Trace and Revoke Schemes. In: Frankel, Y. (ed.) FC 2000. LNCS, vol. 1962, pp. 1–20. Springer, Heidelberg (2001)
Anonymous Authentication from Public-Key Encryption Revisited (Extended Abstract)

Daniel Slamanig
Carinthia University of Applied Sciences, Primoschgasse 10, 9020 Klagenfurt, Austria
[email protected]
Anonymous authentication seems to be an oxymoron, since authentication is the task of proving one's identity to another party, while anonymity is concerned with hiding one's identity. However, there are quite different constructions, such as ring [5] and group signatures [1], that solve this task. We focus on anonymous authentication protocols that use public-key encryption schemes as their underlying building block and which, in contrast to the aforementioned, have received only little attention. Such anonymous authentication protocols are much simpler than other constructions and can provide significant advantages over the aforementioned approaches. Firstly, they are fully compatible with deployed public-key infrastructures (PKIs) and can thus be adopted very easily. Secondly, such schemes have an "ad-hoc" character and do not require involved registration or setup procedures. This is especially advantageous in dynamic environments, e.g., when users dynamically join and leave the group of authorized users; in this context, existing primitives such as group signatures to date lack an efficient and practical solution. Furthermore, the "ad-hoc" character of these schemes allows users to flexibly choose their level of anonymity, i.e., the size of the group (anonymity set), for the sake of improved efficiency, and, unlike ring signatures, they do not suffer from linear complexity. Such constructions were discussed for the first time in [6], although in a quite limited way, since they were only applicable to deterministic public-key encryption (PKE) schemes. This idea was later improved in [8] and [4] for efficient use with probabilistic public-key encryption schemes. We review existing approaches to anonymous authentication from public-key encryption and present constructions which lead to the most efficient protocols. Let us denote by U the set of authorized users; every user ui ∈ U (the prover) who wishes to authenticate to a verifier V should be able to pass the authentication (correctness). This means that ui is able to prove to V that he belongs to U, while even a dishonest V should not be able to tell which uj ∈ U actually conducted the proof (anonymity). Hence, from the point of view of the verifier, every user is equally likely to be the one who is actually authenticating. Furthermore, the verifier should be able to restrict the ability to pass the authentication to members of U (unforgeability). The basic idea behind the protocols of interest is that a verifier runs n parallel instances of a challenge-response protocol based on PKE, using n distinct public keys (representing U) but a single challenge r. Note that removing a user from U means
revocation, which is an easy task. The most crucial issue is that the verifier may cheat and try to identify the anonymous authenticator by encrypting distinct challenges. When using deterministic PKE, the authenticator can easily verify whether the verifier behaves honestly by re-encrypting the challenge. In the case of probabilistic PKE, however, one needs an additional round to obtain the respective random coins, and one only achieves verifiable anonymity (a cheating verifier is able to identify the anonymous authenticator, who, however, can detect this and terminate the protocol). We now elaborate on the indicators that characterize an optimal anonymous authentication protocol from PKE. Communication: at least two messages (one round) are required. Bandwidth: the minimum bandwidth required by an anonymous authentication protocol is |U| ciphertexts plus the challenge (from ui to V). Computation: at least |U| encryptions (verifier) and at least one decryption (prover); in addition, one has to check whether the verifier cheats, and the most efficient solutions known so far require |U| − 1 encryption operations for this check.
A general approach to solve the cheating problem is to require that the verifier provides a proof of plaintext equality, e.g., using a general non-interactive zero-knowledge (NIZK) proof of language membership. By "generalizing" the Naor-Yung paradigm one obtains such a proof system for the following language L, but the size of the proof makes it impractical for real-world applications:
$$L = \{(c_1, \ldots, c_n, pk_1, \ldots, pk_n) \mid \exists m \text{ s.t. } c_1 = E_{pk_1}(m; \omega_1) \wedge \cdots \wedge c_n = E_{pk_n}(m; \omega_n)\}$$
Although one can construct more efficient proofs by putting restrictions on the PKE schemes used, this is far from optimal.
Encrypt-EwH: Using the EwH transformation of [3], we can reduce the round complexity to a single round for probabilistic schemes, i.e., any IND-CPA secure public-key encryption scheme. The approach is as follows: the verifier chooses $r \in_R \{0,1\}^l$, chooses a suitable hash function $H$, computes the ciphertext sequence $c = (E_{pk_1}(r; H(r)), \ldots, E_{pk_n}(r; H(r)))$ and sends $c$ to the user ui. The user computes $r' = D_{sk_i}(c[i])$ and checks whether $c[j] = E_{pk_j}(r'; H(r'))$ for all $j \ne i$. If this holds he accepts; otherwise he terminates the authentication. This construction gives round-optimal schemes and was already used to construct traceable anonymous identification schemes in [8]. Furthermore, we note that by combining it with randomness re-use [2] and a selection of suitable PKEs, we obtain round- and bandwidth-optimal schemes. One can also generalize this approach by using a PRNG (called Encrypt-PRNG) that is iteratively applied to the challenge r to derive the random coins.
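As an illustration of the Encrypt-EwH round, the following toy Python sketch derandomises textbook ElGamal with coins H(r), so that the prover can re-encrypt and verify that every ciphertext hides the same challenge. The tiny group parameters and all function names are our own and offer no security; the sketch only demonstrates the protocol logic under the assumption of an IND-CPA PKE made deterministic via the EwH transform.

```python
import hashlib
import secrets

# Toy safe-prime group: P = 2Q + 1, G generates the subgroup of prime order Q.
P, Q, G = 2879, 1439, 4

def keygen():
    x = secrets.randbelow(Q - 1) + 1
    return x, pow(G, x, P)                      # (secret key, public key)

def coins(r):                                   # EwH: derive the coins from the plaintext
    digest = hashlib.sha256(str(r).encode()).digest()
    return int.from_bytes(digest, "big") % Q or 1

def encrypt(pk, r, k):                          # textbook ElGamal with caller-supplied coins k
    return pow(G, k, P), (r * pow(pk, k, P)) % P

def decrypt(sk, ct):
    c1, c2 = ct
    return (c2 * pow(pow(c1, sk, P), -1, P)) % P

def verifier_round(public_keys):
    r = secrets.randbelow(P - 2) + 1            # single challenge for the whole anonymity set
    k = coins(r)                                # same coins for every recipient (randomness re-use)
    return r, [encrypt(pk, r, k) for pk in public_keys]

def prover_respond(i, sk_i, public_keys, ciphertexts):
    r = decrypt(sk_i, ciphertexts[i])
    k = coins(r)
    for j, pk in enumerate(public_keys):        # re-encrypt to detect a cheating verifier
        if ciphertexts[j] != encrypt(pk, r, k):
            return None                         # not everyone received the same challenge: abort
    return r                                    # anonymous response: any group member could have sent it

if __name__ == "__main__":
    keys = [keygen() for _ in range(5)]
    pks = [pk for _, pk in keys]
    r, cts = verifier_round(pks)
    assert prover_respond(2, keys[2][0], pks, cts) == r    # verifier accepts if the challenge matches
```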
Encrypt and Commit: Another approach is to use an unconditionally binding commitment scheme for bit strings in addition to public-key encryption. The idea is as follows: the verifier chooses a random challenge r and computes a commitment c to r. Then, he encrypts the opening information under every public key of U and sends the ciphertext sequence along with c to the anonymous user. The user decrypts the respective element of the sequence, obtains the opening information for the commitment, opens the commitment and thus receives the challenge. He then returns the challenge to the verifier, who in turn checks whether the sent and the received challenges match (see the sketch below). Due to the unconditional binding property of the commitment scheme, it is infeasible for the verifier to provide distinct openings that open the commitment to distinct values. Hence, the computational effort for the prover reduces to a single decryption operation and the opening of the commitment. Note that a verifier's best cheating strategy is to partition the set U into two equally sized halves, where one half receives the opening information and the other half receives rubbish. The probability of detecting a cheating verifier is thus 0.5, which makes cheating a poor strategy for the verifier. To further reduce the cheating probability, the approach below can additionally be applied.
Reducing Computations by Less Trial Encryptions: The most efficient approach so far requires the prover to perform only a single decryption operation and the opening of a commitment; all other approaches require the user to perform one decryption operation and n − 1 trial encryptions. We can obtain more efficient protocols by relaxing the anonymity guarantee and letting the prover check only whether k < n randomly chosen elements of the ciphertext sequence were encrypted properly. The verifier's best cheating strategy is to choose partitions of equal size, since all users are equally probable. Hence, when the verifier uses l partitions, the probability that it cheats successfully equals $l^{-k}$; the chances of cheating unnoticed thus decrease exponentially in k. Note that, in addition to a reduced computational effort, users also do not need to retrieve all the public keys of the other users in U, which saves bandwidth.
Postactive Anonymity: Another idea, from [7], is that users do not compute trial encryptions anymore, but post their received ciphertext sequence, along with a signature from the verifier on the ciphertext sequence (and r, without message recovery) and the decrypted random challenge, to a public bulletin board. This delegates verification to others while at the same time passing the full risk back to the dishonest verifier, and it makes honesty the verifier's best strategy.
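A minimal sketch of the encrypt-and-commit round is given below, assuming RSA-OAEP (from the Python `cryptography` package) as the PKE and a SHA-256 hash commitment as a stand-in for the unconditionally binding commitment the construction calls for (a hash commitment is only computationally binding). All function names are ours.

```python
import hashlib
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def commit(r: bytes):
    nonce = os.urandom(32)
    return hashlib.sha256(r + nonce).digest(), r + nonce     # (commitment, opening)

def verifier_round(public_keys):
    r = os.urandom(16)                                       # random challenge
    c, opening = commit(r)
    cts = [pk.encrypt(opening, OAEP) for pk in public_keys]  # opening encrypted for every member of U
    return r, c, cts

def prover_respond(i, sk_i, c, cts):
    opening = sk_i.decrypt(cts[i], OAEP)                     # a single decryption suffices
    if hashlib.sha256(opening).digest() != c:
        return None                                          # commitment does not open: abort
    return opening[:16]                                      # recovered challenge r, returned to the verifier

if __name__ == "__main__":
    keys = [rsa.generate_private_key(public_exponent=65537, key_size=2048) for _ in range(4)]
    r, c, cts = verifier_round([k.public_key() for k in keys])
    assert prover_respond(1, keys[1], c, cts) == r           # verifier checks the returned challenge
```

A cheating verifier who sends a valid opening to only half of the group is caught with probability 1/2 by a prover in the other half, matching the detection probability discussed above.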
References
1. Ateniese, G., Camenisch, J.L., Joye, M., Tsudik, G.: A Practical and Provably Secure Coalition-Resistant Group Signature Scheme. In: Bellare, M. (ed.) CRYPTO 2000. LNCS, vol. 1880, pp. 255–270. Springer, Heidelberg (2000)
2. Bellare, M., Boldyreva, A., Kurosawa, K., Staddon, J.: Multirecipient Encryption Schemes: How to Save on Bandwidth and Computation Without Sacrificing Security. IEEE Transactions on Information Theory 53(11), 3927–3943 (2007)
3. Bellare, M., Boldyreva, A., O'Neill, A.: Deterministic and Efficiently Searchable Encryption. In: Menezes, A. (ed.) CRYPTO 2007. LNCS, vol. 4622, pp. 535–552. Springer, Heidelberg (2007)
4. Lindell, Y.: Anonymous Authentication. JPC 2(2) (2011)
5. Rivest, R.L., Shamir, A., Tauman, Y.: How to Leak a Secret. In: Boyd, C. (ed.) ASIACRYPT 2001. LNCS, vol. 2248, pp. 552–565. Springer, Heidelberg (2001)
6. Schechter, S.E., Parnell, T., Hartemink, A.J.: Anonymous Authentication of Membership in Dynamic Groups. In: Franklin, M.K. (ed.) FC 1999. LNCS, vol. 1648, pp. 184–195. Springer, Heidelberg (1999)
7. Slamanig, D., Rass, S.: Anonymous But Authorized Transactions Supporting Selective Traceability. In: SECRYPT 2010, pp. 132–141. SciTePress (2010)
8. Slamanig, D., Schartner, P., Stingl, C.: Practical Traceable Anonymous Identification. In: SECRYPT 2009, pp. 225–232. INSTICC Press (2009)
Part IV
Mobile Identity Management

Jaap-Henk Hoepman
TNO, Groningen, The Netherlands
Radboud University Nijmegen, The Netherlands
[email protected]
Abstract. Identity management consists of the processes and all underlying technologies for the creation, management, and usage of digital identities. Businesses rely on identity management systems to simplify the management of access rights to their systems and services for both their employees and their customers. Users may benefit from identity management to simplify logging in to websites and computer systems (single sign-on), as well as to streamline the management of their personal information and preferences (user centricity). Current systems for identity management only partially achieve these goals and still suffer from several security, privacy and usability issues. We will discuss how personal mobile devices (such as mobile phones and PDAs) can be used to overcome this 'identity crisis' and to increase the security, privacy and usability of identity management systems.
Short Biography. Jaap-Henk Hoepman is a senior scientist in computer security, privacy and identity management at TNO (the Dutch Organisation for Applied Scientific Research) and an associate professor at the Radboud University Nijmegen, the Netherlands. His research into information security and cryptographic protocols is inspired by practical problems. He focuses on the design of secure and privacy-friendly protocols for the Internet of Things. Apart from that he studies privacy and identity management. He speaks on these topics at national and international congresses and publishes papers in national and international journals. He also appears in the media as a security expert and writes about his research in the popular press. He is actively involved in the public debate concerning security and privacy in our society.
Who Needs Facebook Anyway - Privacy and Sociality in Social Network Sites

Ronald E. Leenes
Tilburg Institute for Law, Technology, and Society, The Netherlands
[email protected]
Abstract. Social network sites (SNSs) pose a plethora of privacy issues that are reasonably well known and understood. Many of these issues boil down to the same problem: information makes it to the wrong audience. This problem is inherent to the design and business model of the current social network sites. How to cope with this? Two approaches seem obvious: address user behaviour and/or address the architecture of social network sites. In this presentation I will argue, by highlighting some of the social dynamics of SNSs, that the options for changing users' behaviour are limited. Next, I will focus on three areas of privacy issues: those caused by individual SNS users, those caused by the SNS platform providers, and those caused by non-subscribers. I will show how these issues are addressed within the EU FP7 project PrimeLife in the Clique prototype.
Short Biography. Dr. Ronald Leenes is a full professor in regulation by technology at TILT, the Tilburg Institute for Law, Technology, and Society (Tilburg University). His primary research interests are privacy and identity management, ID fraud, biometrics and Online Dispute Resolution. Leenes (1964) studied Public Administration and Public Policy at the University of Twente. He received his PhD from the same university for a study on hard cases in law and Artificial Intelligence and Law. Ronald was work package leader on socio-cultural aspects of privacy-enhancing IDM in the EU FP6 PRIME project. He was work package leader for access control in social software in the FP7 PrimeLife project and is work package leader on legal requirements within the FP7 ENDORSE project. He has contributed to and edited various deliverables for the EU FP6 Network of Excellence 'Future of IDentity in the Information Society' (FIDIS). He has published extensively on privacy in online applications, including Second Life and Social Network Sites.
From Protecting a System to Protecting a Global Ecosystem

Moti Yung
Google Inc. and Department of Computer Science, Columbia University
[email protected]
Abstract. The area of security used to be classified by technology (authentication, access control, monitoring, firewalls, cryptography, etc.) or by system (web security, application security, database security, operating system security, communication security, etc.). Nowadays, however, in order to operate a customer-facing service over the Internet, one needs to manage a complex infrastructure. This infrastructure hosts many components, numerous technologies, and various devices and computers. The infrastructures and the systems they support are dynamically evolving and can be characterized as ecosystems. Attackers on the global Internet, on the other hand, exploit weaknesses in one component to attack other parts of the system, taking advantage of the lack of a global security view. This trend will only increase in the future as more computers are embedded in the infrastructure, more demanding applications are developed, and bigger parts of the global economy move to cyberspace. Given the state of the art and future developments, it seems mandatory to develop a holistic yet practical approach to protecting the computing infrastructure, and to embed it in the ecosystem development process. This infrastructure will be a collection of interrelated ecosystems, with providers at the center of each ecosystem. In addition, the ecosystem has to be dynamically evolving. The position stated here is that approaches and methodologies for security will need to change and evolve as well. Given the global trend, the view of security as a process rather than as a component is becoming clearer, and the integration of security with the various other steps of ecosystem evolution is becoming a must as well.
Short Biography. Dr. Moti Yung is a Research Scientist at Google and an Adjunct Senior Research Faculty member at the Computer Science Department of Columbia University. Before that he was a member of IBM Research, was a consultant to leading companies and governments, and was with CertCo and RSA Laboratories. His main research interests are in the areas of security, cryptography and privacy, where he has been working on numerous scientific aspects as well as industrial solutions for over 25 years, and in their relationships to other areas of engineering and science. In 2010 he delivered the annual IACR Distinguished Lecture in Cryptography.
Author Index
Abou El Kalam, Anas 222
Alagheband, Mahdi R. 18
Aref, Mohammad Reza 18
Arndt, Christian 85
Asim, Muhammad 244
Bugiel, Sven 32
Chan, Patrick P.F. 94
Decroix, Koen 163
De Decker, Bart 3, 163, 214
De Strycker, Lieven 171
Dietrich, Kurt 45
Dittmann, Jana 59, 241
Fallahpour, Mehdi 235
Fraboul, Christian 222
Fruth, Jana 241
Fuß, Jürgen 178
Goemaere, Jean-Pierre 171
Hall, Matthew 238
Hamelinckx, Tom 171
Hermann, Eckehard 194
Hoepman, Jaap-Henk 253
Huber, Reinhard 72
Hui, Lucas C.K. 94
Ibraimi, Luan 244
Juszczyszyn, Krzysztof 206
Keller, Jörg 122
Koenig, Hartmut 134
Kohlweiss, Markulf 3
Kolaczek, Grzegorz 206
Koornstra, Reinoud 238
Kümmel, Karl 85
Lampesberger, Harald 194
Lapon, Jorn 3
Leenes, Ronald E. 254
Liu, Fuwen 134
Luzhnica, Granit 45
Maachaoui, Mohamed 222
Martens, Jeroen 171
Megías, David 235
Merkel, Ronny 59, 241
Milutinovic, Milica 163, 214
Mowbray, Miranda 238
Naessens, Vincent 3, 163
Nguyen, Dung Q. 186
Nürnberger, Stefan 32
Ottoy, Geoffrey 171
Peeters, Roel 214
Petković, Milan 244
Podesser, Siegfried 45
Preneel, Bart 108, 171, 186
Rajbhandari, Lisa 147
Rantos, Konstantinos 155
Sadeghi, Ahmad-Reza 32
Saeys, Nick 171
Scheidat, Tobias 85
Schneider, Thomas 32
Schönberger, Georg 178
Slamanig, Daniel 247
Snekkenes, Einar Arthur 147
Stögner, Herbert 72
Uhl, Andreas 72
Vielhauer, Claus 59, 85
Wendzel, Steffen 122
Weng, Li 108, 186
Winter, Johannes 45
Winter, Philipp 194
Yiu, S.M. 94
Yung, Moti 255
Zeilinger, Markus 194