Integrated Circuits and Systems
Series Editor Anantha Chandrakasan, Massachusetts Institute of Technology Cambridge, Massachusetts
For other titles published in this series, go to http://www.springer.com/series/7236
Ingrid M.R. Verbauwhede Editor
Secure Integrated Circuits and Systems
Editor Ingrid M.R. Verbauwhede Department of Elektrotechniek (ESAT) Katholieke Universiteit Leuven COSIC Division Kasteelpark Arenberg 10 3001 Leuven Belgium
[email protected]
ISSN 1558-9412 ISBN 978-0-387-71827-9 e-ISBN 978-0-387-71829-3 DOI 10.1007/978-0-387-71829-3 Springer New York Dordrecht Heidelberg London Library of Congress Control Number: 2009942092 c Springer Science+Business Media, LLC 2010 All rights reserved. This work may not be translated or copied in whole or in part without the written permission of the publisher (Springer Science+Business Media, LLC, 233 Spring Street, New York, NY 10013, USA), except for brief excerpts in connection with reviews or scholarly analysis. Use in connection with any form of information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed is forbidden. The use in this publication of trade names, trademarks, service marks, and similar terms, even if they are not identified as such, is not to be taken as an expression of opinion as to whether or not they are subject to proprietary rights. Printed on acid-free paper Springer is part of Springer Science+Business Media (www.springer.com)
Preface
Security is only as strong as the weakest link. The mathematical design and analysis of cryptographic algorithms has evolved considerably over the last decades (ever since the invention of public-key cryptography at the end of the 1970s). The mathematical strength of cryptographic algorithms is now at such a level that an attacker will choose the implementation as the weak link in the chain. Many incidents have been reported for hardware and software implementations. Even the human factor, forgetting or using easy passwords, is often the weak link. Weak implementations are becoming an even bigger problem as more and more information processing moves to small portable embedded devices. These small devices are cheap, lightweight, easy to carry around, and also easy to lose. The need for embedded security is omnipresent in cell phones, PDAs, medical devices, automotive and consumer electronics, smart cards, RFID tags, sensor nodes, and so on. At the other end of the spectrum, computation and storage of sensitive data move from the hard disks of our personal PCs to central servers and to the so-called clouds. In these environments as well, efficient and secure implementations are a necessity to provide security and privacy. The goal of this book, Secure Integrated Circuits and Systems, is to give the integrated circuit and system designer an insight into the basics of security and cryptography from the implementation viewpoint. The designer should aim at efficient implementations, i.e., optimizing power, area, and throughput, as well as secure implementations, i.e., implementations that resist attacks, and more specifically side-channel attacks. This book therefore covers techniques both to improve efficiency and to resist side-channel attacks. The book consists of four major parts. Part I gives the basics. This includes an introduction to the basic arithmetic used mostly in public-key algorithms and an introduction to side-channel attacks. Part II describes basic building blocks of any cryptographic system. When building a complex system, such as a system-on-chip, a designer will build, obtain, or license intellectual property (IP) modules. The basic modules are symmetric-key algorithms, public-key algorithms, and hash functions. Other building blocks are random number generators, nonce generators, and physically uncloneable functions (PUFs).
The aim of Part III is to describe design methods for secure design. Each link in the chain has to be secure: this means that each part of the design process should have security in mind. This has to be the case for back-end design, from a register-transfer-level description down to layout. It also has to be the case for higher-level design: e.g., the GEZEL design environment promotes secure hardware/software co-design. Part IV illustrates the topic by examples: security for RFID, end-point security for FPGAs, and securing flash memories. Secure Integrated Circuits and Systems is written for any integrated circuit or embedded systems designer who creates designs for ASICs, FPGAs, small embedded processors, and/or embedded systems. By no means do I claim that this book is complete. It is only a start to get the designer going, and an attempt to bridge the gap between the theoretical mathematics of cryptography and the design issues that make it work in practice. I would like to thank the contributors to this book and the people working in this field for their indirect contributions. July 2009
Ingrid M.R. Verbauwhede
Contents
Part I Basics

1 Modular Integer Arithmetic for Public-Key Cryptography . . . . . . . . . 3
Tim Güneysu and Christof Paar
2 Introduction to Side-Channel Attacks . . . . . . . . . . . . . . . . . . . . . . . . . . 27
François-Xavier Standaert
Part II Cryptomodules and Arithmetic

3 Secret Key Crypto Implementations . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
Guido Marco Bertoni and Filippo Melzani

4 Arithmetic for Public-Key Cryptography . . . . . . . . . . . . . . . . . . . . . . . . 63
Kazuo Sakiyama and Lejla Batina

5 Hardware Design for Hash Functions . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
Yong Ki Lee, Miroslav Knežević, and Ingrid M.R. Verbauwhede
Part III Design Methods for Security

6 Random Number Generators for Integrated Circuits and FPGAs . . . 107
Berk Sunar and Dries Schellekens

7 Process Variations for Security: PUFs . . . . . . . . . . . . . . . . . . . . . . . . . . 125
Roel Maes and Pim Tuyls
Part IV Applications

8 Side-Channel Resistant Circuit Styles and Associated IC Design Flow . . . 145
Kris Tiri

9 Counteracting Power Analysis Attacks by Masking . . . . . . . . . . . . . . . 159
Elisabeth Oswald and Stefan Mangard

10 Compact Public-Key Implementations for RFID and Sensor Nodes . . 179
Lejla Batina, Kazuo Sakiyama, and Ingrid M.R. Verbauwhede

11 Demonstrating End-Point Security in Embedded Systems . . . . . . . . . 197
Patrick Schaumont, Eric Simpson, and Pengyuan Yu

12 From Secure Memories to Smart Card Security . . . . . . . . . . . . . . . . . . 215
Helena Handschuh and Elena Trichina

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 235
Contributors
Lejla Batina Katholieke Universiteit Leuven, Leuven-Heverlee, Belgium and Radboud University Nijmegen, The Netherlands,
[email protected] Guido Marco Bertoni STMicroelectronics, Centro Direzionale Colleoni 20041 Agrate, Italy,
[email protected] Tim Güneysu Chair for Embedded Security, Ruhr University Bochum, Bochum, Germany,
[email protected] Helena Handschuh Katholieke Universiteit Leuven, ESAT/COSIC, Kasteelpark Arenberg 10, B-3001 Leuven-Heverlee,
[email protected] Miroslav Knežević Katholieke Universiteit Leuven, ESAT/COSIC, Kasteelpark Arenberg 10, B-3001 Leuven-Heverlee, Belgium,
[email protected] Yong Ki Lee University of California, Los Angeles, CA, USA; Electrical Engineering, 420 Westwood Plaza, Los Angeles, CA 90095-1594, USA,
[email protected] Roel Maes Katholieke Universiteit Leuven, ESAT/COSIC, Kasteelpark Arenberg 10, B-3001 Leuven-Heverlee, Belgium,
[email protected] Stefan Mangard Infineon Technologies AG, Security Innovation, Am Campeon 1-12, 85579 Neubiberg, Germany,
[email protected] Filippo Melzani STMicroelectronics, Centro Direzionale Colleoni 20041 Agrate, Italy,
[email protected] Elisabeth Oswald Computer Science Department, University of Bristol, Merchant Venturers Building, Woodland Road, Bristol, BS8 1UB, UK; Institute for Applied Information Processing and Communication, Graz University of Technology, Inffeldgasse 16a, 8010 Graz, Austria,
[email protected] Christof Paar Chair for Embedded Security, Ruhr University Bochum, Bochum, Germany,
[email protected]
Kazuo Sakiyama University of Electro-Communications, Tokyo, Japan,
[email protected] Patrick Schaumont ECE Department, Virginia Tech, Blacksburg, VA 24061, USA,
[email protected] Dries Schellekens Katholieke Universiteit Leuven, ESAT/COSIC, Kasteelpark Arenberg 10, B-3001 Leuven-Heverlee, Belgium,
[email protected] Eric Simpson ECE Department, Virginia Tech, Blacksburg, VA 24061, USA François-Xavier Standaert UCL Crypto Group, Place du Levant 3, B-1348 Louvain-la-Neuve, Belgium,
[email protected] Berk Sunar Electrical and Computer Engineering Department, Worcester Polytechnic Institute, Worcester MA 01609–2280, USA,
[email protected] Kris Tiri Work performed while at UCLA,
[email protected] Elena Trichina Advanced System Technology ST Microelectronics Rousset, France,
[email protected] Pim Tuyls Intrinsic-ID, Eindhoven, The Netherlands,
[email protected] Ingrid M.R. Verbauwhede Katholieke Universiteit Leuven, ESAT/COSIC, Kasteelpark Arenberg 10, B-3001 Leuven-Heverlee, Belgium,
[email protected] Pengyuan Yu ECE Department, Virginia Tech, Blacksburg, VA 24061, USA
Part I
Basics
Chapter 1
Modular Integer Arithmetic for Public-Key Cryptography

Tim Güneysu and Christof Paar
For most of the centuries-old history of cryptography, symmetric-key (or private-key) algorithms were used for data encryption. In private-key cryptography, the communicating parties share one secret key. For a given encryption function y = e_k(x), the corresponding decryption function must satisfy the condition x = d_k(y) = e_k^{-1}(y), where both functions use the same key k. Unfortunately, for confidential communication, the symmetric key k needs to be established between the parties before they can exchange messages. This key exchange requires a separate secure channel. Although symmetric algorithms can be used for establishing keys, e.g., in systems like Kerberos [15], such systems do not scale very well and have single points of failure. In 1976, Diffie and Hellman [12] as well as Merkle [32] invented a novel branch of cryptography called public-key cryptography¹, creating a new (and publicly available) paradigm. The idea of public-key cryptography was eagerly absorbed by the scientific community and led to numerous commercial applications in the 1980s and 1990s. Public-key methods offer the advantage of elegant key agreement schemes with which a secret key, e.g., for use with symmetric ciphers, can securely be established over insecure channels. In addition to solving the key management problem, the other major application of PKC is digital signatures, with which non-repudiation of message exchanges can be achieved. In this context, recall that message authentication based on conventional symmetric means (e.g., message authentication codes) allows all parties to create authenticators with the shared key, so that it is not possible to prove the origin of an authenticated message if one party is dishonest. To enable all these new features, public-key cryptography has evolved into a major
T. Güneysu, Chair for Embedded Security, Ruhr University Bochum, Bochum, Germany
e-mail: [email protected]

¹ According to [14], the discovery of public-key cryptography (PKC) in the intelligence community is attributed to John H. Ellis in 1970. The discovery of the equivalent of the RSA cryptosystem [38] is attributed to Clifford Cocks in 1973, while the equivalent of the Diffie–Hellman key exchange was discovered by Malcolm J. Williamson in 1974. However, it is believed that these British scientists did not realize the practical implications of their discoveries at the time of their publication (see, for example, [39, 11]).
application area for seemingly specific mathematical topics from number theory and algebra. Today, almost all PKC methods with practical relevance are based on arithmetic in finite fields or finite rings.

Public-key cryptography makes use of a key pair PK = (k_pub, k_sec) consisting of a public component k_pub that is distributed among all communication partners and a secret component k_sec for private use. Hence, PKC is also referred to as asymmetric cryptography, since different keys are used for the private and the public operations. Practical PK schemes are based on one-way trapdoor functions: everyone can use a cryptographic service or operation involving the public key k_pub and the one-way function y = f(x, k_pub) to protect a message x, but x can only be recovered using the inverse trapdoor function x = g(y, k_sec), which requires knowledge of the secret component k_sec. One-way trapdoor functions for PKC are selected from a set of hard mathematical problems augmented with a trapdoor for easy recovery with special knowledge.² The one-way trapdoor functions used in well-established cryptosystems are based on the following mathematical problems:

Integer factorization problem (FP): For a composite integer n = p_1 · p_2 ··· p_k consisting of unknown primes p_i, it is considered hard to retrieve the p_i when n (and the primes p_i) are sufficiently large.

Discrete logarithm problem in finite fields (DLP): For an element a ∈ G and b ∈ <a>, where G is the multiplicative group of a finite field and <a> the subgroup generated by a, it is assumed to be hard to compute the exponent x with b ≡ a^x if the order of <a> is sufficiently large.

Elliptic curve discrete logarithm problem (ECDLP): For a point a ∈ E and b ∈ <a>, where E is an elliptic curve over a finite field and <a> the subgroup generated by a, it is assumed to be hard to compute the scalar x with b = x·a if the order of <a> is sufficiently large.

The definitions stated above give rise to the question of how a "sufficiently large" modulus n or subgroup <a> is defined. The security of a given one-way trapdoor function must be related directly to the best known attacks. The more powerful an attack is (in terms of time complexity), the longer the corresponding security parameter (e.g., the modulus n or the subgroup <a>) must be chosen to achieve the desired level of protection. Currently, the FP is most efficiently attacked with number field sieve (NFS) methods. Related attacks are known for the DLP as well: index calculus (IC) attacks on the DLP have a complexity comparable to the NFS. Both attacks show a subexponential time complexity of about C(n) = e^{(1.9229+o(1)) (ln n)^{1/3} (ln ln n)^{2/3}}, where n is the modulus. In contrast, for the ECDLP, the best known computational method possesses a time complexity of about C(n) = √(πn)/2 steps [37] if the curve
² It is important to understand that NP-complete problems from computer science cannot simply be converted for use with PKC, since a one-way function for PKC must guarantee hardness for essentially all instances, whereas NP-completeness only guarantees worst-case hardness.
parameters have been chosen carefully. This algorithm is Pollard's rho attack for generic groups. Having those attack complexities at hand, we can compute the required parameter lengths for PKC to achieve a level of protection comparable to a (secure) symmetric block cipher, where we assume that an exhaustive key search over the key space is the best symmetric attack method. Roughly speaking, a block cipher with an 80-bit key space |k| provides the same security as |k_FP,DLP| ≥ 1024 bit and |k_ECDLP| ≥ 160 bit. For more information on how to select and compare key sizes for different cryptographic applications, we refer to [29]. Of course, the field of PKC is not limited to the one-way trapdoor functions mentioned above. With the rise of PKC over the last decades, there have been several alternative proposals, e.g., based on codes [30] or lattices [19]. However, such alternative public-key systems are hardly used in practical systems up to now. Thus, we will restrict ourselves in this chapter to schemes of practical relevance. In particular, we will discuss the arithmetic required for schemes based on the FP, DLP, and ECDLP. Up to now, we have briefly introduced the underlying mathematical problems of PKC but have not discussed how to actually build cryptosystems from them. The most prominent example for employing the FP in cryptography is the RSA cryptosystem proposed in 1978 [38]. Popular cryptographic protocols involving the DLP are the Diffie–Hellman key exchange (DHKE) [12], the digital signature algorithm (DSA) [1], and the ElGamal encryption and signature schemes [13]. The main operation of all of these schemes is the computation of a modular exponentiation a^e mod m with n-bit multi-precision integers a, e, m ∈ Z_m. Thus, integer exponentiation and the underlying operations of modular multiplication and inversion in finite rings or fields are crucial for modern PKC. Other applications of finite field arithmetic are elliptic curve cryptosystems (ECC), introduced by Miller and Koblitz [34, 24], and hyperelliptic curve cryptosystems (HECC), a generalization of elliptic curves introduced by Koblitz in [25]. These schemes also require modular additions and subtractions, but use shorter operands. We will discuss the mathematical background of finite field arithmetic and field classes in Section 1.1 in more detail. Since parameters used in cryptographic devices for PKC are very large, often between 1024 and 4096 bits(!), the corresponding modular exponentiation is computationally very challenging, especially if the target platform is constrained, e.g., a smart card processor. The time complexity of an n-bit modular exponentiation is O(n^3), which corresponds to hundreds of thousands of operations on a typical CPU. Modular multi-precision addition and subtraction as used in ECC/HECC is considered to have only a minor impact on the overall performance.³ Hence, the major focus when realizing cryptosystems based on modular exponentiation, like RSA or DHKE, is to use efficient implementations of the underlying modular arithmetic. This significantly improves the performance of every top-layer application or crypto protocol that heavily relies on these basic operations. The figure below depicts the relationship and relevance of the different computational layers in a PKC. Note
³ Recall that the time complexity of an n-bit addition or subtraction is only O(n).
that ECC and HECC cryptosystems introduce, in addition to basic arithmetic and exponentiation, another layer, since computations over an elliptic curve are based on a specific group operation. For the group operation of ECC and HECC cryptosystems we distinguish two cases: for two points P, Q ∈ E with P ≠ ±Q, the group operation is the point addition P + Q; for P = Q, it is the point doubling 2P. Building an exponentiation unit for a public-key scheme such as
RSA or DSA can easily involve thousands of modular multiplications with operands 1024 bits long or larger. This makes a fully pipelined hardware architecture nearly impossible even with today's advanced IC technology. Hence, especially in hardware, many degrees of freedom exist for implementing a PKC, and a careful choice of suitable basic building blocks is required. Moreover, a developer is often faced with implementation constraints given by the target device. For instance, when designing for a device like a smart card or a sensor node, tight restrictions in terms of minimal energy consumption and area must be met. In this chapter we will not be able to explicitly highlight all possible ways to implement public-key primitives for all possible situations, but we will give an outline of how common architectures with an optimal area–time product can be built. We will introduce arithmetic building blocks for implementing public-key cryptosystems based on the DLP and ECDLP problems and the underlying arithmetic for F_p and F_2^m. Since the arithmetic operations required for RSA computations are similar to those for F_p, almost identical building blocks can be employed for this cryptosystem as well. In Section 1.1, we begin with general remarks on finite field operations and their relevance in popular cryptosystems. Next, we discuss building blocks for modular addition, subtraction, multiplication, and inversion in prime fields F_p. We highlight Montgomery-based architectures as well as implementations using fast reduction schemes based on generalized Mersenne primes. In Section 1.3, we explain the usage of binary extension fields F_2^m and their advantages for hardware-based cryptography. For these fields, we highlight bit-wise and digit-wise multipliers as well as a fast binary field inversion based on Itoh and Tsujii's method.
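To make the cost argument above concrete, the following minimal Python sketch (not taken from the chapter) shows the left-to-right square-and-multiply method behind the modular exponentiation a^e mod m; every exponent bit costs one or two n-bit modular multiplications, which is exactly the operation the remainder of this chapter optimizes.

```python
def mod_exp(a, e, m):
    """Left-to-right square-and-multiply: computes a^e mod m.

    Each of the ~log2(e) iterations costs one or two modular
    multiplications, which is why efficient modular multiplication
    dominates the cost of RSA, DHKE, DSA, and ElGamal."""
    result = 1
    a %= m
    for bit in bin(e)[2:]:                 # scan exponent bits, MSB first
        result = (result * result) % m     # square
        if bit == '1':
            result = (result * a) % m      # conditional multiply
    return result

# Example with toy numbers (real parameters are 1024-4096 bits long).
print(mod_exp(7, 560, 561), pow(7, 560, 561))   # cross-check with Python's pow
```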
1.1 Modular Arithmetic in Finite Fields

As mentioned in the introduction, many popular public-key cryptosystems rely on modular arithmetic over finite fields. They need modular exponentiation (DHKE, DSA, ElGamal) as well as sequences of field operations, as in the case of ECC and HECC. In the following, we highlight the mathematical background of fields and their arithmetic. From a mathematical point of view, a field is defined as a set F of elements with a multiplication and an addition operation which satisfy associativity and commutativity of addition and multiplication as well as the distributive law. For a field, the existence of the additive and multiplicative identities is also required, as well as inverse operations like subtraction and (multiplicative) inversion complementing the set of operations on F. Note that the inverse of multiplication is usually not regarded as a division for general fields and is defined only on elements a ∈ F\{0}. The following well-known sets are examples of (infinite) fields: Q, R, and C, the sets of rational, real, and complex numbers, respectively. In cryptographic applications, however, fields with a finite number of elements are needed. A straightforward construction of a finite field is to use modular arithmetic of integers from the interval [0, p − 1] with a prime modulus p. Addition and multiplication then form groups over the sets Z_p and Z_p^*, respectively. Note that in the multiplicative group Z_p^* the zero element is always excluded, since the element a = 0 has no inverse. Prime fields (or at least their multiplicative group Z_p^*) are used for constructing the DHKE, DSA, and ElGamal encryption. Finite fields are not limited to prime fields. Using these fields as a basis, we can create an extension by introducing an m-dimensional vector space over F_p. Combined with some further mathematical properties, these fields are denoted as extension fields F_p^m. Fields F_p^m have p^m elements and are typically represented by adjoining a variable X in polynomial representation. Using the special case p = 2 results in binary extension fields F_2^m; these fields are well suited for hardware implementation, since their elements can be efficiently written as bit vectors of length m. Elements of F_2 can be represented by the logical values 0 and 1, and thus elements of F_2^m can simply be represented as vectors of zeros and ones. For example, consider the binary extension field F_2^3. This field consists of eight elements, which can be represented either in bit-vector or in polynomial notation (Table 1.1). When we add two elements A(X), B(X) over a binary extension field, the rules for polynomial addition apply, i.e., we add all coefficients component-wise. Multiplication C(X) = A(X) · B(X), however, requires not only a polynomial multiplication but also the modular reduction of the product by an irreducible polynomial. We denote this modular multiplication involving the irreducible field polynomial F(X) by C(X) = A(X) · B(X) mod F(X). Besides binary extension fields, there is another class of extension fields F_p^m over primes p > 2. Until 1997, applications of fields F_p^m for odd p were scarce in the cryptographic literature. Even though binary fields are still by far the most popular type of extension field, the more general fields F_p^m have been treated in the literature
Table 1.1 Vector and polynomial representation of field elements from the binary extension field F_2^3

Element   Vector z ∈ {0,1}^3   Polynomial Z(X)
#1        z_0 = 000            Z_0(X) = 0
#2        z_1 = 001            Z_1(X) = 1
#3        z_2 = 010            Z_2(X) = X
#4        z_3 = 011            Z_3(X) = X + 1
...       ...                  ...
#7        z_6 = 110            Z_6(X) = X^2 + X
#8        z_7 = 111            Z_7(X) = X^2 + X + 1
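As a quick software illustration of the F_2^3 example in Table 1.1, the following Python sketch represents elements as 3-bit integers; the choice of the irreducible polynomial F(X) = X^3 + X + 1 is an assumption made for this example, since the table itself does not fix one.

```python
# Elements of F_2^3 as 3-bit integers; assumes the irreducible polynomial
# F(X) = X^3 + X + 1 (0b1011), which the text leaves unspecified.
F = 0b1011
M = 3

def add(a, b):
    # Addition is coefficient-wise over F_2, i.e., a plain XOR.
    return a ^ b

def mul(a, b):
    # Polynomial multiplication followed by reduction mod F(X).
    prod = 0
    for i in range(M):
        if (b >> i) & 1:
            prod ^= a << i
    for i in range(2 * M - 2, M - 1, -1):   # cancel powers X^i with i >= M
        if (prod >> i) & 1:
            prod ^= F << (i - M)
    return prod

# z6 = X^2 + X (110), z3 = X + 1 (011)
print(bin(add(0b110, 0b011)))   # 0b101 = X^2 + 1
print(bin(mul(0b010, 0b110)))   # X * (X^2 + X) = X^3 + X^2 = X^2 + X + 1 mod F(X)
```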
since the late 1990s [33, 3, 27, 41, 4]. The more recent introduction of pairing-based cryptographic schemes [8] also gives new relevance to such fields. Although there is ongoing research on fields F_p^m, e.g., with respect to HECC cryptosystems, we will not focus on them due to their limited practical relevance. As mentioned, all arithmetic in finite fields requires a reduction scheme, either by a modulus p for prime fields or by an irreducible polynomial F(X) for extension fields. From an arithmetic point of view, this reduction step is usually very complex; hence, optimizations have been developed for many field types. For prime fields, Mersenne primes p = 2^k − 1 can be used to allow for a very efficient reduction scheme. However, there are only very few suitable Mersenne primes and, in addition, they can pose security risks depending on the cryptosystem in question. Hence, variants of Mersenne primes have been proposed which are widely used in modern PKC: we distinguish pseudo-Mersenne primes p = 2^m − c and generalized Mersenne primes p = 2^m − 2^n ± 2^o ± ··· ± 1 [42], of which the latter have more practical relevance due to standardization [36] and complexity advantages. Similarly, special extension fields have been proposed offering optimized reduction schemes on polynomials: examples are composite fields [18] and optimal extension fields [4], which introduce similar tricks for more efficient reduction. However, both composite and optimal extension fields have limited practical relevance and will not be discussed in further detail in this chapter. An overview of the classes of finite fields with applications in cryptography is shown in Fig. 1.1. In this chapter we will mainly focus on building blocks for PKC involving finite fields F_p for general primes and generalized Mersenne numbers, as well as on binary extension fields. These have the most relevance in real-world applications and are therefore of major interest. It should be noted that, although computationally more complex in hardware, prime fields have a higher practical relevance than binary extension fields. It is important to mention that the RSA cryptosystem actually does not have a finite field as its basic arithmetic structure. RSA computations are performed in an integer ring Z_n due to the composite modulus n = pq, which is constructed from two primes p, q. However, the fundamental arithmetic operations in integer rings
Fig. 1.1 Classes of finite fields for applications in cryptography
and prime fields show no difference so that we can use identical hardware architectures. There are many possibilities for implementing field operations in hardware, which are not dependent on the field type. In general, we distinguish between sequential and parallel implementations and algorithms. Parallel algorithms yield the advantage of a high throughput with wide data paths at the cost of a large gate count. Sequential algorithms tend to have smaller data paths by operating on small portions (and even single bits only) of the operands, resulting in a longer execution time. Of course, there are combined approaches like digit-based circuits to achieve the best trade-off between hardware requirements and high performance. In fact, in practice, such combined architectures often yield optimum results. The choice of selecting the appropriate implementation strategy heavily depends on the development target platform and the corresponding constraints. For instance, for area and cost-limited cryptography in smart cards the employment of small, heavily sequential algorithms is preferred, whereas applications such as server-side crypto accelerators require high-throughput solutions, leading to more parallelized implementations.
1.2 Crypto Building Blocks for Fields F_p

In this section, we outline efficient hardware architectures for performing addition, subtraction, multiplication, and inversion in fields F_p, where p is a prime p > 2. Section 1.2.1 deals with integer adders, which will be fundamental building blocks for the F_p multipliers presented in Section 1.2.2. Furthermore, some cryptosystems like DSA and ElGamal require finding the multiplicative inverse of a given element. Thus, we will address inversion circuits for fields F_p in hardware as well.
1.2.1 Addition and Subtraction in F_p

In this section we briefly introduce adders for unsigned integers, as we will need them for more complicated arithmetic operations such as multipliers. Furthermore, we discuss how to combine these primitive adder units to implement efficient modular addition and subtraction. In the following, we consider the addition of two n-bit integers X = Σ_{i=0}^{n−1} x_i 2^i and Y = Σ_{i=0}^{n−1} y_i 2^i with

X + Y = S = c_out · 2^n + Σ_{i=0}^{n−1} s_i 2^i,

where each sum bit s_i is produced from x_i, y_i, and the incoming carry, yielding a result bounded by (n + 1) bits. We refer to X and Y as the inputs (and their bits x_i and y_i as the input bits) and to S as the sum (and its bits s_i for i = 0 ··· n − 1 as the sum bits). Single-bit half-adders (HA) and full-adders (FA) known from computer engineering are the basic building blocks used to synthesize more complex adders. Hence, the input bits x_i and y_i are combined, e.g., using a full-adder cell, into a sum bit s_i and a corresponding carry c_i. In this chapter, we distinguish four different types of adder cells for integer addition: carry ripple adders (CRA), carry look-ahead adders (CLA), carry save adders (CSA), and carry delayed adders (CDA). Instead of explaining the respective advantages of each adder type, we have assembled their asymptotic complexities in Table 1.2. This table can be used by hardware developers for selecting the appropriate adder type according to area and timing constraints. For further information, the interested reader is referred to [28, 17].

Table 1.2 Asymptotic area and time complexities of different n-bit adders
Adder type               Abbreviation   Area          Time
Carry ripple adder       CRA            O(n)          O(n)
Carry look-ahead adder   CLA            O(n log n)    O(log n)
Carry save adder         CSA            O(n)          O(1)
Carry delayed adder      CDA            O(n)          O(1)
Combining several single adder cells allows for the implementation of an n-bit adder block. When implementing an n-bit modular addition or subtraction X ± Y mod p based on such an adder block, the result has to be reduced modulo p. Generally, in the case of a modular addition X + Y mod p we check whether the intermediate result fulfills X + Y ≥ p and, if this is the case, reduce the result by subtracting the modulus p once. In the case of a modular subtraction X − Y mod p, we subtract first, check whether X − Y < 0, and add the modulus if applicable.
In hardware implementations, we can follow a different approach to achieve a more regular architecture. Instead of testing whether a result S_ADD = X + Y ≥ p or S_SUB = X − Y < 0 has exceeded or undershot the interval [0, p − 1], respectively, we always apply the reduction and then select the correct result using an output multiplexer controlled by the corresponding carry bits C_i. A combined algorithm for computing the modular sum or difference of two inputs X, Y is given by Algorithm 1. Note that the corresponding operation is selected via an operation flag f, computing X − Y mod p when f = 1 and X + Y mod p otherwise. Algorithm 1 can be implemented using two n-bit adder units built from CRA, CLA, or CDA cells. The use of CSA adders, however, would imply a recombination step of the corresponding outputs C and S in order to determine whether X + Y ≥ p or X − Y < 0.
Algorithm 1 Modular addition and subtraction
Input: X, Y, p with 0 ≤ X, Y < p; operation flag f ∈ {0, 1} (f = 1 denotes a subtraction, f = 0 an addition)
Output: u = X ± Y mod p
1: (C_0, S_0) = X + (−1)^f Y
2: (C_1, S_1) = S_0 + (−1)^{1−f} p
3: if C_f = 0 then
4:    Return S_f
5: else
6:    Return S_{1−f}
7: end if
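A behavioral Python model of Algorithm 1 may help clarify the data flow; it is only a sketch of the selection logic and uses sign tests in place of the hardware carry bits C_f.

```python
def mod_addsub(x, y, p, f):
    """Python sketch of Algorithm 1 (data flow only, not a hardware model).

    f = 0 computes x + y mod p, f = 1 computes x - y mod p.
    Both candidate results are formed and one is selected, mirroring
    the multiplexer-based structure described in the text."""
    assert 0 <= x < p and 0 <= y < p
    s0 = x + y if f == 0 else x - y       # first adder: X +/- Y
    s1 = s0 - p if f == 0 else s0 + p     # second adder: conditional correction
    if f == 0:
        return s0 if s0 < p else s1       # subtract p only on overflow
    else:
        return s0 if s0 >= 0 else s1      # add p only on underflow

p = 101
print(mod_addsub(90, 50, p, 0))   # (90 + 50) mod 101 = 39
print(mod_addsub(10, 50, p, 1))   # (10 - 50) mod 101 = 61
```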
1.2.2 Multiplication in F_p

In this section, we describe hardware architectures for two algorithmic concepts for modular multiplication over general prime fields. Modular multiplication in F_p is the operation Z = X · Y mod p with operands X, Y ∈ F_p (X, Y < p), where p denotes the modulus. The most straightforward way to implement a modular multiplication would be a full multiplication with a subsequent division to determine the remainder. However, this approach requires a wide data path and an expensive multi-precision operation (recall that operands tend to have hundreds of bits). Thus, other methods like Montgomery modular multiplication and interleaved modular multiplication have been proposed, allowing for a more area–time-efficient design. Since modular multiplication is usually one of the most complex operations in cryptographic algorithms, we need to consider carefully which type of algorithm to choose:
• Parallel algorithms: Most such algorithms are optimized for high throughput and calculate the modular product with a time complexity of O(log p) [45]. Their disadvantage is a huge area complexity, resulting in an expensive hardware implementation. But many practical applications require low-cost solutions, especially now that an increasing number of high-volume products require cryptographic foundations (e.g., in consumer electronics).

• Sequential algorithms: The sequential algorithms of highest importance are the classical modular multiplication [22], Barrett modular multiplication [5], interleaved modular multiplication [7, 40], and Montgomery modular multiplication [35]. They operate on bits or chunks of the input data sequentially, which results in longer runtimes but allows for an area-optimal implementation due to a small data path.

As a more efficient approach than conventional multiplication with a subsequent division, Barrett proposed an alternative based on three standard multiplications and some additions. The disadvantage of this solution is the high time complexity of three multiplications. During interleaved modular multiplication, the multiplication and the calculation of the remainder of the division are interleaved. The advantage is that the length of the intermediate result is only one or two bits larger than the operands. The disadvantage is the use of subtractions in order to reduce the intermediate results. An efficient implementation of an interleaved multiplication can be found in [9]. Since Montgomery multiplication is the most frequently used method for modular multiplication, we highlight it in more detail in this chapter. The computation is done in the Montgomery domain, which is defined by the mapping a → a · 2^R mod p for an element a ∈ F_p and an R with p < 2^R. The Montgomery domain allows for efficient reductions based on multiplication only. But prior to computation, all input values must be transformed into the Montgomery domain (and converted back after the result has been computed), which adds some additional complexity for pre- and postcomputation steps. As an advantage of this method, we can save on costly reductions and replace them with divisions by 2 (bit shifts). Given two factors X, Y in Montgomery representation, i.e., X̂ = X · 2^R mod p and Ŷ = Y · 2^R mod p, a standard multiplication would compute Ẑ = X̂ · Ŷ = X·Y · 2^{2R}. Note that the result of this computation is neither in the Montgomery nor in the standard domain and requires a correction. This is taken into account by a specialized Montgomery multiplication computing X · Y · 2^{−R} mod p instead of X · Y mod p. It is important to mention that the cost of the additional transformation steps becomes negligible as soon as several consecutive modular multiplications are involved, e.g., in the case of a modular exponentiation. The partial products are added successively from the least significant to the most significant bit. In each iteration, we determine whether the intermediate result is odd or even. For this purpose the least significant bit of the intermediate result is inspected and, in case this bit is equal to "1," the modulus is added to the intermediate sum. This guarantees that the sum is always even. At the end of each iteration, the intermediate result is divided by 2, which prevents the intermediate results from growing
in the size of intermediate results. Algorithm 2 describes the Montgomery modular multiplication. Algorithm 2 Montgomery modular multiplication Input: X, Y < p < 2n , with 2k−1 < p < 2n and p = 2t + 1, with t ∈ N. Output: u = X · Y · 2−k mod p. k: number of bit & in X , xi : i th bit of X 1: u = 0; 2: for i = 0; i < n; i + + do 3: u = u + xi · Y 4: if u 0 = 1 then 5: u = u + p; 6: end if 7: u = u div 2; 8: end for 9: if u ≥ p then 10: u = u − p; 11: end if
The algorithm requires two additions per loop iteration. By introducing a redundant representation, it is possible to modify the algorithm to build a very efficient Montgomery multiplication architecture based on CSA adders. This architecture is shown in Fig. 1.2. The Montgomery architecture shown processes one bit of X per clock cycle and hence has a latency of n clock cycles, where n is the number of bits of an operand. Remember that CSA adders require 3 XOR, 2 AND, and 1 OR gate per adder cell and are thus rather expensive in terms of hardware. The signal propagation is
Fig. 1.2 Montgomery modular multiplication with one CSA
mainly determined by the CSA adder which can be implemented with a latency of t = 2 XOR gates. For a detailed comparison of efficient hardware architectures implementing Montgomery multiplication and interleaved multiplication we refer the reader to [2].
1.2.3 Faster Reduction in F_p

Although the performance of the reduction step can be significantly improved with a Montgomery or interleaved multiplication architecture, a significant amount of time and hardware is still required to reduce intermediate integer results to values in F_p. For this purpose, special primes have been proposed by Solinas [42], allowing for a reduction scheme based on additions and subtractions only. This approach was later standardized by NIST [36], e.g., for use with ECC over prime and binary extension fields. Special primes p_l with fixed bit lengths l ∈ {192, 224, 256, 384, 521} are part of the standard, of which p_224 and p_256 are probably the most relevant sizes for implementations in the coming decades. In the following, we exemplarily highlight p_192 = 2^192 − 2^64 − 1, since it has the most elementary reduction scheme. Consider a full-length multiplication of two 192-bit integers A, B ∈ F_p192 resulting in a product C. Let

C = c_5 · 2^320 + c_4 · 2^256 + c_3 · 2^192 + c_2 · 2^128 + c_1 · 2^64 + c_0     (1.1)
be the 64-bit limb representation of C, with a maximum bit length of log_2(C) ≤ 384. We can then reduce the higher powers of 2 in Eq. (1.1) using the congruences

2^192 ≡ 2^64 + 1 mod p
2^256 ≡ 2^128 + 2^64 mod p
2^320 ≡ 2^128 + 2^64 + 1 mod p

Next, we can rewrite C as

C ≡ c_5 (2^128 + 2^64 + 1) + c_4 (2^128 + 2^64) + c_3 (2^64 + 1) + c_2 · 2^128 + c_1 · 2^64 + c_0 mod p,

whose terms can be recombined into a few additions, as shown in Algorithm 3. Obviously, the entire reduction for p_192 can be performed by three 192-bit additions and a final reduction to make sure that z = z_1 + z_2 + z_3 + z_4 is in [0, p − 1]. Note that, due to the structure of the addends, Step 2 needs to operate on a number 0 ≤ z < 3·p_192, where up to two subtractions of p_192 might be required to lift z back into F_p192. Similarly to p_192, this scheme can be applied to all other standardized generalized Mersenne primes as specified in [36]. In the following, we present the reduction algorithms for p_224 and p_256 due to their considerable practical relevance.
Algorithm 3 NIST reduction with p_192 = 2^192 − 2^64 − 1
Input: Double-sized integer c = (c_5, ..., c_2, c_1, c_0) in base 2^64 with 0 ≤ c ≤ p_192^2
Output: Single-sized integer c mod p_192
1: Concatenate the c_i into the following 192-bit integers z_j:
   z_1 = (c_2, c_1, c_0), z_2 = (0, c_3, c_3), z_3 = (c_4, c_4, 0), z_4 = (c_5, c_5, c_5)
2: Compute (z_1 + z_2 + z_3 + z_4 mod p_192)
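A software model of Algorithm 3, shown below as a Python sketch, makes the limb manipulation explicit; the final while loop stands in for the small number of conditional subtractions discussed above.

```python
# Sketch of Algorithm 3: fast reduction modulo the generalized Mersenne
# prime p192 = 2^192 - 2^64 - 1, working on 64-bit limbs c0..c5.
P192 = 2**192 - 2**64 - 1

def reduce_p192(c):
    """Reduce a product c <= p192^2 using only additions and a few
    final conditional subtractions (no division)."""
    c0, c1, c2, c3, c4, c5 = [(c >> (64 * i)) & (2**64 - 1) for i in range(6)]
    z1 = c0 | (c1 << 64) | (c2 << 128)            # (c2, c1, c0)
    z2 = c3 | (c3 << 64)                          # (0,  c3, c3)
    z3 = (c4 << 64) | (c4 << 128)                 # (c4, c4, 0 )
    z4 = c5 | (c5 << 64) | (c5 << 128)            # (c5, c5, c5)
    z = z1 + z2 + z3 + z4
    while z >= P192:                              # at most a few subtractions
        z -= P192
    return z

import random
a, b = random.randrange(P192), random.randrange(P192)
assert reduce_p192(a * b) == (a * b) % P192
```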
According to Algorithm 4, the modular reduction for p_224 can be performed with two 224-bit subtractions and two additions. These four consecutive operations can lead to a potential over- or underflow in Step 2 of Algorithm 4, which needs to be estimated in advance. With Z = z_1 + z_2 + z_3 − z_4 − z_5, we can determine the bounds −p < Z < p, reducing the number of final correction steps to a single addition or subtraction to compute Z mod p_224.

Algorithm 4 NIST reduction with p_224 = 2^224 − 2^96 + 1
Input: Double-sized integer c = (c_13, ..., c_2, c_1, c_0) in base 2^32 with 0 ≤ c ≤ p_224^2
Output: Single-sized integer c mod p_224
1: Concatenate the c_i into the following 224-bit integers z_j:
   z_1 = (c_6, c_5, c_4, c_3, c_2, c_1, c_0), z_2 = (c_10, c_9, c_8, c_7, 0, 0, 0), z_3 = (0, c_13, c_12, c_11, 0, 0, 0), z_4 = (0, 0, 0, 0, c_13, c_12, c_11), z_5 = (c_13, c_12, c_11, c_10, c_9, c_8, c_7)
2: Compute (z_1 + z_2 + z_3 − z_4 − z_5 mod p_224)
Algorithm 5 presents the modular reduction for p_256, requiring two doublings, four 256-bit subtractions, and four 256-bit additions. Based on the computation Z = z_1 + 2z_2 + 2z_3 + z_4 + z_5 − z_6 − z_7 − z_8 − z_9, the result Z can range over −3p < Z < 4p, which requires a significantly more costly over- and underflow handling of Z mod p_256 in hardware than in the case of p_224. Since only chains of n-bit additions and subtractions (and a small additional implementation overhead for the final reduction according to the potential overflow of Z) are required, this reduction scheme is significantly faster than conventional reduction algorithms. This makes such primes a popular choice for ECC implementations.
1.2.4 Inversion in F_p

Field inversion is usually the most expensive operation in the multiplicative group of a finite field F_p. This calls for methods to avoid this type of operation as often as possible; e.g., in ECC computations, a projective coordinate system
Algorithm 5 NIST reduction with p_256 = 2^256 − 2^224 + 2^192 + 2^96 − 1
Input: Double-sized integer c = (c_15, ..., c_2, c_1, c_0) in base 2^32 with 0 ≤ c ≤ p_256^2
Output: Single-sized integer c mod p_256
1: Concatenate the c_i into the following 256-bit integers z_j:
   z_1 = (c_7, c_6, c_5, c_4, c_3, c_2, c_1, c_0), z_2 = (c_15, c_14, c_13, c_12, c_11, 0, 0, 0), z_3 = (0, c_15, c_14, c_13, c_12, 0, 0, 0), z_4 = (c_15, c_14, 0, 0, 0, c_10, c_9, c_8), z_5 = (c_8, c_13, c_15, c_14, c_13, c_11, c_10, c_9), z_6 = (c_10, c_8, 0, 0, 0, c_13, c_12, c_11), z_7 = (c_11, c_9, 0, 0, c_15, c_14, c_13, c_12), z_8 = (c_12, 0, c_10, c_9, c_8, c_15, c_14, c_13), z_9 = (c_13, 0, c_11, c_10, c_9, 0, c_15, c_14)
2: Compute (z_1 + 2z_2 + 2z_3 + z_4 + z_5 − z_6 − z_7 − z_8 − z_9 mod p_256)
can be used to replace almost all field inversions by trading them for a few multiplications. In general, there are two ways to compute the inverse X^{-1} of a given element X ∈ F_p: the extended greatest common divisor (gcd) algorithm [31] or, in case of constrained hardware area, an existing exponentiation circuit computing X^{-1} via Fermat's little theorem as X^{-1} ≡ X^{p−2} mod p [26]. Because we have already introduced the Montgomery domain with the modular multiplication, we present an inversion method that can handle Montgomery-transformed input. Recall that an element X̂ = X · 2^R in Montgomery representation is inverted to X^{-1} · 2^{−R}. This can either be corrected using another Montgomery multiplication or by adapting the inversion circuit accordingly. Kaliski has defined a sequence of two algorithms for inverting elements: the almost Montgomery inverse algorithm computes for an element X a biased inverse X^{-1} · 2^z, where z is a positive but variable value. The second phase is used to correct the bias and restore the Montgomery representation. In the following, we explain the details of the Kaliski inversion [10] without the final correction step (which is a repetition of bit shifts and additions of p until the desired factor of 2^R is restored). The algorithm itself is expensive to implement in hardware, since each iteration requires two simultaneous n-bit subtractions and one n-bit addition to achieve minimal latency. Figure 1.3 depicts the schematic of the almost Montgomery inverse algorithm, consisting of three CLA adders, four n-bit registers, and corresponding multiplexers.
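Both inversion routes mentioned above are easy to prototype in software; the Python sketch below contrasts the Fermat-based inversion x^{p−2} mod p with an iterative extended-gcd inversion (toy parameters, not a hardware description).

```python
# Two ways to invert x in F_p, as mentioned above: Fermat's little theorem
# (reusing an exponentiation unit) and the extended Euclidean algorithm.
def inv_fermat(x, p):
    return pow(x, p - 2, p)              # x^(p-2) mod p

def inv_xgcd(x, p):
    # Iterative extended gcd; returns x^(-1) mod p.
    r0, r1, t0, t1 = p, x, 0, 1
    while r1 != 0:
        q = r0 // r1
        r0, r1 = r1, r0 - q * r1
        t0, t1 = t1, t0 - q * t1
    return t0 % p

p, x = 101, 37
assert inv_fermat(x, p) == inv_xgcd(x, p) == pow(x, -1, p)
```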
1.3 Crypto Building Blocks for Fields F_2^m

Although prime fields have more relevance in common cryptography, binary extension fields F_2^m are often selected for hardware implementations [6] because their arithmetic involves no carries. Carry-free arithmetic not only simplifies the general architecture but also reduces area and avoids long signal propagation paths; e.g., addition and subtraction
Algorithm 6 Almost Montgomery inverse
Input: X ∈ F_p and p, or X · 2^R ∈ F_p and p
Output: Intermediate values r and k, where r = X^{-1} · 2^k mod p and n ≤ k ≤ 2n
1: u ← p, v ← X, r ← 0, s ← 1
2: k ← 0
3: while v > 0 do
4:    if u is even then
5:       u ← u/2, s ← 2s
6:    else if v is even then
7:       v ← v/2, r ← 2r
8:    else if u > v then
9:       u ← (u − v)/2, r ← r + s, s ← 2s
10:   else
11:      v ← (v − u)/2, s ← r + s, r ← 2r
12:   end if
13:   k ← k + 1
14: end while
15: if r ≥ p then
16:   r ← r − p   {make sure that r is within its boundaries}
17: end if
18: return r ← p − r
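The following Python sketch models phase 1 of Kaliski's method as given in Algorithm 6; the assertion checks the defining property r = X^{-1}·2^k mod p, and the correction phase that removes the 2^k factor is omitted, as in the text.

```python
def almost_montgomery_inverse(x, p):
    """Phase 1 of Kaliski's method (Algorithm 6): returns (r, k) with
    r = x^(-1) * 2^k mod p; a correction phase (repeated halving mod p)
    would remove the 2^k factor afterwards."""
    u, v, r, s, k = p, x, 0, 1, 0
    while v > 0:
        if u % 2 == 0:
            u, s = u // 2, 2 * s
        elif v % 2 == 0:
            v, r = v // 2, 2 * r
        elif u > v:
            u, r, s = (u - v) // 2, r + s, 2 * s
        else:
            v, s, r = (v - u) // 2, r + s, 2 * r
        k += 1
    if r >= p:
        r -= p
    return p - r, k

p, x = 101, 37
r, k = almost_montgomery_inverse(x, p)
assert (r * x) % p == pow(2, k, p)      # r = x^(-1) * 2^k mod p
```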
Fig. 1.3 Almost Montgomery inverse algorithm in hardware using CLA adders
in F_2^m can be implemented as a simple XOR operation, allowing each coefficient to be computed individually as in a vector space. As already mentioned, the field F_2^m is generated by an irreducible polynomial F(x) = x^m + G(x) = x^m + Σ_{i=0}^{m−1} g_i x^i over F_2 of degree m. We assume α to be a root of F(x); thus, for X, Y, Z ∈ F_2^m, we write X = Σ_{i=0}^{m−1} x_i α^i, Y = Σ_{i=0}^{m−1} y_i α^i, Z = Σ_{i=0}^{m−1} z_i α^i, with bit coefficients x_i, y_i, z_i ∈ F_2. Note that by assumption F(α) = 0, since α is a root of F(x). Therefore,

α^m = G(α) = Σ_{i=0}^{m−1} g_i α^i     (1.2)
provides an easy way to perform modulo reduction, whenever we encounter powers of α greater than m − 1 (cf. Section 1.1). For hardware implementations of more complex operations like multiplication and inversion, trinomial and pentanomial reduction polynomials are chosen as they enable a very efficient implementation with only a few gates. In the following, we present efficient architectures for multiplier and squarer implementations for binary fields in hardware.
1.3.1 Multiplication in F_2^m

Multiplication of two elements X, Y ∈ F_2^m with X(α) = Σ_{i=0}^{m−1} x_i α^i and Y(α) = Σ_{i=0}^{m−1} y_i α^i is performed by computing

Z(α) = Σ_{i=0}^{m−1} z_i α^i ≡ X(α) · Y(α) mod F(α)
where the multiplication is a polynomial multiplication and all α^t with t ≥ m are reduced using Eq. (1.2), α being a root of the field polynomial. We discuss bit-level multiplier architectures for F_2^m in the following section.

1.3.1.1 Bit Multipliers in F_2^m

The canonical algorithm for field multiplication in binary fields is the shift-and-add method [23] with the reduction step interleaved, shown as Algorithm 7. Note that, due to their independence, the computations of y_i·X and Z·α mod F(α) can be performed in parallel in Step 3 of Algorithm 7. However, the value Z_i of the current iteration depends both on the value Z_{i−1} of the previous iteration and on the currently computed value y_i·X. This dependency has the effect of giving the MSB multiplier a longer critical path than the least significant bit (LSB) multiplier described later. For hardware, the shift-and-add method is suitable when area is constrained. If the bits of Y are processed in reverse order, i.e., from most significant to least significant bit (as in
Algorithm 7 Shift-and-add most significant bit (MSB) first F_2^m multiplication
Input: X = Σ_{i=0}^{m−1} x_i α^i, Y = Σ_{i=0}^{m−1} y_i α^i, where x_i, y_i ∈ F_2
Output: Z ≡ X · Y mod F(α) = Σ_{i=0}^{m−1} z_i α^i, where z_i ∈ F_2
1: Z ← 0
2: for i = m − 1 downto 0 do
3:    Z ← Z · α mod F(α) + y_i · X
4: end for
5: Return (Z)
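A bit-level software model of Algorithm 7 is given below as a Python sketch; it assumes the field F_2^163 with F(x) = x^163 + x^7 + x^6 + x^3 + 1 (the polynomial used in Fig. 1.4) and holds field elements as Python integers.

```python
# Software model of Algorithm 7 for F_2^163 with the pentanomial
# F(x) = x^163 + x^7 + x^6 + x^3 + 1; elements are plain Python integers.
M = 163
F_POLY = (1 << 163) | (1 << 7) | (1 << 6) | (1 << 3) | 1

def f2m_mul_msb(x, y):
    z = 0
    for i in range(M - 1, -1, -1):    # scan y from the MSB down (MSB first)
        z <<= 1                       # Z <- Z * alpha ...
        if (z >> M) & 1:              # ... with the reduction step interleaved
            z ^= F_POLY
        if (y >> i) & 1:
            z ^= x                    # Z <- Z + y_i * X
    return z

import random
x = random.getrandbits(M)
assert f2m_mul_msb(x, 1) == x         # multiplying by 1 returns x
```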
Algorithm 7), we call this implementation a most significant bit-serial (MSB) multiplier [43]. We have already emphasized that the reduction in modular multiplication is a step involving significant effort. Using specific reduction polynomials can help us reduce the overhead of lifting intermediate multiplication results back into F_2^m. In the following, we present bit-wise multipliers which incorporate an efficient reduction scheme in F_2^m. For an MSB multiplier, assume a quantity of the form Qα, where Q(α) = Σ_{i=0}^{m−1} q_i α^i ∈ F_2^m, is to be reduced mod F(α). Multiplying Q by α, we obtain

Qα = Σ_{i=0}^{m−1} q_i α^{i+1} = q_{m−1} α^m + Σ_{i=0}^{m−2} q_i α^{i+1}     (1.3)

With the property of the reduction polynomial from Eq. (1.2) at hand, we can substitute for α^m and rewrite the index of the second summation in Eq. (1.3). Qα mod F(α) can then be calculated as follows:

Qα mod F(α) = Σ_{i=0}^{m−1} (g_i q_{m−1}) α^i + Σ_{i=1}^{m−1} q_{i−1} α^i = g_0 q_{m−1} + Σ_{i=1}^{m−1} (q_{i−1} + g_i q_{m−1}) α^i
where all coefficient arithmetic is in F_2. As an example, we consider the structure of a 163-bit MSB multiplier shown in Fig. 1.4. In this multiplier, the operand X is placed onto the data bus X of the multiplier directly from the memory register location. The individual bits y_i are sent from a memory location by implementing the memory registers as a cyclic shift register (with the output at the most significant bit). The interleaved reduction is performed on the accumulating result z_i, as in Step 3 of Algorithm 7. The taps that are fed back to z_i are determined directly by the reduction polynomial. Figure 1.4 shows an implementation for the reduction polynomial F(x) = x^163 + x^7 + x^6 + x^3 + 1, where the taps XOR the bit z_162 into z_7, z_6, z_3, and z_0. As mentioned before, the time and area requirements of F_2^m hardware are efficient and predictable. The complexity of the multiplier is n AND and (n + t − 1) XOR gates, where t = 3 for a trinomial reduction polynomial and t = 5 for a pentanomial reduction polynomial. The latency of the multiplier is n clock cycles. Furthermore, the maximum critical path is 2 Δ_XOR (independent of n), where Δ_XOR represents the delay of a single XOR gate.
Fig. 1.4 Most significant bit-serial (MSB) multiplier circuit for F_2^163
Similar to the presented MSB multiplier, a least significant bit-serial (LSB) multiplier can be implemented and the choice between the two depends on the design architecture and goals. In an LSB multiplier, the coefficients of Y are processed starting from the least significant bit y0 and continues with the remaining coefficients one at a time in ascending order. Thus, multiplication according to this scheme is performed in the following way:
Z ≡ X·Y mod F(α) ≡ y_0 X + y_1 (Xα mod F(α)) + y_2 (Xα^2 mod F(α)) + ··· + y_{m−1} (Xα^{m−1} mod F(α)) ≡ y_0 X + y_1 (Xα mod F(α)) + y_2 ((Xα)α mod F(α)) + ··· + y_{m−1} ((Xα^{m−2})α mod F(α))
1.3.1.2 Digit Multipliers in F_2^m

Compared to the previously mentioned methods for multiplication, digit multipliers provide trade-offs between speed, area, and power consumption [20]. This is achieved by processing several coefficients of Y simultaneously at the cost of more hardware area. The number of coefficients that are processed in parallel is defined to be the digit size D. Let the total number of digits in a polynomial of degree m − 1 be given by d = ⌈m/D⌉. Then we can rewrite the multiplier as Y = Σ_{i=0}^{d−1} Y_i α^{Di}, where

Y_i = Σ_{j=0}^{D−1} y_{Di+j} α^j,  0 ≤ i ≤ d − 1     (1.4)
and we assume that Y has been padded with zero coefficients such that y_i = 0 for m − 1 < i < d · D. The multiplication can then be performed as

Z ≡ X · Y mod F(α) = X Σ_{i=0}^{d−1} Y_i α^{Di} mod F(α)     (1.5)
The least significant digit-serial (LSD) multiplier is a generalization of the LSB multiplier in which the digits of Y are processed starting from the least significant to the most significant. Using Eq. (1.5), the product in this scheme can be computed as follows:

Z ≡ X · Y mod F(α) ≡ [Y_0 X + Y_1 (Xα^D mod F(α)) + Y_2 ((Xα^D)α^D mod F(α)) + ··· + Y_{d−1} ((Xα^{D(d−2)})α^D mod F(α))] mod F(α)

Algorithm 8 shows the details of the LSD multiplier. The multiplier core performs the accumulation Z ← Y_i X + Z (Step 3 of Algorithm 8). It consists of ANDing the multiplicand X with each bit of the current digit of the multiplier Y and XORing the result into an accumulator. As an optimization, Z can be initialized to a value I ∈ F_2^m in Algorithm 8. We then obtain as output the quantity X · Y + I mod F(α) at no additional (hardware or delay) cost. This operation, known as a multiply-accumulate operation, is very useful, e.g., in elliptic curve-based systems.

Algorithm 8 Least significant digit-serial (LSD) multiplier [43]
Input: X = Σ_{i=0}^{m−1} x_i α^i, where x_i ∈ F_2; Y = Σ_{i=0}^{⌈m/D⌉−1} Y_i α^{Di}, where Y_i as in (1.4)
Output: Z ≡ X · Y = Σ_{i=0}^{m−1} z_i α^i, where z_i ∈ F_2
1: Z ← 0
2: for i = 0 to ⌈m/D⌉ − 1 do
3:    Z ← Y_i X + Z
4:    X ← X α^D mod F(α)
5: end for
6: Return (Z mod F(α))
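The Python sketch below models Algorithm 8 for a small digit size; the digit size D = 4 and the generic reduce_f2m helper are illustrative choices for this sketch, not part of the original algorithm description.

```python
# Software model of Algorithm 8 (LSD multiplier). The digit size D = 4 and
# the generic reduce_f2m() helper are illustrative choices for this sketch.
M, D = 163, 4
F_POLY = (1 << 163) | (1 << 7) | (1 << 6) | (1 << 3) | 1

def reduce_f2m(a):
    # Reduce a polynomial of arbitrary degree modulo F(x).
    while a.bit_length() > M:
        a ^= F_POLY << (a.bit_length() - 1 - M)
    return a

def f2m_mul_lsd(x, y):
    z = 0
    for i in range((M + D - 1) // D):            # ceil(m/D) digits of Y
        digit = (y >> (D * i)) & ((1 << D) - 1)
        for j in range(D):                        # Step 3: Z <- Y_i * X + Z
            if (digit >> j) & 1:
                z ^= x << j
        x = reduce_f2m(x << D)                    # Step 4: X <- X * alpha^D mod F
    return reduce_f2m(z)                          # Step 6: final reduction

import random
a = random.getrandbits(M)
assert f2m_mul_lsd(a, 1) == a                     # multiplying by 1 returns a
```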
Considering the reduction, the operation X ← Xα^D mod F(α) from Step 4 of Algorithm 8 needs to be implemented efficiently. Trivially, the multiplicand X is shifted left by the digit size D, which is equivalent to multiplying by α^D. Then the result is reduced with the reduction polynomial: the higher D coefficients of the shifted multiplicand are ANDed with the reduction polynomial F(α) and the outcome is XORed into the result.
A final reduction circuit performs the operation X mod F(α), where X is of size m + D − 2. It is implemented similarly to the main reduction circuit but without any shifting. The area requirement for this circuit is (k + 1)(D − 1) AND gates and (k + 1)(D − 1) XOR gates. The critical path of the final reduction circuit is ΔAND + log 2 (D)ΔXOR which is less than that of the main reduction circuit.
1.3.2 Squaring in F_2^m

Polynomial basis squaring of X ∈ F_2^m is implemented by expanding X to double its bit length by interleaving 0 bits between the original bits of X and then reducing the double-length result, as shown here:

Z ≡ X^2 mod F(α) ≡ (x_{m−1} α^{2(m−1)} + x_{m−2} α^{2(m−2)} + ··· + x_1 α^2 + x_0) mod F(α)

In hardware, these two steps can be combined if the reduction polynomial has a small number of non-zero coefficients, as is the case for irreducible trinomials and pentanomials. The architecture of the squarer, implemented as a hardwired XOR circuit, is shown in Fig. 1.5. Here, squaring is efficiently implemented for F(x) = x^163 + x^7 + x^6 + x^3 + 1, generating the result in a single clock cycle without significant area requirements. It involves first the expansion by interleaving with zeros, which in hardware is just the routing of 0-valued lines onto the bus to expand it to 2n bits (where n is the bit length of the original operand). The reduction of this polynomial is inexpensive, first, due to the fact that the reduction
Fig. 1.5 Squaring circuit for the field F_{2^163}
The reduction of this polynomial is inexpensive, first, because the reduction polynomial used is a pentanomial, and second, because the polynomial being reduced is sparse, with no reduction required for n/2 of the higher-order bits. As an example of the efficient implementation of a binary field squarer unit in hardware, the XOR requirements and the maximum critical path (assuming an XOR tree implementation) for three different reduction polynomials used in elliptic curve cryptography are given in Table 1.3.

Table 1.3 F_{2^m} gate consumption and latency figures for a squaring unit

Reduction polynomial F(x)         XOR gates   Critical path
x^193 + x^15 + 1                  96 XOR      2 Δ_XOR
x^163 + x^7 + x^6 + x^3 + 1       246 XOR     3 Δ_XOR
x^131 + x^8 + x^3 + x^2 + 1       205 XOR     3 Δ_XOR
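To complement Table 1.3, the two steps of the squarer (zero interleaving followed by reduction) can be modeled in a few lines of Python. The sketch below uses the pentanomial F(x) = x^163 + x^7 + x^6 + x^3 + 1 of Fig. 1.5; it is a behavioral model, with a generic reduction loop standing in for the fixed XOR network of the hardwired circuit.

# Behavioral model of the squarer of Fig. 1.5: interleave zeros, then reduce (illustrative only).
M = 163
F = (1 << 163) | (1 << 7) | (1 << 6) | (1 << 3) | 1   # pentanomial F(x)

def interleave_zeros(x, m=M):
    # Squaring in F_2[x]: the coefficient of x^i moves to position 2i.
    out = 0
    for i in range(m):
        if (x >> i) & 1:
            out |= 1 << (2 * i)
    return out

def reduce_mod_f(z, m=M, f=F):
    for i in range(z.bit_length() - 1, m - 1, -1):     # cancel all terms of degree >= m
        if (z >> i) & 1:
            z ^= f << (i - m)
    return z

def square(x):
    return reduce_mod_f(interleave_zeros(x))

# (alpha + 1)^2 = alpha^2 + 1, since squaring is linear over F_2:
assert square(0b11) == 0b101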
1.3.3 Inversion in F2m using Itoh–Tsujii Algorithms In Section 1.2.4 we have discussed the utilization of gcd algorithms for computing the multiplicative inverse in a finite field F_p. Alternatively, Fermat's little theorem was mentioned to determine the inverse element at the cost of cn modular multiplications, where n is the bit length of an operand and c > 1 a constant dependent on the applied exponentiation algorithm.⁴ Originally introduced in [21], the Itoh and Tsujii algorithm (ITA) is a further exponentiation-based algorithm for inversion in finite fields which reduces the complexity of computing the inverse of a non-zero element in F_{2^m} to at most 2⌊log_2(m − 1)⌋ multiplications in F_{2^m} and m − 1 cyclic shifts. Next, we will show how to compute the multiplicative inverse of X ∈ F_{2^m}, X ≠ 0, according to the binary method for exponentiation. From Fermat's little theorem we know that X^{−1} ≡ X^{2^m − 2}, which can be computed as

X^{2^m − 2} = X^2 · X^{2^2} · · · X^{2^{m−1}}
This requires m − 2 multiplications and m − 1 cyclic shifts. As we have seen in Section 1.3.2, squaring is a linear operation. In [21] Itoh and Tsujii proposed three algorithms. The first two algorithms describe addition chains for exponentiation-based inversion in fields F_{2^m}, while the third one describes a method based on subfield inversion. The first algorithm is only applicable to values of m such that m = 2^r + 1, for some positive r, and it is based

⁴ Exponentiation algorithms significantly influence the performance of cryptosystems like RSA, DHKE, and ElGamal. Please find further details on how to speed up exponentiation methods in [31, 16, 44].
on the observation that the exponent 2^m − 2 can be rewritten as (2^{m−1} − 1) · 2. Thus, if m = 2^r + 1, we can compute X^{−1} ≡ (X^{2^{2^r} − 1})^2. Furthermore, we can rewrite 2^{2^r} − 1 as

2^{2^r} − 1 = (2^{2^{r−1}} − 1) 2^{2^{r−1}} + (2^{2^{r−1}} − 1)    (1.6)
Equation (1.6) and the previous discussion result in Algorithm 9. Note that Algorithm 9 performs r = log_2(m − 1) iterations. In every iteration, one multiplication and 2^i cyclic shifts, for 0 ≤ i < r, are performed, which leads to an overall complexity of log_2(m − 1) multiplications and m − 1 cyclic shifts.

Algorithm 9 Multiplicative Inverse Computation in F2m with m = 2^r + 1 [21]
Input: X ∈ F_{2^m}, X ≠ 0, m = 2^r + 1
Output: Z = X^{−1}
Z ← X
for i = 0 to r − 1 do
  Y ← Z^{2^{2^i}}   {cyclic shifts by 2^i}
  Z ← Z · Y
end for
Z ← Z^2
Return (Z)
This is again an improvement over prime field algorithms for inversion. With a closer look at the involved basic operations, it is clear that they can be implemented efficiently using the techniques that we already presented in Sections 1.3.1 and 1.3.2.
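Algorithm 9 translates almost line by line into software. The sketch below uses the illustrative field F_2^17 (so m = 2^4 + 1 and r = 4) with the trinomial x^17 + x^3 + 1, and realizes the 2^i "cyclic shifts" as repeated squarings, which is the polynomial-basis equivalent; the field and the test value are assumptions made only for this example.

# Sketch of Algorithm 9 (Itoh-Tsujii inversion) in the example field F_2^17, m = 2^4 + 1.
M, R = 17, 4
F = (1 << 17) | (1 << 3) | 1              # x^17 + x^3 + 1

def field_mul(a, b, m=M, f=F):
    # Schoolbook polynomial multiplication followed by reduction modulo F.
    z = 0
    for i in range(b.bit_length()):
        if (b >> i) & 1:
            z ^= a << i
    for i in range(z.bit_length() - 1, m - 1, -1):
        if (z >> i) & 1:
            z ^= f << (i - m)
    return z

def itoh_tsujii_inverse(x):
    # X^{-1} = (X^{2^{2^r} - 1})^2, computed with r multiplications.
    z = x
    for i in range(R):
        y = z
        for _ in range(1 << i):           # 2^i squarings (cyclic shifts in a normal basis)
            y = field_mul(y, y)
        z = field_mul(z, y)               # Z <- Z * Y
    return field_mul(z, z)                # final squaring

x = 0b1011011
assert field_mul(x, itoh_tsujii_inverse(x)) == 1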
1.4 Summary In this chapter, we presented a survey of finite field architectures that are suitable for hardware implementations of popular cryptographic systems. The hardware architectures for addition/subtraction, multiplication, and inversion were presented for both families of finite fields popularly used in cryptography: binary extension fields and prime fields. Furthermore, we have highlighted selected optimizations for reduction schemes in both prime and binary fields, e.g., using generalized Mersenne primes or trinomial and pentanomial reduction polynomials. Further information on implementations of public-key cryptosystems can also be found in Chapter 4.
References

1. FIPS 186-2: Digital Signature Standard (DSS). 186-2, February 2000. Available for download at http://csrc.nist.gov/encryption.
2. D. N. Amanor, C. Paar, J. Pelzl, V. Bunimov, and M. Schimmler. Efficient Hardware Architectures for Modular Multiplication on FPGAs. In 2005 International Conference on Field Programmable Logic and Applications (FPL), Tampere, Finland, pages 539–542. IEEE Circuits and Systems Society, August 2005.
3. D. V. Bailey and C. Paar. Optimal Extension Fields for Fast Arithmetic in Public-Key Algorithms. In H. Krawczyk, editor, Advances in Cryptology — CRYPTO '98, volume LNCS 1462, pages 472–485, Springer-Verlag, Berlin, 1998.
4. D. V. Bailey and C. Paar. Efficient Arithmetic in Finite Field Extensions with Application in Elliptic Curve Cryptography. Journal of Cryptology, 14(3):153–176, 2001.
5. P. Barrett. Implementing the Rivest, Shamir and Adleman public-key encryption algorithm on a standard digital signal processor. In A. Odlyzko, editor, Advances in Cryptology — CRYPTO '86, volume 263 of LNCS, pages 311–323. Springer-Verlag, Berlin, 1987.
6. L. Batina, S. B. Ors, B. Preneel, and J. Vandewalle. Hardware architectures for public key cryptography. Integration, the VLSI Journal, 34(6):1–64, 2003.
7. G. Blakley. A computer algorithm for calculating the product A · B modulo M. IEEE Transactions on Computers, C-32(5):497–500, May 1983.
8. D. Boneh and M. Franklin. Identity-Based Encryption from the Weil Pairing. In J. Kilian, editor, Advances in Cryptology — CRYPTO 2001, volume LNCS 2139, pages 213–229. Springer-Verlag, Berlin, 2001.
9. V. Bunimov and M. Schimmler. Area and Time Efficient Modular Multiplication of Large Integers. In IEEE 14th International Conference on Application-specific Systems, Architectures and Processors, June 2003.
10. A. Daly, L. Marnaney, and E. Popovici. Fast Modular Inversion in the Montgomery Domain on Reconfigurable Logic. Technical report, University College Cork, Cork, Ireland, 2004.
11. W. Diffie. Subject: Authenticity of Non-secret Encryption documents. World Wide Web, October 6, 1999. Email message sent to John Young. Available at http://cryptome.org/ukpk-diffie.htm.
12. W. Diffie and M. E. Hellman. New directions in cryptography. IEEE Transactions on Information Theory, IT-22(6):644–654, November 1976.
13. T. ElGamal. A public key cryptosystem and a signature scheme based on discrete logarithms. IEEE Transactions on Information Theory, 31:469–472, 1985.
14. J. H. Ellis. The Story of Non-secret Encryption. Available at http://jya.com/ellisdoc.htm, December 16th, 1997.
15. I. E. T. Force. The Kerberos Network Authentication Service (V5). RFC 4120, July 2005.
16. D. M. Gordon. A survey of fast exponentiation methods. Journal of Algorithms, 27:129–146, 1998.
17. J. Guajardo, T. Güneysu, S. S. Kumar, C. Paar, and J. Pelzl. Efficient hardware implementation of finite fields with applications to cryptography. Acta Applicandae Mathematicae, 93:75–118, 2006.
18. J. Guajardo and C. Paar. Efficient Algorithms for Elliptic Curve Cryptosystems. In B. Kaliski, Jr., editor, Advances in Cryptology — CRYPTO '97, volume 1294, pages 342–356, Springer-Verlag, Berlin, August 1997.
19. J. Hoffstein, D. Lieman, J. Pipher, and J. H. Silverman. NTRU: A Public Key Cryptosystem. Technical report, Aug. 11, 1999.
20. K. Hwang. Computer Arithmetic: Principles, Architecture and Design. John Wiley & Sons, Inc., New York, 1979.
21. T. Itoh and S. Tsujii. A fast algorithm for computing multiplicative inverses in GF(2^m) using normal bases. Information and Computation, 78:171–177, 1988.
22. D. Knuth. The Art of Computer Programming, Seminumerical Algorithms, volume 2. Addison-Wesley, Reading, MA, November 1971. 2nd printing.
23. D. E. Knuth. The Art of Computer Programming, Vol. 2: Seminumerical Algorithms, volume 2. Second edition, Addison-Wesley, Reading, MA, 1973.
24. N. Koblitz. Elliptic curve cryptosystems. Mathematics of Computation, 48(177):203–209, January 1987.
25. N. Koblitz. Hyperelliptic cryptosystems. Journal of Cryptology, 1(3):129–150, 1989.
26. N. Koblitz. A Course in Number Theory and Cryptography. Springer-Verlag, New York, 1994.
27. N. Koblitz. An Elliptic Curve Implementation of the Finite Field Digital Signature Algorithm. In H. Krawczyk, editor, Advances in Cryptology — CRYPTO '98, volume LNCS 1462, pages 327–337. Springer-Verlag, Berlin, 1998.
28. Ç. K. Koç, T. Acar, and B. S. Kaliski. Analyzing and comparing Montgomery multiplication algorithms. IEEE Micro, 16(3):26–33, June 1996.
29. A. Lenstra and E. Verheul. Selecting Cryptographic Key Sizes. In H. Imai and Y. Zheng, editors, Practice and Theory in Public Key Cryptography — PKC 2000, volume 1751, pages 446–465, January 2000.
30. R. J. McEliece. A public-key cryptosystem based on algebraic coding theory. DSN Progress Report, pages 42–44, 1987.
31. A. J. Menezes, P. C. van Oorschot, and S. A. Vanstone. Handbook of Applied Cryptography. The CRC Press series on discrete mathematics and its applications. 1997.
32. R. C. Merkle. Secure communications over insecure channels. Communications of the ACM, 21(4):294–299, 1978.
33. P. Mihăilescu. Optimal Galois Field Bases Which Are Not Normal. Recent Results Session — FSE '97, 1997.
34. V. S. Miller. Use of Elliptic Curves in Cryptography. In H. C. Williams, editor, Advances in Cryptology — CRYPTO '85, volume 218, pages 417–426, August 1986.
35. P. Montgomery. Modular multiplication without trial division. Mathematics of Computation, 44(170):519–521, April 1985.
36. National Institute of Standards and Technology (NIST). Recommended Elliptic Curves for Federal Government Use, July 1999. csrc.nist.gov/csrc/fedstandards.html.
37. J. Pollard. Monte Carlo methods for index computation mod p. Mathematics of Computation, 32(143):918–924, July 1978.
38. R. L. Rivest, A. Shamir, and L. Adleman. A method for obtaining digital signatures and public-key cryptosystems. Communications of the ACM, 21(2):120–126, February 1978.
39. B. Schneier. Crypto-Gram Newsletter. World Wide Web, May 15, 1998. Available at http://www.schneier.com/crypto-gram-9805.html.
40. K. Sloan. Comments on a computer algorithm for calculating the product A · B modulo M. IEEE Transactions on Computers, C-34(3):290–292, March 1985.
41. N. Smart. Elliptic curve cryptosystems over small fields of odd characteristic. Journal of Cryptology, 12(2):141–151, Spring 1999.
42. J. Solinas. Generalized Mersenne Numbers. Technical Report CORR 99-39, Department of Combinatorics and Optimization, University of Waterloo, Canada, 1999.
43. L. Song and K. K. Parhi. Low energy digit-serial/parallel finite field multipliers. Journal of VLSI Signal Processing, 19(2):149–166, June 1998.
44. J. von zur Gathen and M. Nöcker. Exponentiation in Finite Fields: Theory and Practice. In T. Mora and H. Mattson, editors, Applied Algebra, Algebraic Algorithms and Error Correcting Codes — AAECC-12, volume LNCS 1255, pages 88–113. Springer-Verlag, 2000.
45. C. Walter. Logarithmic speed modular multiplication. Electronics Letters, 30(17):1397–1398, 1994.
Chapter 2
Introduction to Side-Channel Attacks François-Xavier Standaert
2.1 Introduction A cryptographic primitive can be considered from two points of view: on the one hand, it can be viewed as an abstract mathematical object or black box (i.e., a transformation, possibly parameterized by a key, turning some input into some output); on the other hand, this primitive will in fine have to be implemented in a program that will run on a given processor, in a given environment, and will therefore present specific characteristics. The first point of view is the one of “classical” cryptanalysis; the second one is the one of physical security. Physical attacks on cryptographic devices take advantage of implementation-specific characteristics to recover the secret parameters involved in the computation. They are therefore much less general – since they are specific to a given implementation – but often much more powerful than classical cryptanalysis, and they are considered very seriously by cryptographic device manufacturers. Such physical attacks are numerous and can be classified in many ways. The literature usually sorts them along two orthogonal axes: 1. Invasive vs. non-invasive: Invasive attacks require depackaging the chip to get direct access to its inside components; a typical example of this is the connection of a wire on a data bus to see the data transfers. A non-invasive attack only exploits externally available information (the emission of which is, however, often unintentional) such as running time or power consumption. 2. Active vs. passive: Active attacks try to tamper with the device's proper functioning; for example, fault-induction attacks will try to induce errors in the computation. By contrast, passive attacks will simply observe the device's behavior during its processing, without disturbing it.
F.-X. Standaert (B) UCL Crypto Group, Place du Levant 3, B-1348 Louvain-la-Neuve, Belgium e-mail:
[email protected] Postdoctoral researcher of the Belgian Fund for Scientific Research (FNRS).
The side-channel attacks we consider in this chapter are a class of physical attacks in which an adversary tries to exploit physical information leakages such as timing information [9], power consumption [10], or electromagnetic radiation [1]. Since they are non-invasive, passive and they can generally be performed using relatively cheap equipment, they pose a serious threat to the security of most cryptographic hardware devices. Such devices range from personal computers to small embedded devices such as smart cards and RFIDs (radio frequency identification devices). Their proliferation in a continuously larger spectrum of applications has turned the physical security and side-channel issue into a real, practical concern that we aim to introduce in this chapter. For this purpose, we start by covering the basics of side-channel attacks. We discuss the origin of unintended leakages in recent microelectronic technologies and describe how simple measurement setups can be used to recover and exploit these physical features. Then, we introduce some classical attacks: simple power analysis (SPA) and differential power analysis (DPA). In the second part of the chapter, we put forward the different steps of an actual side-channel attack through two illustrative examples. We take advantage of these examples to stress a number of practical concerns regarding the implementation of side-channel attacks and discuss their possible improvements. Finally, we list a number of countermeasures to reduce the impact of physical information leakages.
2.2 Basics of Side-Channel Attacks 2.2.1 Origin of the Leakages Side-channel attacks are closely related to the existence of physically observable phenomenons caused by the execution of computing tasks in present microelectronic devices. For example, microprocessors consume time and power to perform their assigned tasks. They also radiate an electromagnetic field, dissipate heat, and even make some noise [22]. As a matter of fact, there are plenty of information sources leaking from actual computers that can consequently be exploited by malicious adversaries. In this chapter, we focus on power consumption and electromagnetic radiation that are two frequently considered side-channels in practical attacks. Since a large part of present digital circuits is based on CMOS gates, this introduction also only focuses on this technology. As will be mentioned in Section 2.4, other types of logic circuits could be considered for sidechannel attacks, sometimes providing improved resistance compared with standard CMOS. Power consumption in CMOS devices. Static CMOS gates have three distinct dissipation sources [19]. The first one is due to the leakage currents in transistors. The second one is due to the so-called short-circuit currents: there exists a short period during the switching of a gate while NMOS and PMOS are conducting
Fig. 2.1 Charge vs. discharge of a CMOS inverter
simultaneously. Finally, the dynamic power consumption is due to the charge and discharge of the load capacitance C_L represented by the dotted paths in Fig. 2.1. The respective importance of these dissipation sources typically depends on technology scalings. But the dynamic power consumption is particularly relevant from a side-channel point of view since it determines a simple relationship between a device's internal data and its externally observable power consumption. It can be written as

P_dyn = C_L V_DD^2 P_{0→1} f    (2.1)
where P_{0→1} f is called the switching activity, P_{0→1} is the probability of a 0 → 1 transition, f is the work frequency of the device, and V_DD is the voltage of the power supply. In CMOS devices, when measuring the power consumption (either at the ground pin or at the power pin), the highest peak will appear during the charge of the capacitance (i.e., a 0 → 1 event). During the discharge, the only current we can measure is the short-circuit path current. This data-dependent power consumption is the origin of side-channel information leakages. EM radiation in CMOS devices. Just as the power consumption of CMOS devices is data-dependent, it can be shown that its electromagnetic radiation also is. From a theoretical point of view, electromagnetic leakages are usually explained from the Biot–Savart law:

dB = (μ I dl × r̂) / (4πr^2)    (2.2)
where μ is the magnetic permeability, I is the current carried on a conductor of infinitesimal length dl, r̂ is the unit vector specifying the direction from the current element to the field point, and r is the distance from the current element to the field point. Although such a simple equation does not describe the exact (complex) radiation of an integrated circuit, it already emphasizes two important facts: (1) the
field is data-dependent due to the dependence on the current intensity and (2) the field orientation depends on the current direction. This data-dependent radiation is again the origin of side-channel information leakages. In general, any physically observable phenomenon that can be related to the internal configuration or activity of a cryptographic device can be a source of useful information to a malicious adversary. Leakage models. From the previous physical facts, side-channel adversaries have derived a number of (more or less sophisticated) leakage models. They can be used both to simulate the attacks and to improve an attack's efficiency. For example, the Hamming distance model assumes that, when a value x_0 contained in a CMOS device switches into a value x_1, the actual side-channel leakages are correlated with the Hamming distance of these values, namely HD(x_0, x_1) = HW(x_0 ⊕ x_1). The Hamming weight model is even simpler and assumes that, when a value x_0 is computed in a device, the actual side-channel leakages are correlated with the Hamming weight of this value, namely HW(x_0). As will be emphasized in Section 2.4, good leakage models have a strong impact on the efficiency of a side-channel attack. The Hamming weight and distance models both assume that there are no differences between 0 → 1 and 1 → 0 events and that every bit in an implementation contributes identically to the overall power consumption. Improved models relax these assumptions, e.g., by considering different leakages for the 0 → 1 and 1 → 0 events [18], by assigning different weights to the leakage contributions of an implementation's different parts [23], or by considering advanced statistical tools to characterize a device's leakage [6].
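As a concrete illustration of these two models, the short Python sketch below computes Hamming weight and Hamming distance predictions for 8-bit values; these are exactly the quantities that will be correlated with measured traces in the attacks of the following sections.

# Hamming weight and Hamming distance leakage predictions for 8-bit values.
def hamming_weight(x):
    return bin(x & 0xFF).count("1")

def hamming_distance(x0, x1):
    # HD(x0, x1) = HW(x0 xor x1): the number of bits that flip in the transition x0 -> x1.
    return hamming_weight(x0 ^ x1)

# A register switching from 0x3A to 0xC5 flips all 8 bits:
assert hamming_distance(0x3A, 0xC5) == 8
assert hamming_weight(0xC5) == 4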
2.2.2 Measurement Setups As far as the practical implementation of a side-channel attack is concerned, the building of a good measurement setup is of primary importance. They aim to convert the physical features of an observable device into digitally exploitable data. Such setups are generally made of the following elements [12]:
– A target cryptographic device, e.g., a smart card, FPGA, or integrated circuit running some cryptographic primitive, e.g., a block cipher.
– If not embedded on-chip, an external power supply, clock generator, and any additional circuitry required for the device to run properly.
– A leakage probe. For example, power consumption can be monitored by inserting a small resistor within the supply chain of the target device. Electromagnetic radiation can be captured with simple handmade coils.
– An acquisition device, e.g., a digital oscilloscope with sufficient features (typically, 1 GS/s, 8 bits of resolution, etc.), connected to a computer for the statistical analysis of the side-channel traces.
Just as leakage models, measurement setups have a strong influence on the efficiency of side-channel attacks. The quality of a measurement setup is mainly quantified by the amount of noise in its traces. Noise is a central issue in side-channel attacks and, more generally, in any signal processing application. In our specific context, various types of noise are usually considered, including physical noise (i.e., produced by the transistors and their environment), measurement noise (i.e., caused by the sampling process and tools), model matching noise (i.e., meaning that the leakage model used to attack may not perfectly fit the real observations), or algorithmic noise (i.e., produced by parasitic computations in an implementation). All these disturbances similarly affect the efficiency of a side-channel attack and reduce the amount of information in the leakages.
2.2.3 Classical Attacks: SPA and DPA Beyond the previous classification of physical attacks (i.e., invasive vs. non-invasive, active vs. passive), the literature also classifies the attacks according to the statistical treatment applied to the leakage traces. For example, “simple” and “differential” attacks were introduced in the context of power analysis [10]. Simple power analysis (SPA) attempts to interpret the power consumption of a device and deduce information about its performed operations. This is nicely illustrated with the example in Fig. 2.2. It shows the power consumption trace of a device performing an AES (advanced encryption standard) encryption [17]. The figure clearly shows a pattern that is repeated 10 times and corresponds to the 10 rounds of the AES when implemented in its 128-bit version.
Fig. 2.2 SPA monitoring from a single AES encryption performed by a smart card
Of course, this information is not an attack in itself. Everybody knows that AES-128 has 10 rounds, and knowing that a device is performing an AES encryption does not expose its secrets at all. However, such a visual inspection of the leakage traces may be the preliminary step in a more powerful attack, e.g., by determining the parts of the traces that are relevant to the adversary. In addition, there are cases in which this sequence of operations can provide useful information, mainly when the instruction flow depends on the data. Modular exponentiation performed with a square and multiply algorithm is a good example. If the square operation is implemented differently than the multiply operation – a tempting choice, as this will allow
specific optimizations for the square operation, resulting in faster code – and provided this difference results in different consumption patterns, then the power trace of an exponentiation directly yields the (secret) exponent’s value. Generally speaking, all programs involving conditional branch operations depending on secret parameters are at risk. By contrast, differential power analysis (DPA) intends to take advantage of data dependencies in the power consumption patterns. It is again better illustrated with an example. Figure 2.3 shows power consumption curves that typically correspond to the simple Hamming weight or distance leakage models introduced in Section 2.2.1. These data dependencies exploited by powerful statistics lead to a more general class of (so-called differential) attacks that are detailed through an example in the next section.
Fig. 2.3 Illustration of Hamming weight or distance data dependencies in the power consumption traces of a smart card using an 8-bit data bus
2.3 An Exemplary Differential Attack Against the DES A side-channel attack against any cryptographic device typically involves a number of active steps for the adversary. In this section, we aim to illustrate these different steps with an exemplary attack against the DES (Data Encryption Standard) that is briefly described in Appendix 1. For simplicity, we follow the practice-oriented definition of a side-channel attack introduced in [24]. 1. Selection of the target algorithm and implementation. The adversary determines the algorithm (e.g., the DES) and a target platform (e.g., an ASIC, FPGA, or smart card) from which he aims to recover secret information. 2. Selection of the leakage source and measurement setup. The adversary determines the type of leakage he wants to exploit, e.g., power consumption, electro-
magnetic radiation, or a combination of both. This step includes the preparation of the measurement setup described in Section 2.2.2. 3. Selection of the target signal. Side-channel attacks are generally based on a divide-and-conquer strategy in which different parts of a secret key are recovered separately. Consequently, the adversary selects which part of the key is the target of his attack, e.g., the six key bits entering the first DES S-box S0. We denote this target part of the block cipher key as a key class s. 4. Selection of the device inputs. If allowed, the adversary selects the inputs that are to be fed to the target device, e.g., randomly. If not allowed, it is generally assumed that a side-channel adversary can monitor the plaintexts. 5. Derivation of internal values within the algorithm. This is the core of the divide-and-conquer strategy. For a number of (known) input plaintexts, the adversary predicts (key-dependent) internal values within the target device that are to be computed during the execution of the algorithm. For computational reasons, only values depending on a small part of the key are useful. For example, one could predict the 4 bits after the permutation in the first DES round for each of the 64 possible key values entering S0, as illustrated in the central table of Fig. 2.4. As a result of this values-derivation phase, the adversary has predicted internal values of the block cipher implementation for q plaintexts and each key class candidate s* (out of 64 possible ones), stored in vectors v_{s*}^q.
Fig. 2.4 Derivation of the internal values and leakage modeling within the DES
6. Modeling of the leakage. For the same set of key class candidates as during the derivation of the internal values, the adversary models a part or function of the actual target device's leakage. For example, assuming that the power consumption in CMOS devices depends on the switching activity occurring during a computation, the Hamming weight or distance models can be used to predict the leakage, as illustrated in the right table of Fig. 2.4. In this context, the models are directly derived from the internal values, e.g., M(s*, v_{s*}^q) = HW(v_{s*}^q). 7. Measurement of the leakage. Thanks to his measurement setup, the adversary monitors the leakage (e.g., the power consumption) of the target device.
As a consequence, he obtains a leakage vector l_q = [l_1, l_2, . . . , l_q] that contains q leakage traces l_i corresponding to the encryption of q different plaintexts. 8. Selection of the relevant leakage samples. Since the leakage traces obtained from an acquisition device may contain hundreds of thousands of samples, actual side-channel adversaries usually reduce the data dimensions to lower values. This may be done using simple techniques such as SPA or by using advanced statistical processing. In the example of Fig. 2.5, only the maximum value of the clock cycle corresponding to the DES permutation is extracted from the traces. As a result of this phase, the adversary obtains a reduced vector R(l_q).
Fig. 2.5 Selection of the relevant leakage samples thanks to a transform T
9. Statistical comparison. For each of the key class candidates, the adversary finally applies a statistic to compare the predicted leakages with the transformed measurements. If the attack is successful, it is expected that the model corresponding to the correct key candidate gives rise to the best comparison result. For example, in our previous illustrations, both the values-derivation vectors v_{s*}^q and the reduced traces R(l_i) have q elements. Therefore, if we store the hypothetical Hamming weight models in a vector m_{s*}^q = HW(v_{s*}^q), the empirical correlation coefficient can be used for comparison [5]:

corr(s*) = [ Σ_{i=1}^{q} (R(l_i) − Ê(R(l_q))) · (m_{s*}^i − Ê(m_{s*}^q)) ] / sqrt[ Σ_{i=1}^{q} (R(l_i) − Ê(R(l_q)))^2 · Σ_{i=1}^{q} (m_{s*}^i − Ê(m_{s*}^q))^2 ]    (2.3)

where Ê(·) denotes the empirical mean. In Fig. 2.6, such a correlation attack is applied to our leaking DES implementation and the coefficient is computed for an increasing number of observations. It clearly illustrates that the attack is successful after approximately 100 measured encryptions.
Fig. 2.6 Statistical comparison with the correlation coefficient
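To make the divide-and-conquer flow of steps 5–9 concrete, the following Python sketch runs a correlation attack on simulated traces. It deliberately targets a single 4-bit S-box of a generic cipher rather than the DES implementation measured above: the S-box, the Hamming-weight leakage, the Gaussian noise, and the use of numpy are all illustrative assumptions.

import numpy as np

# Correlation attack (Eq. 2.3) on simulated leakage; all parameters are illustrative.
rng = np.random.default_rng(0)
SBOX = np.array([0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8, 0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7])
HW = np.array([bin(v).count("1") for v in range(16)])

secret_key = 0x9
q = 500                                                  # number of observed encryptions
plaintexts = rng.integers(0, 16, q)

# Step 7 (simulated): leakage = Hamming weight of the S-box output plus Gaussian noise.
traces = HW[SBOX[plaintexts ^ secret_key]] + rng.normal(0, 0.5, q)

# Steps 5-6: predicted internal values and Hamming-weight models for every key class candidate.
models = np.array([HW[SBOX[plaintexts ^ k]] for k in range(16)], dtype=float)

# Step 9: empirical correlation between every model vector and the measured traces.
corr = np.array([np.corrcoef(m, traces)[0, 1] for m in models])
assert int(np.argmax(corr)) == secret_key                # the correct key class ranks first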
2.4 Improved Side-Channel Attacks The previous section described a typical side-channel attack against an unprotected implementation of the DES, based on simple statistical tools and leakage models. This section aims to put forward how such a simple attack can be improved. As a matter of fact, such improvements basically correspond to the improvement of any of the individual steps in the previous section. Specifically, the following ideas are generally considered in the literature: 1. Improving the measurement setup, by reducing any possible source of noise, better designing the side-channel probes, etc. This is a preliminary step to the development of any powerful side-channel attack. 2. Selecting the inputs adaptively, as suggested and analyzed in [11]. 3. Pre-processing the side-channel leakage traces, e.g., by averaging or filtering. 4. Improving the leakage models, e.g., by profiling and characterizing the target device or by gaining information about critical implementation details. 5. Taking advantage of multivariate statistics, either by using the so-called higher-order attacks [14] or by considering optimal strategies such as template attacks [6] or stochastic models [20] (which generally require characterizing the device leakage prior to the actual application of the attack). 6. Using various statistical tests: difference-of-means tests, correlation analysis, or Bayesian classification are the most frequently considered ones.
7. Combining various types of side-channel leakages, e.g., power and EM [2]. With this respect, it is interesting to see that different side-channels generally give rise to different types of information. As an illustration, we provide two exemplary leakage traces of the same leaking device in Appendix 2, corresponding, respectively, to the power and EM channels. They clearly illustrate that, e.g., the field orientation and therefore the current direction within the device can be obtained from actual EM measurements while the power leakages only provide information about the amplitude of this current. In practice, the Bayesian classification of key classes based on side-channel leakages exploiting the statistical profiling of a target device is usually denoted as a template attack [6]. It is particularly important both for theoretical and practical reasons. First from a theoretical point of view, it is usually assumed that such a side-channel attack is the most powerful from an information theoretic point of view. Consequently, it has important consequences in the security evaluation of a cryptographic device and when provable security issues are discussed [24]. But in practice, it also corresponds to a significantly different implementation context than the previously described differential attack. Indeed the construction of a statistical model for the side-channel leakages (i.e., templates) requires the profiling of a target device. In the worst case, this may involve the ability to change the keys within a device that is identical to the target. For these reasons, we now provide a second illustrative example of a side-channel attack, exploiting templates. We use the same steps as in the previous sections in order to put forward the specificities of such an adversarial context.
2.4.1 An Exemplary Profiled Attack Against the DES The main objective of a profiled attack is to take advantage of a better leakage model than, e.g., assuming Hamming weight dependencies. For this purpose, one generally starts by profiling or characterizing the device leakages with a statistical model. In practice, this involves an additional step in the attack. Preparation of the leakage model. Different approaches can be used for this purpose. The most investigated solution is to assume that the leakage samples R(l_i) were drawn from a Gaussian distribution:¹

N(R(l_i) | μ_s^i, σ_s^i) = (1 / (σ_s^i sqrt(2π))) · exp( −(R(l_i) − μ_s^i)^2 / (2 (σ_s^i)^2) )    (2.4)
1 We just consider the univariate case in this example. But the extension toward the multivariate case where several leakage samples are considered is straightforward. Note also that in practice, one has to decide what to characterize. For example, one can build templates for different key candidates or for different Hamming weights at the output of an S-box. The selection of operations and data to characterize is important from a practical point of view since it determines the computational cost of the attack (i.e., building more templates is more expensive).
in which the mean μ_s^i and standard deviation σ_s^i completely specify the noise associated to each key class s. In practice, these parameters are estimated thanks to sets of typically a few hundred to a few thousand traces. As a consequence, the adversary has an estimation of the probabilities Pr[s*|l_i] through the Gaussian distribution P̂r[R(l_i)|s*] = N(R(l_i)|μ̂_s^i, σ̂_s^i), where μ̂_s^i and σ̂_s^i, respectively, denote the sample mean and standard deviation for a given leakage sample. Once the leakage model has been characterized, the adversary follows essentially the same steps as during a classical differential attack, with only a few differences in steps 6 and 9 that we re-detail as follows: 6. Modeling of the leakage. Rather than using the Hamming weights of some internal (key-dependent) values within the device, the adversary uses the previously defined probabilistic model. That is, M(s*, R(l_i)) = P̂r[R(l_i)|s*]. 9. Statistical comparison. Finally, from the estimated conditional probabilities P̂r[R(l_i)|s*], the adversary applies Bayes' theorem and selects the key classes according to their likelihood: L(s*) = P̂r[s*|R(l_q)]. In Fig. 2.7, such a template attack is applied to our leaking DES implementation and the key likelihoods are computed for an increasing number of observations. It clearly illustrates that the attack is successful after approximately 50 measured encryptions.
Fig. 2.7 Statistical comparison with the correlation coefficient
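The profiling and classification phases of a template attack can be condensed in the same spirit. The sketch below builds univariate Gaussian templates for the Hamming-weight classes of a 4-bit S-box output on simulated profiling traces, then ranks key classes by their log-likelihood on attack traces; as before, the leakage function, the noise level, and the use of numpy are assumptions of the example, not properties of the measured device.

import numpy as np

# Univariate Gaussian templates (Eq. 2.4) on simulated leakage; illustrative parameters only.
rng = np.random.default_rng(1)
SBOX = np.array([0xE, 0x4, 0xD, 0x1, 0x2, 0xF, 0xB, 0x8, 0x3, 0xA, 0x6, 0xC, 0x5, 0x9, 0x0, 0x7])
HW = np.array([bin(v).count("1") for v in range(16)])

def leak(pt, key, sigma=0.5):
    return HW[SBOX[pt ^ key]] + rng.normal(0, sigma, len(pt))

# Profiling phase (device with a known key): one template (mean, std) per Hamming-weight class.
profiling_key = 0x3
prof_pt = rng.integers(0, 16, 5000)
prof_traces = leak(prof_pt, profiling_key)
prof_hw = HW[SBOX[prof_pt ^ profiling_key]]
templates = {h: (prof_traces[prof_hw == h].mean(), prof_traces[prof_hw == h].std())
             for h in range(5)}

def log_gauss(x, mu, sigma):
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

# Attack phase: accumulate the log-likelihood of the observed traces for every key class.
secret_key = 0xB
atk_pt = rng.integers(0, 16, 100)
atk_traces = leak(atk_pt, secret_key)
scores = []
for k in range(16):
    hws = HW[SBOX[atk_pt ^ k]]
    mu = np.array([templates[h][0] for h in hws])
    sd = np.array([templates[h][1] for h in hws])
    scores.append(log_gauss(atk_traces, mu, sd).sum())
assert int(np.argmax(scores)) == secret_key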
2.5 Countermeasures In this section, we finally describe possible countermeasures to prevent side-channel attacks and discuss the resulting security vs. efficiency trade-off. Some of these techniques are extensively described in the following chapter of this book. Countermeasures against side-channel attacks range among a large variety of solutions. However, in the present state of the art, no single technique allows to
provide perfect security. Protecting implementations against physical attacks consequently intends to make the attacks harder. In this context, the implementation cost of a countermeasure is of primary importance and must be evaluated with respect to the additional security obtained. The exhaustive list of all possible solutions to protect cryptographic devices from side-channel opponents would deserve a long survey in itself. In this section, we only suggest a few examples in order to illustrate that security can be added at different abstraction levels: 1. At the physical level, shields, conforming glues [3], physically unclonable functions [26], detectors, detachable power supplies [21], etc. can be used to improve the resistance of a device against physical attacks. 2. At the technological level, dynamic and differential logic styles (as an alternative to CMOS) have been proposed in various shapes (e.g., [25]) to decrease the data dependencies of the power consumption. 3. At the algorithmic level, time randomization [13], encryption of the buses [4], hiding (i.e., making the leakage constant), or masking (i.e., making the leakage dependent on some random value, e.g., in [8]) are the usual countermeasures; a small masking sketch is given after this list. 4. At all the previous levels, noise addition is the generic solution to decrease the amount of information in the side-channel leakages. 5. Countermeasures also exist at the protocol level, e.g., based on key updates.
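As an illustration of the masking idea mentioned in item 3, the sketch below applies first-order Boolean masking to a table look-up. It is a textbook-style construction written for this chapter, not one of the referenced countermeasures: the identity "S-box" is a placeholder, and the point is only that every intermediate value handled by the device is split into two shares, neither of which is correlated with the secret on its own.

import secrets

# First-order Boolean masking of an S-box look-up (illustrative sketch).
SBOX = list(range(256))                  # placeholder permutation; a real cipher's S-box goes here
KEY = 0x2A

def masked_sbox_lookup(x, key_byte):
    m_in = secrets.randbits(8)                       # fresh input mask
    m_out = secrets.randbits(8)                      # fresh output mask
    table = [0] * 256
    for v in range(256):                             # recompute a masked table T[v ^ m_in] = S[v] ^ m_out
        table[v ^ m_in] = SBOX[v] ^ m_out
    masked_x = (x ^ m_in) ^ key_byte                 # the plaintext is masked before the key addition
    masked_y = table[masked_x]                       # equals SBOX[x ^ key_byte] ^ m_out
    return masked_y, m_out                           # the two shares of the sensitive output

y_masked, mask = masked_sbox_lookup(0x5D, KEY)
assert y_masked ^ mask == SBOX[0x5D ^ KEY]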
2.6 Conclusions Side-channel attacks are an important class of cryptanalytic techniques. Although less generic than classical cryptanalysis, since they target a specific implementation rather than an abstract algorithm, they are generally much more powerful. Such attacks are applicable to most (if not all) present circuit technologies and have to be considered as a serious threat for the security of actual embedded devices. From an operational point of view, security against side-channel attacks can be obtained by the sound combination of various countermeasures. However, significant attention has to be paid to the fair evaluation of these countermeasures in order to properly assess the security of any cryptographic device and trade it with implementation efficiency [24]. Additionally, side-channel attacks are only a part of the physical reality and resisting them may induce weaknesses with respect to other issues. The development of a unified framework for the analysis of physical security concerns and possibly a theory of provable physical security is a long-term goal in cryptographic research, initiated in [7, 15, 27].
Appendix 1 The Data Encryption Standard : A Case Study In 1977, the DES algorithm [16] was adopted as a Federal Information Processing Standard (FIPS) for unclassified government communication. Although a new
Fig. 2.8 Data encryption standard
Advanced Encryption Standard was selected in October 2000 [17], DES is still widely used, particularly in the financial sector. DES encrypts 64-bit blocks with a 56-bit key and processes data with permutations, substitutions, and XOR operations. The plaintext is first permuted by a fixed permutation IP. Next the result is split into two 32-bit halves, denoted with L (left) and R (right), to which a round function is applied 16 times. The ciphertext is calculated by applying the inverse of the initial permutation IP to the result of the 16th round. The secret key is expanded by the key schedule algorithm to sixteen 48-bit round keys K_i, and in each round a 48-bit round key is XORed to the text. The key schedule consists of known bit permutations and shift operations. Therefore, recovering any round key bits directly compromises the secret key. The round function is represented in Fig. 2.8a and is easily described by

L_{i+1} = R_i
R_{i+1} = L_i ⊕ f(R_i, K_i)

where f is a non-linear function detailed in Fig. 2.8b: the R_i part is first expanded to 48 bits with the E box, by doubling some R_i bits. Then, a bitwise modulo-2 sum of the expanded R_i part and the 48-bit round key K_i is computed. The output of the XOR function is sent to eight non-linear S-boxes. Each of them has six input bits and four output bits. The resulting 32 bits are permuted by the bit permutation P. Finally, DES decryption consists of the encryption algorithm with the same round keys but in reversed order.
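The Feistel iteration just described, and the fact that decryption reuses the same network with the round keys in reverse order, can be captured in a few lines. The sketch below is purely structural: the placeholder f stands in for the expansion E, the key addition, the S-boxes, and the permutation P, and the initial and final permutations are omitted.

# Structural sketch of the DES Feistel iteration; f() is a placeholder, not the real DES round function.
def f(r32, k48):
    return (r32 * 0x9E3779B9 ^ k48) & 0xFFFFFFFF     # stand-in for E, key XOR, S-boxes, P

def feistel_encrypt(l, r, round_keys):
    for k in round_keys:
        l, r = r, l ^ f(r, k)                        # L_{i+1} = R_i ;  R_{i+1} = L_i xor f(R_i, K_i)
    return l, r

def feistel_decrypt(l, r, round_keys):
    # Same network with the round keys in reverse order; the halves are swapped at entry and exit.
    r, l = feistel_encrypt(r, l, list(reversed(round_keys)))
    return l, r

keys = list(range(16))
ct = feistel_encrypt(0x01234567, 0x89ABCDEF, keys)
assert feistel_decrypt(*ct, keys) == (0x01234567, 0x89ABCDEF)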
Appendix 2 Exemplary Power and EM Leakage Traces
Fig. 2.9 Exemplary power and EM leakage traces
References 1. D. Agrawal, B. Archambeault, J. Rao, P. Rohatgi, The EM Side-Channel(s), in the Proceedings of CHES 2002, LNCS, vol 2523, pp 29–45, Redwood City, CA, USA, August 2002. 2. D. Agrawal, J. Rao, P. Rohatgi, Multi-channel Attacks, in the Proceedings of CHES 2003, LNCS, vol 2779, pp 2–16, Cologne, Germany, Sept. 2003.
3. R. Anderson, M. Kuhn, Tamper Resistance – a Cautionary Note, in the proceedings of the USENIX Workshop on Electronic Commerce, pp 1–11, Oakland, CA, USA, November 1996. 4. E. Brier, H. Handschuh, C. Tymen, Fast Primitives for Internal Data Scrambling in Tamper Resistant Hardware, in the Proceedings of CHES 2001, LNCS, vol 2162, pp 16–27, Paris, France, May 2001, Springer-Verlag. 5. E. Brier, C. Clavier, F. Olivier, Correlation Power Analysis with a Leakage Model, in the Proceedings of CHES 2004, LNCS, vol 3156, pp 16–29, Boston, MA, USA, August 2004. 6. S. Chari, J. Rao, P. Rohatgi, Template Attacks, in the Proceedings of CHES 2002, LNCS, vol 2523, pp 13–28, CA, USA, August 2002. 7. R. Gennaro, A. Lysyanskaya, T. Malkin, S. Micali, T. Rabin, Algorithmic Tamper-Proof Security: Theoretical Foundations for Security Against Hardware Tampering, in the Proceedings of TCC 2004, LNCS, vol 2951, pp 258–277, Cambridge, MA, USA, February 2004. 8. L. Goubin, J. Patarin, DES and Differential Power Analysis, in the Proceedings of CHES 1999, LNCS, vol 1717, pp 158–172, Worcester, MA, USA, August 1999. 9. P. Kocher, Timing Attacks on Implementations of Diffie-Hellman, RSA, DSS and Other Systems, in the Proceedings of Crypto 1996, LNCS, vol 1109, pp 104–113, Santa Barbara, CA, USA, August 1996. 10. P. Kocher, J. Jaffe, B. Jun, Differential Power Analysis, in the Proceedings of Crypto 1999, LNCS, vol 1666, pp 398–412, Santa-Barbara, CA, USA, August 1999. 11. B. K¨opf, D. Basin, An Information Theoretic Model for Adaptive Side-Channel Attacks, CCS 2007, Alexandria, VA, USA, October 2007. 12. S. Mangard, E. Oswald, T. Popp, Power Analysis Attacks: Revealing the Secrets of Smart Cards, Chapter 3, Section 4, Springer, Berlin 2007. 13. D. May, H. Muller, N. Smart, Randomized Register Renaming to Foil DPA, in the Proceedings of CHES 2001, LNCS, vol 2162, pp 28–38, Springer-Verlag Paris, France, May 2001. 14. T.S. Messerges, Using Second-Order Power Analysis to Attack DPA Resistant Software., in the Proceedings of CHES 2000, LNCS, vol 2523, pp 238–251, Worcester, MA, USA, August 2000. 15. S. Micali, L. Reyzin, Physically Observable Cryptography, in the Proceedings of TCC 2004, LNCS, vol 2951, pp 278–296, Cambridge, MA, USA, February 2004. 16. National Bureau of Standards, FIPS 46, The Data Encryption Standard, Federal Information Processing Standard, NIST, U.S. Dept. of Commerce, 1977. 17. National Bureau of Standards, FIPS 197, Advanced Encryption Standard, Federal Information Processing Standard, NIST, U.S. Dept. of Commerce, 2001. 18. E. Peeters, F.-X. Standaert, J.-J. Quisquater, Power and Electromagnetic Analysis: Improved Models, Consequences and Comparisons, in Integration, the VLSI Journal, 40, 52–60, Spring 2007. 19. J. M. Rabaey, Digital Integrated Circuits, Prentice Hall International, Upper Saddle River, NJ 1996. 20. W. Schindler, K. Lemke, C. Paar, A Stochastic Model for Differential Side-Channel Cryptanalysis, in the Proceedings of CHES 2005, LNCS, vol 3659, pp 30–46, Edinburgh, Scotland, September 2005. 21. A. Shamir, Protecting Smart Cards from Passive Power Analysis with Detached Power Supplies, in the Proceedings of CHES 2000, LNCS, vol 1965, pp 238–251, Worcester, MA, USA, August 2000. 22. A. Shamir, E. Tromer, Acoustic cryptanalysis On nosy people and noisy machines, available from http://theory.csail.mit.edu/ tromer/acoustic/ 23. F.-X. Standaert, E. Peeters, F. Mac´e, J.-J. 
Quisquater, Updates on the Security of FPGAs Against Power Analysis Attacks, in the Proceedings of ARC 2006, LNCS, vol 3985, pp 335–346, Springer-Verlag, Delft, The Netherlands, March 2006. 24. F.-X. Standaert, T.G. Malkin, M. Yung, A Unified Framework for the Analysis of SideChannel Key Recovery Attacks, International Association of Cryptographic Research, Cryptology ePrint Archive, Report 2006/139.
25. K. Tiri, M. Akmal, I. Verbauwhede, A Dynamic and Differential CMOS Logic with Signal Independent Power Consumption to Withstand Differential Power Analysis on Smart Cards, in the Proceedings of ESSCIRC 2003. 26. P. Tuyls, G.J. Schrijen, B. Skoric, J. van Geloven, N. Verhaegh, R. Wolters, Read-Proof Hardware from Protective Coatings, in the Proceedings of CHES 2006, LNCS, vol 4249, pp 369– 383, Yokohama, Japan, October 2006. 27. UCL Crypto Group, Theoretical Models for Side-Channel Attacks, home page and related publications: http://www.dice.ucl.ac.be/ fstandae/tsca.
Part II
Cryptomodules and Arithmetic
Chapter 3
Secret Key Crypto Implementations Guido Marco Bertoni and Filippo Melzani
3.1 Introduction Digital devices are the driving force for new services and an increasing level of communication. In this scenario there are security threats for the end users, for the business model of device manufacturers, and for service providers. Cryptography is one of the tools necessary for answering the increasing demand for security, and symmetric key algorithms are the most used primitives for achieving privacy, data integrity, and authentication. This chapter presents the algorithm selected in 2001 as the Advanced Encryption Standard. This algorithm is the basis for implementing privacy in all new applications. Together with the algorithm, a set of modes of operation is specified in order to properly manage the information to be processed by the algorithm. Finally, an overview of the different techniques for software and hardware implementation is given.
3.2 Block Cipher and Stream Cipher The most used primitives in cryptography are symmetric algorithms. The term symmetric refers to the fact that the same key is used for both the encrypting and the decrypting functionality. Synonyms are secret key algorithm and private key algorithm. Symmetric key algorithms have been the first form of encryption algorithms. Nowadays there is a distinction between two families of symmetric key algorithms: block cipher algorithms and stream cipher algorithms. In both cases the specification of the algorithm is publicly available, and the security of the algorithm is due only to the secrecy of the key, not to the unknown details of the design.
G.M. Bertoni (B) STMicroelectronics, Centro Direzionale Colleoni, 20041 Agrate, Italy e-mail:
[email protected]
A block cipher algorithm encrypts a block of fixed length of input data into an encrypted block of the same length. Typical lengths of the block to process are 64 and 128 bits. The algorithm also requires the secret key as input. Usually the encryption time and the decryption time are comparable. In the field of digital communications the Data Encryption Standard has been the most used algorithm [29]. In the case of DES the key is composed of 56 bits, while the data block is 64 bits wide.

A stream cipher is generally based on the one-time pad structure. The one-time pad algorithm was invented by Vernam in 1917. The basic idea is to have a key as long as the plaintext and combine them through bit-wise XOR. Since the key should look like a random stream, the ciphertext gives no possibility to detect the frequency of the symbols and thus to decrypt it without knowledge of the key. The stream cipher is composed of a key stream generator, also called a pseudorandom number generator, and a bit-wise XOR for adding the key stream to the plaintext. The decryption functionality is executed by generating the same key stream (a small sketch of this construction is given at the end of this section). The most used stream cipher is the RC4 algorithm developed by Ron Rivest.

The development of encryption algorithms is based on open contributions such as universities or research centers, companies that patent algorithms or keep them as industrial secrets, and standardization bodies such as the National Institute of Standards and Technology (i.e., NIST). There are two main technical points of discussion on an encryption algorithm: the security aspects and the implementation ones. These topics are mainly discussed at two worldwide workshops: Fast Software Encryption and Cryptographic Hardware and Embedded Systems (FSE and CHES). These workshops are held yearly and are supported by the International Association for Cryptologic Research (IACR). Besides discussion in the research community, the NIST has played a fundamental role in the definition of standard algorithms for the US government, which were used in commercial applications as well. DES has been the first case: NIST (NBS at that time) wrote the standard and from then on it has been widely used, and it is still used in many commercial applications. Once the limits of DES had become clear, 3DES was indicated by NIST as a temporary alternative and then a new selection was prepared. NIST was radically new in the selection process for the replacement of DES: instead of giving a commitment to a company, it decided to open a public competition with public comments. This solution has to be considered a big success. The winner algorithm of the competition has been named Advanced Encryption Standard (AES). After the AES competition there have been two other projects on the analysis of symmetric algorithms. These projects were made just as evaluation and indication of the security status of algorithms, since NIST had already made the selection of one algorithm as replacement of DES/3DES. One of these was the European project called NESSIE [28], while the second was held in Japan and called CRYPTREC [10]. The NESSIE project, a 3-year effort with contributions from academia, evaluated block ciphers, stream ciphers, hash functions, and signature schemes. While these two projects are now closed, there is a third one in Europe for the analysis of stream ciphers, named eSTREAM [13].
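A minimal sketch of the key-stream construction described above follows. SHA-256 in counter mode is used purely as a stand-in key-stream generator, so the example illustrates the structure (key stream XORed onto the plaintext, decryption regenerating the same stream) rather than any particular stream cipher.

import hashlib

# Structure of a stream cipher: generate a key stream, XOR it onto the data (illustrative only).
def key_stream(key, nonce, length):
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def xor_stream(data, key, nonce):
    ks = key_stream(key, nonce, len(data))
    return bytes(d ^ k for d, k in zip(data, ks))

key, nonce = b"sixteen byte key", b"unique nonce"
ciphertext = xor_stream(b"attack at dawn", key, nonce)
assert xor_stream(ciphertext, key, nonce) == b"attack at dawn"   # decryption = same operation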
3.3 The Advanced Encryption Standard In 1997 the NIST decided to begin the process for selecting a new block cipher algorithm for unclassified government documents with the explicit aim of replacing DES. The selection process was very innovative compared to that adopted for the selection of DES many years before. In fact the DES development was made by IBM researchers under the invitation of the National Bureau of Standards (now NIST). For the AES instead, the complete selection process was public and well documented. The procedure and terms for submitting an algorithm were quite simple. Each candidate algorithm was required to be able to encrypt an input block of 128 bits to an output block of 128 bits and admit three different key sizes (i.e., 128 bits, 192 bits, and 256 bits). Moreover, the security level of the algorithm was required to be comparable with that of the other submitted candidate algorithms. Another requirement was that the algorithm should be efficiently implementable in both software and hardware platforms. If selected, the algorithm was also required to be royalty-free for the government uses. A first call for the algorithms was opened in January 1997; 15 candidate algorithms satisfied the initial submission requirements. In August 1999, 5 out of the 15 candidate algorithms were selected as finalists. They were MARS from IBM; RC6 from RSA; Rijndael, developed by Rijmen and Daemen from Belgium; Serpent by Biham et al.; and Twofish by Schneier et al. Three scientific conferences took place during the selection process receiving the names of AES1, AES2, and AES3 and held in 1998, 1999, and 2000, respectively. These conferences were similar to typical scientific conferences: authors could submit papers regarding the security level of a particular candidate algorithm(s) or its performance and implementation options when targeting different software and hardware platforms. The choice of the winner algorithm was particularly hard as all five finalist algorithms are still considered good cryptographic algorithms and no practical attacks have been found against any of them. One particular characteristic that made Rijndael the winner algorithm is likely to have been its suitability for efficient implementation in constrained environments, such as 8-bit smart cards and, more generally, in embedded systems. It is worth noticing that the scientific community also appreciated the selection of a European algorithm: many people in fact were convinced that a US federal agency would not select a foreign algorithm for encryption purposes. As a consequence of the public selection process, it is now possible to access all the design details of the winner algorithm [36]. For readers interested in the details of the Rijndael design, the inventors Daemen and Rijmen have reported their work in the book The Design of Rijndael [12]. The original proposed Rijndael algorithm allowed block and key sizes independently specified to any multiple of 32 bits, with a minimum of 128 bits and a maximum of 256 bits. The standard choice, which we refer to simply as AES, restricts to 128 bits the choice for input data block size, while the secret key can be chosen from 128, 192, or 256 bits. Nevertheless the combination of 128 bits for both the data block and the secret key is the most
frequently used configuration and most research work focuses on this parameter combination only. In the remainder of this section, we shall make reference mainly to the 128–128 option. Like all other block cipher algorithms, AES is a composition of a basic round, which processes the input data block to encrypt, and of a key schedule, which processes the secret key to calculate the round keys. The encryption algorithm starts with an initial round, which simply adds the first round key to the input data block. Then, the round transformation is repeated 10, 12, or 14 times depending on whether the key is 128-bit, 192-bit, or 256-bit long, respectively. The basic round of the AES, shown in Fig. 3.1, is composed of four internal transformations: SubBytes, ShiftRows, MixColumns, and AddRoundKey. The last round is different from the previous rounds, since it lacks the MixColumns transformation.

Fig. 3.1 The internal structure of an AES round
The byte is the atomic information element for the AES cipher algorithm; this feature is intentional and it was adopted by the Rijndael designers to allow efficient implementations on 8-bit CPUs as well as 32-bit CPUs. To easily describe the algorithm it is useful to model the data block as if it was arranged as a two-dimensional array, which is called “state matrix” or simply “state.” Each entry in the state matrix corresponds to a byte, the state matrix being squared with four rows and four columns.
The first internal transformation of the round is called SubBytes. As the name itself suggests, in this transformation the value of each byte constituting an entry of the state matrix is substituted. The substitution operation, as it usually happens in most block cipher algorithms, is non-linear. The substitution law is a mathematical function in a finite field (or Galois field). The SubBytes transformation consists of two stages and it considers the byte as an element of the Galois field GF(2^8). First the multiplicative inverse of the byte element is computed, with the additional constraint that if the initial byte is 0, then it is mapped to the 0 element as well. After performing the inversion, an affine transformation is applied to each bit of the byte. The following equation represents the affine transformation:

b'_i = b_i + b_{(i+4) mod 8} + b_{(i+5) mod 8} + b_{(i+6) mod 8} + b_{(i+7) mod 8} + c_i

where b_i indicates the ith bit of the byte before the affine transformation, b'_i indicates the same bit after the affine transformation, and c is a constant with a hexadecimal value of 0x63. The two stages of the transformation can be combined together and implemented as a look-up table of 256 bytes. When implemented as a look-up table, the SubBytes transformation is often called S-BOX. Figure 3.2 depicts the SubBytes transformation.
Fig. 3.2 SubBytes transformation, implemented as an S-BOX
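As a concrete illustration of the two stages, here is a minimal Python sketch, under the assumption that the inverse is computed as a^254 (Fermat's little theorem in GF(2^8)) and that the reduction polynomial is the AES polynomial x^8 + x^4 + x^3 + x + 1; the helper names are illustrative and not part of any standard API.

    def gf_mul(a, b):
        # multiplication in GF(2^8) modulo x^8 + x^4 + x^3 + x + 1 (0x11B)
        p = 0
        for _ in range(8):
            if b & 1:
                p ^= a
            carry = a & 0x80
            a = (a << 1) & 0xFF
            if carry:
                a ^= 0x1B
            b >>= 1
        return p

    def gf_inv(a):
        # multiplicative inverse as a^254; 0 is mapped to 0, as required by SubBytes
        r = 1
        for _ in range(254):
            r = gf_mul(r, a)
        return r

    def sbox(byte):
        # inversion followed by the affine transformation with constant 0x63
        x = gf_inv(byte)
        out = 0
        for i in range(8):
            bit = ((x >> i) ^ (x >> ((i + 4) % 8)) ^ (x >> ((i + 5) % 8)) ^
                   (x >> ((i + 6) % 8)) ^ (x >> ((i + 7) % 8)) ^ (0x63 >> i)) & 1
            out |= bit << i
        return out

    SBOX = [sbox(b) for b in range(256)]   # e.g., SBOX[0x00] == 0x63

Building the table once and reading it at run time is exactly the look-up-table implementation mentioned above.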
The second internal transformation applied to the state matrix is called ShiftRows. It executes a fixed rotation of the four rows of the state matrix. The first (top) row is not touched; the second row is rotated by 1 byte position to the left; the third row is rotated by 2 byte positions to the left; and the fourth (bottom) row is rotated by 3 byte positions to the left. Figure 3.3 summarizes the ShiftRows transformation graphically, highlighting how the leftmost column is diffused through the state matrix.
The so-called MixColumns internal transformation is the third one, and it executes a vector–matrix multiplication of the columns of the state matrix by a 4-by-4 square matrix with fixed coefficient entries interpreted as elements of GF(2^8). Since the bytes are still considered elements of the finite field GF(2^8), the multiplications are performed in this finite field. The transformation is called MixColumns because each entry of the output state matrix in column i depends on all four entries of column i in the original input state matrix, weighted by the coefficients of the constant matrix. In other words, MixColumns spreads information across columns while ShiftRows does the same, but across rows.
Fig. 3.3 ShiftRows transformation and its diffusion effect
Of course, such a form of orthogonality is intentional and aims at mixing and diffusing information. Figure 3.4 depicts the calculation of the leftmost column as the multiplication of the column by the constant square matrix. As can be seen, the coefficient entries of the constant square matrix have been chosen carefully: their values (1, 2, and 3) make the multiplication computationally quite simple, since multiplication by 2 is just a left shift of the byte (if the most significant bit of the byte is 1, a reduction must be computed), while multiplication by 3 can be obtained by a left shift followed by one addition. The constant square matrix is invertible, to allow decryption. Another characteristic of the constant matrix is that the four entries of the top row are cyclically repeated in the three rows below (this simplifies storing the coefficients).
Fig. 3.4 MixColumns transformation, processing a single column
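To make the role of the coefficients concrete, the following hedged Python sketch processes a single column exactly as in Fig. 3.4; xtime is the multiplication by 2 with the conditional reduction described above, and multiplication by 3 is obtained as xtime(a) ^ a. The function names are illustrative only.

    def xtime(a):
        # multiplication by 2 in GF(2^8): shift left, reduce by 0x1B if needed
        a <<= 1
        return ((a ^ 0x1B) & 0xFF) if a & 0x100 else a

    def mix_column(col):
        a0, a1, a2, a3 = col
        return [
            xtime(a0) ^ xtime(a1) ^ a1 ^ a2 ^ a3,   # 2*a0 + 3*a1 + a2 + a3
            a0 ^ xtime(a1) ^ xtime(a2) ^ a2 ^ a3,   # a0 + 2*a1 + 3*a2 + a3
            a0 ^ a1 ^ xtime(a2) ^ xtime(a3) ^ a3,   # a0 + a1 + 2*a2 + 3*a3
            xtime(a0) ^ a0 ^ a1 ^ a2 ^ xtime(a3),   # 3*a0 + a1 + a2 + 2*a3
        ]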
The fourth and last internal transformation is called AddRoundKey. In this transformation the round key is added modulo 2 to the state matrix. This task just requires an array of XOR gates. It should be noted that the first and second transformations, SubBytes and ShiftRows, can be permuted without affecting the round: they both work in parallel and independently on the 16 entries of the state matrix.
Like any other block cipher algorithm, AES iteratively processes the secret key to obtain the various round keys. In the Rijndael algorithm the key schedule is parameterized by the size of the secret key. The simplest formulation of the key schedule applies to a 128-bit secret key. In the first round the secret key is added as it is, while in the following rounds the current round key is derived from the previous round key in a sequential manner. Since the round key is just added to the state matrix, we can imagine the round key as if it were arranged as a matrix (the same arrangement adopted for the data block); each column of the round key is conceived as a word of 32 bits (four bytes).
In total, the round key is composed of four such words. When the current round key must be calculated, its first (leftmost) word (column) is derived from the first (leftmost) word (column) and from the fourth (rightmost) word (column) of the previous round key. In particular, the fourth (rightmost) word is first rotated, then it is processed through the S-BOX (each of its four bytes is processed independently), and finally it is added to a round constant. After this transformation, the fourth (rightmost) word is bit-wise XORed to the first (leftmost) word (Fig. 3.5).
Fig. 3.5 Key schedule
The other three words are calculated in a simpler way: each is obtained as the bit-wise XOR between the word immediately on its left and the word in the same position of the previous round key. With key sizes longer than 128 bits, the key schedule is a little more complex, but even in this case 128 bits of the expanded key are used for each round. The NIST standard [36] gives a detailed description of the key schedule for keys of 128, 192, and 256 bits.
The decryption algorithm is implemented using the four inverse internal transformations of the encryption algorithm applied in reverse order. Accordingly, the decryption round is composed of four transformations: invSubBytes, invShiftRows, invMixColumns, and invAddRoundKey. InvSubBytes consists of two steps: the byte is again considered as an element of the field GF(2^8); it is processed through the inverse of the affine transformation used in SubBytes and then it is inverted.
InvShiftRows is a shift of the rows, but in the opposite direction with respect to the ShiftRows transformation. InvMixColumns is a vector–matrix multiplication, but the coefficient matrix is the inverse of that used in the MixColumns transformation. InvAddRoundKey is still a bit-wise XOR operation, since XOR is its own inverse. The round keys are used in reverse order: if the complete sequence of round keys is available at once, it can be read starting from the end rather than from the beginning.
The encryption process starts with an initial round consisting only of AddRoundKey; then rounds 1 through 9 (11 or 13 when using 192-bit or 256-bit keys) follow, each of which is the sequence of SubBytes, ShiftRows, MixColumns, and AddRoundKey. The algorithm ends with the last round, which lacks MixColumns and is hence composed of SubBytes, ShiftRows, and AddRoundKey. The decryption algorithm starts with an initial round consisting only of invAddRoundKey, invShiftRows, and invSubBytes. The nominal round is the sequence of invAddRoundKey, invMixColumns, invShiftRows, and invSubBytes. The last round is followed by a single invAddRoundKey. It is also possible to exchange the order of execution of the invShiftRows and invSubBytes internal transformations (similarly to SubBytes and ShiftRows during the encryption process). Notice that the sequence of internal transformations applied during the inverse round is different from that of the encryption round: the difference is in the order of invMixColumns and invAddRoundKey.
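The encryption flow just described can be summarized in a few lines of Python. This is only a structural sketch: the four transformations and the list of expanded round keys are passed in as arguments so that nothing undefined is assumed, and the names are illustrative rather than standard.

    def aes128_encrypt_block(state, round_keys,
                             sub_bytes, shift_rows, mix_columns, add_round_key):
        # round_keys[0..10] are the eleven 128-bit round keys of the key schedule
        state = add_round_key(state, round_keys[0])      # initial round
        for r in range(1, 10):                           # rounds 1 through 9
            state = sub_bytes(state)
            state = shift_rows(state)
            state = mix_columns(state)
            state = add_round_key(state, round_keys[r])
        state = sub_bytes(state)                         # last round lacks MixColumns
        state = shift_rows(state)
        return add_round_key(state, round_keys[10])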
3.4 Modes of Operation
There exist many different ways of using a block cipher, depending on the security target to be achieved.
Electronic code book. The simplest mode of using a block cipher is to divide the plaintext into blocks of the same length as the input of the block cipher and encrypt each of them. This mode is usually referred to as the Electronic Code Book mode (i.e., ECB) and is shown in Fig. 3.6.
Fig. 3.6 Schematic of ECB encryption
It is not always advisable to use this mode, since some properties of the plaintext can be inferred from the ciphertext: this fact is particularly relevant for images. This happens because the ECB mode maps inputs of the same value to outputs of the same value, and thus in the case of images it is possible to derive information. This weakness is shown in Fig. 3.7, since the encryption of the same data always gives equal results.
Fig. 3.7 Image before and after encryption with AES in ECB mode
To overcome this limitation, in the context of DES, three other modes of operation were proposed together with the algorithm [1]. In the following, we will illustrate the three modes presented with DES and a new one presented with the AES. The following notation will be used: Pi denotes the ith plaintext block, while Ci the corresponding ciphertext block. Most of the modes also require an initialization vector (IV): this value is a random number of the same size as the input block, and it should be transmitted to the receiver. It is not required to keep the IV private; it is only required that the attacker is not capable of predicting its value in advance.
Modes of operation for protecting privacy are divided into two main families: those inspired by stream ciphers and the others. In a stream cipher mode, the block cipher is used to generate a key stream that is XORed with the plaintext. Non-stream cipher modes process the plaintext, or a function of it, directly through the encryption algorithm.
When a mode such as ECB is used on a plaintext whose length is not a multiple of the block size, it is necessary to apply padding. There are different methods for padding, such as, for instance, adding a one and as many zeros as necessary to reach a multiple of the block size. An alternative solution is the so-called ciphertext stealing [26]. This technique consists of the following steps: first the second-last block is encrypted, and from the result of this encryption as many bits as needed to pad the last block are taken. The two ciphertext blocks are afterward swapped in position, so that the last ciphertext block corresponds to the second-last plaintext block and vice versa. Once the receiver obtains the ciphertext and knows the length of the plaintext, it can decrypt the second-last ciphertext block and then split the result between the bits of the padding and the bits of the last plaintext block. The bits of the padding are decrypted together with the last ciphertext block to obtain the second-last plaintext block.
Cipher block chain. In the case of cipher block chaining (i.e., CBC) the encryption is ruled by the following law:

C0 = enc(P0 ⊕ IV)
Ci = enc(Pi ⊕ Ci−1)

Compared to ECB, the CBC mode does not leak information even when some plaintext blocks are equal, since the input of the encryption primitive is randomized through the output of the previous encryption (Fig. 3.8). The drawback of CBC compared to ECB is the implicit serialization of the encryption function: to process the ith input block it is necessary to have already completed the encryption of block i − 1.
Fig. 3.8 Schematic of CBC mode encryption
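The chaining law above translates directly into code. Below is a minimal sketch, assuming enc is the block encryption under the secret key (for instance one AES encryption), that all blocks are byte strings of the cipher block size, and that padding has already been applied; the names are illustrative.

    def xor_bytes(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def cbc_encrypt(enc, iv, plaintext_blocks):
        ciphertext_blocks = []
        previous = iv                          # the IV plays the role of C_{-1}
        for p in plaintext_blocks:
            c = enc(xor_bytes(p, previous))    # C_i = enc(P_i XOR C_{i-1})
            ciphertext_blocks.append(c)
            previous = c                       # chaining forces serial processing
        return ciphertext_blocks

The last comment reflects the serialization drawback mentioned above: block i cannot be processed before block i - 1 is finished.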
Cipher feedback. A third proposed mode is cipher feedback (i.e., CFB), represented by the law:

C0 = P0 ⊕ enc(IV)
Ci = Pi ⊕ enc(Ci−1)

This mode belongs to the family of stream cipher modes, in which the block cipher is used to generate a stream to be XORed with the plaintext. The advantage of this mode is that it requires the encryption primitive only. Padding is not needed, since if the last block is shorter than the block cipher input size, only part of the stream is used.
Output feedback. An evolution of CFB is the output feedback mode of operation (i.e., OFB):
O0 = enc(IV)
Oi = enc(Oi−1)
Ci = Pi ⊕ Oi

The advantage in this case is the possibility of preparing the encryption stream in advance of the availability of the plaintext.
Counter mode. With the standardization of the AES, another mode was proposed, the Counter mode (i.e., CTR or CM) (Fig. 3.9):

O0 = IV
Oi = Oi−1 + 1
Ci = Pi ⊕ enc(Oi)
Fig. 3.9 Schematic of Counter mode encryption
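The counter-mode law also maps directly to code. The sketch below makes the same assumptions as the CBC sketch (enc is the keyed block encryption and blocks are byte strings of the block size); handling the counter as a plain integer is an illustrative simplification.

    def ctr_encrypt(enc, iv, plaintext_blocks, block_size=16):
        out = []
        counter = int.from_bytes(iv, "big")                   # O_0 = IV
        for p in plaintext_blocks:
            keystream = enc(counter.to_bytes(block_size, "big"))
            out.append(bytes(x ^ y for x, y in zip(p, keystream)))
            counter += 1                                      # O_i = O_{i-1} + 1
        return out

Since each keystream block depends only on the counter value, the calls to enc are independent of each other and can be computed in parallel or in advance.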
This is still a stream cipher mode, and thus only the encryption functionality is needed. The XOR is executed between the key stream and the plaintext. Usually the IV is not completely random: the least significant 32 bits start from zero and are incremented, while the other part of the IV is random (in some cases, it is composed of a random part and a part that contains protocol-related data). The advantage of the counter mode is the possibility of parallelizing the key stream generation, a feature that is not available with CFB and OFB. Stream cipher modes are useful in those applications where the key stream can be computed in advance, so that the encryption/decryption operations reduce to a simple XOR. In the case of counter mode, it is even possible to use multiple encryption cores to generate the key stream simultaneously, since the ith counter block is simply obtained by incrementing the (i − 1)th one, and it is therefore not necessary to wait for the result of the encryption of block i − 1.
All the previously described modes are oriented to privacy protection, while another requested security feature is data integrity. For this purpose, NIST has recently proposed a mode called CMAC, which is a modification of CBC-MAC. The CBC-MAC mode is a simple derivation of CBC: the message is processed with the CBC mode, but only the last encrypted block is kept as a MAC (i.e., Message Authentication Code) [2]. CBC-MAC is not used as it is because of security problems: it is considered secure only if applied to messages of the same length, a multiple of the block size [3]. Different variations have been proposed, such as XCBC, two-key CBC-MAC, and one-key CBC-MAC (i.e., OMAC) [16, 22, 20]. CMAC is the standardization of OMAC. The main difference between CBC-MAC and CMAC is the padding of the last input block, which varies depending on the length of the message to be processed. This modification aims to solve the problem related to the required fixed length of the plaintexts. CMAC is a good candidate for replacing HMAC constructions, which are mainly used today for MAC computation but have the drawback of requiring a hash algorithm, while CMAC requires only the AES algorithm already used for privacy protection.
Finally, there are modes of operation that combine privacy and data integrity, such as CCM or GCM [30, 31]. These modes are oriented to all the applications, such as networking, where the message is divided in two different parts (e.g., the header and the payload). The entire message is protected for data integrity, but only one part of the message is protected for privacy (i.e., the payload); the header has to remain in clear to allow the correct forwarding of the packet. CCM is a combination of counter mode for privacy and CBC-MAC for data integrity. The MAC is encrypted in such a way that the weaknesses of CBC-MAC are not exploitable by an attacker.
Tweakable modes of operation. There is a working group in the IEEE, called IEEE P1619, with the aim of defining an encryption standard for different types of storage devices. In this standard different modes have been analyzed, and most of them belong to a family called tweakable modes of operation. This concept was introduced by Liskov et al. with the LRW mode of operation [23]. The current draft of the standard is oriented to a derivation of LRW called XTS [19]. Tweakable modes allow the data to be encrypted to be bound to its position on the storage device. This prevents an attacker from recognizing that the same file is stored in different locations; in addition, moving an encrypted file to a different location will make it decrypt to a message completely different from the original one. To achieve this result, tweakable modes use, besides the encryption key, an index I related to the position of the block in the address space of the memory. This feature is not an alternative comparable to the data integrity achieved with a MAC algorithm.
The XTS mode can be represented by the law:

T = enc(I) ⊗ α^j
Ci = enc(Pi ⊕ T) ⊕ T

The tweak value T is used to bind the encryption operation to the index value I. For computing T, a modular multiplication in the finite field GF(2^128) is used (α is a fixed primitive element of the finite field and α^j is the jth power of α).
3.5 Implementation of the AES
AES can be efficiently implemented in software on different types of CPUs, including low-end 8-bit microcontrollers like those used in smart cards and embedded systems. However, for applications with very specific targets, such as high throughput or low power consumption, it becomes necessary to implement the encryption algorithm in hardware.
3.5.1 Software Implementation
The basic software implementation of AES stores the S-BOX as an array and implements the MixColumns transformation as a computational stage. On 32-bit CPUs, it is possible to process a whole column of the state matrix, combining S-BOX and MixColumns in a single step: S-BOX and MixColumns are applied together to all 256 possible values of a byte and the results are stored in an array whose elements are 32-bit words. For decryption, it is possible to swap the order of invAddRoundKey and invMixColumns, because invMixColumns is linear. This makes it possible to keep the same structure for both encryption and decryption; in this case it is necessary to apply invMixColumns also to the round key when it is derived. It is worth noticing that if the matrix multiplication of MixColumns is actually calculated (and not stored as a look-up table), the direct and the inverse operations behave differently, because the coefficients used for MixColumns allow simpler computations than those of invMixColumns.
A set of different software versions can be implemented starting from the observation that S-BOX and MixColumns can be collapsed into a single computational step. The simplest way is to create the so-called T-table, containing all the possible results of this combined step, and keep the S-BOX alone for the last round, where MixColumns is not executed. A further possibility is to instantiate four tables: one with the same content as the T-table and the others rotated by one, two, and three bytes, in such a way that no additional processing is needed. To improve the performance of the last round, which lacks the MixColumns stage, it is possible to store the S-BOX in a 32-bit array instead of an 8-bit one. This is useful for all processors that have a word-oriented memory access. Furthermore, it is possible to have four tables for the S-BOX, one storing the S-BOX values in the least significant byte, a second storing the values in the second least significant byte, and so on; this avoids the computation of shifts.
Clearly there is a trade-off between the required memory space and the execution time. The selection of the best trade-off is not simple when a cache memory is present. In this case the execution time depends on the cache memory organization, the amount of data to be processed, and the amount of look-up tables used, as shown by Bertoni et al. [4]. The cache miss penalty can even be used as information for performing a side-channel attack against the implementation. Finally, the developer can apply the loop unrolling technique to reach the maximum speed at the cost of a longer code size.
During and also after the selection of the AES algorithm, the scientific community has performed a series of developments and comparisons to evaluate the proposals in software, such as cAESar [24] and crypto++ [37]. Table 3.1 gives an overview of the performance on different CPUs. The execution time refers to a single encryption in ECB mode; the key schedule is computed in advance. In the case of CPUs with cache, the timing refers to the encryption of a buffer arranged in such a way that the cache miss penalty is negligible.

Table 3.1 Performance of software AES on different CPUs

CPU                          Clock cycles     Code size
8051 [11]                    4000             less than 1 kByte
ARM7 [24]                    1440–3900        2889–1467
AMD Opteron 2.4 GHz [37]     283              –
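As an illustration of how the T-table collapses S-BOX and MixColumns, the following hedged Python sketch builds one such table from an S-BOX array and the GF(2^8) doubling helper sketched earlier in this chapter; the 32-bit packing order is one possible convention, not the only one.

    def build_t_table(sbox_table, xtime):
        t0 = []
        for b in range(256):
            s = sbox_table[b]
            s2 = xtime(s)          # 2*s in GF(2^8)
            s3 = s2 ^ s            # 3*s = 2*s + s
            # pack the column (2s, s, s, 3s) produced by MixColumns into one word
            t0.append((s2 << 24) | (s << 16) | (s << 8) | s3)
        return t0

The other three tables mentioned above are byte rotations of this one, so that a full column of the next state can be obtained with four table look-ups and three XORs.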
The instruction-level parallelism of the Rijndael algorithm on high-end CPUs was studied by Clapp [9]. Different studies have been conducted on the design of dedicated instructions for the AES algorithm. This solution can be interesting for achieving a speed-up over pure software implementations while saving silicon area compared to full hardware implementations. In addition, the use of look-up tables stored in memory has been indicated as a possible means of attack in all systems equipped with a memory hierarchy, where cache misses can be directly connected with the value of the key. Dedicated instructions can also be used to avoid this kind of attack [34, 5].
3.5.2 Hardware Implementation
The Rijndael algorithm has a very modular structure which allows efficient implementations with different data sizes (e.g., 8, 32, or 128 bits). The first choice when implementing AES in hardware is the architecture to use, depending on area and throughput requirements. Another crucial element to take into account is the implementation of the S-BOX, since it is the biggest and most complex component of the whole AES. Very different S-BOXes have been developed for different targets, such as shortening the critical path, lowering the area, or lowering the power consumption.
The straightforward approach to the hardware implementation of Rijndael is to instantiate the combinational logic performing the round function in a single clock cycle and use it iteratively. This implementation requires as many clock cycles as the number of rounds of the algorithm. Since the design principle is simplicity, it is possible to reduce the area further at the cost of a longer computation time. Instead of instantiating the combinational logic for the full round (i.e., a 128-bit structure with 16 S-BOXes), it is possible to use one-half (i.e., a 64-bit structure with 8 S-BOXes), one-quarter (i.e., a 32-bit structure with 4 S-BOXes), or even go down to an 8-bit architecture with a single S-BOX. In the opposite direction, it is possible to pipeline the round logic or to instantiate more than one round. Pipelining can be made difficult by the modes of operation, since many of them require finishing the encryption of one block before starting with the next one. Regarding the key schedule, in hardware the preferred solution is generally to perform it on the fly to save memory, but in some cases it is also possible to store the round keys in memory, especially if more than one round is present in the architecture.
The main change in AES implementations, compared to DES ones, is that the clear and simple mathematical formulation of the AES S-BOX, which is the most expensive operation, allows for some freedom at implementation time. Related to the SubBytes and invSubBytes functionalities, there exist two S-BOXes: the direct one used for encryption and the inverse one for decryption. One obvious possibility is to conceive the S-BOXes as two different tables without considering their mathematical formulation. The two S-BOXes can then be stored in memory or synthesized as combinatorial networks. These two immediate solutions are usually the fastest ones, and the choice between them usually depends on the particular technology. Alternatively, the S-BOX can be decomposed into a number of stages. For instance, the calculation of the multiplicative inverse (which is the same for both the direct and the inverse S-BOX) is followed by the affine transformation. In this way, it is possible to save some silicon area or some memory cells, partially overlapping the S-BOXes for encryption and decryption.
The calculation of the multiplicative inverse in a finite field is not an easy task. Algorithms based on the well-known extended Euclidean algorithm or on Fermat's little theorem for the computation of the multiplicative inverse in a finite field exist, but they are generally not considered because they are hard to implement and expensive in terms of area when implemented in hardware. A first immediate solution consists in storing the inversion function alone in a look-up table. Another solution is based on the mathematical observation that the finite field GF(2^8) can be seen as a composite field GF((2^4)^2), and thus inversion in GF(2^8) can be reduced to inversion in GF(2^4), as presented by Rijmen [32]. The advantage of this formulation is that the look-up table storing the inverse in GF(2^4) is smaller, and the implementation by means of a memory or a combinatorial network is certainly simpler and less expensive. The adoption of composite finite fields is generally considered to give the smallest possible implementation of the S-BOX in terms of silicon area. However, it is also the one requiring the longest critical path, as shown by Morioka and Satoh [27].
Fig. 3.10 Architecture of the finite field inversion in composite fields
Canright presented a study of a compact S-BOX [7]; the structure of the finite field inversion is depicted in Fig. 3.10. A different S-BOX has also been developed targeting the lowest power consumption, as described by Bertoni et al. [6]. A comparison between the different techniques for implementing the AES S-BOX is given by Tillich et al. [35].
Several implementations of the AES algorithm in different technologies can be found in the technical literature. Kuo and Verbauwhede presented an ASIC AES chip that can reach 1.8 Gbits/s [21]. This implementation can manage key lengths of 128, 192, and 256 bits, requiring about 173 Kgates. An improved version is presented by Hwang et al. in [18]; in this case the authors have introduced protection against differential power analysis. A Rijndael implementation on a Xilinx FPGA with a throughput of 7 Gbits/s can be found in McLoone and McCanny [25]. One of the fastest AES implementations is reported in [17]: it can reach 70 Gbits/s with a careful design of the pipeline. Fischer and Drutarovsky implemented the AES on an Altera FPGA with a throughput of 750 Mbits/s [15]. Sklavos and Koufopavlou presented two ASIC implementations with throughputs from 259 Mbits/s to 3.6 Gbits/s [33]. For details about an implementation with four S-BOXes, it is possible to see the work of Chodowiec and Gaj [8], which can fit in a small FPGA and can reach 166 Mbits/s. Finally, a very compact implementation of the AES is presented by Feldhofer et al. [14]. It is capable of encrypting and decrypting with 128-bit keys only. This tiny AES is simply composed of the control unit, a 128-bit register for the message, a 128-bit register for the key, and the round logic block, which uses a single S-BOX. It requires less than 4 Kgates and can reach more than 12 Kbits/s in encryption.
3.6 Conclusions
Stream ciphers and block ciphers are the most widely used cryptographic algorithms. In the case of block ciphers there is a vast selection of secure algorithms; in particular, the AES finalists have continued to be studied even after the selection of Rijndael. Rijndael is becoming the widespread standard for new protocols and applications. This is due to its good security properties and efficient implementations on a large set of devices, such as small microcontrollers, PC-like architectures, and dedicated hardware.
References 1. DES Modes of Operation, FIPS, Federal Information Processing Standard, Pub No. 81. Available at http://csrc.nist.gov/fips/change81.ps, December 1980. 2. ISO/IEC 9797. Data integrity mechanism using a cryptographic check function employing a block cipher algorithm. ISO, 1989. 3. M. Bellare, J. Kilian, and P. Rogaway. The security of cipher block chaining. In Advances in Cryptology — CRYPTO ’94, pages 340–358, 1994. 4. G. Bertoni, A. Bircan, L. Breveglieri, P. Fragneto, M. Macchetti, and V. Zaccaria. About the performances of the Advanced Encryption Standard in embedded systems with cache memory. In Proceedings of the 2003 International Symposium on Circuits and Systems, 2003. ISCAS ’03. 25–28 May 2003, volume 5, pages 145–148, 2003. 5. G. Bertoni, L. Breveglieri, R. Farina, and F. Regazzoni. Speeding Up AES By Extending a 32 bit Processor Instruction Set. In Proceedings of the IEEE 17th International Conference on Application-specific Systems, Architectures and Processors (ASAP’06), pages 275–282, 2006. 6. G. Bertoni, M. Macchetti, L. Negri, and P. Fragneto. Power-efficient asic synthesis of cryptographic sboxes. In D. Garrett, J. Lach, and C. A. Zukowski, editors, ACM Great Lakes Symposium on VLSI, pages 277–281. ACM, 2004. 7. D. Canright. A very compact s-box for aes. In CHES, pages 441–455, 2005. 8. P. Chodowiec and K. Gaj. Very compact FPGA implementation of the AES algorithm. In C. D. Walter, C¸. K. Koc¸, D. Naccache, and C. Paar, editors, Proceedings of the Workshop on Cryptographic Hardware and Embedded Systems — CHES 2003, LNCS 2779, pages 319–333, Springer-Verlag, Berlin, 2003. 9. C. Clapp. Instruction-level parallelism in AES Candidates. In Proceedings: Second AES Candidate Conference (AES2), Rome, Italy, March 1999. 10. CRYPTOREC. Cryptography Research and Evaluation Committees. http:// www.cryptrec.jp/ english/about.html. 11. J. Daemen and V. Rijmen. AES proposal: Rijndael. In First Advanced Encryption Standard (AES) Conference, Ventura, CA, USA, 1998. 12. J. Daemen and V. Rijmen. The Design of Rijndael. Springer-Verlag, Berlin, Germany, 2001. 13. ESTREAM. ECRYPT Stream Cipher Project. http://www.ecrypt.eu.org/stream. 14. M. Feldhofer, S. Dominikus, and J. Wolkerstorfer. Strong authentication for rfid systems using the aes algorithm. In M. Joye and J.-J. Quisquater, editors, CHES, LNCS 3156, pages 357–370. Springer, 2004. 15. V. Fischer and M. Drutarovsky. Two Methods of Rijndael Implementation in Reconfigurable Hardware. In C ¸ . K. Koc¸, D. Naccache, and C. Paar, editors, Proceedings of the Second Workshop on Cryptographic Hardware and Embedded Systems — CHES 2001, LNCS 2162, pages 51–65, Springer-Verlag, Berlin, Germany, 2001. 16. V. D. Gligor and P. Donescu. Fast encryption and authentication: Xcbc encryption and xecb authentication modes. In Fast Software Encryption, FSE2001, pages 92–108, 2001. 17. A. Hodjat and I. Verbauwhede. Area-throughput trade-offs for fully pipelined 30 to 70 gbits/s aes processors. IEEE Transactions on Computers, 55(4):366–372, 2006. 18. D. Hwang, K. Tiri, A. Hodjat, B.-C. Lai, S. Yang, P. Schaumont, and I. Verbauwhede. Aesbased security coprocessor ic in 0.18-um cmos with resistance to differential power analysis side-channel attacks. IEEE Journal of Solid-State Circuits, 41(4):781–792, 2006. 19. IEEE. IEEE Security in Storage Working Group. IEEE P1619, www.ieee-p1619.wetpaint.com, 2007. 20. T. Iwata and K. Kurosawa. Omac: One-key cbc mac. In T. Johansson, editor, FSE, LNCS 2887, pages 129–153. Springer, 2003. 21. H. Kuo and I. 
Verbauwhede. Architectural Optimization for a 1.82Gbits/sec VLSI Implementation of the AES Rijndael Algorithm. In Ç. K. Koç, D. Naccache, and C. Paar, editors, Proceedings of the Second Workshop on Cryptographic Hardware and Embedded Systems — CHES 2001, LNCS 2162, pages 51–65, Springer-Verlag, Berlin, Germany, 2001.
22. K. Kurosawa and T. Iwata. Tmac: Two-key cbc mac. In M. Joye, editor, CT-RSA, LNCS 2612, pages 33–49. Springer, 2003. 23. M. Liskov, R. Rivest, and D. Wagner. Tweakable block ciphers. In Advances in Cryptology — CRYPTO ’02, pages 31–46, 2002. 24. G. Hach¨ez, F. Koeune, and J.-J. Quisquater. cAESar results: Implementation of four AES candidates on two smart cards. In Proceedings: Second AES Candidate Conference (AES2), Rome, Italy, March 1999. 25. M. McLoone and J.V. McCanny. High performance single-chip FPGA Rijndael algorithm implementations. In C ¸ . K. Koc¸, D. Naccache, and C. Paar, editors, Proceedings of the Second Workshop on Cryptographic Hardware and Embedded Systems — CHES 2001, LNCS 2162, pages 65–76, Springer-Verlag, Berlin, Germany, 2001. 26. C. H. Meyer and S. M. Matyas. Cryptography: A New Dimension in Computer Data Security. John Wiley & Sons, New York, NY, 1982. 27. S. Morioka and A. Satoh. An Optimized S-box circuit architecture for low power AES design. In C ¸ . K. Koc¸, B.S. Kaliski Jr. and C. Paar, editors, Proceedings of the Second Workshop on Cryptographic Hardware and Embedded Systems — CHES 2002, LNCS 2523, pages 172–186, Springer-Verlag, Berlin, Germany, 2002. 28. NESSIE. New European Schemes for Signatures, Integrity, and Encryption. http://www.cryptonessie.org. 29. NIST FIPS PUB 46-3. Data Encryption Standard. Federal Information Processing Standards, National Bureau of Standards, U.S. Department of Commerce, 1977. 30. NIST Special Publication 800-38C. Recommendation for Block. Cipher Modes of Operation: The. CCM Mode for Authentication. http://csrc.nist.gov. 2004. 31. NIST Special Publication 800-38D. Recommendation for Block Cipher Modes of Operation: Galois/Counter Mode (GCM) for Confidentiality and Authentication. Federal Information Processing Standards, http://csrc.nist.gov, 2007. 32. V. Rijmen. Efficient Implementation of the Rijndael S-box, 2001. Available at http://www.esat.kuleuven.ac.be/ rijmen/rijndael/sbox.pdf. 33. N. Sklavos and O. Koufopavlou. Architectures and VLSI implementations of the AESproposal Rijndael. IEEE Transactions on Computers, 51(12):1454–1459, December 2002. 34. S. Tillich and J. Groschdl. Accelerating AES using instruction set extensions for elliptic curve cryptography. In Computational Science and Its Applications - ICCSA 2005, pages 665–675, Springer-Verlag, Berlin, Germany, 2005. 35. S. Tillich, M. Feldhofer, and J. Großsch¨adl. Area, delay, and power characteristics of standardcell implementations of the aes s-box. In S. Vassiliadis, S. Wong, and T. H¨am¨al¨ainen, editors, SAMOS, LNCS 4017, pages 457–466. Springer, 2006. 36. U.S. Department of Commerce/National Institute of Standard and Technology. FIPS PUB 197, Specification for the Advanced Encryption Standard (AES), November 2001. Available at http://csrc.nist.gov/encryption/aes. 37. Wei Dai. Crypto++. www.cryptopp.com, 2004.
Chapter 4
Arithmetic for Public-Key Cryptography
Kazuo Sakiyama and Lejla Batina
4.1 Introduction
In this chapter, we discuss arithmetic algorithms used for implementing public-key cryptography (PKC). More precisely, we explore the various algorithms for RSA exponentiation and point/divisor multiplication for curve-based cryptography. The selection of the algorithms has a profound impact on the trade-off between cost, performance, and security. The goal of this chapter is to introduce the different recoding techniques to reduce the number of computations efficiently. Building blocks for PKC, i.e., arithmetic in finite fields and finite rings, are discussed in Section 1.3.
4.2 RSA Modular Exponentiation
The two most straightforward algorithms to implement RSA modular exponentiation are given in Algorithm 1, where c is an integer (e.g., a ciphertext), n is a product of two prime numbers, and d is a t-bit positive integer (e.g., a private key). The algorithms perform the modular exponentiation c^d mod n used in the RSA encryption and decryption processes. The basic operations in both algorithms are modular multiplications and modular squarings. Let d = (111101)_2, so that t = 6. Table 4.1 shows the detailed computational steps for computing c^61 mod n using Algorithm 1.
For side-channel reasons, it is preferable to have a constant operation time for both modular multiplication and modular squaring. To achieve this, the same datapath is used for both operations: modular squarings are not performed on dedicated hardware, but on the same multiplier as the one used for modular multiplication. Taking into account an expected t/2 ones in d, the total number of modular multiplications and modular squarings in both algorithms is 3t/2. For the left-to-right algorithm, the modular multiplication in step 5 has to be performed after the modular squaring in step 3 finishes.
Algorithm 1 Algorithms for left-to-right and right-to-left binary modular exponentiation
Left-to-right:
Require: c, d = (d_{t−1} d_{t−2} · · · d_1 d_0)_2.
Ensure: c^d mod n.
1: T ← 1
2: for i from t − 1 down to 0 do
3:   T ← T^2 mod n
4:   if d_i = 1 then
5:     T ← T · c mod n
6:   end if
7: end for
8: Return T
Right-to-left:
Require: c, d = (d_{t−1} d_{t−2} · · · d_1 d_0)_2.
Ensure: c^d mod n.
1: S ← 1, T ← c
2: for i from 0 to t − 1 do
3:   if d_i = 1 then
4:     S ← S · T mod n
5:   end if
6:   T ← T^2 mod n
7: end for
8: Return S
Table 4.1 Example of left-to-right and right-to-left modular exponentiation

Left-to-right:
i      -    5    4    3    2     1     0
d_i         1    1    1    1     0     1
T      1    c    c^3  c^7  c^15  c^30  c^61

Right-to-left:
i      -    0    1    2    3     4     5
d_i         1    0    1    1     1     1
S      1    c    c    c^5  c^13  c^29  c^61
T      c    c^2  c^4  c^8  c^16  c^32  c^64
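Both binary methods of Algorithm 1 are only a few lines of code. The following minimal Python sketch (function names illustrative) reproduces the running example: for d = 61 both functions return pow(c, 61, n).

    def modexp_left_to_right(c, d, n):
        t = 1
        for bit in bin(d)[2:]:            # scan d from the most significant bit
            t = (t * t) % n               # squaring in every iteration
            if bit == "1":
                t = (t * c) % n           # multiplication only when the key bit is 1
        return t

    def modexp_right_to_left(c, d, n):
        s, t = 1, c
        while d:
            if d & 1:
                s = (s * t) % n
            t = (t * t) % n               # this squaring is independent of the multiply
            d >>= 1
        return s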
That is, the modular exponentiation can be performed in time (3t/2)M, where M is the computation time for one modular multiplication or one modular squaring. This algorithm requires one memory location, T, for intermediate values. On the other hand, in the right-to-left algorithm the modular multiplication and the modular squaring can be parallelized, which makes it possible to perform the modular exponentiation in time tM. However, the right-to-left algorithm requires two memory locations, S and T, for intermediate values.
It is possible to parallelize the modular operations also with the left-to-right algorithm by using the Montgomery powering ladder [14], as shown in Algorithm 2. This algorithm always performs a modular multiplication and a modular squaring within one for-loop iteration regardless of the value of the key bit, and the two modular operations can be parallelized when using two modular multipliers. Therefore, the performance and the memory cost are the same as for the right-to-left algorithm. Table 4.2 shows the detailed computation steps for computing c^61 using Algorithm 2.

Table 4.2 Example of parallelized left-to-right modular exponentiation

i      -    5    4    3    2     1     0
d_i         1    1    1    1     0     1
S      1    c    c^3  c^7  c^15  c^30  c^61
T      c    c^2  c^4  c^8  c^16  c^31  c^62
Algorithm 2 Algorithm for parallelized left-to-right binary modular exponentiation (the Montgomery powering ladder [14])
Require: c, d = (d_{t−1} d_{t−2} · · · d_1 d_0)_2.
Ensure: c^d mod n.
1: S ← 1, T ← c
2: for i from t − 1 down to 0 do
3:   if d_i = 1 then
4:     S ← S · T mod n, T ← T^2 mod n
5:   else
6:     T ← T · S mod n, S ← S^2 mod n
7:   end if
8: end for
9: Return S
Depending on the choice of algorithm, modular exponentiation can thus be accelerated up to tM by processing the modular operations in parallel. However, the drawback of these parallelized algorithms is that the twofold parallel computation needs two datapaths and twice the memory locations for intermediate variables. The next section discusses comparatively cheaper solutions to accelerate modular exponentiation.
4.2.1 Exponent Recoding
Another approach to accelerating RSA modular exponentiation without parallel processing is to use a different representation of the exponent d, so-called exponent recoding. This section describes a technique for exponent recoding. Many techniques for recoding the exponent d have been proposed in the literature, starting with Reitwiesner [16]. Here we mention the signed-digit representation. Consider an integer representation of the form d = Σ_{i=0}^{l} d_i 2^i, where d_i ∈ {−1, 0, 1}. This is called the (binary) signed-digit (SD) representation (see [1, 3, 13]). If an SD representation has no adjacent non-zero digits, it is called a non-adjacent form (NAF). Every integer d has a unique NAF, which has the minimum Hamming weight of any signed-digit representation of d. Algorithm 3 shows a method for recoding the exponent.
Algorithm 3 Signed-digit exponent recoding [13]
Require: d = (d_{t+1} d_t d_{t−1} d_{t−2} · · · d_1 d_0)_2 with d_{t+1} = d_t = 0.
Ensure: d', an SD representation of d or NAF(d).
1: c_0 ← 0
2: for i from 0 to t do
3:   c_{i+1} ← ⌊(d_i + d_{i+1} + c_i)/2⌋, d'_i ← d_i + c_i − 2c_{i+1}
4: end for
5: Return d'
Let d = (111101)_2 as an example of exponent recoding. Table 4.3 shows the detailed computational steps of the exponent recoding of d using Algorithm 3. We obtain d' = (1 0 0 0 1̄ 0 1)_2, where 1̄ = −1.

Table 4.3 Example of signed-digit exponent recoding

           i          0   1   2   3   4   5   6
Inputs     d_i        1   0   1   1   1   1   0
           c_i        0   0   0   1   1   1   1
           d_{i+1}    0   1   1   1   1   0   0
Outputs    c_{i+1}    0   0   1   1   1   1   0
           d'_i       1   0   1̄   0   0   0   1
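The recoding of Algorithm 3 is easy to express in a few lines. The following minimal Python sketch (function name illustrative) reproduces the example above, returning the digits d'_0, ..., d'_t starting from the least significant one.

    def signed_digit_recode(d):
        # append two leading zero bits, as required by Algorithm 3
        bits = [(d >> i) & 1 for i in range(d.bit_length() + 2)]
        carry = 0
        digits = []
        for i in range(len(bits) - 1):
            next_carry = (bits[i] + bits[i + 1] + carry) // 2
            digits.append(bits[i] + carry - 2 * next_carry)
            carry = next_carry
        return digits      # signed_digit_recode(61) == [1, 0, -1, 0, 0, 0, 1]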
When the exponent is given in an SD representation, c^{−1} mod n needs to be pre-computed to perform modular exponentiation with NAF(d). Unless modular inversion can be computed efficiently, only the left-to-right method of Algorithm 4 is worthwhile, because the right-to-left method requires additional modular operations for computing the intermediate variable U (see Algorithm 4). As a result, we can expect the average performance of modular exponentiation to be (4t/3)M + I with a single datapath, where M and I denote a modular multiplication (squaring) and the modular inversion c^{−1} mod n, respectively, because the average density of non-zero digits in the SD representation is approximately 1/3 [3].
The left-to-right modular exponentiation can be accelerated further by using the k-ary method, where k bits of the exponent are evaluated simultaneously (Algorithm 5). The total number of modular multiplications for one t-bit modular exponentiation is approximately (2^k − 2) + u · (k + 1), where u = ⌈t/k⌉. For 1024-bit or 2048-bit RSA, the best performance is obtained for k = 6, and the computational cost of one modular exponentiation is approximately (6t/5)M. This result is slightly faster than the exponent recoding method. However, this algorithm requires (2^k − 1) memory locations for pre-computed values.

Algorithm 4 Algorithms for left-to-right and right-to-left binary modular exponentiation using an exponent d' = NAF(d)
Left-to-right:
Require: c, d' = (d'_t d'_{t−1} · · · d'_1 d'_0)_2.
Ensure: c^{d'} mod n.
1: T ← 1
2: for i from t down to 0 do
3:   T ← T^2 mod n
4:   if d'_i = 1 then
5:     T ← T · c mod n
6:   else if d'_i = −1 then
7:     T ← T · c^{−1} mod n
8:   end if
9: end for
10: Return T
Right-to-left:
Require: c, d' = (d'_t d'_{t−1} · · · d'_1 d'_0)_2.
Ensure: c^{d'} mod n.
1: S ← 1, T ← c, U ← c^{−1} mod n
2: for i from 0 to t do
3:   if d'_i = 1 then
4:     S ← S · T mod n
5:   else if d'_i = −1 then
6:     S ← S · U mod n
7:   end if
8:   T ← T^2 mod n
9:   U ← U^2 mod n
10: end for
11: Return S
Algorithm 5 Pre-computation for k-ary modular exponentiation (left) and algorithm for left-to-right k-ary modular exponentiation (right)
Pre-computation:
Require: n, k, c.
Ensure: c^i mod n (i = 0, · · · , 2^k − 1).
1: c_0 ← 1, c_1 ← c
2: for i from 2 to 2^k − 1 do
3:   c_i ← c_{i−1} · c mod n
4: end for
5: Return c_i (i = 0, · · · , 2^k − 1)
Left-to-right k-ary exponentiation:
Require: n, c_i (i = 0, · · · , 2^k − 1), d = (d_{u−1} d_{u−2} · · · d_1 d_0)_{2^k}.
Ensure: c^d mod n.
1: T ← 1
2: for i from u − 1 down to 0 do
3:   T ← T^{2^k} mod n
4:   T ← T · c_{d_i} mod n
5: end for
6: Return T
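Both halves of Algorithm 5 are combined in the minimal Python sketch below; the function name and the default window size are illustrative. For d = 61 and any c, n it returns the same value as pow(c, 61, n).

    def modexp_kary(c, d, n, k=4):
        # pre-computation: table[i] = c^i mod n for i = 0 .. 2^k - 1
        table = [1] * (1 << k)
        for i in range(1, 1 << k):
            table[i] = (table[i - 1] * c) % n
        # split the exponent into base-2^k digits, least significant digit first
        digits = []
        while d:
            digits.append(d & ((1 << k) - 1))
            d >>= k
        t = 1
        for digit in reversed(digits):     # process the most significant digit first
            for _ in range(k):
                t = (t * t) % n            # T <- T^(2^k) mod n
            t = (t * table[digit]) % n
        return t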
We can further improve the k-ary method by allowing freedom in positioning the windows. This is the so-called sliding-window method [13]. This method needs less memory and fewer modular multiplications compared to the k-ary method. Another approach to reducing the number of modular multiplications is to apply the NAF method to a wider window size. The details will be discussed in Section 4.3.5.
4.3 Curve-Based Cryptography
The main operation in any curve-based primitive is point/divisor multiplication. Here, the arithmetic operations required for curve-based cryptography are introduced.
4.3.1 ECC over GF(p)
When E is an elliptic curve defined over GF(p), the typical equation is

E : y^2 = x^3 + ax + b        (4.1)

where a, b ∈ GF(p) and (4a^3 + 27b^2) mod p ≠ 0. The set of points on the elliptic curve, together with the point at infinity (denoted as O), can be seen as an additive Abelian group. The main operation, point multiplication, is performed by the double-and-add algorithm (or the binary algorithm), that is, using the point group operations. On the other hand, finite field operations in GF(p) such as addition, subtraction, multiplication, and inversion are required to perform the group operations. For an arbitrary point P on a curve E, the inverse of the point P = (x_1, y_1) is −P = (x_1, −y_1). The sum P + Q of the points P = (x_1, y_1) and Q = (x_2, y_2) (assuming that P, Q ≠ O and P ≠ ±Q) is a point R = (x_3, y_3) where
β = (y_2 − y_1)/(x_2 − x_1)
x_3 = β^2 − x_1 − x_2        (4.2)
y_3 = (x_1 − x_3)β − y_1

This is called point addition. For P = Q, we get the following point doubling formulae:

β = (3x_1^2 + a)/(2y_1)
x_3 = β^2 − 2x_1        (4.3)
y_3 = (x_1 − x_3)β − y_1
Similar to the left-to-right and right-to-left binary algorithms for modular exponentiation in Algorithm 1, point multiplication can be performed using Algorithm 6, where P is a point on the elliptic curve and k is a positive integer. The point at infinity O is the identity element for the elliptic curve operations.

Algorithm 6 Algorithms for left-to-right and right-to-left binary point multiplication
Left-to-right:
Require: a point P = (x, y), k = (k_{l−1} k_{l−2} · · · k_0)_2.
Ensure: kP.
1: Q ← O
2: for i from l − 1 down to 0 do
3:   Q ← 2Q
4:   if k_i = 1 then
5:     Q ← Q + P
6:   end if
7: end for
8: Return Q
Right-to-left:
Require: a point P = (x, y), k = (k_{l−1} k_{l−2} · · · k_0)_2.
Ensure: kP.
1: Q ← O, S ← P
2: for i from 0 to l − 1 do
3:   if k_i = 1 then
4:     Q ← Q + S
5:   end if
6:   S ← 2S
7: end for
8: Return Q
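A minimal Python sketch of the left-to-right variant of Algorithm 6 follows, using the affine formulas (4.2) and (4.3) directly: points are (x, y) tuples, None plays the role of O, the field inversion is done via Fermat's little theorem, and the curve parameters a and p are passed explicitly. The names are illustrative only.

    def point_add(P, Q, a, p):
        if P is None: return Q
        if Q is None: return P
        x1, y1 = P
        x2, y2 = Q
        if x1 == x2 and (y1 + y2) % p == 0:
            return None                                       # P + (-P) = O
        if P == Q:
            beta = (3 * x1 * x1 + a) * pow(2 * y1, p - 2, p) % p   # doubling slope
        else:
            beta = (y2 - y1) * pow(x2 - x1, p - 2, p) % p          # addition slope
        x3 = (beta * beta - x1 - x2) % p
        y3 = ((x1 - x3) * beta - y1) % p
        return (x3, y3)

    def point_mul(k, P, a, p):
        Q = None
        for bit in bin(k)[2:]:                 # scan k from the most significant bit
            Q = point_add(Q, Q, a, p)          # point doubling
            if bit == "1":
                Q = point_add(Q, P, a, p)      # point addition
        return Q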
As for RSA exponentiation, EC point multiplication can be parallelized with the left-to-right algorithm, as shown in Algorithm 7. Table 4.4 shows the computational steps for k = 61. There are many types of coordinates in which an elliptic curve may be represented. In Equations (4.2) and (4.3), affine coordinates are used, but the so-called projective coordinates have some implementation advantages. More precisely, by using projective coordinates, point addition and point doubling can be computed without modular inversions, except for a single one at the end of the point multiplication. Many types of projective coordinates have been proposed in the literature. In particular, a weighted projective representation (also referred to as Jacobian representation) is preferred in the sense of faster arithmetic on elliptic curves [1, 6]. In this representation a triplet (X, Y, Z) corresponds to the affine coordinates (X/Z^2, Y/Z^3) for Z ≠ 0.
Algorithm 7 Algorithm for parallelized left-to-right binary point multiplication (the Montgomery powering ladder [14])
Require: a point P, a non-negative integer k = (1 k_{l−2} · · · k_1 k_0)_2.
Ensure: kP.
1: Q ← P, S ← 2P
2: for i from l − 2 down to 0 do
3:   if k_i = 1 then
4:     Q ← Q + S, S ← 2S
5:   else
6:     S ← S + Q, Q ← 2Q
7:   end if
8: end for
9: Return Q

Table 4.4 Example of parallelized left-to-right point multiplication (the Montgomery powering ladder)

i      -    4    3    2     1     0
k_i         1    1    1     0     1
Q      P    3P   7P   15P   30P   61P
S      2P   4P   8P   16P   31P   62P
In this case, we have a weighted projective curve equation of the form

E : Y^2 = X^3 + aXZ^4 + bZ^6        (4.4)

Conversion from projective to affine coordinates costs 1 inversion and 4 multiplications, while the converse direction is trivial. The total cost for point addition is 1I + 3M in affine coordinates and 16M in projective coordinates (11M when using the special case Z_1 = 1, i.e., one point is given in affine coordinates and the other one in projective coordinates). Here, I and M denote modular inversion and modular multiplication, respectively. In the case of point doubling (with the special case a = p − 3), this relation is 1I + 4M in affine coordinates against 8M in projective coordinates. Thus, the choice of coordinates is determined by the ratio I : M. Therefore, multiplication in a finite field is the most important operation to focus on when working with projective coordinates. On the other hand, an extra datapath for modular inversion is indispensable for the affine coordinate representation, because one modular inversion has to be performed for every point addition or doubling. In this chapter, we use the weighted projective coordinates for an efficient implementation.
Point addition/doubling over GF(p). Here we assume that the two points to be added, i.e., P = (X_1, Y_1, Z_1) and Q = (X_2, Y_2, Z_2), are already transformed to the weighted projective coordinates (Jacobian representation), where (X, Y, Z) corresponds to the affine coordinates (X/Z^2, Y/Z^3). We now create a schedule for point addition and doubling using a single datapath. Therefore, the left-to-right binary algorithm shown in Algorithm 6 can be used, and the result point is stored as Q, i.e., Q = Q + P for point addition and Q = 2Q for point doubling.
Algorithm 8 Scheduling of point addition and doubling over GF(p), reformulated based on [1]
Point addition, general case:
Require: P = (X1, Y1, Z1), Q = (X2, Y2, Z2).
Ensure: Q = Q + P.
1: t1 = Z1 Z1
2: t2 = X2 t1
3: t3 = Z2 Z2
4: t4 = X1 t3 + t2
5: t2 = X1 t3 − t2
6: t5 = t1 Z1
7: t6 = Y2 t5
8: t1 = t3 Z2
9: t3 = t1 Y1 + Y2
10: Y2 = t1 Y1 − Y2
11: t5 = t2 t2
12: t1 = t4 t5
13: X2 = Y2 Y2 − t1
14: t4 = −2X2 + t1
15: t1 = t5 t2
16: t1 = t3 t1
17: t3 = t4 Y2 − t1
18: Y2 = t3/2
19: Z2 = t2 Z2
20: Z1 = Z1 Z2
21: Return Q
Point addition, mixed coordinates (Z1 = 1):
Require: P = (X1, Y1, 1), Q = (X2, Y2, Z2).
Ensure: Q = Q + P.
1: t3 = Z2 Z2
2: t4 = X1 t3 + X2
3: t2 = X1 t3 − X2
4: t1 = t3 Z2
5: t3 = t1 Y1 + Y2
6: Y2 = t1 Y1 − Y2
7: t5 = t2 t2
8: t1 = t4 t5
9: X2 = Y2 Y2 − t1
10: t4 = −2X2 + t1
11: t1 = t5 t2
12: t1 = t3 t1
13: t3 = t4 Y2 − t1
14: Y2 = t3/2
15: Z2 = t2 Z2
16: Return Q
Point doubling:
Require: Q = (X2, Y2, Z2).
Ensure: Q = 2Q.
1: t1 = X2 X2
2: t1 = 3t1
3: t2 = Z2 Z2
4: t2 = t2 t2
5: t2 = a t2 + t1
6: t1 = 2Y2
7: Z2 = Z2 t1
8: t3 = t1 t1
9: t4 = X2 t3
10: X2 = 2t4
11: X2 = t2 t2 − X2
12: t1 = t1 t3
13: t1 = t1 Y2
14: t3 = t4 − X2
15: Y2 = t2 t3 − t1
16: Return Q
In Algorithm 8, a possible schedule for point addition and doubling is given, including the mixed-addition case Z1 = 1. In our case both point addition and doubling consist of 15 steps when Z1 = 1. Here, each step uses only multiply-add/subtract operations in the underlying finite field, in order to have a constant operation time and a compact datapath. Therefore, the average computation cost for a point multiplication kP with an l-bit scalar k is estimated as (45l/2) M_{ab+c}, where M_{ab+c} denotes one multiply-add/subtract operation over GF(p). If we use an algorithm processing point addition and doubling in parallel (e.g., the right-to-left algorithm), the cost becomes 15l M_{ab+c}.
4.3.2 ECC over GF(2^m)
Here, we consider a finite field of characteristic 2, i.e., GF(2^m), and a non-supersingular elliptic curve E over GF(2^m) that is defined as the set of solutions (x, y) ∈ GF(2^m) × GF(2^m) of the equation

E : y^2 + xy = x^3 + ax^2 + b        (4.5)
where a, b ∈ GF(2^m), b ≠ 0, together with O. The inverse of the point P_1 = (x_1, y_1) is −P_1 = (x_1, x_1 + y_1). The points on the curve and the point at infinity O form an Abelian group. The sum P + Q of the points P = (x_1, y_1) and Q = (x_2, y_2) (P, Q ≠ O and P ≠ ±Q) is the point R = (x_3, y_3) where

β = (y_1 + y_2)/(x_1 + x_2)
x_3 = β^2 + β + x_1 + x_2 + a        (4.6)
y_3 = (x_1 + x_3)β + x_3 + y_1

This operation is called point addition. For P = Q, the point doubling formulae are

β = x_1 + y_1/x_1
x_3 = β^2 + β + a        (4.7)
y_3 = (x_1 + x_3)β + x_3 + y_1

We use weighted projective coordinates, where an affine point (x, y) is converted to a projective point (X, Y, Z) by computing x = X/Z^2, y = Y/Z^3. The projective curve equation corresponding to the affine equation shown in Equation (4.5) is given by

E : Y^2 + XYZ = X^3 + aX^2Z^2 + bZ^6        (4.8)
Point addition/doubling. Similar to the case over GF(p), point addition over GF(2^m) is computed as Q = Q + P (P ≠ Q). For point doubling we use Q = 2Q. The computation sequences for point addition and doubling on the projective curve are presented in Algorithm 9, including the special case Z_1 = 1 for point addition. Here, c denotes the fourth root of b, i.e., c = b^{2^{m−2}}. We assume this curve parameter is pre-computed. The computation sequence for point subtraction Q − P is obtained simply by using

t_1 = t_2(X_1 + Y_1) + Y_2        (4.9)

in place of step 4 of the point addition in Algorithm 9.
4.3.3 ECC over a Composite Field
For cryptographic security, it is typically recommended to use fields GF(2^m) where m is prime. As an example we consider the case m = 163. For ECC over composite fields, we should choose GF((2^83)^2) in order to have an equivalent level of security [12]. More precisely, we consider ECC over a field that is a quadratic extension of GF(2^83), so GF((2^83)^2) = GF(2^83)[y]/g(y) with deg(g) = 2.
Algorithm 9 Scheduling of point addition over GF(2^m), reformulated based on [1]
Point addition, general case:
Require: P = (X1, Y1, Z1), Q = (X2, Y2, Z2).
Ensure: Q = P + Q.
1: t6 = Z1 Z1
2: t3 = t6 X2
3: t2 = Z2 Z2
4: t4 = t2 X1 + t3
5: t1 = t6 Z1
6: t3 = t1 Y2
7: t2 = t2 Z2
8: t1 = t2 Y1 + t3
9: t5 = t4 Z1
10: t3 = t5 Y2
11: t3 = t1 X2 + t3
12: Z2 = t5 Z2
13: t5 = t1 (t1 + Z2)
14: Y2 = t4 t4
15: t2 = t4 Y2 + t5
16: X2 = a Z2
17: X2 = X2 Z2 + t2
18: t4 = Y2 t3
19: t4 = t4 t6
20: Y2 = X2 (t1 + Z2) + t4
21: Return Q
Point addition, mixed coordinates (Z1 = 1):
Require: P = (X1, Y1, 1), Q = (X2, Y2, Z2).
Ensure: Q = P + Q.
1: t2 = Z2 Z2
2: t4 = t2 X1 + X2
3: t2 = t2 Z2
4: t1 = t2 Y1 + Y2
5: t3 = t4 Y2
6: t3 = t1 X2 + t3
7: Z2 = t4 Z2
8: t5 = t1 (t1 + Z2)
9: Y2 = t4 t4
10: t2 = t4 Y2 + t5
11: X2 = a Z2
12: X2 = X2 Z2 + t2
13: t4 = Y2 t3
14: Y2 = X2 (t1 + Z2) + t4
15: Return Q
Point doubling:
Require: Q = (X2, Y2, Z2).
Ensure: Q = 2Q.
1: t1 = X2 X2
2: t2 = t1 t1
3: t4 = Y2 Z2 + t1
4: t3 = Z2 Z2
5: Z2 = X2 t3
6: t5 = c t3 + X2
7: t3 = t5 t5
8: X2 = t3 t3
9: t1 = X2 (Z2 + t4)
10: Y2 = t2 Z2 + t1
11: Return Q
In this way one can obtain a speedup and benefit even more from the parallelism. The reason is that in composite field notation each element is represented as c = c_1 t + c_0, where c_0, c_1 ∈ GF(2^83), and a multiplication in this field takes three modular multiplications and four modular additions in GF(2^83) [11]. In [2], the Weil descent attack is introduced against EC defined over binary fields of composite degree. This work puts some doubt on the security of composite field implementations of ECC in general. However, further investigations have shown that composite fields of degree 2 · m (i.e., an extension of degree two), where m is prime, remain secure against Weil descent attacks and their variants [12].
4.3.4 Hyperelliptic Curve Cryptography (HECC)
Another appealing candidate for PKC is HECC. Recently many software and hardware implementations of HECC have been described, while more theoretical work has shown HECC to be secure for curves with a small genus as well [17]. As already mentioned, ECC over composite fields allows one to work in a finite field whose bit lengths are shorter by a factor of 2 when compared to ECC. That means that, for a level of security equivalent to ECC over GF(2^163), we should choose GF(2^83). We obtain a similar result for HECC on a curve of genus 2.
Algorithm 10 Modular multiplication over GF((2^m)^2)
Require: irreducible polynomial P, A = A1 t + A0, B = B1 t + B0, where P, A0, A1, B0, B1 ∈ GF(2^m).
Ensure: AB mod P = C1 t + C0, where C0, C1 ∈ GF(2^m).
1: S ← A1 B1 mod P
2: T ← A0 B0 mod P
3: C0 ← (S + T) mod P
4: S ← (A1 + A0) mod P
5: T ← (B1 + B0) mod P
6: U ← S T mod P
7: C1 ← (U + C0) mod P
8: Return C1 t + C0
Algorithm 11 Modular addition over GF((2^m)^2)
Require: irreducible polynomial P, A = A1 t + A0, B = B1 t + B0, where P, A0, A1, B0, B1 ∈ GF(2^m).
Ensure: (A + B) mod P = C1 t + C0, where C0, C1 ∈ GF(2^m).
1: C0 ← (A0 + B0) mod P
2: C1 ← (A1 + B1) mod P
3: Return C1 t + C0
Nevertheless, the performance is still much slower than that of private-key cryptography such as AES [4, 5]. The only difference between ECC and HECC is the sequence of operations at the middle level. The sequence for HECC is more complex when compared with the ECC point operations; however, HECC uses shorter operands. One can also perform inversion with a chain of multiplications [7] and only provide hardware for finite field addition and multiplication.
4.3.5 Scalar Recoding
The exponent recoding method introduced in Algorithm 3 is also used for scalar recoding to accelerate EC point multiplication and HEC divisor multiplication. Going one step further, the NAF method with a wider window size is explained in this section. This method is especially suited to ECC because point subtraction can be computed with a sequence only slightly modified from that of point addition. Therefore, the negative integers introduced in the recoded scalar by the NAF methods (e.g., 1̄) can be handled without degrading the performance. Algorithm 12 is a generalized algorithm for generating a NAF representation with a w-bit window size, or width-w NAF, for a scalar k. Algorithm 3 used for exponent recoding can be considered the special case w = 2.
Let k = (111101)_2 as an example of scalar recoding with window size w = 3. Table 4.5 shows the detailed computational steps of the scalar recoding of k using Algorithm 12. The result indicates that the scalar is recoded as NAF_3(61) = (1 0 0 0 0 0 3̄)_2.
Algorithm 12 Algorithm for generating width-w NAF [3].
Require: window size w, positive integer k.
Ensure: a width-w NAF representation of k (NAFw(k)).
1: i ← 0
2: while k ≥ 1 do
3:   if k is odd then
4:     ki ← k mod 2^w, k ← k − ki   (with −2^(w−1) ≤ ki < 2^(w−1))
5:   else
6:     ki ← 0
7:   end if
8:   k ← k/2, i ← i + 1
9: end while
10: Return (ki−1 ki−2 · · · k1 k0)
Table 4.5 Example of scalar recoding with the scalar k = 61 and the window size w = 3

i               0    1    2    3    4    5    6    7
Input k         61   32   16   8    4    2    1    0
ki (NAF3(k))    3̄    0    0    0    0    0    1    -
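As an illustration only (not the chapter's hardware description), the following Python sketch implements the recoding loop of Algorithm 12 and reproduces Table 4.5 for k = 61 and w = 3.

```python
# Width-w NAF recoding of a positive integer k (a direct software rendering of Algorithm 12).
def naf_w(k, w=3):
    digits = []
    while k >= 1:
        if k & 1:
            ki = k % (1 << w)              # k mod 2^w ...
            if ki >= 1 << (w - 1):         # ... mapped into [-2^(w-1), 2^(w-1))
                ki -= 1 << w
            k -= ki
        else:
            ki = 0
        digits.append(ki)                  # digit k_i, generated LSB first
        k >>= 1
    return digits[::-1]                    # returned MSB first: (k_{i-1} ... k_0)

# naf_w(61, 3) == [1, 0, 0, 0, 0, 0, -3], i.e., NAF3(61) = (1 0 0 0 0 0 3̄)2 as in Table 4.5.
```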
Algorithm 13 Left-to-right point multiplication with NAFw (k).
Require: window size w, NAFw(k) = Σ_{i=0}^{l−1} ki 2^i, pre-computed points Pi = iP for i = 1, 3, 5, · · · , 2^(w−1) − 1.
Ensure: kP.
1: Q ← O
2: for i from l − 1 down to 0 do
3:   Q ← 2Q
4:   if ki > 0 then
5:     Q ← Q + Pki
6:   else if ki < 0 then
7:     Q ← Q − P−ki
8:   end if
9: end for
10: Return Q
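As a toy illustration of Algorithm 13, the sketch below lets plain integers stand in for EC points, so "point addition" is integer addition and kP is simply the integer k·P. It assumes the naf_w() helper defined in the previous sketch.

```python
# Left-to-right point multiplication driven by a width-w NAF (integers stand in for points).
def scalar_mult_naf(k, P, w=3):
    digits = naf_w(k, w)                              # (k_{l-1} ... k_0), MSB first
    pre = {i: i * P for i in range(1, 2 ** (w - 1), 2)}  # precompute P, 3P, 5P, ...
    Q = 0                                             # 0 plays the role of the point at infinity
    for ki in digits:
        Q = 2 * Q                                     # point doubling
        if ki > 0:
            Q = Q + pre[ki]                           # point addition
        elif ki < 0:
            Q = Q - pre[-ki]                          # point subtraction
    return Q

# scalar_mult_naf(61, 1) == 61
```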
Point multiplication with a scalar represented in width-w NAF can be computed by Algorithm 13. As can be seen in the algorithm, the pre-computed points are used iteratively in the for-loop (steps 5 and 7). However, in the case of the right-to-left and the parallelized left-to-right algorithms, we would need to perform point doublings for all pre-computed points in every iteration of the for-loop (see step 6 of the right-to-left method in Algorithm 6). The cost of these point doublings is crucial for w > 2. Therefore, the width-w NAF method is best suited to the left-to-right algorithm. The left-to-right algorithm first needs the MSB of a (recoded) scalar, although the result of scalar recoding is generated from the LSB as shown in Table 4.5. Therefore, point multiplication can start only after the scalar-recoding algorithm finishes. In this sense, it is beneficial to have an algorithm that recodes a scalar from the MSB. Although scalar recoding is not computationally intensive compared to
point multiplication, MSB-first (left-to-right) scalar recoding is beneficial because it can reduce the overhead cycles by recoding a scalar on the fly. Okeya et al. introduced a novel idea for MSB-first scalar recoding by using the MOF (mutual opposite form) [15]. More precisely, the method can generate the result of scalar recoding from the MSB by introducing the MOF representation of a given scalar k = (kt−1 · · · k1 k0)2. The MOF representation of k can be obtained simply by computing 2k − k digit-wise, which yields a signed-digit (SD) representation. The MOFw representation is then obtained by performing the left-to-right sliding-window method [3] with the support of a look-up table. Let k = (111101)2 as an example of MSB-first scalar recoding with the window size w = 3, i.e., MOF3(k). Table 4.6 summarizes the procedure of this recoding method based on Algorithm 14. We need a look-up table when converting a binary string in MOF representation into MOF3. In this example, TABLE3(100) = 100 and TABLE3(1̄11̄) = 003̄. As a result, MOF3(61) = (1 0 0 0 0 0 3̄)2 is obtained, which is the same result as NAF3(61) obtained in Table 4.5.
Algorithm 14 Algorithm for generating MSB-first scalar recoding with window size w (width-w MOF or MOFw(k)) [15].
Require: window size w, positive integer k = (kt−1 · · · k1 k0)2, a look-up table TABLEw().
Ensure: a width-w MOF representation of k (MOFw(k)).
1: k−1 ← 0, kt ← 0, i ← t
2: while i ≥ w − 1 do
3:   if ki = ki−1 then
4:     di ← 0, di′ ← 0, i ← i − 1
5:   else
6:     for j from i down to i − w + 1 do
7:       dj ← kj−1 − kj
8:     end for
9:     (di′, di−1′, · · · , di−w+1′) ← TABLEw(di, di−1, · · · , di−w+1)
10:    i ← i − w
11:  end if
12: end while
13: if i ≥ 0 then
14:   (di′, di−1′, · · · , d0′) ← TABLEi+1(di, di−1, · · · , d0)
15: end if
16: Return (dt′, dt−1′, · · · , d0′)
Table 4.6 Example of MSB-first scalar recoding with the window size w = 3

i                        6    5    4    3    2    1    0
Inputs   ki−1            1    1    1    1    0    1    0
         ki              0    1    1    1    1    0    1
Outputs  MOF(k): di      1    0    0    0    1̄    1    1̄
         MOF3(k): di′    1    0    0    0    0    0    3̄
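A minimal sketch (plain MOF only, not the full windowed Algorithm 14) shows how the MOF digits di = ki−1 − ki of 2k − k can be produced from the MSB downwards; for k = 61 it reproduces the MOF(k) row of Table 4.6.

```python
# Plain MOF digits of a positive integer k, listed from the MSB (i = t) down to i = 0.
def mof(k):
    bits = [int(b) for b in bin(k)[2:]]        # k_{t-1} ... k_0, MSB first
    bits = [0] + bits + [0]                    # prepend k_t = 0, append k_{-1} = 0
    # d_i = k_{i-1} - k_i, i.e., the digit-wise difference 2k - k
    return [bits[j + 1] - bits[j] for j in range(len(bits) - 1)]

# mof(61) == [1, 0, 0, 0, -1, 1, -1], i.e., (1 0 0 0 1̄ 1 1̄), and 64 - 4 + 2 - 1 = 61.
```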
4.4 Recent Trends As closing remarks of this chapter, we briefly explain another property of an algorithm, i.e., algorithm-level countermeasures against timing analysis and power analysis attacks [9, 10]. Algorithm 2, one of the algorithms for RSA exponentiation, always computes one modular multiplication and one modular squaring in the for-loop regardless of the value of the key bit di. Therefore, the computation time is independent of the value of the exponent d and the same operation patterns are repeated, which means that it is hard to extract the bits of the exponent, i.e., di, through side-channel attacks on such an RSA system. Algorithm 1 can also be made resistant against timing analysis and simple power analysis attacks by slightly modifying the sequence, e.g., by introducing a dummy modular multiplication if di = 0. The same holds true for EC point multiplication as shown in Algorithms 6 and 7. In 2007, Joye proposed a new algorithm for EC point multiplication (and for RSA exponentiation) that uses only a point addition sequence [8]. The proposed algorithm is shown in Algorithm 15 and the computational sequence for the case of k = 61 is described in Table 4.7. As can be seen from Table 4.7, memory space for storing three intermediate points is needed. This is relatively expensive compared to the introduced binary methods (see Algorithms 6 and 7). However, there is no need to implement the point doubling sequence, which may result in area-efficient and secure implementations. Further investigation of this new add-only algorithm is needed.
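The dummy-multiplication idea can be illustrated with a short Python sketch. This is a generic textbook "multiply-always" countermeasure, not the chapter's hardware design: every iteration performs a squaring and a multiplication, and the multiplication result is simply discarded when di = 0, so the operation pattern no longer depends on the exponent bits.

```python
# Left-to-right modular exponentiation with a dummy multiplication for di = 0.
def modexp_multiply_always(x, d, n):
    r = 1
    for bit in bin(d)[2:]:          # scan exponent bits MSB to LSB
        r = (r * r) % n             # always square
        t = (r * x) % n             # always multiply ...
        if bit == '1':
            r = t                   # ... but keep the product only if d_i = 1
    return r

# modexp_multiply_always(5, 61, 97) == pow(5, 61, 97)
```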
Algorithm 15 Algorithm for add-only right-to-left binary point multiplication [8].
Require: a point P, a non-negative integer k = (kl−1 kl−2 · · · k1 k0)2.
Ensure: kP.
1: Q0 ← P, Q1 ← P, S ← 2P
2: for i from 1 to l − 1 do
3:   Q1−ki ← Q1−ki + S
4:   S ← Q0 + Q1
5: end for
6: Qk0 ← Qk0 − P
7: Return Q0
Table 4.7 Example of add-only right-to-left point multiplication (Algorithm 15) with k = 61

i        -     1     2     3     4     5     0
ki       -     0     1     1     1     1     1
Q0       P     P     5P    13P   29P   61P   61P
Q1       P     3P    3P    3P    3P    3P    2P
S        2P    4P    8P    16P   32P   64P   64P
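A toy run of Algorithm 15 can again be simulated with integers standing in for EC points (so point addition is integer addition); the sketch below reproduces the columns of Table 4.7 for k = 61.

```python
# Add-only right-to-left binary point multiplication, simulated over the integers.
def add_only_mult(k, P):
    bits = [int(b) for b in bin(k)[2:]][::-1]    # k_0 ... k_{l-1}, LSB first
    Q = [P, P]                                   # Q_0, Q_1
    S = 2 * P
    for i in range(1, len(bits)):
        Q[1 - bits[i]] += S                      # only point additions are used
        S = Q[0] + Q[1]
    Q[bits[0]] -= P                              # final correction with k_0
    return Q[0]

# add_only_mult(61, 1) == 61; the intermediate values of Q0, Q1, S match Table 4.7.
```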
4.5 Conclusions This chapter described algorithms for RSA exponentiation, EC point multiplication, and HEC divisor multiplication. We mainly discussed the functionality and the trade-off between cost and performance of the different algorithms. These algorithms are used in Section 4.1, which introduces compact public-key implementations for RFID (radio frequency identification) and sensor nodes. As recent topics in arithmetic for PKC, we also introduced algorithm-level countermeasures against timing analysis and power analysis attacks (refer to Section 2.1 for side-channel attacks).
References 1. I. Blake, G. Seroussi, and N. P. Smart. Elliptic Curves in Cryptography. London Mathematical Society Lecture Note Series 265, Cambridge University Press, 1999. 2. G. Frey. How to disguise an elliptic curve (Weil descent). Presentation given at the 2nd Elliptic Curve Cryptography (ECC’98), 1998. 3. D. Hankerson, A. Menezes, and S. Vanstone. Guide to Elliptic Curves Cryptography. Springer-Verlag, New York, 2004. 4. A. Hodjat and I. Verbauwhede. Area-throughput trade-offs for fully pipelined 30 to 70 Gbits/s AES processors. IEEE Transactions on Computers, 55(4):366–372, 2006. 5. D. Hwang, K. Tiri, A. Hodjat, B.-C. Lai, S. Yang, P. Schaumont, and I. Verbauwhede. AESbased security coprocessor IC in 0.18-μm CMOS with resistance to differential power analysis side-channel attacks. IEEE Journal of Solid-State Circuits , 41(4):781–792, 2006. 6. IEEE P1363. Standard specifications for public key cryptography, November 2000. http://grouper.ieee.org/groups/1363/ 7. T. Itoh and S. Tsujii. Effective recursive algorithm for computing multiplicative inverses in GF(2m ). Electronics Letters, 24(6):334–335, 1988. 8. M. Joye. Highly regular right-to-left algorithms for scalar multiplication. In P. Paillier and I. Verbauwhede, editors, Proceedings of 9th International Workshop on Cryptographic Hardware in Embedded Systems (CHES’07), number 4727 in Lecture Notes in Computer Science, pp. 135–147, Springer-Verlag, New York, 2007. 9. P. Kocher. Timing attacks on implementations of Diffie-Hellman, RSA, DSS and other systems. In N. Koblitz, editor, Advances in Cryptology – Proceedings of CRYPTO’96, number 1109 in Lecture Notes in Computer Science, pp. 104–113, Springer-Verlag, New York, 1996. 10. P. Kocher, J. Jaffe, and B. Jun. Differential power analysis. In M. Wiener, editor, Advances in Cryptology – Proceedings of CRYPTO’99, number 1666 in Lecture Notes in Computer Science, pp. 388–397, Springer-Verlag, New York, 1999. 11. R. Lidl and H. Niederreiter. Finite fields, volume 20 of Encyclopedia of Mathematics and its Applications. Cambridge University Press, Cambridge, second edition, 2000. 12. M. Maurer, A. Menezes, and E. Teske. Analysis of the GHS Weil descent attack on the ECDLP over characteristic two finite fields of composite degree. In C. P. Rangan and C. Ding, editors, Proceedings 2nd International Conference on Cryptology in India (INDOCRYPT’01), number 2247 in Lecture Notes in Computer Science, pp. 195–213, Springer-Verlag, New York, 2001. 13. A. Menezes, P. van Oorschot, and S. Vanstone. Handbook of Applied Cryptography. CRC Press, Boca Raton, FL 1997. 14. P. Montgomery. Speeding the pollard and elliptic curve methods of factorization. Mathematics of Computation, 48(177):243–264, 1987. 15. K. Okeya, K. Schmidt-Samoa, C. Spahn, and T. Takagi. Signed binary representations revisited. In M. K. Franklin, editor, Advances in Cryptology – Proceedings of CRYPTO’04, number 3152 in Lecture Notes in Computer Science, pp. 123–139, Springer, 2004.
16. G. Reitwiesner. Binary arithmetic. Advances in Computers, 1:231–308, 1960.
17. N. Thériault. Index calculus attack for hyperelliptic curves of small genus. In C. S. Laih, editor, Advances in Cryptology – Proceedings of ASIACRYPT'03, number 2894 in Lecture Notes in Computer Science, pp. 75–92, Springer-Verlag, New York, 2003.
Chapter 5
Hardware Design for Hash Functions Yong Ki Lee, Miroslav Knežević, and Ingrid M.R. Verbauwhede
5.1 Introduction Cryptographic hash algorithms are one of the most important primitives in security systems. They are most commonly used for digital signature algorithms [1], message authentication, and as a building block for other cryptographic primitives such as hash-based block ciphers (Bear, Lion [8], and Shacal [22]), stream ciphers, and pseudo-random number generators. Hash algorithms take input strings M of arbitrary length and translate them to short fixed-length output strings, the so-called message digests Hash(M). A typical example of hash-based message authentication is protecting the authenticity of the short hash result instead of protecting the authenticity of the whole message. Similarly, in digital signatures a signing algorithm is always applied to the hash result rather than to the original message. This provides both performance and security benefits. Hash algorithms can also be used to compare two values without revealing them. A typical example of this application is a password authentication mechanism [39]. As hash algorithms are widely used in many security applications, it is very important that they fulfill certain security properties, which can be summarized as follows:
1. Preimage resistance: It must be hard to find any preimage for a given hash output, i.e., given a hash output H, it must be hard to find M such that H = Hash(M).
2. Second preimage resistance: It must be hard to find another preimage for a given input, i.e., given M0 and Hash(M0), it must be hard to find M1 ≠ M0 such that Hash(M0) = Hash(M1).
3. Collision resistance: It must be hard to find two different inputs with the same hash output, i.e., it must be hard to find M0 and M1 such that Hash(M0) = Hash(M1).
Y.K. Lee (B) University of California, Los Angeles, CA, USA; Electrical Engineering, 420 Westwood Plaza, Los Angeles, CA 90095-1594 USA e-mail:
[email protected]
An algorithm that satisfies the first two properties is called a one-way hash algorithm. If all three properties are met, the hash algorithm is considered collision resistant. Finding collisions for a specific hash algorithm is the most common way of attacking it. There are a few different types of hash algorithms described in the literature. They are based on block ciphers, modular arithmetic, cellular automata, the knapsack and lattice problems, algebraic matrices, etc. The most commonly used hash algorithms, known as dedicated hash algorithms, are especially designed for hashing and are not provably secure. The biggest class of these algorithms is based on the design principles of the MD4 family [40]. In this chapter we show how retiming and unfolding, well-known techniques used in digital signal processing, can be applied as a design methodology for very fast and efficient hardware implementations of MD-based hash algorithms.
5.2 Popular Hash Algorithms and Their Security Considerations The design philosophy of the most commonly used hash algorithms, such as MD5, the SHA family, and RIPEMD, is based on the design principles of the MD4 family. In this section, we give a short overview and provide historical facts about existing attacks on these algorithms. MD4 is a 128-bit cryptographic hash algorithm introduced by Ron Rivest in 1990 [40]. The MD4 algorithm is an iterative algorithm composed of three rounds, where each round has 16 hash operations; MD4 therefore has 48 iterations. In each iteration (hash operation), intermediate results are produced and used for the next iteration. A hash operation is a combination of arithmetic additions, circular shifts, and some Boolean functions. All the operations are based on 32-bit words. The differences among the MD4 family members lie in the word size, the number of iterations, and the combinations of arithmetic additions and Boolean functions. After MD4 was proposed, several other hash algorithms were constructed based on the same design principles: a 256-bit extension of MD4 [40], MD5 [41], HAVAL [50], RIPEMD [3], RIPEMD-160 [17], SHA0 [5], SHA1 [6], SHA-256, SHA-384, SHA-512, SHA-224 [7], etc. The first attack on MD4 was published already in 1991 by den Boer and Bosselaers [14]. The attack was performed on a reduced version of MD4 (two out of three rounds). Additionally, in November 1994, Vaudenay showed that the fact that the internal functions of MD4 are not multipermutations allows one to mount an attack in which the last round is omitted [43]. In the fall of 1995, Dobbertin found collisions for all three rounds of MD4 [16]. A few years after Rivest designed the strengthened version MD5, den Boer and Bosselaers [9] showed that the compression function of MD5 is not collision resistant. At the beginning of 1996, Dobbertin also found a free-start collision of MD5, in which the initial value of the hash algorithm is replaced by a non-standard value that makes the attack possible [15].
Finally, in the rump session of Crypto 2004, it was announced that collisions for MD4, MD5, HAVAL-128, and RIPEMD had been found. In 2005 Wang et al. published several cryptanalytic articles [48, 46, 49, 47] showing that the differential attack can find a collision in MD5 in less than an hour, while the same attack applied to MD4 succeeds in less than a fraction of a second. The first version of the SHA family, known as SHA0, was introduced by the American National Institute of Standards and Technology (NIST) in 1993 [5]. This standard is also based on the design principles of the MD4 family. One year after proposing SHA0, NIST discovered a certification weakness in the existing algorithm. By introducing a minor change it proposed the new secure hash standard known as SHA1 [6]. The message digest size for both algorithms is 160 bits. The first attack on SHA0 was published in 1998 by Chabaud and Joux [10] and was probably similar to the classified attack developed earlier (the attack that led to the upgrade to SHA1). This attack shows that a collision in SHA0 can be found after 2^61 evaluations of the compression function. According to the birthday paradox, a brute force attack would require 2^80 operations on average. In August 2004, Joux et al. first showed a full collision on SHA0 requiring a complexity of 2^51 computations [26]. Finally, in 2005 Wang, Yin, and Yu announced a full collision in SHA0 in just 2^39 hash operations [49] and reported that a collision in SHA1 can be found with a complexity of less than 2^69 computations [47]. The following generation of SHA algorithms, known as the SHA2 family, was introduced in 2000 and adopted as an ISO standard in 2003 [4]. All three hash algorithms (SHA-256, SHA-384, and SHA-512) have a much larger message digest size (256, 384, and 512 bits, respectively). The youngest member of this family is SHA-224, introduced in 2004 as a Change Notice to FIPS 180-2 [7]. There are only a few security evaluations of the SHA2 algorithms so far. The first security analysis was in 2003 by Gilbert and Handschuh [21], and it showed that neither Chabaud and Joux's attack nor Dobbertin-style attacks apply to these algorithms. However, they showed that slightly simplified versions of the SHA2 family are surprisingly weak. In the same year, Hawkes and Rose announced that second preimage attacks on SHA-256 are much easier than expected [23]. Although pointing to possible weaknesses of the SHA2 family, these analyses have not led to actual attacks so far. Cryptanalysis of step-reduced SHA2 can be found in [24]. In [44] van Oorschot and Wiener showed that in 1994 a brute force collision search for a 128-bit hash algorithm could be done in less than a month with a $10 million investment. Nowadays, according to Moore's law, the same attack could be performed in less than 2 hours. As a countermeasure to this attack the size of the hash result has to be at least 160 bits. RIPEMD-160 is a hash algorithm with a message digest of 160 bits and was designed by Dobbertin, Bosselaers, and Preneel in 1996 [18]. The intention was to make a stronger hash algorithm and replace existing 128-bit algorithms such as MD4, MD5, and RIPEMD. To the best of our knowledge the only study concerning the security of RIPEMD-160 so far was published by Rijmen et al. [34]. In this analysis the authors extend existing approaches using recent results in the cryptanalysis of hash algorithms. They show that methods successfully used to attack SHA1 are not applicable to full RIPEMD-160.
Additionally, they use analytical methods to find a collision in a three-round variant of RIPEMD-160. To the best of our knowledge no attack has been found on the original RIPEMD-160 algorithm as of 2008. As a conclusion of this section we would like to point out that hash algorithms such as SHA0, SHA1, MD4, and MD5 are not considered to be secure anymore. As a replacement for these algorithms we would recommend the use of RIPEMD-160 or the SHA2 family. Finally, in response to the recent attacks on the MD-based hash functions, NIST launched a worldwide competition for the development of a new hash function, the so-called SHA3 standard. A large number of hash functions advanced to the first round, several of which have already been broken. Recently, NIST selected the second round candidates, narrowing the choice down to only 14 hash proposals. The final five candidates are to be selected in the summer of 2010 and the new hash standard will be announced in 2012.
5.3 Common Techniques Used for Efficient Hardware Implementation of MD4-Based Hash Algorithms Besides the security properties of hash algorithms, a commonly required property is high throughput. This becomes more critical as the amount of data to be processed increases dramatically every year. Therefore, designing a high-throughput architecture for a given hash algorithm is one of the most important issues for hardware designers. There are several techniques to increase the throughput of MD4-based hash algorithm implementations. Because the MD4-based hash algorithms share the same design principles, most of the techniques used in one algorithm can be used for the others. The most commonly used techniques are pipelining, loop unrolling, and the use of carry save adders (CSA): pipelining techniques reduce critical path delays by properly positioning registers, with applications in [12, 13, 32]; unrolling techniques improve throughput by performing several iterations in a single cycle [35, 33, 11, 31]; CSA techniques reduce the arithmetic addition delays of two or more consecutive additions [12, 13, 32, 35, 31]. Many of the published papers combine multiple techniques to achieve a higher throughput. SHA1 is implemented in [35, 31, 36, 20, 42, 45], SHA2 in [12, 13, 33, 11, 31, 42], MD5 in [20, 42, 45, 37, 25], and RIPEMD-160 in [20, 42, 37, 27]. Despite numerous proposals for high-throughput hash implementations, delay bound analysis had been neglected until recently and architecture designs were mostly done by intuition. For example, in [12], the authors present a design that achieves the iteration bound, though they do not claim optimality. In fact, their design is the last revision of several other suboptimal attempts [13]. The iteration bound analysis, which defines the mathematical upper limit of throughput at the microarchitecture level, was only recently introduced for SHA1 and SHA2 in [29, 30].
5.4 Throughput Optimal Architecture of SHA1 The iteration bound analysis starts from drawing a DFG (data flow graph) corresponding to a given algorithm. After analyzing the iteration bound, we apply transformation techniques such as the retiming transformation and the unfolding transformation, which are comparable to pipelining and unrolling, respectively. The iteration bound analysis and the transformations are publicly known and proven techniques in the signal processing area [38]. We adopt most of the notation and definitions from there and formalize a design methodology for MD4-based hash algorithms. Related work can be found in [29, 30].
5.4.1 The SHA1 Hash Algorithm and Its DFG SHA1 is the most widely used hash algorithm as of 2008. It produces an output of 160-bit length from an arbitrary input of size less than 2^64 bits and requires 80 iterations to digest one message block of 512 bits. The mathematical expression of SHA1 is described in Fig. 5.1, where ROTLk represents a k-bit circular left shift and Kt is a constant value depending on the iteration number t. Mt(i) is the tth 32-bit word of the ith message block.

(a) Non-linear function:
Ft(x, y, z) = (x ∧ y) ∨ ((¬x) ∧ z)            for 0 ≤ t ≤ 19
Ft(x, y, z) = x ⊕ y ⊕ z                        for 20 ≤ t ≤ 39
Ft(x, y, z) = (x ∧ y) ∨ (x ∧ z) ∨ (y ∧ z)      for 40 ≤ t ≤ 59
Ft(x, y, z) = x ⊕ y ⊕ z                        for 60 ≤ t ≤ 79

(b) Expander computation:
Wt = Mt(i)                                     for 0 ≤ t ≤ 15
Wt = ROTL1(Wt−3 ⊕ Wt−8 ⊕ Wt−14 ⊕ Wt−16)        for 16 ≤ t ≤ 79

(c) Compressor computation:
TEMPt = ROTL5(At) + Ft(Bt, Ct, Dt) + Et + Wt + Kt
Et+1 = Dt
Dt+1 = Ct
Ct+1 = ROTL30(Bt)
Bt+1 = At
At+1 = TEMPt

Fig. 5.1 SHA1 hash computation
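For readers who prefer executable pseudocode, the following single-block Python sketch follows Fig. 5.1 directly (expander then compressor). It handles only messages that fit in one padded 512-bit block and is meant to clarify the recurrences, not to model the hardware architecture discussed below.

```python
# Minimal single-block SHA1 sketch following Fig. 5.1 (illustrative, not optimized).
def rotl(x, n):
    return ((x << n) | (x >> (32 - n))) & 0xFFFFFFFF

def sha1_one_block(msg: bytes) -> str:
    assert len(msg) <= 55, "sketch handles a single 512-bit block only"
    # Padding: append 0x80, zero bytes, and the 64-bit message length.
    block = msg + b"\x80" + b"\x00" * (55 - len(msg)) + (8 * len(msg)).to_bytes(8, "big")
    W = [int.from_bytes(block[4 * t:4 * t + 4], "big") for t in range(16)]
    for t in range(16, 80):                              # expander (Fig. 5.1b)
        W.append(rotl(W[t - 3] ^ W[t - 8] ^ W[t - 14] ^ W[t - 16], 1))
    H = [0x67452301, 0xEFCDAB89, 0x98BADCFE, 0x10325476, 0xC3D2E1F0]
    A, B, C, D, E = H
    for t in range(80):                                  # compressor (Fig. 5.1c)
        if t < 20:   F, K = (B & C) | (~B & D), 0x5A827999
        elif t < 40: F, K = B ^ C ^ D, 0x6ED9EBA1
        elif t < 60: F, K = (B & C) | (B & D) | (C & D), 0x8F1BBCDC
        else:        F, K = B ^ C ^ D, 0xCA62C1D6
        TEMP = (rotl(A, 5) + F + E + W[t] + K) & 0xFFFFFFFF
        A, B, C, D, E = TEMP, A, rotl(B, 30), C, D
    H = [(h + v) & 0xFFFFFFFF for h, v in zip(H, [A, B, C, D, E])]
    return "".join(f"{h:08x}" for h in H)

# e.g., sha1_one_block(b"abc") matches hashlib.sha1(b"abc").hexdigest()
```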
In order to perform the iteration bound analysis and the transformation techniques, we need to convert Fig. 5.1c into a DFG. Deriving a DFG can be done straightforwardly as shown in Fig. 5.2, where S5 and S30 represent ROTL5 and ROTL30, respectively. The dashed lines are for driving outputs at the tth iteration, and the outputs of the last iteration are used to produce the hash output for one message block. The solid lines indicate the data flow throughout the DFG.
Fig. 5.2 SHA1 data flow graph

Boxes A, B, C, D, and E represent registers, which provide their outputs at cycle t, and circles represent functional nodes that perform the indicated operations. A D on an edge represents an algorithmic delay, i.e., a delay that cannot be removed from the system. Next to algorithmic delays, nodes also have functional delays. The functional delays are the propagation delays needed to perform the given operations. We express the functional delays of + and Ft as Prop(+) and Prop(Ft), respectively. The iteration bound analysis assumes that every functional operation is atomic. This means that a functional operation cannot be split or merged into other functional operations. The meaning of the bold lines will be explained in the next section.
5.4.2 Iteration Bound Analysis The iteration bound analysis defines the minimum achievable delay bound of an iterative algorithm and hence the maximum achievable throughput. This will not only give designers a goal but also prevent futile efforts to achieve better than the theoretical optimum. If tl is the loop calculation time and wl is the number of algorithmic delays in the lth loop, the lth loop bound is defined as tl/wl. The iteration bound is the maximum loop bound:

T∞ = max_{l∈L} (tl / wl)    (5.1)
where L is the set of all possible loops. In Fig. 5.2, the loop with the maximum loop bound is the one with bold lines and shaded nodes. Shifts by a fixed number of bit positions are negligible in hardware implementations, and therefore we ignore the delays of shifts. In Fig. 5.1a, it can be seen that the worst case of Prop(Ft) is the three-input bitwise exclusive OR. This is the same as the critical path of a CSA (this fact will be used in the following section
with more explanation about CSA) and is definitely less than the critical path delay of a 32-bit addition. So, we can assume that Prop(Ft) ≈ Prop(CSA) < Prop(+). Since in the marked loop the loop calculation time is 2 × Prop(+) + Prop(Ft) and the number of algorithmic delays is 2, the iteration bound of the SHA1 hash algorithm can be defined as follows:

T∞^SHA1 = max_{l∈L} (tl / wl) = (2 × Prop(+) + Prop(Ft)) / 2 = Prop(+) + Prop(Ft)/2    (5.2)
Note that the order of the four adders in SHA1 does not make a difference in the mathematical result, which means that there are several different ways to represent a SHA1 DFG. For example, (a + b) + c and (b + c) + a have different DFGs though they are mathematically equivalent. When a DFG is drawn, the DFG with the minimum iteration bound must be chosen. The DFG in Fig. 5.2 is one of the DFGs with the minimum iteration bound. The critical path delay is defined as the maximum calculation delay between any two consecutive algorithmic delays, i.e., Ds. In Fig. 5.2, the critical path delay is 4 × Prop(+), which is larger than the iteration bound. In order to reduce the critical path delay to the iteration bound, we use the retiming and unfolding transformations.
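The iteration bound of Equation (5.1) is easy to evaluate once every loop is summarized by its computation time tl and its number of algorithmic delays wl. The tiny Python sketch below does exactly that; the Prop() values are illustrative unit delays assumed only for this example (Prop(+) = 10, Prop(Ft) = 4 arbitrary gate-delay units), not figures from the chapter.

```python
# Iteration bound of Equation (5.1): T_inf = max over loops of t_l / w_l.
PROP_ADD, PROP_FT = 10, 4      # assumed unit delays for illustration only

def iteration_bound(loops):
    # loops: list of (t_l, w_l) pairs
    return max(t / w for t, w in loops)

# The marked SHA1 loop has t_l = 2*Prop(+) + Prop(Ft) and w_l = 2 algorithmic delays:
sha1_marked_loop = (2 * PROP_ADD + PROP_FT, 2)
# iteration_bound([sha1_marked_loop]) == PROP_ADD + PROP_FT / 2, i.e., Prop(+) + Prop(Ft)/2.
```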
5.4.3 Iteration Bound Analysis with Carry Save Adders In the iteration bound analysis, we assume that each functional node in a DFG cannot be merged or split into other operations. Therefore, in order to use a CSA (carry save adder), we have to draw another DFG. A CSA produces two values (carry and sum) from three input operands. Since a CSA has a small area and propagation delay compared to a throughput-optimized adder, it is commonly used when three or more operands are summed. The SHA1 DFG with CSA is drawn in Fig. 5.3. Since Prop(Ft) ≈ Prop(CSA) < Prop(+) as shown in the previous section, the loop with the maximum loop bound is the one with bold lines and shaded nodes in Fig. 5.3 and its iteration bound is given in Equation (5.3):

T∞^SHA1(CSA) = Prop(+) + Prop(CSA) ≈ Prop(+) + Prop(Ft)    (5.3)

Since T∞^SHA1 < T∞^SHA1(CSA), the use of CSA does not help to reduce the iteration bound. The critical path delay in this case is Prop(+) + 3 × Prop(CSA), which is also larger than T∞^SHA1(CSA).
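The 3:2 compression performed by a CSA can be illustrated with a short sketch (a generic carry-save adder, not a circuit from the chapter): three operands are reduced to a sum word and a carry word whose final combination still needs one ordinary adder, which is why a CSA stage costs roughly the delay of Ft (three bitwise operations) rather than a full 32-bit addition.

```python
# 32-bit carry-save adder (3:2 compressor).
MASK = 0xFFFFFFFF

def csa(a, b, c):
    s = a ^ b ^ c                               # bitwise sum, no carry propagation
    k = ((a & b) | (a & c) | (b & c)) << 1      # carries shifted to the next bit position
    return s & MASK, k & MASK

# (s + k) mod 2^32 equals (a + b + c) mod 2^32, e.g.:
# s, k = csa(0x12345678, 0x9ABCDEF0, 0x0F0F0F0F)
# assert (s + k) & MASK == (0x12345678 + 0x9ABCDEF0 + 0x0F0F0F0F) & MASK
```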
Fig. 5.3 SHA1 Data flow graph with CSA
5.4.4 Retiming Transformation The attainable minimum critical path delay with the retiming transformation (not using the unfolding transformation) is given in the following Equations (5.4) and (5.5):

⌈T∞^SHA1⌉ = ⌈(2 × Prop(+) + Prop(Ft)) / 2⌉ = Prop(+) + Prop(Ft)    (5.4)
⌈T∞^SHA1(CSA)⌉ = Prop(+) + Prop(CSA) ≈ Prop(+) + Prop(Ft)    (5.5)
Assuming that a functional node cannot be split into multiple parts, ⌈·⌉ denotes the largest part when the iteration bound is distributed as evenly as possible into N parts, where N is the number of algorithmic delays in the loop, i.e., the denominator. The values of N are 2 and 1 in T∞^SHA1 and T∞^SHA1(CSA), respectively. In the case of T∞^SHA1, since the iteration bound 2 × Prop(+) + Prop(Ft) can be partitioned into one delay of Prop(+) and another delay of Prop(+) + Prop(Ft), the attainable critical path delay with the retiming transformation is Prop(+) + Prop(Ft). The retiming transformation modifies a DFG by moving algorithmic delays, i.e., Ds, through the functional nodes in the graph. Delays on outgoing edges can be replaced with delays from incoming edges and vice versa. Note that the outgoing edges and the incoming edges must each be treated as a set, independently of the number of outgoing or incoming edges. Figure 5.4 shows the retiming transformation for each of the two cases. Even though applying the CSA technique increases the iteration bound, the critical path delays of the two cases are similar if there is no unfolding transformation. The time indices of F, K, and W are changed due to the retiming transformation. The shaded nodes represent the critical path of each case, namely A → B → Ft−1 → + and A → S5 → CSA → +. Note that there is no propagation delay on A, B, and S5. Though the two critical paths in Fig. 5.4 are similar, in practice (b) is preferred since a CSA has a smaller area than a throughput-optimized adder.
Fig. 5.4 Retiming transformation of SHA1
Due to the retiming transformation, some of the square nodes are no longer paired with algorithmic delays. Therefore, care must be taken to properly initialize the registers and extract the final result; this will be explained in the implementation section (Section 5.7).
5.4.5 Unfolding Transformation The DFG of SHA1 with CSA (Fig. 5.4b) achieves its iteration bound by applying the retiming transformations, but the DFG without CSA (Fig. 5.4a) does not. Note that since T∞^SHA1 < T∞^SHA1(CSA), our goal is to achieve T∞^SHA1. In order to achieve the iteration bound of T∞^SHA1, i.e., of Fig. 5.4a, we apply the unfolding transformation. The unfolding transformation improves performance by calculating several iterations in a single cycle. The minimally required unfolding factor is the denominator of the iteration bound. This fact can be inferred by noting that the difference between Equations (5.2) and (5.4) is caused by the uncanceled denominator of the iteration bound. In SHA1, the required unfolding factor is two.
Fig. 5.5 Unfolding transformation of SHA1
For the unfolding transformation, we expand the equations in Fig. 5.1c into Equation (5.6). Now the register values at time index t + 2 are expressed using only registers at time index t. Note that the functional nodes are doubled due to the unfolding transformation. The resulting DFG is given in Fig. 5.5. In the indexes of F, K, and W, t is also replaced by 2t due to the unfolding factor of two.

TEMPt = S5(At) + Ft(Bt, Ct, Dt) + Et + Wt + Kt
TEMPt+1 = S5(At+1) + Ft+1(Bt+1, Ct+1, Dt+1) + Et+1 + Wt+1 + Kt+1
        = S5(TEMPt) + Ft+1(At, S30(Bt), Ct) + Dt + Wt+1 + Kt+1
Et+2 = Dt+1 = Ct
Dt+2 = Ct+1 = S30(Bt)
Ct+2 = S30(Bt+1) = S30(At)
Bt+2 = At+1 = TEMPt
At+2 = TEMPt+1                                  (5.6)

After the unfolding transformation, we can substitute two consecutive adders with one CSA and one adder. We have shown in the previous section that using CSA does not reduce the iteration bound and therefore does not improve the throughput. However, since a CSA occupies less area than a throughput-optimized adder, we substitute adders with CSA as long as it does not increase the iteration bound. Figure 5.6 shows the DFG which uses CSA. Some consecutive adders are not replaced by CSA since doing so would increase the iteration bound.
Fig. 5.6 Unfolding transformation of SHA1 with CSA
Fig. 5.7 Unfolding and retiming transformation of SHA1
Since 3 × Prop(CSA) < Prop(+), the loop with the maximum loop delay is the loop marked with bold lines in Fig. 5.6. Finally, after performing some proper retiming transformations, we get the result of Fig. 5.7. The critical path is the path of shaded nodes in Fig. 5.7 (i.e., S5 → + → A → F2t+1 → +). The normalized critical path delay, T̂, can be calculated by dividing the critical path delay by the unfolding factor, and it is now equal to the iteration bound as shown in Equation (5.7):

T̂^SHA1 = (2 × Prop(+) + Prop(Ft)) / 2 = T∞^SHA1    (5.7)
5.5 Throughput Optimal Architecture of SHA2 The SHA2 family of hash algorithms [7] includes SHA-256, SHA-384, and SHA-512. The input message is expanded into 64 (for SHA-256) or 80 (for SHA-384 or SHA-512) words. The expanded message is then compressed into 256, 384, or 512 bits depending on the algorithm. For one message block, the required number of iterations is 64 (for SHA-256) or 80 (for SHA-384 or SHA-512). Since all the SHA2 family hash algorithms have the same architecture except for input, output, and word sizes; constants; non-linear scrambling functions, i.e., Σ0, Σ1, Maj, Ch, σ0, and σ1; and the number of iterations, they can be expressed in the same DFG. Therefore, we only consider the SHA-256 algorithm, which is shown in Fig. 5.8.

(a) SHA-256 functions:
Ch(x, y, z) = (x ∧ y) ⊕ (¬x ∧ z)
Maj(x, y, z) = (x ∧ y) ⊕ (x ∧ z) ⊕ (y ∧ z)
Σ0{256}(x) = ROTR2(x) ⊕ ROTR13(x) ⊕ ROTR22(x)
Σ1{256}(x) = ROTR6(x) ⊕ ROTR11(x) ⊕ ROTR25(x)
σ0{256}(x) = ROTR7(x) ⊕ ROTR18(x) ⊕ SHR3(x)
σ1{256}(x) = ROTR17(x) ⊕ ROTR19(x) ⊕ SHR10(x)

(b) SHA-256 expander computation:
Wt = Mt(i)                                                  for 0 ≤ t ≤ 15
Wt = σ1{256}(Wt−2) + Wt−7 + σ0{256}(Wt−15) + Wt−16          for 16 ≤ t ≤ 63

(c) SHA-256 compressor computation:
T1t = Ht + Σ1{256}(Et) + Ch(Et, Ft, Gt) + Kt + Wt
T2t = Σ0{256}(At) + Maj(At, Bt, Ct)
Ht+1 = Gt
Gt+1 = Ft
Ft+1 = Et
Et+1 = Dt + T1t
Dt+1 = Ct
Ct+1 = Bt
Bt+1 = At
At+1 = T1t + T2t

Fig. 5.8 SHA-256 hash computation
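As a small executable illustration of the expander recurrence of Fig. 5.8b (the compressor is analogous and omitted), the Python sketch below computes the 64 expanded words from a 16-word message block.

```python
# SHA-256 message expander of Fig. 5.8b, with the block given as sixteen 32-bit words.
MASK = 0xFFFFFFFF

def rotr(x, n):
    return ((x >> n) | (x << (32 - n))) & MASK

def sigma0(x):
    return rotr(x, 7) ^ rotr(x, 18) ^ (x >> 3)

def sigma1(x):
    return rotr(x, 17) ^ rotr(x, 19) ^ (x >> 10)

def expand(M):
    W = list(M)                                # W_0 ... W_15 come from the message block
    for t in range(16, 64):                    # W_16 ... W_63 from the recurrence
        W.append((sigma1(W[t - 2]) + W[t - 7] + sigma0(W[t - 15]) + W[t - 16]) & MASK)
    return W
```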
The design methodology used for SHA1 can be similarly applied to the SHA2 family. However, there are some noticeable differences between the two. The SHA2 family requires only the retiming transformation to achieve their iteration bounds, while SHA1 requires both unfolding and retiming transformations. In the SHA2
family, we also design the expander part since its straightforward architecture has a larger critical path delay than the compressor’s iteration bound.
5.5.1 DFG of SHA2 Compressor Since within one iteration the order of additions in SHA2 does not affect the results, there are several possible DFGs. A DFG having the minimum iteration bound must be chosen. In the SHA2 compressor, since there are only seven adders, finding a DFG having the minimum iteration bound is not difficult as long as we understand how to calculate the iteration bound. The DFG in Fig. 5.9 is a straightforward DFG. The shaded loop indicates the loop with the largest loop bound and gives the following iteration bound:

T∞^(5.9) = max_{l∈L} (tl / wl) = 3 × Prop(+) + Prop(Ch)    (5.8)
Fig. 5.9 Basic SHA2 compressor DFG
However, by reordering the sequence of additions, the DFG of Fig. 5.10 can be obtained, which has the smallest iteration bound. As we assume that Prop(Σ0) ≈ Prop(Maj) ≈ Prop(Σ1) ≈ Prop(Ch), the two bolded loops have the same maximum loop bound. Since the loop bound of the left-hand side loop cannot be reduced further, no further reduction in the iteration bound is possible. Therefore, the iteration bound of Fig. 5.10 is as follows:

T∞^(5.10) = max_{l∈L} (tl / wl) = 2 × Prop(+) + Prop(Ch)    (5.9)
If we assume that any operation in the DFG cannot be merged or split into other operations, the iteration bound of SHA2 is given in Equation (5.9). However, if we are allowed to use a carry save adder (CSA), we can substitute two consecutive
Fig. 5.10 Optimized SHA2 compressor DFG
Fig. 5.11 Optimized SHA2 compressor DFG with CSA
adders with one CSA and one adder. The resulting DFG is shown in Fig. 5.11. Note that some of the adders are not replaced with CSA since doing so would increase the iteration bound. Therefore, the final iteration bound is achieved as shown in Equation (5.10).
T∞^SHA2 = max_{l∈L} (tl / wl) = Prop(+) + Prop(CSA) + Prop(Ch)    (5.10)
In the next step, we perform transformations. Since there is no fraction in the iteration bound, we do not need the unfolding transformation. Only the retiming transformation is necessary to achieve the iteration bound. The retimed DFG achieving the iteration bound is depicted in Fig. 5.12. Note that the indexes of K t+2 and Wt+3 are changed due to the retiming transformation. In order to remove the ROM access time for K t+2 , which is a constant value from ROM, we place an algorithmic delay, i.e., D, in front of K t+2 . This does not change the function.
Fig. 5.12 Final SHA2 compressor DFG with retiming transformation
Fig. 5.13 SHA2 expander DFG
5.5.2 DFG of SHA2 Expander A straightforward DFG of the SHA2 expander is given in Fig. 5.13a. Even though the iteration bound of the expander is much smaller than that of the compressor, we do not need to reduce the expander's critical path delay below the compressor's iteration bound (the throughput is bounded by the compressor's iteration bound). Figure 5.13b shows a DFG with CSA, and Fig. 5.13c shows a DFG with the retiming transformation where the critical path delay is Prop(+).
5.6 Throughput Optimal Architecture of RIPEMD-160 RIPEMD-160 [17] is a hash algorithm designed by Hans Dobbertin et al. in 1996. It is composed of two parallel iterations, where each iteration contains five rounds, and each round is composed of 16 hash operations. The equations and DFG of RIPEMD-160 are shown in Equation (5.11) and Fig. 5.14a, respectively, where the primed variables denote the second (parallel) iteration:

TEMPt = St(At + Ft(Bt, Ct, Dt) + Xt + Kt) + Et
Et+1 = Dt
Dt+1 = S10(Ct)
Ct+1 = Bt
Bt+1 = TEMPt
At+1 = Et

TEMP′t = S′t(A′t + F′t(B′t, C′t, D′t) + X′t + K′t) + E′t
E′t+1 = D′t
D′t+1 = S10(C′t)
C′t+1 = B′t
B′t+1 = TEMP′t
A′t+1 = E′t                                    (5.11)
From Equation (5.11) we can see that the two parallel iterations of RIPEMD-160 have identical DFGs. Therefore, we need to analyze only one part and then replicate the results for the second iteration. St is a cyclic shift function, Xt is a selection of padded message words, and Kt is a constant which depends on the time index t. The loop with the maximum loop bound, i.e., B → Ft → + → St → + → B, where the last edge carries the algorithmic delay D, is shown in Fig. 5.14a using shaded nodes and its iteration bound is given in Equation (5.12).
T∞ = max_{l∈L} (tl / wl) = 2 × Prop(+) + Prop(Ft)    (5.12)
Fig. 5.14 RIPEMD-160 data flow graph and its transformation
The retiming transformation of RIPEMD-160 which achieves the iteration bound is shown in Fig. 5.14b. The critical path is marked by bold lines (B → Ft → + → St → +).
5.7 Implementation of the Designed Hash Algorithms In order to verify the design methodology, we synthesized SHA1, SHA2, and RIPEMD-160 using 0.13 μm CMOS standard cell library. We verified that the actual critical paths occur as predicted by our analyses and that the hash outputs are correct.
5.7.1 Synthesis of the SHA1 Algorithm For SHA1 we synthesized two versions: one after only the retiming transformation and the other after both the unfolding and retiming transformations. Since the unfolding transformation introduces duplications of functional nodes, its use often incurs a significant increase in area. For the version using only the retiming transformation, we select the DFG in Fig. 5.4b since Fig. 5.4b has less area than Fig. 5.4a with the same critical path delay. Another benefit of Fig. 5.4b is a smaller number of overhead cycles than Fig. 5.4a, which will be explained in this section. For the version of the unfolding and retiming transformation, we synthesized the DFG in Fig. 5.7. SHA1 with the retiming transformation. In the transformed DFGs, some of the register values, i.e., A, B, . . ., E, are no longer paired with an algorithmic delay D. As a result, the register B is no longer necessary except for providing the initial value. The retiming transformation moves the delay D associated with register B between two CSAs (we name this delay T) in Fig. 5.4b. Though the size of T is doubled (to store both the sum and carry values produced by the CSA), our experiments showed a smaller gate area in Fig. 5.4b than Fig. 5.4a due to the small size of CSA. Another difference between the original DFG and the transformed DFG occurs during initialization. In the original DFG (Fig. 5.2), all the registers are initialized in the first cycle according to the SHA1 algorithm. In contrast, initialization requires two cycles in the retimed DFG (Fig. 5.4b). This is because there should be one more cycle to propagate initial values of B, C, D, and E into T before the DFG flow starts. In the first cycle, the values of A, B, C, D, and E are initialized according to the SHA1 algorithm. At the second cycle, A holds its initial value and C, D, E, and T are updated using the previous values of B, C, D, and E. From the third cycle, the registers are updated according to the DFG (Fig. 5.4b). Due to the two cycles of initialization, the retimed DFG introduces one overhead cycle. This fact can also be observed noting that there are two algorithmic delays from E to A. In order to update A with a valid value at the beginning of the iteration, two cycles are required for the propagation. For the case of the retimed SHA1 without CSA (Fig. 5.4a), there are three overhead cycles due to the four algorithmic delays in the path from E to A. Therefore, the required number of cycles for Fig. 5.4b is the number of iterations plus two cycles for initialization, which results in 82 cycles. Since the finalization of SHA1 can be overlapped with the initialization of the next message block, one cycle is excluded from the total number of cycles. When extracting the final results at the end of the iterations, we should note the indexes of registers. In Fig. 5.4b, the index of the output extraction of the register A, i.e., At , is one less than the others. Therefore, the final result of the register A is available one cycle later than the others. SHA1 with the unfolding and retiming transformation. In the case of Fig. 5.7, there are six algorithmic delays and two of them are not paired with a square node. We name the register for the algorithmic delay between two adders T 1 and the
register for the algorithmic delay between an adder and S5 T 2. However, since T 2 is equivalent to B, we do not need a separate register for T 2. Therefore, the total required registers remain at 5 (retiming does not introduce extra registers in this case). Since there is only one algorithmic delay in all the paths between any two consecutive square nodes, there is no overhead cycle resulting in the total number of cycles of 41, i.e., 40 cycles for iterations plus one cycle for initialization. When extracting the final result of A, the value must be driven from +(S5(B), T 1). This calculation can be combined with the finalization since the combined computational delay of +(S5(B), T 1), whose delay is Prop(+), and the finalization, whose delay is Prop(+), is 2 × Prop(+) which is less than the critical path delay. Synthesis results and comparison. The synthesis results are compared with some previously reported results in Table 5.1. The 82 cycle version is made by the retiming transformation (Fig. 5.4b), and the 41 cycle version is made by the retiming and the unfolding transformations together (Fig. 5.7). The throughputs are calculated using the following Equation (5.13). Throughput =
Frequency × (512 bits) / (# of Cycles)    (5.13)
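As a quick numeric check of Equation (5.13), the small sketch below recomputes the throughput of the two "Our proposal" rows of Table 5.1 from their frequency and cycle count; the agreement (up to rounding) is only a sanity check of the formula, not an additional result.

```python
# Throughput in Mbps for a 512-bit block, given the frequency in MHz and the cycle count.
def throughput_mbps(freq_mhz, cycles, block_bits=512):
    return freq_mhz * block_bits / cycles

# throughput_mbps(943.4, 82) ~= 5,890 Mbps and throughput_mbps(558.7, 41) ~= 6,977 Mbps,
# matching the "Our proposal" rows of Table 5.1 up to rounding.
```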
In Table 5.1, the work of [20] is a unified solution for MD5, SHA1, and RIPEMD-160, so its gate count is quite large. The architecture of [28] has a small cycle count and a large gate area due to the unfolding transformation with a large unfolding factor of 8. Even with the use of a large unfolding factor, its critical path delay could not achieve the iteration bound. Comparing our architectures with [42], which also uses 0.13 μm CMOS, ours achieve much higher throughputs.

Table 5.1 Synthesis results and comparison of SHA1

Hash algorithm   Technology (ASIC) (μm)   Area (Gates)   Frequency (MHz)   Cycles   Throughput (Mbps)
[36]             0.25                     20,536         143               82       893
[20]a            0.18                     70,170         116               80       824.9
[42]             0.13                     9,859          333.3             85       2,006
[28]             0.18                     54,133         72.7              12       3,103
[2]              0.18                     23,000         290               82       1,810
Our proposal     0.13                     13,236         943.4             82       5,890
                                          16,259         558.7             41       6,976

a This is a unified solution for MD5, SHA1, and RIPEMD-160.
5.7.2 Synthesis of the SHA2 Algorithm In the DFG of Fig. 5.12, there is no algorithmic delay between registers F and H . Therefore, the values of H will be the same as F except for the first two cycles: in the first cycle, the value of H should be the initialized value of H according to the
SHA2 algorithm; in the second cycle, the value of H should be the initialized value of G. Therefore, the value of F will be directly used as an input of the following CSA. Due to the four algorithmic delays from the register H to the register A, there is an overhead of three cycles. Therefore, the total number of cycles required for one message block is the number of iterations plus one cycle for initialization and finalization plus three overhead cycles due to the retiming transformation, which results in 68 cycles for SHA256 and 84 cycles for SHA384 and SHA512. A comparison with other works is shown in Table 5.2. The throughputs are calculated using the following equation:

Throughput256 = Frequency × (512 bits) / (# of Cycles)
Throughput384,512 = Frequency × (1024 bits) / (# of Cycles)    (5.14)
Table 5.2 Synthesis results and comparison of the SHA2 family

Hash algorithm   Algorithm    Technology (ASIC) (μm)   Area (Gates)   Frequency (MHz)   Cycles   Throughput (Mbps)
[2]              SHA256       0.18                     22,000         200               65       1,575
[42]             SHA256       0.13                     15,329         333.3             72       2,370
                 SHA384/512                            27,297         250.0             88       2,909
[12]             SHA256       0.13                     N/A            >1,000            69       >7,420
Our proposal     SHA256       0.13                     22,025         793.6             68       5,975
                 SHA384/512                            43,330         746.2             84       9,096
Since our HDL programming is done at the register-transfer level and we have mostly focused on optimizing the microarchitecture rather than on lower-level optimization, some other reported results, e.g., [12], achieve better performance with the same iteration bound delay. However, the iteration bound analysis still determines the optimum high-level architecture of an algorithm.
5.7.3 Synthesis of the RIPEMD-160 Algorithm Using the same design principles, we synthesized RIPEMD-160 algorithm according to Fig. 5.14b. Optimizing DFG, in this case, is rather simple and requires only one retiming transformation. Similar to the previous implementations the register value A, after retiming transformation, is no longer paired with algorithmic delay D. Therefore, the value of A will be equal to the value of E except for the first cycle. In the first cycle register A is initialized according to the RIPEMD-160 algorithm. For detailed description of initialization step one should refer to [17]. Assuming that the input message is already padded, our implementation of RIPEMD-160 requires 82 clock cycles for calculating the hash result of 512-bit
large padded input. One cycle is necessary for initialization, one for finalizing the hash output, and 80 cycles for performing each of 5 rounds 16 times. Note here that using additional registers for message expansion is omitted as the padded message can simply be stored in two 16 × 32 RAM blocks and appropriate message blocks can be read by providing the correct address values. Storing the message into the RAM cells requires 16 additional cycles and both initialization cycle and hash evaluation step can be performed concurrently with the message storing schedule which makes the total number of cycles equal to 96. Our result and comparison with previous work is given in Table 5.3. For synthesis we again use Synopsys Design Vision with 0.13 μm standard cell library. The throughput is calculated according to Equation (5.13).
Table 5.3 Synthesis results and comparison of RIPEMD-160

Hash algorithm   Technology (ASIC) (μm)   Area (Gates)     Frequency (MHz)   Cycles   Throughput (Mbps)
[19]a            0.6                      10,900 + RAM     59                337      89
[20]b            0.18                     70,170           116               80       824.9
[42]             0.13                     24,775           270.3             96       1,442
Our proposal     0.13                     18,819 + 2 RAM   431               96       2,299

a This is a unified solution for MD5, SHA1, and RIPEMD-160.
b This is a unified solution for MD5, SHA1, SHA-256, and RIPEMD-160.
Observing the results from Table 5.3 we can conclude that, concerning speed, our proposal outperforms the previous fastest implementation [42] by almost 60%. As the size of the two 16 × 32 RAM blocks is not larger than 10 k gates, the area of our implementation is comparable to the size of the architecture proposed in [42].
5.8 Hardware Designers' Feedback to Hash Designers We have shown how a hardware designer can design an architecture for a given MD4-based hash algorithm. As shown in this chapter, the optimal architecture for high throughput is limited by the iteration bound. An improvement of the iteration bound is only possible at the stage of the hash algorithm design. Before concluding this chapter, we give some feedback to hash algorithm designers on the potential for a better hardware architecture. Note that the suggestions given in this section are made without considering much of the cryptographic analysis of a hash algorithm. Therefore, it may not be possible to follow a suggestion without sacrificing the security level. We simply hope that when a hash designer has several choices, this can serve as a guideline to choose the one which results in a better hardware architecture.
5.8.1 High-Throughput Architecture Minimize the iteration bound by properly placing the algorithmic delays. Achieving a better throughput is directly related to the contents of this chapter. If we compare RIPEMD-160 (Fig. 5.14a) with SHA1 (Fig. 5.2), it is obvious how different the maximum throughput can be depending on the hash algorithm. The iteration bound of RIPEMD-160 is exactly twice that of SHA1, i.e., RIPEMD-160 achieves only half the throughput of SHA1, even though the two algorithms look similar in their DFGs. This difference is caused by the loops that determine the iteration bounds. While the marked loop (which determines the iteration bound) of SHA1 has two algorithmic delays, the marked loop of RIPEMD-160 has only one algorithmic delay. Even though the calculation delays of the two loops are the same, the one extra algorithmic delay in SHA1 results in double the throughput of RIPEMD-160. If we modified RIPEMD-160 to have one more algorithmic delay just like SHA1, the throughput of RIPEMD-160 could be doubled. Therefore, hash designers should consider the placement of the algorithmic delays if high throughput is one of the design criteria.
5.8.2 Compact Architecture Have an iteration bound without a denominator. In SHA1, in order to achieve the maximum throughput, the unfolding transformation was needed. This is caused by the uncanceled denominator of the iteration bound as shown in Equation (5.2). The denominator of 2 required the unfolding transformation with an unfolding factor of 2. Note that the unfolding transformation introduces extra circuit area since it duplicates the functional nodes. Therefore, if it is possible to design a hash algorithm without a denominator in the iteration bound, the unfolding transformation would not be needed to achieve the maximum throughput. Reduce the number of registers. The circuit areas of the hash algorithms given in this chapter are dominated by the registers. For example, in our SHA1 implementations (Table 5.1), the portion of the registers (in the 82 cycle version) is about 77%. This situation is similar for the other MD4-based hash algorithms. In this implementation, we use 27 registers of 32-bit words. Eleven registers are used in the compressor, where six are for the variables (i.e., to implement the algorithmic delays in Fig. 5.4b), and five are for keeping the intermediate results after processing one message block, which are used for the next message block. The other 16 registers are used in the expander. Since the number of registers in the compressor is directly related to the size and the information entropy of the hash output, a reduced number of registers would directly weaken the security level of a hash algorithm. Therefore, the only place to reduce registers is in the expander. In the SHA2 expander, for example, shown in Fig. 5.13, Wt is used to generate Wt+16, which gives a feedback depth of 16. Therefore, one whole message block, i.e., 16 words, must be stored in registers. Since there is not much message scrambling activity in the expander compared to the compressor, it may be possible to reduce the feedback depth. For example, if the feedback depth is reduced from 16 to 8, 8 registers can be saved.
Some other comments. Besides those mentioned above, reducing the number of iterations and minimizing the complexity of operations (i.e., of the functional nodes in a DFG) would result in a better hardware architecture and/or throughput. However, doing this must be weighed against the possibility of reducing the security level of a hash algorithm. Additionally, we note that while an addition is a more expensive operation in hardware (since it requires more delay and circuit area) than a non-linear function, a non-linear function is a more expensive operation in software (since it requires more cycles) than an addition. It must also be noted that a smaller iteration bound does not guarantee a higher-throughput architecture; it only gives a larger upper bound on throughput. If a hash designer wants an algorithm to achieve its iteration bound, he or she needs to consider the techniques presented in this chapter, such as the iteration bound analysis, the unfolding transformation, and the retiming transformation.
5.9 Conclusions and Future Work In this chapter, we gave a brief overview of MD4-based cryptographic hash algorithms, summarized their security analysis, and introduced a design methodology for throughput-optimal architectures. Among this class of hash algorithms, we chose SHA1, SHA2, and RIPEMD-160 to design and implement for their throughput optimum. Although SHA1 is not considered to be secure anymore, it is still the most widely used hash algorithm. For SHA2 and RIPEMD-160 no critical attacks have been discovered so far, which makes them good candidates for future cryptographic applications. Though the implementations shown in this chapter are limited to a few hash algorithms, the design methodology can be applied to any other or new MD4-based hash algorithm. Hash designers may not be familiar with the hardware implementation of an algorithm, so a newly designed hash algorithm can result in poor performance; accordingly, we give some feedback to hash designers. Concerning future research, further optimization at the circuit level could be explored. Note that the presented architectures are claimed to achieve the theoretical optimum at the microarchitecture level. Lower-level optimization, such as designing faster adders or the non-linear functions used in a hash algorithm, is not considered. Furthermore, it is assumed in the iteration bound analysis that the functional nodes of a DFG are atomic. In other words, a given functional node cannot be split or merged into other functional nodes. Therefore, it may be possible to design a faster functional node by merging multiple functional nodes in a given DFG, resulting in a smaller iteration bound.
References 1. Digital Signature Standard. In National Institute of Standards and Technology. Federal Information Processing Standards Publication 186-2.
2. Helion SHA-1 hashing cores. Helion Technology. 3. RIPE, Integrity Primitives for Secure Information Systems, Final Report of RACE Integrity Primitives Evaluation (RIPE-RACE 1040). LNCS 1007, A. Bosselaers and B. Preneel, Eds., Springer-Verlag, 1995. 4. ISO/IEC 10118-3, Information technology – security techniques – hash functions – Part 3: Dedicated hash functions. 2003. 5. Federal Information Processing Standards Publication 180. Secure Hash Standard. National Institute of Standards and Technology. 1993. 6. Federal Information Processing Standards Publication 180-1. Secure Hash Standard. National Institute of Standards and Technology. 1995. 7. Federal Information Processing Standards Publication 180-2. Secure Hash Standard. National Institute of Standards and Technology. 2003. 8. R. Anderson and E. Biham. Two practical and provably secure block ciphers: BEAR and LION. In International Workshop on Fast Software Encryption (IWFSE’96), pages 113–120. LNCS 1039, D. Gollmann, Ed., Springer-Verlag, 1996. 9. B. Boer and A. Bosselaers. Collisions for the Compression Function of MD5. In Advances in Cryptology, Proceedings of EUROCRYPT’93, pages 293–304, 1993. 10. F. Chabaud and A. Joux. Differential collisions in SHA-0. In Advances in Cryptology, Proceedings of CRYPTO’98, pages 253–261, 1998. 11. F. Crowe, A. Daly, and W. Marnane. Single-chip FPGA implementation of a cryptographic coprocessor. In Proceedings of the International Conference on Field Programmable Technology (FPT’04), pages 279–285, 2004. 12. L. Dadda, M. Macchetti, and J. Owen. An ASIC design for a high speed implementation of the hash function SHA-256 (384, 512). In ACM Great Lakes Symposium on VLSI, pages 421–425, 2004. 13. L. Dadda, M. Macchetti, and J. Owen. The design of a high speed ASIC unit for the hash function SHA-256 (384, 512). In Proceedings of the Conference on Design, Automation and Test in Europe (DATE’04), pages 70–75, 2004. 14. B. den Boer and A. Bosselaers. An attack on the last two rounds of MD4. In Advances in Cryptology, Proceedings of CRYPTO’91, pages 194–203. LNCS 576, J. Feigenbaum, Ed., Springer-Verlag, 1991. 15. H. Dobbertin. The status of MD5 after a recent attack. In Cryptographic Laboratories Research, 1996. 16. H. Dobbertin. Cryptanalysis of MD4. Journal of Cryptology, 11:253–271, November 4, 1998. 17. H. Dobbertin, A. Bosselaers, and B. Preneel. RIPEMD-160: A strengthened version of RIPEMD. In Fast Software Encryption, pages 71–82. LNCS 1039, D. Gollmann, Ed., Springer-Verlag, 1996. 18. H. Dobbertin, A. Bosselaers, and B. Preneel. RIPEMD-160: A strengthened version of RIPEMD. In Fast Software Encryption, pages 71–82, 1996. 19. S. Dominikus. A hardware implementation of MD-4 family hash algorithms. In Proceedings of the IEEE International Conference of Electronics Circuits and Systems (ICECS’02), pages 1143–1146, 2002. 20. T. S. Ganesh and T. S. B. Sudarshan. ASIC Implementation of a unified hardware architecture for non-key based cryptographic hash primitives. In Proceedings of the International Conference on Information Technology: Coding and Computing (ITCC’05), pages 580–585, 2005. 21. H. Gilbert and H. Handschuh. Security analysis of SHA-256 and sisters. In Selected Areas in Cryptography, pages 175–193, 2004. 22. H. Handschuh and D. Naccache. SHACAL (- Submission to NESSIE -). 23. P. Hawkes, M. Paddon, and G. Rose. On Corrective Patterns for the SHA-2 Family. Cryptology ePrint Archive, Report 2004/207, http://eprint.iacr.org/2004/207, 2004. 24. S. Indesteege, F. Mendel, B. Preneel, and C. 
Rechberger. Collisions and other non-random properties for step-reduced SHA-256. In Annual Workshop on Selected Areas in Cryptography. To be appear in LNCS, Springer-Verlag, 2008.
5 Hardware Design for Hash Functions
103
25. K. J¨arvinen, M. Tommiska, and J. Skytt¨a. Hardware implementation analysis of the MD5 hash algorihtm. In Proceedings of the Annual Hawaii International Conference on System Science (HICSS’05), page 298, 2005. 26. A. Joux, P. Carribault, W. Jalby, and C. Lemuet. Collisions in SHA-0. In Rump session of CRYPTO’04, 2004. 27. M. Knezevic, K. Sakiyama, Y. K. Lee, and I. Verbauwhede. On the high-throughput implementation of RIPEMD-160 hash algorithm. In Proceedings of the IEEE International Conference on Application-specific Systems, Architectures and Processors (ASAP’08), 2008. 28. Y. K. Lee, H. Chan, and I. Verbauwhede. Throughput optimized SHA-1 architecture using unfolding transformation. In IEEE International Conference on Application-specific Systems, Architectures and Processors (ASAP’06), pages 354–359, 2006. 29. Y. K. Lee, H. Chan, and I. Verbauwhede. Iteration bound analysis and throughput optimum architecture of SHA-256 (384,512) for hardware implementations. In The 8th International Workshop on Information Security Applications (WISA’07), pages 102–114. LNCS 4867, S. Kim, H. Lee, and M. Yung, Eds., Springer-Verlag, 2007. 30. Y. K. Lee, H. Chan, and I. Verbauwhede. Design methodology for throughput optimum architectures of hash algorithms of the MD4-class. Journal of Signal Processing Systems, Springer, Online first, 2008. 31. R. Lien, T. Grembowski, and K. Gaj. A 1 Gbit/s partially unrolled architecture of hash functions SHA-1 and SHA-512. In CT-RSA 2004, pages 324–338. LNCS 2964, T. Okamoto, Ed., Springer-Verlag, 2004. 32. M. Macchetti and L. Dadda. Quasi-pipelined hash circuits. In Proceedings of the IEEE Symposium on Computer Arithmetic (ARITH’05), pages 222–229, 2005. 33. R. P. McEvoy, F. M. Crowe, C. C. Murphy, and W. P. Marnane. Optimization of the SHA-2 family of hah functions on FPGAs. In Proceedings of the Emerging VLSI Technologies and Architectures (ISVLSI’06), pages 317–322, 2006. 34. F. Mendel, N. Pramstaller, C. Rechberger, and V. Rijmen. On the collision resistance of RIPEMD-160. In Information Security, pages 101–116, 2006. 35. H. Michail, A.P. Kakarountas, O. Koufopavlou, and C.E. Goutis. A low-power and highthroughput implementation of the SHA-1 hash function. In IEEE International Symposium on Circuits and Systems (ISCAS’05), pages 4086–4089, 2005. 36. Y. Ming-Yan, Z. Tong, W. Jin-Xiang, and Y. Yi-Zheng. An efficient ASIC implementation of SHA-1 engine for TPM. In IEEE Asia-Pacific Conference on Circuits and Systems, pages 873–876, 2004. 37. C. Ng, T. Ng, and K. Yip. A unified architecture of MD5 and RIPEMD-160 hash algorithms. In Proceedings of the International Symposium on Circuits and Systems (ISCAS’04), pages 889–892, 2004. 38. K. K. Parhi. In VLSI Digital Signal Processing Systems: Design and Implementation, pages 43–61 and 119–140. Weley, 1999. 39. B. Preenel. Encyclopedia of Cryptography and Security, Davies-Meyer Hash Function. H. C. A. van Tilborg, Ed., Springer, 2005. 40. R. Rivest. The MD4 message digest agorithm. In Advances in Cryptology, Proceedings of CRYPTO’90, pages 303–311. LNCS 537, S. Vanstone, Ed, Springer-Verlag, 1991. 41. R. Rivest. The MD5 Message-Digest Algorithm. Request for Comments: 1321, 1992. 42. A. Satoh and T. Inoue. ASIC-hardware-focused comparison for hash functions MD5, RIPEMD-160, and SHS. In Proceedings of the International Conference on Information Technology: Coding and Computing (ITCC’05), pages 532–537, 2005. 43. Vaudenay Serge. On the need for multipermutations: cryptanalysis of MD4 and SAFER. 
In Fast Software Encryption, pages 286–297, 1994. 44. P. C. van Oorschot and M. J. Wiener. Parallel collision search with cryptanalytic applications. Journal of Cryptology: The journal of the International Association for Cryptologic Research, 12(1):1–28, 1999.
104
Y.K. Lee et al.
45. M. Wang, C. Su, C. Huang, and C. Wu. An HMAC processor with integrated SHA-1 and MD5 algorihtms. In Proceedings of the Asia and South Pacific Design Automation Conference (ASP-DAC’04), pages 456–458, 2004. 46. X. Wang, X. Lai, D. Feng, H. Chen, and X. Yu. Cryptanalysis of the hash functions MD4 and RIPEMD. In Advances in Cryptology, Proceedings of EUROCRYPT’05, pages 1–18, 2005. 47. X. Wang, Y. L. Yin, and H. Yu. Finding collisions in the full SHA-1. In Advances in Cryptology, Proceedings of CRYPTO’05, pages 17–35, 2005. 48. X. Wang and H. Yu. How to break MD5 and other hash functions. In Advances in Cryptology, Proceedings of EUROCRYPT’05, pages 19–35, 2005. 49. X. Wang, H. Yu, and Y. L. Yin. Efficient collision search attacks on SHA-0. In Advances in Cryptology, Proceedings of CRYPTO’05, pages 1–16, 2005. 50. J. Pieprzyk, Y. Zheng, and J. Seberry. HAVAL – A one-way hashing algorithm with variable length of output. In Advances in Cryptology, Proceedings of AUSCRYPT’90, pages 83–104. LNCS 718, J. Seberry and Y. Zheng, Eds., Spring-Verlag, 1992.
Chapter 6
Random Number Generators for Integrated Circuits and FPGAs

Berk Sunar and Dries Schellekens
6.1 Introduction

Random number generators are essential for modern-day cryptography. Typically, the secret data or function is established through the use of a random number generator. It is assumed that the attacker has no access to these random bits. According to Kerckhoffs' principles, the security of a cryptographic scheme should not depend on the secrecy of the algorithm but rather on the secrecy of the key. Hence, in many cryptographic schemes the compromise of the random number generator leads to the collapse of the overall security. As the security of the overall system rests on these secrets, it is natural to set high standards for the random number generators that produce them. The random number generator is expected to produce a stream of independent, statistically uniform, and unpredictable random bits. The output should be unpredictable even to the strongest adversary. At this point we wish to draw a distinction between pseudo- and (true-)random number generators. Remember that pseudo-random number generators are stretching functions that take a short random string (a key) and extend it into an arbitrarily long string. In contrast, true random number generators collect randomness from physical phenomena such as temperature, noise, and radiation, which are assumed to contain random components unpredictable to anybody. In this treatment we only cover true random number generators (RNGs). RNGs are used in cryptography to

• initialize key bits for secret- and public-key algorithms,
• seed pseudo-random number generators,
• generate nonces for challenge–response schemes,
• randomize padding bits,
• generate initialization vectors of block ciphers,
• randomize interactive proofs and zero-knowledge protocols.
A large number of RNG designs have been proposed in the literature. For instance, the design introduced in [2] uses a combination of analog and digital components for amplification and sampling of white noise. The main problem with this type of circuit is the amplification stage, which requires significant power to bring the noise level up by a few orders of magnitude to the digital logic level. A similar design was developed by Intel Corporation [15], where the thermal noise on a junction is amplified and used to drive a voltage-controlled oscillator, which is then sampled by another oscillator. The output sequence is post-processed using the von Neumann corrector and SHA-1. The design in [12] samples the jitter in a phase-locked loop (PLL) – an analog component – on a specialized reconfigurable logic platform. The innovative design introduced in [29] randomly samples the output of an LFSR and a cellular automaton. The randomness comes from the jitter in the two oscillator circuits which are used to clock the two deterministic circuits. Although the design is purely digital and no amplification is needed, the complicated harvesting scheme makes the harvesting/sampling technique difficult to verify. Despite the size of the design, the entropy source is limited to the two oscillators. In another design [11], a simple architecture based on metastable circuits is proposed. The design passes the statistical tests only when a large number of such circuits are combined. A completely different class of RNG designs makes use of chaotic maps, e.g. see [21, 5]. Chaos-based RNGs are not very popular in cryptographic applications because they require analog circuitry. Reference [27] provides a detailed mathematical treatment of chaos-based RNGs.

In this treatment, we present an overview of practical RNGs that could be realized either in an ASIC or on reconfigurable logic. We therefore exclude designs that use entropy sources which are impractical for inclusion into an integrated circuit, e.g. nuclear decay radiation, user input (keyboard stroke interval timing), or convection in anisotropic media. In what follows, we briefly summarize statistical and true randomness tests, present an overview of RNG components, and conclude with a survey of selected RNG designs.
6.2 Testing for Randomness

6.2.1 Statistical Tests

Statisticians have spent a good deal of time developing tests to sort out weak pseudo-random number generators. These tests were further optimized against pseudo-random number generators and found use as an essential tool in testing the statistical performance of cryptographic primitives, e.g. stream ciphers, block ciphers, and hash functions. From a cryptographic point of view, the statistical tests provide only a tripwire and not a comprehensive technique to determine the security level. For cryptographic applications, besides a well-behaving statistical output distribution
the RNG must also ensure forward and backward unpredictability. In fact, statistical tests are limited to performing blackbox analysis, and hence might miss internal weaknesses which, in the hands of a cryptanalyst, may compromise the security of the RNG. Naturally, it is expected that RNGs pass all statistical tests with flying colours. In many RNG designs, a complex harvesting scheme is employed and the output is further processed with a cryptographic hash function. The goal is to overdesign the RNG in case weaknesses are discovered in the design after deployment. This makes it difficult to give a mathematical justification for proper collection of the entropy. Hence, the only option left is to apply statistical randomness tests. Indeed, all of the designs cited in the previous section are validated by George Marsaglia's DIEHARD battery [18] and/or the test suite [19] developed by the Random Number Generation Technical Working Group of the National Institute of Standards and Technology (NIST). Despite the existence of other test suites, the DIEHARD and NIST test suites remain the de facto test tools among practitioners of cryptography. The following table lists the individual tests included in the two test suites. Despite the overlap of some of the tests, best practice dictates the application of both test suites. In practice, the DIEHARD test tends to be more picky than the NIST test suite, although this is not a general rule. It should be noted that the NIST test suite is demanding on the number of sample input bits. On closer inspection one may identify several classes of tests categorized according to subject area: linear algebra, number theoretical, combinatorial, information theoretical, and spectral tests. A good number of these tests are discussed in Knuth's classic series The Art of Computer Programming, vol. 2 [16].
Diehard Battery of Tests
• Birthday spacing test
• Tough birthday spacing test
• GCD test
• Gorilla test
• Overlapping 5-permutations test
• Binary rank tests
• Bitstream test
• Overlapping pairs sparse occupancy test
• Overlapping quadruples sparse occupancy test
• DNA test
• Count-the-1s test on a stream of bytes
• Count-the-1s test for specific bytes
• Parking lot test
• The minimum distance test
• The 3D spheres test
• Squeeze test
• Overlapping sums test
• Up–down run test
• Craps test

NIST 800-22 Randomness Test Suite
• The frequency (monobit) test
• Frequency test within a block
• The runs test
• Test for the longest run of ones in a block
• The binary matrix rank test
• The discrete Fourier transform (spectral) test
• The non-overlapping template matching test
• The overlapping template matching test
• Maurer's "universal statistical" test
• Lempel–Ziv compression test
• The linear complexity test
• The serial test
• The approximate entropy test
• The cumulative sums test
• The random excursions test
• The random excursions variant test
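As a small illustration of the kind of blackbox check these suites perform, the sketch below implements the frequency (monobit) test from the NIST list above in Python. The function name is our own, and the significance level of 0.01 is the value commonly used in the NIST documentation.

```python
import math
import os

def monobit_test(bits, alpha=0.01):
    """NIST frequency (monobit) test: checks whether the numbers of
    ones and zeros in the sequence are approximately equal."""
    n = len(bits)
    s = sum(2 * b - 1 for b in bits)          # map 0/1 to -1/+1 and sum
    s_obs = abs(s) / math.sqrt(n)
    p_value = math.erfc(s_obs / math.sqrt(2))
    return p_value, p_value >= alpha          # True means the sequence passes

# Example: test 10,000 bits taken from the operating system's RNG.
raw = os.urandom(1250)
bits = [(byte >> i) & 1 for byte in raw for i in range(8)]
print(monobit_test(bits))
```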
6.2.2 True Randomness Tests

Statistical tests, which were originally proposed for pseudo-RNGs, have served well over the years in casting out weak generators. In the absence of dedicated true randomness tests, they have also been used for this purpose. More recently, the research community has recognized the importance of testing RNG devices and developed clever tests as well as RNG construction techniques to aid true randomness testing. Schindler and Killmann [24] sketched a methodology for evaluating true RNGs and outlined the pioneering standardization efforts of the German Federal Office for Information Security (BSI), as described in [1]. In their treatment, Schindler et al. advocate rigorous testing of RNGs and note that a statistical blackbox testing strategy may not be employed for this purpose. The AIS document provides clear evaluation criteria for RNGs and also allows RNG designers to present their own criteria. The authors advocate testing of RNGs during start-up, to detect total breakdown of the entropy source, and during operation of the RNG. The latter is further divided into light tests applied at regular intervals and more rigorous tests applied less frequently to uncover more subtle weaknesses. The authors also propose a specific set of light tests – which might be implemented as online tests – based on simple distribution tests.

In a more recent work, Bucci and Luzzi [4] emphasize the importance of designing for testability and make the observation that it is difficult and perhaps impossible to test the quality of RNGs after complex post-processing techniques have been employed. Hence, the authors propose the use of stateless RNG circuits which allow such tests to be formulated. Design for testability is a big part of building robust RNG designs. For instance, extra buffers placed inside the design near the entropy source make it possible to directly validate the entropy source before the internal random bits are post-processed. Similarly, a state reset circuit allows detection of degradation of the RNG output via simple correlation tests that apply to stateless RNGs. The authors propose to augment designs with reset circuits that clear the state of the RNG. This is done to support a so-called certification mode which establishes whether the RNG is trustworthy. In the certification mode, the RNG is restarted before the collection of each output bit. The objective behind the restart is to eliminate any dependencies between collected bits. Then the output of the RNG either is stuck at a fixed bit and no entropy is generated, or it generates independent bits. The former can be checked via a scheme that simply counts the transitions. If the transition rate is as expected, then biases in the output may be eliminated by using a stateless post-processor. The stateless post-processor preserves the independence among output blocks. In principle the proposed restart approach is applicable to pretty much any entropy source that permits a restart. The key point, though, is that the output must diverge quickly from the start state into an unpredictable state. Hence, the amount of time required for an entropy source to produce diverging outputs after reset may be used as a metric. Moreover, Bucci and Luzzi note that robustness against attacks, faults, and production defects should also be considered as factors when assessing the quality of an RNG design. An important side benefit of the stateless RNG approach is that it
makes detection of forcing attacks much easier when stateless linear post-processors are used. A non-(pseudo)-random bias introduced by the attacker will be visible at the output due to the independence of the output bits and the linearity of the post-processor. We summarize the prominent true randomness tests as follows:

• Tot test: Material ageing effects, extreme changes in operating conditions, and adversarial effects may cause a total breakdown of the noise source of the RNG device. For instance, a simple fault attack may cause a permanent failure close to the noise source, turning the RNG into a deterministic pseudo-random number generator at best. Hence it is desirable to have a set of tests performed at the start-up of the RNG device before any bits are collected. The test should discover such defects as early as possible, as contamination of other components with predictable bits is possible.

• Start-up test: The goal of these tests is to ensure the functionality of the RNG at the start-up of the hardware. Note that this is different from online tests, which monitor the output during operation, as special steps may be taken during start-up. For instance, the RNG may go into a certification mode during start-up. Hence, start-up tests need to take into account the various settings and states the RNG goes through before reaching the normal mode of operation.

• Online tests: The goal of online tests is to detect deterioration of the quality of randomness while the RNG is in operation. Relatively simple tests can be implemented directly inside the RNG, while more sophisticated tests may be run in the software driver. If a weakness is discovered, the RNG should be disabled or an alternative strategy should be taken to prevent an adversary from gaining any advantage. A minimal sketch of such a lightweight online test is given below.
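The sketch below illustrates one possible lightweight online test in Python: a windowed transition counter that raises an alarm when the number of bit transitions drifts too far from what an unbiased, independent source would produce. The window size, tolerance, and helper names are illustrative choices of ours, not values mandated by AIS 31 or by any of the cited designs.

```python
def online_transition_test(bitstream, window=1024, tol=0.2):
    """Yield an alarm flag for every window of bits.

    For independent unbiased bits, roughly half of all adjacent pairs
    differ, so the transition count should stay near (window - 1) / 2.
    A deviation larger than tol times that expectation raises an alarm."""
    buf = []
    for bit in bitstream:
        buf.append(bit)
        if len(buf) == window:
            transitions = sum(a != b for a, b in zip(buf, buf[1:]))
            expected = (window - 1) / 2
            yield abs(transitions - expected) > tol * expected
            buf = []

# Example usage with a hypothetical raw bit source and driver reaction:
# for alarm in online_transition_test(raw_bits()):
#     if alarm:
#         disable_rng()
```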
6.3 Post-processing Techniques

RNG designs vary significantly according to the entropy source they use and the design principles they employ. Roughly put, an RNG is built from three components: an entropy source (e.g. metastability and jitter in digital circuits, or Johnson noise on a junction), a collection mechanism that mines the entropy source and produces bits with a certain level of randomness, and a post-processing algorithm which eliminates the bias and dependencies in the output of the collection scheme. We will expose various entropy sources and collection mechanisms in the survey we present in the next section. Here we focus only on post-processing algorithms.

Resilient functions were proposed as the post-processing step for the rings design. The goal was to filter out any deterministic bits by using the resilient function. Treating bits affected by the adversary as deterministic bits enables one to study the tolerance properties of resilient functions against active adversaries. The reference recommends using a higher resiliency degree than necessary to remove the deterministic bits. The difference between the degree of the resilient function and the number of
deterministic bits expected in a sampling window quantifies the tolerance (in bits) of the RNG to active adversaries. Resilient functions are formally defined as follows.

Definition 1 (t-Resilient Function) An (n, m, t)-resilient function is a function F(x_1, x_2, ..., x_n) = (y_1, y_2, ..., y_m) from Z_2^n to Z_2^m enjoying the property that, for any t coordinates i_1, ..., i_t, for any constants a_1, ..., a_t from Z_2, and for any element y of the codomain,

Prob[F(x) = y | x_{i_1} = a_1, ..., x_{i_t} = a_t] = 1/2^m.

In the computation of this probability, all x_i for i not in {i_1, ..., i_t} are viewed as independent random variables, each of which takes on the value 0 or 1 with probability 0.5.

In more informal terms, if up to any t of the input bits are deterministic and the remaining bits are random, the output of the resilient function will be perfectly random (or unpredictable). From a cryptographic viewpoint, knowledge of any t values of the input to the function does not allow one to make any better than a random guess at the output. Resilient functions are used in a number of cryptographic applications where the adversary is assumed to have captured or determined a number of the key bits. A simple technique for constructing resilient functions is given in the following theorem.

Theorem 1 (e.g. [6]) Let G be a generator matrix for an [n, m, d] linear code C. Define a function f : {0, 1}^n -> {0, 1}^m by the rule f(x) = xG^T. Then f is an (n, m, d − 1)-resilient function.

For more information on resilient functions and their connections to codes and designs see [7] and [26]. When compared to extractor functions, resilient functions appear to be much more limited in their capability of eliminating the effects of an active adversary on the output stream. The reason for this is that resilient functions are defined to work on bits that are either perfectly random or perfectly deterministic. In contrast, extractor functions assume only a specific min-entropy at the input. On the positive side, resilient functions give a perfect output distribution (ε = 0) and are easily constructed from codes. When linear codes are used for the construction, the resilient function is also linear and therefore allows testability of the RNG design in the sense of Bucci and Luzzi [4].
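As a concrete illustration of Theorem 1, the sketch below builds a small resilient function in Python from the generator matrix of the [7, 4, 3] Hamming code, giving a (7, 4, 2)-resilient function. The matrix, helper name, and example input are our own illustrative choices; the rings implementation discussed later in this chapter applies the same idea with a much larger [256, 16, 113] code.

```python
# Generator matrix of the [7,4,3] Hamming code (one common systematic form).
G = [
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]

def resilient(x, G=G):
    """Compute f(x) = x * G^T over GF(2).

    x is a list of n raw bits; the result is a list of m output bits.
    By Theorem 1 this is an (n, m, d-1)-resilient function, here (7, 4, 2):
    even if any two input bits are fixed by an adversary, the output stays
    uniformly distributed as long as the remaining bits are random."""
    return [sum(xi & gi for xi, gi in zip(x, row)) & 1 for row in G]

print(resilient([1, 0, 1, 1, 0, 0, 1]))   # prints [1, 1, 0, 0] for this input
```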
6.3.1 The von Neumann Corrector

The von Neumann corrector was initially proposed to reduce accumulation of rounding errors in batch computations. Later on the von Neumann corrector found
widespread use in post-processing RNG outputs. The main advantage of the von Neumann corrector is that it is very easy to implement in both hardware and software. The corrector processes consecutive pairs of bits, e.g. a_i a_{i+1}, of the random bit stream. If they are of identical value, i.e. a_i = a_{i+1}, it deletes the pair from the random bit stream. If they are different, it retains the first bit in the stream. There are negative aspects of this scheme:

• the von Neumann corrector can handle only localized biases, i.e. biases confined to pairs of bits,
• the output bit rate is 1/4th of the input bit rate for nearly uniform input streams and much lower for input distributions with more bias,
• the variable output bit rate makes interfacing with the RNG more difficult; in most cases an output buffer is utilized.

Finally, we should point out that the von Neumann corrector is a small instance of an extractor or a resilient function.
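A minimal software rendering of the corrector just described is shown below; the function name is ours, and a hardware version would of course operate on a serial bit stream rather than on a Python list.

```python
import random

def von_neumann(bits):
    """Von Neumann corrector: look at non-overlapping pairs of input bits,
    drop identical pairs, and keep the first bit of each unequal pair."""
    out = []
    for a, b in zip(bits[0::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

# A heavily biased (but independent) stream still yields unbiased output
# bits, at a reduced and variable rate.
biased = [1 if random.random() < 0.75 else 0 for _ in range(10000)]
corrected = von_neumann(biased)
print(len(corrected), sum(corrected) / len(corrected))   # length and bias
```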
6.3.2 Cryptographic Hash Functions

Another popular but much more sophisticated technique for post-processing is to apply a cryptographically strong hash function (e.g. SHA-1, MD5) to the output of the RNG design. A simplification of the SHA algorithm for use in post-processing was proposed in [22]. It is undesirable to implement a costly hash function just to secure the output of an RNG, especially on embedded platforms where space is limited. On the other hand, if a hash function is implemented, the RNG benefits greatly from the established properties of the cryptographic function. For instance, even if only a single input bit changes, the output of the cryptographic hash has decent statistical performance. More importantly, the pre-image and second pre-image resistance properties ensure the unpredictability of the output. Finally, adversarial influences and a partial breakdown of the entropy source may be survived by the use of a hash function.
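The idea can be sketched in a few lines of Python using the standard hashlib module; SHA-256 is used here purely as a convenient modern example (the designs cited in this chapter used SHA-1 or a simplified SHA), and the pool size and output length are arbitrary illustrative choices.

```python
import hashlib

def hash_postprocess(raw_bytes, out_len=16):
    """Compress a pool of raw, possibly biased entropy-source bytes into
    a shorter output block using a cryptographic hash function."""
    assert len(raw_bytes) >= 4 * out_len     # conservative over-collection
    digest = hashlib.sha256(bytes(raw_bytes)).digest()
    return digest[:out_len]

# Example usage with a hypothetical entropy-source sampler:
# pool = sampler.read(64)
# key_material = hash_postprocess(pool)
```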
6.3.3 Extractor Functions

Barak, Shaltiel, and Tromer [3] proposed using extractor functions to improve the resiliency of RNGs against changing environmental conditions due to adversarial influences. Extractor functions are complexity-theoretic tools that take a distribution with a minimum level of min-entropy and produce an output with a nearly uniform distribution. The technique works as long as the adversary's influence on the RNG is limited to the extent that the min-entropy produced by the collection mechanism of the RNG does not fall below the threshold of the extractor function. The authors extend the usual definition of extractor functions to cover up to t influences (in bits) of adversarial modifications to the RNG state, e.g. in operating voltage, temperature, or frequency. Furthermore, they show how explicit constructions
can be achieved from universal hash function families. Since there is a rich literature on extractor functions, we refrain from giving precise definitions. There are great benefits in using an extractor function for post-processing. For instance, a fairly simple extractor function will provide a much higher level of protection against adversarial influences than the von Neumann corrector. Note that the von Neumann corrector itself is perhaps the smallest useful instance of an extractor function. When compared to cryptographic hash functions, the quantifiable properties of extractors and their smaller footprint are an advantage. Unfortunately, extractor functions are not a substitute for cryptographic hash functions, as they do not provide cryptographic properties such as collision resistance. Another downside is that it is unclear how the adversary's abilities should be captured.
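Extractors built from universal hash families can be surprisingly lightweight. The sketch below implements the classical seeded Toeplitz-matrix hash over GF(2) in Python; it is meant only to illustrate the flavour of such constructions and is not the specific extractor analysed in [3]. The seed is assumed to be a one-time random string, and the parameter choices are placeholders.

```python
import random

def toeplitz_extract(x, seed, m):
    """Hash the n-bit input x with an m-by-n binary Toeplitz matrix T
    defined by the (n + m - 1)-bit seed, over GF(2):
    T[i][j] = seed[i - j + n - 1],  output[i] = XOR_j T[i][j] & x[j].

    Toeplitz matrices form a universal family of hash functions, which is
    a standard building block for seeded randomness extractors."""
    n = len(x)
    assert len(seed) == n + m - 1
    out = []
    for i in range(m):
        bit = 0
        for j in range(n):
            bit ^= seed[i - j + n - 1] & x[j]
        out.append(bit)
    return out

# Example: compress 32 raw bits down to 8 bits with a public random seed.
seed = [random.getrandbits(1) for _ in range(32 + 8 - 1)]
raw = [random.getrandbits(1) for _ in range(32)]
print(toeplitz_extract(raw, seed, 8))
```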
6.4 A Potpourri of RNG Designs

In this section we present selected RNG designs. There are numerous designs, and the ones included here are only meant to showcase popular design techniques. Also, many designs first appeared in patents rather than academic articles. We speculate that many innovative designs are kept as trade secrets. In any case, we find it useful to present chosen representative designs to expose a few of the RNG construction techniques.
6.4.1 The Intel RNG Design

The Intel RNG design was made public in a report published by Jun and Kocher [15]. The design shown in Fig. 6.1 is built around two resistors in a differential configuration. The differential thermal noise is amplified and used to drive a voltage-controlled oscillator (VCO). The VCO is then sampled by another, high-speed oscillator. The entropy source of the design is thermal noise on a junction. The goal behind the differential configuration is to make the design more robust against power supply and environmental variations. The output of the sampler is buffered, post-processed using the von Neumann corrector, and then hashed using SHA-1. The output is further tested using the NIST FIPS 140-1 randomness tests (monobit, runs, and poker) implemented in the software driver of the RNG.

Fig. 6.1 The Intel RNG uses junction noise as entropy source (thermal noise source, amplifier, voltage-controlled oscillator, high-speed sampling oscillator, latch, and corrector)

Jun and Kocher [15], who have analysed the RNG output using 16 specialized tests and the NIST FIPS 140-1 test suite, report that no weaknesses were found in the RNG output before processing with SHA-1. The reference, however, notes that the von Neumann post-processing technique is essential for eliminating biases in the output stream, which indicates that the output of the VCO is perhaps sampled at too high a rate. Reference [15] only gives details of the experiments on the statistical performance of the Intel RNG output. From the driver software we deduce that the output rate must be around 7–9 kbits/s. The analysis of Jun and Kocher is limited to statistical tests. The analog components of the design, i.e. the noise amplifier and the voltage-controlled oscillator, make it impossible to include the Intel RNG in FPGA designs.
6.4.2 The Tkacik RNG Design

The Tkacik design [29] hosts a linear feedback shift register (LFSR) and a cellular automata shift register (CASR), clocked by free running oscillators as shown in Fig. 6.2. The outputs of the two components are simply XOR-ed to form the RNG output. The jitter in the two free running oscillator circuits provides the randomness source. The output stream is verified using the DIEHARD [18], NIST [19], and Crypt-X [8] suites. The output of the design is shown to have significantly better statistical performance than the output of either of the two components alone, i.e. the output of an LFSR or a CASR clocked with a free running oscillator. Hence, the diversification achieved by the combination of the two components is a major benefit of the design. Dichtl [9] identified two weaknesses in the Tkacik RNG design. The LFSR and the CASR act as a pseudo-random number generator seeded with only two low-entropy oscillators. Furthermore, an attacker may model the LFSR and at least partially solve it for unknown bits. The attack allows an adversary to predict output bits assuming he/she has access to earlier bits. The treatment is theoretical and thus it is unclear if the attack would work in practice. Dichtl proposes to lower the output rate and to include non-linear components to make the design robust.
Fig. 6.2 The design by Tkacik uses two deterministic circuits (a 43-bit LFSR and a 37-bit CASR, each feeding a 32-bit output selection) clocked by free running oscillators
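The structure of the design can be mimicked in software. The toy model below steps a shift register with XOR feedback and a rule-90 cellular automaton by independent, jittery numbers of steps and XORs 32 bits of their states. The register lengths match the figure, but the feedback taps, CA rule, bit selection, and jitter model are placeholders of our own and do not reproduce Tkacik's actual parameters; a pure software model also has none of the true entropy that the free running oscillators provide.

```python
import random

LFSR_LEN, CASR_LEN = 43, 37
LFSR_TAPS = (0, 1, 5, 42)            # placeholder feedback taps, not Tkacik's

def lfsr_step(state):
    fb = 0
    for t in LFSR_TAPS:
        fb ^= (state >> t) & 1
    return ((state << 1) | fb) & ((1 << LFSR_LEN) - 1)

def casr_step(state):
    # Rule-90-style cellular automaton with cyclic boundary (placeholder rule).
    left = ((state << 1) | (state >> (CASR_LEN - 1))) & ((1 << CASR_LEN) - 1)
    right = ((state >> 1) | (state << (CASR_LEN - 1))) & ((1 << CASR_LEN) - 1)
    return left ^ right

def tkacik_toy(n_words, lfsr=1, casr=1):
    out = []
    for _ in range(n_words):
        # Independent "clocks": each side advances a jittery number of steps.
        for _ in range(random.randint(40, 60)):
            lfsr = lfsr_step(lfsr)
        for _ in range(random.randint(40, 60)):
            casr = casr_step(casr)
        out.append((lfsr ^ casr) & 0xFFFFFFFF)   # 32 selected bits
    return out

print(["%08x" % w for w in tkacik_toy(4)])
```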
6.4.3 The Epstein et al. RNG Design

Fig. 6.3 Bi-stable memory component of the Epstein et al. RNG design
Epstein et al. [11] proposed a simple architecture based on bi-stable circuits. The component shown in Fig. 6.3 is the basic building block of the RNG design. A large number of these units are combined to form the RNG. As may easily be deduced from the figure, depending on the value of the select signal the configuration acts as either

• two separate single-inverter rings or
• two stable cascaded inverters.

Hence the circuit will switch back and forth between two stable conditions. Although the two modes of operation are stable, during the switching there will be occasions when the inverter rings are in opposite phases. This is the source of randomness the RNG uses. That is, the circuit will switch between the two stable states in a bi-stable manner and the transitions will provide the entropy source of the RNG design. The same reference reports an implementation realized from 15 instances of the bi-stable components manufactured in a 0.18 μm CMOS technology. All outputs passed the DIEHARD tests after being post-processed by the von Neumann corrector. Being constructed only from digital components, the design could be implemented on reconfigurable logic as well. Also, the design is compact and should be power efficient.
6.4.4 The Fischer–Drutarovský Design

Fischer and Drutarovský [12] proposed the RNG design shown in Fig. 6.4, which samples the jitter in a phase-locked loop (PLL) provided by an Altera field programmable logic device family (e.g. the APEX E and APEX II families). The jitter of the clock signal generated by the on-chip PLL is sampled via delay-cascaded samplers. The goal of the cascaded sampler is to increase the likelihood of collecting a bit from a transition zone. The probability of this event was also established in the paper. The samples taken by the cascaded samplers are XOR-ed together to give a sample with high uncertainty. Downsampling of this output yields the RNG output stream.

Fig. 6.4 Architecture of the Fischer–Drutarovský RNG design (a PLL-generated clock sampled by a cascade of flip-flops, XOR-ed and downsampled by 1/k)

The design is discussed in great detail in the same reference. Specific values for programming the PLL to give optimal performance are provided. The reported implementation yielded a bit rate of nearly 70 kbits/s that passed the NIST test suite. Considering that the design operates only on FPGAs that include a PLL, its use in reconfigurable logic is limited. Perhaps a more important contribution of this design is that it raised the attention paid to building RNGs on FPGAs.
6.4.5 The Kohlbrenner–Gaj Design

The Kohlbrenner–Gaj design [17] uses jitter in ring oscillators as the entropy source. What makes this design unique is that it is designed to perfectly match the CLB architecture of a Xilinx Virtex-II FPGA. The oscillator, for instance, is built into a CLB. The oscillator signal passes twice through the CLB structure and is flipped in only one of the passes (in LUT1), as shown in Fig. 6.5. The clock and reset signals are not shown in the figure. The oscillation frequency is determined by the delay elements on the oscillator path, i.e. two lookup tables, four multiplexers, and two memory cells. Kohlbrenner notes that this particular configuration gives a sufficiently stable 130 MHz oscillator signal. The RNG samples one such oscillator with another one. The RNG output is also post-processed with a simple successive XOR scheme to eliminate biases. The reported bit rate is on the order of several hundred kbits/s. The exact rate depends on the strength of the XOR post-processing scheme. Although the rate is relatively low, the design is fairly compact and its bit rate will be sufficient for many applications. The output sequence was statistically verified using the NIST test suite.
Fig. 6.5 Oscillator/CLB structure of the Kohlbrenner–Gaj design
6.4.6 The Rings Design

The rings design shown in Fig. 6.6 was proposed by Sunar, Martin, and Stinson [28]. The design is very simple. Basically, free running ring oscillator outputs are combined together via an XOR operation and then sampled. The source of randomness is phase jitter. The main idea is to populate the output waveform with transition zones and then to sample randomly. The authors provide a mathematical framework and a rigorous analysis of the quality of the output of the RNG based on a set of assumptions at the input. Furthermore, to reduce the number of rings, the authors propose to use a resilient function for post-processing of the RNG output. By keeping the degree of the resilient function high, the RNG develops a quantifiable tolerance against active adversaries. The rings design has two main contributions: the analysis framework and the introduction of resilient functions for post-processing. The analysis builds a simple jitter model and computes the minimum number of rings that need to be included in the design to achieve a certain fill rate in the sampling window at a certain confidence level. The deterministic bits collected from the unfilled portion of the sampling window are eliminated by a resilient function of appropriate strength.
Fig. 6.6 The ring oscillators design (the outputs Ψ1, ..., Ψr of the rings R1, ..., Rr are XOR-ed into Ψ and sampled by a flip-flop at frequency fs)
An initial reference implementation of the rings design was provided by Schellekens et al. [23] on a Xilinx Virtex-II FPGA. The implementation produced a stream at a 2.5 Mbps bit rate with a sampling frequency of 40 MHz, using 110 rings with 13 inverters each and a resilient function constructed from the [256, 16, 113] linear cyclic code. The output sequence was verified using the DIEHARD and NIST tests. Schellekens et al. also observed that the rings design is stateless and uses a linear, stateless post-processing technique (a resilient function constructed from a linear code), and therefore satisfies the criteria for testability introduced earlier by Bucci and Luzzi [4].

The practical aspects of the rings design, including IC routing effects and the effects of power supply and temperature variations, were investigated in [30]. The authors first note that if the signal is subsampled, then there is a chance, especially at low fill rates, that the sampler will be stuck in a deterministic portion of the sampling window. The authors therefore recommend sampling at a frequency that is relatively prime to the oscillation frequency. The authors note that IC-level effects such as phase interlock, narrow-signal rejection in the XOR tree, and narrow-signal attenuation effects will limit the scalability and performance of the rings RNG design. Furthermore, the same reference shows, via experiments performed on an FPGA implementation, that by changing the temperature and supply voltage the oscillation frequency may be shifted so as to invalidate the relatively-prime condition. Hence, the rings RNG may be vulnerable to non-invasive temperature and supply voltage variation attacks. Finally, to make the design robust against such attacks, the authors propose to use more than one ring length in the design. A design that features two ring lengths is proposed. The design passes the DIEHARD and NIST tests and delivers a throughput of 67 Mbps at a power consumption of less than 300 mW with an area of less than 1000 LUTs. The design is also shown to be robust to temperature and power supply variations.
6.4.7 The O'Donnell et al. PUF-Based RNG Design

In this scheme a physically unclonable function (PUF) is used to implement a random number generator [20]. A PUF circuit takes advantage of the internal signal delay variations on a chip (see Chapter 7). In particular, the circuit creates a racing condition between two identical signals; the two paths of the signals are determined by the challenges passed to the PUF. The output of the PUF will in turn depend on which of the two signals arrives first. Such a circuit will produce a particular output for each challenge input. However, due to the sensitivity of these internal delays, the circuit will exhibit instances of metastability. These are challenges which produce a 0 or a 1 with a probability of about 0.5 every time the challenge is used. In addition, the fact that a certain challenge produces a metastable output is only temporary and will depend on temperature and voltage variations. Experiments show that about 1 out of every 1000 challenges will be metastable. A PUF-RNG takes advantage of these metastable challenges. The circuit starts by injecting a certain challenge for N consecutive iterations. The N output bits are
then tested to see whether the number of 1's in the output is about N/2. If this is not the case, the circuit repeats the procedure, using the same challenge to produce the N output bits up to M times. If the percentage of 1's observed in the output string is still not close enough to 50%, then a new challenge is used. A PRNG can be used to produce a new challenge, which in turn will be tested for metastability. Once a metastable challenge is detected, a random string can be generated. The N-bit output generated by the metastable challenge can be used to produce a random string of length at most N/2. The N-bit string is passed through a von Neumann corrector, which considers consecutive pairs of bits. Whenever a 1 is followed by a 0, it constitutes a 1 in the final random string, and whenever a 0 is followed by a 1, it is considered a 0. Consecutive pairs of identical bits are ignored in this scheme. Naturally, if more bits are needed, the same challenge can be used to produce more metastable bits.
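The control loop just described can be summarized in Python as below. The puf_eval(challenge) and next_challenge() functions stand in for the physical PUF evaluation and the challenge-generating PRNG and are purely hypothetical, as are the parameter values; the final step reuses the von Neumann mapping described in the text (10 gives 1, 01 gives 0).

```python
def looks_metastable(bits, tol=0.1):
    """Accept a challenge whose response is roughly half ones."""
    return abs(sum(bits) / len(bits) - 0.5) <= tol

def puf_random_bits(puf_eval, next_challenge, N=256, M=4):
    """Search for a metastable challenge and distill random bits from it."""
    while True:
        challenge = next_challenge()
        for _ in range(M):                      # retry the same challenge M times
            bits = [puf_eval(challenge) for _ in range(N)]
            if looks_metastable(bits):
                out = []
                for a, b in zip(bits[0::2], bits[1::2]):
                    if (a, b) == (1, 0):
                        out.append(1)           # 1 followed by 0 -> 1
                    elif (a, b) == (0, 1):
                        out.append(0)           # 0 followed by 1 -> 0
                return out                      # identical pairs are dropped
```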
6.4.8 The Golić FIGARO Design

Golić proposed an RNG design built from bi-stable oscillator rings called FIGARO (Fibonacci Galois Ring Oscillator). The RNG design simply XORs the output of a Fibonacci oscillator with the output of a Galois oscillator and samples the XOR output.
Fig. 6.7 The Fibonacci oscillator design
The Fibonacci oscillator is constructed from inverters in feedback configuration. A generic Fibonacci ring is shown in Fig. 6.7. Similar to a linear feedback shift register, the feedback positions are controlled by switches. If f_i = 0 then the switch is closed and otherwise it is open. The corresponding feedback polynomial is given as follows:

f(x) = Σ_{i=0}^{r} f_i x^i, where f_0 = f_r = 1.
The necessary and sufficient condition for the oscillator to be alternating was established by Golić in the following theorems.

Theorem 2 ([14]) A Fibonacci ring oscillator does not have a fixed state if and only if f(x) = (1 + x)h(x) and h(1) = 1.
Theorem 3 ([14]) A Galois ring oscillator does not have a fixed state if and only if f(1) = 1 and r is odd.

If h(x) is chosen to be a primitive polynomial for both the Fibonacci and the Galois oscillators, then the FIGARO design is guaranteed to have a short cycle of only two states and a long cycle comprised of the remaining 2^r − 2 states. The Galois configuration of the oscillator ring is shown in Fig. 6.8. To eliminate local correlations and biases, the author also proposes to use a self-controlled LFSR for post-processing of the output. The performance of the RNG was analysed later in the Dichtl and Golić RNG design.

Fig. 6.8 The Galois oscillator design
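The conditions of Theorems 2 and 3 are easy to check mechanically for a candidate feedback polynomial. The sketch below does so in Python over GF(2), with the polynomial represented as the coefficient list [f_0, ..., f_r]; the helper names and the example polynomial are our own.

```python
def parity(bits):
    p = 0
    for b in bits:
        p ^= b
    return p

def fibonacci_ok(f):
    """Theorem 2: f(x) must factor as (1 + x) * h(x) over GF(2) with h(1) = 1.
    f is the coefficient list [f_0, ..., f_r] with f_0 = f_r = 1."""
    high_first = f[::-1]                 # a_r, a_{r-1}, ..., a_0
    h, acc = [], 0
    for a in high_first[:-1]:            # synthetic division by (x + 1)
        acc ^= a
        h.append(acc)
    remainder = high_first[-1] ^ acc
    return remainder == 0 and parity(h) == 1   # (1+x) divides f, and h(1) = 1

def galois_ok(f):
    """Theorem 3: f(1) = 1 and the degree r must be odd."""
    r = len(f) - 1
    return parity(f) == 1 and r % 2 == 1

# Example: f(x) = 1 + x^2 + x^3 + x^4 = (1 + x)(1 + x + x^3), h primitive,
# so the Fibonacci condition holds; the Galois condition does not (r is even).
f = [1, 0, 1, 1, 1]
print(fibonacci_ok(f), galois_ok(f))     # True False
```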
6.4.9 The Dichtl and Golić RNG Design

Dichtl and Golić compared the performance of Fibonacci and Galois ring oscillators with that of classical ring oscillators [10]. In order to assess the entropy rate of these new ring oscillator configurations, they analysed the behaviour of a single oscillator by restarting it from identical starting conditions. This restart approach allows one to distinguish between the true random contributions of phase jitter and the pseudo-randomness due to the LFSR-style feedback; true randomness behaves differently in each repetition of the experiment, while pseudo-randomness is deterministic. Their experimental results show that both Fibonacci and Galois ring oscillators are capable of producing much higher entropy rates than classical ring oscillators. They provide some rudimentary theoretical reasoning for why this is the case. The amount of randomness generated per time unit, roughly determined by the product of the total number of logic gates and their average switching frequency, is smaller for traditional ring oscillators. The oscillation frequency of a classical ring oscillator is inversely proportional to the ring length, and consequently the product of the number of inverters used and the oscillation frequency is independent of the ring length. For Fibonacci and Galois oscillators, however, this product increases with the number of inverters used, as their experiments show that the average switching frequency does not decrease with the number of inverters. On the other hand, the more complex feedback transforms the primary randomness produced by the individual gates into a form more suitable for extraction by sampling, since traditional ring oscillators require sampling near the edges of the oscillating signal. The restart method, which they used for true randomness testing, can also be used as a mode of operation, creating a testable stateless RNG. Whenever a random bit is
needed, the oscillator is started from a static reset state and runs for a short period of time. After sampling, the oscillator is stopped and reset to its initial state. By running a Fibonacci ring oscillator of length 15 for 60 ns and subsequently waiting 100 ns before restarting it, they are able to achieve a bit rate of 6.25 Mbps on a Xilinx Spartan 3 FPGA, with a small bias that can be removed by appropriate post-processing. In a continuous mode of operation higher speeds are possible, but such a design is no longer testable. Additionally, Dichtl and Golić describe a novel sampling technique that almost doubles the entropy rate. Commonly a D-type flip-flop is used to sample the noise source, but this can cause some bias. To get more balanced output bits, an intermediate flip-flop that toggles on each 0–1 transition can be applied; this is equivalent to counting the number of 0–1 transitions reduced modulo 2. They propose to sample the entropy source both with and without an intermediate toggle flip-flop, as the two samples capture different and supposedly complementary properties of the oscillation: the bit sampled directly depends on the signal value at the sampling time, whereas the toggle flip-flop bit represents the number of transitions since the last restart or since the previous sampling time. They experimentally verified that the mutual information between the two bits, expressing their statistical dependence, is very low.
6.4.10 An ADC-Chaos RNG Design

In this section we present an overview of a chaos-based RNG design reported by Pareschi, Setti, and Rovatti [21]. The original idea as well as its theoretical foundation was established in an earlier paper [5]. The earlier reference introduced the novel idea of using pipeline analog-to-digital converters (ADCs) to implement a chaotic circuit. Furthermore, the chaotic circuit has been theoretically proven to generate independent and identically distributed symbols. The main benefit of this approach is that the RNG design makes use of a popular mixed-signal IC component, i.e. a pipeline analog-to-digital converter modified to operate as a set of interleaved chaotic maps. Such a component is readily available in many platforms, and therefore the RNG could be incorporated into ADC-enabled chips virtually for free with small additional circuitry. The design is complemented with a post-processing technique based on the simplification of the SHA algorithm proposed earlier in [22]. The post-processing scheme operates with no throughput loss.

The design is shown in Fig. 6.9. The solid lines show the original ADC design. The additional components and routes required for the RNG are drawn with dashed lines. The major changes are the feedback path shown on top of the design, the (m-bit) ADC stage output busses labelled D_0, D_1, ..., D_{k−1}, and the post-processing circuit shown in a box at the bottom of the figure. The changes are essentially trivial. The most area-consuming part of the changes is clearly the post-processing network (and the register it contains).

Fig. 6.9 The ADC-chaos RNG design (a k-stage pipeline ADC with sample-and-hold stages and a digital corrector, extended with a feedback path and a sample/RNG post-processing network)

The prototype 0.35 μm CMOS technology implementation reported in the same reference [21] achieves a throughput of 40 Mbits per second (when operated at 5 MHz), an area consumption of about 0.52 mm², and a power consumption of less than 30 mW. The output of the RNG is verified using the NIST test suite. The design has a relatively large footprint and its analog nature will limit its application. However, given the pervasive use of ADCs in mixed-signal ICs, the design has great potential.
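To give a feel for why a pipeline ADC stage can act as a chaotic map, the sketch below models a single idealized 1-bit stage in feedback as the Bernoulli shift x -> 2x mod 1, whose output bit is the stage decision; a small Gaussian term stands in for the analog noise that the expanding map amplifies in the real circuit. This is a didactic model of our own, not the circuit or the map analysed in [5, 21].

```python
import random

def adc_chaos_bits(n, x=0.3, noise=1e-6):
    """Idealized model of a chaotic 1-bit pipeline-ADC stage in feedback.

    Each step outputs the comparator decision (the most significant bit of x)
    and applies the expanding map x -> 2x mod 1. The tiny noise term models
    analog noise, which the map doubles every iteration, so the output bits
    quickly decouple from the initial condition."""
    bits = []
    for _ in range(n):
        bits.append(1 if x >= 0.5 else 0)
        x = (2 * x) % 1.0 + random.gauss(0.0, noise)
        x = min(max(x, 0.0), 1.0 - 1e-12)      # keep the state inside [0, 1)
    return bits

print(adc_chaos_bits(32))
```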
References

1. Anwendungshinweise und Interpretationen zum Schema (AIS). AIS 32, Version 1, Bundesamt für Sicherheit in der Informationstechnik, 2001.
2. V. Bagini and M. Bucci. A design of reliable true random number generator for cryptographic applications. In Ç. K. Koç and C. Paar, editors, Workshop on Cryptographic Hardware and Embedded Systems – CHES 1999, pages 204–218, Berlin, Germany, LNCS 1717, Springer-Verlag, 1999.
3. B. Barak, R. Shaltiel, and E. Tromer. True random number generators secure in a changing environment. In Ç. K. Koç and C. Paar, editors, Workshop on Cryptographic Hardware and Embedded Systems – CHES 2003, pages 166–180, Berlin, Germany, LNCS 2779, Springer-Verlag, 2003.
4. M. Bucci and R. Luzzi. Design of testable random bit generators. In J. R. Rao and B. Sunar, editors, Proceedings of the Workshop on Cryptographic Hardware and Embedded Systems – CHES 2005, pages 131–146, LNCS 3659, Springer-Verlag Berlin Heidelberg, August 2005.
5. S. Callegari, R. Rovatti, and G. Setti. Embeddable ADC-based true random number generator for cryptographic applications exploiting nonlinear signal processing and chaos. IEEE Transactions on Signal Processing, vol. 53, no. 2, pp. 793–805, February 2005.
6. B. Chor, O. Goldreich, J. Håstad, J. Friedman, S. Rudich, and R. Smolensky. The bit extraction problem or t-resilient functions. 26th IEEE Symposium on Foundations of Computer Science, pages 396–407, 1985.
7. C. J. Colbourn, J. H. Dinitz, and D. R. Stinson. Applications of combinatorial designs to communications, cryptography and networking. Surveys in Combinatorics, 1999, pages 37–100, British Combinatorial Conference, 1999.
8. Crypt-X. http://www.isi.qut.edu.au/resources/cryptx/.
9. M. Dichtl. How to predict the output of a hardware random number generator. In C. D. Walter, Ç. K. Koç, and C. Paar, editors, Proceedings of the Workshop on Cryptographic Hardware and Embedded Systems – CHES 2003, pages 181–188, LNCS 2779, Springer-Verlag Berlin Heidelberg.
10. M. Dichtl and J. D. Golić. High-speed true random number generation with logic gates only. In P. Paillier and I. Verbauwhede, editors, Proceedings of Cryptographic Hardware and Embedded Systems – CHES 2007, 9th International Workshop, Vienna, Austria, LNCS 4727, pages 45–62, Springer-Verlag, September 10–13, 2007.
11. M. Epstein, L. Hars, R. Krasinski, M. Rosner, and H. Zheng. Design and implementation of a true random number generator based on digital circuit artifacts. In C. D. Walter, Ç. K. Koç, and C. Paar, editors, Workshop on Cryptographic Hardware and Embedded Systems – CHES 2003, pages 152–165, LNCS 2779, Springer-Verlag Berlin Heidelberg, 2003.
12. V. Fischer and M. Drutarovský. True random number generator embedded in reconfigurable hardware. In B. S. Kaliski Jr., Ç. K. Koç, and C. Paar, editors, Workshop on Cryptographic Hardware and Embedded Systems – CHES 2002, pages 415–430, Berlin, Germany, LNCS 2523, Springer-Verlag Berlin Heidelberg, 2003.
13. I. Goldberg and D. Wagner. Randomness in the Netscape Browser. Dr. Dobb's Journal, January 1996.
14. J. D. Golić. New paradigms for digital generation and post-processing of random data. http://eprint.iacr.org/2004/254.ps.
15. B. Jun and P. Kocher. The Intel random number generator. White paper prepared for Intel Corporation, April 1999.
16. D. E. Knuth. The Art of Computer Programming, Volume 2: Seminumerical Algorithms. Addison-Wesley Professional, 3rd edition, November 14, 1997.
17. P. Kohlbrenner and K. Gaj. An embedded true random number generator for FPGAs. In Proceedings of the 2004 ACM/SIGDA 12th International Symposium on Field Programmable Gate Arrays, pages 71–78, ACM Press, New York, NY, 2004.
18. G. Marsaglia. DIEHARD: A Battery of Tests of Randomness. http://stat.fsu.edu/pub/diehard/, 1996.
19. NIST Special Publication 800-22. A Statistical Test Suite for Random and Pseudorandom Numbers. December 2000.
20. C. W. O'Donnell, G. E. Suh, and S. Devadas. PUF-based random number generation. MIT CSAIL Technical Memo 481, 2004.
21. F. Pareschi, G. Setti, and R. Rovatti. A fast chaos-based true random number generator for cryptographic applications. In Proceedings of the 26th European Solid-State Circuits Conference (ESSCIRC 2006), pages 130–133, Montreux, Switzerland, 19–21 September 2006.
22. S. Poli, S. Callegari, R. Rovatti, and G. Setti. Post-processing of data generated by a chaotic pipelined ADC for the robust generation of perfectly random bitstreams. In Proceedings of ISCAS, vol. IV, pp. 585–588, Vancouver, May 2004.
23. D. Schellekens, B. Preneel, and I. Verbauwhede. FPGA vendor agnostic true random number generator. To appear in the Proceedings of the 16th International Conference on Field Programmable Logic and Applications.
24. W. Schindler and W. Killmann. Evaluation criteria for true (physical) random number generators used in cryptographic applications. In B. S. Kaliski Jr., Ç. K. Koç, and C. Paar, editors, Proceedings of the Workshop on Cryptographic Hardware and Embedded Systems – CHES 2002, pages 431–449, LNCS 2523, Springer-Verlag Berlin Heidelberg, August 2002.
25. R. A. Schulz. Random number generator circuit. United States Patent, Patent Number 4905176, February 27, 1990.
26. D. R. Stinson and K. Gopalakrishnan. Applications of designs to cryptography. In C. J. Colbourn and J. H. Dinitz, editors, CRC Handbook of Combinatorial Designs, CRC Press, 1996.
27. T. Stojanovski and L. Kocarev. Chaos-based random number generators – Part I: Analysis. IEEE Transactions on Circuits and Systems – I, vol. 48, pp. 281–288, March 2001.
28. B. Sunar, W. J. Martin, and D. R. Stinson. A provably secure true random number generator with built-in tolerance to active attacks. IEEE Transactions on Computers, vol. 58, no. 1, pages 109–119, January 2007.
29. T. E. Tkacik. A hardware random number generator. In B. S. Kaliski Jr., Ç. K. Koç, and C. Paar, editors, Workshop on Cryptographic Hardware and Embedded Systems – CHES 2002, pages 450–453, Berlin, Germany, LNCS 2523, Springer-Verlag Berlin Heidelberg, 2003.
30. True random number service v2.0 beta. www.random.org.
31. S.-K. Yoo, D. Karakoyunlu, B. Birand, and B. Sunar. Improving the robustness of ring oscillator TRNGs. Preprint: http://ece.wpi.edu/~sunar/preprints/rings.pdf.
Chapter 7
Process Variations for Security: PUFs

Roel Maes and Pim Tuyls
7.1 Introduction

Process variations in deep-submicron technology usually lead to undesired effects. Manufacturers of ICs try to remove them as much as possible in order to be sure that all their devices function in the same, expected way. In this chapter, we show how the process variations which make a device unique can be used to provide new, cheap, and enhanced security functionality for the device. We identify physical unclonable functions (PUFs) based on process variations that are present on an IC and explain how they can be used to provide enhanced security features for the IC.
7.1.1 Background

The technological advances that make ICs smaller and faster drive our society towards a world that is completely dependent on information. Information on all kinds of topics, including human beings, is stored in small embedded devices like RFID tags and sensors that are becoming pervasively present in our environment. The plans to use RFID tags for connecting the virtual and the real world, the so-called Internet of Things, are taking shape and are being rolled out largely unnoticed. There is no doubt that this will lead to many benefits, that it will enable new, interesting, and useful applications, and that it will possibly enhance the experience of a richer world. The use of these new technologies also has some drawbacks that have to be taken care of. Firstly, since we are more and more dependent on the information stored in the computational devices surrounding us, it is important that we can rely on that information, i.e. that it is not compromised, e.g. changed into other information by
a person with malicious intent. On top of this, we have to be sure that the device we are communicating with or using is indeed a legitimate device and not a counterfeit or compromised one. A striking example that illustrates this is the cloning attack on bank cards: skimming. By placing a small magnetic coil in an ATM machine and a camera in the roof, the card can be copied and the PIN code stolen. Secondly, embedded devices often contain information that has a confidential character and should only be revealed to authorized persons. It is usually not realized that the information stored in those devices has a personal character, which implies that when it is compromised, it could damage the privacy of the persons involved. A third drawback comes from a completely different direction. More and more companies produce new innovative functionality based on programmable devices. Their value is in the design of the new product, not in the hardware itself. The production of the product is outsourced. This implies that, when not properly protected, their design is easily stolen or copied. Moreover, without further precautions, the product in which their design will be used is easily overbuilt. It needs no further explanation that this leads to revenue losses for the legitimate owner. On top of this, it often causes brand damage, and sometimes it has even more tragic consequences, e.g. crashing cars and planes [10].

These new and small devices, like smart cards, RFID tags, sensor nodes, and digital transponders, come with many conflicting challenges. Since they are small, they can often not store much information. Furthermore, they often have access to only a limited amount of power to carry out their functionality, and they are everywhere. This means that they can easily be captured by an attacker who can investigate them by physical means and retrieve the secrets [1, 12, 2]. At first sight, one might be tempted to think that the problem is solved by implementing the appropriate cryptographic algorithms in the devices. This is, however, not as straightforward as it seems, for several reasons. First, it is a very difficult task to make secure cryptographic algorithms that run at low power with few resources, although a lot of progress has been made in this field [5, 6, 3]. A second reason follows from the fact that the security of cryptographic algorithms is based on the secrecy of the secret key. This implies in particular that the key has to be stored in a secure way, such that it cannot be retrieved by an attacker who has access to a whole arsenal of physical tools (e.g. laser cutters, focused ion beams, and antennas). Consequently, it is a very challenging task to build secure embedded devices in a world where the black box assumption made by cryptography does not hold.

In this chapter, we present a new primitive in hardware security that exploits the uniqueness of every IC due to deep sub-micron process variations to address the physical authenticity, cloning, and secure key storage problems described above. We start with an overview of process variations in Section 7.2. Then, in Section 7.3, we introduce the concept of a physical unclonable function or PUF, illustrate the concept with several examples, and explain how a PUF has to be used. In Section 7.4 we introduce the algorithms that are needed to turn the responses of a PUF into a cryptographically secret key. Finally, in Section 7.5 we present some concrete applications.
7.2 Process Variations

Digital ICs that are produced with an identical fabrication process will not be completely identical. Production parameters and environmental factors during fabrication can only be controlled within certain boundaries, and the random fluctuations within these boundaries are called process variations. Chip manufacturers have to take extensive precautions to keep the tolerance boundaries for process variations on physical parameters as small as possible. Circuit designers, on the other hand, also have to take process variations into account, such that their designs keep functioning correctly within these boundaries, also called design corners.
Variations in the fabrication process of ICs have a direct impact on the geometry and material properties of the structures present on the IC, which in turn affect the electrical properties of these structures [4]. Because the electrical properties of the components and interconnects on the IC are key to the correct functionality of the implemented circuit, they should be controlled with great care and remain within strict tolerance boundaries. Moreover, the electrical parameters of the components and interconnects also determine other circuit properties like delay and power consumption. Table 7.1 gives an overview of the most important geometrical and material properties of MOSFET components and interconnects on an IC that can be directly affected by process variations during IC fabrication, together with the electrical parameters on which this has an (indirect) impact.

Table 7.1 Overview of the most important parameters that are directly affected by process variations, and the electrical parameters on which this has an (indirect) impact [4]

                        Components (MOSFET)                   Interconnect
Geometry                Effective channel length (L_eff)      Line width and space
                        Gate length (L)                       Metal thickness
                        Gate oxide thickness (t_ox)           Dielectric thickness
                        Component width (W)                   Contact and via size
Material parameters     Doping variations                     Contact and via resistance
                        (discrete doping variations)          Metal resistivity
                        Deposition and anneal                 Dielectric constant
Electrical parameters   Threshold voltage (V_T)               Line resistance
                        Drain-source current (I_DS)           Line capacitance
                        Parasitic capacitances
                        Input capacitance
                        Gate and source resistance
                        Leakage currents
In a well-designed circuit on an IC with process variations within the tolerance interval, there should be no noticeable effect of the process variations on the circuit's functioning or its result. In a badly designed circuit, or when the process variations become too large, a deviation from the intended functionality may occur. We assume that the occurring process variations are random for a given IC, but that they are fixed once the fabrication process is finished. Because the process
variations have a random impact on many relevant parameters of an IC, we can also assume that the total set of parameters is unique for a given IC. This leads to an unexpected but useful application of process variations. By carefully measuring a number of these parameters, which are random to a certain extent due to process variations, we obtain a dataset that is unpredictable and unique for the given IC and can therefore be used to uniquely identify it. In other words, we obtain a unique and random identification key that is inherently present in every IC.
7.3 Physical Unclonable Functions: PUFs

In this section, we explain the notion of a physical unclonable function and present some examples of PUFs that are present on ICs due to process variations. We distinguish between two classes of PUFs: (i) Intrinsic PUFs, which are intrinsically present on an IC, and (ii) PUFs that are added to an IC by adding an additional structure that is heavily influenced by process variations. The PUFs in class (i) are present in an IC due to the process variations during the manufacturing process, as described in Section 7.2.

Definition 1 A PUF is a physical structure that consists of many random, uncontrollable components, and which is therefore very hard to clone. The structure satisfies the following properties:

(i) It is easy to challenge the PUF and to measure the response of the PUF to the challenge. The combination of a challenge C and a response R, (C, R), is called a challenge-response pair (CRP).
(ii) Given a challenge C, the response R of the PUF is unpredictable.
(iii) The PUF cannot be reproduced, not even by the manufacturer: non-manufacturer reproducibility.
(iv) The PUF should be tamper evident.
(v) Preferably, the PUF is inseparably bound to the object it is trying to protect.

We explain the most important properties in more detail. Unclonability of a PUF does not necessarily mean fundamentally unclonable, as for qubits in quantum cryptography. Here we mean that it is a very hard and time-consuming task, though not impossible, to build a PUF that has the same properties, i.e. the same challenge-response behaviour, as the original one. Additionally, a mathematical model that simulates the CRP behaviour of the PUF sufficiently accurately takes a very long time to compute a response. Cryptographically speaking, we can say that making a random PUF is easy, but making a specific PUF is very difficult.

(i) Easy to challenge and to measure the response of the PUF implies that the circuitry needed for this should be small, cheap and easily integratable on an IC.
(ii) Unpredictability of a response given a challenge means that an attacker who does not have the PUF at hand should have a significant amount of uncertainty about the response R of the PUF to a given challenge C.
(iii) Non-manufacturer reproducibility means that not even the manufacturer can make a clone of the PUF: production of the PUF is not based on a secret known only to the manufacturer.
(iv) Tamper evidence: when the PUF is attacked in an invasive way, it is damaged to such an extent that its CRP behaviour changes drastically: the PUF behaves differently. This implies that under an invasive attack, the key that is extracted from the PUF will be seriously damaged or even destroyed.

Finally, we stress that there are also practicality requirements for a PUF to be really useful. The PUF should be cheap and easy to integrate into the production process of an IC. Additionally, it should have excellent mechanical and chemical properties. Finally, the PUF needs to be very robust to environmental changes such as temperature and humidity.
7.3.1 Coating PUF

Here we present a brief discussion of Coating PUFs; for a more elaborate discussion we refer to [19]. In order to build a Coating PUF, a protective coating that covers the IC is added. The coating is situated on top of the IC, i.e. on top of the passivation layer. Just underneath the passivation layer, in the top metal layer, an array of capacitive sensors is laid down in order to read out the capacitive properties of the coating. The protective coating consists of a matrix material that is doped with randomly distributed dielectric particles. The dielectric particles have random size, shape and location and a varying relative dielectric constant εr that differs from the dielectric constant of the coating matrix. A cross section of a Coating PUF can be seen in Fig. 7.1. The developed coating consists of TiO2 and TiN particles in a matrix of aluminophosphate. It has the following properties: (i) the coating is opaque; (ii) the coating is conductive and very hard; and (iii) the coating is chemically relatively inert. The PUF is challenged by applying a voltage of a certain frequency and amplitude over the plates of the capacitive sensors. The response is the capacitance value measured by the sensors due to the presence of the coating on top of them. It was shown in [19] that secure keys can be extracted from this PUF and that it is very robust to external circumstances.
7.3.2 Intrinsic PUFs

In this section, we describe PUFs that are intrinsically present in an IC, i.e. without having to add any modifications to the IC or its production process.
Fig. 7.1 Cross section of a coating PUF. The opaque coating at the top consists of random dielectric particles. The capacitance is measured between the broad metal sensors in the top layer (M5) and used as a PUF response
SRAM PUFs: An SRAM memory cell consists of two cross-coupled inverters that are connected to the outside by two additional transistors. Each inverter consists of a p-MOS and an n-MOS transistor, so in total an SRAM memory cell consists of six transistors. A schematic view of the circuit of an SRAM cell is given in Fig. 7.2. The main parameter that determines the behaviour of a transistor is the threshold voltage VT. The exact value of the threshold voltage is determined by the dimensional variability of the transistor and by variations in the doping concentrations. The latter variations have the main impact and become relatively larger as dimensions shrink. It follows that the set of all these threshold voltages determines the behaviour of the SRAM cell. Since there are two inverters
Fig. 7.2 Schematic of an SRAM cell circuit consisting of six MOSFETs. The threshold voltages of the two p-MOS transistors of the cross-coupled inverters are depicted as V1T and V2T
we will refer to them as "1" and "2" and use these names as indices of the threshold voltages V1T and V2T of the p-MOS transistors of which they consist. Let us assume for the sake of simplicity that |V1T| < |V2T|, and let us start from an initial situation in which all cell nodes are at ground. At this point in time the p-MOS transistors are in the conducting (ON) state, while the n-MOS transistors are switched off. This implies that when the supply voltage Vdd starts coming up, it first reaches |V1T| and only later |V2T|. Hence, the p-MOS transistor of the first inverter is the first to be brought into the non-conducting state. Consequently, the output of the first inverter settles low and that of the second one therefore high: the SRAM cell contains a "0". If, on the other hand, |V2T| < |V1T|, the SRAM cell starts up in a "1". We stress that this phenomenon depends on the relation between the threshold voltages V1T and V2T, which are randomly distributed due to differences in doping concentrations. By applying this start-up behaviour to all the cells in the memory and measuring the contents of the memory after start-up, one has measured the response. The challenge corresponds to the selection of memory cells of the whole memory that one considers. The response of the memory can be considered as the biometric of the SRAM memory.
In [9] this phenomenon was investigated in detail. It was shown that the noise due to temperature variations does not exceed 14%, while the difference between the SRAM memories of two different FPGAs is about 50%. These two facts imply that the intra-class variation is about 14%, while the inter-class variation amounts to 50%. Hence, two devices are clearly distinguishable, even in the presence of noise. Furthermore, it was shown that the responses of SRAM memories contain a large amount of entropy, about 95%. Consequently, the maximum length of the keys that can be extracted from an SRAM memory equals 76% of the number of memory cells in the memory.
Silicon PUFs: A Silicon PUF was, to the best of the authors' knowledge, the first example of an Intrinsic PUF reported in the literature. Details can be found in [8]. Silicon PUFs aim to exploit the statistical delay variations of transistors and wires in the IC. Delays in the basic components are due to several factors, such as the charging time of the capacitances in a transistor and the propagation time of electrical signals in conductors. These delays are typically very small, in the order of nanoseconds. In order to measure these small times sufficiently accurately to see differences between several devices and challenges, a parameterizable self-oscillating circuit consisting of switches is created. The switches are the basic delay elements and allow choosing between two possible paths. By putting n switches in series, 2^n possible paths can be selected. This construction can be seen in Fig. 7.3. Each of these 2^n paths has its characteristic delay time. Since these delay times are very small, they are measured repeatedly. Therefore an oscillating loop is constructed by adding an inverter at the end of the delay circuit and coupling the output signal back to the input, as can be seen in Fig. 7.4.
Fig. 7.3 Construction of a delay circuit as an array of switch elements. The path through all the switches can be selected by setting the challenge of the circuit
Fig. 7.4 Implementation of the measurement circuit. The delay element is placed in an oscillating loop and a counter is used to determine the frequency of the oscillation
One then measures the number of oscillations in a fixed number of clock cycles, or equivalently the frequency of the oscillating circuit. We implemented this PUF construction on a Xilinx Spartan-3 (XC3S200) FPGA, and thousands of different challenges were repeatedly applied and measured. The same experiment, with the same challenges, was repeated on 20 different FPGAs of the same type. An overview of the measurement results can be seen in Fig. 7.5, which shows a histogram plot of the inter-class and intra-class distances. The inter-class distances are the differences between the responses of pairs of identically challenged PUFs on different devices; it is clear that these differences are caused by process variations. The intra-class distances are the differences between consecutive responses of one identically challenged PUF and are hence caused by noise. From the plot it becomes clear that on average the inter-class distance, or the distinguishability of two PUFs, is two orders of magnitude larger than the intra-class distance or noise. There is however a non-negligible overlap between the two histograms, meaning that in some cases the noise might be large enough to prevent a positive identification of a PUF based on that measurement. This noise is dealt with by applying appropriate post-processing.
It is important to compute the number of CRPs that are at least needed to extract a key of a certain length. Therefore we need a method to predict the average amount of
key material that can be derived from one CRP, given the inter-class and intra-class distance quantities of the measurements. Information theory provides the necessary tools to do this. The distinguishability of different PUFs is firstly determined by the inter-class distance of their reference measurements. If the reference measurements on equal challenges on different PUFs are well separated, it is easy to tell the PUFs apart. Such CRPs thus provide a lot of information about the identity of the PUF. This notion of amount of provided information is quantified in terms of entropy and mutual information. Suppose that the inter-class distribution of the PUF responses (over all PUFs) for a fixed challenge C is given by PX (X ) where X is the random variable referring to the PUF response. The entropy, or the average amount of information measured in bits, provided by this CRP is given by
H(X) = -\sum_{i=1}^{n} P_X(x_i) \cdot \log_2 P_X(x_i)
with n the number of possible reference responses x_i to the challenge C. In an ideal case, this would be the average amount of information each CRP can provide. Due to noise this number decreases further. Noise will prevent a PUF from reconstructing the exact reference measurements from which the key is derived; only noisy approximations of the reference measurement can be obtained. A measure for this noise is given by the intra-class distances. The mutual information measures on average how much of the information present in the reference measurement is still present in the noisy approximations. Suppose we denote a particular (noisy) measurement for a given challenge on a random PUF as a random variable Y, and say Y is distributed over all PUFs according to P_Y(Y). The joint distribution of Y and the reference measurement X is given by P_{X,Y}(X, Y). The mutual information between X and Y is then given by
I(X;Y) = \sum_{i=1}^{n} \sum_{j=1}^{m} P_{X,Y}(x_i, y_j) \cdot \log_2 \frac{P_{X,Y}(x_i, y_j)}{P_X(x_i) P_Y(y_j)} = H(X) - H(X|Y)
Note that when X and Y are completely independent, the mutual information becomes zero. On the contrary, when Y completely determines X (the noiseless case), I(X;Y) = H(X). It was shown in [18] that the mutual information equals the secrecy capacity C_S of a PUF, i.e. the maximum number of secret bits that can be extracted from the PUF. It is well known that the secrecy capacity C_S of a Gaussian channel is given by [18]

I(X;Y) = C_S = \frac{1}{2} \log_2\left(1 + \frac{\sigma_{\text{inter-class}}^2}{\sigma_{\text{intra-class}}^2}\right)    (7.1)
with σ_inter-class and σ_intra-class the respective standard deviations of the inter-class and intra-class distances. As mentioned above, the secrecy capacity C_S gives an upper bound on the average number of bits that can be securely extracted from one CRP. Actual extraction techniques, e.g. those proposed in [7], will always remain a considerable amount below this limit. Using Equation (7.1) and the data from Fig. 7.5, the secrecy capacity of one CRP of a silicon PUF is estimated: one delay measurement contains in this case on average 4.8 information bits. The number of actual bits that can be extracted per challenge will be smaller. Known implementations can efficiently extract up to 2.2 bits per measurement on this type of delay PUF [14].
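Equation (7.1) is straightforward to apply to measured data. The snippet below is a minimal sketch of such an estimate, not the analysis performed by the authors; the function name and the synthetic standard deviations are assumptions chosen only to yield a capacity in the order of the 4.8 bits per CRP quoted above.

```python
import numpy as np

def secrecy_capacity(inter_class_dist, intra_class_dist):
    """Estimate the secrecy capacity (bits per CRP) from measured
    inter-class and intra-class distances, using Equation (7.1)."""
    sigma_inter = np.std(inter_class_dist)
    sigma_intra = np.std(intra_class_dist)
    return 0.5 * np.log2(1.0 + (sigma_inter / sigma_intra) ** 2)

# Synthetic example: the noise (intra-class) spread is roughly 25 times
# smaller than the inter-class spread.
rng = np.random.default_rng(0)
inter = rng.normal(0.0, 1.0, 10_000)   # distances between different PUFs
intra = rng.normal(0.0, 0.04, 10_000)  # repeated measurements on one PUF
print(f"estimated secrecy capacity: {secrecy_capacity(inter, intra):.1f} bits/CRP")
```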
Fig. 7.5 Histogram of the intra-class vs. the inter-class distances of the measurements of a silicon delay PUF. The inter-class distances are calculated as the distance between the reference measurements of all possible pairs of PUFs. The intra-class distances are calculated as the distance between a single measurement and the reference measurement for that challenge on that PUF. The measurements already went through some post-processing but are still in analog form, hence the use of Euclidean distances. Also note the logarithmic distance scale
However, silicon PUFs are sensitive to environmental variations like temperature and voltage. Therefore, Lim et al. [13] introduced the concept of the arbiter-based PUF, which uses a differential structure – two identical delay paths – and an arbiter to detect the difference in delay between the paths. While the silicon PUF based on an oscillating circuit has an analog output signal, the arbiter-based implementation has a binary output by nature.
7.3.3 How to Use a PUF

It follows from the explanations above that the responses of a PUF can be considered as the biometric identifiers of an IC. As with biometrics, one first has to register/enrol a PUF before it can be used. Therefore we distinguish two phases: the enrolment phase and the key reconstruction or verification phase. During the enrolment phase, a number of CRPs of the PUF are measured. The responses are then processed and stored together with a publicly known identification code of the PUF and possibly a set of helper data (see Section 7.4) needed to extract a secret key from the responses. This happens during production; some level of trust is required in the party carrying out this enrolment procedure. During the key reconstruction phase, the PUF reconstructs its key in order to carry out some security functionality such as encryption, signing or proving its identity. Using the provided helper data, the legitimate PUF is able to extract the secret key from its unique response to the applied challenge.
7.4 Helper Data Algorithm or Fuzzy Extractor

Two problems have to be solved before the responses of a PUF are usable for cryptographic purposes. Firstly, the responses of a PUF are noisy. Since the slightest error at the input of a cryptographic function creates an avalanche of errors at the output, some processing has to be performed to remove the noise. Secondly, the responses of a PUF are usually not uniformly random from an attacker's point of view. An attacker may thus have a significant advantage in guessing unknown response values by taking the most likely value of the distribution. In a uniform distribution, there is no most likely value, and hence no such advantage for an attacker. Therefore, additional processing is needed to transform the (possibly unknown) distribution of the physical measurements into a uniformly distributed response value.
7.4.1 Information Reconciliation

The process of eliminating the noise present on the physical measurements is called Information Reconciliation. We begin with a conceptual description of a Secure Sketch as defined in [7]. A Secure Sketch of a PUF response is in essence a map of the response X onto an element W of a set of Helper Data, with the following two important properties:

(i) There exists a deterministic recovery function Rec that can recover X from the Helper Data W and a noisy version Y of the response, as long as the distance between X and Y remains below a certain bound, i.e. the noise level is not too high. If the distance exceeds this bound, correct recovery of the response can no longer be guaranteed.
(ii) The amount of uncertainty about an unknown response X decreases by only a limited amount when the Helper Data W for that response is known, i.e. the Helper Data W reveals only a limited amount of information about the response from which it was deduced. This loss of useful uncertainty is unavoidable.
The first property makes it possible to transform a noisy response Y into the correct reference response X from which a cryptographic key is derived, using the Helper Data W. The second property states that W reveals only a limited amount of information about the response X, and thus about the derived key, and may therefore be publicly presented to the recovery function. This means that Helper Data can be stored in public non-volatile memory of an IC, or be sent over public channels, without the risk of compromising the secret key. In [7], a couple of practical implementations of Secure Sketches are given for different distance measures. Because cryptographic keys are mostly represented as bitstrings, an implementation for the Hamming distance is most useful. This construction is based on the use of error correcting codes (ECC). More details on this construction can be found in [7].
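To make the idea behind such a Hamming-distance Secure Sketch concrete, the following toy sketch implements the well-known code-offset construction with a simple repetition code. It is an illustration only, not the construction of [7]: a real design would use a stronger block code (e.g. a BCH code) and a carefully chosen code rate.

```python
import numpy as np

R = 5  # repetition factor; this toy code corrects up to 2 bit errors per block

def sketch(x):
    """Enrolment: map the reference response x (bit array, length divisible
    by R) onto helper data w = x XOR c, with c a random repetition-code word."""
    rng = np.random.default_rng()
    k = rng.integers(0, 2, size=len(x) // R, dtype=np.uint8)  # random message
    c = np.repeat(k, R)                                        # code word
    return x ^ c                                               # helper data w

def recover(y, w):
    """Reconstruction: recover x from a noisy response y and helper data w."""
    noisy_c = y ^ w                                            # noisy code word
    k_hat = (noisy_c.reshape(-1, R).sum(axis=1) > R // 2).astype(np.uint8)
    c_hat = np.repeat(k_hat, R)                                # decoded code word
    return c_hat ^ w                                           # recovered reference x

# toy usage: two bit errors falling in different blocks are corrected
rng = np.random.default_rng(1)
x = rng.integers(0, 2, size=40, dtype=np.uint8)
w = sketch(x)
y = x.copy(); y[[3, 17]] ^= 1                                  # noisy re-measurement
assert np.array_equal(recover(y, w), x)
```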
7.4.2 Privacy Amplification

Another important post-processing step, besides noise removal, is to guarantee a uniform distribution of the derived key bits. This process is called Privacy Amplification. The physical measurement X produced by the PUF construction contains some degree of randomness, but nothing can be assumed about the uniformity of the distribution of this random variable X. At best, an approximation of the underlying distribution can be obtained by collecting many samples on different PUFs. We therefore need a procedure to extract the uniform randomness from the random measurement X. In [7] a Strong Extractor is defined as a probabilistic function Ext : {0,1}^n → {0,1}^ℓ with the property that it can transform a random variable X with a possibly unknown distribution into a new random variable K with a distribution very close to the uniform distribution. The actual statistical distance from uniformity that can be achieved is constrained by the collision probability of X [15] and the length ℓ of the derived random variable K. The security of this construction follows from the leftover hash lemma [15]. Strong extractors can be implemented as pairwise independent universal hash functions. A hash function has to be selected in a uniformly random way; additional Helper Data S contains the seed of the hash function used for one or more CRPs and should also be presented during the reconstruction phase. A non-uniform selection of a hash function, or even using the same hash function over and over, gives only a minor deviation from uniformity in the distribution of K and can be tolerated.
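One common instantiation of such a universal hash family uses a random binary Toeplitz matrix defined by a public seed. The sketch below is a simplified illustration of this idea, not the extractor construction of [7]; the matrix is built explicitly, which is only practical for small response lengths.

```python
import numpy as np

def toeplitz_extract(x_bits, seed_bits, out_len):
    """Privacy amplification sketch: compress n response bits into out_len
    near-uniform key bits with a binary Toeplitz matrix defined by the public
    seed of n + out_len - 1 bits (the Helper Data S)."""
    n = len(x_bits)
    T = np.empty((out_len, n), dtype=np.uint8)
    for i in range(out_len):
        # row i contains seed bits i .. i+n-1 in reverse order, so that the
        # matrix is constant along its diagonals (Toeplitz structure)
        T[i] = seed_bits[i:i + n][::-1]
    return (T.astype(int) @ x_bits) % 2        # matrix-vector product over GF(2)

rng = np.random.default_rng(2)
x = rng.integers(0, 2, size=128, dtype=np.uint8)              # reconciled response
seed = rng.integers(0, 2, size=128 + 64 - 1, dtype=np.uint8)  # public seed S
key = toeplitz_extract(x, seed, out_len=64)                   # 64 extracted key bits
```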
7.4.3 Fuzzy Extractor

A Fuzzy Extractor is a primitive that combines Information Reconciliation and Privacy Amplification. We give an informal definition. A Fuzzy Extractor is defined [7] by two procedures, Gen and Rep, with the following properties:

(i) Gen is a probabilistic procedure that takes as input a reference measurement X and generates an extracted secure key K and a public set of Helper Data P. The distribution of K is uniform, and P leaks only a limited amount of information about X and K.
(ii) Rep is a deterministic procedure that can reproduce K from Y, a noisy version of X, and the corresponding set of Helper Data P, provided that the distance between X and Y is bounded.

Using the properties of a Secure Sketch for Information Reconciliation and of a Strong Extractor for Privacy Amplification, a Fuzzy Extractor can be constructed by combining the functions of noise removal and randomness extraction (see Fig. 7.6). Both primitives produce some amount of Helper Data in the enrolment phase which they require again in the reconstruction phase.
Fig. 7.6 Construction of a Fuzzy Extractor out of a Secure Sketch and a Strong Extractor. The Helper Data P = [W, S] that is produced during the enrolment is needed again in the reconstruction. This data can however easily and publicly be transferred because it leaks only limited information about the reference measurement X and the extracted key bits K
The Helper Data is unique for a CRP on a particular PUF, but it does not contain any substantial information about the PUF response and can hence be publicly presented to the PUF. This means that it can be permanently stored in some non-volatile memory on the device containing the PUF, or sent to the device together with the challenge over public channels.
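Composing the two previous sketches gives a toy Gen/Rep pair in the spirit of Fig. 7.6. This is again only an illustration of how a Secure Sketch and a Strong Extractor fit together; the names sketch, recover and toeplitz_extract refer to the simplified constructions shown earlier and not to the constructions of [7].

```python
import numpy as np

def gen(x, key_len=64):
    """Enrolment (Gen): produce a key K and public helper data P = [W, S]."""
    rng = np.random.default_rng()
    w = sketch(x)                                       # secure-sketch helper data W
    s = rng.integers(0, 2, size=len(x) + key_len - 1, dtype=np.uint8)  # hash seed S
    k = toeplitz_extract(x, s, key_len)
    return k, (w, s)

def rep(y, helper, key_len=64):
    """Reconstruction (Rep): reproduce K from a noisy response y and P."""
    w, s = helper
    x_hat = recover(y, w)                               # information reconciliation
    return toeplitz_extract(x_hat, s, key_len)          # privacy amplification

# toy usage: the same key is reproduced despite a few bit errors
rng = np.random.default_rng(3)
x = rng.integers(0, 2, size=160, dtype=np.uint8)
k, p = gen(x)
y = x.copy(); y[[0, 42, 101]] ^= 1
assert np.array_equal(rep(y, p), k)
```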
7.4.4 Quantization

The implementation of a Secure Sketch for Hamming distances in [7] using block codes already assumes that the PUF produces digital bitstring responses. This is not always the case: for Silicon PUFs that measure delays with an oscillating-loop construction as in Fig. 7.4, the response is a continuous value, and the same holds for Coating PUFs. These analog PUF responses require an additional quantization step to obtain a bitstring. A well-considered quantization technique can considerably reduce the requirements on the Information Reconciliation and Privacy Amplification.
Noisy analog measurements can produce bit errors during quantization. This is especially true when the reference measurement lies close to the border of a quantization interval; in that case, even a small amount of noise can push the measurement to the wrong side of the quantization border, causing one or multiple bit errors. By shifting all reference measurements to the middle of their quantization intervals, as in [19], the probability that this happens is minimized. This shift has to be applied during the reconstruction phase as well and therefore should be added to the Helper Data.
When measurements on a large number of PUFs are collected, an approximation of the underlying distribution can be made. In many cases the distribution will turn out to be Gaussian, and the mean and standard deviation can be estimated over a number of samples. When a good approximation is obtained, the sizes of the quantization intervals can be adapted such that each interval captures an equally likely part of the distribution. In that case the quantized values will already be near-uniform, possibly reducing the need for Privacy Amplification.
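The following sketch illustrates both ideas for a Gaussian-distributed analog response: equiprobable quantization intervals derived from the estimated mean and standard deviation, and a per-challenge offset (stored as helper data) that moves the enrolment measurement to the centre of its interval. It is a simplified illustration assuming SciPy is available, not the scheme used in [19]; the response statistics are invented for the example.

```python
import numpy as np
from scipy.stats import norm

def make_boundaries(mu, sigma, bits):
    """Equiprobable quantization boundaries for a Gaussian response."""
    q = np.linspace(0, 1, 2 ** bits + 1)[1:-1]          # interior quantiles
    return norm.ppf(q, loc=mu, scale=sigma)

def enrol(x, bounds):
    """Return the quantized symbol and the offset (helper data) that moves
    the reference measurement x to the centre of its interval."""
    idx = int(np.searchsorted(bounds, x))
    lo = bounds[idx - 1] if idx > 0 else x - 1.0         # outer intervals are open;
    hi = bounds[idx] if idx < len(bounds) else x + 1.0   # use a nominal width there
    return idx, (lo + hi) / 2.0 - x

def reconstruct(y, offset, bounds):
    """Quantize a noisy measurement y after applying the stored offset."""
    return int(np.searchsorted(bounds, y + offset))

mu, sigma, bits = 10.0, 2.0, 2                           # assumed response statistics
b = make_boundaries(mu, sigma, bits)
sym, off = enrol(9.3, b)                                 # enrolment measurement
assert reconstruct(9.3 + 0.2, off, b) == sym             # small noise is tolerated
```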
7.5 Applications

In this section, we discuss the following applications: secure key storage/generation in ICs and the protection of Intellectual Property (IP). In this context, IP has to be understood as designs of ICs or as the programs (bitstreams) of programmable devices such as FPGAs.
7.5.1 Secure Key Storage

Instead of storing a key in secure non-volatile memory, a key can be "stored", or rather generated, on an IC as follows. We describe the situation for symmetric keys, but an asymmetric case is also possible [17]. We assume here that the factory where the ICs are produced is trusted. If this is not the case, the enrolment phase for
key generation has to be carried out at some other place in the chain. During the enrolment phase, the key K and the corresponding helper data P are generated on the IC using the algorithms of Section 7.4. The helper data is then stored in the non-volatile memory present on the IC. Later, during the key reconstruction phase, the IC probes the PUF when it needs the key K. Next, it loads the helper data P from the non-volatile memory. Then, it combines the obtained response and the Helper Data using the Fuzzy Extractor and reproduces the original key K, which is kept in some volatile memory for as long as needed. Finally, the key is removed again from the device. We note that this implies that the key is not permanently present in the device; it is only present when needed. This makes the key much less vulnerable than when it is stored in secure non-volatile memory. For PUFs that are damaged by a physical attack, this furthermore implies that the key is destroyed when the IC is attacked, so that the security of the IC is not compromised.
7.5.2 IP Protection

IP stands for Intellectual Property and has to be understood in this context as designs or configuration files (bitstreams) of programmable devices. A lot of intellectual effort has been spent to develop these, so for the design house there is a lot of value in the functionality implemented by the bitstreams. On programmable devices like SRAM FPGAs¹, on which we focus here, these bitstreams are stored in some external non-volatile memory. At start-up the bitstream is loaded onto the FPGA, which then starts carrying out its functionality. During loading of the bitstream, a technically competent attacker can tap the bitstream with a logic analyser or oscilloscope and program it into other FPGAs of the same family. This means that he has in fact copied the product, i.e. the FPGA with its functionality. Due to this copying the original (legitimate) design house loses revenue. Moreover, since the copied FPGA is often used in a product of lower quality, the copying may also cause brand damage. This so-called cloning attack is very dangerous since it is so easy to carry out.
In the literature some methods are mentioned to thwart this attack. The most common one is based on bitstream encryption [11]. As the name suggests, in this method the bitstream is stored in encrypted form in the external memory. In order to be usable, the decryption key has to be stored in the FPGA, which requires some non-volatile memory on the FPGA. There are several ways to provide this: (i) adding non-volatile memory to the FPGA or (ii) adding a battery to turn volatile into non-volatile memory. Both solutions however come with added cost, and solution (ii) also suffers from reliability problems.
The key storage problem can be solved in a cheap way by using an Intrinsic PUF (SRAM PUF or Silicon PUF) on the FPGA for key storage [16]. To this end two IP blocks have to be used: an enrolment block and a key reconstruction block.
¹ We remind the reader that SRAM FPGAs constitute the major part of the market.
The key reconstruction block implements the reconstruction part of a Fuzzy Extractor. During the enrolment phase, the PUF is read out and its response is turned into a key K and Helper Data P using the algorithms of Section 7.4. This has to be carried out in a trusted way. With the key reconstruction block embedded in the FPGA design, the following procedure is carried out. The IP vendor stores the appropriate helper data P in the external memory next to the (encrypted) bitstream. When the FPGA starts, first the PUF is read out and the helper data is loaded from the external memory. The PUF response and the helper data are then combined to reconstruct the key K. When the encrypted bitstream is loaded onto the FPGA, it is first decrypted using the key K; after decryption the FPGA is configured. Using a logic analyser, an attacker can still tap the bitstream and copy it onto another FPGA. Since the new FPGA has a different intrinsic PUF on board, it will, with the same helper data P, reconstruct a key K′ ≠ K. Hence, decryption will fail and the FPGA will not carry out the expected and desired functionality. In this way the bitstream, or IP, is bound to one specific FPGA.
Alternatively, the Intrinsic PUF can be used purely as an identifier to bind the bitstream to the FPGA, by verifying whether the correct PUF is present. The PUF response obtained during enrolment is then stored in an authenticated way in the external memory, and during start-up only a verification check is performed. In this way the IP is bound to the FPGA, but confidentiality is not guaranteed to the same extent as when bitstream encryption with PUFs is applied. This solution is called node locking.
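The encrypted-bitstream variant of this flow can be summarized in a few lines of code. The sketch below reuses the toy gen/rep Fuzzy Extractor from Section 7.4 and uses AES-GCM from the third-party cryptography package as a stand-in for whatever authenticated bitstream encryption an FPGA family actually provides; the simulated SRAM response and the bit-flip noise are invented for the example.

```python
import numpy as np
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def bits_to_bytes(bits):
    return np.packbits(np.asarray(bits, dtype=np.uint8)).tobytes()

def enrol_device(puf_response, bitstream):
    """Trusted enrolment: derive key and helper data, encrypt the bitstream."""
    key_bits, helper = gen(puf_response, key_len=128)      # toy Gen from Section 7.4
    nonce = np.random.default_rng().bytes(12)
    ct = AESGCM(bits_to_bytes(key_bits)).encrypt(nonce, bitstream, None)
    return helper, nonce, ct                               # stored in external memory

def boot_fpga(puf_response, external_memory):
    """Field operation: reconstruct the key from the (noisy) PUF and decrypt.
    On a cloned FPGA the PUF response differs, a wrong key is reconstructed
    and the authenticated decryption fails."""
    helper, nonce, ct = external_memory
    key_bits = rep(puf_response, helper, key_len=128)      # toy Rep from Section 7.4
    return AESGCM(bits_to_bytes(key_bits)).decrypt(nonce, ct, None)

rng = np.random.default_rng(4)
response = rng.integers(0, 2, size=640, dtype=np.uint8)    # simulated SRAM start-up bits
ext_mem = enrol_device(response, b"example bitstream")
noisy = response.copy(); noisy[::100] ^= 1                 # a few start-up bit flips
assert boot_fpga(noisy, ext_mem) == b"example bitstream"
```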
7.6 Conclusions

Process variations are normally considered to be a major problem in IC manufacturing. In this chapter, however, we described an interesting application that benefits from the unique process variations that are present in each IC. A physical unclonable function or PUF is a recently developed cryptographic primitive that is based on unpredictable parameters of a physical system. In the case of an IC, random variations of parameters are intrinsically present due to process variations and can hence be used as a PUF source. We demonstrated the feasibility of Intrinsic PUFs based on circuit delay (Silicon PUFs) and on start-up values of SRAM cells (SRAM PUFs) by discussing the obtained results. In the second part of this chapter we gave an overview of the Helper Data Algorithms that are needed to extract a secure cryptographic key from a CRP. Finally, important applications of intrinsic PUFs were discussed: Secure Key Storage and IP Protection.
References

1. R. J. Anderson and M. G. Kuhn. Low cost attacks on tamper resistant devices. In Proceedings of the 5th International Workshop on Security Protocols, pages 125–136, London, UK, 1998. Springer-Verlag.
2. E. Biham and A. Shamir. Differential fault analysis of secret key cryptosystems. In CRYPTO '97: Proceedings of the 17th Annual International Cryptology Conference on Advances in Cryptology, pages 513–525, London, UK, 1997. Springer-Verlag.
3. A. Bogdanov, L. R. Knudsen, G. Leander, C. Paar, A. Poschmann, M. J. B. Robshaw, Y. Seurin, and C. Vikkelsoe. PRESENT: An ultra-lightweight block cipher. In CHES, pages 450–466, 2007.
4. D. S. Boning and S. R. Nassif. Models of process variations in device and interconnect. In A. Chandrakasan and B. Bowhill, editors, Design of High Performance Microprocessor Circuits. IEEE Press, 2000.
5. C. D. Canniere. Trivium: A stream cipher construction inspired by block cipher design principles. In ISC, pages 171–186, 2006.
6. C. D. Canniere and B. Preneel. Trivium specifications.
7. Y. Dodis, L. Reyzin, and A. Smith. Fuzzy extractors: How to generate strong keys from biometrics and other noisy data. In EUROCRYPT, pages 523–540, 2004.
8. B. Gassend, D. E. Clarke, M. van Dijk, and S. Devadas. Silicon physical unknown functions. In V. Atluri, editor, ACM Conference on Computer and Communications Security — CCS 2002, pages 148–160. ACM, November 2002.
9. J. Guajardo, S. S. Kumar, G. J. Schrijen, and P. Tuyls. FPGA intrinsic PUFs and their use for IP protection. In CHES, pages 63–80, September 2007.
10. D. M. Hopkins, L. T. Kontnik, and M. T. Turnage. Counterfeiting Exposed: Protecting Your Brand and Customers. Business Strategy. Wiley, 2003.
11. T. Kean. Secure configuration of field programmable gate arrays. In FPL '01: Proceedings of the 11th International Conference on Field-Programmable Logic and Applications, pages 142–151, Springer-Verlag, London, UK, 2001.
12. P. C. Kocher, J. Jaffe, and B. Jun. Differential power analysis. In CRYPTO '99: Proceedings of the 19th Annual International Cryptology Conference on Advances in Cryptology, pages 388–397, Springer-Verlag, London, UK, 1999.
13. D. Lim, J. W. Lee, B. Gassend, G. E. Suh, M. van Dijk, and S. Devadas. Extracting secret keys from integrated circuits. IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 13(10):1200–1205, October 2005.
14. R. Maes, P. Tuyls, and I. Verbauwhede. Statistical analysis of silicon PUF responses for device identification. In Workshop on Secure Component and System Identification — SECSI 2008.
15. V. Shoup. A Computational Introduction to Number Theory and Algebra. Cambridge University Press, June 2005.
16. E. Simpson and P. Schaumont. Offline hardware/software authentication for reconfigurable platforms. In L. Goubin and M. Matsui, editors, Cryptographic Hardware and Embedded Systems — CHES 2006, LNCS 4249, pages 311–323. Springer, October 10–13, 2006.
17. P. Tuyls and L. Batina. RFID-tags for anti-counterfeiting. In D. Pointcheval, editor, Topics in Cryptology — CT-RSA 2006, LNCS, Springer-Verlag, San Jose, USA, February 13–17, 2006.
18. P. Tuyls and J. Goseling. Capacity and examples of template-protecting biometric authentication systems, 2004.
19. P. Tuyls, G.-J. Schrijen, B. Skoric, J. van Geloven, N. Verhaegh, and R. Wolters. Read-proof hardware from protective coatings. In Cryptographic Hardware and Embedded Systems — CHES 2006, LNCS 4249, pages 369–383, Springer, October 10–13, 2006.
Part III
Design Methods for Security
Chapter 8
Side-Channel Resistant Circuit Styles and Associated IC Design Flow

Kris Tiri
8.1 Introduction

The supply current variations, which are being analyzed to find the secret information, are the aggregated effect of the supply current variations of the individual switching logic gates that make up the microcontroller- or ASIC-based encryption system under attack. The fundamental reason that information leaks through the power supply is that the logic gates have an asymmetric power consumption. Indeed, as discussed in Section 2.1, only when the output of a logic gate makes a 0–1 transition does a current come from the power supply and charge the output capacitance. In all other cases, no or only a limited amount of energy (due to short circuit or leakage) is consumed from the power supply. Hence, by observing the supply current, one has information on the switching event and the state of the logic gate.
This chapter discusses a mitigation strategy that flattens the power consumption by using logic gates that have a constant power consumption. A major advantage of this approach is that it is independent of the cryptographic algorithm or arithmetic implemented and that there is no need to train the VLSI designer to become a security expert or vice versa. Since the power dissipation of the smallest building block is constant and independent of the signal activity, no information is leaked through the power supply regardless of the algorithm, the implementation, and the experience of the digital designer.
An alternative mitigation strategy, which has the same advantages, is to randomize the power consumption by using masked logic styles that employ a random mask bit to equalize the output transition probabilities of each logic gate [1–3]. This technique, which is discussed in more detail in Section 9.1, randomizes the signal values at the internal circuit nodes while still producing the correct ciphertext. Recent developments, however, have exposed several vulnerabilities of the technique for which mitigation strategies do not come cheap [4–6].
K. Tiri (B) Work done while at University of California, Los Angeles e-mail:
[email protected]
The remainder of this chapter is organized as follows. The next section introduces the requirements for logic styles to have transition-independent power consumption. Section 8.3 first describes how such a logic style can be built using a regular static CMOS standard cell library and then presents how regular place and route tools can be used to meet the constraints posed on the interconnect capacitances. In Section 8.4, the results of a prototype IC, which has been fabricated to validate the presumptions, are discussed.
8.2 Requirements for Transition-Independent Power Consumption

When logic values are represented by charging and discharging capacitances, as is the case in traditional static complementary CMOS logic, the requirements for transition-independent power consumption are that (1) a logic gate has one and only one switching event per clock cycle, independent of the input signals, and (2) the logic gate charges the same capacitance value for each potential switching event. In other words, even though a gate might not change state and even though different physical capacitances might be switched, the logic gate must have the unique property of charging, in every clock cycle, a total capacitance with a constant value.
Alternatively, any logic style that continuously draws a current from the supply and encodes its state in the path the current takes also has transition-independent power consumption. This type of logic is referred to as current mode logic and has been proposed to mitigate power analysis [7–9]. Aside from the challenges associated with building perfectly constant current sources, the major drawback is the static power consumption, which makes this logic style unacceptable for embedded, battery-operated devices.
8.2.1 Single Switching Event per Clock Cycle

Implementing a dual rail with precharge logic, which is also known as dynamic and differential logic, meets the first requirement: it has a switching factor of 100% and does not suffer from glitching. Glitches are the spurious signal transitions of a logic gate caused by differences in input signal arrival times; they have been exploited to mount the first attacks against the masked logic styles that randomize the signal values at the internal circuit nodes. The dynamic behavior breaks the dependence on the input sequence, while the differential behavior breaks the dependence on the input value. Table 8.1 shows the output events for dual rail with precharge logic in the form of a truth table. A dynamic logic style alternates precharge and evaluation phases, in which the output is precharged to 1 and conditionally evaluated to 0, respectively. A differential logic style holds two output signals with opposite polarity.
Table 8.1 Truth table of the dual rail with precharge inverter. Each row shows the true inputs (in) and false inputs (in̄) in two consecutive evaluation cycles i and i+1, together with the true output (out) and false output (out̄) in the alternating precharge (pre) and evaluation phases

 in_i  in_i+1 | in̄_i  in̄_i+1 | out:  pre  i  pre  i+1 | out̄:  pre  i  pre  i+1
  0      0    |  1      1     |        1   1   1    1  |         1   0   1    0
  0      1    |  1      0     |        1   1   1    0  |         1   0   1    1
  1      0    |  0      1     |        1   0   1    1  |         1   1   1    0
  1      1    |  0      0     |        1   0   1    0  |         1   1   1    1
As a result, the combination of dynamic and differential logic will evaluate exactly one of the two precharged output nodes to 0 in order to generate a complementary output, and this is independent of the input value. During the subsequent precharge phase, the discharged node is charged again, and this is independent of the input sequence. Note that many self-timed asynchronous design techniques are actually based on some form of dual rail with precharge logic and have been proposed to mitigate power analysis [10–13]. Their terminology refers to dual rail encoded data, in which code words are interleaved with spacers. The code words can be seen as the differential data in the evaluation phase, and the spacers as the precharge values in the precharge phase.
8.2.2 Same Capacitance Value for Each Switching Event

To fulfill the second requirement, the load at the two differential output nodes should be identical, such that independent of which output node switches, the same capacitance value is charged (see Fig. 8.1). Both capacitances are identical if all their components, which are the intrinsic output capacitances of the logic gate, the interconnect capacitances and the input capacitances of the following logic gates, are balanced.
Fig. 8.1 Unbalanced load capacitance (left); balanced load capacitance (right)
In addition to the intrinsic input and output capacitances, the total charge coming from the power supply depends on the precise combination of parasitic capacitances internal to the gate that have been discharged during the evaluation phase. This phenomenon is known as memory effect. Another phenomenon that enables power analysis is early propagation, which is caused by the fact that the evaluation instant depends on arrival time and on the exact combination of the input signals [14]. Several custom logic cells, each with their own merit, have been proposed to address some or all of these issues. Sense amplifier-based logic (SABL) and dynamic current mode logic (DyCML) are two examples [15, 16]. SABL for instance uses advanced circuit techniques to guarantee that the load capacitance due
to the logic gate has a constant value. The intrinsic capacitances at the differential in- and output signals are symmetric and it discharges and charges the sum of all the internal node capacitances. In addition, it can be constructed to ensure that no evaluation will start before all inputs are stable and complementary.
8.2.3 Capacitance Matching Precision

Custom logic cells not only incur the development cost of a custom-designed cell library, but often also induce challenges during the backend, such as connection restrictions between gate types, stringent requirements on signal arrival times, or precharge signal distribution to each gate. Consequently, it is important to understand the required precision of the capacitance matching. It is unwise to focus on the quality and optimizations of the logic style while ignoring the matching precision of the interconnect capacitance. Figure 8.2 plots the capacitances at the true signal nets versus the capacitances at the corresponding false signal nets, both for the intrinsic input capacitances of differential gates built using a regular static CMOS standard cell library and for the interconnect capacitances, extracted with HyperExtract, of a DES Sbox with key addition implemented with these differential gates. The figure shows that the variation of the input capacitances is much smaller than the variation of the interconnect capacitances. This means that one first has to gain full control over the interconnect capacitances before using specialized cells.
Fig. 8.2 Capacitances at true signal nets versus capacitances at corresponding false signal nets
8.3 Secure Digital Design Flow

The secure digital design flow builds on the principles discussed in the previous section. It incorporates two key modifications into the backend of a regular design flow to flatten the power consumption of a design: it uses ordinary static CMOS standard cells to build compound logic cells that have dual rail with precharge behavior, and it tweaks commercially available place and route tools to control the interconnect capacitances.
8.3.1 Wave Dynamic Differential Logic

Wave dynamic differential logic (WDDL) can be implemented with static complementary CMOS logic [17]. Ordinary standard cells are combined to form compound logic gates with dual rail with precharge behavior. The logic gates can be readily implemented from an existing standard cell library and are thus fully supported by accurate EDA library files from the library vendor. WDDL also results in a dynamic differential logic with only a small load capacitance on the precharge control signal and with the low power consumption and the high noise margins of static CMOS. Furthermore, since the gates do not precharge in parallel, it benefits from a low supply current derivative di/dt and a low peak supply current.
A WDDL gate is constructed by combining two positive complementary gates, one calculating the true output using the true inputs, the other the false output using the false inputs. The positive characteristic of the gates produces the precharge behavior, as a positive operator produces a zero output for an all-zero input. The AND gate and the OR gate are examples of positive gates. The complementary characteristic of the gates produces the dual rail behavior, as a complementary operator, sometimes also referred to as a dual operator, expresses the false output of the original operator using the false inputs of the original operator. The AND gate fed with the true input signals and the OR gate fed with the false input signals are two dual gates. As an example, Fig. 8.3 shows the WDDL AND gate, which is a compound of a static CMOS AND gate and OR gate, and the WDDL OR gate, which is a compound of a static CMOS OR gate and AND gate. In the evaluation phase, the input signals (a,ā) and (b,b̄) are differential and the WDDL gates calculate the differential outputs (and, nand) and (or, nor). In the precharge phase, the input signals (a,ā) and (b,b̄) are set at (0,0), resulting in the outputs of the gates being at (0,0).
Fig. 8.3 Wave dynamic differential logic: compound gate composition
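The compound-gate behaviour of Fig. 8.3 is easy to check at the Boolean level. The following sketch models the WDDL AND and OR gates and verifies that, for every possible differential input, the two output wires are complementary in the evaluation phase and both 0 in the precharge phase, so that exactly one output node is charged per cycle. This is only a functional illustration, not a power model of the actual cells.

```python
from itertools import product

def wddl_and(a, a_n, b, b_n):
    """WDDL AND: static AND for the true output, static OR for the false output."""
    return a and b, a_n or b_n

def wddl_or(a, a_n, b, b_n):
    """WDDL OR: static OR for the true output, static AND for the false output."""
    return a or b, a_n and b_n

for gate in (wddl_and, wddl_or):
    # precharge phase: all-0 inputs force both outputs to 0
    assert gate(0, 0, 0, 0) == (0, 0)
    for a, b in product((0, 1), repeat=2):
        # evaluation phase: differential inputs give complementary outputs,
        # hence exactly one 0->1 transition per evaluation and one 1->0
        # transition in the following precharge, whatever the data values
        out_t, out_f = gate(a, 1 - a, b, 1 - b)
        assert out_t + out_f == 1
```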
Fig. 8.4 Wave dynamic differential logic: precharge wave generation
The technique results in dynamic and differential behavior and can be implemented using any cell library by following the guidelines below.

– Precharge wave propagation. A module implemented with WDDL gates does not distribute the precharge signal to each individual gate. During the precharge phase, the input vector of the combinatorial logic is set to all 0s. Each individual gate will eventually have all its inputs at 0, evaluate its output to 0, and propagate this 0-wave to the next gate.
– Precharge wave generation. The precharge wave has to be launched at the start of every combinatorial logic tree. This is done by inserting a precharge operator at each input of the encryption module and after each register, as shown in Fig. 8.4. The precharge operators produce an all-0 output in the precharge phase (clk signal high) but let the differential signal through during the evaluation phase (clk signal low).
– Connection rules. Special design rules, like np-rules or domino logic rules, used to cascade conventional dynamic gates, are unnecessary. WDDL gates can be freely interconnected and the wires can be freely interchanged to produce inverting logic.
– WDDL library construction. Any combination of an AND operator, an OR operator and its dual, which can be derived with the help of De Morgan's law (the AND and OR operators are interchanged and the input signals are inverted), will behave as a WDDL gate. Additionally, since all signals will eventually be differential, the input signals may be inverted and the output signals may be inverted (a behavioural check of this construction rule is sketched after Fig. 8.5).

By way of example, Fig. 8.5 shows the WDDL OAI32 gate with drive strength 2 and the original static CMOS gate.
Fig. 8.5 OAI32 gate with drive strength 2 in static CMOS (left) and WDDL (right)
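As referenced in the library construction guideline, the sketch below derives the dual of an arbitrary positive gate function via De Morgan's law and checks that the resulting compound gate precharges to (0,0) and produces complementary outputs during evaluation. It is a behavioural illustration only; the actual WDDL cells are assembled from existing standard cells as in Fig. 8.5.

```python
from itertools import product

def dual(f):
    """De Morgan dual of a Boolean function f: dual_f(x) = NOT f(NOT x)."""
    return lambda *x: 1 - f(*(1 - xi for xi in x))

def wddl_gate(f):
    """Compound WDDL gate: f drives the true rails, dual(f) the false rails."""
    f_dual = dual(f)
    return lambda true_ins, false_ins: (f(*true_ins), f_dual(*false_ins))

# any positive gate works; a 3-input AND serves as the example here
and3 = lambda a, b, c: a and b and c

g = wddl_gate(and3)
assert g((0, 0, 0), (0, 0, 0)) == (0, 0)            # precharge: all-0 in, all-0 out
for ins in product((0, 1), repeat=3):
    t, f = g(ins, tuple(1 - i for i in ins))
    assert t + f == 1                                # evaluation: complementary outputs
```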
8.3.2 Place and Route Approach

A place and route approach that connects the logic gates with parallel routes that are at all times on adjacent grid lines, on the same layers and of the same length (see Fig. 8.6) assures that the two routes have the same parasitic capacitances C and C′ and the same parasitic resistances R and R′ [18].
Fig. 8.6 Parallel routes on adjacent grid lines on the same layers and of the same length
The parasitic effects, which are caused by the distributed resistance and by the distributed capacitance to the substrate and to neighboring wires in other metal layers, are very similar for both nets. The resistance is very similar since both interconnects have the same number of vias and the same length in each metal layer. The capacitance to the other layers is very similar since, in general, the length of the differential route is several orders of magnitude larger than the pitch between the two differential routes, and hence both nets travel in the same environment. Making every other metal layer a ground plane would completely control the capacitance to other layers. Yet, there are also cross-talk effects; cross-talk is the phenomenon of noise induced on one wire by a signal switching on a neighboring wire. Cross-talk is caused by the distributed cross-coupling capacitance to adjacent wires in the same metal layer (see Fig. 8.7, left). Note that the capacitance between the two wires of a differential pair can be ignored, as its charging effect is always the same: during a switching event precisely one output line switches and the other output line remains quiet. Cross-talk effects can be controlled with different techniques that can easily be incorporated into the differential routing methodology presented next. All cross-talk effects can be removed by reserving one grid line out of three upfront for a VDD or GND line to shield the differential routes on either side (see Fig. 8.7, middle). Alternatively, the cross-talk effects can be reduced by merely increasing the pitch of the routing grid, which increases the distance between different differential routes (see Fig. 8.7, right). Differential pair and shielded routing have been available through shape-based routers. Router performance, however, degrades rapidly with an increasing number of differential nets. Gridded routers, which are optimized for speed and capacity, are very difficult to adapt for "co-constrained" wires and even avoid running wires in parallel to combat the cross-talk effects discussed earlier.
Fig. 8.7 Cross-coupling effects (left), shielding lines (middle), and increased distance (right)
The place and route approach of the secure digital design flow works as follows. It abstracts the differential pair as a single fat wire. The differential design is routed with the fat wire and, at the end, the fat wire is decomposed into the differential pair. The technique forces the two output signals at all times onto adjacent grid lines and can be built on top of commercial gridded place and route tools using the guidelines below.

– Transformation procedure. The transformation consists of two translations of the fat wire and a width reduction to the original width. The translation must occur both in the horizontal and the vertical direction. As shown at the right of Fig. 8.8, a consistent shift of all segments of the fat wire by a ΔX in the X direction and a ΔY in the Y direction results in one wire; a shift by −ΔX and −ΔY in the other wire. The shifts ΔX and ΔY are half the pitch lengths of the original wires in the X and Y direction.
Fig. 8.8 Fat routes (left), translation operation resulting in differential routes (right)
– Translation procedure. Since wires in a routed design file are described as lines between two points and vias are assigned as points, a parser only needs to edit the (X, Y) coordinates of the endpoints. The translation is done by (1) repeating each statement that defines a net; (2) attaching the first statement to the positive pins and translating it in the positive (ΔX, ΔY) direction; and (3) attaching the second statement to the negative pins and translating it in the negative (ΔX, ΔY) direction (a minimal sketch of this step follows after this list).
– Width reduction. The wire width and via characteristics are defined in the library database. By reloading the parsed design with the differential library database,
which contains the original grid definition, the original wire definition, the original via definition, and the differential gates with the differential pin information, the width reduction is accomplished.
– Standard cell restriction. The routing step uses fat standard cells in which the differential pins are abstracted as a single fat pin. The differential pins of the differential standard cells must be aligned on a positively tilted diagonal and with the same offsets ΔX and ΔY. The upper pin is associated with the true net; the lower pin with the false net. Only in this way can the translation occur consistently.
– Grid definition. To facilitate the placement, the height and width of a standard cell should be a multiple of the horizontal and vertical pitch, respectively. In addition, since routing is only possible on the grid, the pins should be situated on the grid crossings. The most straightforward manner is to take the pitch of the fat design as a multiple of the original grid.
– Non-preferred routing. If the fat wire takes a turn within a metal layer, the wires of a differential route may cross in the same metal layer and result in an electric short between both wires (see Fig. 8.8, right). This can be avoided by disabling non-preferred routing.
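The translation step referenced above amounts to a very small script. The sketch below operates on a deliberately simplified net description (a list of (x, y) points per net); the data layout, the pin-name suffix B and the half-pitch values are assumptions for illustration only, not the file format of any particular router.

```python
DX, DY = 1, 1   # assumed half pitches of the original routing grid (grid units)

def decompose(fat_nets):
    """Split every fat net into a true and a false net by translating all of
    its points over (+DX, +DY) and (-DX, -DY) respectively."""
    diff_nets = {}
    for name, points in fat_nets.items():
        diff_nets[name] = [(x + DX, y + DY) for (x, y) in points]        # true wire
        diff_nets[name + "B"] = [(x - DX, y - DY) for (x, y) in points]  # false wire
    return diff_nets

# hypothetical fat net routed through three points
fat = {"n88": [(252, 247), (252, 260), (293, 260)]}
diff = decompose(fat)
# diff["n88"]  -> [(253, 248), (253, 261), (294, 261)]
# diff["n88B"] -> [(251, 246), (251, 259), (292, 259)]
```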
8.3.3 Secure Digital Design Flow

The resulting secure digital design flow is depicted in Fig. 8.9 [19]. In addition to the regular steps in an IC design (logic design, logic synthesis, place and route, and stream out), one can recognize the two additional steps: “cell substitution,” which inserts the WDDL gates, and “interconnect decomposition,” which inserts the differential routing. These operations have been inserted in the backend of the flow and do not interfere with the creative part of a design, indicated by the “logic design” task. The modifications have been automated and only have a minimal influence on the design flow. For the prototype IC discussed in the next section, the additional steps required a total of 6 min of CPU time on a SunFire v100 with a 550 MHz CPU and 2 GB of RAM.
8.4 Prototype IC and Measurement Results

A prototype IC, which is used for embedded cryptographic and biometric processing and contains a complete AES core with a 128-bit encryption data path and on-the-fly key scheduling, has been fabricated in 0.18 μm CMOS to demonstrate the secure digital design flow [20]. Two functionally identical processors have been fabricated on the same die. The first, “protected” processor is implemented using WDDL and differential routing. The second, “unprotected” processor serves as a benchmark and is implemented using regular standard cells and regular routing techniques. Both processors have been implemented starting from the same synthesized gate-level
Fig. 8.9 Secure digital design flow: design specs → logic design (behavior.v) → logic synthesis (rtl.v) → cell substitution (fat.v) → place & route with fat_lib.lef (fat.def) → interconnect decomposition with diff_lib.lef (diff.def) → stream out (layout)
netlist. The WDDL gates have been derived from the commercial static CMOS standard cell library used in the regular unprotected design. Figure 8.10 shows power supply current measurements obtained using a Tektronix CT1 current probe with a 25 kHz to 1 GHz bandwidth and an HP54542C oscilloscope with a 2 GHz sampling frequency. The supply current of the unprotected processor not only reveals the encryption operation but also exhibits exactly 11 peaks, one for each round of the encryption operation. The secure processor, on the other hand, draws a continuous current whether or not data is being processed and does not reveal any information in a simple power analysis. Without some signal indicating the start of an encryption, it is impossible to isolate the operation. No state-of-the-art mitigation provides perfect security; instead, each increases the number of measurements required to find the secret information. Hence, the resistance of an implementation against a power analysis is assessed by the required
Fig. 8.10 Supply current of unprotected processor (left) and protected processor (right)
number of measurements to find the information. For the correlation analysis discussed in Section 2.1, this is the point where the correlation coefficient for the correct key guess becomes larger than the correlation coefficients corresponding to all the wrong key guesses. For the security assessment of the prototype IC, the supply current of up to 1,500,000 encryptions, which is larger than the lifetime of the secret key in most practical systems, has been obtained. For the unprotected processor, the correct key bytes were found with very few measurements. On average, 2,000 measurements were required to disclose a key byte. For one key byte, a mere 320 measurements were sufficient to find the correct value. The protected processor, on the other hand, substantially increased the resistance. The assessment showed that out of 16 key bytes, WDDL effectively protected 5 key bytes. One and a half million measurements were not sufficient to disclose the correct key values. The 11 key bytes that were found required on average 255,000 measurements. Cross-talk effects are the expected culprits responsible for the power variations that correlated with the power estimations of the 11 key bytes. This means that the techniques discussed in Section 3.2 to control these cross-coupling effects are recommended to assure a high-power analysis resistance of the full system. Alternatively, design time security assessment [21] could be used to identify and correct mismatches. Bear in mind, however, that the quality of the assessment is only as good as the power consumption simulation model used [22]. Table 8.2 summarizes the assessment results and provides area, timing, and power numbers. The trade-off is an increase in area, power consumption, and minimum clock period. This should not come as a surprise. Security adds a new dimension to a design in addition to area, performance, and power consumption optimization [23]. Side-channel attack resistance is no exception and thus far prevailing strategies rarely come cheap. Several techniques can reduce the area overhead. For instance, custom logic cells can be made more compact than compound standard logic cells and security partitioning reduces the part of the design that has to be protected [24]. The power
Table 8.2 Prototype IC summary

Parameter                                     Unprotected AES   Protected AES
Area [mm2]                                    0.79              2.45
Maximum frequency (@1.8 V) [MHz]              330.0             85.5 (a)
Power consumption (@1.8 V, 50 MHz) [mW]       54                200 (b)
Average measurements to disclosure (c)        2,133             255,391
Key bytes not found (@1.5M meas.)             n/a               5

(a) Duty factor of clock > 50% to guarantee precharge of all gates
(b) Estimation based on area ratio AES vs. entire system
(c) Based on correctly guessed key bytes
consumption increase is in line with the likely penalty of other state-of-the-art mitigations, such as algorithmic masking, that require pre- and post-computation of the data values. In terms of absolute power consumption, the protected processor is orders of magnitude faster and expends less energy than software on an embedded processor. The latter has a power-normalized throughput of 0.0011 Gb/s/W (gcc, 1 mW/MHz @ 120 MHz Sparc, 0.25 μm CMOS), while the former achieves over 2.9 Gb/s/W.
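As an aside, the assessment criterion used above — the number of measurements at which the correct key guess starts to dominate all wrong guesses — can be sketched in a few lines. The sketch below is only an illustration: the array layout, the step size, and the simple first-crossing rule are assumptions, and a rigorous assessment also checks that the correct guess stays dominant as more measurements are added.

#include <math.h>

/* Pearson correlation between hypothetical power values h[] and measured
 * supply-current samples t[] over the first n acquisitions. */
static double corr(const double *h, const double *t, int n)
{
    double sh = 0, st = 0, shh = 0, stt = 0, sht = 0;
    for (int i = 0; i < n; i++) {
        sh += h[i]; st += t[i];
        shh += h[i] * h[i]; stt += t[i] * t[i]; sht += h[i] * t[i];
    }
    double den = sqrt((n * shh - sh * sh) * (n * stt - st * st));
    return den > 0.0 ? (n * sht - sh * st) / den : 0.0;
}

/* Measurements to disclosure for one key byte: the smallest number of
 * acquisitions for which the correct guess yields a larger (absolute)
 * correlation coefficient than every wrong guess. h[g] holds the
 * hypothetical power values under key guess g. */
static int measurements_to_disclosure(const double *h[256], const double *t,
                                      int n_max, int correct_guess)
{
    for (int n = 100; n <= n_max; n += 100) {
        double right = fabs(corr(h[correct_guess], t, n));
        double wrong = 0.0;
        for (int g = 0; g < 256; g++)
            if (g != correct_guess && fabs(corr(h[g], t, n)) > wrong)
                wrong = fabs(corr(h[g], t, n));
        if (right > wrong)
            return n;
    }
    return -1;   /* key byte not disclosed within n_max acquisitions */
}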
8.5 Conclusion

A constant power consuming logic style can be used as a building block of a secure design protected against power analysis. In the secure digital design flow, the building blocks are constructed from regular standard cells to obtain dual-rail precharge behavior, which ensures that every logic gate has a single charging event per cycle. They are put together to form the design using a differential pair place and route technique that automatically matches the output capacitance of each logic gate, which ensures that every logic gate charges a constant capacitance value. A prototype IC has been fabricated to validate the secure digital design flow and to experimentally assess the obtained power attack resistance. Measurement-based experimental results show that a power analysis on the unprotected processor requires only 8,000 acquisitions to disclose the entire 128-bit secret key, while the same analysis on the protected processor still does not disclose the entire secret key after 1,500,000 acquisitions.
References 1. D. Suzuki, M. Saeki, T. Ichikawa, “Random Switching Logic: A Countermeasure against DPA based on Transition Probability,” Cryptology ePrint Archive, report 2004/346, 2004. 2. T. Popp, S. Mangard, “Masked Dual-Rail Pre-charge Logic: DPA Resistance without the Routing Constraints,” Proceedings of the Workshop on Cryptographic Hardware and Embedded Systems, LNCS 3659, p. 172–186, August 2005. 3. Z. Chen, Y. Zhou, “Dual-Rail Random Switching Logic: A Countermeasure to Reduce Side Channel Leakage,” Proceedings of the Workshop on Cryptographic Hardware and Embedded Systems, LNCS 4249, p. 242–254, October 2006.
4. P. Schaumont, and K. Tiri, “Masking and Dual-Rail Logic don’t Add up”, Workshop on Cryptographic Hardware and Embedded Systems, September 2007. 5. E. Oswald, S. Mangard, C. Herbst, S. Tillich “Practical Second-Order DPA Attacks for Masked Smart Card Implementations of Block Ciphers” CT-RSA, pp. 192–207, 2006. 6. T. Popp, M. Kirschbaum, T. Zefferer, S. Mangard “Evaluation of the Masked Logic Style MDPL on a Prototype Chip” Workshop on Cryptographic Hardware and Embedded Systems, September 2007. 7. K. Tiri, I. Verbauwhede, “A Dynamic and Differential CMOS Logic Style to Resist Power and Timing Attacks on Security IC’s”, IACR eprint archive, report 2004/066, 2004. 8. Z. T. Deniz, Y. Leblebici, “Low-Power Current Mode Logic for Improved DPA-Resistance in Embedded Systems”, International Symposium on Circuits and Systems, pp. 1059–1062, May 2005: 9. I. Hassoune, F. Mace, D. Flandre, J.-D. Legat, “Low-Swing Current Mode Logic (LSCML): A New Logic Style for Secure and Robust Smart Cards against Power Analysis Attacks”, Microelectronics Journal, vol. 37, pp. 997–1006, May 2006. 10. S. Moore, R. Anderson, R. Mullins, G. Taylor “Balanced Selfchecking Asynchronous Logic for Smart Card Applications,” Journal of Microprocessors Microsystems, vol. 27.9, pp. 421– 430, 2003. 11. K. Kulikowski, A. Smirnov, A. Taubin “Automated Design of Cryptographic Devices Resistant to Multiple Side-Channel Attacks” Workshop on Cryptographic Hardware and Embedded Systems, LNCS, pp. 399–413, 2006. 12. G. Bouesse, M. Renaudin, S. Dumont, F. Germain, “DPA on Quasi Delay Insensitive Asynchronous Circuits: Formalization and Improvement” DATE 2005, pp. 424–429. 13. K. J. Kulikowski, M. Su, A. B. Smirnov, A. Taubin, M. G. Karpovsky, D. MacDonald, “Delay Insensitive Encoding and Power Analysis: A Balancing Act” ASYNC 2005, pp. 116–125. 14. K. J. Kulikowski, M. G. Karpovsky, A. Taubin “Power Attacks on Secure Hardware Based on Early Propagation of Data” International On-Line Testing Symposium, pp. 131–138, 2006. 15. K. Tiri, M. Akmal, and I. Verbauwhede, “A Dynamic and Differential CMOS Logic with Signal Independent Power Consumption to Withstand Differential Power Analysis on Smart Cards”, European Solid-State Circuits Conference, pp. 403–406, September 2002. 16. F. Mace, F.-X. Standaert, I. Hassoune, J.-J. Quisquater, J.-D. Legat, “A Dynamic Current Mode Logic to Counteract Power Analysis Attacks”, Conference on Design of Circuits and Integrated Systems, pp. 186–191, November 2004 17. K. Tiri, and I. Verbauwhede, “A Logic Level Design Methodology for a Secure DPA Resistant ASIC or FPGA Implementation”, Design, Automation and Test in Europe Conference, pp. 246–251, February 2004. 18. K. Tiri, and I. Verbauwhede, “Place and Route for Secure Standard Cell Design”, International Conference on Smart Card Research and Advanced Applications, pp. 143–158, August 2004. 19. K. Tiri, and I. Verbauwhede, “A Digital Design Flow for Secure Integrated Circuits”, IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 25, no. 7, pp. 1197–1208, July 2006. 20. K. Tiri, D. Hwang, A. Hodjat, B.-C. Lai, S. Yang, P. Schaumont, and I. Verbauwhede, “Prototype IC with WDDL and Differential Routing – DPA Resistance Assessment”, Workshop on Cryptographic Hardware and Embedded Systems, LNCS, vol. 3659, pp. 354–365, August 2005. 21. H. Li, A. Markettos, S. Moore “Security Evaluation Against Electromagnetic Analysis at Design Time” Workshop on Cryptographic Hardware and Embedded Systems, pp. 280–292, 2005. 22. K. Tiri, and I. 
Verbauwhede, “Simulation Models for Side-Channel Information Leaks”, Design Automation Conference, pp. 228–233, June 2005. 23. P. Kocher, R. Lee, G. McGraw, A. Raghunathan, S. Ravi “Security as a New Dimension in Embedded System Design” Design Automation Conference, pp.735–760, 2004. 24. D. Hwang, P. Schaumont, K. Tiri, and I. Verbauwhede, “Securing Embedded Systems”, IEEE Security & Privacy Magazine, vol.4, no. 2, pp. 40–49, April 2006.
Chapter 9
Counteracting Power Analysis Attacks by Masking

Elisabeth Oswald and Stefan Mangard
9.1 Introduction

The publication of power analysis attacks [12] has triggered a lot of research activities. On the one hand, these activities have been dedicated to the development of secure and efficient countermeasures. On the other hand, new and improved attacks have also been developed. In fact, there has been a continuous arms race between designers of countermeasures and attackers. This chapter provides a brief overview of the state of the art in this arms race in the context of a countermeasure called masking. Masking is a popular countermeasure that has been extensively discussed in the scientific community. Numerous articles have been published that explain different types of masking and that analyze weaknesses of this countermeasure. The goal of masking, and of every other countermeasure, is to make the power consumption of a cryptographic device independent of the intermediate values of the executed cryptographic algorithm. Masking achieves this by randomizing the intermediate values that are processed by the cryptographic device. An advantage of this approach is that it can be implemented at the algorithm level without changing the power consumption characteristics of the cryptographic device. In other words, masking allows making the power consumption independent of the intermediate values, even if the device has a data-dependent power consumption. Section 9.2 describes the principle of masking in more detail. The basics of attacks on masking are subsequently presented in Section 9.3. The most effective attacks on masking are second-order DPA attacks and template attacks. Readers who are interested in an extensive discussion of these attacks and of power analysis in general are referred to [13], a book exclusively devoted to power analysis attacks.
E. Oswald (B) Computer Science Department, University of Bristol, Bristol, BS8 1UB, UK; Institute for Applied Information Processing and Communication, Graz University of Technology, Inffeldgasse 16a, 8010 Graz, Austria e-mail:
[email protected]
9.2 Masking

In a masked implementation, each intermediate value v of a cryptographic algorithm is concealed by a random value m that is called mask: vm = v ∗ m. The mask m is generated internally, i.e., inside the cryptographic device, and varies from execution to execution. Hence, it is not known by the attacker. The operation ∗ is typically defined according to the operations that are used in the cryptographic algorithm. Hence, the operation ∗ is most often the Boolean exclusive-or function ⊕, the modular addition +, or the modular multiplication ×. In the case of modular addition and modular multiplication, the modulus is chosen according to the cryptographic algorithm. Typically, the masks are directly applied to the plaintext or the key. The implementation of the algorithm needs to be slightly changed in order to process the masked intermediate values and to keep track of the masks. The result of the encryption is also masked. Hence, the masks need to be removed at the end of the computation in order to obtain the ciphertext. A typical masking scheme specifies how all intermediate values are masked and how to apply, remove, and change the masks throughout the algorithm.

It is important that every intermediate value is masked all the time. This must also be guaranteed for intermediate values that are calculated from previous intermediate values. For instance, if two masked intermediate values are exclusive-ored, we need to ensure that the result is masked as well. For this reason, we typically use several masks. Hence, different intermediate values are concealed by different masks. It turns out that it is not advisable to use a new mask for each intermediate value, because a larger number of masks decreases the performance. Consequently, the number of masks needs to be chosen carefully in order to achieve a reasonable performance.

We distinguish between Boolean and arithmetic masking. In Boolean masking, the intermediate value is concealed by exclusive-oring it with the mask: vm = v ⊕ m. In arithmetic masking, the intermediate value is concealed by an arithmetic operation (addition or multiplication). Often, one uses the modular addition: vm = v + m (mod n). The modulus n is defined according to the cryptographic algorithm. The other arithmetic operation that is frequently used is the modular multiplication: vm = v × m (mod n). Some algorithms are based on Boolean and arithmetic operations. Therefore, they require both types of masking. This is problematic because switching from one type of masking to another type often requires a significant amount of additional operations, see [5, 9].

In addition, cryptographic algorithms use linear and non-linear functions. A linear function f has the property that f(x ∗ y) = f(x) ∗ f(y). For example, if the operation ∗ is the exclusive-or operation ⊕, then a linear function has the property that f(x ⊕ m) = f(x) ⊕ f(m). Hence, in a Boolean masking scheme linear operations change the mask m in a way that is easily computable. This means that a linear operation is easy to mask with Boolean masking. The AES S-box, in contrast, is a non-linear operation: S(x ⊕ m) ≠ S(x) ⊕ S(m). Since the Boolean masks are
changed in a more complicated way, a lot of effort has to be spent on computing how the masks are changed. Thus, the S-box cannot easily be concealed by Boolean masking. However, the S-box is based on computing the multiplicative inverse of a finite field element: f(x) = x⁻¹. It is compatible with multiplicative masking because f(x × m) = (x × m)⁻¹ = f(x) × f(m). The authors of [1] show an efficient scheme to switch between Boolean masks and multiplicative masks. However, multiplicative masks have one major disadvantage: they cannot conceal the intermediate value 0. In this chapter, we focus on Boolean masking. We always use the letter m to refer to a mask. If we want to point out that a particular mask is used to conceal a particular intermediate value v, then we refer to this mask by mv. Masking can be implemented in software, in hardware, or directly at the cell level. We now briefly describe the basic idea of masking at these three abstraction levels.
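To make the difference concrete, the small sketch below checks the linearity property for a bit rotation (a linear operation over GF(2)) applied to masked data. It is only an illustration: the chosen values and the rotation stand in for a real algorithm step.

#include <stdint.h>
#include <stdio.h>

/* A linear operation over GF(2): rotating a byte left by one bit position. */
static uint8_t rotl1(uint8_t x) { return (uint8_t)((x << 1) | (x >> 7)); }

int main(void)
{
    uint8_t v  = 0x53, m = 0xA7;    /* intermediate value and its mask        */
    uint8_t vm = v ^ m;             /* only vm and m exist inside the device  */

    /* Linear operation: the mask is transformed in an easily computable way, */
    /* so rotl1(vm) is still a correctly masked version of rotl1(v).          */
    printf("%d\n", rotl1(vm) == (rotl1(v) ^ rotl1(m)));        /* prints 1    */

    /* Removing the (transformed) mask at the end yields the unmasked result. */
    printf("%02x\n", (unsigned)(rotl1(vm) ^ rotl1(m)));        /* = rotl1(v)  */
    return 0;
}

For a non-linear operation such as the AES S-box, the analogous check fails, which is precisely why the S-box requires the special treatment discussed below.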
9.2.1 Software The first papers that discussed masking mainly have looked at software implementations. A lot of research has been devoted to 8-bit smart card implementations. In particular, the AES selection process has stimulated research that has investigated how to secure the AES finalists against power analysis attacks on smart cards, see for instance [4]. Most of the recent research on masking focuses more or less exclusively on AES implementations. Recently, masking schemes have also been designed for implementations in dedicated hardware. Most papers in this context also deal with implementations of AES. A typical implementation of a Boolean masking scheme in software works in a straightforward manner. As explained before, we exclusive-or the mask(s) with the plaintext (or key), make sure that all intermediate values are masked throughout the computation, keep track of how the masks are changed, and at the end, remove the masks from the output. If all operations of the algorithm are (linear) Boolean operations, then Boolean masking fits nicely and is easy to implement. This is different in the case of non-linear operations. These operations are more difficult to cope with. Masking Table LookUps In addition to simple operations, cryptographic algorithms also use complex operations including non-linear operations. These complex operations require more than plain Boolean masking. Consequently, special attention needs to be paid to their secure and efficient implementation. Most modern block ciphers allow implementing the non-linear operations as table lookups. This means, for each input v of the non-linear operation, the output is stored at the corresponding index in a table T . The table is stored in memory where it can be accessed fast. This method is actually the most popular method for software implementations of block ciphers on smart cards. In a masked implementation, such a table needs to be masked. Consequently, one needs to produce a table Tm with the property Tm (v ⊕ m) = T (v) ⊕ m. Generating such a table is a simple process. However, in order to generate such a masked table, it is necessary to run through all inputs v,
look up T (v), and store T (v)⊕m for all m in the masked table. This process needs to be done for all masks m that are used in context with this operation. Consequently, the computational effort and the amount of memory increase with the number of masks that are used to mask the table lookup. Example for Masked AES We now give an example of how a smart card implementation of AES can be masked. The masking scheme that we describe here uses Boolean masks only, and it is tailored to a standard AES software implementation (for smart cards). The focus of this example is on the AES round transformation. In practice, however, the key schedule would have to be masked as well, see [10]. Masking the round keys is important to prevent SPA attacks on the key schedule. Hence in practice, at the beginning of an encryption some masks are exclusive-ored with the plaintext and some other masks are exclusive-ored with the first round key. In the remainder of this section, we first discuss how the four operations of the AES round transformation are masked individually, and then we describe the masking scheme as a whole. AddRoundKey: The round key bytes k are masked with m in our scheme. Consequently, performing AddRoundKey automatically masks the bytes d of the state: d ⊕ (k ⊕ m) = (d ⊕ k) ⊕ m. SubBytes: The only non-linear operation of AES is SubBytes. In software implementations on a microcontroller, SubBytes is typically implemented as table lookup. Hence, we use a masked S-box table for this operation. ShiftRows: The ShiftRows operation moves the bytes of the state to different positions. In our scheme, all bytes of the state are masked with the same mask at this point of the algorithm. Therefore, this operation does not affect the masking. MixColumns: The MixColumns operation requires more attention because MixColumns mixes the bytes from different rows of a column. Hence, MixColumns requires at least two masks. If only two masks are used for the bytes of a column, then MixColumns needs to be done very carefully to make sure that all intermediate values stay masked. This typically is inefficient and hence it is advisable to use separate masks for the different rows of the AES state. These masks can be used for the MixColumns operation in all AES rounds. Now, we describe the masked AES implementation as a whole. We use six independent masks in our scheme. The first two masks, m and m , are the input and output masks for the masked SubBytes operation. The remaining four masks m 1 , m 2 , m 3 , and m 4 are the input masks of the MixColumns operation. At the beginning of each AES encryption, two precomputations take place. First, we compute a masked S-box table Sm such that Sm (x ⊕ m) = S(x) ⊕ m . Second, we compute the output masks for the MixColumns operation by applying this operation to (m 1 , m 2 , m 3 , m 4 ). We denote the resulting output masks of MixColumns by (m 1 , m 2 , m 3 , m 4 ).
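A minimal sketch of the first precomputation — the masked S-box table with Sm(x ⊕ m) = S(x) ⊕ m′ — is shown below. The unmasked S-box table S is assumed to be available, and the names are illustrative.

#include <stdint.h>

extern const uint8_t S[256];    /* unmasked AES S-box table, assumed given */

/* Build a masked S-box table Sm such that Sm[x ^ m_in] = S[x] ^ m_out.
 * The loop runs over all 256 inputs once per encryption, for the input and
 * output masks chosen for that run. */
static void build_masked_sbox(uint8_t Sm[256], uint8_t m_in, uint8_t m_out)
{
    for (int x = 0; x < 256; x++)
        Sm[x ^ m_in] = (uint8_t)(S[x] ^ m_out);
}

During SubBytes the device then only ever accesses Sm with masked indices, so neither the S-box input nor its output appears unmasked; the cost of this loop is exactly the precomputation overhead discussed at the end of this example.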
A masked AES round works as follows. At the beginning of each round, the plaintext is masked with m 1 , m 2 , m 3 , and m 4 . Then, the AddRoundKey operation is performed. The round key is masked such that the masks change from m 1 , m 2 , m 3 , and m 4 to m. Next, the table lookup with the S-box table Sm is performed. This changes the masks to m . ShiftRows has no influence on the masks as explained before. Before MixColumns, we change the mask from m to m 1 in the first row, to m 2 in the second row, to m 3 in the third row, and to m 4 in the fourth row. MixColumns changes the masks m i to m i for i = 1, . . . , 4. Note that these are the masks that we also had at the beginning of the round. Consequently, we can mask an arbitrary number of rounds in this way. At the end of the last encryption round, the masks are removed by the final AddRoundKey operation. Figure 9.1 shows a graphical representation of this scheme. The costs that are imposed by masking are typically quite high in terms of computation time. However, the high effort does not come from the additional operations that have to be performed within a round. It comes from precomputing the masked S-box table. This implementation and the corresponding performance figures can also be found in [10].
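The remasking steps in this scheme only ever combine masks with masked data. A sketch of such a step is given below; the function name is illustrative, and the point of the bracketing is that the unmasked byte never appears.

#include <stdint.h>

/* Change the mask of a state byte from m_old to m_new. The two masks are
 * combined first, so the unmasked value x is never computed: the byte goes
 * from (x ^ m_old) directly to (x ^ m_new). */
static uint8_t remask(uint8_t x_masked, uint8_t m_old, uint8_t m_new)
{
    uint8_t delta = (uint8_t)(m_old ^ m_new);   /* depends on the masks only */
    return (uint8_t)(x_masked ^ delta);
}

Computing x_masked ^ m_old first and only then applying m_new would briefly expose the unmasked byte, which is exactly the kind of unintended unmasking a masking scheme must avoid.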
9.2.2 Hardware – Architecture Level

Implementations of masking schemes in hardware require similar considerations as implementations in software. Boolean masking schemes fit well to many block ciphers. Hence, more effort has to be spent only on those parts of the round function that require different types of masking. In contrast to software implementations, more trade-offs between size and speed are possible. In this section, we discuss how to mask multipliers, how to use random precharging, and how to mask buses. Furthermore, we discuss pitfalls of masking and explain how to mask an AES S-box implementation using composite field arithmetic.

Masking Multipliers. In hardware, adders and multipliers are among the basic building blocks when implementing cryptographic algorithms. For example, hardware implementations of the AES S-box usually decompose the S-box into a sequence of additions and multiplications. Because additions are typically easier to mask than multiplications, we focus on how to mask multiplications. The goal is to define a masked multiplier MM. Hence, we need a circuit that computes the product of two masked inputs vm = v ⊕ mv, wm = w ⊕ mw and some masks mv, mw, and m such that MM(vm, wm, mv, mw, m) = (v × w) ⊕ m. Values such as vm × mw = (v ⊕ mv) × mw and wm × mv = (w ⊕ mw) × mv are secure, see [17]. This observation can be used to build a masked multiplier MM as shown in (9.1):

MM(vm, wm, mv, mw, m) = (vm × wm) ⊕ (wm × mv) ⊕ (vm × mw) ⊕ (mv × mw) ⊕ m      (9.1)

Figure 9.2 shows a block diagram of this multiplier. It can be observed
Fig. 9.1 The AES round functions change the masks of the AES state bytes
Fig. 9.2 A masked multiplier MM consists of four standard multipliers and four standard adders
that the masked multiplier MM requires four standard multipliers and four standard adders.

Random Precharging. Random precharging can also be applied to hardware. This means that random values are sent through the circuit in order to randomly precharge all combinational and sequential cells of the circuit. In a typical implementation, such as the one reported in [3], random precharging requires duplicating the sequential cells, i.e., the number of registers is doubled. The duplicates of the registers are inserted in between the original registers and the combinational cells of the circuit. Hence, random precharging is achieved as follows. In the first clock cycle, the duplicates of the registers contain random values. As these registers are connected to the combinational cells, the outputs of all combinational cells are randomly precharged. When switching from the first to the second clock cycle, the result of the combinational cells (notice that this is a random result) is stored in the original registers that contain the intermediate values of the executed algorithm. At the same time, the intermediate values are moved from these registers to the duplicates. This means that the role of the registers is switched. Therefore, in the second clock cycle the combinational cells are connected to registers that contain the intermediate values of the algorithm. Hence, the execution of the algorithm is continued in this cycle. When switching from the second to the third clock cycle, the role of the registers is switched again and the combinational cells are precharged again.
When implementing random precharging like this, all combinational and sequential cells process random data in one cycle and intermediate values in the next clock cycle. Hence, assuming the device leaks the Hamming weight of the data it processes, resistance against power analysis attacks is achieved. The power consumption is masked implicitly. Another approach to implement random precharging has been presented in [15]. The basic idea of this approach is to randomize the register usage. Hence, the intermediate values of the algorithm are stored in different registers, with potentially different data, during each execution. Consequently, a kind of implicit masking is also achieved.

Masking Buses. Bus encryption has a long tradition in small devices. Bus encryption refers to the concept of encrypting the data and address buses that connect the processor of a smart card to memory and cryptographic co-processors. The purpose of bus encryption is to prevent eavesdropping on the bus. Buses are particularly vulnerable to power analysis attacks because of their large capacitance. The encryption algorithms that are used for bus encryption are often quite simple. A pseudo-random key is generated and used in a simple scrambling algorithm (simple meaning mainly exclusive-or operations). Hence, the simplest version of bus encryption, where a random value is exclusive-ored to the value on the bus, corresponds to masking the bus. There are only a few articles available that discuss bus encryption techniques. Some recent articles are [2, 8, 6].

Pitfalls. Similar to software implementations, special attention needs to be paid when a masked value and its corresponding mask (or two intermediate values that are concealed by the same mask) are processed consecutively. For instance, a masked value and its corresponding mask should not be stored consecutively in the same register if the register leaks Hamming distance information. In contrast to software implementations, one also needs to pay attention to the optimizations that synthesis tools, which are typically used in hardware design, apply to a design. These tools have the property that they remove redundancies in the circuit description of the designer. Masked implementations contain a lot of redundancy, because masks are exclusive-ored at some point in the algorithm and removed or changed at another point. The tools recognize this and remove the parts of the circuit that correspond to the masking. This is of course absolutely undesired. Hence, the circuit designer needs to define which parts of the circuit description the tools may not touch.

Example for Masked AES S-Box. The most challenging part of a masked AES implementation in hardware is the masking of the S-box. In this example, we summarize formulas to compute the S-box in a masked way. It is important to observe that all terms of these formulas are “provably secure.” At the end of this example, we provide performance figures for implementations of this masking scheme for the S-box. The presented masking scheme is based on the S-box architecture described in [24], which uses composite field arithmetic. In this approach, the S-box input is seen as an element of a finite field with 256 elements. Mathematicians often use the abbreviation GF(256) to refer to this finite field. There are several representations
of finite field elements. Naturally, one chooses a representation that leads to an efficient implementation. In the case of the AES S-box, it turns out that representing each byte of the state as a linear polynomial vh x + vl over a finite field with 16 elements is efficient. In other words, an element of GF(256) can be represented by a combination of two elements of GF(16). Therefore, the finite field GF(256) is a quadratic extension of GF(16). The inversion of the element vh x + vl can be computed using operations in GF(16) only:

(vh x ⊕ vl)⁻¹ = vh′ x ⊕ vl′                        (9.2)
vh′ = vh × w′                                      (9.3)
vl′ = (vh ⊕ vl) × w′                               (9.4)
w′ = w⁻¹                                           (9.5)
w = (vh² × p0) ⊕ (vh × vl) ⊕ vl²                   (9.6)
All operations are done modulo a field polynomial that is fixed when the quadratic extension is defined. The element p0 is defined in accordance with this field polynomial. In order to calculate the inversion of a masked input value, we first map the value as well as the mask to the composite field representation. Such a mapping has been defined in [24]. The mapping is a linear operation and therefore it is easy to mask. After the mapping, the value that needs to be inverted is represented by (vh ⊕ mh)x ⊕ (vl ⊕ ml). Note that both elements in the composite field representation are masked additively. Our goal is that all input and output values in the computation of the inverse are masked, see (9.7):

((vh ⊕ mh)x ⊕ (vl ⊕ ml))⁻¹ = (vh′ ⊕ mh′)x ⊕ (vl′ ⊕ ml′)        (9.7)

Hence, we replace each addition and multiplication in (9.3)–(9.5) with a masked addition and a masked multiplication. It turns out that we can do this in such a way that (9.8)–(9.10) hold:

vh′ ⊕ mh′ = (vh × w′) ⊕ mh′                                     (9.8)
vl′ ⊕ ml′ = ((vh ⊕ vl) × w′) ⊕ ml′                              (9.9)
w′ ⊕ mw′ = w⁻¹ ⊕ mw′                                            (9.10)
w ⊕ mw = (vh² × p0) ⊕ (vh × vl) ⊕ vl² ⊕ mw                      (9.11)
We still have to master one more difficulty. In (9.10) we need to calculate the inverse in GF(16). Calculating the inverse in GF(16) can be reduced to calculating the inverse in GF(4) by representing GF(16) as quadratic extension of GF(4). Like before, we can express an element of GF(16) as a linear polynomial, but now the coefficients are elements of GF(4). Hence, the same formulas as given in
(9.8)–(9.11) can be used to calculate the masked inverse in the quadratic extension of GF(4). In GF(4), the inversion operation is equivalent to squaring: x −1 = x 2 ∀x. Hence, in GF(4) we have that (x ⊕ m)−1 = (x ⊕ m)2 = x 2 ⊕ m 2 . The inversion operation preserves the masking in this field. Now we discuss the performance of this masking scheme. When looking at the formulas it should be clear that an implementation of this scheme is considerably slower and larger than an implementation of an unmasked S-box in composite field arithmetic. For instance, the most efficient implementation of this idea so far, which has been published in [17], requires nine multiplications, two multiplications with a constant, and two square operations in GF(16). Note that for the sake of simplicity we only count the expensive operations in the bigger field and do not consider the operations in GF(4). An efficient implementation of an unmasked S-box in composite field arithmetic, which has been reported in [24], requires only three multiplications, one multiplication with a constant, and two squaring operations in GF(16). This is considerably less. In addition, the length of the critical path of the masked S-box increases significantly. It has been reported in [19] that an implementation of this scheme is about two to three times larger and slower than an implementation of a corresponding unmasked S-box in composite field arithmetic.
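The masked multiplications used in (9.8)–(9.11) can be realized with the masked multiplier MM of (9.1). The software sketch below illustrates the functional relation over GF(16); the reduction polynomial x⁴ + x + 1 and the evaluation order are illustrative only — the composite-field construction of [24] fixes its own polynomials, and in a side-channel sense the order in which the terms are accumulated matters, see [17].

#include <stdint.h>

/* Multiplication in GF(2^4); the reduction polynomial x^4 + x + 1 is used
 * here purely for illustration. */
static uint8_t gf16_mul(uint8_t a, uint8_t b)
{
    uint8_t p = 0;
    for (int i = 0; i < 4; i++) {
        if (b & 1) p ^= a;
        uint8_t carry = a & 0x08;
        a = (uint8_t)((a << 1) & 0x0F);
        if (carry) a ^= 0x03;          /* reduce x^4 to x + 1 */
        b >>= 1;
    }
    return p;
}

/* Masked multiplier MM of (9.1): inputs vm = v ^ mv, wm = w ^ mw and the
 * masks mv, mw, m; the result equals (v * w) ^ m, and each term that is
 * computed is itself a masked value or a function of masks only. */
static uint8_t mm(uint8_t vm, uint8_t wm, uint8_t mv, uint8_t mw, uint8_t m)
{
    return (uint8_t)(gf16_mul(vm, wm) ^ gf16_mul(wm, mv) ^
                     gf16_mul(vm, mw) ^ gf16_mul(mv, mw) ^ m);
}

With mv, mw, and m fresh random masks, mm() returns (v × w) ⊕ m without ever operating on v or w directly.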
9.2.3 Hardware – Cell Level

The first DPA-resistant logic styles that have been proposed to counteract power analysis attacks have all been based on the concept of hiding, see Chapter 8 of [13]. Masking has mainly been implemented at the architecture level. Recently, several DPA-resistant logic styles that use masking have also been proposed. Such DPA-resistant logic styles are usually referred to as masked logic styles. Applying masking at the cell level means that the logic cells in a circuit only work on masked values and the corresponding masks. Cells that are used in such circuits are called masked cells. The circuits themselves are called masked circuits. The reasoning behind these circuits is the following. Since the masked values are independent of the unmasked values, the power consumption of the masked cells should also be independent of the unmasked values. As a result, the total power consumption of a cryptographic device should be independent of the processed data and the performed operations. However, as in the case of all other countermeasures, complete independence cannot be achieved in practice. It is only possible to make the power consumption largely independent of the corresponding unmasked values. Usually, Boolean masking is used for masked circuits. Figure 9.3 shows a two-input unmasked cell and a corresponding two-input masked cell. The input signals a, b and the output signal q of the unmasked cell are carried on single wires. In the case of the masked cell, the input signals and the output signal are split into masked values and the corresponding masks.
Fig. 9.3 A two-input unmasked cell and a corresponding two-input masked cell
Note that glitches in masked circuits can lead to a strong dependency between the power consumption and the unmasked values [14]. As a result, masked logic styles are usually built in a way such that glitches are completely avoided. MDPL [18], for example, completely avoids glitches. Another masked logic style that avoids glitches was proposed by Suzuki et al. [22]. Fischer and Gammel [7] proposed a masked logic style which overcomes the glitch problem if each masked input value of a cell arrives at the same time as the corresponding mask. Other issues that can compromise the DPA resistance of a masked logic style are data-dependent differences in the arrival times of masked signals [21] and the observability of mask values in single power traces [23, 20].
9.3 Second-Order DPA Attacks and Template Attacks

In the previous sections of this contribution we have described how masking works and how it can be implemented at different levels of abstraction. In this section, we briefly discuss two techniques to attack masked implementations. We illustrate their working principle based on a masked software implementation of AES. For a more in-depth discussion of attacks on masking we advise consulting [13]. In our masked software implementation of AES, the inputs and outputs of each operation are masked additively. As in many software implementations, we first compute AddRoundKey, SubBytes, and ShiftRows for all bytes of the state. Then, MixColumns follows. For the sake of simplicity, we will mount all our attacks on the sequence of AddRoundKey, SubBytes, and ShiftRows, see Fig. 9.4. We assume that the plaintext di and the key kj are masked with m and that the input mask and the output mask of SubBytes are equal. This is a quite typical implementation of a masked software AES (Fig. 9.4).
Fig. 9.4 Part of a masked AES round. The sequence of AddRoundKey, SubBytes, and Shiftrows is applied to all bytes of the AES state. We assume that input and output masks of SubBytes are equal
The microcontroller that we have used is a standard 8-bit microcontroller which is similar to many microcontrollers found in smart cards. Because it precharges the bus lines, it leaks the Hamming weight of the data.
9.3.1 Second-Order DPA Attacks

Second-order DPA attacks exploit the leakage of two intermediate values that are related to the same mask. In general, this leakage cannot be exploited directly because the two intermediate values often occur in different operations of the algorithm. Hence, they might be computed one after the other and contribute to the power consumption at different times. In this case, it is necessary to preprocess the power traces in order to obtain power consumption values that depend on both intermediate values. However, even if the intermediate values contribute to the power consumption at the same time, it is possible that the distribution of the power consumption has the same mean but different variances for all hypotheses. In this case, a DPA attack using statistical methods such as a distance-of-mean test or the correlation coefficient does not succeed, because these methods work with the mean value. In order to mount successful DPA attacks in this case it is necessary either to use other statistical methods that exploit the variance or to preprocess the traces in such a way that the mean-based methods work. The preprocessing is typically done in the second step of a DPA attack, which consists of recording the power consumption of the device.

Preprocessing. The preprocessing prepares the power traces for the DPA attack. There are three cases that occur in practice. In the first case, the targeted intermediate values occur in different clock cycles. In this case, the preprocessing combines two points within a trace. This first case typically occurs in software implementations of masking schemes. Second, the targeted intermediate values occur within one clock cycle. In this case, the preprocessing function is applied to single points in the trace. Third, the targeted intermediate values occur within a clock cycle and the power consumption characteristics allow exploiting the leakage directly. In this case, the preprocessing step can even be omitted. The two latter cases typically occur in hardware implementations of masking schemes.

DPA Attacks on Preprocessed Traces. A second-order DPA attack simply applies a DPA attack to the preprocessed traces. This means that in step 1 of a second-order DPA attack, we choose two intermediate values u and v. These values do not occur as such in the device because we study a masked implementation. Recall that in an implementation that uses Boolean masking, only the masked intermediate values um = u ⊕ m and vm = v ⊕ m are present in the device. In step 2, we record the power traces and we actually do the preprocessing. In step 3, we calculate hypothetical values that are a combination of u and v: w = comb(u, v). In attacks on Boolean masking, this combination function typically is the exclusive-or function:

w = comb(u, v) = u ⊕ v = um ⊕ vm                               (9.12)
Fig. 9.5 Result of a second-order DPA attack on a masked AES implementation in software
Fig. 9.6 Evolution of the correlation coefficient over an increasing number of traces
Note that we can calculate the value of the combination of two masked intermediate values without having to know the mask! In step 4, we map w to hypothetical power consumption values h. In step 5, we compare the hypothetical power consumption with the preprocessed traces. Example for Masked AES In this example, we show how to attack the masked software implementation of AES. We target the S-box input and the S-box output in our second-order DPA attack. For the second-order DPA attack we have measured the power consumption of our masked AES software implementation during the first encryption round. In order to reduce the number of points in the measured traces, we have compressed them. We have identified the first round of AES with a visual inspection of the compressed power traces. From the first round, we have only taken the points that are within the interval that likely contains the first S-box lookup. Consequently, we have applied a preprocessing function (we used the absolute-difference function) to this interval only. Then, we have computed the hypothetical intermediate values u i, j = di ⊕ k j and vi, j = S(di ⊕ k j ). We have combined them with the exclusive-or function to derive wi, j = u i, j ⊕vi, j = (di ⊕k j )⊕ S(di ⊕k j ), and we have mapped them to hypothetical
power consumption values hi,j using the Hamming weight model:

hi,j = HW(wi,j) = HW((di ⊕ kj) ⊕ S(di ⊕ kj))                   (9.13)
Finally, we have compared the hypothetical power consumption with the preprocessed traces. Figures 9.5 and 9.6 show the result of this attack. Figure 9.5 depicts the correlation traces for the correct key hypotheses, which are plotted in black, versus the correlation traces for the incorrect key hypotheses, which are plotted in gray. Note that high correlation peaks occur in all segments that are related to the processing of the two attacked intermediate values. The highest correlation that occurs in this trace is about 0.23. Figure 9.6 shows that it is easily possible to distinguish the correct key hypothesis from the incorrect key hypotheses with about 500 traces.
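A compact sketch of the core of this attack — absolute-difference preprocessing followed by a correlation DPA with the hypotheses of (9.13) — is given below. It assumes that the two relevant points of each compressed trace, the plaintext bytes, and an AES S-box table S are already available; the array layout and function names are illustrative.

#include <math.h>
#include <stdint.h>

extern const uint8_t S[256];          /* AES S-box table, assumed given       */

/* Hamming weight of a byte. */
static int hw(uint8_t x) { int c = 0; while (x) { c += x & 1; x >>= 1; } return c; }

/* Second-order DPA on n traces: t1[i] and t2[i] are the two selected points
 * of trace i (processing of the S-box input and output), d[i] is the known
 * plaintext byte. Returns the key byte whose hypotheses correlate best with
 * the preprocessed traces |t1 - t2|. */
static int second_order_dpa(const double *t1, const double *t2,
                            const uint8_t *d, int n)
{
    int best_key = 0;
    double best_corr = -1.0;
    for (int k = 0; k < 256; k++) {
        double sh = 0, st = 0, shh = 0, stt = 0, sht = 0;
        for (int i = 0; i < n; i++) {
            double pre = fabs(t1[i] - t2[i]);            /* preprocessing     */
            uint8_t u = (uint8_t)(d[i] ^ k);             /* S-box input       */
            uint8_t w = (uint8_t)(u ^ S[u]);             /* u XOR v, mask-free */
            double h = (double)hw(w);                    /* hypothesis (9.13) */
            sh += h; st += pre; shh += h * h; stt += pre * pre; sht += h * pre;
        }
        double num = n * sht - sh * st;
        double den = sqrt((n * shh - sh * sh) * (n * stt - st * st));
        double c = den > 0.0 ? fabs(num / den) : 0.0;
        if (c > best_corr) { best_corr = c; best_key = k; }
    }
    return best_key;
}

The hypotheses w = (d ⊕ k) ⊕ S(d ⊕ k) can be computed without knowing the mask, which is exactly what makes the second-order attack possible.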
9.3.2 Template Attacks

Power traces can be characterized by a multivariate normal distribution, which is fully defined by a mean vector and a covariance matrix (m, C), see [16]. We refer to this pair (m, C) as a template from now on. In a template attack, we assume that we can characterize the device under attack. This means that we can determine templates for certain sequences of instructions. For example, we might possess another device of the same type as the attacked one that we can fully control. On this device, we execute these sequences of instructions with different data di and keys kj in order to record the resulting power consumption. Then, we group together the traces that correspond to a pair (di, kj), and estimate the mean vector and the covariance matrix of the multivariate normal distribution. As a result, we obtain a template for every pair of data and key (di, kj): h_{di,kj} = (m, C)_{di,kj}. Later on, we use the characterization together with a power trace from the device under attack to determine the key. This means that we evaluate the probability density function of the multivariate normal distribution with (m, C)_{di,kj} and the power trace of the device under attack. In other words, given a power trace t of the device under attack and a template h_{di,kj} = (m, C)_{di,kj}, we compute the probability:

p(t; (m, C)_{di,kj}) = exp(−(1/2) · (t − m)′ · C⁻¹ · (t − m)) / √((2 · π)^T · det(C))      (9.14)
We do this for every template. As a result, we get the probabilities p(t; (m, C)d1 ,k1 ), . . . , p(t; (m, C)d D ,k K ). The probabilities measure how well the templates fit to a given trace. Intuitively, the highest probability should indicate the correct template. Because each template is associated with a key, we also get an indication for the correct key. This intuition is also supported by the statistical literature, see [11]. If all keys are equally likely, then the decision rule which minimizes the probability for a wrong
decision is to decide for h_{di,kj} if

p(t; h_{di,kj}) > p(t; h_{di,kl})   ∀ l ≠ j                    (9.15)
This is also called a maximum-likelihood (ML) decision rule.

Template Building Phase. In the previous section, we have explained that in order to characterize a device, we execute a certain sequence of instructions for different pairs (di, kj) and record the power consumption. Then, we group together the traces that correspond to the same pair and estimate the mean vector and the covariance matrix of the multivariate normal distribution. Note that the size of the covariance matrix grows quadratically with the number of points in the trace. Clearly, one needs to find a strategy to determine the interesting points. We denote the number of interesting points by N_IP. The interesting points are those points that contain the most information about the characterized instruction(s). In practice, there are different ways to build templates. For instance, the attacker can build templates for a specific instruction such as the MOV instruction, or for a longer sequence of instructions. Which strategy is best typically depends on what is known about the device that is attacked. In the following, we discuss some strategies for building templates.

Templates for Pairs of Data and Key. The first strategy, which is also the one that we have mentioned in the previous section, is to build templates for each pair (di, kj). The interesting points of a trace, which are used to build the templates, are therefore all points that correlate to (di, kj). This implies that all instructions that involve di, kj, and functions of (di, kj) lead to interesting points. For example, we can build templates for the sequence of instructions in our AES assembly implementation that implement AddRoundKey, SubBytes, and ShiftRows. Or, we can build templates for one round of the key schedule in our AES assembly implementation.

Templates for Intermediate Values. The second strategy is to build templates for some suitable function f(di, kj). The interesting points of a trace, which are used to build the templates, are therefore all points that correlate to the instructions that involve f(di, kj). For example, suppose that we build templates for the MOV instruction in an AES assembly implementation that moves the S-box output from the accumulator back into the state register. This means that we build templates that allow us to deduce kj given S(di ⊕ kj). Instead of building 256² templates, one for each pair (di, kj), we can simply build 256 templates h_vij, with vij = S(di ⊕ kj), i.e., one for each output of the S-box. Note that the 256 templates can be assigned to the 256² pairs (di, kj).

Template-Based DPA Attacks. The ability to include several points of interest, which correspond to several intermediate values, makes template-based DPA attacks directly applicable to attack masked implementations. Remember that a second-order DPA attack works because it exploits the joint leakage of two intermediate values. By building templates with interesting points that correspond to two intermediate values that are concealed by the same mask, we exploit the joint leakage
of these two intermediate values. Consequently, template-based DPA attacks can be directly applied to break masked implementations. This is the strongest attack that can be mounted on a masked implementation in the sense that it requires the smallest number of traces. We assume that the attacker has the ability to build templates for the targeted cryptographic device. These templates are built in such a way that the interesting points correspond to at least two intermediate values that are concealed by the same mask. During the attack, the templates and traces are matched. Recall that we attack a masked intermediate value and we do not know the value of the mask m in a certain encryption run. This implies that we have to perform the template matching for all M values that the mask m can take. Consequently, the template matching gives the probabilities p(ti | kj ∧ m), and we have to derive p(ti | kj) by calculating (9.16):

p(ti | kj) = Σ_{m=0}^{M−1} p(ti | kj ∧ m) · p(m)                               (9.16)

With p(ti | kj) we can calculate the probability (9.17) for a key hypothesis kj after the observation of a set of traces T. The set T consists of D traces. Hence, except for the extra calculation of (9.16), a template-based DPA attack on a masked implementation works in exactly the same manner as a template-based DPA attack on an unmasked implementation:

p(kj | T) = ( Π_{i=1}^{D} p(ti | kj) · p(kj) ) / ( Σ_{l=1}^{K} Π_{i=1}^{D} p(ti | kl) · p(kl) )      (9.17)
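A sketch of how (9.16) and (9.17) are evaluated in practice is given below. It assumes a function template_prob() that returns p(ti | kj ∧ m) from the matching step of (9.14); this function, the names, and the uniform priors are illustrative assumptions.

#include <stddef.h>

#define NUM_MASKS 256    /* M: number of values the mask can take            */
#define NUM_KEYS  256    /* K: number of key hypotheses                      */

/* Assumed to be provided by the template matching step: evaluates the
 * multivariate normal density of (9.14) for trace i under the template that
 * corresponds to key hypothesis k, mask value m, and the known data byte. */
extern double template_prob(size_t trace_idx, int k, int m, unsigned char data);

/* Combine the matching results over all masks, (9.16), and accumulate the
 * key probabilities over D traces, (9.17), assuming uniform priors p(m) and
 * p(k). A practical implementation would work with log-probabilities to
 * avoid numerical underflow of the long products. */
static void template_dpa(size_t D, const unsigned char *data, double p_key[NUM_KEYS])
{
    double total = 0.0;
    for (int k = 0; k < NUM_KEYS; k++) {
        double p = 1.0;
        for (size_t i = 0; i < D; i++) {
            double p_i = 0.0;
            for (int m = 0; m < NUM_MASKS; m++)          /* (9.16) */
                p_i += template_prob(i, k, m, data[i]) * (1.0 / NUM_MASKS);
            p *= p_i;                                     /* numerator of (9.17) */
        }
        p_key[k] = p;
        total += p;
    }
    for (int k = 0; k < NUM_KEYS; k++)                    /* normalization, (9.17) */
        p_key[k] = (total > 0.0) ? p_key[k] / total : 0.0;
}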
Example for Masked AES. In this example, we have built templates that contain the joint leakage of two instructions. The first instruction is related to the mask m that is used to conceal the S-box output S(di ⊕ kj) and the second instruction is
Fig. 9.7 Result of a template-based DPA attack. The correct key hypothesis has probability one. The incorrect key hypotheses all have probability zero
Fig. 9.8 Evolution of the probability over an increasing number of traces. The correct key hypothesis is plotted in black. The incorrect key hypotheses are plotted in gray
related to the masked S-box output S(di ⊕ kj) ⊕ m. Our templates take the power model of the microcontroller into account. Hence, we have built 81 templates, one for each pair of HW(m) and HW(S(di ⊕ kj) ⊕ m):

h_{HW(m), HW(S(di ⊕ kj) ⊕ m)}

The template matching then gives the probabilities p(ti | kj ∧ m):

p(ti | kj ∧ m) = p(ti; h_{HW(m), HW(S(di ⊕ kj) ⊕ m)})

With these probabilities, and by assuming that p(m) = 1/M, we have calculated (9.16) and subsequently we have derived p(kj | T) with (9.17). The result of this attack is depicted in Fig. 9.7. It shows that one key has probability one, whereas all other key hypotheses have probability zero. Figure 9.8 shows that with about 15 traces the correct key can be identified. This shows that this template-based DPA attack on a masked AES implementation in software works in the same way, and leads to similar results, as a template-based DPA attack on an unmasked AES implementation in software. This example demonstrates that template-based DPA attacks require about the same number of traces in order to break an unmasked and a masked implementation on the same device.
9.4 Conclusions

In this chapter we have surveyed and summarized masking countermeasures. We have explained why they work and how they can be implemented at different levels of abstraction. Thereafter, we have discussed methods to attack masked implementations. For these two methods, second-order DPA attacks and template-based DPA attacks, we have provided concrete examples from a software implementation on an 8-bit microcontroller.
It has turned out that both attacks are feasible in practice. Even worse, template-based DPA attacks require only a little more effort in terms of computation, but not in terms of the number of power traces needed, in order to reveal the key. This shows that the long arms race is still going to continue. Masking only provides security (if well implemented) against standard DPA attacks. Other types of power analysis attacks are still possible. It is therefore important that designers are aware of how the countermeasures that they implement work, i.e., they have to be aware of implementation pitfalls and of the general limitations of masking.
Part IV
Applications
Chapter 10
Compact Public-Key Implementations for RFID and Sensor Nodes Lejla Batina, Kazuo Sakiyama, and Ingrid M.R. Verbauwhede
10.1 Introduction

Devices such as mobile phones, PDAs, smart cards, key immobilizers, and recently RFIDs have become unavoidable in everyday life. Because these embedded devices are integrated into personal and professional infrastructures, the issue of security is of utmost importance. In this case, merely protecting the information is not enough, as the embedded device itself can be lost or stolen and subjected to various security attacks. Obvious examples are RFID tags and sensor nodes, which can easily be accessed by anyone while at the same time being severely resource constrained. An additional difficulty in adding security services to these platforms is that they are often battery-operated (e.g., sensor nodes) or need to rely on energy from the environment (so-called scavenging). The most challenging tasks for embedded security are implementations of public-key cryptography (PKC). Public-key cryptosystems are required in almost all spheres of digital communication. They resolve common bottlenecks of symmetric-key cryptography such as key management and distribution. Namely, they facilitate secure communications over insecure channels without prior exchange of a secret key. One can say that PKC substantially simplifies security protocols. It was also observed that the use of PKC reduces power due to less protocol overhead [13]. On the other hand, the total energy cost is much higher when PKC is used instead of protocols based on secret keys. It was shown in [29] that the total energy cost of the Diffie–Hellman key agreement process using ECC in an ad hoc network is between one and two orders of magnitude larger than that of a key exchange process based on the AES algorithm.
L. Batina (B) Katholieke Universiteit Leuven, ESAT/SCD-COSIC and IBBT, Kasteelpark Arenberg 10, B-3001 Leuven-Heverlee, Belgium e-mail:
[email protected] This work was supported in part by the IAP Programme P6/26 BCRYPT of the Belgian State (Belgian Science Policy), by FWO projects G.0475.05 and G.0300.07, by the European Commission through the IST Programme under Contract IST-2002-507932 ECRYPT NoE, and by the K.U. Leuven-BOF.
In this chapter we deal with two emerging examples of PKC applications, i.e., RFIDs and sensor networks. They put almost impossible requirements on implementations of PK algorithms, with very tight constraints on the number of gates, power, bandwidth, etc. However, recent research by many authors shows that PKC could be possible for pervasive security. In particular, we elaborate on how curve-based cryptosystems can be further optimized for these new, challenging applications. We show that ECC and HECC processors can be designed in such a way as to qualify for lightweight applications suitable for RFID tags and wireless sensor networks. Here, the term lightweight assumes a small die size and low power consumption. Therefore, our goal is a hardware processor supporting ECC and HECC that features a very low footprint and low power. We argue that two types of curve-based cryptography are possible for pervasive security. One solution relies on ECC over binary fields F_{2^n} and another on HECC on curves of genus 2 over F_{2^p}. Furthermore, this option can be extended with ECC over composite fields of the form F_{2^{2p}}. Therefore, we investigate two types of solutions for ECC: one that applies to ECC over binary fields F_{2^p} where p is a prime, and another that applies to ECC over a composite field or to HECC on curves of genus 2. The latter implies an arithmetic unit that is a factor of 2 smaller than for the first ECC option. This chapter is organized as follows. In Section 10.2 we give an overview of related work. Section 10.3 gives some background information on curve-based cryptography and the supporting arithmetic. In Section 10.4 we elaborate on a suitable selection of parameters and algorithms and we outline an architecture for pervasive applications. Section 10.5 concludes the chapter and outlines future work.
10.2 Related Work

Low-power and compact implementations became an important research area with the constant increase in the number of hand-held devices, such as mobile phones, smart cards, and PDAs, that also require security solutions. For example, Goodman and Chandrakasan [12] proposed a low-power cryptographic processor for ECC over both types of finite fields. The power consumed in ultra-low-power mode (3 MHz at VDD = 0.7 V) is at most 525 μW. This figure, as well as the other reported results, makes the solution unsuitable for RFIDs. Basically, until a few years ago, none of the concepts for "low power" was suitable for the context of RFID and sensor networks. Therefore, in this overview we mention only those results that were specifically meant for one or the other. Wireless distributed sensor networks are expected to be used in a broad range of applications, varying from military to meteorological applications [14]. As the
current generation is powered by batteries, ultra-low-energy circuitry is a must for these applications. There is a clear need for PKC in this context, especially for services such as key exchange protocols that are typically provided by means of PKC. Other desired protocols that could benefit from PKC include authentication and broadcast encryption. RFID tags are passive devices that include a microchip connected to an antenna. Typically, they have no battery and obtain power from the electromagnetic field produced by an RFID reader. Today they are mainly used for identification of products, access control, supply chains, and inventory management, but recent work also suggests using RFIDs for anti-counterfeiting [10]. Furthermore, many new applications are envisioned, such as vehicle tracking, transport control, and security of newborns. In short, RFID tags are meant to be a ubiquitous replacement for bar codes with some added functionality. To the best of our knowledge, only a few papers discuss the possibility of PKC for pervasive applications, although the benefits of PKC are evident. The work of Gaubatz et al. [14] deals with the feasibility of PKC protocols in sensor networks. In [14], the authors investigated implementations of two algorithms for this purpose, i.e., Rabin's scheme and NTRUEncrypt. They concluded that NTRUEncrypt offers a suitable low-power and small-footprint solution, with a total complexity of 3,000 gates and a power consumption of less than 20 μW at 500 kHz. As one would expect, they showed that Rabin's scheme is not a feasible solution, because the featured design requires around 17,000 gates and consumes a total power of 148.18 μW. In [13] the authors compared the previous two implementations with an ECC solution for wireless sensor networks. The architecture of the ECC processor occupies an area of 18,720 gates and consumes less than 400 μW of power at 500 kHz. The field used is a prime field of order ≈ 2^100. PKC processors meant exclusively for RFID tags include the results of Wolkerstorfer [21] and of Kumar and Paar [18]. Wolkerstorfer [21] showed that ECC-based PKC is feasible on RFID tags by implementing ECDSA on a small IC. The chip has an area complexity of around 23,000 gates and features a latency of 6.67 ms for one scalar multiplication at 68.5 MHz. It can be used for both types of fields, e.g., F_{2^191} and a 192-bit prime field, as the author considered the implementation of a signature algorithm, i.e., ECDSA. The results of Kumar and Paar [18] include an area complexity of almost 12 kgates and a latency of 18 ms for one scalar multiplication over F_{2^131} at 13.56 MHz. The frequency they assume is actually the carrier frequency of the RFID antenna interface, and it is too high as an operating frequency because of the drastic increase in power consumption. Therefore, the results cannot be properly evaluated or compared with other related work. An ECC solution that provides identification for an RFID tag by means of Schnorr's identification scheme was discussed in [10], and a complete implementation is given in [16]. The authors show how secure identification protocols can be implemented on a constrained device such as an RFID tag, requiring between 8,500 and 14,000 gates, depending on the implementation characteristics.
The work we presented in [19] describes a low-cost modular arithmetic logic unit (MALU) for elliptic/hyperelliptic curve cryptography (ECC/HECC) suitable for these applications. The best solution for the MALU among various trade-offs supporting ECC field arithmetic features 2171 gates with an average power consumption of less than 40 μW in a 0.25 μm CMOS technology at a 175 kHz operating frequency. The result is obtained by hardware resource sharing of the datapath and the usage of composite fields for ECC. The follow-up work in [17] describes a low-power and low-footprint processor using the MALU for ECC, suitable for sensor networks. The best solution features 6718 gates for the arithmetic unit and control unit (data memory not included) in 0.13 μm CMOS technology over the field F_{2^131}. In [20] we describe a new solution based on hyperelliptic curve cryptography (HECC) and on ECC over composite fields. HECC has some advantages over ECC because of the possibility to work in a smaller field: for HECC (in the case of genus 2 curves) one can work in the field F_{2^n} while obtaining the same level of security as for ECC over fields of bit lengths that are twice as large. The same holds for ECC over composite fields. This property allows for a more compact ALU than in the case of ECC. We specify the processors and give more detailed results in Section 10.4.
10.3 Preliminaries

In this section we give some background information on ECC and HECC. We also discuss a general strategy for low-cost applications of curve-based cryptography.
10.3.1 ECC/HECC over Binary Fields

Elliptic curve cryptography was proposed in the mid-1980s by Miller [7] and Koblitz [27]. ECC relies on a group structure induced on an elliptic curve: the set of points on an elliptic curve, together with the point at infinity, denoted ∞, and with point addition as the binary operation, has the structure of an abelian group. Here we consider finite fields of characteristic two. A non-supersingular elliptic curve E over F_{2^n} is defined as the set of solutions (x, y) ∈ F_{2^n} × F_{2^n} to the equation

    y^2 + xy = x^3 + ax^2 + b,

where a, b ∈ F_{2^n}, b ≠ 0, together with ∞. HECC was proposed in 1988 by Koblitz [28] as a generalization of elliptic curve cryptography. For a detailed mathematical background we refer to [25]. ECC/HECC protocols are mainly based on one or a few scalar multiplications. Namely, the majority of the cost of these protocols comes from this operation, although some other operations, such as random number generation and hashing, can also be involved. Therefore, the main operation in any curve-based primitive is the scalar multiplication, which can be viewed as the top-level operation. At the next (lower) level are the point/divisor group operations. The lowest level consists of the finite field operations, such as addition, subtraction, multiplication, and inversion, required to perform the group operation. The only difference between ECC and HECC is at the middle level, which in this case consists of different sequences of
operations. Those for HECC are a bit more complex than the ECC point operations, but they use shorter operands. In Fig. 10.1(a) the hierarchy of operations is given. Elliptic curves can be viewed as a special case of hyperelliptic curves, i.e., an elliptic curve is a hyperelliptic curve of genus g = 1. Here we consider a hyperelliptic curve C of genus g = 2 over F_{2^n}, which is defined by an equation of the form

    C : y^2 + h(x)y = f(x)  in F_{2^n}[x, y],

where h(x) ∈ F_{2^n}[x] is a polynomial of degree at most g and f(x) is a monic polynomial of degree 2g + 1. For genus 2 curves, in the general case the following equation is used:

    y^2 + (h_2 x^2 + h_1 x + h_0) y = x^5 + f_4 x^4 + f_3 x^3 + f_2 x^2 + f_1 x + f_0.

In this work we deal with elliptic curves over F_{2^p} with p prime (as recommended by standards) and also over composite fields F_{2^{2p}}, where F_{2^{2p}} is a quadratic extension of F_{2^p}, so we can write F_{2^{2p}} = F_{2^p}[x]/(f(x)) with deg(f) = 2. In this case each element of the field F_{2^{2p}} can be represented as c = c_1 t + c_0 with c_0, c_1 ∈ F_{2^p}, and a multiplication in this field takes three multiplications in F_{2^p} plus four additions. In standards such as IEEE [23], the recommended fields are of the form F_{2^p}, where p is a prime. The reason lies in the possibility of applying special attacks, i.e., Weil descent [26, 24], which was successfully used for some cases of EC over binary fields of composite degree n. Further research showed that some composite fields should be avoided for ECC [15, 22]. However, it was shown that composite fields with degree n = 2·p (i.e., quadratic extensions of F_{2^p}, where p is prime) remain secure against Weil descent attacks and their variants [4].
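To see where the count of three multiplications and four additions comes from, one can write out the product in F_{2^p}[t]/(t^2 + t + 1) explicitly (a small derivation added here for illustration; it follows the standard Karatsuba-style argument rather than any particular formula from the cited implementations). With a = a_1 t + a_0, b = b_1 t + b_0, and t^2 = t + 1,

    a·b = (a_1 b_1 + a_1 b_0 + a_0 b_1) t + (a_0 b_0 + a_1 b_1).

Computing m_0 = a_0 b_0, m_1 = a_1 b_1, and m_2 = (a_0 + a_1)(b_0 + b_1) gives, in characteristic two, c_1 = m_0 + m_2 and c_0 = m_0 + m_1, i.e., three subfield multiplications and four subfield additions (two to form the operands of m_2 and one each for c_0 and c_1).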
10.3.2 Algorithms Selection and Optimizations

In this chapter we revisit the algorithms for HECC and ECC over binary fields at each level of the ECC/HECC hierarchy. We also discuss possible optimizations of the number of registers, resulting in an area-optimized solution. There are many algorithms known for scalar multiplication (also called point multiplication and divisor multiplication for ECC and HECC, respectively); the basic one is the simple double-and-add or binary method [3]. Other options include windowing, comb techniques, and addition chains [1]. Also well known is the Montgomery ladder, which provides simple side-channel resistance, i.e., it makes the scalar multiplication algorithm secure against simple side-channel attacks [15]. At the lower level is the group operation (i.e., point or divisor addition and, as a special case, doubling), which consists of a sequence of finite field operations. The exact sequence depends on the choice of coordinates. The usual way to avoid inversions, which are very expensive in hardware, is to use projective coordinates (such as Jacobian or Chudnovsky coordinates) [25]. The algorithms on the two top levels are usually implemented as finite state machines (FSMs) in hardware, or in software in the case of co-design, but one still has to choose algorithms that require the smallest number of registers or that result in the smallest possible arithmetic unit.
The lowest level consists of the finite field operations, i.e., addition, multiplication, and inversion. For ECC over a prime field one also has to consider subtractions, but over binary fields these coincide with additions. As mentioned above, inversion can be traded for multiplications, and it is also possible to implement addition and multiplication in such a way that both share one datapath (see Fig. 10.1). An arithmetic unit designed following this principle is described in Section 10.4. The conclusion is that the best way to optimize area is to make the arithmetic unit as small as possible by reducing the number of field operations to as few as possible. This idea was proposed in [19].
Fig. 10.1 The hierarchy of ECC/HECC operations: (a) point/divisor multiplication, built on point/divisor addition and doubling, which in turn rely on finite field addition, multiplication, and inversion; (b) the corresponding controller/datapath partition, in which the datapath executes a finite field operation such as A·B or B + C mod P, with inversion as a separate operation
10.3.3 Algorithms for ECC/HECC Arithmetic

Here we describe some particular choices of algorithms that lead to a more compact processor. For the ECC point multiplication we chose the method of Montgomery [8], given in Algorithm 1, which maintains the relationship P_2 − P_1 as an invariant. It uses a representation in which computations are performed on the x-coordinate only in affine coordinates (or on the X- and Z-coordinates in a projective representation). This allows us to save registers, which is one of the main criteria for obtaining a compact solution.

Algorithm 1 Algorithm for point multiplication
Require: an integer k > 0 and a point P
Ensure: x(kP)
  k ← k_{l−1}, ..., k_1, k_0
  P_1 ← P, P_2 ← 2P
  for i from l − 2 downto 0 do
    if k_i = 1 then
      x(P_1) ← x(P_1 + P_2), x(P_2) ← x(2P_2)
    else
      x(P_2) ← x(P_2 + P_1), x(P_1) ← x(2P_1)
    end if
  end for
  Return x(P_1)
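As a minimal illustration of the control flow of Algorithm 1 (our own sketch, not code from the cited designs), the ladder can be written as follows; xadd and xdbl are placeholder names for the point addition and doubling routines discussed next:

    def montgomery_ladder(k, xP, xadd, xdbl):
        # Scalar multiplication following Algorithm 1: only x-coordinates are kept.
        # xadd(xA, xB, xP) returns x(A + B) given x(A), x(B), and x(A - B) = xP;
        # xdbl(xA) returns x(2A).  Both are abstract group-operation routines.
        bits = bin(k)[2:]              # k_{l-1} ... k_0, MSB first (leading bit is 1)
        x1, x2 = xP, xdbl(xP)          # P1 = P, P2 = 2P
        for b in bits[1:]:             # process bits l-2 downto 0
            if b == '1':
                x1, x2 = xadd(x1, x2, xP), xdbl(x2)
            else:
                x2, x1 = xadd(x2, x1, xP), xdbl(x1)
        return x1

Note that exactly one addition and one doubling are performed per key bit, independently of the bit's value, which is what provides the simple side-channel resistance mentioned above.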
As a starting point for the optimization, one can use the formulae of López and Dahab [5]. The original formulae in [5] require two or three intermediate registers
if the point operations are performed sequentially or in parallel, respectively. In the case of sequential processing it is enough to use two intermediate variables, but it is also possible to eliminate one more intermediate register, which adds a few more steps to the original algorithms. The results of the optimizations are given in Algorithm 2.

Algorithm 2 EC point operations that minimize the number of registers

Point addition
Require: X_i, Z_i, for i = 1, 2, x_4 = x(P_2 − P_1)
Ensure: X(P_1 + P_2) = X_2, Z(P_1 + P_2) = Z_2
1: X_2 ← X_2 · Z_1
2: Z_2 ← X_1 · Z_2
3: T ← X_2 · Z_2
4: Z_2 ← Z_2 + X_2
5: Z_2 ← Z_2^2
6: X_2 ← x_4 · Z_2
7: X_2 ← X_2 + T

Point doubling
Require: b ∈ F_{2^n}, X_1, Z_1
Ensure: X(2P_1) = X_1, Z(2P_1) = Z_1
1: X_1 ← X_1^2
2: Z_1 ← Z_1^2
3: T ← Z_1^2
4: Z_1 ← X_1 · Z_1
5: T ← b · T
6: X_1 ← X_1^2
7: X_1 ← X_1 + T
Algorithm 2 requires only one intermediate variable T, which results in five registers in total. The required registers store the following variables: X_1, X_2, Z_1, Z_2, and T. The algorithm shows the operations and registers required if the key bit k_i = 0; the other case is completely symmetric and can be performed accordingly. More precisely, if the addition operation is viewed as a function f(X_2, Z_2, X_1, Z_1) = (X_2, Z_2) for k_i = 0, then by symmetry we get f(X_1, Z_1, X_2, Z_2) = (X_1, Z_1) for k_i = 1, and the correct result is always stored in the first two input variables. This is possible due to the structure of the scalar multiplication in Algorithm 1. However, mapping this algorithm to the corresponding one suitable for composite fields requires some additional intermediate registers, as we translate the field arithmetic to arithmetic in the subfield. In this case, one addition in F_{(2^p)^2} can be performed with two additions in F_{2^p}, and one multiplication in F_{(2^p)^2} with three multiplications plus four additions in F_{2^p}. For our HECC implementation we used so-called type II curves [2], which are defined by h_2 = 0, h_1 ≠ 0. As a starting point for the divisor operations we used the formulae from [6]. These curves allow for faster doublings than a general curve, while security remains intact. However, one can optimize the formulae with respect to the number of intermediate variables, which results in a small increase in the number of multiplications. In this way, we can trade off area against performance.
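The following Python sketch (ours, for illustration only; the names gf_mul, madd, and mdbl are not taken from the referenced designs) expresses the two routines of Algorithm 2 at the field level. Field elements are Python integers whose bits are the polynomial coefficients, and gf_mul reduces modulo the pentanomial x^163 + x^7 + x^6 + x^3 + 1 that is also used in Section 10.4. A ladder analogous to the earlier sketch, but carrying the projective pairs (X_1, Z_1) and (X_2, Z_2) and ending with a conversion x = X/Z, would drive these routines.

    N = 163
    POLY = (1 << 163) | (1 << 7) | (1 << 6) | (1 << 3) | 1   # x^163 + x^7 + x^6 + x^3 + 1

    def gf_mul(a, b):
        # Carry-less (polynomial) multiplication followed by reduction mod POLY.
        r = 0
        while b:
            if b & 1:
                r ^= a
            a <<= 1
            b >>= 1
        for i in range(2 * N - 2, N - 1, -1):    # reduce terms of degree 2N-2 ... N
            if (r >> i) & 1:
                r ^= POLY << (i - N)
        return r

    def madd(X1, Z1, X2, Z2, x4):
        # Algorithm 2, point addition: the result replaces (X2, Z2); one temporary T.
        X2 = gf_mul(X2, Z1)
        Z2 = gf_mul(X1, Z2)
        T  = gf_mul(X2, Z2)
        Z2 = gf_mul(Z2 ^ X2, Z2 ^ X2)            # Z2 <- (Z2 + X2)^2
        X2 = gf_mul(x4, Z2) ^ T
        return X2, Z2

    def mdbl(X1, Z1, b):
        # Algorithm 2, point doubling: the result replaces (X1, Z1); one temporary T.
        X1 = gf_mul(X1, X1)
        Z1 = gf_mul(Z1, Z1)
        T  = gf_mul(Z1, Z1)
        Z1 = gf_mul(X1, Z1)
        T  = gf_mul(b, T)
        X1 = gf_mul(X1, X1) ^ T
        return X1, Z1

Squarings are expressed as ordinary multiplications here, which matches a single shared multiplier datapath such as the MALU described later.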
10.3.4 Binary Field Arithmetic

As mentioned above, we also consider composite field implementations. As typical examples we deal with the fields F_{2^163} and F_{2^{2·83}}, which maintain a similar level of security. The latter field is represented as F_{2^{2·83}} = F_{2^83}[t]/(t^2 + t + 1). In
general, for composite fields the field arithmetic translates to the arithmetic in the subfield.
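As an illustration of this translation (our own sketch, not code from the referenced designs), a multiplication in F_{2^{2·83}} = F_{2^83}[t]/(t^2 + t + 1) can be written directly on top of any subfield multiplier, using the three-multiplication/four-addition decomposition recalled in Section 10.3.1. Here gf83_mul stands for an arbitrary multiplication routine in F_{2^83}, for example a bit-serial routine like the one shown earlier for F_{2^163}.

    def ext_mul(a, b, gf83_mul):
        # a = (a1, a0) and b = (b1, b0) represent a1*t + a0 and b1*t + b0 in F_{2^(2*83)},
        # with coordinates in F_{2^83} (Python integers) and t^2 = t + 1.
        a1, a0 = a
        b1, b0 = b
        m0 = gf83_mul(a0, b0)
        m1 = gf83_mul(a1, b1)
        m2 = gf83_mul(a0 ^ a1, b0 ^ b1)      # Karatsuba-style cross term
        return (m0 ^ m2, m0 ^ m1)            # (c1, c0): three multiplications, four additions

    def ext_add(a, b):
        # Addition is coordinate-wise XOR: two subfield additions.
        return (a[0] ^ b[0], a[1] ^ b[1])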
10.4 Curve-Based Processors for Low-Cost Applications

One solution for a curve-based processor is shown in Fig. 10.2. This processor can handle both types of ECC and also HECC. The architecture consists of the following blocks:
– a control unit (FSM),
– a modular arithmetic logic unit (MALU),
– memory (RAM and ROM).
The ROM stores the ECC/HECC parameters and some constants used in the algorithms. The RAM contains all input and output variables and therefore communicates with both the ROM and the MALU.
Fig. 10.2 Architecture of a curve-based processor
The FSMs control the scalar multiplication k × P and the point/divisor operations. In addition, the controller commands the MALU, which performs the field operations. When the START signal is set, the bits of k = Σ_{i=0}^{n_k − 1} k_i 2^i, with k_i ∈ {0, 1} and
n_k = ⌈log_2 k⌉, are evaluated from MSB to LSB. The control consists of a number of simple state machines and a counter, and its area cost is small.
10.4.1 Modular Arithmetic Logic Unit (MALU)

In this section the architecture of the modular arithmetic logic unit (MALU) for ECC/HECC is explained [19]. The datapath of the MALU is an MSB-first bit-serial F_{2^n} multiplier with digit size d, as illustrated in Fig. 10.3. This unit computes A(x)B(x) mod P(x), where A(x) = Σ_{i=0}^{n−1} a_i x^i, B(x) = Σ_{i=0}^{n−1} b_i x^i, and P(x) = x^n + Σ_{i=0}^{n−1} p_i x^i. The MALU computes A(x)B(x) mod P(x) as follows: it sums up three types of inputs, namely a_i B(x), m_i P(x), and T(x), and then outputs the intermediate result T_next(x) by computing

    T_next(x) = (T(x) + a_i B(x) + m_i P(x)) x,  where m_i = t_n.

By providing T_next as the next input T and repeating the same computation n times, one obtains the multiplication result. Modular addition, A(x) + C(x) mod P(x), can also be supported by the same hardware logic by loading C(x) into the register for T(x) instead of resetting register T(x) when initializing the MALU. This operation requires additional multiplexors and XORs; however, this solution is cheaper than having a separate modular adder. This type of hardware sharing is very important for such low-cost applications. The proposed datapath is scalable in the digit size d, which can be changed easily in order to explore the best combination of performance and cost. In Fig. 10.3 the architecture of the MALU is shown for finite field operations in F_{2^163}. To perform a finite field multiplication, the cmd value should be set to 1 and the operands should be loaded into registers A and B. The value stored in A is evaluated digit by digit from MSB to LSB; we denote the digit size by d. The result of the multiplication is provided in register T after ⌈163/d⌉ clock cycles. A finite field addition is performed by giving cmd the value 0, resetting register A, and loading the operands into registers B and T. The value that is loaded into T is denoted by C in Fig. 10.3. After one clock cycle, the result of the addition is provided in register T. The cmd value makes sure that only the last cell is used for this addition. The cells inside the ALU all have the same structure, which is depicted in Fig. 10.4. A cell consists of a full-length array of AND gates, a full-length array of XOR gates, and a smaller array of XOR gates. The position of the XOR gates in the latter array depends on the irreducible polynomial; in this case, the polynomial P(x) = x^163 + x^7 + x^6 + x^3 + 1 is used. The cmd value determines whether the reduction needs to be done or not: for a finite field multiplication the reduction is needed, whereas for a finite field addition it is not performed. The output value T_out is either given (in a shifted way) to the next cell or to the output register T in Fig. 10.4. The input value T_in either comes from the previous cell or from the output register T.
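A behavioural model of the operations the MALU performs can be written in a few lines of Python (our own sketch; it is a functional MSB-first interleaved modular multiplication, not a cycle-accurate model — the real MALU folds the reduction into each cell and processes d coefficients of A per clock cycle, which is why its multiplication latency is ⌈n/d⌉ plus a few extra cycles):

    def malu_modmul(A, B, P, n):
        # MSB-first interleaved multiplication in F_2[x]/P(x).
        # Operands are integers whose bits are polynomial coefficients; P includes the x^n term.
        T = 0
        for i in range(n - 1, -1, -1):
            T <<= 1                      # multiply the running sum by x
            if (A >> i) & 1:
                T ^= B                   # add a_i * B(x)
            if (T >> n) & 1:
                T ^= P                   # conditional reduction by P(x)
        return T

    def malu_modadd(B, C):
        # Modular addition in F_2[x] is a plain XOR, handled by the last MALU cell in one cycle.
        return B ^ C

For example, with n = 163 and P(x) = x^163 + x^7 + x^6 + x^3 + 1 encoded as in the earlier sketch, malu_modmul(A, B, POLY, 163) returns A(x)B(x) mod P(x).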
Fig. 10.3 Architecture of the MALU for the field F_{2^163} [17]
The strength of this architecture is that it uses the same cell(s) for finite field multiplication and addition without a large overhead in multiplexors. This is achieved by using T both as an output register and as an input register. The flip-flops in T are provided with a load input, which results in a smaller area overhead compared to a solution that would use a full-length array of multiplexors.
Fig. 10.4 Logic inside one cell of the MALU [17]
10.4.2 Performance Results and Discussion

Now we give the results for area complexity, power consumption, and latency for both the ECC and the HECC processors. As mentioned above, the core part of each curve-based protocol is one point/divisor multiplication. For example, the protocol of Schnorr allows for authentication at the cost of only one point multiplication [9]. First we give the results for area complexity. The designs were synthesized with Synopsys Design Vision using a 0.13 μm CMOS library. We used binary fields of bit sizes 131 to 163, as recommended by NIST. The results for various architectures of the MALU with respect to the choice of field and the digit size d are given in Table 10.1 for ECC and in Table 10.2 for HECC and ECC over composite fields. The results for the complete architecture, in gates, for the cases of ECC over composite fields and HECC are given in Table 10.3; besides the MALU, the results in this table also include the control logic. For the case of ECC over composite fields and for HECC, the ALU shrinks in size by a factor of 2, because we work in fields whose elements have bit lengths that are half of those for ECC over a prime-degree extension field offering the same level of security. The total number of field operations increases, but some speed-up is possible, which we obtain by means of digit-serial multiplication (instead of bit-serial multiplication, i.e., d = 1).

Table 10.1 The area complexity in gates of the MALU and the control logic for the ECC processor for various digit sizes
Field size   d=1    d=2    d=3    d=4
131          4281   4758   5219   5685
139          4549   5043   5535   6028
163          5314   5900   6486   7052

Table 10.2 The area complexity in gates of the MALU and the control logic for the HECC processor and the ECC processor over composite fields for various digit sizes
Field size   d=1    d=2    d=4    d=6    d=8
67           2171   2421   2901   3420   3899
71           2299   2563   5535   3576   4083
79           2564   2854   6016   4012   4530
83           2693   2997   6486   4168   4794

Table 10.3 The complete area corresponding to the gray box of Fig. 10.2 in gates
Field size   d=1    d=2    d=3    d=4    d=6    d=8
ECC: 67      4345   4600   –      5089   5612   6103
HECC: 67     5893   6147   –      6635   7166   7652
ECC: 83      5071   5375   –      5971   6558   7193
HECC: 83     6622   6930   –      7513   8112   8747
ECC: 131     6718   7191   7645   8104   –      –
ECC: 163     8214   8791   9368   9926   –      –
All results for area are summarized in Fig. 10.5. In this figure, results are given for the complete processors excluding the data memory. The performance for ECC is in each case calculated from the formulae for the point operations as in Algorithm 2, and the total number of cycles for each field operation is obtained from the following formulae. The total number of cycles for one field multiplication is ⌈n/d⌉ + 3, where n is the bit size of an arbitrary element of the field in which we are working and d is the digit size; one field addition takes four cycles. The number of cycles required for one point multiplication in the case of a field F_{2^p}, where p is a prime, is (n_k − 1)[11(⌈n_k/d⌉ + 3) + 12]. Here, n_k denotes the number of bits of the scalar k, e.g., the secret key. The results for the total number of cycles of one point multiplication for ECC over F_{2^131} and F_{(2^67)^2} are given in Table 10.4. The number of cycles required for one point multiplication in the case of ECC over a composite field F_{2^{2p}} is (2n_k − 1)[36(⌈n_k/d⌉ + 3) + 324]. For HECC over F_{2^p} we get the following number of cycles for one scalar multiplication: (2n_k − 1)[115(⌈n_k/d⌉ + 3) + 621].
Fig. 10.5 Area in gates versus field length [bit] for all three cryptosystems (ECC over F_{2^p}, ECC over composite fields, and HECC) and for the different digit sizes d
Performance results are summarized in Fig. 10.6. To calculate the time for one point multiplication we need an operating frequency. However, the frequency that can be used is strictly limited by the total power. We assume an operating frequency of 500 kHz in order to estimate the actual timing, because the power estimates obtained for this frequency are acceptable for lightweight applications [21]. (The actual figures for power suggested in [21] correspond to a supply current I below 10 μA.)
Table 10.4 The number of cycles required for one scalar multiplication for ECC over the fields F_{2^131} and F_{(2^67)^2} and for HECC
Field size   d=1        d=2       d=3     d=4      d=6      d=8
ECC-131      229,774    119,079   81,613  62,880   –        –
ECC-67       378,252    220,248   –       138,852  114,912  100,548
HECC-67      1,153,243  648,508   –       388,493  312,018  266,133
Fig. 10.6 Performance in number of cycles versus field length [bit] for all three cryptosystems and for the different digit sizes d
In order to estimate the power consumption, we use Synopsys PrimePower on the designed gate-level netlist, synthesized with a 0.13 μm CMOS library and a conservative wire load model (pre-layout netlist). All results for the power consumption of the three cryptosystems are given in Fig. 10.7. The conclusion from the graphs shown above is that ECC over composite fields is the best choice for low power and area, but for performance it is better to use "regular" ECC. More precisely, at an operating frequency of 500 kHz we get 126 and 201 ms for the best cases of ECC over F_{2^131} (d = 4) and F_{2^{67·2}} (d = 8), respectively, and 532 ms for the best case of HECC (F_{2^67} and d = 8). The results of all relevant works are compared in Table 10.5. We underline again that the results for the area complexity of our designs [17, 20] do not include data memory, as explained above.
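As a quick sanity check of these latency figures (our own back-of-the-envelope conversion, not part of the original evaluation), the timings follow directly from the cycle counts of Table 10.4 and the assumed 500 kHz clock:

    F_CLK = 500e3   # assumed operating frequency in Hz

    for name, cycles in [("ECC-131, d=4", 62880),
                         ("ECC-67 (composite), d=8", 100548),
                         ("HECC-67, d=8", 266133)]:
        print(name, round(cycles / F_CLK * 1e3), "ms")
    # prints approximately 126, 201, and 532 ms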
Fig. 10.7 Power consumption [μW] versus field length [bit] for all three cryptosystems and for the different digit sizes d, at a clock frequency of 500 kHz

Table 10.5 Comparison with other related work
References   # bits   Area [gates]   Tech. [μm]   f [kHz]   Time for 1 PM [ms]   Power [μW]
[18]         131      11,969.93      0.35         13,560    18                   –
[13]         100      18,720         0.13         500       410.45               <400
[17]         131      8104           0.13         500       106                  <30
[20]         134      6103           0.13         500       201                  <13
[20]         134      7652           0.13         500       532                  <17
[30]         163      12,863         0.13         323       244                  <13
The amount of storage required for the implementations is 13n and 28n bits for ECC and HECC, respectively, where n is the number of bits of the elements in the subfield. For ECC over a field F_{2^p}, the required storage is 5p bits. The work of Lee et al. [30] is a complete elliptic curve processor, and its area figure in Table 10.5 includes memory. This work is the most recent one and gives a state-of-the-art ECC implementation for RFID with the highest security level. The processor also uses the MALU for ECC over F_{2^163}, but improves on the control logic and the storage. More details about the processor can be found in [31] and [30].
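To give a concrete feeling for these numbers (an illustrative instantiation on our part), for the composite-field and HECC designs with subfield size n = 67 this amounts to roughly 13·67 = 871 bits and 28·67 = 1876 bits of data storage, respectively, while the standardized ECC design over F_{2^131} needs about 5·131 = 655 bits.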
10.5 Conclusions and Future Challenges

This work gives an overview of low-power and low-footprint processors proposed for PKC applications in pervasive security. We describe in detail a processor for ECC and HECC suitable for RFIDs and sensor nodes. We give detailed results for area and power, and performance estimates, for ECC over F_{2^p} with p prime,
over composite fields F_{2^{2p}}, and for HECC over binary fields for curves of genus 2. Furthermore, we propose to use the same ALU for HECC as for ECC over composite fields, because both cryptosystems rely on the same subfield arithmetic. The results from our own research show that the arithmetic unit for HECC and for ECC over composite fields is smaller than the one for standardized ECC, but the memory requirements are slightly bigger. Using RAM for storage, which would result in a smaller number of gates compared to register files, one could obtain a feasible low-cost solution for the different types of curve-based cryptography. Our compact and low-power ECC processor contains a modular arithmetic logic unit (MALU) for the ECC field arithmetic; the best solution with respect to performance (106 ms at 500 kHz) is obtained for ECC. The processor features 8104 gates for the MALU and control unit over the field F_{2^131} with d = 4, which provides a reasonable level of security for the time being. Another option, which is better with respect to compactness, is ECC over F_{2^{67·2}}. In this case the processor consumes 6103 gates and the performance is 201 ms. Considering the power numbers, we see that ECC over composite fields is the best choice, as the power figures range from 11 to 16 μW. Altogether, the power consumed by the MALU does not grow with the digit size, but only with the field size. The most recent work [30] focuses on ECC only and results in a compact processor of less than 13 kgates that consumes less than 13 μW of power and completes one point multiplication in 244 ms. Future research should question the optimality of the solution and consider side-channel security. It is hard to predict where the bounds are, but we expect that a feasible performance and low power could be obtained with less than 10 kgates. In addition to side-channel security, research on low-cost countermeasures is crucial due to the constraints. Further on, one should go another level up and investigate suitable protocols and the costs they imply. Namely, it is possible to obtain more efficient protocols by choosing weaker security models. By weaker we mean that the adversary is assumed not to be the strongest possible. In this way one can obtain a trade-off between security and efficiency.
References
1. D.E. Knuth. The Art of Computer Programming – Vol. 2 – Seminumerical Algorithms. Addison-Wesley, third edition, 1998.
2. B. Byramjee and S. Duquesne. Classification of genus 2 curves over F_{2^n} and optimization of their arithmetic. Cryptology ePrint Archive, Report 2004/107.
3. I. Blake, G. Seroussi, and N.P. Smart. Elliptic Curves in Cryptography. London Mathematical Society Lecture Note Series. Cambridge University Press, 1999.
4. C. Diem and J. Scholten. Cover attacks – a report for the AREHCC project, 2003, http://www.arehcc.com
5. J. López and R. Dahab. Fast multiplication on elliptic curves over GF(2^m). In Ç.K. Koç and C. Paar, editors, Cryptographic Hardware and Embedded Systems – CHES, vol. 1717 of LNCS, pp. 316–327. Springer-Verlag, 1999.
6. A. Hodjat, L. Batina, D. Hwang, and I. Verbauwhede. HW/SW co-design of a hyperelliptic curve cryptosystem using a μCode instruction set coprocessor. Integration, the VLSI Journal, 40(1): 45–51.
7. V.S. Miller. Use of elliptic curves in cryptography. In H.C. Williams, editor, Advances in Cryptology – CRYPTO '85, vol. 218 of LNCS, pp. 417–426. Springer-Verlag, 1986.
8. P. Montgomery. Speeding the Pollard and elliptic curve methods of factorization. Mathematics of Computation, 48:243–264, 1987.
9. C.-P. Schnorr. Efficient identification and signatures for smart cards. In G. Brassard, editor, Advances in Cryptology – CRYPTO '89, vol. 435 of LNCS, pp. 239–252. Springer, 1989.
10. P. Tuyls and L. Batina. RFID-tags for anti-counterfeiting. In D. Pointcheval, editor, Topics in Cryptology – CT-RSA 2006, vol. 3860 of LNCS, pp. 115–131. Springer-Verlag, February 13–17, 2006.
11. M. Joye and S.-M. Yen. The Montgomery powering ladder. In B.S. Kaliski Jr., Ç.K. Koç, and C. Paar, editors, Proceedings of the 4th International Workshop on Cryptographic Hardware and Embedded Systems – CHES, vol. 2523 of LNCS, pp. 291–302. Springer-Verlag, 2002.
12. J. Goodman and A.P. Chandrakasan. An energy-efficient reconfigurable public-key cryptography processor. IEEE Journal of Solid-State Circuits, 36(11):1808–1820, November 2001.
13. G. Gaubatz, J.-P. Kaps, E. Öztürk, and B. Sunar. State of the art in ultra-low power public key cryptography for wireless sensor networks. In 2nd IEEE International Workshop on Pervasive Computing and Communication Security (PerSec 2005), Kauai Island, Hawaii, March 2005.
14. G. Gaubatz, J.-P. Kaps, and B. Sunar. Public key cryptography in sensor networks – revisited. In 1st European Workshop on Security in Ad-Hoc and Sensor Networks (ESAS 2004), Heidelberg, Germany, August 2004.
15. N.P. Smart. How secure are elliptic curves over composite extension fields? In B. Pfitzmann, editor, Advances in Cryptology: Proceedings of EUROCRYPT '01, vol. 2045 of LNCS, pp. 30–39. Springer-Verlag, 2001.
16. L. Batina, J. Guajardo, T. Kerins, N. Mentens, P. Tuyls, and I. Verbauwhede. Public-key cryptography for RFID-tags. In Fourth IEEE International Workshop on Pervasive Computing and Communication Security – PerSec 2007, 6 pages, 2007.
17. L. Batina, N. Mentens, K. Sakiyama, B. Preneel, and I. Verbauwhede. Low-cost elliptic curve cryptography for wireless sensor networks. In L. Buttyan, V. Gligor, and D. Westhoff, editors, Third European Workshop on Security and Privacy in Ad hoc and Sensor Networks, vol. 4357 of LNCS, pp. 6–17. Springer-Verlag, 2006.
18. S. Kumar and C. Paar. Are standards compliant elliptic curve cryptosystems feasible on RFID? In Proceedings of Workshop on RFID Security, Graz, Austria, July 2006.
19. K. Sakiyama, L. Batina, N. Mentens, B. Preneel, and I. Verbauwhede. Small-footprint ALU for public-key processors for pervasive security. In Proceedings of Workshop on RFID Security 2006, 12 pages, Graz, Austria, 2006.
20. L. Batina, N. Mentens, K. Sakiyama, B. Preneel, and I. Verbauwhede. Public-key cryptography on the top of a needle. In Proceedings of the IEEE International Symposium on Circuits and Systems (ISCAS 2007), Special Session: Novel Cryptographic Architectures for Low-Cost RFID, 4 pages, 2007.
21. J. Wolkerstorfer. Is elliptic-curve cryptography suitable to secure RFID tags? Workshop on RFID and Lightweight Crypto, Graz, Austria, 2005.
22. A. Menezes, E. Teske, and A. Weng. Weak fields for ECC. In Topics in Cryptology – CT-RSA – The Cryptographers' Track at the RSA Conference, vol. 2964 of LNCS, pp. 366–386. Springer-Verlag, 2004.
23. IEEE P1363. Standard specifications for public key cryptography, 1999.
24. P. Gaudry, F. Hess, and N.P. Smart. Constructive and destructive facets of Weil descent on elliptic curves. Journal of Cryptology, 15:19–46, 2002.
25. H. Cohen and G. Frey. Handbook of Elliptic and Hyperelliptic Curve Cryptography. Chapman & Hall/CRC, 2006.
26. G. Frey. How to disguise an elliptic curve. Talk at the Waterloo workshop on the ECDLP, 1998, http://www.cacr.math.uwaterloo.ca/conferences/1998/ecc98/slides.html
27. N. Koblitz. Elliptic curve cryptosystems. Mathematics of Computation, 48:203–209, 1987.
28. N. Koblitz. A family of Jacobians suitable for discrete log cryptosystems. In S. Goldwasser, editor, Advances in Cryptology: Proceedings of CRYPTO '88, vol. 403 of LNCS, pp. 94–99. Springer-Verlag, 1988.
29. A. Hodjat and I. Verbauwhede. The energy cost of secrets in ad-hoc networks. In Proceedings of the IEEE Circuits and Systems Workshop on Wireless Communications and Networking, 4 pages, 2002.
30. Y.K. Lee, K. Sakiyama, L. Batina, and I. Verbauwhede. Elliptic curve based security processor for RFID. IEEE Transactions on Computers – Special Section on Special-Purpose Hardware for Cryptography and Cryptanalysis, 57(11):1514–1527, November 2008.
31. Y.K. Lee and I. Verbauwhede. A compact architecture for Montgomery elliptic curve scalar multiplication processor. In Workshop on Information Security Applications – WISA, vol. 4867 of LNCS, pp. 115–127, 2008.
Chapter 11
Demonstrating End-Point Security in Embedded Systems Patrick Schaumont, Eric Simpson, and Pengyuan Yu
Many embedded systems handle private and confidential information. From a security perspective, the embedded system environment is extremely hostile. It is a resource-limited environment, and resources often have to be shared among multiple distrustful users or applications. Moreover, an embedded system is susceptible to a wide range of physical, side-channel, and logical attacks. Many attacks consist of simply copying confidential information from the accessible, unprotected interfaces that are abundant in embedded systems. Therefore, there is a need to protect information across the boundaries of hardware and software. This chapter describes a demonstrator for end-point security in a video peripheral. The demonstrator enables a third party to securely display a message, stored on a compact flash card, on a video monitor attached to the system. The hardware platform is implemented using configurable logic. The message is personalized to a specific instance of the platform and cannot be copied, modified, or cloned. The message is only decoded just before it is rendered on the display. Our contributions are threefold. Starting from a standard system-on-chip architecture with an embedded video display interface, we present architectural enhancements to hardware and software to implement end-point security. We show that our system effectively implements a secure 'tunnel': a trusted path from the compact flash memory up to the pins of the VGA connector. We also present an adequate security protocol that offers end-point security services to a third-party user of the resulting platform. Finally, we illustrate a methodology to implement end-point security.
11.1 End-Point Security for Embedded Systems

Consider the video display device system illustrated in Fig. 11.1. Bob has encoded a message on a compact flash card (CF card) and sends the card to Alice. Alice reads the message on the card using the display system, which can read the CF card and can render the stored message on an attached monitor.

P. Schaumont (B) ECE Department, Virginia Tech, Blacksburg, VA 24061, USA
e-mail:
[email protected]
Fig. 11.1 Issues in secure video message display
The system is implemented using configurable logic (FPGA) and contains a CF card interface, an embedded processor with memory, and a video interface. The video display device contains two types of configuration (Table 11.1). The first is the platform configuration, which consists of the bitstream configuration of the FPGA. The second is the video configuration, which includes the message to display as well as the additional software and hardware peripheral programming required to display the message. The hardware configuration is stored in an off-chip EPROM. The video configuration is stored on the CF card that is attached to the display device.

Table 11.1 Platform configuration and video configuration
           Platform configuration             Video configuration
Hardware   FPGA bitstream                     VGA parameters
Software   Boot software (inside bitstream)   Message display software
Data       –                                  Message
The problem of end-point security is the following: how can we enforce security properties for the message (for example, confidentiality) as it travels from the CF card through the display system to the pins of the VGA monitor? Thus, we are considering the problem of implementing the secure communication link between Bob and Alice inside an embedded system. This is a valid concern because of the large number of programmable components present in such embedded systems. The display device used by Alice in Fig. 11.1 is implemented using configurable logic (FPGA) and embedded cores. The platform configuration as well
as the video configuration may be tampered with or modified. The objective of end-point security is to mitigate the risks of such modifications. We briefly enumerate the potential risks:

1. Hardware tampering: A hacker may extend the hardware configuration with additional interfaces. Such an interface may enable sniffing of data and programs, up to full control of the platform.
2. Software tampering: A hacker may modify the software running on the platform and attempt attacks similar to those mentioned above. For software residing in on-chip memories (as illustrated in Fig. 11.1), the integrity problem is the same as in the case of hardware tampering. However, we also consider dynamic download of software from the CF card in combination with the message.
3. Message tampering: A hacker may modify the message on the CF card.
4. Message copying: A hacker may copy the CF card content and run it on another compatible display device.

While each of these problems can be solved using existing cryptographic protocols, it is important to combine all these elements into a single, consistent strategy to protect the message. We call this strategy end-point security: protecting an information flow between the end points of an embedded system. In current systems, end-point security is either absent or implemented only partially. Attacks that exploit the lack of end-point security are abundant [1, 2]. There are many practical applications for end-point security, including runtime intellectual property protection on programmable platforms [3], design-rights management for video graphics [4], securing of disk-drive data [5], and protection of computer I/O peripherals [6], to name a few. In the following sections, we describe our prototype implementation. We first define the intended security assurances for messages stored on the CF card. Following that, we describe the system architecture of the prototype, detailing how the security assurances are achieved and how this leads to end-point security.
11.2 Required Security Assurances

In an embedded system, a component is said to be trusted if that component will always operate as intended. The component is said to be trustworthy if it operates as intended without violating the security policy [7]. In the video messaging system, many different components are involved: the CF card, the microprocessor subsystem and its software, the system bus, and the VGA interface. To achieve end-point security, we need to build a trustworthy channel from the CF card to the VGA display. We will do this by creating a chain-of-trust: multiple trustworthy components which are linked together by a trustworthy relation. A well-known embodiment of a chain-of-trust is the use of a trusted platform module (TPM) in the implementation of trusted computing functionality on a personal computer [8]. There are three
security components involved in the chain-of-trust: platform authentication, message integrity, and message confidentiality.

• In order to establish a chain-of-trust on a remote end-point, the incoming video configuration and the video peripheral need to authenticate and verify each other. Authentication is required because the chain-of-trust is supposed to be established between a specific video configuration and a specific video peripheral. Without being able to identify either the configuration or the peripheral, it is impossible to securely establish a unique link between the incoming video configuration and the remote device.
• In addition to establishing the identity of both the video peripheral and the incoming configurations, end-point security requires the ability to verify the integrity of the incoming video messages. Integrity protection ensures that each link in the chain-of-trust is utilized in its original, unaltered form. This gives the video system the ability to detect modifications to the data, software, and hardware decryption service that the original author did not intend.
• Lastly, confidentiality is required to ensure the secrecy of the video configuration. Ensuring the secrecy of the software and hardware decryption service provides the assurance that the incoming configuration is only intelligible and usable by the single specified video peripheral. Also, without confidentiality, utilizing private information such as cryptographic keys in the video peripheral would be impossible.

We will now define the key elements that support a cryptographic protocol achieving all of the above-mentioned security assurances. The first is the integrity and authentication tag of a video message configuration, called Video IA. It is obtained through cryptographic hashing:

    Video IA = Hash(Video Config).    (11.1)
The Video IA tag is used in the cryptographic protocol that authenticates the video configuration. Authentication of the video peripheral is achieved by integrating a challenge–response protocol with a platform-unique function. This platform-unique function maps an arbitrary 128-bit challenge to a 128-bit response. Our demonstrator accomplishes this by utilizing a 128-bit AES cipher with a secret key. Therefore, the platform-unique function performs the following operation to produce an individual challenge–response pair (CRP). With C a 128-bit challenge, R a 128-bit response, and P_HW( ) a platform-unique challenge–response function, we have the following relation:

    R = P_HW(C).    (11.2)
Lastly, confidentiality of the video messaging configurations is obtained by integrating the challenge response function with the AES encryption algorithm. Individual message configurations are encrypted by using the response value from a CRP as the encryption key. Utilizing CRPs for deriving encryption keys provides
two benefits. First, since the required response value can only be generated by a single platform-unique function, video configurations can be securely packaged for a single specific device. Second, secret keys do not have to be stored and managed by the video message provider; instead, only the challenges need to be managed. Since the challenges do not provide useful information to anything but the original platform-unique function, they can be communicated and stored in insecure storage. An example video message configuration securely packaged for a unique device is shown below:

    C, {Video Config}_R,  where {m}_R ≡ AES_R(m).    (11.3)
The values C and R form one of the CRPs produced by the device's platform-unique function. The response value is used as the key to encrypt the video message configuration and its identity. The configuration encrypted under the key R derived from challenge C is denoted by {...}_R. By providing dynamic confidentiality, integrity, and authentication, we achieve end-point security on a remote, off-line video system. In Section 11.3 we present the system architecture that implements the above-mentioned security assurances, and we describe how the chain-of-trust is systematically created from the moment the device boots until the actual message is displayed.
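The packaging operation can be sketched in a few lines of Python (our illustration only; the hash function, the cipher mode, the tag layout, and the library calls are assumptions rather than details fixed by the demonstrator). The platform-unique function is modeled as AES-128 encryption of the challenge under a device secret key, and the resulting response is used as the content-encryption key, as in (11.2) and (11.3):

    import os, hashlib
    from Crypto.Cipher import AES   # PyCryptodome

    DEVICE_KEY = os.urandom(16)     # device secret, known only to the SAM and the message provider

    def p_hw(challenge):
        # Platform-unique function P_HW: AES-128 of a 128-bit challenge under the device key.
        return AES.new(DEVICE_KEY, AES.MODE_ECB).encrypt(challenge)

    def package(video_config):
        # Package a video configuration for one specific device, as in (11.3).
        video_ia = hashlib.sha256(video_config).digest()      # Video IA tag, Eq. (11.1)
        c = os.urandom(16)                                     # fresh 128-bit challenge
        r = p_hw(c)                                            # response R = P_HW(C), Eq. (11.2)
        cipher = AES.new(r, AES.MODE_CTR, nonce=c[:8])         # encryption under R (CTR mode assumed)
        return c, cipher.encrypt(video_config + video_ia)      # C travels in the clear

    def unpack(c, blob):
        # On the device: regenerate R from C, decrypt, and verify the Video IA tag.
        r = p_hw(c)
        plain = AES.new(r, AES.MODE_CTR, nonce=c[:8]).decrypt(blob)
        video_config, video_ia = plain[:-32], plain[-32:]
        assert hashlib.sha256(video_config).digest() == video_ia, "tampered configuration"
        return video_config

A tampered or copied package fails either the decryption-key derivation (wrong device, hence wrong R) or the Video IA check, which mirrors the authentication and integrity assurances described above.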
11.3 Secure Video System Architecture

We describe the system architecture of the secure video messaging system and describe how the system boots. Since the architecture creates a unique link between the message and the display device, we also need a means to enroll new messages onto the architecture. This will be described in Section 11.3.3.
11.3.1 System Layout

The secure video messaging demonstrator is built around a PowerPC core in the Virtex-II Pro VP30 FPGA. Figure 11.2 presents the system-level block diagram of the video messaging system. The design is a single-chip implementation and uses only on-chip memory resources. A two-level bus architecture includes a high-speed processor local bus (PLB) and a peripheral bus (OPB). The system boots from two on-chip memory modules (OCM) of 32 KB each. Software that comes as part of the video configuration is downloaded into a larger 128 KB RAM attached to the PLB. The on-chip peripherals include a UART, a compact flash interface, a keyboard interface, a SAM module, and a VGA display controller. The SAM module is a Secure Authentication Module and contains the platform-unique function which is
202
P. Schaumont et al.
Fig. 11.2 Secure video messaging system architecture
required to authenticate and decrypt video configurations. Video configurations can contain messages, software applets, and video peripheral configurations. Decrypted software applets are stored in the PLB RAM. Decrypted video peripheral configurations are directly programmed into the video peripheral. In our architecture, the video peripheral configuration consists of the key of a Trivium stream cipher, which enables the video peripheral to provide runtime video decryption services.
11.3.2 Booting the Chain-of-Trust
Fig. 11.3 SAM security architecture establishes a chain-of-trust
The chain-of-trust allows Bob to control the operations that handle the message as it travels from the CF card through software to the pins of the VGA connector. The start-up process of the chain-of-trust proceeds in three steps and is illustrated in Fig. 11.3.
• The initial step in the chain-of-trust takes place at power-on, when the base system is configured. The base system configuration is protected by the Virtex bitstream encryption facility. Part of the base system configuration is the PowerPC boot loader code.

• The next step in the chain-of-trust is initiated when the PowerPC boot loader scans the CF card for the software that is responsible for displaying secured content. Using boot loader software to search the external devices keeps the base design flexible and easily portable to other interfaces. Once the software has been located, SAM reads the encrypted software from the CF card. This software is then decrypted, authenticated, and its integrity verified. If the software is found to be authentic and untampered, SAM loads it into the PLB instruction memory and allows it to begin executing. Since no other software or components have access to the PLB instruction store, the securely loaded software is assured trustworthy operation.

• The final step extends the chain-of-trust to the VGA controller by loading a video decryption service from the CF card. In our video messaging system, this is accomplished by the software IP that was just loaded into the PLB RAM. That software instructs SAM to load the desired video decryption service from the CF card. Currently, the video decryption service is a Trivium stream cipher configuration. Once again, the hardware configuration is securely packaged for the currently executing SAM device. If the video decryption configuration is tampered with or copied to another system, it will not be authenticated and cannot be utilized.

At this point, Bob has a secure tunnel from the CF card to the VGA controller's pins. He can now pipe content from external sources through the secure tunnel to the display. While our current implementation pulls protected content from the CF card, the architecture enables the software to stream data from any source that can be attached to an on-chip peripheral bus. The software provided as part of the video configuration controls the secure tunnel: it can configure the hardware decryption service, it pulls the message content from the CF card, and it controls the data path between the CF card and the video peripheral.
11.3.3 The SAM Protocol

In order to use SAM, we need a systematic mechanism to introduce new content (messages, software, or hardware configurations) to the platform. This is achieved using the SAM protocol, which is summarized in this section. A detailed discussion of this protocol is out of scope here and may be found in [9]; we provide a summary of the protocol features as they apply to the video messaging system.

The SAM protocol includes two phases: an online phase, in which the video messaging system is assumed to be connected to a network, and an off-line phase. In the online phase, new content can be stored on the CF card. In the off-line phase,
content can be played back under the trust guarantees described earlier. This situation is similar to, for example, an iPod, which is only connected online on a sporadic basis.

Fig. 11.4 Video configuration enrollment and distribution
11.3.3.1 SAM Protocol Online Phase

During the online phase, the content of the CF card is prepared for off-line operation. This online phase makes use of a trusted third party. SAM and the video configuration are first enrolled (Fig. 11.4, step 1), and next, the video configuration is encrypted for a specific SAM (Fig. 11.4, step 2). As part of the identity enrollment in step 1, SAM needs to generate a list of challenge/response pairs (CRPs), which will be used to implement platform-specific encryption. The CRP list generation is shown in Algorithm 1.

Algorithm 1 CRP list generation
Input: seed, a 128-bit input that determines the initialization vector, and N, the number of CRPs to generate
Output: vector of N independent (C_i, R_i) tuples
begin
  C_0 ← P_HW(seed);
  R_0 ← P_HW(C_0);
  for i ← 1 to N do
    C_i ← P_HW(R_{i-1}) ˆ i ˆ R_{i-1};
    R_i ← P_HW(C_i);
    output(C_i, R_i);
  end
end
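The chain of Algorithm 1 can be rendered in software as follows. This sketch reuses the p_hw() routine from the earlier illustration and reads the ˆ operator as a byte-wise XOR, with the loop index folded into the last byte of a 128-bit block; both readings are our assumptions, made only to show the chaining structure.

#include <string.h>

void p_hw(const unsigned char in[16], unsigned char out[16]);   /* platform-unique function */

static void xor128(unsigned char acc[16], const unsigned char v[16])
{
    for (int j = 0; j < 16; j++) acc[j] ^= v[j];
}

/* Generate N challenge/response pairs (C[i], R[i]) from a 128-bit seed. */
void crp_list(const unsigned char seed[16], unsigned int N,
              unsigned char C[][16], unsigned char R[][16])
{
    unsigned char c0[16], r_prev[16], t[16];

    p_hw(seed, c0);                             /* C_0 = P_HW(seed) */
    p_hw(c0, r_prev);                           /* R_0 = P_HW(C_0)  */

    for (unsigned int i = 1; i <= N; i++) {
        unsigned char idx[16] = { 0 };
        idx[15] = (unsigned char)i;             /* low byte of i; enough for a sketch     */

        p_hw(r_prev, t);                        /* C_i = P_HW(R_{i-1}) ^ i ^ R_{i-1}      */
        xor128(t, idx);
        xor128(t, r_prev);
        memcpy(C[i - 1], t, 16);

        p_hw(C[i - 1], R[i - 1]);               /* R_i = P_HW(C_i)                        */
        memcpy(r_prev, R[i - 1], 16);
    }
}

Note that access to p_hw() is what must be controlled: the security argument of the next paragraph relies on SAM refusing to evaluate it on externally supplied challenges.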
The construction of Algorithm 1 exhibits some key properties. The CRP generation process does not allow the response for an arbitrary, externally chosen challenge to be determined; that is, access to the platform-unique function is controlled. Therefore, even if an attacker knows a challenge and has physical access to the hardware, the attacker will not be able to instruct SAM to generate the corresponding response. This is a key security property because, otherwise, someone with physical access to the hardware could simply input the challenge C_enroll, have the platform-unique function generate the corresponding R_enroll (the encryption key), and defeat the entire security system.
11.3.3.2 SAM Protocol Off-line Phase

The normal operation mode of SAM is off-line. This off-line phase of the SAM protocol supports the chain-of-trust boot process described earlier. During the boot process, SAM will process the encrypted video configuration stored on the CF card. The decryption key is generated at runtime using the challenge-response function of SAM (Fig. 11.4, step 3). The information is stored on the CF card in the three-part format shown in Table 11.2.
Table 11.2 SAM secure IP format
(A) Opcode      SAM packet code
    Challenge   Challenge SAM uses to generate the decryption key
(B) Video IA    CRP-encrypted video identity and authentication from the enroll station
(C) Video IP    CRP-encrypted video configuration
The Opcode in Part (A) is used to instruct SAM that a secure video configuration follows. Part (B) is from the enrollment station and contains information that can validate the authenticity and integrity of the video configuration in Part (C) of the packet. CRP-encryption provides data encryption without making the system developer responsible for managing secret keys. Instead, the enrollment station provides SAM with a challenge value. The challenge is public, but only the single specific SAM device can generate the required response. Using the generated response, SAM decrypts the {Video IA} packet, which contains another challenge that SAM can use to generate the key for decrypting the {Video IP} packet. This way, separate keys protect the enrollment station and video configuration packets, and neither key has to be managed by the system developer. The protocol described in [9] shows how this two-step approach to enrollment and authentication enables the management of intellectual property components. The SAM protocol therefore has a broader application range than the video messaging system described here.
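To illustrate the two-step key derivation, the sketch below walks through the format of Table 11.2. The p_hw() routine is the earlier illustration, decrypt_with_key() is a purely hypothetical stand-in for SAM's AES-based bulk decryption (the mode, framing and the position of the inner challenge are not specified in the text), and in the real system these steps run inside the SAM micro-engine, which never exposes the response values.

#include <string.h>

void p_hw(const unsigned char in[16], unsigned char out[16]);      /* platform-unique function   */
int  decrypt_with_key(const unsigned char key[16],                 /* hypothetical bulk decrypt; */
                      const unsigned char *ct, size_t len,         /* returns 0 on success       */
                      unsigned char *pt);

/* Unpack a secure video configuration: (A) challenge, (B) Video IA, (C) Video IP. */
int unpack_video_config(const unsigned char c_ia[16],
                        const unsigned char *ia_ct, size_t ia_len, unsigned char *ia_pt,
                        const unsigned char *ip_ct, size_t ip_len, unsigned char *ip_pt)
{
    unsigned char r_ia[16], c_ip[16], r_ip[16];

    p_hw(c_ia, r_ia);                                  /* only this device can derive R_IA       */
    if (decrypt_with_key(r_ia, ia_ct, ia_len, ia_pt))
        return -1;                                     /* Video IA fails to authenticate         */

    memcpy(c_ip, ia_pt, 16);                           /* inner challenge C_IP carried in Video IA
                                                          (its position here is assumed)         */
    p_hw(c_ip, r_ip);                                  /* second key protects the configuration  */
    return decrypt_with_key(r_ip, ip_ct, ip_len, ip_pt);
}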
11.4 Secure Authentication Module (SAM) Implementation

In this section we provide further details on the system architecture introduced in Fig. 11.2, including the internal architecture of the SAM and of the secure video peripheral.
11.4.1 SAM Architecture
Fig. 11.5 Secure authentication module (SAM)
SAM is designed as a stand-alone micro-engine and interfaces with the rest of the video messaging system through four memory-mapped registers attached to the OPB bus. Figure 11.5 displays the micro-engine architecture of SAM. A Picoblaze microcontroller serves as the micro-engine controller, and a range of custom data paths assist this controller in implementing the SAM protocol. The key data-processing components in SAM include an Advanced Encryption Standard (AES) cipher, a challenge/response function (P_HW), a 64-bit counter, and a register file. The platform-unique challenge/response function is derived from a block cipher with a secret fixed key. The data path components operate by means of load/done commands and communicate over three busses: a data-input bus, a data-output bus, and a response bus. The micro-engine architecture of SAM is a compromise between two opposing objectives: ease of design and design efficiency. With a flexible controller and a few key functions in custom data paths, quick development is combined with trustworthy operation. Based on the micro-engine architecture and the design methodology described further on, the entire design was carried out from concept to implementation in approximately 3 person-months. SAM operates in one of two major modes, corresponding to the online and off-line operation modes described earlier. In the online operation mode, the micro-program on the Picoblaze generates a chain of challenge/response pairs starting from a seed. New video configurations can be enrolled using these
challenge/response pairs. In the off-line operation mode, SAM verifies the authenticity of the video configuration. Successful authentication results in decryption of that video configuration. When SAM authenticates software, the decrypted module is returned to the OPB interface. If, on the other hand, a hardware configuration is authenticated, the result is a user-defined value inside the hard-wired register file. In the secure video messaging system, this capability is used to program a decryption key directly into the video peripheral.
11.4.2 System to SAM Communication
Fig. 11.6 SAM to software communication
As shown in Fig. 11.6, SAM is integrated into the secure video system as a slave on the OPB. SAM communicates with external sources through a pair of 32-bit data registers and a pair of 8-bit control and status registers. The 32-bit I/O width was chosen to match the data width of the OPB. One of the 32-bit registers is used for system-to-SAM communication, while the other is used for SAM-to-system communication. SAM is a command-controlled peripheral where incoming commands, or opcodes, are written to the 8-bit control register. These commands control the operation of SAM by informing SAM of the type of data that is being sent to the module. The result of command completion can be observed through the status register. For example, when the video system commands SAM to load a secure video message, the result of the operation is communicated via the status register: both successful loading of a video configuration and a failed load are acknowledged there. As illustrated in Fig. 11.6, SAM is synchronized with the host-processor software through request/acknowledge handshaking on a single bit of the control and status register shared between SAM and the OPB host processor. This makes SAM easily portable to other processors and only requires a set of general-purpose digital I/O ports for integration.
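The access pattern from the host side can be sketched as follows. The register addresses, the position of the handshake and status bits, and the names are all invented for illustration; the real addresses are assigned when the OPB peripheral is instantiated in the EDK design.

#include <stdint.h>

/* Hypothetical register map for the four OPB-mapped SAM registers. */
#define SAM_DATA_IN   ((volatile uint32_t *)0x40000000u)   /* system -> SAM, 32 bit  */
#define SAM_DATA_OUT  ((volatile uint32_t *)0x40000004u)   /* SAM -> system, 32 bit  */
#define SAM_CTRL      ((volatile uint8_t  *)0x40000008u)   /* opcode + request bit   */
#define SAM_STATUS    ((volatile uint8_t  *)0x40000009u)   /* result + acknowledge   */

#define SAM_REQ   0x80u   /* request/acknowledge handshake bit (assumed position) */
#define SAM_OK    0x01u   /* command completed successfully     (assumed position) */

/* Issue one opcode followed by its data words and wait for completion. */
int sam_command(uint8_t opcode, const uint32_t *words, unsigned n)
{
    *SAM_CTRL = (uint8_t)(opcode | SAM_REQ);        /* present the command, raise request */

    for (unsigned i = 0; i < n; i++)
        *SAM_DATA_IN = words[i];                    /* stream the 32-bit arguments        */

    while ((*SAM_STATUS & SAM_REQ) == 0)            /* wait for SAM's acknowledge         */
        ;
    return (*SAM_STATUS & SAM_OK) ? 0 : -1;         /* e.g. was the configuration accepted? */
}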
The two valid opcodes to initiate commands for SAM are

• Loading secured IP: Opcode_load
• Generating challenge/response pairs: Opcode_CRP

Each opcode is followed by a variable number of data words that present additional arguments for the command. The command format is illustrated in Table 11.3. In the following, we discuss the operation of SAM by considering the activities triggered as a result of executing the loading secured IP command.
Table 11.3 SAM command format
Opcode        Operand
Opcode_load   C_IA, {Video ID, Hash(Video SW, Video ID), C_IP, Nonce}_{R_IA}, {length, Nonce, Video SW}_{R_IP}
Opcode_CRP    seed, number of pairs to generate
11.4.3 Loading Secured Video Configurations

After boot-up, SAM scans the CF card looking for valid opcodes. When presented with an Opcode_load, SAM directs the PUF to load the next 128 bits and compute the SAM-specific response value. The following Picoblaze code snippet illustrates how the microcode control works.

LOAD_IP:  CALL    RD2PUF           ; Challenge into PUF
          OUTPUT  one, PORT_PUF    ; Start PUF (Control cmd)
          CALL    WAIT_PUF         ; PUF computes Response
          OUTPUT  S0, PORT_AES_LD  ; Response to AES (Control cmd)
The microcode executed by the Picoblaze in Line (1) instructs incoming data from the OPB to be loaded into the PUF as a challenge. The RD2PUF microcode completes execution and returns after the 128-bit challenge has been read from the 32-bit OPB. Line (2) contains a Picoblaze OUTPUT instruction. As shown in Fig. 11.5, I/O instructions result in control signals being sent to the SAM data path modules. In the case of Line (2), the PUF is started to perform the challenge-to-response mapping. Once the PUF has been started, the Picoblaze calls the WAIT_PUF microcode, which monitors and waits for the PUF done signal to be asserted. After the done signal has been asserted, the next step is to instruct
the AES module to load the PUF-computed response as the AES key. Line (4) of the code illustrates the Picoblaze outputting the control signal that causes the AES module to load the PUF response as the key. Note that in addition to the simple control structure, performance is not impacted because the Picoblaze is not in the data path. Instead, data flows directly from HW block to HW block: the Picoblaze does not actually perform the reads or computations, it only modifies the internal control signals in SAM. The following block of Picoblaze instructions implements the parsing of the IA tag. A similar program is used to decrypt and validate the Video IP data.

RD_IA_TAG:  CALL    RD2AES            ; Next 128 bits to AES
            OUTPUT  one, PORT_AES_ST  ; Start AES (Control cmd)
            CALL    WAIT_AES          ; AES to decrypt
            CALL    SAVE_IA_BLOCK     ; Save result
            ADD     S1, 01            ; Move to next block in tag
            OUTPUT  one, AES_LDKEY    ; Reset AES (Control cmd)
            SUB     cnt, 01
            JUMP    NZ, RD_TTP_TAG    ; Continue onto next block
Since the platform-specific SAM is the only one that can generate the required responses to decrypt the data, the entire message can be saved to insecure storage. This IP-containing message can then be validated, loaded, and run by the off-line FPGA indefinitely. This authentication scheme is not limited to loading a single authenticated IP module. The FPGA can load an arbitrary number of Video IP components through SAM. Even while running, the system can be configured to load new Video IP modules and swap old ones out of the system. Next, we illustrate how we converted a VGA interface into a secure VGA interface.
11.4.4 Secure Video Peripheral

The secure video peripheral is a character-mode VGA controller with a resolution of 75 characters by 100 lines. Starting from a VGA reference design by Xilinx, we added runtime decryption capabilities based on a stream cipher. The message data written into the video RAM is still in encrypted form. While the data is being scanned out of the video RAM, it is decrypted with the key stream of a Trivium stream cipher. The Trivium stream cipher algorithm is currently being proposed in the ECRYPT eSTREAM framework as a new hardware stream cipher [10]. The Trivium key is controlled by SAM and requires successful authentication of a
hardware configuration.

Fig. 11.7 Video peripheral with streaming decryption

The decrypted character stream is converted into a pixel stream by means of a character-table RAM (Fig. 11.7). The synchronization between the Trivium stream cipher and the video scan process is illustrated in Fig. 11.8. The algorithm specifies 288 bits of state and can be easily parallelized to generate a key stream of appropriate width. We implemented a Trivium kernel that generates 1 key byte per clock cycle. The Trivium key stream is controlled using video synchronization signals, such that the initial key schedule is run during the vertical retrace period and each character position corresponds to a single key byte. Since each character line consists of eight scan lines, the Trivium key sequence cannot run in a strictly linear fashion. Rather, each set of 80 key bytes needs to be generated eight times (for eight consecutive scan lines) before the next set of 80 key bytes can be created. Our implementation of the stream cipher therefore uses two state registers, which allows the key stream to be restarted at the end of a video line.
Fig. 11.8 Synchronization of Trivium with video raster scan
From the user’s viewpoint, this synchronization process is handled in a transparent manner. After writing an encrypted message into the video RAM and authenticating a video hardware configuration using SAM to provide the Trivium key, the message becomes visible on-screen.
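As an illustration of this save/restore behaviour, the sketch below decrypts one character line during scan-out. It assumes a trivium_next_byte() keystream primitive and the 80-character, eight-scan-line geometry mentioned above; in the demonstrator this logic is implemented in hardware and driven directly by the VGA synchronization signals.

#include <stdint.h>

typedef struct { uint8_t s[36]; } trivium_state;      /* 288-bit cipher state          */

uint8_t trivium_next_byte(trivium_state *st);          /* assumed keystream primitive   */

/* Decrypt one character line of the frame buffer during scan-out.
 * The same 80 key bytes must be replayed for each of the 8 scan lines. */
void scan_character_line(trivium_state *st, const uint8_t enc[80], uint8_t out[80])
{
    trivium_state saved = *st;                 /* second state register: remember the line start */

    for (int scanline = 0; scanline < 8; scanline++) {
        *st = saved;                           /* restart the key stream for this scan line      */
        for (int col = 0; col < 80; col++)
            out[col] = enc[col] ^ trivium_next_byte(st);   /* one key byte per character         */
    }
    /* After the eighth pass, *st has advanced past this line's 80 key bytes,
       ready for the next character line. */
}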
11.4.5 Design Methodology

The implementation of SAM followed a hardware–software codesign approach, with the design flow for SAM shown in Fig. 11.9. The underlying AES, P_HW, OPB, and secure video interfaces were implemented in the GEZEL hardware description language (FDL) [11], while the interfaces between them are controlled by software programmed into the Picoblaze softcore. SAM's design used a simulation platform that includes an ARM instruction-set simulator (ISS) for the C bootcode, a Picoblaze ISS for the SAM microcode, and GEZEL HDL for the SAM and hardware components. The GEZEL environment supports cosimulation of the resulting setup, providing verification of the system before implementation.
Fig. 11.9 SAM-based secure video system design flow
The components developed in GEZEL were then converted into synthesizable VHDL code and mapped into a peripheral for a Xilinx Embedded Development Kit (EDK)-based system. GEZEL provides automatic translation of FDL into VHDL. In addition, the microprocessor C code was imported as a software component in the EDK setup, and the Picoblaze code was translated to synthesizable VHDL by the Picoblaze assembler. The coding complexity of the system is illustrated in Table 11.4.

Table 11.4 SAM coding complexity
Component             GEZEL    VHDL
SAM control/storage     320    1,397
Picoblaze               407    2,320
AES                     659    4,472
PUF                     213    1,497
Counter                  54      309
Register interface      125      778
Trivium                 146      774
Total                 1,924   11,547

The resulting software and VHDL were then imported into the EDK design and synthesized. Of note is that no VHDL simulations are involved in the design flow. Instead, SAM is verified in the GEZEL environment and filtered through the EDK system to produce a bitstream. The resulting bitstream contains a functional system that is downloaded to the V2P30 board. Sample video messages, including the video display software applet and the Trivium hardware configuration, were then converted into SAM-specific IP modules with the SAM enrollment protocol. The resulting packaged IP is copied onto a CF card and is utilized by the system at runtime.
11.5 Results

The secure messaging demonstrator was implemented on a XUP Virtex-II Pro development board. The demonstration setup in Fig. 11.10 shows two boards running an identical software applet and Trivium hardware-IP configuration. The board on the right displays the correct authentication between the software IP and the hardware platform. The left board in Fig. 11.10 displays the result of trying to utilize the IP on another, unauthorized system. Since this system fails authorization, the Trivium key from the video configuration cannot be correctly decrypted, and the display shows random data resulting from a mismatching Trivium key stream. Table 11.4 illustrates the coding complexity of SAM, and Table 11.5 highlights its design performance. As previously noted, the current implementation of SAM is a highly parallel one, resulting in a capacity of 1 million CRP pairs per second and a configuration speed of secure intellectual property of over 100 Mbits/s (Table 11.5).
Fig. 11.10 SAM demonstration setup showing a correctly (right) and an incorrectly authenticated message (left)
Table 11.5 SAM performance results
Property                              Performance
Clock frequency                       50 MHz
CRP generation
  CRP pairs/s                         1 Million
  CRP bits/s                          256 Million
IP authentication/configuration
  IP blocks/s                         805,451
  IP bits/s                           103,225,806
11.6 Conclusions

We have presented a demonstrator for the concept of end-point security in an embedded system. The SAM design supports the display of authenticated and confidential video messages in an architecture that may itself be untrusted. As information technology is rapidly evolving toward distributed and pervasive information access, the assumption of trusted and safe computing and data-processing platforms is no longer valid. Users, as well as content providers, will need more systematic support for authentication, confidentiality, and integrity of digital information. As we demonstrated, the SAM design focuses on providing protection at the end-points of embedded systems; these end-points also happen to be the places where external low-level attacks are most obvious.

There are several interesting open challenges for SAM-based designs. A first challenge is how to implement platform-unique functions cheaply and in large quantities. In our current implementation, we use a fixed-key block cipher to simulate the platform-unique function. Recent research has shown that it is possible to use the properties of the underlying silicon itself to create such a function [12].
A second challenge is how to increase the amount of hardware flexibility in the chain-of-trust. In the secure video-message demonstrator, a secure hardware configuration is just the programming of a decryption key into a fixed video decoder. However, FPGA technology has the potential to go significantly beyond parameter setting and to treat a reconfigurable hardware plug-in as a secure third-party module. This offers the potential of building a physical chain-of-trust into an embedded device. For example, a video-content provider can deploy specific decoding hardware into the users' video player and rely on the chain-of-trust to make sure that the intellectual property (the video content as well as the player) remains protected.

A third challenge is the design and application of end-point secure interfaces. As we could see from the SAM design, the creation of an end-point secure interface requires tight integration of a cryptographic hardware primitive with an existing peripheral. Similar end-point secure interfaces may be created for audio, disks and memory, sensor interfaces, and so on. Efficient end-point security will require novel and creative ways to integrate cryptographic hardware primitives.

Finally, there are also interesting system-level challenges left, such as the design of a challenge-response distribution and management scheme. Such a scheme needs to implement a scalable interaction between the chip manufacturer, the system integrator, and the content provider.
References

1. A. Bylund, "Apple's DRM cracked again," http://arstechnica.com/news.ars/post/20060830-7619.html, 2006.
2. A. Huang, "Keeping secrets in hardware: the Microsoft Xbox (TM) case study," Proceedings of the Workshop on Cryptographic Hardware and Embedded Systems 2002 (CHES 2002).
3. E. Charbon, I. Tounoglu, "On intellectual property protection," Proceedings 2001 IEEE Custom Integrated Circuits Conference, 517–523.
4. W. Shi, H. S. Lee, R. M. Yoo, A. Boldyreva, "A digital rights enabled graphics processing system," Proceedings of the 21st ACM SIGGRAPH Symposium on Graphics Hardware, 2006.
5. R. Thibadeau, "Trusted computing for disk drives and other peripherals," IEEE Security and Privacy Magazine, 4(5):26–33, September 2006.
6. I. Arce, "Bad peripherals," IEEE Security and Privacy Magazine, 3(1):70–73, 2005.
7. R. Anderson, "Security Engineering," Wiley, 2001.
8. E. Gallery, "An overview of trusted computing technology," in C. Mitchell, editor, "Trusted Computing," IEE Professional Applications of Computing Series, IEE Press, 2005.
9. E. Simpson, P. Schaumont, "Offline hardware/software authentication for reconfigurable platforms," Proceedings of the Cryptographic Hardware and Embedded Systems Workshop (CHES 2006), 311–323.
10. C. De Cannière, B. Preneel, "Trivium specifications," ECRYPT eSTREAM Phase 3, 2006.
11. GEZEL Homepage, http://rijndael.ece.vt.edu/gezel2
12. D. Lim, J. Lee, B. Gassend, E. Suh, M. van Dijk, S. Devadas, "Extracting secret keys from integrated circuits," IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 13(10):1200–1205, October 2005.
Chapter 12
From Secure Memories to Smart Card Security

Helena Handschuh∗ and Elena Trichina∗∗
12.1 Introduction

All computer-based systems contain memory where information is stored while it awaits processing by the computer's CPU. Memory provides the backbone of the systems in which it is used: it allows them to remember what they are, what they are supposed to do and how, by keeping operating instructions and system data in memory that retains information when the power supply is switched off. The system code and parameters can be changed during the system's operation, and by storing these changes the system keeps track of its own experience. The type of memory that keeps its content without an energy supply is called non-volatile memory (NVM). Using standard digital technology, non-volatile memory can be implemented by writing data permanently during manufacturing (mask-programmed ROM) or by blowing fusible links afterwards, thus permanently changing the cell content (programmable ROM). However, such memory cannot be re-programmed in system; for this an electrically erasable programmable memory (EEPROM) or flash is used.

Memory is also the bloodstream of computer systems: during operation, code and data are accessed on-the-fly, often in no particular order, by the CPU, and ever-changing intermediate results are produced and used again; for this, random access memory (RAM) capable of very fast reads and writes at fine granularity is required. Usually, RAM is volatile, i.e. its content is lost when the power is switched off. Two typical types of volatile memories are static RAM (SRAM), in which information is stored by setting a state that is retained as long as power is applied, and dynamic RAM (DRAM), in which information is stored by charging a capacitor which must be refreshed periodically as the charge leaks away.

H. Handschuh (B) Katholieke Universiteit Leuven, ESAT/COSIC, Kasteelpark Arenberg 10, B-3001 Leuven-Heverlee, e-mail: [email protected]
∗ This work was performed while the author was at Spansion, France.
∗∗ This work was performed while the author was at Spansion, Germany.
Ideally, what all system designers want is a non-volatile memory which could be electrically programmed and erased in system, offering at the same time great flexibility (i.e. random access with fine granularity for Read and Write operations), short access times, low power consumption, high density/low cost and excellent reliability. While such "ideal" memory is yet to be invented, flash memory seems to be the next best thing as it satisfies many of these requirements to a certain degree! Flash memory represents the state-of-the-art of the semiconductor industry and is positioned in the area that crosses over applications of many different types of memories, thus making it an interesting medium for new products. In the next section we provide a short overview of flash memories and their role in system design.
12.2 Flash Memory Technology and Architecture of Flash Devices

The flash device architecture is the result of (a) cell architecture; (b) cell functionality, i.e. electrical conditions for Read, Program and Erase operations; (c) array organisation that is derived from cell functionality and (d) target device performance and flexibility, which are usually derived from application requirements. In what follows we consider all these aspects in turn.
12.2.1 Memory Cell Architecture

The semiconductor device used in almost all memories is the field-effect transistor, specifically the metal-oxide-semiconductor FET (MOSFET). It is formed in a p-type silicon substrate into which two regions of n+ material are formed to create the source and drain regions. A thin oxide layer is grown on top by exposure to O2, onto which a polysilicon gate is formed through chemical vapour deposition. Typically, the source and substrate are held at a reference potential, and the gate (VGS) and drain (VDS) voltages are measured with respect to this reference potential. In the neutral state, there is no conductive path between the drain and the source regions. When a positive voltage is applied to both the gate and the drain, a channel begins to form between the source and the drain. When the gate voltage VGS is sufficiently large, the channel is completely formed and electrons flow from the source to the drain, creating a current (IDS). The voltage at which the channel forms is called the threshold voltage (of the transistor), VT. Increasing the control gate voltage beyond VT results in an increase in IDS. The basic non-volatile memory cell depicted in Fig. 12.1 (a) is very similar to the basic MOSFET transistor cell, with the addition of a "floating gate", which resides between the conducting channel of the device and the traditional (control) gate and is insulated from the latter by an oxide–nitride–oxide (ONO) layer.

Fig. 12.1 Typical NOR array organisation

When
completely surrounded by dielectric, the floating gate traps the electric charge transferred to it and keeps this charge even when the power is switched off. A cell with charge trapped in its floating gate is usually interpreted as a logical "1", while an erased cell is interpreted as a logical "0". The methods for evaluating whether the cell is charged, and for forcing charge onto and removing it from the floating gate, constitute the Read, Program and Erase operations.
12.2.2 Cell Functionality (Program, Erase and Read Operations)

The process of forcing electrons to accumulate in a floating gate is called programming and is typically accomplished by channel hot electron (CHE) injection. The principle of CHE, as well as of other programming mechanisms, is based on the fundamental law that like charges repel each other and opposite charges attract each other. Thus, to attract free electrons to a floating gate we need to place the latter in a positive electric field. This can be done by placing a large positive voltage on the control gate (VGS >> VT), thus inverting the channel from p-type to n-type. However, electrons need to gain sufficient energy to jump over the thin oxide layer; this energy can be supplied by increasing the drain bias (VDS). When VDS is sufficiently high, the electric field between the drain and the source increases, electrons become "hot" and have enough energy to overcome the energy barrier, jump over the thin oxide and get trapped in the floating gate.
As the floating gate becomes more and more negative due to stored electrons, it starts repelling other electrons being accelerated through the thin oxide. In other words, electrons residing in a floating gate increase the apparent threshold voltage VT of the cell, thus decreasing the IDS current, and the transistor is gradually switched off. Cell programming is therefore self-limiting in nature.

The greatest advantage of flash memory is that its content can be re-written or re-programmed. For this, old data must first be erased, which means that the charge stored previously in a floating gate of a cell must be removed. One way to remove the charge from a floating gate is to invert the relationship between the gate and the source, e.g. by keeping the gate at a reference potential and biasing the source with a high voltage while the drain is left floating. This mechanism, called source erase, was used by the first generation of flash devices. However, due to the very high voltages, the working conditions were very close to junction breakdown and required a dual-voltage supply. The idea of dividing the voltage drop between the source and the floating gate, by applying a negative voltage to the control gate and lowering the source bias to the external supply voltage, was a great improvement because it allowed a single-voltage supply while using internal charge pumps to generate all other required voltages. This scheme is called source erase with negative voltage because, like the plain source erase, it extracts electrons from the small overlap area between the floating gate and the source. In modern flash devices Erase is based on a so-called channel erase scheme, which divides the erasing potential between the negative control gate and a positive bulk of the array. For this, a p-well is created by isolating a chunk of the p-doped substrate under the gate and surrounding it with an n-doped well; this makes it possible to apply a small voltage during Erase not only to the source but to the p-well and the drain too, thus moving the tunnelling region from the source to the whole cell channel. In other words, electrons are extracted from the floating gate along the whole channel, without parasitic contributions from the source junction. This reduces the erase current enormously, which lowers power consumption, improves device reliability and allows scaling down by decreasing the tunnel oxide thickness.

How can we now determine what data – 1 or 0 – is stored in a cell? This is done by reading the current driven by the cell at a fixed gate bias; the current is measured with respect to a ground potential. The programmed and erased cells exhibit the same transconductance (gain) curves in the current–voltage plane, as shown in Fig. 12.1 (b). However, when charge is stored in the floating gate, as is the case with a programmed cell, the apparent threshold voltage of the transistor increases because "like charges repel each other", and the curve is shifted by the quantity ΔVT, which is proportional to the stored electron charge. Hence, once the threshold voltage shift ΔVT is determined, it is possible to set the reading voltage in such a way that the current of the erased cell is high (tens of microamperes), while the current of the programmed cell is zero. The read voltage Vread is selected to lie between the programmed (Vprogram) and erased (Verase) threshold voltages. If current flows at the read voltage, then the cell is read as erased.
Thus, to read a value stored in a floating gate, we must determine the relative threshold voltage of a cell and compare it to a standard reference. For this, we need a means of measuring current and some reference cells for comparison. The reference cells dictate the boundaries within which the cell is considered programmed or erased, by relating the VT of a given cell to that dictated by an appropriate reference (e.g. Vread, Vprogram-verify, Verase). The process of making this comparison is called sensing and is accomplished by sense amplifiers, which are special circuits that can compare two currents. Sense amplifiers have two inputs: the drain of every cell eventually connects to a sense amplifier, and every reference also connects to the amplifier.
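As a toy model of the sensing decision (not of any real sense amplifier circuit), the comparison can be written as:

#include <stdbool.h>

/* A cell that conducts at V_read is erased ("0"); a programmed cell ("1")
 * has a raised threshold voltage and drives (almost) no current. */
bool sense_bit(double cell_current_uA, double reference_current_uA)
{
    return cell_current_uA < reference_current_uA;   /* true = programmed = logical "1" */
}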
12.2.3 Array Organisation

The cells are arranged in a regular pattern consisting of rows and columns, as schematically illustrated in Fig. 12.1. The gates of all the cells in a row are connected through a polysilicon layer called a wordline, while the drains of all the cells in a column are connected through a metal 1 layer called a bitline. The bitlines and wordlines provide a means for placing the voltages appropriate for Read and Program operations on each cell individually or in groups. Random access can be achieved by connecting cells to a bitline in parallel, as is typically done in NOR flash. By arranging memory transistors in series, 8, 16 or 32 cells at a time, as in NAND flash, a higher density can be achieved at the cost of flexibility.

An individual cell can be addressed through the row and column decoders. The externally provided address, which is an n-bit number, must be converted into 2^n select signals of which only one is active at a time, to select a single row and a single column inside the array. This is done by standard combinational circuitry in a pre-decoding stage; then specially designed row and column decoders bias the gate and the drain of the selected cell as required by an operation. A number of output pins (8, 16 or 32) dictates how many bits can be read at once. Due to physical limitations of the package the number of pins is always limited; therefore many columns must converge to the same output, and reading circuitry must be attached to it in order to evaluate the cells' contents. The reading circuitry comprises current-to-voltage converters which perform the conversion for both the array and the reference cells (where transistors are used as loads), and finally the two voltages are compared by a differential amplifier. The result of the sensing (which is a voltage) passes through a multiplexer to the output buffers, which causes the external data pins to change their state accordingly.

The organisation of the source lines is very important for the granularity of the Erase operation: through the source line, the voltage necessary for an Erase operation can be forced onto all the cells connected to this line. An array where the source line is common to all the cells is "bulk erasable", i.e. the whole array is erased simultaneously.
To provide better flexibility, an array can be physically divided into a number of sectors by connecting all cells in the rows and columns of a sector to a single source line and by using a dedicated source switch to select a sector. The sectors can be of equal or different sizes. While in theory there is no limit to the sector dimension, in practice many considerations must be taken into account when designing the sector organisation of the array, such as double or single supply voltage, area/cost/performance of the device, and the impact on Program/Erase cycling endurance. If a VPP pin is present, sectors are usually divided by columns, because the cells of different sectors share the same wordline but have different bitlines and source lines. If only a VCC pin is present in the device, sectors are arranged by rows, and a divided bitline scheme is implemented, where the main bitline is common to all sectors while the local bitlines are assigned to each sector, as Fig. 12.2 illustrates. Modern high-density devices usually combine both row- and column-based organisation.
Fig. 12.2 Hierarchical column sector organisation
12.2.4 Flash Memory User Interface

To control flash memory, pins such as chip enable (CE#), output enable (OE#), write enable (WE#), reset/power down (RP#) and some others are required to provide control signals from the host. Explaining the meaning of these control pins helps in understanding flash functionality. The CE# activates input buffers, memory
control logic and sense amplifiers, and it can also be used for writing commands into command registers. When high, the CE# unselects the device, putting it in a stand-by mode. The WE# controls writing to the Command, Address and Data latches and registers when the CE# pin is low. Addresses and data are latched on either the rising or the falling edge of the CE#. The OE# gates the output through the data buffers during a Read operation. The RP# pin provides the hardware reset function and, in addition, protects against undesired command writes due to invalid bus conditions and system noise that may occur during power-on. The reset pulse width must be large enough to avoid undesired resets caused by spikes on the pin. By applying proper voltages on the control pins, it is possible to perform some simple operations such as Stand-by, Output Disable and Reset. More complex operations like Read, Program and Erase require rather complex control sequences. In the first generation of flash devices these command sequences were driven by an external microprocessor, but nowadays complex algorithms are embedded in the flash device in the form of either a finite state machine or a microcontroller, and a proper command sequence on the pins merely activates these algorithms. A command is made up of a sequence of binary codes given to the device through the I/O pins and the address pins, with specified timing on the control pins such as CE# and WE#, which act as a clock for the command interpreter.
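As an example of such a multi-cycle command, the sketch below issues a byte-program command through memory-mapped writes, using the unlock addresses and command codes found on many parallel NOR devices. The base address, the x8 bus width and the exact codes are assumptions for illustration; a real driver must follow the command definitions and the status-polling scheme of the specific datasheet.

#include <stdint.h>

/* Base address where the NOR flash is memory-mapped (board-specific assumption). */
#define FLASH_BASE ((volatile uint8_t *)0x08000000u)

/* Program one byte using a common x8 NOR command sequence (vendor-dependent). */
void flash_program_byte(uint32_t offset, uint8_t value)
{
    FLASH_BASE[0x555] = 0xAA;          /* first unlock cycle                     */
    FLASH_BASE[0x2AA] = 0x55;          /* second unlock cycle                    */
    FLASH_BASE[0x555] = 0xA0;          /* program setup command                  */
    FLASH_BASE[offset] = value;        /* address and data for the Program pulse */

    /* The embedded algorithm now runs; here we simply wait until the programmed
     * value reads back (real drivers poll the device's status/data-polling bits). */
    while (FLASH_BASE[offset] != value)
        ;
}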
12.3 General Architecture Scheme

We have already mentioned that, apart from an array of memory cells, any modern flash device contains analogue blocks such as charge pumps, regulators, charge amplifiers, I/V converters and comparators. To communicate with the outside world, there are I/O and (for NOR flash) address buffers, as well as the control buffer connected to the control pins, i.e. CE#, WE#, OE# and PWD#. An address transition detection (ATD) circuit detects any change on the memory array inputs and subsequently generates a signal activating the decoder circuits, which bias the rows and columns with appropriate signals and start the Read operation. As other operations are more complex than Read, in order to optimise and simplify the interface with the host, an embedded logic controller handles the command interface: after checking the correctness of the control signals, it decodes the command and commits its execution to specialised logic circuits such as Program/Erase controllers, which execute the corresponding complex flash operation algorithms. These controllers can be implemented either as embedded finite state machines or as an embedded microcontroller. In the first case, the finite state machine flow evolves step by step according to the current logic state and the value of the input signals, driving in every step a proper set of actions/control signals able to accomplish the task. The latter case is becoming commonplace, with both a microcontroller and its code memory placed inside the flash. While until recently it was essential to define the instruction set carefully so that it would be possible to execute complex flash algorithms and yet keep the
embedded microcontroller small, there is an increasing tendency nowadays towards embedding general-purpose microcontrollers inside the flash, such as an 8-bit 8051. While the algorithms are the core of the flash controller, the latter comprises other digital and even analogue circuits, such as an address counter, an oscillator, a time counter, a cycle counter, end-of-count logic and auxiliary combinatorial logic.
12.4 Secure Memories

The explosive growth of the flash memory market during the last decade has been driven by different types of portable electronic equipment, such as mobile PCs, MP3 audio players, digital cameras, palmtops and, in particular, mobile phones. Security features offered by memory manufacturers are continuously evolving with respect to the security requirements of these markets. We distinguish three types of security mechanisms: (a) standard protection against accidental code and data modification, (b) protection against malicious code and data modification and (c) protection against unauthorised access to code and data and against cloning. Typically, these mechanisms are built one on top of the other and are implemented by means of special hardware, firmware and, increasingly often, software.

(a) Standard Protection Against Accidental Code/Data Modification

Accidental code or data modification can originate from system noise or sudden voltage glitches and result in inadvertent Write operations or values being erroneously read from a cell or sector. Therefore one of the first security mechanisms implemented by flash manufacturers has been to protect the memory against such accidental failures. Each such mechanism is usually implemented as a combination of hardware and low-level software. For example, at power-up, no Write command is accepted by the memory device, which is in a read-only state by default. Similarly, no Write cycle is accepted if the voltage supply is lower than the lock-out voltage. At any given time, the complete control of a Write operation is done via logical means, for example, by allowing a Write cycle to occur only if the CE# and WE# pins are low while the OE# pin is high. In order to protect against voltage glitches, no Write cycle will be initiated if pulses of less than 5 ns are received by these three pins. Also, 4–6 bus cycles are required for every Write and Erase sequence. All this allows the memory to be efficiently protected against any type of accidental Read, Write or Erase operation.

(b) Protection Against Malicious Code and Data Modification

One of the new features of flash-based applications is that the whole operating system and the customer applications reside in flash, including the code the host CPU tries to boot from. Therefore these applications are particularly vulnerable to attacks in which one reprograms the flash and simply circumvents whatever security checks,
conditional accesses or system security features such applications contain. A well-known example of such an attack is the FunCard attack on early Pay-TV systems. Thus flash manufacturers had to propose a solution which would protect applications against subsequent malicious alterations of the memory contents: one-time programmability (OTP).

The first implementation of such an OTP mechanism was done in hardware: after the device is programmed, the WE# pin is set permanently low, and manual access to this pin is prevented by using a BGA package. The BGA package also prevents soldering another flash with malicious code on top of the genuine one. But as a matter of fact this WE# pin prevents the entire memory array from being programmed a second time, which is not suitable for applications which may potentially need to update their operating system over the air or when the device is fielded. Nevertheless, part of the operating system still needs to be protected from malicious modification, namely the boot loader sequence. This sequence is normally stored at the top or bottom address sector in flash, which allows the host to jump to this hard-coded address at power-on. It contains secure booting instructions and in particular a sequence of operations which checks the integrity of the whole operating system and application code. Therefore, only this small portion of the flash memory actually requires hard-wired OTP protection. The next idea is thus to introduce a special Write Protection pin (WP#) to physically lock out only a few of those sectors. This pin is set permanently low after the device is programmed and is again protected from manual access by BGA packaging.

With the growing size of the memory and the growing complexity of the system and application software, the simple jump to top or bottom flash addresses has become obsolete for high-density flash devices. Gradually a new requirement has appeared for microcontrollers to be allowed to jump directly into the middle of the address space and for the flash device to be able to protect the integrity of any sector or range of sectors within the flash memory array. Modern flash chips have therefore introduced the possibility of protecting various sectors from malicious programming and erasing by more flexible, software-programmable mechanisms. Usually the detailed implementation of these soft sector protection mechanisms is not publicly disclosed, but it relies on the following principles: each sector of the device is associated with a special OTP protection bit. This bit can only be accessed by the microcontroller or via the finite state machine embedded in the flash and is set by a secret sequence of commands. Once this bit is set, it cannot be unset and the associated sector is deemed locked, whereas it is unlocked otherwise. The reason why the bit cannot be unset is that no such command is implemented by the microcontroller or the finite state machine. If the host tries to write or erase a locked sector, i.e. one for which the OTP bit is set, the microcontroller or FSM checks this OTP bit and reports an error.

Another example of secure OTP memory is the concept of out-of-address-map sectors such as Spansion's secure silicon area (SecSi) sectors. These sectors are simply not visible from the main memory address map and typically contain 256 bytes of one-time programmable memory each. Their contents can only be seen and accessed in sector 0 when special commands have been issued and are interpreted
by the embedded state machine, resulting in the device entering a special mode. Such SecSi sectors contain unique chip identification numbers or random numbers, which can be personalised into the memory either at the manufacturing stage or programmed once and for all into that memory by software, using a special secret SecSi entry command which takes many cycles.
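Functionally, the check performed by the embedded FSM or microcontroller for the soft sector protection described above can be summarised by the following sketch. The data structures and names are invented for illustration, since the detailed vendor implementations are not publicly disclosed.

#include <stdbool.h>

#define NUM_SECTORS 64                 /* example array size */

/* One OTP protection bit per sector, held in non-volatile storage.
 * Once set it can never be cleared: no clearing command exists. */
static bool otp_locked[NUM_SECTORS];

/* Called by the embedded FSM/microcontroller before any Program or Erase. */
int check_modify_allowed(unsigned sector)
{
    if (sector >= NUM_SECTORS)
        return -1;
    if (otp_locked[sector])
        return -1;            /* report an error to the host, leave the array untouched */
    return 0;
}

/* The only defined transition: lock a sector, triggered by the secret command sequence. */
void otp_lock_sector(unsigned sector)
{
    if (sector < NUM_SECTORS)
        otp_locked[sector] = true;
}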
(c) Protection Against Unauthorised Access to Code/Data and Against Cloning

In the previous paragraph, we explained how OTP bits can be set to permanently lock certain flash sectors against being erased and re-programmed. This feature can also be extended to become software programmable and reversible, which makes it possible both to lock and to unlock sectors. For this, a programmable protection bit vector is associated with the memory array, as schematically shown in Fig. 12.3. Each bit in the vector defines whether the associated sector is locked or unlocked. Individual bits of this vector can be programmed by a special sequence of commands and erased by another special sequence of commands called "clear access bit", which therefore makes them software programmable. The status of the protection bit vector is kept in non-volatile memory and restored at each power-on. Of course this programmable protection bit vector can itself be locked using a special OTP lock command. In this case one first defines the desired memory configuration by setting and erasing the protection bit vector as many times as needed; once in the lifetime of the chip, one then locks the configuration by setting the special OTP bit of that vector. This step is irreversible. In order to prevent this OTP bit from being accidentally set, a long sequence of commands is used which cannot happen by chance.
Fig. 12.3 Persistent sector protection from malicious code and data alteration by a combination of hardware and software mechanisms
Finally, an even more flexible feature provides conditional access to the protection bit vector in order to protect it from unauthorised access and update. This is meant for applications which need to securely lock some sectors but which do not want to lock the protection bit vector forever, thus leaving an authorised party the option of re-programming the whole memory array. Such a feature is implemented using a password protection mechanism. The password sector protection mechanism makes it possible to recover from a locked memory configuration to an unlocked one by presenting the correct 64-bit password. The password itself is stored in an out-of-address-map space, similar to the SecSi sectors. It is programmed by a special sequence of commands and then locked via an OTP lock bit. It can only be read by a special system command and cannot be accessed by the user or by an application. This mechanism can be implemented not only for Program and Erase but also for Read protection. Additional software features allow system software to protect various sectors against inadvertent and malicious Write and Erase operations by dynamically setting sector lock bits for a given session at each power-on and erasing them at each power-off. This results in sensitive sectors being protected against writing and erasing while the chip is operating, which can be useful against attacks in which someone tries to corrupt the given sectors at run-time.
Fig. 12.4 Advanced sector protection mechanisms
Fig. 12.4 illustrates how an extremely flexible and robust secure memory architecture can be achieved by a combination of dynamic and static software protection mechanisms together with hardware-based methods. As one can see, the WP# pin permanently prevents the outermost sector from any alteration. Typically this sector will have been programmed initially with a boot loader, after which the WP# pin protection effectively turns it into an OTP area. The rest of the memory array is divided into zones where some sectors are "locked" by the persistent protection vector and some are left "unlocked", thus allowing any update operation. However, by setting a corresponding bit in the dynamic protection vector, system or application software may prevent Erase and Program operations on some sectors at run-time. The memory configuration can nevertheless be changed by authorised software upon presentation of a password that "unlocks" the persistent protection vector, thus allowing bits in this vector to be erased and programmed anew. This feature can be used, for example, for installing application software upgrades or patches.

These are the basic ideas behind secure memories. But as integration is a strong trend in semiconductors, as a means to address the demands for higher system performance within a reduced board space, flash memory has evolved further and manufacturers are turning towards full systems-on-chip and high-density flash-based smart cards. In the next sections, we briefly review the history of memory cards and of traditional smart cards; then, we address the most advanced concept of high-density smart cards and explain the many security features and challenges for such cards.
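Putting the mechanisms of Fig. 12.4 together, the decision of whether a Program or Erase request may proceed can be sketched as follows. The structure and names are ours; a real device implements this logic in hardware and firmware rather than in C.

#include <stdbool.h>
#include <string.h>

#define NUM_SECTORS 64                     /* example array size */

struct protection_state {
    bool wp_hard_locked[NUM_SECTORS];      /* sectors locked via the WP# pin (e.g. boot loader)   */
    bool persistent[NUM_SECTORS];          /* persistent protection vector, non-volatile          */
    bool dynamic[NUM_SECTORS];             /* dynamic protection vector, rebuilt at each power-on */
    bool persistent_otp_locked;            /* OTP lock: persistent vector frozen forever          */
    unsigned char password[8];             /* 64-bit password, never readable by applications    */
};

/* May the host Program or Erase this sector right now? */
bool sector_modifiable(const struct protection_state *p, unsigned s)
{
    if (s >= NUM_SECTORS)       return false;
    if (p->wp_hard_locked[s])   return false;   /* hardware OTP area                */
    if (p->persistent[s])       return false;   /* persistently locked              */
    if (p->dynamic[s])          return false;   /* locked for the current session   */
    return true;
}

/* Recover a locked configuration: unless the vector was OTP-locked forever,
 * an authorised party presenting the correct password may rewrite it.
 * (A real implementation would compare the password in constant time.) */
bool unlock_persistent_vector(const struct protection_state *p, const unsigned char pw[8])
{
    if (p->persistent_otp_locked)
        return false;
    return memcmp(pw, p->password, 8) == 0;
}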
12.5 From Secure Memories to Smart Cards

Secure memories offer a limited number of security features to the application provider. Adding more security functionality to a secure memory means either adding some hardwired security logic, such as can be found in memory smart cards, or going even further by adding a small integrated circuit to the system which makes it possible to perform computations and to implement cryptographic algorithms. Memory cards were first introduced in 1983 in the French payphone system and consisted of a memory chip embedded into a plastic body. They simply included hardwired authentication logic which, when the card was inserted into the payphone, would perform a challenge–response protocol and securely decrement the available balance of phone credit units on the card. Later, a microprocessor was added and the card became known as a microprocessor card. Today these cards are almost as powerful as some of the first personal home computers. They are used for advanced security applications such as the EMV payment scheme defined by Europay, Mastercard and VISA, or as subscriber identity modules (SIM) for the wireless GSM and UMTS telecommunication systems. The term "smart card" collectively denotes both memory cards and microprocessor cards; these comply with a number of standards, namely the ISO/IEC
7816 series. These standards have several parts which define the physical shape of the plastic card, the electrical contacts on the plastic body, the electrical characteristics of the chip, the communication protocols including the command format, and the robustness and functionality of the card.

What makes a smart card secure from a functional point of view is the fact that the operating system provides a closed environment: it accepts only a predetermined set of commands from the outside world, returns only a predefined set of responses, and is organised around a secure file system. This means that the card has different files, directories and functionalities for which individual access rights are defined. Such access rights can, for example, require that a PIN code (Personal Identification Number) be presented before a given file or functionality is accessed; another example is the fact that secret key files used for cryptographic computations can never be accessed by a user, only by the operating system of the chip itself, under given restricted conditions. The set of all security rules and access rights implemented by the operating system is defined in the security policy of the application.

Looking more closely at the architecture of the integrated circuit, one finds the traditional elements of a system-on-chip. First of all, there is a CPU. On the first generations of smart cards this would be an 8-bit controller such as a typical 8051 microcontroller. Today, high-end cards embed up to a 32-bit controller, for example an ARM controller, which allows for quite advanced embedded software implementations. Second, there are a number of different embedded memory types in a smart card. These include ROM (Read-Only Memory), which can only be written once at the manufacturing stage and is typically used to hold the operating system of the card, and RAM (Random Access Memory), which is used for temporary variables and computations and is erased every time the card is reset. There is also EEPROM (Electrically Erasable Programmable Read-Only Memory), which allows non-volatile information such as application-specific data or secret keys to be stored and overwritten. EEPROM does not lose its contents when the chip stops being powered by the smart card reader; on the other hand, its contents can be overwritten by the application as soon as the card is powered up again. A last type of memory is flash memory, which has recently appeared on microprocessor cards and is gradually replacing EEPROM and ROM. Flash memory is also a non-volatile type of memory but has different Write and Erase characteristics from EEPROM: it is typically written in bytes or pages and erased by full sectors, whereas EEPROM can be accessed, written and erased at the byte level.

Very high-end smart cards, also known as high-density cards, now offer megabytes of flash memory to hold the entire operating system, application-specific data and all types of user data (pictures, movies and phone directories). Compared to these, traditional smart cards have a limited storage capacity of 512 kbytes of ROM and 64 kbytes of EEPROM only. The available RAM size for high-end smart cards is in the order of 5 kbytes. Other characteristics of traditional smart cards include a nominal clock frequency of 3.57 MHz, which can be boosted to 10 or 20 MHz by an internal PLL, and an input/output interface running at 9600 bit/s (up to 115 kbit/s on contactless radio-frequency cards).
A smart card generally also includes one or several crypto-coprocessors, such as an AES, a DES or an RSA coprocessor, which speed up heavy cryptographic computations for on-the-fly encryption or for computations involving very large integers (RSA requires modular exponentiations of 1024-bit integers). All these elements have to fit into less than 25 mm².
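The closed command interface mentioned above is standardised in the ISO/IEC 7816-4 part of the series as APDUs (application protocol data units). As a hedged illustration, the C sketch below builds a VERIFY command APDU that presents a PIN to the card; the CLA value, the PIN reference in P2 and the ASCII encoding of the PIN are application-specific choices made for this example, and the card answers with a two-byte status word such as 9000 on success.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Build an ISO 7816-4 case-3 command APDU: header + Lc + command data. */
size_t build_verify_apdu(uint8_t *buf, const uint8_t *pin, uint8_t pin_len)
{
    size_t n = 0;
    buf[n++] = 0x00;      /* CLA: interindustry class (application-specific) */
    buf[n++] = 0x20;      /* INS: VERIFY                                     */
    buf[n++] = 0x00;      /* P1                                              */
    buf[n++] = 0x01;      /* P2: PIN reference number (hypothetical)         */
    buf[n++] = pin_len;   /* Lc: length of the command data                  */
    memcpy(buf + n, pin, pin_len);
    n += pin_len;
    return n;             /* card replies with a status word, e.g. 9000      */
}

int main(void)
{
    uint8_t apdu[64];
    const uint8_t pin[4] = { '1', '2', '3', '4' };
    size_t len = build_verify_apdu(apdu, pin, sizeof pin);

    for (size_t i = 0; i < len; i++)
        printf("%02X ", apdu[i]);
    printf("\n");         /* 00 20 00 01 04 31 32 33 34 */
    return 0;
}
```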
12.6 High-Density Cards

Compared to the traditional smart cards described in the previous section, high-density cards have a number of new features; see Table 12.1. First of all, EEPROM and ROM are replaced by non-volatile high-density flash memory manufactured in, for example, 90 nm or even smaller 45 nm technology. This makes it possible to embed up to a few hundred megabytes of such memory in the new cards and to remove the space limitations that prevent current operating systems, application data or user data from fitting into traditional cards. This of course comes at the cost of increasing the die size from the currently standardised maximum of 25 mm² to somewhere below 80 mm²; nevertheless, the form factor stays the same. RAM also increases to 64 Kbytes on the latest high-density cards, which allows very efficient flash file system management algorithms to be put in place.

Table 12.1 Traditional smart cards compared to high-density cards

                                 Typical card                 High-density card
Proprietary operating system     ROM (512 KB)                 CodeFlash (1 MB Eclipse™ Flash)
Application data, secret keys    EEPROM (256 KB)              Emulated EEPROM (1 MB Eclipse™ Flash)
RAM                              5 KB                         24, 48 and 64 KB
User data                        EEPROM (256 KB)              4–256 MB Eclipse™ Flash
Interface                        ISO, 9600 bit/s              + USB, MMC high-speed protocols, 12–26 Mbit/s
Die size                         25 mm², 0.13 μm technology   Less than 80 mm², 45 nm technology
Of course, addressing 256 Mbytes of flash memory can no longer be done via the traditional ISO 7816 interface, since it runs at only 9600 bit/s: transferring the full contents would take hours. Therefore, full-speed USB and MMC interfaces running at 12 and 26 Mbit/s, respectively, have been added to the system. Future, even faster interfaces will aim to match the high-speed USB rate of 480 Mbit/s. A rough calculation below makes the difference concrete.
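A back-of-the-envelope calculation (assuming 256 MB = 256 × 2^20 bytes) illustrates why the legacy interface is unusable for bulk data:

```latex
t_{\text{ISO 7816}} = \frac{256 \cdot 2^{20} \cdot 8\,\text{bit}}{9600\,\text{bit/s}}
  \approx 2.2 \times 10^{5}\,\text{s} \approx 62\,\text{h},
\qquad
t_{\text{MMC}} = \frac{256 \cdot 2^{20} \cdot 8\,\text{bit}}{26 \times 10^{6}\,\text{bit/s}}
  \approx 83\,\text{s}.
```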
12.6.1 HD-SIM as an Application Example

As an example of such high-density cards, we consider the concept of a high-density SIM card, i.e. a high-density subscriber identity module for the GSM or
UMTS wireless telecommunication networks. A SIM module typically belongs to the issuer, in this case the telecommunications operator. It serves as a secure key holder for the customer-specific secret key used in the network authentication and key agreement protocol executed between the card (via the handset) and the closest available network base station. Each such key is customer-specific and allows the operator to recognise which user is trying to access the network; this represents the only authenticated, and thus billable (i.e. revenue-generating), link between the customer and the operator. This is one reason why operators are highly concerned with security issues: weaknesses may result in fraudulent attempts to access and use the network for free or at the expense of another (genuine) customer. Once such a link is established, in addition to simply carrying voice to another user of the system, the operator can offer many other services to the customer, including short message services (SMS), multimedia message services (MMS) with pictures or animations, or plain data streams over the network (Internet connections, e-mail and WAP servers). Protecting the secret customer key is therefore of the utmost importance.

But the SIM module gradually plays a more important role. In addition to the secret customer key, it also holds a proprietary operating system and uses proprietary cryptographic algorithms for network authentication and key agreement. Mutual authentication between the network and the card, as well as key agreement, is performed either using the example cryptographic algorithm suite called MILENAGE, designed by the GSM association, or using the above-mentioned proprietary versions of this algorithm suite. Once mutual authentication has been achieved, the SIM module also derives session keys to be used by the handset for subsequent voice encryption and for signalling data encryption and authentication. These latter algorithms reside in the handset and have to be standardised so that the partner network of the receiver at the other end of the line is able to decrypt the received messages and route them to the correct user. A highly simplified sketch of this card-side flow is given at the end of this section.

The SIM module further serves as a secure token which allows service keys to be derived for many different applications requiring high levels of security and high storage capacity at the same time. This is where a simple SIM module becomes a high-density SIM. The European Telecommunications Standards Institute (ETSI) has defined a Generic Bootstrapping Architecture (GBA) which allows any SIM module to derive such service keys in a secure way and to make them available to advanced applications. ETSI has also defined protocols for Multimedia Broadcast Multicast Services (MBMS) and the like. Many of these newly defined services, such as mobile TV, require a high storage capacity at the user's end, which has in turn led to the introduction of high-density SIM cards. An HD-SIM™ can, for example, securely handle Digital Rights Management (DRM) systems, including storing encrypted digital content in the flash memory of the chip and running a DRM agent on the card which stores, updates and/or disables the rights associated with such DRM content. On such a chip a user can further store megabytes of digital data such as videos, pictures, huge address books, corporate directories, corporate files and so on.
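The card-side authentication flow described above can be summarised in a few lines of C. The sketch below is purely schematic: the mixing function is a stand-in, not MILENAGE or any operator algorithm, and the 16-byte sizes and the derivation label are assumptions for illustration. Its only purpose is to show that the long-term customer key stays inside the SIM, while only the challenge response and a derived session key leave the card.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Stand-in mixing function: NOT a real cryptographic algorithm.  A real SIM
 * uses the operator's authentication suite (e.g. MILENAGE or a proprietary
 * variant) implemented inside the tamper-resistant chip. */
static void toy_prf(const uint8_t key[16], const uint8_t *msg, size_t len,
                    uint8_t out[16])
{
    memcpy(out, key, 16);
    for (size_t i = 0; i < len; i++) {
        out[i % 16] ^= msg[i];
        out[(i + 1) % 16] = (uint8_t)(out[(i + 1) % 16] + out[i % 16] * 31u);
    }
}

/* Card side of authentication: K never leaves the SIM; only the response to
 * the network's challenge and the derived session key are exported. */
void sim_authenticate(const uint8_t k_customer[16], const uint8_t challenge[16],
                      uint8_t response[16], uint8_t session_key[16])
{
    uint8_t labelled[17];

    toy_prf(k_customer, challenge, 16, response);

    memcpy(labelled, challenge, 16);
    labelled[16] = 0x01;                      /* hypothetical derivation label */
    toy_prf(k_customer, labelled, 17, session_key);
}

int main(void)
{
    const uint8_t k[16]    = { [0] = 0x2B, [15] = 0x7E };   /* demo key       */
    const uint8_t rand[16] = { [0] = 0xA0, [15] = 0x55 };   /* demo challenge */
    uint8_t res[16], sk[16];

    sim_authenticate(k, rand, res, sk);
    printf("response starts with %02X %02X, session key with %02X %02X\n",
           res[0], res[1], sk[0], sk[1]);
    return 0;
}
```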
12.7 Smart Card Tamper Resistance

As previously mentioned, security is a major concern for the issuer. Thus one of the first requirements an issuer places on a smart card is that it be tamper-resistant, i.e. that nobody is able to attack the chip and extract its secrets. Smart card tamper resistance is achieved by a careful design methodology and by adding numerous security features to the chip which allow it to detect when it is being misused. Another security feature is the resistance of the cryptographic engines against so-called side-channel attacks, which allow an attacker to capture information that leaks through physical characteristics such as the power consumption of the card, its electromagnetic radiation, the time it takes to make a computation and, last but not least, corrupted computations. Such information leakage allows attackers to extract the secret keys used by cryptographic algorithms via advanced statistical cryptanalytic methods. This very rich research area is addressed in another chapter of this book; here we therefore focus on hardware attacks and on security measures at the hardware design level only.
12.7.1 Hardware Attacks

Smart cards contain many hardware blocks. Among these, we focus on the different types of memory and on some of the logic blocks, excluding the cryptographic engines. Memories, buses, random number generators and the like are vulnerable to a number of attacks using heavy equipment such as probe stations, focused ion beams, scanning electron microscopes, lasers and so on. The idea is to take equipment that normally serves for hardware manufacturing and turn it into equipment for invasive attacks. First of all, mechanical or chemical etching is applied to the chip surface in order to decapsulate it, remove the passivation layer and gain access to the bare internal layers. At this point, scanning electron microscopes help identify the areas of the chip in which sensitive memory blocks are located, and laser stations can be used in an attempt to modify the content of memory cells or to influence the behaviour of part of the logic. Next, probe stations allow a given memory bit or bus line to be selected and its content read out while the chip is operating. Focused ion beams allow holes to be drilled through a number of metal layers or hardware blocks in order to reach buried blocks, buses or individual signals and to read out their content too. They can also be used to cut tracks or to link tracks together in order to defeat or circumvent built-in security features. A classical example is to disconnect the on-board random number generator so that its output is permanently stuck at zero, making it practically useless; similarly, a blown fuse which irreversibly locks down the chip in a given state can be reconstructed so that the chip can, for example, be set back into test mode. To summarise, the objective of these attacks is to open up the chip, identify its large functional blocks, read out secret content, disconnect or disable its security features, or modify its behaviour so as to defeat any subsequent normal and secure use of it.
These attacks have led security chip designers to come up with ever more complex security features, which we describe in the next section.
12.7.2 Countermeasures at the Hardware Design Level

Chip card manufacturers have been studying hardware attacks and implementing the corresponding security features for a long time. It all starts with a careful analysis of the threat model. In this chapter we describe how to protect a silicon design from invasive attacks.

The first line of thought is to try to detect when the chip is under attack so that it can take preventive steps, either hiding or destroying its secrets or simply shutting down and refusing to operate (the so-called "card is mute" status). This can be achieved by any of the means described hereafter; generally, a tamper-resistant chip will include a large subset if not all of them. First, security mechanisms called sensors are added to the design to detect when the operating conditions leave their normal range. These include voltage sensors, which detect when the applied voltage is out of range or when the chip is subjected to voltage glitches; temperature sensors, which detect when the silicon is overheated or when someone tries to freeze it (for example, to freeze the content of the RAM during a critical computation involving on-board secrets); frequency detectors, which detect clock glitches or when the chip is clocked too fast or too slow; and light detectors, which detect when the passivation layer has been removed and light flashes are applied. Such abnormal conditions generally indicate that someone is trying to provoke computation errors which, as discussed before, allow cryptanalysts to extract secrets. Next, one typically adds a metal mesh or serpentine of power lines on top of the chip which detects when someone tries to remove it to gain access to lower layers, or to drill holes through it for probing purposes.

The second line of thought is to design according to specific security guidelines. These include using several independent metal layers and burying the most sensitive buses and signals in the lower layers of the chip. Designing at a sub-micron scale makes it all the more difficult for an attacker to locate the different target blocks and to invade the chip without functionally destroying it. Some critical elements of the design, such as the random number generator, require specific glue-logic design techniques to make things even more confusing. The test logic specifically needs to reside on the scribe line so that it is sawn away and physically destroyed when the dies are cut apart. This is the only way to ensure that one cannot easily access the test logic on a fielded device.

The third line of thought is related to the fact that even if a chip is decapsulated and open for inspection, an attacker should not be able to make any sense of the information he probes on the bus or reads from the memory cells. This means that every bit of information in the memory and on the bus needs to be scrambled. Typically, both data and addresses are scrambled so that one cannot easily apply known- or chosen-text attacks to recover the scrambling key.
The issue with scrambling is that this mechanism should ideally be as strong as an encryption algorithm, yet it needs to execute in a single cycle so as not to introduce any delay, and it should not add too many gates to the design if it is to remain cost-effective. Such scrambling mechanisms usually remain proprietary, since they would certainly not withstand the scrutiny of cryptanalysts. Implementing a subset or all of these features in silicon generally suffices to reach an acceptable security level for today's smart card industry. A toy illustration of address and data scrambling is given below.
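The following C fragment is a deliberately toy illustration of keyed address and data scrambling; the key values and the rotate-and-XOR construction are invented for the example and, exactly as the text warns, offer no cryptographic strength. It merely shows the principle that what an attacker probes in a memory cell depends on both a chip-specific key and the location being accessed.

```c
#include <stdint.h>
#include <stdio.h>

/* Toy address/data scrambler.  Purely illustrative: real scramblers are
 * proprietary hardware and deliberately lightweight rather than strong. */

static const uint16_t ADDR_KEY = 0x5A3C;   /* hypothetical chip-specific keys */
static const uint8_t  DATA_KEY = 0xA7;

/* Key-dependent XOR plus rotation of the address lines. */
uint16_t scramble_addr(uint16_t a)
{
    a ^= ADDR_KEY;
    return (uint16_t)((a << 3) | (a >> 13));
}

/* Data mask depends on both a data key and the scrambled address, so the
 * same value stored at two locations leaves two different bit patterns. */
uint8_t scramble_data(uint8_t d, uint16_t scrambled_addr)
{
    return d ^ DATA_KEY ^ (uint8_t)(scrambled_addr >> 8) ^ (uint8_t)scrambled_addr;
}

uint8_t descramble_data(uint8_t cell, uint16_t scrambled_addr)
{
    return scramble_data(cell, scrambled_addr);  /* XOR masking is its own inverse */
}

int main(void)
{
    uint16_t a  = 0x1234;
    uint16_t sa = scramble_addr(a);
    uint8_t  c  = scramble_data(0x3B, sa);
    printf("addr %04X -> %04X, data 3B -> %02X -> %02X\n",
           a, sa, c, descramble_data(c, sa));
    return 0;
}
```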
12.7.3 New Security Challenges for High-Density Cards

Compared with the traditional smart card market, high-density cards obviously inherit all the previously mentioned threats and need to accommodate the basic features described in the previous section. On top of those, high-density cards also make heavy use of cryptographic algorithms and need to make sure that the hardware design of those algorithms takes into account the additional threat represented by side-channel attacks. However, having extra megabytes of flash memory on board the card, but no ROM memory any more, also introduces a number of new challenges in the high-density world.

First of all, the encryption cores used for secure storage applications now need to run at a much higher throughput, typically 3–5 Mbytes/s, in order to keep up with the USB and MMC protocols, while still being able to operate securely in low-power USIM mode (i.e. sometimes with as little as 4 mA available). This requires a complete redesign of the associated hardware cores; designing high-throughput, low gate count, side-channel-resistant encryption cores is currently an open problem.

One of the next big challenges is to integrate the whole design on a single die. Dual-die architectures have an intrinsic security weakness, since they have to include a very large and visible bus between the two dies which can easily be probed. And since scrambling mechanisms are not as secure as encryption mechanisms, this would allow a cryptanalyst to rather easily reverse-engineer the scrambling mechanism and subsequently read data from that bus in the clear.

A further challenge is how to boot the system securely from flash. Recall that there is no ROM memory available for the operating system any more. There therefore needs to be some OTP (one-time programmable) mechanism which locks down at least a fraction of the boot code, which can subsequently authenticate and install further pieces of code and execute them securely from flash memory. This includes new personalisation procedures which can securely download the operating system from a customer without revealing it to the manufacturer or to the personalising entity. Such secure download is handled by a so-called IPL, or initial program loader, with public-key signature verification capabilities.

One more challenge is how to securely emulate the behaviour of EEPROM. Recall that EEPROM is normally byte-accessible, whereas flash memory can only be erased sector-wise. This means that whenever a 128-bit secret key has to be updated, 16 kbytes of memory have to be erased. Therefore, a mechanism is needed which simulates such byte access. It basically works as follows: write the contents into a new page each time a value needs to be updated and, once all pages of a sector are full, copy the relevant (live) content into a new sector and erase the previous one, somewhat like garbage collection. From the security point of view this obviously means that sensitive content never really gets erased until the whole sector is reclaimed, which is clearly an undesirable feature. A toy model of such an emulation layer is sketched below.
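The page-append-and-compact behaviour just described can be captured in a short C model. The sketch below is a toy simulation under several assumptions: records are fixed (tag, value) byte pairs, sector sizes are tiny, and wear levelling, power-loss recovery and the secure-wiping problem mentioned above are deliberately ignored; real emulation layers in high-density cards are proprietary firmware.

```c
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Toy byte-granular "EEPROM" on top of sector-erase flash: every update
 * appends a (tag, value) record; when the active sector is full, the newest
 * value of each tag is copied to the spare sector and the old one is erased. */

#define RECORDS_PER_SECTOR 64
#define ERASED             0xFF

typedef struct { uint8_t tag; uint8_t value; } record_t;

static record_t flash[2][RECORDS_PER_SECTOR];
static int active, used;                  /* records consumed in active sector */

static void erase_sector(int s) { memset(flash[s], ERASED, sizeof flash[s]); }

int ee_read(uint8_t tag, uint8_t *value)
{
    for (int i = used - 1; i >= 0; i--)   /* newest record for the tag wins */
        if (flash[active][i].tag == tag) { *value = flash[active][i].value; return 1; }
    return 0;
}

static void compact(void)
{
    int spare = 1 - active, out = 0;
    erase_sector(spare);
    for (int i = used - 1; i >= 0; i--) { /* keep only the live value per tag */
        int seen = 0;
        for (int j = 0; j < out; j++)
            if (flash[spare][j].tag == flash[active][i].tag) seen = 1;
        if (!seen && flash[active][i].tag != ERASED)
            flash[spare][out++] = flash[active][i];
    }
    erase_sector(active);                 /* stale copies survive until here! */
    active = spare;
    used = out;
}

void ee_write(uint8_t tag, uint8_t value)
{
    if (used == RECORDS_PER_SECTOR)
        compact();                        /* assumes compaction frees space */
    flash[active][used++] = (record_t){ tag, value };
}

int main(void)
{
    erase_sector(0); erase_sector(1);
    for (int i = 0; i < 200; i++)         /* forces several compaction cycles */
        ee_write(0x07, (uint8_t)i);
    uint8_t v;
    if (ee_read(0x07, &v))
        printf("tag 0x07 = %u\n", v);     /* prints 199 */
    return 0;
}
```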
These are some of the security challenges high-density card designers and manufacturers have to face, but overall the security level of such advanced devices is surprisingly satisfactory. The convergence between secure memories and smart cards appears to be a very promising avenue for security applications in the digital world.

Acknowledgments The authors gratefully acknowledge the contribution of Roel Maes from KU Leuven, whose help in drawing the figures for this chapter was as indispensable as it was timely and efficient.
References

1. R. Bez, E. Camerlenghi, A. Modelli, and A. Visconti, "Introduction to flash memory", Proceedings of the IEEE, vol. 91, no. 4, April 2003, pp. 489–502.
2. A. Sharma, Semiconductor Memories: Technology, Testing, and Reliability, IEEE Press, 1997.
3. W. D. Brown and J. E. Brewer, Nonvolatile Semiconductor Memory Technology: A Comprehensive Guide to Understanding and Using NVSM Devices, IEEE Press, 1997.
4. P. Cappelletti, C. Golla, P. Olivo, and E. Zanoni, Eds., Flash Memories, Kluwer, Norwell, MA, 1999.
5. L. Selmi and C. Fiegna, "Physical aspects of cell operation and reliability", in Flash Memories, P. Cappelletti et al., Eds., Kluwer, Norwell, MA, 1999.
6. S. Minimani and Y. Kamogaki, "A novel MNOS nonvolatile memory device ensuring 10-year data retention after 10^7 erase/write cycles", IEEE Transactions on Electron Devices, vol. 40, no. 11, 1993, pp. 2011–2017.
7. B. Eitan, P. Pavan, I. Bloom, E. Aloni, A. Frommer, and D. Finzi, "A novel localized trapping, 2-bit nonvolatile memory cell", IEEE Device Letters, 2002, pp. 543–545.
8. MirrorBit White Paper, AMD/Fujitsu Flash Memory, Publication number 25260.
9. NAND Flash Application Design Guide, Toshiba America Electronic Components, Inc., March 2004.
10. I. Motta, G. Ragone, O. Khouri, G. Torelli, and R. Micheloni, "High-voltage management in single-supply CHE NOR-type flash memories", Proceedings of the IEEE, vol. 91, no. 4, April 2003, pp. 554–568.
11. A. Silvagni, G. Fusillo, R. Ravasio, M. Picca, and S. Zanardi, "An overview of logic architectures inside flash memory devices", Proceedings of the IEEE, vol. 91, no. 4, April 2003, pp. 569–580.
12. G. Campardo, D. Canali, D. Fattori, G. Girardi, P. Scintu, L. Tarchini, and D. Tricario, "An overview of flash architectural developments", Proceedings of the IEEE, vol. 91, no. 4, April 2003, pp. 523–536.
13. Security Features in NOR Flash Memories, ST Microelectronics, <www.st.com/flash>, September 2005 (accessed 15.05.2007).
14. Spansion Advanced Sector Protection, Spansion LLC, <www.spansion.com/application_notes/spansion_advsectprot_an_a0_e.pdf>, September 2002 (accessed 25.06.07).
15. Spansion S29WS256N vs. Intel 28F256L18, Spansion LLC, Sunnyvale, CA, <www.spansion.com/application_notes/S29WS256N_vs_28H256L18_AN_A0>, April 2006 (accessed 25.06.07).
16. Krypto™ Security for NOR Flash Memories, ST Microelectronics, <www.st.com/flash>, October 2005 (accessed 15.05.07).
17. Flash NOR, Embedded Applications: Secure-Krypto™, ST Microelectronics, 2005 (accessed 15.06.07).
18. Secure MMC, Samsung Semiconductor Global, <www.samsung.com/Products/Semiconductor/FlashCard/MMC/secure_mmc.htm>, 2006 (accessed 26.06.07).
19. P. Laackmann and M. Janke, "Integral security from flash to ROM", Infineon Technologies, <www.infineon.com/security>, 2003 (accessed 3.05.06).
20. A. Constantinou, "High capacity SIMs: a white paper", Informa Telecoms and Media, 2006 (accessed 26.06.07).
21. P. Gutmann, "Data remanence in semiconductor devices", Proceedings of the 10th USENIX Security Symposium, Washington, USA, August 13–17, 2001 (accessed 10.04.07).
22. S. Skorobogatov and R. Anderson, "Optical fault injection attacks", Cryptographic Hardware and Embedded Systems (CHES 2002), Lecture Notes in Computer Science, vol. 2523, Springer-Verlag, 2002, pp. 2–12.
23. R. Anderson and M. Kuhn, "Low cost attacks on tamper resistant devices", in M. Loman et al. (Eds.), Security Protocols, Proceedings 5th International Workshop IWSP, Lecture Notes in Computer Science, vol. 1361, Springer-Verlag, 1997, pp. 125–136.
24. O. Kommerling and M. Kuhn, "Design principles for tamper-resistant smartcard processors", Smartcard'99, Proceedings USENIX Workshop on Smartcard Technology, 1999, pp. 9–20.
25. B.-E. Hagai, H. Choukri, D. Naccache, M. Tunstall, and C. Whelan, "The sorcerer's apprentice guide to fault attacks", Proceedings of the IEEE, vol. 94, no. 2, 2006, pp. 370–382.
26. M. Neve, E. Peeters, D. Samyde, and J.-J. Quisquater, "Memories: a survey of their secure uses in smart cards", Second IEEE International Security in Storage Workshop, 2003, pp. 62–71.
27. S. Skorobogatov, "Low temperature data remanence in static RAM", Cambridge Computer Laboratory, <www.cl.cam.ac.uk/TechReports/UCAM-CL-TR-536.pdf>, 2001 (accessed 15.02.04).
Index
Note: The letters ‘f’ & ‘t’ following the locators refer to figures and tables respectively
A Accidental code/data modification, standard protection, 222 ADC, see Analog-to-digital converters (ADC) ADC-chaos RNG design, 122–123 post-processing technique, 122 Additive Abelian group, 67 Address transition detection (ATD), 221 Advanced Encryption Standard (AES), 31, 39, 45–52, 206 basic rounds AddRoundKey, 48 MixColumns, 48 ShiftRows, 48 SubBytes, 48 implementation, 57–60 hardware implementation, 58–60 software implementation, 57–58 “state matrix”/“state”, 48 AES, see Advanced Encryption Standard (AES) Algorithms add-only right-to-left binary point multiplication, 76 binary modular exponentiation using exponent d’ = NAF(d), 66 binary point multiplication, 68 block cipher algorithm, 45–46 EC point operations minimizing register number, 185 generating width-w NAF, 74 helper data algorithm of fuzzy extractor, 135–138 Itoh and Tsujii algorithm, 23 least significant digit-serial (LSD) multiplier, 21 left-to-right and right-to-left binary modular exponentiation, 64
left-to-right point multiplication with NAFw(k), 74 modular addition over GF((2m)2), 73 modular addition/subtraction, 11 modular multiplication over GF((2m)2), 73 Montgomery inverse, 17 Montgomery modular multiplication, 13 Montgomery powering ladder, 65, 69 MSB-first scalar recoding with window size ω, 75 multiplicative inverse computation in F2 m, 24 NIST reduction, 15–16 point addition over GF(2m), 72 point multiplication, 184 pre-computation for k-ary modular exponentiation, 67 SAM secure IP format, 205 scalar multiplication algorithm, 183 shift-and-add most significant bit, 19 signed-digit exponent recoding, 65 Altera field programmable logic device family, 116–117 Analog-to-digital converters (ADC), 122 Architecture scheme, flash device, 221–222 Array organization, flash memory, 219–220 “bulk erasable”, 219 high-density modern devices, 220 wordline, 219 The Art of Computer Programming, 109 Asymmetric cryptography, see Modular integer arithmetic, public-key cryptography (PKC) ATD, see Address transition detection (ATD) Authentication, 56, 200–201, 206 B Bayesian classification, 35–36 Biot–Savart law, 29
236 Block cipher/stream cipher, 45–46 Boolean functions exclusive-or function, 160 modular addition, 160 modular multiplication, 160 C Capacitance matching precision, 148 DES Sbox, HyperExtract, 148 Carry delayed adders (CDA), 10–11 Carry look-ahead adders (CLA), 10–11, 16–17 Carry ripple adders (CRA), 10 Carry save adder (CSA), 10, 82, 85–86, 91–92 CASR, see Cellular automata shift register (CASR) CBC-Message Authentication Code (CMAC), 56 standardization of, 56 CDA, see Carry delayed adders (CDA) CE#, see Chip enable (CE#) Cell functionality, flash memory, 217–219 Cellular automata shift register (CASR), 115 CFB, see Cipher feedback (CFB) Chain-of-trust, multiple trustworthy components, 199 Challenge response pair (CRP), 128–129, 133–134, 137, 140, 200, 204–205, 212–213 CHES, see Cryptographic Hardware and Embedded Systems (CHES) Chip enable (CE#), 220 Cipher block chain (CBC), 54 Cipher feedback (CFB), 54 Ciphertext stealing, 53 CLA, see Carry look-ahead adders (CLA) CMAC, see CBC-Message Authentication Code (CMAC) CMOS devices, power consumption in, 28–29 dynamic power consumption, 29 short-circuit currents, 28 transistors, leakage currents in, 28 Cloning, unauthorised access to code/data/against, 224–226 Compact architecture, hardware design for hash functions, 100–101 denominator, iteration bound without, 100 registers, reduce number of, 100 Compact flash card (CF card), 197 Composite fields, 183, 185–186 Counter mode (CTR/CM), 55 CRA, see Carry ripple adders (CRA) Cross-talk techniques, 155 CRP, see Challenge response pair (CRP)
Index Cryptanalytic techniques, 38, 82, 230 Crypto-coprocessors AES, 228 DES, 228 RSA, 228 Cryptographic Hardware and Embedded Systems (CHES), 46 Cryptographic hash functions, 113 CSA, see Carry save adder (CSA) CTR/CM, see Counter mode (CTR/CM) Curve-based cryptography, 180 processors, low-cost applications, 186–192 modular arithmetic logic unit (MALU), 187–188 performance results/discussion, 189–192 CVC, see Cipher block chain (CBC) D Data-dependent power consumption, 29, 159 Data Encryption Standard (DES), 32, 38–39, 46 basic rounds, 51 invAddRoundKey, 51 invMixColumns, 51 invShiftRows, 51 invSubBytes, 51 Data flow graph (DFG), 83, 86f, 95f DES, see Data Encryption Standard (DES) The Design of Rijndael, 47 DFG, see Data flow graph (DFG) DHKE, see Diffie–Hellman key exchange (DHKE) Dichtl and Goli´c RNG design, 121–122 D-type flip-flop, 122 Differential power analysis (DPA), 28, 32, 60 Diffie–Hellman key exchange (DHKE), 3, 5 Digital ICs, 127 Digital design flow, 149–153 place and route approach, 151–153 secure digital design flow, 153 wave dynamic differential logic, 149–150 Digital oscilloscope, acquisition device, 30 Digital rights management (DRM), 229 Digital signal processing, 80 Digital signature algorithm (DSA), 5–7, 9 Digit-serial multiplications, 189 Discrete logarithm problem in finite fields (DLP), 4–6 DLP, see Discrete logarithm problem in finite fields (DLP) Double-and-add algorithm/binary algorithm, 67
Index DPA and template attacks, second-order masked AES round, 169f second-order DPA attacks, 170–172 correlation coefficient, 170, 161f on masked AES, 171f preprocessing, 170 template attacks, 172–175 template building phase, 173 DPA, see Differential power analysis (DPA) DRAM, see Dynamic RAM (DRAM) DRM, see Digital rights management (DRM) DSA, see Digital signature algorithm (DSA) DyCML, see Dynamic current mode logic (DyCML) Dynamic current mode logic (DyCML), 147 Dynamic/differential behavior, techniques, 150 connection rules, 150 logic, 146 precharge wave generation, 150 precharge wave propagation, 150 WDDL library construction, 150 Dynamic RAM (DRAM), 215 E ECB, see Electronic code book (ECB) ECDLP, see Elliptic curve discrete logarithm problem (ECDLP) EDK, see Embedded Developer Kit (EDK) EEPROM, see Electronically erasable programmable memory (EEPROM) Electronically erasable programmable memory (EEPROM), 215, 227–228, 232 Electronic code book (ECB), 52–54, 58 cipher text, Ci , 53 ciphertext stealing, 53 plaintext, Pi , 53 Elementary reduction scheme, 14 Elliptic curve cryptography (ECC), 5, 23, 67, 71, 73, 182 and HECC, difference, 73 over composite field, 71–72 over GF(2m), 70–71 over GF(p), 67–70 Montgomery powering ladder, 69t point addition/doubling over GF(p), 69–70 processor, architecture, 181 Elliptic curve discrete logarithm problem (ECDLP), 4 multiplication, 68, 73, 76 Embedded Developer Kit (EDK), 211 Embedded systems, demonstrating end-point security in, 197
237 correct/incorrect authenticated message, SAM setup, 213f end-point security for embedded systems, 197–199 hardware tampering, 199 message copying, 199 message tampering, 199 platform configuration, 198 software tampering, 199 video configuration, 198 platform/video configuration, 198t SAM performance results, 213t SAM implementation architecture, 206–207 chain-of-trust, booting, 202–203 design methodology, 211–212 secured video configurations, loading, 208–209 secure video peripheral, 209–211 system to SAM communication, 207–208 secure video message display, 198f system architecture, 201 booting chain-of-trust, 202–203 SAM protocol, 203–204 system layout, 201–202 security assurances, 199–201 VGA connector, 197 EM radiation in CMOS devices, 29–30 Biot–Savart law, 29 internal values and leakage modeling, derivation of, 33f relevant leakage samples, selection, 34f statistical comparison with correlation coefficient, 35f EMV, see Europay, Mastercard and VISA (EMV) Encryption algorithms, 45–46, 166 Entropy and mutual information, 133 Error correcting codes (ECC), 136 ETSI, see European Telecommunications Standards Institute (ETSI) Euclidean algorithm, see Fermat little theorem Europay, Mastercard and VISA (EMV), 226 European Telecommunications Standards Institute (ETSI), 229 Exemplary differential attack against DES, 32–35 device inputs, 33 internal values within algorithm, derivation of, 33 leakage measurement, 33–34 leakage modeling, 33
238 Exemplary differential (cont.) leakage source and measurement setup, 32–33 relevant leakage sample, 34 statistical comparison, 34 target algorithm and implementation, 32 target signal, 33 Exemplary power and EM leakage traces, 40 Exemplary profiled attack against DES, 36–37 leakage model, preparation of, 36–37 leakage modeling, 37 statistical comparison with correlation coefficient, 37, 37f Exponent recoding, 65–67 signed-digit exponent recoding, 66t Extractor functions, 112–113, 118 F FA, see Full-adders (FA), single-bit Factorization problem (FP), integer, 4 FP, attacked with IC, 4 NFS, 4 Fast Software Encryption (FSE), 46 Federal Information Processing Standard (FIPS), 38 Fermat little theorem, 16, 23, 59 Fibonacci galois ring oscillator (FIGARO), 120–121 theorem, 120–121 Field-programmable gate array (FPGA), 30, 32, 60, 107–123, 131, 139, 140, 197, 214 FIGARO, see Fibonacci galois ring oscillator (FIGARO) Finite fields, modular arithmetic, 7–9 definition, 7 operation, 7 modular exponentiation operations, 7 Finite state machines (FSM), 183, 186, 221, 223 FIPS, see Federal Information Processing Standard (FIPS) Fischer–Drutarovsk´y design, 116–117 Flash memory technology/architecture, 216–221 array organisation, 219–220 cell functionality, 217–219 flash memory user interface, 220–221 memory cell architecture, 216–217 Flash memory user interface, 220–221 CE#, 220 OE#, 220
Index RP#, 220 WE#, 220 F2 m, building blocks for fields, 9–16 inversion using Itoh–Tsujii algorithms, 23–24 Fermat’s little theorem, 23 gcd algorithms, 23 multiplication, 18–22 bit multipliers, 18–20 digit multipliers, 20–22 squaring, 22–23 one single clock cycle, 22 F2 m, multiplication bit multipliers, 18 shift-and-add method, 18 digital multipliers, 20 elliptic curve-based systems, 21 multiply/accumulate operation, 21 F p, building blocks for fields, 9–16 addition and subtraction, 10–11 faster reduction, 14–15 inversion, 15–16 almost Montgomery-inverse algorithm, 16 multiplication, 11–14 modular Montgomery multiplication, 11 operands/modules, 11 parallel algorithms, 12 sequential algorithms, 12 FP, see Factorization problem (FP), integer FPGA, see field-programmable gate array (FPGA) FPGA technology, 214 FP in cryptography RSA cryptosystem, 5 FSE, see Fast Software Encryption (FSE) FSM, see Finite state machines (FSM) Full-adders (FA), single-bit, 10 Fuzzy extractor, 137–140 strong extractor/secure sketch, construction, 137f G Galois field (GF), 49 Gate constructed, WDDL, 149 Gaussian channel, 133 distribution, 36–37 GBA, see Generic Bootstrapping Architecture (GBA) Gen, probabilistic procedure, 35 Generic Bootstrapping Architecture (GBA), 229
Index German department for security in information technology, 110 GF, see Galois field (GF) GF(256), finite field, 166 Goli´c FIGARO design, 120–121 H HA, see Half-adders (HA), single-bit Half-adders (HA), single-bit, 10 Hamming distance model (HDL), 30 Hamming weight model, 34, 172 Hardware – architecture level, 163–168 masking buses, 166 pitfalls, 166 random precharging, 165–166 Hardware attacks (Smart Card Tamper), 230–231 Hardware – cell level, 168–169 masked logic styles, 168 power analysis attacks, 168 two-input unmasked and corresponding masked cell, 169f Hardware description language, 211 Hardware design for hash functions, 79 collision resistant, 79–80 countermeasures, 231–232 dedicated hash algorithms, 80 designers’ feedback to hash designers, 99–101 high-throughput architecture, 100 registers, number of, 100 digital signature algorithms (DSA), 79 feedback to hash designers, 99–101 comments, 101 future work, 101 MD4-based techniques, 82–89 iteration bound analysis, 84–86 retiming transformation, 86–87 SHA1 Hash algorithm and DFG, 83–84 unfolding transformation, 87–90 Hash algorithms and security considerations, 80 Boolean functions, 80 design, implementation of, 95–99 microarchitecture level, 101 one-way hash algorithm, 80 preimage resistance, 79 retiming and unfolding, 80 second preimage resistance, 79 throughput optimal architecture of RIPEMD-160, 94–95 RIPEMD-160 data flow graph, 95f SHA2 expander DFG, 94f
239 throughput optimal architecture of SHA2, 90–94 DFG compressor, 91–94 DFG expander, 94 SHA-256 hash computation, 90f Hardware implementation of MD4-based, techniques for, 82–90 iteration bound analysis, 84–86 retiming transformation, 86–87 of SHA1, 87f SHA1 Hash algorithm and its DFG, 83–84 unfolding transformation, 87–90 of SHA1, 88f of SHA1 with CSA, 89f Hardware stream cipher, 209 Hash algorithms design, implementation of, 95–99 RIPEMD-160 algorithm, synthesis of, 98–99 results and comparison of RIPEMD-160, 99t and security considerations, 80 Boolean functions, 80 SHA1 algorithm, synthesis of, 96–97 results and comparison, 97t SHA2 algorithm, synthesis of, 97–98 results and comparison, 98t HDL see Hamming distance model (HDL) HDL programming, 98 HECC, see Hyperelliptic curve cryptosystems (HECC) HEC divisor multiplication, 73, 77 Helper data algorithm or Fuzzy Extractor, 135–138 High-density cards, 228–229 HD-SIM, application example, 228–229 High-end smart cards, see High-density cards High-throughput architecture, 82, 100 Hot electron injection, 217 Hyperelliptic curve cryptography (HECC), 72–73, 180, 182 ECC and HECC, difference, 73 MSB-first (left-to-right) scalar recoding, 74 Hyperelliptic curve cryptosystems (HECC), 5 I IACR, see International Association for Cryptologic Research (IACR) IC, see Index calculus (IC) Index calculus (IC), 4 Information reconciliation (fuzzy extractor), 135–138 helper data, 135
240 Information reconciliation (cont.) secure sketch, 135 Instruction set simulator (ISS), 211 Intellectual property (IP), 138–139, 199, 205, 212, 214 Intel RNG design, 114–115 International Association for Cryptologic Research (IACR), 46 IP, see Intellectual property (IP) IP protection, 139–140 bitstream encryption, 139 cloning attack, 139 SRAM FPGAs, programmable devices, 139 ISS, see Instruction set simulator (ISS) ITA, see Itoh and Tsujii algorithm (ITA) Itoh and Tsujii algorithm (ITA), 23–24 chains for exponentiation-based inversion, algorithm, 23 subfield inversion, 23 See also Algorithms J Jacobian representation, 68–69 K Kerckhoffs’ principles, cryptography, 107 Kohlbrenner–Gaj design, 117–118 L Leakages, origin of, 28–30 charge vs. discharge of CMOS inverter, 29f CMOS gates, 28 EM radiation in CMOS devices, 29–30 leakage models, 30 Least significant bit (LSB), 12, 18, 20 LFSR, see Linear feedback shift register (LFSR) Linear feedback shift register (LFSR), 108, 115, 120–121 Low-power cryptographic processor, 180 LSB, see Least significant bit (LSB) M MAC, see Message Authentication Code (MAC) Malicious code/data modification, protection against, 222–224 BGA package, 223 WP#, 223 MALU, see Modular arithmetic logic unit (MALU) Masked multiplier (MM), 163, 165 Masking Table LookUps, 161–162
Index AES, example, 161 add round key, 162 mix columns, 162 shift rows, 162 sub bytes, 162 popular countermeasure, 160–161 smart card implementation, 162 in software, 161–163 Maximum-likelihood (ML) decision rule, 173 MBMS, see Multimedia Broadcast Multicast Services (MBMS) MD4, cryptographic hash algorithm, 80 Memory cell architecture, flash memory, 216–217 Message Authentication Code (MAC), 56 Message digests – Hash(M), 79 Metal-oxide semiconductor (MOSFET), 127, 216 Microelectronic technologies, 28 MILENAGE, cryptographic suite of algorithms, 229 ML, see Maximum-likelihood (ML) decision rule MM see Masked multiplier (MM) MMS, see Multi-media message services (MMS) Modular arithmetic logic unit (MALU), 182, 187, 193 exponentiation, 31 Modular integer arithmetic, public-key cryptography (PKC) architecture of the squarer, 22f arithmetic of DLP/FP systems, 6f arithmetic of ECC systems, 6f asymptotic area/time complexities, n-bit adders, 10t building blocks for fields F2 m, 9–16 inversion using Itoh–Tsujii Algorithms, 23–24 multiplication, 18–22 squaring, 22–23 building blocks for fields F p , 9–16 addition and subtraction, 10–11 faster reduction, 14–15 inversion, 15–16 multiplication, 11–14 field elements, vector/polynomial representation, 8t finite fields, 7–9 finite fields classes, 9f F2 m gate consumption and latency figures, 23t
Index Montgomery inverse algorithm using CLA adders (hardware), 17f Montgomery modular multiplication with one CSA, 13f most significant bit-serial (MSB) multiplier circuit, 20f Modular Montgomery multiplication, 11–12 algorithm-per loop iteration, 13 architectures, 6, 13–14 CSA adder, 12–13 Montgomery domain, 12 Montgomery powering ladder, 64, 65t, 69t Moore’s law, 81 MOSFET, see Metal-oxide semiconductor (MOSFET) Most significant bit-serial (MSB), 19, 20f MSB, see Most significant bit-serial (MSB) Multimedia Broadcast Multicast Services (MBMS), 229 Multi-media message services (MMS), 229 Mutual opposite form (MOF), 75, 75t N NAF, see Non adjacent form (NAF) National Bureau of Standards, 47 National Institute and Standard’s Random Number Generation Technical Working Group, 109 National Institute of Standard and Technology (NIST), 46, 81 See also National Bureau of Standards NFS, see Number field sieve (NFS) methods NIST, see National Institute for Standards and Technology (NIST) Noisy analog measurements, 138 Non adjacent form (NAF), 65 Non-manufacturer reproducibility, 128–129 Non-volatile memory (NVM), 136, 138–139, 216–217, 224 NTRUEncrypt, 181 Number field sieve (NFS) methods, 4 NVM, see Non-volatile memory (NVM) O OCM, see On-chip memory modules (OCM) OE#, see Output enable (OE#) OFB, see Output feedback (OFB) On-chip memory modules (OCM), 201 On-chip peripheral bus (OPB), 203 One-time pad algorithm, 46 One time programmability (OTP), 223–226, 232 One-way trapdoor function, 4 ONO, see Oxide–nitride–oxide (ONO)
241 OPB, see On-chip peripheral bus (OPB) Operational modes, 52–57 cipher block chain, 54 cipher feedback, 54 counter mode, 55 electronic code book, 52–53 output feedback, 54–55 tweakable modes of operation, 56 OTP, see One time programmability (OTP) Output enable (OE#), 220 Output feedback (OFB), 54–55 Oxide–nitride–oxide (ONO), 216 P Parallel algorithms, 9 Personal identification number (PIN) code, 227 Phase-locked loop (PLL), 108, 116–117 Physical unclonable functions (PUFs), 119, 125, 128–135 coating PUF, 129, 130f intrinsic PUF, 129–134 n-MOS transistors (resistor), 131 p-MOS transistors (conducting state), 131 Silicon PUFs, 131–135 tamper evidence, 129 Picoblaze, micro-program on, 206 PIN, see Personal identification number (PIN) code Pipelined hardware architecture, 6 PKC, see Public key cryptography (PKC) Place and route approach, 151–153 grid definition, 153 non-preferred routing, 153 standard cell restriction, 153 transformation procedure, 152 translation procedure, 152 width reduction, 152 PLB, see Processor local bus (PLB) PLL, see Phase-locked loop (PLL) Point and divisor multiplication, ECC/HECC, 183 Post-processing techniques, 111–114 cryptographic hash functions, 113 extractor functions, 113–114 von neumann corrector, 112–113 Pottpouri of RNG designs, 114–123 ADC-chaos RNG design, 122–123 Devadas. PUF-based RNG design, 119–120 Dichtl and Goli´c RNG design, 121–122 Epstein’s RNG design, 116 Fischer–Drutarovsk´y design, 116–117 Goli´c FIGARO design, 120–121
242 Pottpouri of RNG designs (cont.) intel RNG design, 114–115 kohlbrenner–Gaj design, 117–118 rings design, 118–119 Tkacik RNG design, 115 Power analysis attacks by masking, 159 8-bit microcontroller, 175 boolean and arithmetic masking, 160 cryptographic device, 159 linear and non-linear functions, 160 masking, popular countermeasure, 160–161 boolean, 160 hardware – architecture level, 163–168 hardware – cell level, 168–169 software, 161–163 MM consists of four standard multipliers and adders, 165f second-order DPA attacks and template attacks masked AES round, 169f second-order DPA attacks, 170–172 template attacks, 172–175 PowerPC core, boot loader code, 201 algorithms selection/optimizations, 183–184 binary field arithmetic, 185–186 ECC/HECC arithmetic, algorithms, 184–185 ECC/HECC over binary fields, 182–183 Privacy amplification (Fuzzy extractor), 136 collision probability, 136 leftover hash-lemma, 136 strong extractor, 136 Private key algorithm, 3, 45 cryptography, 3 Processor local bus (PLB), 201 Projective coordinates, advantages, 68 Prototype IC/measurement results, 153–156 Pseudorandom number generator, 46, 79 Public key cryptography (PKC), 63 applications, 180 RFIDs networks, 180 sensor networks, 180 curve-based cryptography, 67–75 ECC over a composite field, 71–72 ECC over GF(2m), 70–71 ECC over GF(p), 67–70 Hyperelliptic Curve Cryptography (HECC), 72–73 scalar recoding, 73–75 exponent recoding, 65–67 signed-digit exponent recoding, 66t Montgomery powering ladder, 64
Index related work of PKC, 180–182 trends in, 76 parallelized left-to-right point multiplication, 76t RSA modular exponentiation, 63–65 left-to-right and right-to-left binary modular exponentiation, 64t right-to-left algorithm, 63 side-channel issues, 63 trends, 76 parallelized left-to-right point multiplication, 76t PUF-based RNG design, Devadas, 119–120 PUFs, process variations for security, 125 applications, 138–140 IP protection, 139–140 secure key storage, 138–139 background, 125–126 sub-micron process variations, 126 helper data algorithm or fuzzy extractor, 135–138 fuzzy extractor, 137–138 information reconciliation, 135–136 privacy amplification, 136 quantization, 138 physical unclonable functions (PUFs), 128–135 coating PUF, 129 intrinsic PUFs, 129–134 tamper evidence, 129 process variations, 127–128 fabrication process, 127 RFID, embedded devices, 125 uses, 135 key reconstruction or verification phase, 135 PUFs, see Physical unclonable functions (PUFs) R Radio frequency identification devices (RFIDs), 28, 77, 125–126, 179–193 RFID and sensor node, preliminaries, 182–186 RAM, see Random access memory (RAM) Random access memory (RAM), 215 Random number generator (RNG), integrated circuits and FPGAs ADC-chaos RNG design, 123f bi-stable memory component of RNG design, 116f circuits clocked by free running oscillators, 115f
Index fibonacci oscillator design, 120f Fischer–Drutarovsk´y RNG design, architecture, 117f galois oscillator design, 121f intel RNG uses junction noise, entropy source, 114f oscillator/CLB structure of the Kohlbrenner–Gaj design, 118f post-processing techniques, 111–114 cryptographic hash functions, 113 extractor functions, 113–114 von Neumann corrector, 112–113 post-processing of RNG designs, 111–114 ADC-chaos RNG design, 122–123 PUF-based RNG design, Devadas, 119–120 Dichtl and Goli´c RNG design, 121–122 Fischer–Drutarovsk´y Design, 116–117 Goli´c FIGARO design, 120–121 intel RNG design, 114–115 Kohlbrenner–Gaj design, 117–118 rings design, 118–119 RNG design, 116 Tkacik RNG design, 115 ring oscillators design, 118f testing for randomness, 108–111 statistical tests, 108–109 true randomness tests, 110–111 Randomness testing, 108–111 statistical tests, 108–109 true randomness tests, 110–111 certification mode, 110 online tests, 111 start-up test, 111 tot test, 111 Read-Only Memory (ROM), 92, 186, 215, 227–228, 232 Reset/power down (RP#), 220 Resilient functions, post-processing, 118–119 definition/theorem, 119 RFID, see Radio frequency identification devices (RFIDs) RFID/sensor nodes, compact public-key implementations area complexity in kgates of MALU, 189t area in number of kgates for cryptosystems, 190f comparison with other related work, 192t curve-based processor, architecture, 186f curve-based processors, low-cost applications, 186–192
243 performance results/discussion, 189–192 embedded devices, 125 hierarchy of ECC/HECC operations, 184f logic inside one cell of the MALU, 188f MALU, architecture, 188f one scalar multiplication for ECC, 191f performance in number of cycles for cryptosystems, 191f preliminaries, 182–186 algorithms selection/optimizations, 183–184 binary field arithmetic, 185–186 ECC/HECC arithmetic, algorithms, 184–185 ECC/HECC over binary fields, 182–183 related work, 180–182 results for power for cryptosystems, 192f Rings oscillators design, 118–120 RIPEMD-160, throughput optimal architecture, 94–95 RIPEMD-160 data flow graph, 95f SHA2 expander DFG, 93f RNG design, Epstein, 116, 116f RNG, see Random number generator (RNG), integrated circuits and FPGAs ROM, see Read-Only Memory (ROM) RP#, see Reset/power down (RP#) RSA modular exponentiation, 63–65 left-to-right and right-to-left binary modular exponentiation, 64t right-to-left algorithm, 63 side-channel issues, 63 S SABL, see Sense amplifier-based logic (SABL) SAM protocol, 203–204 off-line phase, 205 online phase, 204–205 CRP list generation, 204f video configuration enrollment and distribution, 204f SAM, see Secure Authentication Module (SAM) implementation Scalar multiplication algorithm, 183 recoding, 73–75 generating width-w NAF, 74t MSB-first scalar recoding with window size ω, 75t Schnorr’s identification scheme, 181 SD, see Signed-digit (SD), binary
244 Secret key crypto implementations AES, implementation, 57–60 hardware implementation, 58–60 software implementation, 57–58 block cipher/stream cipher, 45–46 CBC mode encryption, 54f counter mode encryption, 55f ECB encryption, 52f encryption standard, advanced, 47–52 finite field inversion, architecture, 60f internal structure of an AES round, 48f key schedule, 51f MixColumns transformation, 50f operation modes, 52–57 performances of software AES, 58t ShiftRows transformation, 50f SubBytes transformation, 49f SecSi, see secure silicon area (SecSi) Secure Authentication Module (SAM) implementation, 202–203 architecture, 206–207 chain-of-trust, booting, 202–203 design methodology, 211–212 coding complexity, 212t secure video system design flow, 211f opcodes for, 208 challenge response pairs, generating, 208 IP, loading secured, 208 secured video configurations, loading, 208–209 secure video peripheral, 209–211 Trivium with video raster scan, 210f video peripheral with streaming decryption, 209f system to SAM communication, 207–208 command format, 208t software communication, 207f Secure silicon area (SecSi), 223 Smart card security, secure memories to advanced sector protection mechanisms, 225f architecture scheme, 221–222 flash memory technology/architecture, 216–221 array organisation, 219–220 cell functionality, 217–219 flash memory user interface, 220–221 memory cell architecture, 216–217 hierarchical column sector organisation, 220f high-density cards, 228–229 HD-SIM, application example, 228–229
Index microprocessor card, 226 NOR array organisation, 217f sector protection from malicious code/data alteration, 224f secure memories, 222–226 accidental code/data modification, standard protection, 222 malicious code/data modification, protection against, 222–224 unauthorised access to code/data/against cloning, protection against, 224–226 secure memories to smart cards, 226–228 smart cards/high-density cards, comparison, 228t smart card tamper resistance, 230–233 hardware attacks, 230–231 hardware design level, countermeasures, 231–232 high-density cards, security challenges, 232–233 Sense amplifier-based logic (SABL), 147 Sequential algorithms, 9, 12 SHA1, hash standard, 81 SHA2, throughput optimal architecture, 90–94 DFG of SHA2 compressor, 91–93 basic, 91f final, 93f optimized SHA2 compressor DFG and with CSA, 92f DFG of SHA2 expander, 94 SHA-256 hash computation, 90f Short message services (SMS), 229 Side-channel attacks, 27–28, 31, 37–38, 76, 183, 230, 232 active vs. passive, 27, 31 basics of, 28–32 measurement setups, 30–31 origin of leakages, 28–30 SPA and DPA, classical attacks, 31–32 “classical” cryptanalysis, 27 counter measures, 37–38 cryptographic primitive, 27 exemplary differential attack against DES, 32–35 device inputs, 33 internal values within algorithm, derivation of, 33 leakage measurement, 33–34 leakage modeling, 33 leakage source and measurement setup, 32–33 relevant leakage sample, 34
Index statistical comparison, 34 target algorithm and implementation, 32 target signal, 33 improvement, 35–37 exemplary profiled attack against DES, 36–37 invasive vs. non-invasive, 27 microelectronic technologies, 28 physical security, 27 smart cards and RFIDs, embedded devices, 28 Side-channel resistant circuit styles/IC design flow balanced load capacitance, 147f capacitances at true signal nets versus capacitances, 148f cross-coupling effects, 152f dual rail with precharge inverter, 147t increased distance, 152f OAI32 gate with drive strength 2 in static CMOS, 150f parallel routes on adjacent grid lines, 151f prototype IC, 156t prototype IC/measurement results, 153–156 secure digital design flow, 149–153, 154f place and route approach, 151–153 secure digital design flow, 153 wave dynamic differential logic, 149–150 shielding lines, 152f supply current of protected processor, 155f supply current of unprotected processor, 155f transition-independent power consumption, requirements, 146–148 capacitance matching precision, 148 same capacitance value for each switching event, 147–148 single switching event per clock cycle, 146–147 unbalanced load capacitance, 147f wave dynamic digital logic:compound gate composition, 149f wave dynamic digital logic:precharge wave generation, 150f Signature algorithm (ECDSA), implementation, 181 Signed-digit (SD), binary, 65–66 Silicon PUFs arbiter-based PUF, 134
245 delay circuit as array of switch elements, 132f delay element in oscillating loop, 132f silicon delay PUF, histogram of, 134f Xilinx Spartan3 FPGA (XC3S200), 132 SIM, see Subscriber identity modules (SIM) Simple power analysis (SPA), 28, 31, 34, 162 Single switching event per clock cycle, 146–147 Sliding-window method, 67, 75 “Smart card”, designation, 226–228 Smart cards/high-density cards, comparison, 228t Smart card tamper resistance, 230–233 hardware attacks, 230–231 hardware design level, countermeasures, 231–232 high-density cards, security challenges, 232–233 SMS, see Short message services (SMS) Soft sector protection mechanisms, principle, 223 Software implementation, AES, 57–58 cache miss penalty, 58 T-table, creation, 57 SPA, see Simple power analysis (SPA) SPA and DPA, classical attacks, 31–32 Hamming weight/distance data dependencies in power consumption, 32f SPA monitoring from single AES encryption by smart card, 31f SRAM, see Static RAM (SRAM) SRAM PUFs, 130 cell circuit consisting of six MOSFETs, 130f Static RAM (SRAM), 215 Stream cipher algorithm, 45–46 bit-wise XOR, 46 one-time pad structure, 46 pseudorandom number generator, 46 Subscriber identity modules (SIM), 226 Switching event, capacitance value, 147–148 Symmetric algorithm, 45 block cipher algorithm, 45 secret key, 45 stream cipher algorithm, 45 Symmetric block cipher, 5 Symmetric-key/private-key algorithms, 3 Synonymous/secret key algorithm, see Private key algorithm Synopsys design vision, 99, 189
246 T Template attacks data and key, templates for, 173 intermediate values, templates for, 173 probability over an increasing number of traces, 175f template-based DPA attacks, 173–174, 174f “Three Input Bitwise Exclusive OR” operation, 84 Threshold voltage, transistor, 216 Tkacik RNG design, 115 TPM, see Trusted platform module (TPM) Transition-independent power consumption, requirements, 146–148 capacitance matching precision, 148 same capacitance value for each switching event, 147–148 single switching event per clock cycle, 146–147 Trivium stream cipher, 202–203, 209–210 Trusted platform module (TPM), 199 V VCO, see Voltage controlled oscillator (VCO) VGA connector, 197, 202 Video graphics array (VGA), 197–199, 201–203 Video IA, video message configuration, 200 Video system architecture, secure, 201
Index booting chain-of-trust, 202–203 architecture establishes chain-of-trust, 202f SAM protocol, 203–204 system layout, 201–202 secure video messaging system architecture, 202f VGA, see Video graphics array (VGA) Voltage controlled oscillator (VCO), 108, 114–115 Von Neumann corrector, 112–113 advantage, 113 negative aspects, 113 W Wave dynamic differential logic (WDDL), 149 gate construction, 149 static complementary CMOS logic, 149 WDDL, see Wave dynamic differential logic (WDDL) WE#, see Write enable (WE#) Weil descent attacks, 72 WP#, see Write Protection pin (WP#) Write enable (WE#), 220 Write Protection pin (WP#), 223 X Xilinx Spartan3 FPGA (XC3S200), 132 XUP Virtex-II Pro development board, 212