Design for Manufacturability and Statistical Design A Constructive Approach
Series on Integrated Circuits and Systems Series Editor:
Anantha Chandrakasan Massachusetts Institute of Technology Cambridge, Massachusetts
Design for Manufacturability and Statistical Design: A Constructive Approach. Michael Orshansky, Sani R. Nassif, and Duane Boning. ISBN 978-0-387-30928-6
Low Power Methodology Manual: For System-on-Chip Design. Michael Keating, David Flynn, Rob Aitken, Alan Gibbons, and Kaijian Shi. ISBN 978-0-387-71818-7
Modern Circuit Placement: Best Practices and Results. Gi-Joon Nam and Jason Cong. ISBN 978-0-387-36837-5
CMOS Biotechnology. Hakho Lee, Donhee Ham and Robert M. Westervelt. ISBN 978-0-387-36836-8
SAT-Based Scalable Formal Verification Solutions. Malay Ganai and Aarti Gupta. ISBN 978-0-387-69166-4, 2007
Ultra-Low Voltage Nano-Scale Memories. Kiyoo Itoh, Masashi Horiguchi and Hitoshi Tanaka. ISBN 978-0-387-33398-4, 2007
Routing Congestion in VLSI Circuits: Estimation and Optimization. Prashant Saxena, Rupesh S. Shelar, Sachin Sapatnekar. ISBN 978-0-387-30037-5, 2007
Ultra-Low Power Wireless Technologies for Sensor Networks. Brian Otis and Jan Rabaey. ISBN 978-0-387-30930-9, 2007
Sub-Threshold Design for Ultra Low-Power Systems. Alice Wang, Benton H. Calhoun and Anantha Chandrakasan. ISBN 978-0-387-33515-5, 2006
High Performance Energy Efficient Microprocessor Design. Vojin Oklibdzija and Ram Krishnamurthy (Eds.). ISBN 978-0-387-28594-8, 2006
Abstraction Refinement for Large Scale Model Checking. Chao Wang, Gary D. Hachtel, and Fabio Somenzi. ISBN 978-0-387-28594-2, 2006
A Practical Introduction to PSL. Cindy Eisner and Dana Fisman. ISBN 978-0-387-35313-5, 2006
Thermal and Power Management of Integrated Systems. Arman Vassighi and Manoj Sachdev. ISBN 978-0-387-25762-4, 2006
Leakage in Nanometer CMOS Technologies. Siva G. Narendra and Anantha Chandrakasan. ISBN 978-0-387-25737-2, 2005
Continued after index
Michael Orshansky • Sani R. Nassif • Duane Boning
Design for Manufacturability and Statistical Design A Constructive Approach
Michael Orshansky The University of Texas at Austin Austin, TX USA
Sani R. Nassif IBM Research Laboratories Austin, TX USA
Duane Boning Massachusetts Institute of Technology Cambridge, MA USA
Series Editor: Anantha Chandrakasan Department of Electrical Engineering and Computer Science Massachusetts Institute of Technology Cambridge, MA 02139 USA
An image by Eugene Timerman used in cover design.
Library of Congress Control Number: 2007933405
ISBN 978-0-387-30928-6
e-ISBN 978-0-387-69011-7
© 2008 Springer Science+Business Media, LLC All rights reserved. This work may not be translated or copied in whole or in part without the written permission of the publisher (Springer Science+Business Media, LLC, 233 Spring Street, New York, NY 10013, USA), except for brief excerpts in connection with reviews or scholarly analysis. Use in connection with any form of information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed is forbidden. The use in this publication of trade names, trademarks, service marks, and similar terms, even if they are not identified as such, is not to be taken as an expression of opinion as to whether or not they are subject to proprietary rights. Printed on acid-free paper. 9 8 7 6 5 4 3 2 1 springer.com
To Leo, Mariya, Boris, Yana and Zhenya Michael
To Cosette, Julie, Victoria and Kamal Sani
To my family Duane
Preface
Progress in microelectronics over the last several decades has been intimately linked to our ability to accurately measure, model, and predict the physical properties of solid-state electronic devices. This ability is currently endangered by the manufacturing and fundamental limitations of nanometer scale technology, which result in increasing unpredictability in the physical properties of semiconductor devices. Recent years have seen an explosion of interest in Design for Manufacturability (DFM) and in statistical design techniques. This interest is directly attributable to the difficulty of manufacturing integrated circuits in nanometer scale CMOS technologies with high functional and parametric yield. The scaling of CMOS technologies has brought about an increasing magnitude of variability in the key parameters affecting the performance of integrated circuits. The large variation can be attributed to several factors. The first is the rise of multiple systematic sources of parameter variability caused by the interaction between the manufacturing process and the design attributes. For example, optical proximity effects cause polysilicon feature sizes to vary depending on the local layout surroundings, while copper wire thickness strongly depends on the local wire density because of chemical-mechanical polishing. The second is that while technology scaling reduces the nominal values of key process parameters, such as effective channel length, our ability to correspondingly improve manufacturing tolerances, such as mask fabrication errors and mask overlay control, is limited. This results in an increase in the relative amount of variation observed. The third, and most profound, reason for the future increase in parametric variability is that technology is approaching the regime of fundamental randomness in the behavior of silicon structures. For example, the shrinking volume of silicon that forms the channel of the MOS transistor will soon contain a small, countable number of dopant atoms. Because the placement of these dopant atoms is random, the final number of atoms that end up in the channel of each transistor is a random variable. Thus, the threshold voltage of the transistor, which is determined by the number
of dopant atoms, will also exhibit significant variation, eventually leading to variation in circuit-level performances, such as delay and power. This book presents an overview of the methods that need to be mastered in understanding state-of-the-art Design for Manufacturability (DFM) and Statistical Design (SD) methodologies. Broadly, design for manufacturability is a set of techniques that attempt to fix the systematic sources of variability, such as those due to photolithography and CMP. Statistical design, on the other hand, deals with the random sources of variability. Both paradigms must operate within a common framework, and their joint understanding is one of the objectives of this book. The areas of design for manufacturability and statistical design are still being actively developed, and an established canon of methods and principles does not yet exist. This book attempts to provide a constructive treatment of the causes of variability, the methods for statistical data characterization, and the techniques for modeling, analysis, and optimization of integrated circuits to improve yield. The objective of such a constructive approach is to formulate a consistent set of methods and principles that allow rigorous statistical design and design for manufacturability from device physics to large-scale circuit optimization. Writing about relatively new areas like design for manufacturability and statistical design presents its own difficulties. The subjects span a wide area between design and manufacturing, making it impossible to do justice to the whole area in this one volume. We also limit our discussion to problems directly related to variability, with the realization that the term DFM may be understood to refer to topics that we do not address in this book. Thus, we do not discuss topics related to catastrophic yield modeling due to random defects and particles, and the accompanying issues of critical area, via doubling, wire spreading, and other layout optimization strategies for random yield improvement. These topics have been researched extensively, and there are excellent books on the subject, notably [89] and [204]. Also, with the rapid continuous progress occurring at the time of this writing, it is the authors' sincere hope that many of the issues and problems outlined in this book will shortly become solved, and therefore irrelevant, problems. We assume that the reader has had a thorough introduction to integrated circuit design and manufacturing, and that the basics of how one creates an IC from the high-level system-oriented view down to the behavior of a single MOSFET are well understood. For a refresher, we would recommend [75] and [5]. The book is organized into four major parts. The first part, on Sources of Variability, contains three chapters that deal with the three major sources of variability: front-end variability impacting devices (Chapter 2), back-end variability impacting metal interconnect (Chapter 3), and environmental variability (Chapter 4). The second part, on Variability Characterization and Analysis, contains two chapters. Chapter 5 discusses the design of test structures for variability characterization. Chapter 6 deals with the statistically sound analysis of the results of measurements that are needed to create rigorous models of variability. The third part is on Design Techniques
for Systematic Manufacturability Problems, and deals with techniques of design for manufacturability. Chapter 7 describes the interaction of the design and the lithographic flow, and methods for improving printability. Chapter 8 is devoted to a description of techniques for metal fill required to ensure good planarity of multi-level interconnect structures. The final part, on Statistical Circuit Design, is devoted to statistical design techniques proper: it contains four chapters dealing with the prediction and mitigation of the impact of variability on circuits. Chapter 9 presents strategies for statistical circuit simulation. Chapter 10 discusses the methods for system-level statistical timing analysis using static timing analysis techniques. In Chapter 11, the impact of variability on leakage power consumption is discussed. The final chapter of the book, Chapter 12, is devoted to statistical and robust optimization techniques for improving parametric yield. This book would not have been possible without the generous help and support of many people: our colleagues and graduate students. Several individuals have been kind enough to read through the entire manuscript or its parts and give the authors essential feedback. Their comments and suggestions have helped us make this book better. We would like to specifically thank Aseem Agarwal, Shayak Banerjee, Puneet Gupta, Nagib Hakim, Yehea Ismail, Murari Mani, Dejan Markovic, Alessandra Nardi, Ashish Singh, Ashish Srivastava, Brian Stine, Wei-Shen Wang, Bin Zhang, and Vladimir Zolotov. We want to particularly thank Wojciech Maly and Lou Scheffer for reading the entire manuscript and giving us invaluable advice. We thank Denis Gudovskiy for his help with typesetting. Carl Harris, our publisher at Springer, has been a source of encouragement throughout the process of writing. And, of course, this book owes an enormous debt to our families.
Michael Orshansky, Austin, Texas
Sani Nassif, Austin, Texas
Duane Boning, Boston, Massachusetts
July 2007
Contents
1 INTRODUCTION
   1.1 RISE OF LAYOUT CONTEXT DEPENDENCE
   1.2 VARIABILITY AND UNCERTAINTY
   1.3 CHARACTERIZATION VS. MODELING
   1.4 MODEL TO HARDWARE MATCHING
   1.5 DESIGN FOR MANUFACTURABILITY VS. STATISTICAL DESIGN

Part I Sources of Variability

2 FRONT END VARIABILITY
   2.1 INTRODUCTION
   2.2 VARIABILITY OF GATE LENGTH
      2.2.1 Gate Length Variability: Overview
      2.2.2 Contributions of Photolithography
      2.2.3 Impact of Etch
      2.2.4 Line Edge Roughness
      2.2.5 Models of Lgate Spatial Correlation
   2.3 GATE WIDTH VARIABILITY
   2.4 THRESHOLD VOLTAGE VARIABILITY
   2.5 THIN FILM THICKNESS
   2.6 LATTICE STRESS
   2.7 VARIABILITY IN EMERGING DEVICES
   2.8 PHYSICAL VARIATIONS DUE TO AGING AND WEAROUT
   2.9 SUMMARY

3 BACK END VARIABILITY
   3.1 INTRODUCTION
   3.2 COPPER CMP
   3.3 COPPER ELECTROPLATING
   3.4 MULTILEVEL COPPER INTERCONNECT VARIATION
   3.5 INTERCONNECT LITHOGRAPHY AND ETCH VARIATION
   3.6 DIELECTRIC VARIATION
   3.7 BARRIER METAL DEPOSITION
   3.8 COPPER AND VIA RESISTIVITY
   3.9 COPPER LINE EDGE ROUGHNESS
   3.10 CARBON NANOTUBE INTERCONNECTS
   3.11 SUMMARY

4 ENVIRONMENTAL VARIABILITY
   4.1 INTRODUCTION
   4.2 IMPACT OF ENVIRONMENTAL VARIABILITY
   4.3 ANALYSIS OF VOLTAGE VARIABILITY
      4.3.1 Power Grid Analysis
      4.3.2 Estimation of Power Variability
   4.4 SYSTEMATIC ANALYSIS OF TEMPERATURE VARIABILITY
   4.5 OTHER SOURCES OF VARIABILITY
   4.6 SUMMARY

Part II Variability Characterization and Analysis

5 TEST STRUCTURES FOR VARIABILITY
   5.1 TEST STRUCTURES: CLASSIFICATION AND FIGURES OF MERIT
   5.2 CHARACTERIZATION USING SHORT LOOP FLOWS
   5.3 TRANSISTOR TEST STRUCTURES
   5.4 DIGITAL TEST STRUCTURES
   5.5 SUMMARY

6 STATISTICAL FOUNDATIONS OF DATA ANALYSIS AND MODELING
   6.1 A BRIEF PROBABILITY PRIMER
   6.2 EMPIRICAL MOMENT ESTIMATION
   6.3 ANALYSIS OF VARIANCE AND ADDITIVE MODELS
   6.4 CASE STUDIES: ANOVA FOR GATE LENGTH VARIABILITY
   6.5 DECOMPOSITION OF VARIANCE INTO SPATIAL SIGNATURES
   6.6 SPATIAL STATISTICS: DATA ANALYSIS AND MODELING
      6.6.1 Measurements and Data Analysis
      6.6.2 Modeling of Spatial Variability
   6.7 SUMMARY

Part III Design Techniques for Systematic Manufacturability Problems

7 LITHOGRAPHY ENHANCEMENT TECHNIQUES
   7.1 FUNDAMENTALS OF LITHOGRAPHY
      7.1.1 Optical Resolution Limit
   7.2 PROCESS WINDOW ANALYSIS
   7.3 OPTICAL PROXIMITY CORRECTION AND SRAFS
   7.4 SUBRESOLUTION ASSIST FEATURES
   7.5 PHASE SHIFT MASKING
   7.6 NON-CONVENTIONAL ILLUMINATION AND IMPACT ON DESIGN
   7.7 NOMINAL AND ACROSS PROCESS WINDOW HOT SPOT ANALYSIS
   7.8 TIMING ANALYSIS UNDER SYSTEMATIC VARIABILITY
   7.9 SUMMARY

8 ENSURING INTERCONNECT PLANARITY
   8.1 OVERVIEW OF DUMMY FILL
   8.2 DUMMY FILL CONCEPT
   8.3 ALGORITHMS FOR METAL FILL
   8.4 DUMMY FILL FOR STI CMP AND OTHER PROCESSES
   8.5 SUMMARY

Part IV Statistical Circuit Design

9 STATISTICAL CIRCUIT ANALYSIS
   9.1 CIRCUIT PARAMETERIZATION AND SIMULATION
      9.1.1 Introduction to Circuit Simulation
      9.1.2 MOSFET Devices and Models
      9.1.3 MOSFET Device Characterization
      9.1.4 Statistical Device Characterization
      9.1.5 Principal Component Analysis
   9.2 WORST CASE ANALYSIS
      9.2.1 Worst Case Analysis for Unbounded Parameters
      9.2.2 Worst Case Analysis Algorithm
      9.2.3 Corner-Based Algorithm
      9.2.4 Worst Case Analysis Example
   9.3 STATISTICAL CIRCUIT ANALYSIS
      9.3.1 A Brief SRAM Tutorial
      9.3.2 Monte-Carlo Analysis
      9.3.3 Response-Surface Analysis
      9.3.4 Variance Reduction and Stratified Sampling Analysis
   9.4 SUMMARY

10 STATISTICAL STATIC TIMING ANALYSIS
   10.1 BASICS OF STATIC TIMING ANALYSIS
   10.2 IMPACT OF VARIABILITY ON TRADITIONAL STATIC TIMING VERIFICATION
      10.2.1 Increased Design Conservatism
      10.2.2 Cost of Full Coverage and Danger of Missing Timing Violations
   10.3 STATISTICAL TIMING EVALUATION
      10.3.1 Problem Formulation and Challenges of SSTA
      10.3.2 Block-Based Timing Algorithms
      10.3.3 Path-Based Timing Algorithms
      10.3.4 Parameter Space Techniques
      10.3.5 Monte Carlo SSTA
   10.4 STATISTICAL GATE LIBRARY CHARACTERIZATION
   10.5 SUMMARY

11 LEAKAGE VARIABILITY AND JOINT PARAMETRIC YIELD
   11.1 LEAKAGE VARIABILITY MODELING
   11.2 JOINT POWER AND TIMING PARAMETRIC YIELD ESTIMATION
   11.3 SUMMARY

12 PARAMETRIC YIELD OPTIMIZATION
   12.1 LIMITATIONS OF TRADITIONAL OPTIMIZATION FOR YIELD IMPROVEMENT
   12.2 STATISTICAL TIMING YIELD OPTIMIZATION
      12.2.1 Statistical Circuit Tuning: Introduction
      12.2.2 Linear Programming under Uncertainty
   12.3 TECHNIQUES FOR TIMING AND POWER YIELD IMPROVEMENT
   12.4 SUMMARY

13 CONCLUSIONS

A APPENDIX: PROJECTING VARIABILITY

References

Index
1 INTRODUCTION
In theory, there is no difference between theory and practice; In practice, there is. Chuck Reid
Design for manufacturability and statistical design encompass a number of activities and areas of study spanning the integrated circuit design and manufacturing worlds. In the early days of the planar integrated circuit, it was typical for a handful of practitioners working on a particular design to have a fairly complete understanding of the manufacturing process, the resulting semiconductor active and passive devices, as well as the resulting circuit, often composed of as few as tens of devices. With the success of semiconductor scaling, predicted, and to a certain extent even driven, by Moore's law, and the vastly increased complexity of modern nanometer-scale processes and the billion-device circuits they allow, there came a necessary separation between the various disciplines. For a number of process generations, until roughly the 0.18µm generation, the interface between the design and manufacturing phases of an integrated circuit could be well represented by straightforward device models (which we will discuss in Chapter 9) and simple geometric design rules, which typically determined the minimum widths and spacings for the various layers that composed the integrated circuit. The role of the models was to enable us to predict the behavior of the integrated circuit, given that it is not possible to prototype the IC in order to find out whether and how well it works. The role of the design rules was to ensure that the yield of the circuit, defined as the proportion of manufactured circuits that are functional and meet their performance requirements, was economically viable. The relationship between yield and design rules existed because the yield loss mechanism in those manufacturing processes was dominated by topology changes (shorts and opens) caused by particulate contamination and similar phenomena.
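As a rough numerical illustration of the defect-dominated yield regime just described, the classical Poisson yield model relates die area and defect density to functional yield. This is only a sketch: the choice of model, the die area, and the defect densities below are illustrative assumptions rather than data from this book, which deliberately leaves catastrophic yield modeling to other texts.

```python
import math

def poisson_yield(die_area_cm2: float, defect_density_per_cm2: float) -> float:
    """Classical Poisson limited-yield model: Y = exp(-A * D0)."""
    return math.exp(-die_area_cm2 * defect_density_per_cm2)

# Illustrative numbers only: a 1 cm^2 die at two assumed defect densities.
for d0 in (0.2, 0.5):
    print(f"D0 = {d0} defects/cm^2 -> functional yield = {poisson_yield(1.0, d0):.1%}")
```

In this regime, design rules raise yield by limiting the layout's exposure to such defects, which is why simple geometric rules were an adequate interface between design and manufacturing for so long.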
As scaling has continued, however, our ability to reliably predict the outcome of a semiconductor manufacturing process has steadily deteriorated. This has happened for two important reasons. Firstly, CMOS technology scaling has led to increasing complexity in the semiconductor process and in its interaction with design. This has in turn caused an increase in the number and magnitude of systematic sources of mismatch between simulation models (both at the technology CAD and at the circuit simulation levels) and hardware measurements. Secondly, manufacturing variability resulting from random as well as systematic phenomena, long a source of concern only for niche analog design, is becoming important for digital design as well, and thus its prediction is now a first-order priority. Process complexity and the challenges of accurately modeling variability have conspired to increase the error in performance predictions, leading to a gap in model to hardware matching.
1.1 RISE OF LAYOUT CONTEXT DEPENDENCE

How is this happening? It is helpful to consider the evolution of the definition of a single device, the simple metal-oxide-semiconductor field-effect transistor (MOSFET), the workhorse of semiconductor technology. Until relatively recently (say in the 1990s) the definition of the layout features which determine the performance of a fabricated MOSFET was relatively simple. The intersection between the diffusion (also referred to as active region) and polysilicon mask shapes determined the dimensions of the MOSFET, and those dimensions sufficed to characterize the behavior of the transistor. We do not mean to imply that the scaling of these dimensions did not make transistor modeling more difficult, but rather to point out that the behavior of the transistor was determined by local geometry and that its interpretation from the layout masks was a straightforward task. As the industry migrated to sub-wavelength lithography, the difference between the drawn mask shapes and actual printed (wafer) regions began to increase. This resulted in the introduction of resolution enhancement techniques (RET), such as optical proximity correction and phase-shift masking, to improve the fidelity of manufactured devices. In spite of advances, bordering in many cases on the miraculous, it remains a fact that there is an ever-growing gap between device layout as viewed by a designer and final manufactured shapes as rendered in silicon. This gap exhibits itself in two ways: (a) the interaction between shapes on the mask due to interference, flare, and other lithography and illumination related issues means that the actual dimensions of the device are determined by many of the shapes in the neighborhood of the device; and (b) the precision with which the dimensions (length and width) of a device can be set is limited by the residual error that remains after RET is applied. At the 65nm node and below, assuming the lithography roadmap remains as it is currently defined, the radius of influence that defines the neighborhood
of shapes that play a part in determining the characteristics of a MOSFET is about 500nm. This is equal to nearly half the height of a standard cell and thus would contain a great many shapes. This increase in the radius of influence will impact a number of areas. Firstly, it will make the modeling of device behavior more difficult since it will be harder to define a canonical or typical device from which to perform device model characterization. We will come back to this subject later in Chapter 5. Secondly, it will make the circuit extraction phase of design verification (where the layout is converted into a simulatable netlist) more complex since a larger number of geometries will need to be processed. And, finally, it will severely hamper design composability, defined as the ability to compose a complex circuit function out of individual simple functions, e.g. building a multi-bit adder out of single-bit adders, which are in turn composed of individual NAND, NOR and similar logic gates. A similar set of factors impacts the interconnect that we use to connect devices together into circuits. Lithography plays a similar albeit less extreme role in the imaging of interconnect, while chemical-mechanical polishing (CMP, the process used to planarize the interfaces between metal levels) introduces dimensional variations in interconnect that can have a profound impact on the characteristics of interconnect. The trends outlined thus far point to an increasing need to model and comprehend the interaction between design and manufacturing at the individual device (MOSFET or wire) level. A related and important trend is the increase in manufacturing variability, which we will discuss next.
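To get a feel for the numbers quoted above, the short sketch below counts how many layout "tiles", each able to hold a distinct mask shape, fit inside a 500nm radius of influence. The pitch and track values are illustrative assumptions, not figures from the text.

```python
import math

radius_nm = 500.0       # radius of influence quoted in the text for the 65nm node
gate_pitch_nm = 200.0   # assumed contacted gate pitch (illustrative, not from the text)
track_nm = 200.0        # assumed vertical layout granularity (illustrative)

# Crude grid count: how many pitch-by-track layout "tiles", each of which can
# hold a distinct mask shape, fit inside the circular radius of influence.
tiles = math.pi * radius_nm ** 2 / (gate_pitch_nm * track_nm)
print(f"roughly {tiles:.0f} layout tiles fall within the 500 nm radius of influence")
```

Even this crude count lands in the tens of shapes, which is why a single device can no longer be modeled or extracted in isolation from its neighborhood.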
1.2 VARIABILITY AND UNCERTAINTY

The performance of an integrated circuit is determined by the electrical characteristics of the individual linear and non-linear devices which constitute the circuit. Variations in these device characteristics cause the performance of the circuit to deviate from its intended range and can cause performance degradation and erroneous behavior. We will refer to this type of variability as physical variability. Physical variability can be further divided into front-end and back-end components. Front-end variability is variability resulting from those steps of the manufacturing process responsible for creating devices. Due to the nature of the fabrication process, these steps happen relatively early in the process, hence the front end name. Chapter 2 discusses this subject in detail. Back-end variability is variability resulting from those steps of the manufacturing process responsible for creating interconnection. These steps happen relatively late in the process, and are discussed in Chapter 3. In addition, circuit performance is determined by the environment in which the circuit operates. This includes factors such as temperature, power supply voltage, and noise. Variations in operating environment can have a similar impact on circuit behavior as device variations. Chapter 4 will deal with this type of variability.
It is tempting to think of physical variability as simply the result of systematic and random manufacturing fluctuations. But a variety of time-dependent wear-out mechanisms such as metal electro-migration and negative-bias temperature instability (NBTI) cause device characteristics to change over time, albeit with a time constant on the scale of months and years. In contrast, environmental variability is very much a function of time but at the same time-scale as that of the operation of the circuit, e.g. in the nano-second range for a typical GHz design. Whether physical or environmental, we can classify components of variability in various ways. For example, we can examine their temporal behavior (as we alluded to above). We can also examine their spatial distribution across chip, reticle, wafer, and lot. Most important, however, is the notion of whether we fully understand the interaction between the specific component of variability and the characteristics of the relevant design. Consider the case where a physical or environmental component of variability is known to be a function of specific design characteristics. For example, it is well known that variation in the channel length of MOSFET devices is related to the orientation of the devices. With a suitable quantitative model relating the variation to design practice, a designer can fully take this effect into account during the design and make the appropriate engineering tradeoff so as to guarantee circuit operation under these systematic variations. Because we are able to describe these phenomena, we treat them as systematic in nature, and we will refer to them as systematic variability. Now consider the case where a physical or environmental phenomenon is not well understood, such that available information is limited to the magnitude of the variation, but without insight into its quantitative dependence on design. In such a case, the designer has no choice but to perform worst-case analysis, i.e. creating a large enough design margin to correct for the worst possible condition that may occur, or rely on the methods of statistical design. The large margins are usually wasteful in design resources and end up impacting the overall cost and performance. We will refer to these types of phenomena as uncertainty or random variability. Systematic variability can be compensated for and will typically cause only small increases in design cost, while uncertainty or random variability needs to be margined against and will typically cause large increases in design cost. A key point to remember, however, is that the difference between the two is determined by our ability to understand and model the mechanisms at play, and that an investment in modeling and analysis can sometimes turn a source of uncertainty into a source of systematic variability, thereby reducing design cost and/or improving design performance. With increasing manufacturing process complexity, more and more phenomena are competing for limited modeling resources. This trend, unchecked, endangers our industry’s ability to deliver future design improvements consistent with historical trends. Furthermore, many phenomena are increasing
in magnitude as scaling continues, requiring more modeling and analysis resources [73]. The concrete understanding of variability, uncertainty, and their interaction with and impact on designs is one of the core activities of DFM.
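As a toy numerical contrast between the two strategies discussed above, margining against uncertainty versus exploiting a statistical description, the sketch below stacks several worst-case corners and compares the result with a root-sum-square (3-sigma) combination of the same sources, assuming they are independent and Gaussian. The individual 3-sigma percentages are illustrative assumptions, not data from the text.

```python
import math

# Assumed independent variability contributions, each expressed as a
# 3-sigma percentage of the nominal path delay (illustrative values).
three_sigma_pct = [5.0, 4.0, 3.0, 3.0]

worst_case_margin = sum(three_sigma_pct)                      # stack every corner
rss_margin = math.sqrt(sum(x ** 2 for x in three_sigma_pct))  # 3-sigma of the sum, if independent and Gaussian

print(f"stacked worst-case margin: {worst_case_margin:.1f}% of nominal delay")
print(f"statistical (RSS) margin : {rss_margin:.1f}% of nominal delay")
```

The gap between the two numbers is one way to see why turning uncertainty into modeled, systematic variability, or treating it statistically, can recover design margin.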
1.3 CHARACTERIZATION VS. MODELING

The prior two sections identified two trends: (1) an increase in the diversity of device implementation and interaction, and (2) an increase in random and systematic variability resulting from the manufacturing process as well as circuit operation. But how is our understanding of the manufacturing process created? In earlier technologies, it was sufficient to create simple scribe-line test structures including a handful of MOSFETs with varying dimensions. The measured characteristics of the devices were used to extract device model parameters (e.g. for a BSIM [74] model). The resultant parameters were considered to be a complete encapsulation of the behavior of the technology. As technology scaled and device models became more complex (e.g. to handle phenomena such as short channel effects) the selection and number of devices that are included in the test structure increased somewhat, but the fundamental idea that all other devices can be understood by looking at this handful of devices remained valid. Now consider the case where the local layout environment has a significant impact on device behavior, which is the case in current technologies. Suddenly, the number of layout variations that would need to be considered in order to bound device behavior is much larger than the handful of length-width combinations previously used. We can still get a lot of information from these devices, but since they differ from the devices that will actually be used in the real circuit, it is not clear that the information will be helpful. To drive the point further, consider the following scenario. A designer creates a new circuit; it is manufactured and then measured. The information derived is a characterization of this particular circuit, since it does not necessarily provide information that would be helpful in changing the original design. An alternative would be to create a design, as well as a number of derived designs in which specific design features were changed. When this collection of designs is manufactured and measured, the result can be considered a model of the behavior of the design in response to possible design changes. This conceptual difference between characterization and modeling is another core activity of DFM. For DFM to be meaningful, it must provide alternatives for the improvement of design in response to manufacturing realities. Such improvements by necessity require models that can allow the exploration of the design space in order to find the best possible designs. We discuss the
design of test structures for DFM in Chapter 5. We deal with the statistical methods for analyzing the results of characterization and for building the rigorous models in Chapter 6.
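A minimal sketch of the distinction drawn in this section: a characterization of one fabricated layout yields a single number, while a model fit across deliberately varied designs predicts the effect of a design change. The density values and linewidth data below are synthetic assumptions, not measurements from the book.

```python
import numpy as np

# Synthetic "measurements": printed gate linewidth (nm) for test layouts in which
# the local pattern density was deliberately varied (assumed data, not from the text).
local_density = np.array([0.10, 0.25, 0.40, 0.55, 0.70])
linewidth_nm = np.array([68.0, 66.5, 65.2, 64.1, 62.8])

# Characterization: a single fabricated layout yields a single number.
print("characterized linewidth at density 0.40:", linewidth_nm[2], "nm")

# Modeling: fitting the response across deliberately varied designs yields a
# predictive relationship that can guide a design change.
slope, intercept = np.polyfit(local_density, linewidth_nm, 1)
print(f"fitted model: linewidth ~= {intercept:.1f} + ({slope:.1f}) * density  [nm]")
print(f"predicted linewidth at density 0.30: {intercept + slope * 0.30:.1f} nm")
```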
1.4 MODEL TO HARDWARE MATCHING

The issues and efforts outlined thus far culminate in the problem of model to hardware matching. We use this umbrella term to denote our ability to predict, via modeling, the behavior of hardware after fabrication. The confidence we have in such predictions is key to the economic success of the semiconductor business, since performance and yield play a large part in determining the profitability of a design. The increasing interaction between layout and device performance is one of many sources of additional potential disparity between models and reality. In fact, a large number of mechanisms, when not modeled with appropriate accuracy, can lead to deviation between the predicted and observed performance. When these phenomena are characterized (not modeled) and the design margins are increased to accommodate them, the net result is an increase in the cost of the design without an increase in performance. Considering that the increment in performance between technology generations has been getting smaller as CMOS technology nears its ultimate maturity, and that design margin is directly related to our ability to accurately predict and bound the nominal and statistical performance of our designs, it is clear that the improvement of our modeling breadth and accuracy must have the highest priority. The ultimate goal of DFM is to improve our understanding of the interaction between design and manufacturing with appropriate quantitative models that enable appropriate tuning of both the design and the fabrication process. It is only with such close interaction that predictability can be guaranteed, and it is only with predictability that economic success can be ensured.
1.5 DESIGN FOR MANUFACTURABILITY VS. STATISTICAL DESIGN

The distinction between systematic variability and random variability (or uncertainty) can also be used to clarify the difference between design for manufacturability (DFM) and statistical design (SD). It should be noted that there is no consensus in the community on the proper usage of these terms. We believe, however, that the distinction has some merit. As we indicated, often modeling and analysis resources invested in investigating a phenomenon produce a systematic understanding of its behavior. Ultimately, this systematic behavior can be expressed in terms of functional (non-stochastic) mathematical models. For example, despite the seeming randomness of data, it can be
shown (as we do in a later chapter) that the dependence of transistor channel length on the device's orientation in the layout is completely systematic and predictable, in that we can construct a model of such dependence. If the phenomenon is not modelable because it is poorly understood or because it is truly a random phenomenon, such as the placement of dopant atoms, only stochastic models of the behavior can be supplied. Such models describe the behavior of a parameter using the means of probability theory and statistics. The parameter is treated as a random variable and the modeling goal is to describe its distribution. The traditional approach to dealing with uncertainty (random variability) is to rely on the worst-casing or margining strategy. A new viewpoint is advocated by the emerging philosophy of statistical design, alternatively referred to as probabilistic design or better-than-worst-case design. The new philosophy is a major shift from the traditional worst-case design paradigm, and is justified by the increased magnitude of uncertainty and variability that make the design landscape qualitatively different. One of the key cornerstones of digital system design methodologies has been that it is best to remove the uncertainty at the lowest possible level of the design hierarchy, to hide it from the higher levels of abstraction, so that deterministic design methodologies can be used. In a sense, this design strategy requires guaranteeing 100% parametric yield at a given level of design hierarchy. For example, the design solution must guarantee that the delay of a block never exceeds a given value regardless of the amount of process, environmental, or workload variations. Paradoxically, when variability is large, this approach may be highly suboptimal. The cost of ensuring 100% yield increases rapidly, and giving up on the need to have full yield may bring substantial benefits. For example, giving up only 5% of timing yield may reduce leakage power consumption by almost 25% according to one experiment [1]. The benefits can also be in terms of higher yield at the same timing or power values. At a higher level of abstraction, a statistical design of the MPEG2 video decoder has shown that significant timing speed-up is possible [2]. An implementation of the low power pipeline architecture relying on the double-sampling and recovery strategy in 0.18µm CMOS technology demonstrated that energy per operation can be reduced by up to 64% with only a 3% decrease in performance [3]. The key question that is often raised with regard to the ideas of statistical design is how to verify and guarantee the reliability of such designs. Two classes of solutions can be used. The first relies on a testing procedure that not only qualifies the functionality of a digital system but also extracts the full distribution of product metrics, e.g., maximum clock frequency. For example, speed testing needs to be done to measure the maximum frequency of each product. The cost of extra testing may be justified by the overall improvement of power consumption or area. The second solution is to use adaptive circuit strategies, in which compensation techniques are used to guarantee correct timing and functional behavior. One example is transistor-level adaptivity, where the transistor
switching speed and leakage power consumption are controlled by adjusting its threshold voltage, which can be modified by changing the voltage between the source and the body terminals of the transistor. The method has been shown to be effective in drastically improving both chip speed and its parametric yield [4]. Another example is the temporal redundancy scheme that allows lower average energy consumption largely without sacrificing performance by implementing a double-sampling latch with micro-architectural state recovery. The introduction of the state recovery allows backing off from the requirement that block timing meet its constraints with 100% certainty under all possible workloads and parameter configurations [3]. By accurately predicting the likelihood of timing properties, the system can be set up such that the recovery mechanism is activated only rarely. Realizing the new paradigm requires novel probabilistic analysis and optimization methods, as well as changes in design practices. The new design paradigm will span multiple levels of the design hierarchy, and probabilistically describe and optimize design objectives (power and clock frequency). It may allow relaxing the fixed yield budget, relying instead on a flexible approach in which the optimal design is constructed by combining solutions across the design hierarchy. Design solutions and fixes at the various levels of the hierarchy have different cost-benefit tradeoffs, depending on the level of the underlying uncertainty and on the cost-benefit curves of the alternatives. The right strategy depends on multiple factors across the hierarchy: the magnitude and structure of process-level variations, the nominal supply voltage level, the circuit design family used, and the particular techniques being employed. We can summarize the distinction between design for manufacturability and statistical design as follows. Both DFM and statistical design are strategies for dealing with increased variability and uncertainty. DFM has the goal of extracting from the complex world of variability behaviors that are predictable and systematic, and proposes modeling, analysis, and compensation techniques for handling these systematic dependencies. DFM techniques for compensating the systematic effects in lithography are discussed in Chapter 7, while the compensating techniques for the back end flow are covered in Chapter 8. Statistical design, on the other hand, embraces variability and seeks to represent the impact of variability in terms of distributions. It also seeks ways to exploit the statistical descriptions of circuit performance metrics for more efficient, better-than-worst-case design. The methods of statistical design are covered in Chapters 9-12.
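To make the "distributions instead of a fixed worst case" viewpoint concrete, the sketch below Monte-Carlo samples a Gaussian critical-path delay and reports the parametric timing yield at a few clock targets, showing how relaxing the demand for essentially 100% yield relaxes the required timing margin. The distribution parameters and clock targets are illustrative assumptions, not data from the text.

```python
import random

random.seed(1)

# Assumed critical-path delay distribution: mean 1.00 ns, sigma 0.05 ns (illustrative).
samples = [random.gauss(1.00, 0.05) for _ in range(100_000)]

def timing_yield(t_clk: float) -> float:
    """Fraction of sampled dies whose critical-path delay meets the clock period."""
    return sum(d <= t_clk for d in samples) / len(samples)

# Progressively tighter clock targets: trading parametric yield for performance.
for t_clk in (1.15, 1.10, 1.05):
    print(f"Tclk = {t_clk:.2f} ns -> parametric timing yield = {timing_yield(t_clk):.1%}")
```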
2 FRONT END VARIABILITY
There are more things in heaven and earth, Horatio, than are dreamt of in your philosophy William Shakespeare
2.1 INTRODUCTION

One of the most notable features of nanometer scale CMOS technology is the increasing magnitude of variability of the key parameters affecting performance of integrated circuits [6]. Several taxonomies can be used to describe the different variability mechanisms, according to their causes, spatial scales, the particular IC layer they impact, and whether their behavior can be described using non-stochastic models. Here we briefly discuss these taxonomies. The entire semiconductor flow is often partitioned into its front-end and back-end components. The front-end cluster comprises manufacturing steps that are involved in creating devices: implantation, oxidation, polysilicon line definition, etc. On the other hand, the back-end cluster comprises steps involved in defining the wiring of the integrated circuit: deposition, etching, chemical-mechanical polishing, etc. Both front-end and back-end flows exhibit significant variability. In this chapter we concentrate on the front-end variability. It is difficult to say, in a general way, which group of variability contributors dominates. This question can only be answered with respect to a specific concern, such as overall parametric yield or timing variability. In terms of the resulting timing variability, front-end (device) variability appears to be dominant. For example, for a realistic design, device-caused delay variability contributed close to 90% of the total variability of the canonical path delay [86]. While the exact decomposition of delay variability is design-specific, the device-caused variability is likely to remain the dominant source of path delay
variation, because circuit design practices universally used to reduce the delay of long interconnect lines also help in reducing delay variability due to global interconnect. It is sometimes useful to distinguish between sources of variability related to the issues of manufacturing control and engineering, i.e. extrinsic causes of variation, and those that are due to the fundamental atomic-scale randomness of the devices and materials, i.e. intrinsic causes of variation. The extrinsic manufacturing causes are the more traditional ones and are due to unintentional shifts in processing conditions related to the quality of the semiconductor fab's process control. Examples of variability sources in this category include the lot-to-lot and wafer-to-wafer control of oxide thickness growth, primarily determined by the temperature, pressure, and other controllable factors. Historically, scaling has made controlling this variability more difficult: while the nominal target values of the key process parameters, such as the effective channel length of the CMOS transistors or the interconnect pitch, are being reduced, our ability to improve the manufacturing tolerances, such as mask fabrication and overlay control, is lagging behind [7]. However, the most profound reason for the future increase in parameter variability is that the technology is approaching the regime of fundamental randomness in the behavior of silicon structures. Fundamental intrinsic randomness arises from operating devices at a scale at which quantum physics is needed to explain their operation, so that device behavior must be described as a stochastic process, and from geometrically defining materials at dimensions comparable to the atomic structure of the materials. In other words, the key dimensions of the MOS transistor approach the scale of the silicon lattice distance, at which point the precise atomic configuration becomes critical to macroscopic device properties [8]. At this scale, the traditional descriptions of device physics based on modeling the semiconductor with smooth and continuous boundaries and interfaces break down [9]. The primary cases of fundamental device variability are: threshold voltage variation, line-edge roughness, thin film thickness variation, and energy level quantization [10][11][12][13]. For example, because the placement of dopant atoms introduced into the silicon crystal is random, the final number and location of atoms that end up in the channel of each transistor is a random variable. As the threshold voltage of the transistor is determined by the number and placement of dopant atoms, it will exhibit a significant variation [14][15]. This leads to variation in the transistors' circuit-level properties, such as delay and power [16]. Energy quantization will also become a real factor in circuit design. For example, electric noise due to the trapping and de-trapping of electrons in lattice defects may result in large current fluctuations, and those may be different for each device within a circuit. At this scale, a single dopant atom may change device characteristics, leading to large variations from device to device [17]. As the device gate length approaches the correlation length of the oxide-silicon interface, the intrinsic threshold voltage fluctuations induced by
local oxide thickness variation will become significant. For conventional MOSFETs this means that for technologies below 32 nm, Vth variation due to oxide thickness variation will be comparable to that introduced by random discrete dopants [14][10]. Finally, line-edge roughness, i.e., the random variation in the gate length along the width of the channel, will become quite noticeable for devices below 50nm, and will be severe at 32nm, also contributing to the overall variability of gate length [12]. The second distinction is based on the spatial scale at which variability of parameters manifests itself. This classification applies to extrinsic variability, as intrinsic variability, by definition, occurs on the scale of a single device. The total variability can be separated into (i) lot-to-lot, (ii) wafer-to-wafer within the lot, (iii) across-wafer, (iv) across-reticle, and (v) within-chip. Different processing steps impact these various spatial scales. The relative magnitudes of each scale depend on the specifics of the process. In general, there tends to be much more between-chip variation across the wafer than wafer-to-wafer variation within the lot [49]. From the circuit designer's perspective, the primary distinction is between chip-to-chip (inter-chip) and within-chip (intra-chip) variability. Historically, the within-chip variation of parameters could be safely neglected in digital circuit design (analog designers have been concerned with matching for a long time). The patterns of variability are changing, however. For 130nm CMOS technologies, the percentage of the total variation of the effective MOS channel length that can be attributed to intra-chip variation can be up to 35% [18]. A useful distinction that relates to within-chip variability is based on similar-structure variability and dissimilar-structure variability [49]. Variability between similar structures arises due to the across-wafer and across-reticle variability that every chip experiences. Variability between dissimilar structures may be due to (i) differences in processing steps, for example, different masks are used in dual threshold voltage processes for making devices with low and high Vth, and (ii) different dependencies of process conditions on variations in layout orientation and density, for example, orientational dependence in lithography or micro-loading in resist and etch. The increase in intra-chip parameter variation is caused by the emergence of a number of variation-generating mechanisms located at the interface between the design and the process. For example, one of the major contributors to the variation of the effective channel length is the optical proximity effect. As the transistor feature size gets smaller compared to the wavelength of light used to expose the circuit, the light is affected by diffraction from the features located nearby. As a result, the length of the final polysilicon line becomes dependent on the local layout surroundings of each individual transistor. Another source of large intra-chip parameter variation is the aberrations in the optical system of the stepper used to expose the mask. These aberrations lead to predictable systematic spatial variation of the MOS gate length across the chip. For interconnect, an important source of variability is the dependence of the rate of chemical-mechanical polishing (CMP) on the underlying density
of interconnect. The most significant problems that may arise during polishing are dishing and erosion, which happen when some areas of the chip are polished faster than others. In dishing, the metal (usually copper) is "dished" out of the lines. Erosion happens when some sections of the inter-level dielectric are polished faster than others. An important distinction that is often misused is between the stochastic (random, statistical) and the systematic (deterministic) types of variability mechanisms. The confusion stems from not distinguishing the actual mechanism by which variation is generated from one's ability to predict the value of a variable deterministically (and thus analyze, correct, and compensate for it). A non-statistical (deterministic) description does not make a reference to the variance of a process parameter, but only to its mean value. For example, a well-specified non-uniform temperature profile affects the entire wafer and is thus systematic to the process engineer, who can measure it and observe that the same profile affects each wafer in an identical way. Let us suppose that the process engineers cannot correct this temperature non-uniformity. To a circuit designer, this source of variability will appear statistical: the placement of each die on the wafer is unknown and cannot be utilized. There is no way by which the circuit designer can deterministically describe the values of temperature affecting each die, and thus only a statistical description is possible. (The statistical variable used can be spatially correlated, however.) In summary, the importance of the distinction is that we must treat random and systematic variations differently. While the systematic variations are modeled and accounted for in the design flow, the random variations are handled either through worst-case margining or through parametric yield optimization methods. It is interesting to inquire about the future trends that the variability components will exhibit. Is variability going to grow dramatically or remain under control? In general it is quite difficult to predict the magnitude of variability that will be characteristic of future processes, or even to make reliable generalizations across current processes. However, several trends appear quite certain. The threshold voltage variability will rise, driven by the increased contribution of random dopant fluctuations. At the limit of scaling, below the 22nm node, oxide thickness variation and line edge roughness are likely to be substantial contributors to the variability budget. Until new lithography solutions are adopted in place of the current 193nm exposure systems, gate length (Lgate) variability due to lithography is bound to remain problematic. For other variability mechanisms, the future is less predictable as ways to improve control are continuously developed. Figure 2.1 shows a large increase in the 3σ variation of effective transistor length (Leff), oxide thickness (Tox), threshold voltage (Vth), interconnect width (W) and height (H), and dielectric constant (ρ) [73]. These predictions should be interpreted cautiously, since the ability to control specific sources can change in the future, for better or for worse.
Fig. 2.1. The 3σ parameter variation of Leff, Tox, Vth, W, H, and ρ (in percent) increases as a result of scaling, across technology generations from 250nm to 70nm. (Reprinted from [73], © 2000 IEEE).
2.2 VARIABILITY OF GATE LENGTH

2.2.1 Gate Length Variability: Overview

Variability in the gate length of the MOS transistor is extraordinarily important for multiple aspects of IC performance and design. This parameter is known as the “critical dimension” in the manufacturing community because it defines the minimum feature size of a technology. Electrically, gate length and a related parameter, the effective channel length (Leff), strongly impact the current drive, and therefore the speed, of the circuit. There are several ways to define the effective channel length; here we take it to be equal to the gate length minus the under-diffusions of the source and drain regions. In the discussion that follows we will adopt the term Lgate uniformly. Another term that sometimes appears is the critical dimension (CD), which lithographers use to refer to Lgate.

Transistor leakage current is an exponential function of Lgate. Because of this exponential dependence, variation of Lgate is greatly amplified in its impact on leakage. The growth of power consumption has led to a situation in which many chips are power-limited. As a result, Lgate variability leads to a large parametric yield loss. Because this loss occurs primarily in the fast frequency bins, which are the most profit-generating bins, Lgate variability is economically very costly. It has been estimated by one major semiconductor company that a reduction of 1nm in the standard deviation (σ) of Lgate would result in an additional earning of $7.5 per chip for a high-end product [19]. For future technologies, this cost of variability in Lgate is likely to be much higher. The ITRS Roadmap requires total Lgate variation (3σ) to remain under 10%; however, for technologies beyond the 45nm node, a manufacturable solution is still unknown [19].

A large number of processing steps and modules have an impact on effective channel length. These include the mask, the exposure system, etching, the spacer definition, and the implantation of the source and drain regions. Factors that
contribute to the variability of the polysilicon gate linewidth are the dominant contributors to Leff variability [20]. There are also multiple causes in the manufacturing sequence that contribute to overall Lgate variation. Table 2.2 provides a fairly exhaustive list of such causes; most of them are primarily of interest to process engineers. While the complete list of causes in Table 2.2 is quite extensive, error decomposition indicates that the primary ones include reticle mask errors, variations in scanner/stepper illumination, lens aberrations, post-exposure bake (PEB) temperature non-uniformity, and plasma etch rate non-uniformity [22]. From the designer’s point of view most of these variability patterns are random. However, at the process level, continuous improvement of statistical metrology and the use of techniques for uncovering complex statistical dependencies have shown that much of the variability in the lithographic part of the sequence is systematic. Other variations acting across the wafer, for example due to non-uniformity of temperature or of film thickness, may also be highly systematic at the process level [22].

Similar to other components of variation, linewidth variation can be decomposed into chip-to-chip and within-chip components. The within-chip component is often termed across-chip linewidth variation (ACLV). The chip-to-chip component can be further decomposed into contributions from the lot-to-lot, wafer-to-wafer, and within-wafer components. Slow-changing, long-term fluctuations of the process may lead to lot-to-lot variations. Variations in etch or resist bake may introduce wafer-to-wafer variations. Within-wafer effects may be due to the radial variations in the photoresist coating thickness or etching. ACLV is primarily determined by systematic effects due to photolithography and etching. Again, a multitude of factors may contribute, including: stepper-induced variations (illumination, imaging non-uniformity due to lens aberrations), reticle imperfections, resist-induced variations (coat non-uniformity, resist thickness variation), and others.

Lgate variability within a reticle field exhibits a strong systematic spatial dependence which is primarily due to lens aberrations [18]. The scaling of lithographic features makes the lens aberrations even more severe by forcing the operation of the illumination system at the optical resolution limit. The variability patterns due to aberrations are highly predictable at the level of the reticle field, and can be accurately described by distinct 2D surfaces. Finally, there also exists an interaction between the global lens aberration and the local layout pattern-dependent non-uniformities due to proximity, which contributes to the overall variability budget. We now discuss two major contributors: photolithography and etch.

2.2.2 Contributions of Photolithography

The delayed introduction of new lithographic processes based on a 157 nm wavelength of light has forced the last several technology generations to use the
Process Step    Source of Variability
Wafer           Flatness, reflectivity, topography
Reticle         CD error, defects, edge roughness, proximity effects
Stepper         Aberrations, lens heating, focus, leveling, dose
Etch            Power, pressure, flow rate
Resist          Refractive index, thickness, uniformity, contrast
PEB             Temperature, uniformity, time, delay
Environment     Amines, humidity, pressure
Develop         Time, temperature, dispense, rinse
Table 2.2. Summary of contributions to Lgate variability from different processing modules [21].
older technology based on 193nm light. To continue scaling the features, imaging systems had to rely on lower values of k1, a parameter that serves as a metric of lithography aggressiveness. The k1 coefficient is defined by k1 = (NA/λ) · CD, where CD is the critical dimension of the feature being printed, λ is the wavelength of light, and NA is the numerical aperture of the lens. Over the years, the value of k1 has decreased from about 1 to nearly 0.5. With low-k1 imaging, image distortion during photolithography is a major contributor to across-chip linewidth variation. It also leads to other shape distortions. The effect of low k1 is that the optical system has a low-pass filter characteristic, filtering out the high-frequency components of the reticle features. This filtering results in several major types of distortion: linewidth variation (proximity effect), corner rounding, and line-end shortening [22]. These are all systematic behaviors highly dependent on design layout characteristics. The essentials of photolithography relevant to printability are further discussed in Chapter 5.

Proximity effect refers to the dependence of the printed CD on its surroundings. In the simplest 1-D case, the main dependence is on the distance to the nearest neighbor, or equivalently, the pitch. Depending on the proximity of the neighbors, polysilicon features can be classified as isolated or dense (nested). The dependence of linewidth on pitch depends on the type of photoresist used. A typical dependence of printed linewidth on pitch is shown in Figure 2.3.

Line shortening refers to the reduction in the length of a rectangular feature. This effect is due to factors that include diffraction, the rounding of the mask patterns themselves, and photoresist diffusion. At low-k1 imaging, diffraction is the major cause, and line shortening grows rapidly as the CD shrinks. Corner rounding is another type of image distortion, which occurs because the high-frequency components of the corner are filtered out, producing a smoothed-out pattern. This has a large impact on the gate width of the transistor if the polysilicon gate is laid out very near the L-shaped active region of the transistor. Because of the rounding of the corners of the L-shaped region, the effective gate width depends on the relative position of the gate and active
Fig. 2.3. A typical dependence of linewidth on the proximity to the neighboring polysilicon lines. (Reprinted from [136], © 2001 SPIE).
regions. Line shortening and corner rounding are illustrated in Figures 2.4 and 2.5, respectively.

Lens aberrations may lead to significant systematic spatial non-uniformity of Lgate over the reticle field. The spatial variation across the reticle can be as high as 12% for a technology with Lgate = 130nm. Depending on the placement of a circuit, such as a ring oscillator, within the die, its speed could vary by almost 15% [18]. The spatial Lgate maps that characterize the variations also depend on the local neighborhoods of the polysilicon features: dense and isolated features will exhibit different spatial profiles, indicating a statistical interaction between the global lens aberrations and the pattern-dependent optical proximity effect. Lens imperfections also lead to a predictable Lgate bias between gates that are oriented vertically or horizontally in the layout. Finally, the coma effect leads to an anisotropy of multi-fingered layouts: the relative position of the surrounding gates, i.e., the neighbor being on the left vs. the right, exerts a predictable impact on the final Lgate (Figure 2.6). This anisotropy also leads to across-reticle spatial maps of Lgate that are distinctly different (Figure 2.7). These differences are systematic, i.e., predictable, which is supported by rigorous analysis of variance.

Another factor that has to be taken into account is the increased mask error factor (MEF), also known as the mask error enhancement factor (MEEF). In projection photolithography, features on photomasks are scaled exactly onto the wafer by the demagnification of the projection optics (1/M). At large k1, the mask errors arising due to the inability to ideally place the features
Fig. 2.4. Line shortening. (Reprinted from [136], © 2001 SPIE).
Fig. 2.5. Corner rounding. (Reprinted from [136], © 2001 SPIE).
Fig. 2.6. Data shows that linewidth depends on the relative positions of the neighbors and exhibits asymmetry. Layout pattern (a) is predictably different from pattern (b).
on the mask are scaled by the same demagnification factor. For example, if M = 5, then a 20nm error in the mask feature placement will result in only a 4nm printed CD error. However, at low-k1 imaging, for 0.5 < k1 < 0.8, the beneficial effect of demagnification on the mask error is reduced. Effectively, the mask error gets magnified, and the degree of such error magnification is described by the mask error factor:

∆CD_resist = MEF · ∆CD_mask / M    (2.1)
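To make these scalings concrete, the short Python sketch below evaluates the k1 definition given earlier and Eq. (2.1). The M = 5, 20nm mask-error case is the example quoted above; the non-ideal MEF value of 2.5 is purely an assumed illustration of low-k1 behavior, not a number from the text.

# Illustrative only: how demagnification and MEF combine to set the printed CD
# error, following Eq. (2.1), plus the k1 definition from this section.

def k1_factor(cd_nm, wavelength_nm, na):
    """Lithography aggressiveness metric: k1 = (NA / lambda) * CD."""
    return na * cd_nm / wavelength_nm

def printed_cd_error(mask_error_nm, m, mef):
    """Eq. (2.1): Delta_CD_resist = MEF * Delta_CD_mask / M."""
    return mef * mask_error_nm / m

# A 130nm feature printed with 193nm light and NA = 0.75 sits near k1 = 0.5.
print("k1 =", round(k1_factor(130, 193, 0.75), 2))

# The 20nm mask placement error with M = 5 from the text: 4nm if MEF = 1,
# noticeably larger for an assumed MEF of 2.5 at low k1.
for mef in (1.0, 2.5):
    print(f"MEF = {mef}: printed CD error = {printed_cd_error(20, 5, mef):.1f} nm")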
While this particular contributor to Lgate variability has always been present, it has recently taken on increased importance. The primary cause of MEF is the degradation of image integrity, e.g., the loss of image shape control (due to such factors as lens aberrations, defocus, exposure, and partial coherence) and photoresist processing at low k1 [144]. Measurements show that for a given process, MEF increases rapidly for small feature printing (Figure 2.8). MEF is a strong function of defocus and exposure errors. Defocus is the vertical displacement of the image plane during illumination. Exposure errors are due to differences in the energy delivered by the illumination system, and to other process errors that behave similarly to exposure errors. The dependence of MEF can be clearly seen in Figure 2.8. MEF also depends on local layout density: it is higher for nested lines and spaces than for sparse lines [23]. The result of the increased value of MEF is that mask placement errors contribute a growing amount to the overall Lgate variability. However, the dependence of MEF on design attributes can be used to increase the process window and reduce the impact of mask errors on Lgate variability.

2.2.3 Impact of Etch

The impact of etching non-uniformity on the overall linewidth budget can be comparable to the contribution of photolithography [20]. Etching non-uniformity manifests itself as variability of the etching bias, which is the difference between the photoresist and etched polysilicon critical dimensions. From the
Fig. 2.7. The systematic spatial Lgate variation across the reticle field. (a) The spatial profile for a gate with the nearest neighbor on the left and a moderately spaced neighbor on the right. (b) The spatial profile for a gate with the nearest neighbor on the right and a moderately spaced neighbor on the left.
Fig. 2.8. Mask error factor grows rapidly at smaller linewidths. (Reprinted from [22], © 2003 SPIE).
designer’s perspective, the variation of etching bias as a function of layout pattern density is the most important component. This dependence can be classified into three classes: micro-loading, macro-loading, and aspect-ratio-dependent etching. In aspect-ratio-dependent etching, the variation of linewidth is dependent on the distance to nearby features [26]. The biases due to photolithography and etching processes are additive. Micro-loading and macro-loading are driven by a common physical mechanism: the variation in the layout features can increase or decrease the local density of etch reactants. In micro-loading, the etching bias for the same drawn features will depend on the local environment, with the range of influence of different patterns being 1-10 mm. Significant micro-loading can occur in places where there are abrupt changes in density, e.g., near scribe-lines, test and in-line diagnostic chips, and near the wafer edge [25]. In macro-loading, the etching bias is determined by the average loading across the wafer [25]. Macro-loading is a problem for technologists and process engineers, particularly for fabs that manufacture different types of ICs, e.g., logic, DRAM, and gate arrays.

2.2.4 Line Edge Roughness

Despite the limitations of the patterning process discussed so far, existing photolithographic processes are capable of producing a consistent polysilicon line edge. As devices are scaled below 50nm, the random variation in the gate length along the width of the gate will become quite noticeable, making gate length variation control even more difficult, and its impact will become severe
below 32nm [12]. Line edge roughness (LER) is the local variation of the edge of the polysilicon gate along its width. The reasons for the increased LER in future processes include the random variation in the incoming photon count during exposure and the contrast of the aerial image, as well as the absorption rate, chemical reactivity, and molecular composition of the resist [27][49]. Figure 2.9 shows the randomness of the line edge through several steps of the via hole fabrication process.
Fig. 2.9. Simulation of the exposure and development of a via hole with extreme ultraviolet lithography. (Reprinted from [51], © 2003 SPIE).
Line edge roughness has an impact on all the main electrical device characteristics: the drive current, the off-current, and the threshold voltage. The easiest way to characterize line edge roughness is to compute its variance. For example, in a 193nm process, the total variation due to LER is 3σLER = 6.6 − 9nm, measured on a polysilicon line with Lgate = 110nm. However, the knowledge of the variance of LER is insufficient to properly predict at least some parameters, for example, the leakage current. The current value also depends on the spatial frequency profile of the local roughness. LER measurements show that the edge profile exhibits both smooth, slowly changing (low-frequency) and high-frequency types of variation [12]. For this reason, measurements show that there is a strong dependence of the edge variance on the polysilicon gate width. Once the gate width is greater than ∼0.3 um, the variance does not increase any more (Figure 2.10). Thus, capturing only the variance ignores the spatial frequency profile of LER and fails to predict the variance dependence on the length of the measured line. A complete description would include the characterization of the spatial frequency of the LER [28].
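The saturation behavior just described (and shown in Figure 2.10) can be reproduced with a very simple statistical model. The sketch below is illustrative rather than taken from the text: it treats the edge as a Gaussian process with an exponential autocorrelation, using σLER ≈ 2.2nm (3σ ≈ 6.6nm) and a correlation length of 33nm, a value discussed later in this section. The edge variance measured over a gate of width W grows with W and saturates once W spans many correlation lengths.

# A minimal sketch (assumed model, representative parameter values): a rough
# line edge is generated as an AR(1) process whose autocorrelation is
# exp(-dx/xi), and the edge variance measured within gate-width-long segments
# is shown to saturate once the gate width far exceeds the correlation length.
import numpy as np

rng = np.random.default_rng(0)

sigma_ler = 2.2      # nm, edge sigma (3*sigma ~ 6.6 nm)
xi = 33.0            # nm, correlation length
dx = 1.0             # nm, sampling step along the gate width
n = 200_000          # samples along a long test edge

rho = np.exp(-dx / xi)
noise = rng.normal(0.0, sigma_ler * np.sqrt(1 - rho**2), n)
edge = np.empty(n)
edge[0] = rng.normal(0.0, sigma_ler)
for i in range(1, n):
    edge[i] = rho * edge[i - 1] + noise[i]

for width_nm in (50, 100, 300, 1000):
    w = int(width_nm / dx)
    segments = edge[: (n // w) * w].reshape(-1, w)
    # Edge variance within each gate-width-long segment, averaged over segments.
    var_w = segments.var(axis=1).mean()
    print(f"W = {width_nm:5d} nm: measured edge variance = {var_w:5.2f} nm^2 "
          f"(full variance = {sigma_ler**2:.2f} nm^2)")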
A more physically informative model relies on extracting only two additional parameters, the correlation length (ξ) and the roughness exponent (α). The correlation length is a measure of the distance beyond which segments of the polysilicon edge can be considered uncorrelated. The roughness exponent is a measure of the relative contribution of the high-frequency components to LER. Higher values of α correspond to smoother lines with less high-frequency variation. When α ∼ 1, the profile exhibits a periodic behavior. Experiments show that for the 193nm process, the correlation length is about 33nm and is relatively insensitive to aerial image quality. The roughness exponent increases slightly with decreasing aerial image contrast, suggesting that at high-contrast imaging the contribution of high frequencies is higher. The variance value saturates beyond about 10ξ, i.e., at about 0.3um. To assess the device-level impact of line edge roughness, we need to translate it into device width roughness, which results from the local variation of both polysilicon edges. Experimental measurements show that the roughness of the two edges can be considered uncorrelated. Then, for gate width W > 0.3 um, the Lgate variance caused by line edge roughness can be approximated as 2σ²LER. Most of the experimental evidence suggests that 3σLER is in the range of 5-6nm. These numbers are quite consistent among many companies and across several technology generations. The reported values of the correlation length, however, range much more widely, from 10 to 50 nm [48].

2D device simulations indicate that below 32nm, LER will have a significant impact on Vth uncertainty and will lead to a large increase in leakage current. Assuming 3σLER = 6nm and a correlation length of 20Å, simulations show that in a device with Lgate = 30nm, the variation in threshold voltage at low Vds is σVth = 8 mV. In a device with Lgate = 50nm, the variation in threshold voltage is less severe: σVth = 2.5 − 5mV [29][48]. It is instructive to compare the impact of LER on threshold voltage variability with that of random dopant fluctuation, discussed later. At Lgate = 30nm, random dopant fluctuation will lead to approximately σVth = 38mV, making the impact of LER on threshold voltage uncertainty comparatively small. For non-minimum width devices, the variation in threshold voltage is smaller: σVth scales as 1/√Weff. Figure 2.11 investigates the dependence of σVth and of the leakage increase on Leff [29]. It is clear that below 45nm, line edge roughness leads to a significantly increased mean leakage current.

2.2.5 Models of Lgate Spatial Correlation

For the purpose of modeling the intra-chip variation of Lgate, a model based on spatial correlation is used. Indeed, it is reasonable to believe that two nearby transistors will be affected by any source of variation in a similar way, leading to correlation. Moreover, this correlation should decrease with increasing distance between the two transistors. This is the foundation behind the standard Pelgrom model [30]. The form of the correlation function and the value of the correlation length are determined empirically. One possible
Fig. 2.10. Variance of line edge roughness depends on gate width. The variance increase saturates beyond about 0.3um. (Reprinted from [12], © 2002 IEEE).
correlation function is of the form [31]:

Var(∆CD_d) = 2 Var(CD) [1 − exp(−d/d_l)]    (2.2)
where Var(CD) is the total CD variance of a single device, and d_l is a characteristic distance for a particular technology.

Discussions of spatial correlation are often confounded with the issue of systematic spatial variation. The term “systematic” variation has a fair amount of ambiguity. From the point of view of statistics, “systematic” variation refers to phenomena characterized by a difference in the mean values of certain measures. Systematic variation is synonymous with “deterministic” and can be described by functional forms. This is in contrast to random, or stochastic, variability. From an engineering point of view, naming a certain variability pattern “systematic” seems to be justified only if corrective actions can be taken. What may be “systematic” variability from the point of view of process engineers may not be so from the point of view of circuit designers. For example, across-wafer and across-field CD variations exhibit spatial trends that appear systematic to the process engineer. Thus, process control can be used to characterize, compensate for, and thus eliminate these systematic dependencies. If the data is analyzed after the removal of the above components of systematic variation, one finds that the magnitude of the spatial correlation that was apparently present in the data is significantly reduced [22]. What if such ideal process control is not implemented? A circuit designer has no way of modeling this variability, except in a statistical sense. “Systematic” is equivalent to functionally modelable. But to a circuit designer facing a population of chips with different CDs, the above variability appears stochastic.
Fig. 2.11. LER will have a growing impact on Vth uncertainty and electrical characteristics. (Reprinted from [29], © 2002 IEEE).
While it is stochastic, it is, at the same time, correlated. The description utilizing spatial correlation is useful even though the spatial correlation is in reality due to a systematic non-stationary structure of the data. It can be noted that the famous Pelgrom model is also based on similar reasoning. In this model, the long-range radial wafer-level variation is clearly systematic. But because of the unknown placement of a die on the wafer, it manifests itself to designers as an additive stochastic component with a long correlation distance [30].
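To see how a model like Eq. (2.2) might be used by a designer, the sketch below draws spatially correlated intra-chip CD deviations at a few transistor locations. It relies on the observation that Eq. (2.2) is equivalent to an exponential covariance Cov(CD_i, CD_j) = Var(CD)·exp(−d_ij/d_l); the values of Var(CD), d_l, and the coordinates are assumptions chosen for illustration, not data from the text.

# A sketch (assumed parameter values) of sampling spatially correlated
# intra-chip CD deviations consistent with Eq. (2.2).
import numpy as np

rng = np.random.default_rng(1)

var_cd = 25.0          # nm^2, total CD variance of a single device (assumed)
d_l = 500.0            # um, characteristic correlation distance (assumed)

# Transistor locations on a die, in um (assumed layout coordinates).
xy = np.array([[0, 0], [50, 0], [500, 0], [2000, 0], [0, 1500]], dtype=float)

dist = np.linalg.norm(xy[:, None, :] - xy[None, :, :], axis=-1)
cov = var_cd * np.exp(-dist / d_l)

# Draw correlated zero-mean CD deviations via a Cholesky factor of the covariance.
L = np.linalg.cholesky(cov + 1e-9 * np.eye(len(xy)))
samples = rng.standard_normal((10_000, len(xy))) @ L.T

# Check: the variance of a pairwise difference should follow Eq. (2.2).
d01 = dist[0, 1]
empirical = np.var(samples[:, 0] - samples[:, 1])
model = 2 * var_cd * (1 - np.exp(-d01 / d_l))
print(f"d = {d01:.0f} um: Var(dCD) model = {model:.2f}, sampled = {empirical:.2f}")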
2.3 GATE WIDTH VARIABILITY

For non-minimum size transistors typically used in logic gates, variability in transistor width has a negligible impact on performance parameters. However, for minimum width transistors, width variability is substantial. Mask align-
ment is a traditional source of width variability. Today, however, width variability is primarily due to two mechanisms. The first is grounded in photolithography. The gate is defined by the overlap between the polysilicon and diffusion layers. Many standard cells are laid out in such a way that the polysilicon gate makes a corner in close vicinity of the diffusion layer (Figure 2.12). As we learned in the previous section, sub-wavelength lithography introduces image distortions and behaves as a low-pass filter when printing features with sharp corners. In this case, corner rounding leads to a reduction of the effective width of the transistor. A very similar situation takes place due to diffusion layer rounding.

Another source of gate width variability is the planarization steps involved in producing device isolation based on shallow trench isolation (STI) technology. STI is the dominant isolation technique for deep submicron technologies, favored for its excellent latch-up immunity, low junction capacitance, and sharp vertical edges [32]. STI is performed with a damascene process similar to the one used in copper metallization. First, a protective layer of oxide and a thicker layer of nitride are deposited on the silicon surface. An isolation mask is used to define the trenches. The nitride is patterned and anisotropically etched into the silicon substrate, producing a trench with sharp vertical walls; a reactive ion etch (RIE) is used to etch the silicon trenches. The trench is then filled with oxide, producing isolation between the neighboring devices. The excess oxide then has to be removed so that the oxide forming the STI and the silicon of the active areas are co-planar [33]. The planarization is performed using chemical-mechanical polishing (CMP), which removes the material using a combination of mechanical pressure and chemical action. As in the case of metal planarization, the rate of removal depends on the material and on the underlying pattern density, i.e., the layout. The wide trenches experience dishing and thus end up lower than the active area silicon. The planarity can be improved by using dummy fill features as well as by imposing new design rules on active area density [33]. However, because of the limitations of these control schemes, there is residual non-uniformity in the alignment of silicon and oxide areas. Typically, extra silicon is removed near the STI-device interface. This effectively reduces the width of the transistor due to a non-vertical boundary. While for large widths this effect can be ignored, it is non-negligible for small devices.
2.4 THRESHOLD VOLTAGE VARIABILITY

The threshold voltage of a MOS transistor is determined by several device characteristics including the material implementing the gate (typically, highly doped polysilicon), the thickness of the dielectric film (typically, silicon dioxide), and the concentration and density profile of the dopant atoms in the channel of the transistor. As a result, the variations in oxide thickness, implantation energy and dose, and the substrate doping profiles will lead to the
Fig. 2.12. The two contributors to gate width variability: (a) corner rounding on the poly and active layers, and (b) the impact of CMP used in shallow trench isolation. Courtesy of N. Hakim [31].
variation in threshold voltage (Vth). Historically, all these effects jointly resulted in a 3σ Vth variation of less than 10% of the nominal value [73]. Also, because all the above variation sources exhibited variability primarily on the chip-to-chip scale, intra-chip Vth variation was inconsequential, at least for digital designs. (Analog designers have always been concerned with the problem of matching the threshold voltages of transistors in amplifiers, comparators, and other circuits that require good matching.) With the continuing scaling of MOS dimensions, a radically different problem of Vth variation due to random dopant fluctuation (RDF) has emerged. Figure 2.13 shows an example of the distribution of threshold voltage from a 65nm CMOS process. Placement of dopant atoms into the channel is achieved via ion implantation. Implantation and the subsequent activation through anneal are such that the number and placement of atoms implanted in the channel of each transistor are effectively random. Because the threshold voltage of the transistor is determined by the number and location of dopant atoms, it also exhibits a significant variation. Figure 2.14 shows the randomized placement of dopant atoms in the channel of a 50nm MOSFET. The phenomenon of random dopant fluctuation truly belongs to the class of fundamental atomic-scale randomness, with the precise atomic configuration being critical to the macroscopic properties of devices [34]. The description that models semiconductors with
Fig. 2.13. Distribution of the n-channel FET threshold voltage from a 65nm CMOS process. (Reprinted from [49], © 2006 IBM).
smooth, continuous boundaries and interfaces breaks down [9] and has to be supplemented. Because of this discreteness and the stochastic nature of the implantation process, the location of the dopant atoms will vary from transistor to transistor. At the same time, because the number of dopant atoms is getting smaller, the relative variation of the number of dopants around its mean value becomes greater. A theoretical model that predicts the amount of threshold voltage uncertainty can be constructed via a 3D analysis of the distribution of impurities in the silicon substrate [35]. The considered region is a parallelepiped whose depth equals the average depth of the depletion layer, X. The model divides the entire region into a number of cubes with edge length X. Given the average number of impurities, M, in a cube of volume X³, the actual number of impurities, m, follows the Poisson distribution:

P(m) = M^m e^(−M) / m!    (2.3)
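A quick numerical check of Eq. (2.3), with assumed mean dopant counts: sampling Poisson-distributed dopant numbers confirms that the normalized spread σ/µ grows as 1/√M as the mean count M shrinks.

# Illustrative check of Eq. (2.3): for Poisson-distributed dopant counts the
# standard deviation is sqrt(M), so the normalized uncertainty sigma/mu grows
# as 1/sqrt(M). The mean counts below are assumed, round numbers.
import numpy as np

rng = np.random.default_rng(0)

for M in (1000, 100, 10):                  # mean dopant count per device
    counts = rng.poisson(M, size=100_000)  # dopant count in each simulated device
    ratio = counts.std() / counts.mean()
    print(f"M = {M:5d}: sigma/mu = {ratio:.3f}  (1/sqrt(M) = {1/np.sqrt(M):.3f})")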
Given the properties of the Poisson distribution, if the mean number of dopants is M, the standard deviation of the number of dopants is M^1/2. It has been empirically found that the mean number of dopant atoms in standard bulk CMOS devices has been decreasing roughly in proportion to Leff^1.5. Since the mean number of dopant atoms that are placed in the channel at the end of the implantation and activation processes rapidly decreases, the normalized
Fig. 2.14. The random placement of dopants also impacts the definition of the source and drain regions, leading to the variation of source and drain capacitance and resistance. (Reprinted from [49], © 2006 IBM).
uncertainty (σ/µ) in the number of atoms grows as 1/√M. Figure 2.15 shows the variance of the number of dopant atoms for different values of the effective channel length. We now need to model the impact of dopant number uncertainty on the threshold voltage itself. Analytical models exist, and are typically based on percolation models for establishing a path from source to drain [17][35][36]. Such analytical models are indispensable in providing an intuition for the general dependence of the uncertainty on device parameters. The precise impact of this uncertainty in the number and placement of dopant atoms on the threshold voltage depends heavily on the specifics of the doping profile used in a MOSFET; for this reason, numerical simulations often must be used. For each lattice site, these programs compute the probability of its being occupied by a dopant, which can be found from the continuum doping concentration. This can be done for the entire substrate and for any
Fig. 2.15. The number of dopant atoms in the channel is getting smaller, increasing the relative uncertainty in the actual number. (Reprinted from [36], © 2001 IEEE).
doping profile. Then, at each site a dopant atom is randomly placed in accordance with the computed probability [36], and the device electrical properties are analyzed. These numerical simulations allow a look at the magnitude of Vth uncertainty for MOSFETs at the limits of scaling. For a device with a 25nm gate length it is predicted that σVth = 7–10/√W mV·µm^1/2. Even if a retrograde doping profile is selected, which is optimal from the point of view of Vth uncertainty, the magnitude of uncertainty would remain at σVth = 5/√W mV·µm^1/2 [36]. More accurate models that take into account quantum confinement indicate that the uncertainties can be about 24% higher than stated above [34]. Based on these numerical simulations, an empirical model has been developed [14]. It is convenient to designers because it compactly captures the dependence of the Vth sigma on several device parameters:

σVth = 3.19 × 10^−8 · Tox · NA^0.4 / √(Leff · Weff)    (2.4)
where Tox is the oxide thickness, NA is the doping density, and Leff and Weff are the effective channel length and width. From the design perspective, the important factor in this model is the inverse dependence of the standard deviation of Vth on the square root of the transistor width (and thus of the transistor area). Because of this dependence, the uncertainty (measured as the standard deviation of Vth) of large-width devices will be much smaller than that of minimum-width devices. Figure 2.16 presents measurements of σVth for different values of gate area. It can be seen that the data is consistent with the behavior predicted by the model. All in all, the wide devices used in high-performance logic may have a few extra millivolts of variation, an insignificant amount. The problem is
Fig. 2.16. Measurements of σVth for different gate geometries.
absolutely severe for SRAM designs, which rely on minimum-width transistors and may have σVth = 40 mV. The magnitude of Vth uncertainty due to RDF makes it one of the most difficult problems facing CMOS scaling, and especially SRAM scaling. A more detailed analysis of the impact of RDF on SRAM is presented later in the book. Figure 2.17 shows the projected magnitudes of 3σVth together with the nominal saturated threshold voltage for several values of Lgate. The numbers are based on the projections contained in the ITRS update of 2006, and are premised on the transition from the conventional bulk device to an ultra-thin-body fully depleted device at the 32nm technology node. This is the reason for the non-monotonic trends in Vth and 3σVth observed in the figure.
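The area dependence in Eq. (2.4) is easy to make concrete. The sketch below evaluates the empirical model for a wide logic transistor and for a near-minimum-width device; the device parameters and the unit convention (lengths in nm, NA in cm^-3, result read in volts) are assumptions chosen to be roughly representative, not values taken from the text.

# A sketch of Eq. (2.4) with assumed device parameters. Unit convention is an
# assumption: Tox, Leff, Weff in nanometers (only the ratio of lengths matters),
# NA in cm^-3, result interpreted in volts.
import math

def sigma_vth_rdf(tox_nm, na_cm3, leff_nm, weff_nm):
    """Empirical RDF-induced Vth sigma, Eq. (2.4)."""
    return 3.19e-8 * tox_nm * na_cm3**0.4 / math.sqrt(leff_nm * weff_nm)

# Assumed, roughly representative 65nm-class numbers (not from the text).
tox, na, leff = 1.8, 4.0e18, 45.0

for label, weff in (("wide logic device", 600.0), ("minimum-width SRAM device", 90.0)):
    s = sigma_vth_rdf(tox, na, leff, weff) * 1e3  # convert V to mV
    print(f"{label:26s}: sigma_Vth ~ {s:4.1f} mV")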
2.5 THIN FILM THICKNESS

The thickness of the dielectric film that isolates the gate from the silicon channel greatly influences the transistor’s electrical properties, including current drive, threshold voltage, and leakage current. Silicon dioxide (oxide) has been traditionally used as the gate isolation material. The scaling in oxide
Fig. 2.17. While the nominal Vth gets smaller, the uncertainty in Vth increases.
thickness has continued at the typical rate of a 30% reduction per technology generation and is currently approaching 10-12Å. The continued scaling of the oxide is, however, threatened, as the current values of oxide thickness are approaching the physical limit of film scaling. The primary reason is the quantum-mechanical electron tunneling through the isolating dielectric material. Around the 65nm technology node, the gate tunneling current will become comparable to or greater than the channel leakage current. In one example [37], a 100nm process with Tox = 16Å has a channel leakage of 0.3nA/µm of gate width, while the gate current is 0.65nA/µm. The problem is especially severe for NMOS devices. PMOS devices also exhibit gate tunneling current, but for the same physical Tox it is typically an order of magnitude smaller than that of NMOS devices. The reason is that holes have a higher effective mass than electrons and their tunneling probability is thus much smaller. This ratio is dependent on the material, however. For some alternative dielectrics, for example, nitrided oxides, the hole tunneling can become equal to electron tunneling [38].

The gate tunneling current through the currently used oxide (with a thickness of 8–12Å) is so large that no further reduction is possible. New dielectric materials with a higher value of the dielectric constant are sought to replace oxide. Several alternative materials with a range of dielectric constants are being explored. Some materials, such as hafnium oxide (HfO2), promise a great increase of the dielectric constant to the range of 25-50, compared to 3.9 for SiO2. If successful, these materials will alleviate the problem of gate leakage. However, the quality of the insulator-silicon interface remains a problem, and major integration difficulties have been encountered for many such materials. The most realistic short-term hope comes from a nitrided gate oxide (oxynitride) with a dielectric constant in the range of 4.1-4.2. While providing less spectacular benefits, this material still leads to a 10× reduction in gate leakage current [37].

Silicon dioxide films are created with a thermal oxidation process which historically has been extremely tightly controlled. The 3σ variation of oxide
thickness has been around 4% [19]. Currently, the thickness of the oxide layer has reached the scale of the atomic-level roughness of the oxide-silicon interface [14][10]. The Si-SiO2 interface has a standard deviation on the order of 2Å [39]. An oxide film thickness of 10Å corresponds to approximately five atomic layers of SiO2, while the thickness variation is 1-2 atomic layers. Thus, the control of the interface layer and of the oxide layer itself has become increasingly difficult, and is now governed by the fundamental limitations of interface roughness and atomic-scale discreteness. That leads to growing variations in electrostatic device characteristics such as mobility and threshold voltage [14]. Most significant is the impact on gate tunneling current, which shows an extraordinarily high sensitivity to the dielectric thickness [37]. For a device with Tox = 15Å and σTox = 1.8Å, the current can be 5× larger than at the nominal conditions [40].

The variance of the Tox variation is not a sufficient metric for analyzing the impact of oxide thickness variation on the electrical device properties. This is due to the need to consider the frequency distribution of the variation profile and to take into account the correlation distance. A silicon-oxide interface is typically represented by a 2D Gaussian, or exponential, autocorrelation function with a given correlation length and magnitude of variance [14]. Data shows that, depending on the atomic orientation of the silicon substrate lattice, the interface roughness steps are on the scale of 1-3Å. Because of the difficulty of accurately studying atomic-level interfaces, there is a fairly large range of correlation distance values that have been experimentally reported. TEM measurements typically indicate a correlation length of 1-3nm, while AFM measurements are in the 10-30nm range [14]. The currently accepted view is that the correlation length (as determined by fitting roughness data to surface mobility data) is closer to the low end of the reported values, and the reasonable values to use are 7-15Å [47].

The impact of interface roughness and oxide layer non-uniformity on electrostatic device properties can be analyzed via careful 3D simulation [48]. For a device with an average Tox = 10.5Å, interface roughness steps of 3Å, and a correlation length of Λ = 15Å, it is found that the interface roughness leads to a Vth uncertainty of about σVth = 4mV. Given the large range of reported values of the correlation length, it is useful to study its impact on the threshold voltage uncertainty. Projected magnitudes of σVth based on both classical and quantum-mechanical simulations for several values of Leff are presented in Figure 2.18 for a range of correlation length values. The assumed interface roughness value is 3Å, which is characteristic of the empirically measured devices. For correlation length values at the higher end of the reported range (e.g., Λ = 25nm), the uncertainty in the threshold voltage is much larger: σVth = 35mV when quantum-mechanical effects are taken into account. The figure indicates that when the correlation length is much smaller than the characteristic MOSFET dimensions, the standard deviation of Vth depends linearly on the correlation length. For this linear range, the following model has been proposed to predict the geometry dependence of σVth :
σVth = σVth^max · Λ / √(Weff · Leff)    (2.5)
where σVth^max = 49mV. The numerical simulations validate the above dependence of σVth on the FET dimensions. Overall, these results indicate that the threshold voltage uncertainty due to oxide non-uniformity is, indeed, significant when device dimensions are on the order of the interface correlation length. In devices below 30nm, this uncertainty is comparable to that introduced by random dopant fluctuations [48]. Experiments confirm that the two sources of Vth variability behave in an uncorrelated fashion. The total Vth variance is thus:

σ²Vth = (σVth^OTV)² + (σVth^RDF)²    (2.6)

Figure 2.18 also contains an inset showing the kurtosis of the Vth distribution as a function of the correlation length. Kurtosis is a measure of how non-Gaussian a distribution is. We see that for small values of Λ, the distribution is nearly Gaussian (small absolute value of kurtosis) but becomes increasingly flattened for larger correlation lengths.
Fig. 2.18. Threshold voltage uncertainty due to oxide thickness variation strongly depends on the correlation distance (Λ) characteristic of the Si-SiO2 interface. (Reprinted from [48], © 2003 IEEE).
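A short sketch of how Eq. (2.5) and Eq. (2.6) combine: σVth^max = 49mV is the fitted value quoted above, while the correlation length, device geometry, and the RDF contribution below are assumed example values rather than numbers from the text.

# Sketch combining Eq. (2.5) and Eq. (2.6). sigma_max is quoted in the text;
# the correlation length, geometry, and RDF sigma are assumed examples.
import math

SIGMA_MAX_MV = 49.0  # mV, fitted value from the model in the text

def sigma_vth_otv(corr_len_nm, weff_nm, leff_nm):
    """Eq. (2.5): Vth sigma from oxide thickness variation (linear-in-Lambda range)."""
    return SIGMA_MAX_MV * corr_len_nm / math.sqrt(weff_nm * leff_nm)

def sigma_vth_total(sigma_otv_mv, sigma_rdf_mv):
    """Eq. (2.6): uncorrelated contributions add in quadrature."""
    return math.hypot(sigma_otv_mv, sigma_rdf_mv)

# Assumed example: 30nm x 30nm device, Lambda = 1.5 nm, RDF sigma ~ 38 mV.
otv = sigma_vth_otv(corr_len_nm=1.5, weff_nm=30.0, leff_nm=30.0)
print(f"sigma_Vth(OTV) ~ {otv:.1f} mV, total ~ {sigma_vth_total(otv, 38.0):.1f} mV")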
2.6 LATTICE STRESS

A fairly recent systematic variability mechanism, due to the impact of strain on device behavior, has become increasingly important. One of the actively pursued approaches to device engineering is the use of strained silicon to
enhance circuit performance. Mobility is a strong function of stress: physical stress on the silicon lattice leads to higher carrier mobility. This means that the transistor current drive and switching speed are also dependent on stress. The precise device physics of stress-induced mobility enhancement is quite complex. It is believed that strain enhances the electron mobility by reducing both the effective electron mass and the scattering rate. The hole mobility appears to be affected only by the effective mass change [41]. In addition to the above, stress appears to affect velocity saturation, threshold voltage, and current drive, with the effect on current drive (via mobility change) being the most influential.

Stress in silicon can be created by adding layers of other materials that mechanically expand or compress the bonds between the silicon atoms. The desired stress is tensile for NMOS and compressive for PMOS transistors. For creating strain in NMOS transistors a layer of silicon nitride is used, whereas PMOS transistors can be stressed by using silicon germanium. Electron mobility enhancements of up to 60% have been reported [42]. Importantly, stress can also be created as a by-product of the processing steps involved in traditional device fabrication. The cause of such stress is the mismatch in thermal expansion coefficients and oxidation volume expansion [43][44]. The use of shallow trench isolation (STI) has been shown to lead to substantial compressive stress due to the above mechanisms; specifically, the stress arises from the oxidation step that follows the formation of STI. NMOS mobility can be degraded by as much as 13% due to the stress caused by the proximity to the STI edge [41]. In addition to affecting mobility, the mechanical stress at the STI corners has also been implicated in anomalous leakage current.

The STI-caused stress and its impact on mobility (and on-current) are highly dependent on the layout, specifically, on the size of the active area and the distance to the STI edge. Because the stress produced by the STI is compressive, the trends are opposite for NMOS and PMOS devices: compressive stress enhances hole mobility and degrades electron mobility. For NMOS devices, the on-current is degraded as the active area is reduced. Consider the layout shown in Figure 2.19. For lengths of the active area (Xactive) below 5µm, the current drive is reduced by up to 13% for a narrow-width device. At the same time, the PMOS current drive is increased by up to 10% for a narrow-width device (Figure 2.20). Additionally, the degradation (for NMOS) and enhancement (for PMOS) grow with increasing proximity to the STI edge. The dependence on the width of the poly line is also quite significant, degrading both the NMOS (by 2%) and PMOS currents (by 10%) [45]. Thus, as the transistor active area shrinks and the channel is placed closer to the STI (trench) edge, the mobility degradation can be expected to become more significant. From the design point of view, it is important that the amount of stress, and therefore the electrical device characteristics, are highly systematic and depend on the layout. As a result, transistors laid out with relatively wide spacing will perform quite differently from transistors laid out with high density, for the same polysilicon dimensions.
Fig. 2.19. The stress is highly dependent on the size of the active area (Xactive) and the proximity of the poly to the edge. (Reprinted from [45], © 2003 IEEE).
2.7 VARIABILITY IN EMERGING DEVICES

In response to multiple challenges in device engineering, novel device architectures have been explored. The primary driver behind the search for alternative device architectures is the need to counteract the severe short-channel effects of bulk FETs and partially-depleted SOI devices. New materials are also used in addition to novel device architectures to increase transistor performance and current drive, most notably by increasing the mobility of electrons and holes. This mobility enhancement is achieved by introducing intentional stress into the silicon lattice. Tensile strain increases electron mobility while compressive strain enhances the mobility of holes. However, in a way similar to the unintentional strain due to trench fill in STI considered above, devices that use strained silicon exhibit a strong dependence of their transport properties on layout specifics [50]. Experiments show a substantial, on the order of 10-15%, dependence of carrier mobility on layout attributes, such as gate-to-gate spacing, the length of the source and drain regions, and the active area size.

The new device architectures aim at reducing the severity of threshold voltage roll-off and drain-induced barrier lowering. These device architectures include the fully depleted silicon-on-insulator (FDSOI) device, dual-gate devices (e.g., FinFET), tri-gate, and back-gate devices. One common characteristic of these new device architectures is that they have a thin, fully depleted silicon body. This has two implications that are important from the point of view of variability. First, because the channel is fully depleted, the device threshold voltage exhibits a stronger, linear dependence on the doping concentration, compared to the power-of-0.4 dependence in bulk FETs [49]. As a result, the variation in Vth due to random dopant fluctuation is more significant. Second, the thickness of the silicon body now has an influence on Vth, and thus variability in body thickness contributes to the variability in Vth.
Fig. 2.20. Impact on mobility is dependent on the size of the active area. The trends are opposite for NMOS and PMOS devices. (Reprinted from [45], © 2003 IEEE).
One of the most interesting practical alternatives to the traditional planar MOS transistor is the dual-gate transistor, such as the FinFET [46]. A planar MOSFET has one-sided control over the channel and has high leakage. A dual-gate MOSFET has more electrostatic control over the channel, and thus has less leakage. The variation of Vth is, in fact, due to several distinct physical causes, including the short-channel effect, the Vth dependence on the thickness of the silicon channel (fin), and the uncertainty due to random dopant fluctuations [36]. In a FinFET, the gate surrounds the thin silicon block (i.e.,
fin), forming the conducting channel on both sides of the fin. The threshold voltage of the FinFET strongly depends on the thickness of the silicon fin. The most severe variability issue in such devices is likely to be channel thickness control. In the case of the vertical channel, its thickness is defined by a lateral lithographic process, and its tolerance is usually worse than that of the film deposition (thermal growth) step used for classical (planar) devices. The control of the Si fin thickness therefore directly determines the degree of control over the threshold voltage. Evaluation of all these factors indicates that the standard deviation of Vth for FinFETs with a fin thickness of 5nm will be about 100mV, of which only 25-50mV is due to random dopant fluctuation [36]. Because of the vertical channel, in FinFETs the transistor width is quantized to an integer number of silicon fins. Vertical variations in the fin height manifest themselves as FET width variations. It is interesting to observe that in this case, global variations in fin height will lead to the same relative variation in device widths, regardless of the absolute value of the transistor width [49].
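The width-quantization point can be illustrated with a one-line calculation. Assuming the common approximation that each double-gate fin contributes W ≈ 2·Hfin (an assumption; the exact expression depends on the fin geometry), a global fin-height deviation produces the same relative width error regardless of the fin count:

# Illustration of FinFET width quantization under the assumption W = 2 * n_fin * H_fin
# (double-gate fin, top surface ignored). A global fin-height error dH gives the
# same relative width error regardless of how many fins the transistor uses.
H_FIN_NM = 30.0   # nominal fin height (assumed value)
DH_NM = 1.5       # global fin-height deviation (assumed value)

for n_fin in (1, 2, 8):
    w_nom = 2 * n_fin * H_FIN_NM
    w_act = 2 * n_fin * (H_FIN_NM + DH_NM)
    print(f"n_fin = {n_fin}: W = {w_nom:5.0f} nm, relative error = "
          f"{100 * (w_act - w_nom) / w_nom:.1f}%")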
Fig. 2.21. The amount of Vth variability in double-gate devices will be a significant concern. (Reprinted from [11], © 1999 IEEE).
2.8 PHYSICAL VARIATIONS DUE TO AGING AND WEAROUT

In this book we are primarily concerned with uncertainty in the physical parameters of ICs resulting from the manufacturing process or the intrinsic
device uncertainty. A different type of uncertainty that affects the physical parameters is caused by temporal factors: mechanisms that change the properties of devices over time. The difference is that the manufacturing and intrinsic variations are manifest at time zero, while the “temporal” variations appear over time. From the designer’s point of view, the impact of these changes is not different from the variability induced by the manufacturing process: the impact of both is to introduce uncertainty about the device properties. The traditional approach designers use to deal with these two types of variability mechanisms is also the same: to use margins and worst-casing. To account for the temporal effects, device models containing “aged” parameters are created to ensure that the circuit will operate under end-of-life conditions.

One useful way of comprehensively describing all sources of variability is by identifying their time constants. Depending on the time constant associated with the mechanism of variability, it is useful to divide the mechanisms into two groups. The fast, small-time-constant temporal variability mechanisms include effects such as the SOI history effect and self-heating. The second group of mechanisms has a much longer time constant and is related to aging and wear-out of the physical parameters of transistors and interconnects. The primary mechanisms in this category include: (i) negative-bias temperature instability, (ii) hot carrier effects, and (iii) electromigration.

Negative bias temperature instability (NBTI) affects p-channel MOSFETs. Its impact is to increase the threshold voltage of the p-FET over time, which reduces its current drive capability and thus slows the circuit down. At some point, the possibility of path timing violations arises. NBTI is due to the creation of interface traps and positive trapped charge. The NBTI stress occurs when the p-FET is on, with gate voltage Vg = 0 and Vd = Vs = Vdd. When stressed continuously for the course of the device lifetime (e.g., 10 years), the p-FET threshold voltage can change by as much as 42%. Empirical observations show, however, that when the stress is removed, the NBTI can be reversed to some extent, even if not entirely. Since in a real circuit environment transistors typically are not stressed continuously, the true NBTI lifetime can be much longer; equivalently, the increase of the threshold voltage over the same period of time is much smaller. For devices in a 65nm technology, a lifetime computation that takes into account the real dynamics of device switching predicts a Vth degradation of 38%. While this threshold voltage change is only slightly less severe than under static conditions (38% vs. 42%), the lifetime is effectively doubled, since in the dynamic case it will take 10 more years (20 years in total) to experience the same level of degradation as in the static case. In addition to the stress patterns that are determined by the workload (e.g., the activity factor), the amount of threshold voltage degradation due to NBTI depends on the supply voltage and temperature in the device vicinity, as well as on the capacitive load driven by a gate and the gate design factors (e.g., the ratio of p-FET and n-FET device geometries) [49].
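The relationship between the static and dynamic NBTI numbers above can be checked with a small calculation. It assumes a power-law time dependence ∆Vth ∝ t_eff^n with n ≈ 1/6 and an effective stress duty cycle of about 50%; both values are assumptions typical of published NBTI models, not numbers given in the text.

# Hedged numerical check (assumed power-law exponent and duty cycle): a small
# reduction in degradation at 10 years corresponds to roughly doubled lifetime.
n = 1.0 / 6.0
static_10yr = 42.0                        # % Vth change after 10 years, static stress

duty = 0.5
dynamic_10yr = static_10yr * duty**n      # same 10 years, but ~half the stress time
print(f"dynamic degradation at 10 years ~ {dynamic_10yr:.0f}%")   # close to 38%

# Years of operation needed dynamically to reach the static 10-year level:
print(f"years to reach {static_10yr:.0f}% under dynamic stress ~ {10.0 / duty:.0f}")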
Hot carrier effect (HCE) affects primarily n-channel MOSFETs. It is due to the injection of additional electrons into the gate oxide near the interface with silicon. During switching, the electrons gain high kinetic energy under the influence of the high electric field in the channel. Depending on the relative voltages on the FET terminals, different mechanisms may be responsible for electron injection into the oxide, including (i) direct channel hot electron injection, (ii) secondary generated hot electron injection, (iii) drain avalanche hot carrier injection, and (iv) substrate hot electron injection. Regardless of the specific mechanism of injection, the ultimate result is a growing interface charge that leads to an increase of the threshold voltage, lower current drive, and longer switching times. The danger of HCE is that the timing constraints of some paths may be violated at some point. To prevent this from happening, the design must be checked with aged models that correspond to the end-of-life value of the threshold voltage.

Electromigration is a process that affects wires and is caused by the continuous impact of high current densities on the atomic structure of the wire. Under the influence of current flow, the atoms of the metal wire may be displaced. This may ultimately lead to the creation of shorts between wires, when the displaced atoms of two neighboring wires come into contact. It may also lead to an open failure in the wire, when the displaced atoms produce a void that damages the wire's electrical connectivity.
2.9 SUMMARY

Variability in the front end of the process technology will continue to be the main contributor to the overall variability budget. There are multiple systematic design-process dependencies (proximity, etch, stress) that are of first-rate importance. Because of their systematic nature, their impact on design can be eased by improving the characterization and modeling of these effects, and by propagating the appropriate information to the designer. The fundamental, or intrinsic, components of variation are essentially random. Their impact on the design will continue to grow, requiring a substantial change in design approaches and practices.
3 BACK END VARIABILITY
We must beat the iron while it is hot, but we may polish it at leisure. John Dryden
3.1 INTRODUCTION

The back end of the IC fabrication process refers to the steps that form the interconnect or wiring for the circuit, as well as any passive devices such as integrated inductors or capacitors formed within the interconnect process layers. Because the back end process shares many technologies and tools with the front end, many of the variations affecting the front end are also operative here, particularly those related to lithography and etch. In addition, a number of process technologies are used heavily in the back end, such as copper electroplating and chemical-mechanical polishing, which generate additional types and sources of variation. The key impacts we are concerned with here are variations in the final interconnect or back end components of the chip, including variations in the geometry of the structures formed, and in the material properties of these structures.

The back end flow in advanced IC technologies consists of a repeated sequence of steps to generate each metal layer and the between-layer vias. First, a dielectric stack is deposited; this stack may itself be relatively sophisticated, incorporating both low-k dielectric materials for capacitance reduction and various etch stop or capping layers (particularly in the case of porous dielectrics). In dual-damascene processing, both the lines and vias are patterned prior to metalization. This requires a pair of lithography and etch sequences to construct the via holes to the lower-level metal and the trenches in which the metal will reside; the litho and etch may be done in either a
“via first” or “trench first” fashion. A thin barrier metal, typically tantalum or tantalum nitride, is then deposited over all exposed surfaces; this metal will serve to prevent the diffusion of copper into the dielectric layers. The barrier deposition is followed by deposition of a thin copper seed layer, forming a continuous electrically conducting wafer surface to support electroplating. The bulk of the vias and trenches is next filled using copper electroplating, which also results in unwanted metal deposition over the field regions. Chemical-mechanical polishing is then used to remove this overburden copper, as well as the barrier metal, in the field region, resulting in (ideally) a planar surface and individually defined metal lines and features. This planarity is important for lithography, and it enables the entire metalization sequence to be repeated to build up subsequent metal layers, with ten or more metal layers common in advanced technologies. In the following sections, process variability in these process steps is discussed, with primary attention to the process steps that are unique to the back end, including copper CMP and plating, as opposed to those steps that are in common with front end processing, such as lithography and etch.
3.2 COPPER CMP
The purpose of copper CMP is to completely remove the overburden copper and barrier metal sitting outside the trench area, leaving a flat surface that is nearly coplanar with the top of the surrounding dielectric. Unfortunately, as illustrated schematically in Figure 3.1, copper CMP is known to suffer from two important pattern dependent non-idealities: dishing and erosion. Dishing is the recess or overpolish within any given feature, relative to the surrounding dielectric surface. Erosion is the loss in thickness of the surrounding dielectric compared to a “just cleared” surface. The degree of dishing and erosion depends strongly on layout factors, as well as details of the process equipment, polishing pad and slurry, wafer level uniformity, and other parameters.
Fig. 3.1. Dishing and erosion in copper CMP.
Figure 3.2 shows the post-polishing surface height, as measured by profilometry horizontally across a large array of many lines and spaces of a defined
feature size. The high points correspond to the surface of the dielectric material, while the low points indicate the surface of each individual copper line. In general, we find that dishing is worse for larger (wider) features, while erosion is worse for narrower oxide or dielectric spacing between features. In mid-size features, the two effects combine, so that both dishing and erosion contribute to the overall copper thickness reduction. In Figure 3.2, the features at the left are “isolated” lines of the given line width, while the array is a sequence of many lines with the same width and spacing. In the smaller features, very little dishing is observed, but the array is recessed due to erosion. In the largest features, no erosion occurs, but substantial dishing into the features is seen. In the medium sized features, the array is recessed and feature-level dishing is also observed (the dark band within the array corresponding to the alternating dielectric and metal surfaces for small feature sizes). A qualitative understanding of these pattern dependent trends in copper CMP is useful, as the physical design of circuits can have a dramatic effect on the resulting variation in the thickness of copper lines and structures. Modeling and prediction tools which capture the full interplay between the process, topography, and layout are also needed, and are discussed in Chapter 5.
Fig. 3.2. Example surface profilometry traces across interconnect arrays with different line width and line space combinations showing the impact of both dishing and erosion in reducing line thicknesses. The scan distance corresponds to the horizontal trace of the profilometer, across the 2 mm wide array of lines and spaces.
Most CMP processes use two or more steps or stages. The first stage is to remove the bulk copper excess resulting from the electroplating process, proceeding down to the underlying barrier metal, at which point the step 1 polishing should stop. The ideal step 1 process in this approach would leave the surface perfectly flat with no removal of the barrier metal, no exposure of the underlying dielectric, and no dishing into the copper line within the trench. In the second step, the slurry (or slurry/pad combination) and process parameters are changed, so that the barrier metal can be removed. The ideal removal in this step would result in a perfectly planar surface, with the barrier metal removed, no erosion in the exposed dielectric, and no dishing in the copper. In reality, these steps typically do not have either perfect removal rate selectivity or perfect planarity. The ideal picture of CMP is that raised features contact the pad and abrasives, while recessed regions do not and therefore are not polished; this corresponds to a perfectly stiff or rigid polishing pad. In practice, however, the pad material is typically a porous polyurethane that is somewhat flexible and has many small surface asperities, such that the pad and slurry particles do contact recessed areas, with a pressure that decreases with the depth of the recess. Thus down or recessed areas do indeed polish, though at reduced rates compared to raised features and areas [52]. The polishing pad and process serve to apportion the applied pressure both across raised features (resulting in a non-local pattern density dependence) and between raised and recessed regions (resulting in a feature step-height dependence). The translation to dishing and erosion trends can now be understood. As seen in Figure 3.3, wider copper features will typically dish to a deeper level, but when deep enough will stop dishing (the pad being “held up” by the neighboring oxide regions) or reach a steady state dishing value where the rate of dishing and the rate of erosion are equal. Considering both erosion and dishing in Figure 3.4, we see that narrow oxide spaces will erode more rapidly, at a rate inversely proportional to the oxide pattern density. The effect of line loss from dishing and erosion can be substantial, and directly impacts the resulting electrical parameters (resistance and capacitance) of interconnect features. The CMP process and the structures on the wafer interact, so that the final result depends not only on the specific feature (i.e., the line width or spacing to the next feature), but also on nearby structures within some interaction distance. A well known dependence is on the pattern density of features: regions with a higher area fraction or pattern density of raised features will take longer to polish than areas where only a small volume of material needs to be removed. In the copper damascene process, however, the presence of three materials – copper, barrier metal, and dielectric – means that selectivity during different steps or stages of the removal can play a critical role. In particular, the line spacing can be very important in “holding up” the polishing pad and preventing or limiting dishing. It should be noted that the spatial interaction of pattern effects includes both pattern density and other pattern parameters. Figure 3.5 shows an example of this interaction,
Fig. 3.3. Dishing versus polish time for 50% pattern density regions with different pitch (line width + line space). Larger features are seen to suffer more dishing, and a saturation or steady state dishing value is also seen for the narrower features.
Fig. 3.4. Dishing and erosion for 50% pattern density structures, with different pitch. Dishing strongly affects wide lines, while erosion strongly affects narrow spaces.
where the presence, to the left, of line arrays with different feature sizes (still at 50% pattern density) can substantially change the resistance of the fine-feature line array to the right (also at 50% pattern density). In Figure 3.5, we see that the change in resistance can be large, indicating that 20% or more of the copper line cross section may be lost to dishing and erosion in some cases [53].
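To make the role of layout pattern density concrete, the short sketch below computes a windowed effective pattern density from a coarse layout-density map and converts it into a first-order erosion estimate. This is only an illustrative toy model: the window size, overpolish budget, and the simple density scaling are assumed values, not the calibrated CMP models discussed in Chapter 5.

```python
# Toy illustration (not a calibrated CMP model): effective pattern density is
# computed by averaging the local copper density over a square interaction
# window. Erosion of the dielectric is then assumed to increase with the local
# effective copper density (narrow oxide spaces erode faster). All constants
# below are assumed.
import numpy as np

def effective_density(density_map, window_cells):
    """Average each cell's density over a (2*window_cells+1)^2 neighborhood."""
    n = 2 * window_cells + 1
    padded = np.pad(density_map, window_cells, mode="edge")
    out = np.zeros_like(density_map, dtype=float)
    for i in range(density_map.shape[0]):
        for j in range(density_map.shape[1]):
            out[i, j] = padded[i:i + n, j:j + n].mean()
    return out

# A 1 mm x 1 mm layout discretized into 40 um cells: a sparse region on the
# left and a dense array region on the right.
rho = np.full((25, 25), 0.2)
rho[:, 13:] = 0.8
rho_eff = effective_density(rho, window_cells=5)   # ~0.4 mm interaction window

overpolish_nm = 60.0                   # assumed overpolish budget in nm
erosion_nm = overpolish_nm * rho_eff / rho_eff.max()
print("erosion range: %.0f to %.0f nm" % (erosion_nm.min(), erosion_nm.max()))
```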
Fig. 3.5. Change in resistance in a 0.5 um line/space array as a function of distance from a step in line width (at the same global pattern density). (Reprinted from [53], © 2003 IEEE).
3.3 COPPER ELECTROPLATING
Electroplating is an important process step in the formation of copper interconnect, and is the primary means for the “bulk” filling of copper vias and trenches. In this process, a wafer with a pre-existing continuous surface metal film (i.e., a barrier metal and copper seed deposited by sputtering, chemical-vapor deposition, or atomic layer deposition) is electrically connected at one or more points on the edge of the wafer to form a cathode, the wafer is immersed in an electrolyte solution, and a bias is placed across a copper anode in the solution. Copper ions in the solution are thus attracted to the wafer surface, where they adsorb and build up the desired copper film. A number of challenges must be overcome by careful process and equipment design. First, for very narrow features, it is important that the copper fill from the “bottom up” to avoid the feature closing off near the top opening, preventing complete fill of the feature. To achieve this, a number
of different additives are used in the electrolytic solution to manipulate the deposition rate in features and along the surface of the wafer. Second, wafer level variation can be substantial, arising from the nonuniformity in electrical or plating potentials from the edge (where the wafer is electrically contacted) to the center of the wafer, as well as additional localized depletion effects in the electrolyte bath. Bottom-up fill processes can result in substantial pattern dependencies. These are illustrated in Figure 3.6, where we see both long-range effects – the entire array region may sit below the surface of the surrounding field region depending on the pattern density, and short-range feature dependencies – the step height depends on line width and/or line spacing.
Fig. 3.6. Surface profilometry traces for post-plating profile, for different combinations of line width and line spacing. Recess and bulge situations, as well as varying degrees of feature step height, can be seen.
To better understand the traces of Figure 3.6, we consider two aspects of the surface height variation shown in these plots. The typical surface topography can be decomposed into a surface “envelope” capturing the degree of recess or bulge for the array, and local step height between the top of the plated feature and the bottom of the region between two plated features. In many cases, the top of each local step corresponds to the copper above the
insulating region between lines, and the bottom of the trace corresponds to the profilometer going downward due to the copper fill sitting within the trench. In other cases, however, “superfill” chemistries may actually reverse this effect, so that the highest copper plating region is actually above the trench. The copper plating surface topography is “sacrificial” and temporary: the surface will be removed and planarized in the subsequent CMP step. One might well ask, then, why the plated topography matters, if it is to be removed. The problem is that the plating and CMP processes are coupled, as illustrated in Figure 3.7. The topography at the start of the CMP step can have a strong impact on the clearing time and the degree of over-polish that different pattern densities or regions on the chip experience. Thus efforts must be undertaken to minimize these effects by process design as well as by layout optimization (e.g. through the insertion of plating-aware dummy fill and slotting structures, as discussed in Chapter 5).
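The envelope/step-height decomposition described above can be illustrated with a short sketch that separates a surface trace into a slowly varying array envelope and a local step height. The trace here is synthetic and the smoothing window is an assumed value, so this is illustrative only and not a description of any particular metrology flow.

```python
# Illustrative decomposition of a (synthetic) post-plating surface trace into
# a long-range envelope and a local step height, mirroring the description in
# the text. The trace, pitch, and smoothing window are assumed values.
import numpy as np

x = np.linspace(0.0, 2000.0, 4001)                         # scan position, um
envelope = -40.0 * np.exp(-((x - 1000.0) / 400.0) ** 2)    # array recess, nm
steps = 25.0 * (np.sin(2 * np.pi * x / 2.0) > 0)           # 2 um pitch steps, nm
trace = envelope + steps

win = 201                                    # ~100 um smoothing window (odd)
kernel = np.ones(win) / win
smooth = np.convolve(trace, kernel, mode="same")   # recovered envelope

local = trace - smooth                       # residual local topography
step_height = local.max() - local.min()      # peak-to-valley step height
print("array recess  : %.1f nm" % (-smooth.min()))
print("local step ht : %.1f nm" % step_height)
```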
Fig. 3.7. (a) Topography after plating, illustrating array area bulges and recess, as well as both positive and negative step heights above plated lines resulting from superfill plating processes. (b) Topography after CMP, where the degree of dishing and erosion in the final post-CMP surface can be strongly influenced by the incoming profile that the CMP process experiences.
3.4 MULTILEVEL COPPER INTERCONNECT VARIATION
The pattern dependent variations described above, arising from copper plating and CMP, can be exacerbated in multilevel interconnect structures. As pictured schematically in Figure 3.8, the first metal level may introduce topography resulting from either dishing or erosion. The underlying metal layer will typically have equal or smaller feature size, and so is more prone to erosion across regions with high pattern density and small feature sizes, as shown in the picture. In the subsequent dielectric deposition steps, one typically sees relatively conformal deposition, so that the surface height of the dielectric will retain topography from the first metal CMP (indicated by the dashed line in Figure 3.8). The patterning and etch of the second level metal trenches will be of approximately equal depth in either an etch-stop or
timed-etch approach, so that the bottoms of the trenches will also conform to the underlying regional topography. After barrier metallization and copper plating, the polish process can be especially challenging – the copper must be cleared away in all field regions and between all copper trenches, even those that are somewhat recessed to begin with due to the residual underlying metal topography. Failure to completely clear may result in “pools” or other areas where residual copper shorts lines together, leading to chip failure. Even when clearing can be achieved throughout the chip, compounding the complexity of the final metal 2 topography is the fact that additional dishing and erosion due to the pattern density and feature size dependencies in the second level metal will occur. In some regions, the recessed lines will be “protected” by being recessed, so that they suffer less additional dishing and erosion than one might expect in a corresponding metal 1 layout. Conversely, raised regions may suffer additional dishing and erosion due to the need for overpolishing to completely clear the chip and wafer.
Fig. 3.8. Schematic copper interconnect showing the interactions between dishing and erosion profiles in an underlying metal layer affecting the thickness of patterned and polished lines in the layer above.
The resulting multilevel dishing and erosion effects can lead to substantial variation in the thickness of patterned copper features, with corresponding deviations in metal line resistance. For example, Figure 3.9 shows the effect on metal 2 line resistance of different metal 1 pattern density and feature sizes [53]. As described above, as the metal 1 pattern density increases, the metal 1 layer suffers increased dishing and erosion which reduces the degree of dishing and erosion in metal 2. Up to 30% reduction in the “normal” dishing and erosion or line thickness loss in the metal 2 pattern is observed, indicating that the topography in which metal 2 sits can have a dramatic impact on subsequent CMP dishing and erosion.
Fig. 3.9. Impact of underlying metal 1 pattern density on metal 2 line resistance. (Reprinted from [53], © 2003 IEEE).
3.5 INTERCONNECT LITHOGRAPHY AND ETCH VARIATION
While copper electroplating and CMP can substantially impact the thickness of copper wire structures, variation in the width of copper lines can also have a first order effect on the resulting wire resistance and capacitance. The challenges in lithographic patterning and etch in the formation of copper damascene interconnect are similar to those in the front end, as described in Section 3.1, with some additional issues. One issue is the extended topography (on the chip scale) that can potentially build up in the back end, creating lithographic depth of focus problems that are often greater than in the front end. Transistor formation generally occurs in or close to the underlying silicon substrate, and thus has only a limited number of topographical features to contend with, such as height variations arising from pattern dependencies in the STI formation. In multilevel interconnect structures, on the other hand, substantial thickness variations in metal and dielectric layers can exist across the chip due to limitations in CMP, deposition, and plating processes. Fortunately, the feature sizes are generally much larger and less critical as one moves into higher metal layers, and so the added topography and lithography interactions do not result in larger proportional line width variations. A second patterning issue that is of substantial concern in interconnect structures relates to the full three-dimensional geometry of patterned features. In particular, the “ideal” perfectly vertical sidewalls and rectangular cross section are rarely achieved in real structures. Instead, trapezoidal or
more complex cross-sectional geometries often arise due to realistic constraints in the lithography and etch processes. The coupling capacitance between two neighboring lines with the same base separation, for example, will be substantially different if the lines are trapezoidal in shape. Relatively little information has been published detailing any systematic or random dependencies in sidewall slope as a function of layout pattern density, proximity to other structures, or feature size. In practice, one typically assumes a constant sidewall slope, and approximates the trapezoid with an equivalent rectangular cross section structure based on characterization and measurement of interconnect test structures. A third, and typically more important, variation arising from interconnect lithography and etch has to do with trench depth variation, which can generate significant wire resistance and capacitance variation. Here, two alternative approaches are often used in damascene interconnect formation, with substantially different etch variation implications. In one common approach, a dielectric stack is used which includes an “etch stop” material that is substantially selective to the bulk dielectric etch. Thus, during the plasma etch used to form the trench, the vertical etch process will stop (etch with much reduced vertical rate) on reaching the stop layer. While this provides substantial process window to enable the completion of vertical etch across all pattern sizes within the chip, and all chips across the wafer, there is a corresponding over-etch for those structures which reach the etch stop earliest. The result of the over-etch (so long as the etch does not break through the etch stop layer) is often increased lateral etch of the trench feature. This lateral etch can increase the line width of individual features, or affect their geometry (e.g., result in bowing of the trench structure). The local etch rate and thus over-etch time may depend strongly on layout pattern: the local or regional pattern density, as well as the aspect ratio of features being etched, means that different regions on the chip and different feature sizes may induce complex variations in the patterned trench width. An alternative trench etch formation approach is also popular, in which no dielectric etch stop layer is used. This has the advantage of eliminating this layer, which typically has a higher dielectric constant than the bulk (often low-k) dielectric stack. In these cases, a careful timed etch is typically used instead to achieve the desired trench depth. Here, pattern density and aspect ratio dependencies will again affect localized vertical etch rates, manifesting as trench depth variations across the chip (as well as width variations, although these may be reduced somewhat) [71]. Again, these etch effects are essentially deterministic process variations. With the development of improved plasma etch pattern dependent models, characterization methods, and simulation tools, these etch depth or width variations can in theory be predicted and accounted for in circuit design or in layout modification stages.
3.6 DIELECTRIC VARIATION
In the development and implementation of advanced copper interconnect, two concerns have received a great deal of attention: thickness variation in copper wires arising from CMP, and reliability concerns in the insulating layers arising from the deposition, patterning, and material properties of low-k and other complex dielectric stacks [54], [55], [56]. Relatively little has been reported on wafer-level uniformity in dielectric film thickness or material properties, or on their pattern dependencies. Because the geometry of the wire and insulating structures (including the thicknesses of the via and trench layers) and the material properties (including the dielectric constant of these stacks) both directly affect the final interconnect capacitance, variations in these parameters are of potential concern.
3.7 BARRIER METAL DEPOSITION
During the formation of damascene copper interconnects, a thin barrier metal must be deposited into the opened vias and trenches to meet multiple needs [57]. First, this dense metal, typically tantalum or tantalum nitride, must act as a “barrier” to the diffusion of copper into and through the surrounding dielectric, in order to prevent leakage paths between metal lines, and also to avoid copper reaching the active device layers. Second, the barrier also acts as a “glue” layer, having better adhesion properties to both the dielectric layer and the copper which fills the vias and trenches. Finally, the barrier metal is also typically less susceptible to electromigration, and thus serves as a last line of defense (a remaining electrical connection near high stress points) against the flow of copper in functioning devices. The barrier metal has higher resistivity than does copper, and so a design concern is that some portion of the vias and trenches is taken up by this higher resistance metal. Less well studied or understood, however, are the variation sources which may affect barrier deposition and the resulting final copper interconnect structures. While the process is often not considered high on the list of back end processes with variation concerns, it is worth recognizing that variation sources at the wafer, die, and feature level may be of concern. Physical vapor deposition is the dominant process in use for barrier deposition, specifically sputtering, but future technologies may transition to the use of chemical-vapor deposition (CVD) or atomic layer deposition (ALD). Sputtering typically achieves a reasonable degree of wafer-level uniformity, with some wafer-level variation due to the geometry of the deposition tool. While techniques are used to improve directionality of the sputtered metal as it travels from a source to the wafer surface, angular distribution can cause structures near the center and the edges of the wafer to fill slightly differently, or slightly asymmetrically. This may also affect the center-to-edge thickness of the deposited barrier film.
Second, the feature level dependence of barrier deposition can be a concern, if features of different sizes or shapes result in barrier film thicknesses or geometries that are different. One little-studied effect may be the thickness and corner rounding of deposited barrier at the tops of the dielectric spaces between metal lines. This portion of the barrier is sacrificial – it is removed (must be removed) during the copper/barrier CMP process. However, in CMP processes where the barrier metal acts as a polish stop (with a lower removal rate than that of copper), any thinning of the barrier as a function of feature size can interact strongly with the resulting erosion of the dielectric layer, thus affecting the thickness of the final copper lines. In general, the decomposition of metal line variation based on which individual processes are responsible for each component has only begun in the research community. Rarely has barrier metal deposition been singled out as a leading wafer, die, or feature level variation source, compared to copper (plating) deposition and copper CMP, or compared to patterning and etch of vias and trenches. One can expect, however, that future scrutiny of barrier metal deposition will increase, as the percentage of the line and via volume consumed by the barrier increases with future scaling.
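To see why this fraction grows, consider an illustrative (assumed) trench cross section of width w and height h lined with a barrier of thickness t on the bottom and both sidewalls, so that the copper occupies an area of (w − 2t)(h − t). With t = 5 nm, a 100 nm × 100 nm trench leaves 90 × 95 = 8550 nm² of copper, so the barrier consumes about 15% of the cross section; shrinking the trench to 50 nm × 50 nm at the same barrier thickness leaves 40 × 45 = 1800 nm² of copper, and the barrier fraction grows to roughly 28%. The numbers are purely illustrative, but the trend holds whenever the barrier thickness scales more slowly than the line dimensions.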
3.8 COPPER AND VIA RESISTIVITY
Because copper lines and vias are encapsulated in a barrier metal having a substantially higher resistivity, the overall resistance of copper interconnect does not scale ideally as the bulk copper resistivity divided by the cross-sectional area (wire height times width). The barrier thickness and material resistivity must be taken into account, resulting in a linewidth-dependent sheet resistivity for the barrier/copper bi-layer. Such a calculation is straightforward, if the copper and barrier metal thicknesses and resistivities are known and are constant. Unfortunately, with small feature size, additional scaling-induced wire resistance increases may also result [58]. With continued scaling of copper lines, the resistivity of the copper is no longer comparable to bulk copper resistivity, but rather is strongly affected by surface scattering and grain boundary structure. In particular, when the cross-sectional dimensions of copper lines approach the mean free path of electrons (about 40 nm at room temperature), these scattering mechanisms can have a large impact on line resistance [59]. The net effect is that resistivity increases with smaller lines with a stronger dependence than the reduced cross-sectional area might suggest. The resulting line resistance is pattern dependent (feature size dependent) and is therefore highly systematic; thus careful modeling and characterization should enable circuit design to account for such linewidth-dependent resistances [60], [61]. In addition to pattern dependent variation in copper line thickness, variations in via resistances are becoming an increasing concern, particularly as via and contact resistances become a larger portion of overall wiring resistance.
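As a concrete illustration of the bi-layer calculation, the sketch below computes the per-unit-length resistance of a copper line surrounded by a barrier liner, treating the two metals as parallel conductors. The dimensions, barrier thickness, and resistivities are assumed round numbers, and the size-dependent copper resistivity increase discussed above enters only through a crude user-supplied multiplier rather than a physical scattering model.

```python
# Sketch of a barrier/copper bi-layer resistance estimate (assumed numbers).
# The copper core and the barrier liner are treated as two resistors in
# parallel per unit length; scattering effects enter only through an assumed
# multiplier on the bulk copper resistivity.
RHO_CU_BULK = 17.1e-3   # ohm*um (1.71 microohm-cm), bulk copper
RHO_BARRIER = 2.0       # ohm*um (~200 microohm-cm), assumed Ta(N) liner

def line_resistance_per_um(width_um, height_um, t_barrier_um, cu_scatter=1.0):
    """Resistance per micron of length for a barrier-lined copper trench."""
    cu_w = width_um - 2.0 * t_barrier_um      # copper core width
    cu_h = height_um - t_barrier_um           # barrier on bottom and sidewalls
    if cu_w <= 0 or cu_h <= 0:
        raise ValueError("barrier consumes the whole trench")
    a_cu = cu_w * cu_h
    a_bar = width_um * height_um - a_cu
    g_cu = a_cu / (RHO_CU_BULK * cu_scatter)  # conductance contributions
    g_bar = a_bar / RHO_BARRIER
    return 1.0 / (g_cu + g_bar)

# A 100 nm x 100 nm line versus a 50 nm x 50 nm line, both with a 5 nm
# barrier; the narrower line also gets an assumed 1.5x resistivity penalty
# from surface and grain boundary scattering.
for w, h, k in [(0.10, 0.10, 1.0), (0.05, 0.05, 1.5)]:
    r = line_resistance_per_um(w, h, 0.005, cu_scatter=k)
    print("w=%3.0f nm: %.2f ohm/um" % (w * 1e3, r))
```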
Fig. 3.10. Copper line resistivity as a function of line width, per a model accounting for surface and grain boundary scattering effects. (Reprinted from [59], © 2003 IEEE).
Technology specifications often include a wide margin for via resistances (by as much as factors of 0.5 to 2X the nominal resistance), reflecting the difficulty in patterning, fill, and material property control. While technology characterization vehicles or test chips often measure such variations [72], more work is needed to understand, model, and control specific via variation sources.
3.9 COPPER LINE EDGE ROUGHNESS
A potential additional concern in very narrow copper lines is the effect of line edge roughness (LER). While LER in the gate electrode of transistors can strongly impact a number of device characteristics including leakage current as discussed in Chapter 2, in copper lines, the impact may be felt on the resistivity of the line. In a study where LER was intentionally induced in 100 nm copper lines using e-beam lithography to add small rectangular patterned notches of various sizes and arrangements, it was found that changes in resistivity are primarily due to a change in the effective or average line width along the length of the line, and LER did not have a strong impact on resistance [62]. It is projected that LER might become more important in smaller lines, as the LER becomes comparable to the mean free path for electrons; however, the change in grain structure may offset this effect somewhat [62]. Modeling approaches have also been used to project the effect of LER on smaller copper interconnects, where it is suggested that the resistivity is increased significantly compared to interconnect structures with smooth sidewalls for line widths smaller than 50 nm and peak-to-peak variations of the widths exceed-
ing 30 nm [63]. While the impact of unintentional (real) LER on deeply scaled sub-100 nm copper lines is not yet fully understood, overall line resistance is an integral of unit resistance, and thus averaging along the length of a wire reduces the impact of variation in line width in wires, unlike in transistors where non-linear channel length effects are important.
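The averaging argument above can be illustrated numerically: if the line is divided into many short segments whose widths fluctuate independently, the total resistance (a sum over segments) varies far less, in relative terms, than the width of any one segment. The sketch below uses assumed numbers purely to show this averaging effect; it is not a physical LER model and ignores the scattering effects discussed earlier.

```python
# Monte Carlo illustration of length-averaging: per-segment width variation
# has a much smaller relative effect on total line resistance than on the
# resistance of a single segment. All numbers are assumed.
import numpy as np

rng = np.random.default_rng(1)
n_lines = 2000            # Monte Carlo samples
n_segments = 500          # segments per line (e.g., 0.1 um each)
w_nominal, w_sigma = 100.0, 10.0   # nm, assumed 10% width sigma per segment

widths = rng.normal(w_nominal, w_sigma, size=(n_lines, n_segments))
seg_resistance = 1.0 / widths                 # R per segment ~ 1/width
line_resistance = seg_resistance.sum(axis=1)  # series sum along the line

rel_sigma_segment = seg_resistance.std() / seg_resistance.mean()
rel_sigma_line = line_resistance.std() / line_resistance.mean()
print("relative sigma, one segment : %.1f%%" % (100 * rel_sigma_segment))
print("relative sigma, whole line  : %.1f%%" % (100 * rel_sigma_line))
```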
3.10 CARBON NANOTUBE INTERCONNECTS
Substantial active research is investigating the feasibility of using carbon nanotubes (CNTs) for both interconnect and device applications in integrated circuits. The attraction is high electrical conductivity in single-walled (SWCNT) or multi-walled (MWCNT) structures, with high current density, excellent thermal conductivity, and small dimension. Bundles of CNTs have been grown in via holes, with total via resistances for a 1000 tube bundle of about 100 Ohms with a current density of 2×10⁶ A/cm², which is a factor of about 10 worse than copper plugs [64]. Those authors project that the theoretical total resistance of a CNT via with 5000 tubes, accounting for quantum resistance, could be reduced to about 1 Ohm. Single or bundled nanotubes in vertical and horizontal configurations have also been considered for local interconnect [65] as well as intermediate or global interconnect [66], [67], [68], [69]. In many ways, the present state of the art in CNT technology is struggling with issues of variability. One key difficulty is in the controlled synthesis of metallic versus semiconducting nanotubes; chirality (crystal orientation) control is not yet sufficient to guarantee that all or nearly all tubes grown will be metallic. Other variation concerns relate to the issues of size and shape of CNT interconnects, controlled directional growth, as well as patterning. At this stage, technology exploration is in full swing, and consideration of statistical variation in CNTs is just beginning to emerge [70].
3.11 SUMMARY
The back end sequence of semiconductor processing presents multiple challenges with respect to uniformity and controllability. The major challenge is related to pattern dependencies in chemical-mechanical polishing, which result in the severe dishing and erosion problems seen in current copper-based processes. These dependencies are entirely systematic, in that they can be described precisely once the parameters of the process and the layout patterns are known.
4 ENVIRONMENTAL VARIABILITY
Sometimes, all I need is the air that I breathe The Hollies
4.1 INTRODUCTION
As we saw in earlier sections, front end and back end manufacturing variations can lead to variability in the physical and electrical properties of individual integrated circuit components. This physical variability is predominantly a function of the fabrication process, with random and systematic components. But the very meaning of the word integrated implies that an IC is composed of many individual circuits manufactured simultaneously and working concurrently to perform the overall function of the IC. This integration means that the various components and circuits share a common operating environment. This environment includes: (1) The silicon substrate on which the various circuits are integrated, which is typically electrically resistive and is an excellent thermal conductor. (2) The package in which the integrated circuit is sealed in order to protect it, and the connections between the packaged circuit and the external environment, through which the circuit is supplied with power as well as input and output signals. While physical variability has random and systematic components, environmental variability is largely systematic since it depends predominantly on the details of circuit operation. Thus the study of environmental variation naturally focuses on the efficient prediction and bounding of such variations. The sharing of the IC operating environment creates various types of coupling between the individual components and circuits. This coupling can include: (1) Coupling through the power supply network, where the distributed nature of the integrated circuit leads to temporal variations in the power supply voltage. (2) Coupling through the common thermal environment composed
of the chip substrate and package, where differences in power density lead to temporal variations in local temperature. (3) Coupling through electro-static (capacitive) or electro-magnetic (inductive) mechanisms between neighboring wires or between the wires and the semiconducting (resistive) substrate. This coupling results in electrical activity in one component appearing as noise or interference in another component. In this section we will finish our study of variability sources by studying these types of within-chip environmental variability. We will start with a qualitative and quantitative study of the impact of environmental variability on circuit performance using a simple circuit simulation example. Such a study serves to sensitize the reader to the impact of environmental variations, provide quantitative insight into the magnitude of such variations and its impact on typical circuit performance, and finally provide a concrete example of a methodology for performing such studies.
4.2 IMPACT OF ENVIRONMENTAL VARIABILITY
We will study the impact of environmental variability on the three dominant performance metrics of digital integrated circuits: circuit delay, power dissipation, and leakage (also known as static power). While it is possible to use simplified models to get a first order understanding of the dependence of, say, circuit speed on power supply voltage, such models are seldom useful for anything but back-of-the-envelope computation, since it is often the case that numerous second and third order effects come into play. Thus we will instead study the impact of variability on performance using an accurate circuit simulation model. We perform the simulation study using a 7-stage CMOS ring oscillator composed of inverters, shown in Figure 4.1. We simulate this circuit with the SPICE circuit simulator [76] using parameters from IBM's 0.13µm Bulk CMOS process (these parameters are available, at the time of this writing, from the MOSIS foundry web site: www.mosis.org). The circuit is designed such that when the Enable input to the NAND gate is true (high), the circuit oscillates. Figure 4.2 shows two waveforms. The first waveform is the voltage at a node in the oscillator, from which we can estimate the frequency of the oscillator and therefore get a measure of circuit delay. The second waveform is the total power supply current drawn by the oscillator, from which we can calculate the total power consumption of the oscillator. Furthermore, when the Enable input is false, the circuit is dormant. In this case, the total current drawn by the oscillator is a measure of the leakage of the circuit. Since our aim is to study the impact of environmental variations on the performance of this circuit, we will simulate the ring oscillator in its active and dormant modes over a full-factorial experiment plan [91] that consisted
Fig. 4.1. A seven stage ring oscillator.
Fig. 4.2. Simulated voltage and current waveforms from seven stage ring oscillator.
of (a) measurements over 11 uniformly spaced values of power supply voltage from 2.25 to 2.75V, which is equivalent to a ±10% variation in a 2.5V supply; and (b) measurements over 11 uniformly spaced values of temperature from 25 to 125 degrees Celsius, which is a typical temperature range for a consumer product. The total number of simulation settings was 11 x 11 = 121. For each of these settings we simulated the circuit in both its active and dormant stages and measured the frequency of oscillation, the average power consumed by the oscillator, and the standby or leakage power consumed by the oscillator. With the data in hand, we can explore the dependence of the three performances on power supply voltage (Vdd), and temperature (Temp). Figures 4.3, 4.4 and 4.5 show plots of the three performances. Note that the frequency and the (active) power are fairly linear in Vdd and Temp, while the leakage is quite non-linear owing to its exponential dependence on temperature [239]. We now turn the qualitative insight gained from making plots of the performance vs. the environmental variable into a quantitative assessment of the impact of each of the variables. We can do this via analyzing the first order analytical models for power, speed and leakage; but, as mentioned previously,
Fig. 4.3. Ring oscillator frequency vs. power supply voltage and temperature.
Fig. 4.4. Active power supply current vs. power supply voltage and temperature.
Fig. 4.5. Standby power supply current vs. power supply voltage and temperature.
this may not be representative in advanced technologies. Instead, we will perform this sensitivity analysis by using linear regression to fit the frequency and the two power supply currents as functions of the supply voltage and temperature. For frequency (denoted by f), the result is a linear function with a correlation of fit of 0.979:

f = 2.353 + 1.278 Vdd − 4.453 × 10⁻³ T    (4.1)
Figure 4.6 shows plots of the predicted vs. measured frequencies, and the residual error vs. measured frequency. It is clear from the plot that a linear model is an excellent predictor of the frequency. In order to assess the relative impact of supply voltage and temperature, it is useful to normalize Eq. 4.1 by using a new set of variables vdd and t which are defined such that they lie in the interval [−1, 1] over the range of the original variables. Such a transformation takes the form:

x = (2X − Xmax − Xmin) / (Xmax − Xmin)    (4.2)
where X is the original variable with range [Xmin, Xmax] and x is the normalized variable. Making the appropriate substitutions in Eq. 4.1 gives us a new normalized equation from which we can draw somewhat more insight:

f = 5.213 + 0.319 vdd − 0.223 t    (4.3)
Fig. 4.6. Linear fit for the frequency of the seven-stage ring oscillator.
From the equation above, we can surmise that the impact of Vdd is somewhat larger than the impact of temperature. Since the normalized variables are in the range [−1, 1], we can also say that the impact of Vdd on the frequency of this ring oscillator is about ±6% (≈ 0.319/5.213). Note that the analysis above is predicated on assumptions about the range of the variables in question. An alternative method might be to base the analysis on the relative (i.e. percentage) change in the output with respect to a relative change in the input. So one might say that a 1% change in Vdd causes a 1% change in the frequency. But such an approach is sensitive to the units in which the various quantities are measured. For example, a 1% change in a temperature measured on the Celsius scale is quite different from the same change in the same temperature measured on the Kelvin scale. We performed a similar analysis for the power supply current in active mode (denoted by Iactive) and obtained a linear function with a correlation of fit of 0.998:

Iactive = −1.543 + 1.554 Vdd − 2.288 × 10⁻³ T    (4.4)
Using the same type of normalization as above we can reduce Eq. 4.4 to:

Iactive = 2.174 + 0.388 vdd − 0.111 t    (4.5)
Here we see that the power supply current in active mode is far more sensitive to Vdd than it is to temperature. Since the total dynamic power is Vdd × Iactive, it will have a square-law dependence on power supply voltage. Finally, we performed a linear fit for the standby current, denoted by Ileak. The resulting linear model had an R² of 0.965, but examination of the plot of the fit and the residuals (shown in Figure 4.7) reveals some rather severe lack
of fit because of the expected and known exponential nature of its dependency on voltage and temperature.
Fig. 4.7. Linear fit for the standby current of the seven stage ring oscillator.
Given the poor quality of the fit, we elect instead to fit a linear model of the natural logarithm of the standby current (shown in Figure 4.8). The result is an excellent model with an R² of 0.997 and an equation of the form:

log Ileak = −0.805 + 0.644 Vdd + 0.0256 T    (4.6)
which, upon applying the normalization, becomes:

log Ileak = 2.725 + 0.161 vdd + 1.28 t    (4.7)
From this we see the extreme importance of temperature for leakage prediction. In the process of analyzing this relatively simple example, we have demonstrated the use of a systematic design-of-experiment/regression based method for studying the impact of environmental variability on the performance of a circuit, and we have examined a specific example of the relative magnitudes of these sensitivities, which gives the reader a sense of the relative importance of power supply voltage and temperature for timing and for both active and standby power. With this motivation in place, we now show how the systematic variability in power supply and temperature can be analyzed for a design.
Fig. 4.8. Linear fit for the logarithm of the standby current of the seven stage ring oscillator.
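The design-of-experiment and regression flow described above can be condensed into a short script. The sketch below is only a schematic of that flow: the SPICE simulations are replaced by a placeholder response built from the coefficients of Eq. 4.1 plus a little noise, so the fitted numbers are illustrative rather than measured.

```python
# Sketch of the full-factorial sweep and normalized regression used in the
# text. The response here is a stand-in for the SPICE runs; it simply re-uses
# the coefficients of Eq. 4.1 with added noise, so it is illustrative only.
import numpy as np

vdd_vals = np.linspace(2.25, 2.75, 11)     # +/-10% around a 2.5 V supply
temp_vals = np.linspace(25.0, 125.0, 11)   # consumer temperature range
VDD, T = [a.ravel() for a in np.meshgrid(vdd_vals, temp_vals)]  # 121 settings

rng = np.random.default_rng(0)
freq = 2.353 + 1.278 * VDD - 4.453e-3 * T + rng.normal(0.0, 0.01, VDD.size)

def fit(y, *columns):
    """Ordinary least squares fit of y = a0 + a1*x1 + a2*x2 + ..."""
    A = np.column_stack([np.ones_like(y)] + list(columns))
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

def normalize(x):
    """Map x onto [-1, 1] over its observed range (Eq. 4.2)."""
    return (2.0 * x - x.max() - x.min()) / (x.max() - x.min())

print("raw fit       :", np.round(fit(freq, VDD, T), 3))
print("normalized fit:", np.round(fit(freq, normalize(VDD), normalize(T)), 3))
```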
4.3 ANALYSIS OF VOLTAGE VARIABILITY
Because of the strong dependence of both circuit delay and power dissipation on power supply voltage, there has been much work dedicated to the modeling, simulation, and analysis of integrated circuit power delivery systems. These efforts have resulted in computer-aided design (CAD) tools that estimate the power supply voltage delivered to every component of a complete integrated circuit. In such an analysis, it is possible to take into account the on-chip power distribution network, the on-chip decoupling capacitance, the package parasitics, as well as the parasitics associated with the board and possibly even the power regulator. The reason for the existence of these power delivery analysis CAD tools is the complexity of the power grid. To get a sense for this complexity, consider the following facts. In the design of a high performance processor, it would not be unusual to dedicate 10% or so of the overall wiring resources on all metal levels to power delivery. This heavy investment in wiring is required because it is not uncommon for a modern high-performance multi-core processor to dissipate 100W at a 1V supply. Thus a series resistance of 1mΩ results in a voltage drop of 100mV, which is 10% of the voltage supply! As we saw in the previous section, that same 10% change in power supply voltage can cause a change of 6% in frequency and 18% in power dissipation. Let us assume that the processor is created in an 8-level metal process, occupies a 1cm × 1cm area, and uses wires 1µm wide at a pitch of 9µm (on average, since each level is likely to have slightly different width and pitch dimensions). This means that every metal level has about 10³ wires and 10⁶ nodes, assuming vias are created at
each intersection between metal wires on different levels. This results in a power grid circuit with 8 million nodes and 24 million resistors. As different components within an integrated circuit process data, thus drawing power from the power delivery (power grid) network, the voltage supply at all points of the network fluctuates. This time domain behavior can be quite complicated, especially with the recent introduction of various power reduction techniques such as clock gating [231] and power gating [232]. Furthermore, for a complex system that includes software-programmable parts, there will be a large dependence of instantaneous power on the actual software program and data being processed. The complexity of these interactions makes it quite difficult to determine the appropriate set of conditions under which to perform a power delivery variability analysis, and has resulted in most of the work in this area being split between two problems: (1) a detailed steady state (DC) problem [233, 234], and (2) a simplified time domain problem [235]. We now briefly discuss each of them. In steady state power delivery analysis, we restrict our attention to the DC voltage drop at each component in the circuit, and our goal is to analyze the full network in order to assess the statistical and spatial variations of the power supply. Simple engineering checks can be made to ensure, say, that all grid voltages are within specified bounds, and there have been attempts to formally show how this can be ensured under various DC loading conditions [236]. For the time domain problem, power supply variation tends to be dominated by the resonant interaction between the predominantly capacitive chip and the predominantly inductive package. These time domain interactions tend to be more global in nature, i.e. with a lower spatial frequency across the chip, and can thus be well estimated from simplified models of the system [237]. In the next section, we will focus on the steady state problem. The time domain version is similar, but introduces complexity somewhat beyond the scope of this work (see for example the work in [235]).
4.3.1 Power Grid Analysis
Consider the simple power grid illustrated in Figure 4.9. Power supply wires for Vdd and for Ground are assembled in an orthogonal grid across the various wiring levels, and are connected to the circuit components at the bottom and to the package power terminals at the top. In cases where multiple power supplies are used, such as the common case of a separate power supply for external I/O buffers, sub-grids may be formed in regions where such supplies are employed. The analysis of such sub-grids can proceed in a manner similar to that of the main grid. The first step in performing an analysis of the power grid is to create an equivalent circuit of the power grid described above; the commonly applied
Fig. 4.9. A representative on-chip power grid.
choice is to model each segment as a single resistor, resulting in the two dimensional resistive mesh portrayed in Figure 4.10. Due to the size and complexity of the power grid model presented above (recall that the power grid can easily contain millions of nodes), it is common (and certainly necessary) to separate the analysis problem at the boundary between the linear power grid and the non-linear active circuits connected to it. This is done by estimating the power requirement for each active circuit, and modeling that power requirement as a constant current load on the power grid. This power estimation can be quite involved, and many researchers have made important contributions to its understanding [238]. We will discuss the variability aspects of this power estimation in the next section. For now, however, we assume that this estimation process results in a constant current source.
Fig. 4.10. Resistive mesh equivalent of an on-chip power grid.
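To make the resistive-mesh formulation concrete, the sketch below assembles and solves the nodal equations G v = i for a small square grid with constant-current loads and a few ideal supply pads. The grid size, segment resistance, load current, and pad locations are all assumed values chosen to keep the example small; a production power grid analyzer handles millions of nodes and far more detailed models.

```python
# Illustrative sketch (not a production tool): a small resistive Vdd grid
# modeled as an N x N mesh of equal resistors, with corner supply pads and a
# uniform constant-current load at every node, solved by nodal analysis.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

N = 50               # 50 x 50 grid (2500 nodes); real grids have millions
R_seg = 0.05         # ohms per wire segment (assumed)
VDD = 1.0            # volts
I_load = 40e-6       # amps drawn per node (assumed uniform load)
pads = [(0, 0), (0, N - 1), (N - 1, 0), (N - 1, N - 1)]  # corner Vdd pads

idx = lambda r, c: r * N + c
g = 1.0 / R_seg
G = sp.lil_matrix((N * N, N * N))
I = np.full(N * N, -I_load)          # loads pull current out of each node

# Stamp the conductance of each horizontal and vertical grid segment.
for r in range(N):
    for c in range(N):
        for dr, dc in ((0, 1), (1, 0)):
            rr, cc = r + dr, c + dc
            if rr < N and cc < N:
                a, b = idx(r, c), idx(rr, cc)
                G[a, a] += g
                G[b, b] += g
                G[a, b] -= g
                G[b, a] -= g

# Model each pad as a very stiff connection to the ideal supply.
g_pad = 1e6
for r, c in pads:
    a = idx(r, c)
    G[a, a] += g_pad
    I[a] += g_pad * VDD

v = spla.spsolve(G.tocsr(), I)
print("worst-case IR drop: %.1f mV" % (1e3 * (VDD - v.min())))
```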
on power supply voltage. The circuit shown in Figure 4.11 is simulated with a fixed frequency while varying the power supply voltage from 2.25 to 2.75V and recording the average power supply current drawn in the third inverter in the chain.
Fig. 4.11. Circuit used to estimate dependence of power supply current on power supply voltage.
We find that the average current (in mA) can be modeled as:

Idd = −0.01623 + 0.08453 Vdd    (4.8)
Fig. 4.12. Fit of power supply current vs. power supply voltage.
The fit is illustrated in Figure 4.12. After the same style of normalization as performed above, we can rewrite the current as:

Idd = 0.195 + 0.021 vdd    (4.9)
This shows that the impact of power supply variation on current is linear and proportional, i.e. that a 10% change in Vdd translates to a 10% change in Idd (0.021/0.195 = 10.7%). Note from Figure 4.12 that Idd increases with increasing Vdd. Consider now what happens during a power grid simulation. If we take the value of Idd at nominal Vdd and apply it to the power grid, we will calculate a certain voltage drop, so in fact the voltage at various points in the grid will be strictly lower than Vdd. Because of the positive slope of the Idd vs. Vdd plot, this lower Vdd will actually result in a lower value of Idd. This natural overestimation results in a safety margin that practitioners use to guard against modeling errors and other possible sources of inaccuracy. However, it is possible to interpret Eq. 4.8 more accurately as a current source in parallel with a resistor. Such a model preserves the linearity of the power grid analysis problem (and thus the applicability of a variety of speedup techniques for its solution); it also increases the accuracy of the power supply voltage prediction. It is not in common use, however.
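As a worked illustration of this interpretation (using the fitted numbers of Eq. 4.8, which are specific to this example), the slope 0.08453 mA/V corresponds to a parallel conductance of about 84.5 µS, i.e. a resistance of roughly 11.8 kΩ, while the intercept corresponds to a constant current source of −0.01623 mA; evaluating the pair at the nominal 2.5 V supply gives 0.08453 × 2.5 − 0.01623 ≈ 0.195 mA, matching the normalized fit above. Stamping this conductance into the grid matrix keeps the analysis linear while letting each load current respond to the locally computed supply voltage.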
4.3.2 Estimation of Power Variability
In the previous section we showed why it is desirable to separate the power grid analysis problem into a linear problem involving a resistive power grid and constant (DC) current sources, and a non-linear problem involving the estimation of the current drawn during the operation of active circuits as a constant. Without this separation, the combined problem would be much too difficult to solve within a practical time. Our purpose in this section is not to explore the difficult problem of power estimation [238], but rather to try to understand what factors can cause variability in the power consumed by a circuit. We start by observing that an integrated circuit is composed of several distinct types of circuits. The first type is the static random access memory (SRAM), which is becoming an ever larger portion of modern processors and systems on a chip (SOC). SRAM will be discussed more fully later in this book. Suffice it to say, for now, that the power consumed by an SRAM is substantially constant and independent of the data stored. The second type is the circuitry for clock generation and distribution, which includes phase-locked loops, sector buffers, local buffers, and a variety of related components. Clocks consume a large proportion of total chip power, but the estimation of clock power is relatively straightforward since it only depends on the clock distribution and on any clock gating [231] applied. The third circuit type is random logic macros, which are usually composed of individual gates and created from higher level descriptions using automatic synthesis tools. The power consumed by such macros is typically very sensitive to the inputs to the macro and has historically been the most difficult to estimate. For the remainder of this section, we will focus our attention on estimating the variability in the power dissipation of random logic macros. As before, we will perform a simulation study of such a macro and use the results to draw insights as to the importance of two sources of variability: (1) the input to the macro (often referred to as input pattern), and (2) the manufacturing-induced variations in the parameters of the devices (MOSFETs) used to implement the macro. As a vehicle for this simulation study, we choose a relatively small public benchmark circuit from the ISCAS 1985 combinational circuit set (information about the circuit is available at: http://www.eecs.umich.edu/~jhayes/iscas.restore/c432.html). The circuit has 116 inputs and contains 160 gates. We mapped the circuit to IBM's 0.13µm CMOS technology using a simple cell library, and the resulting circuit contains a total of 760 MOSFETs. We simulated the circuit with 150 random inputs, selected such that the difference between one input pattern and the next was in about one fifth (20%) of the 116 inputs. This 20% is commonly referred to as the input activity of the circuit. The time between patterns was 1ns, corresponding to an operating frequency of 1GHz. We monitored the power supply current over the 150 random inputs; Figure 4.13 shows an example of the power supply current
waveform. Note the wide variation in the current peaks, from a typical value around 5mA to peaks at 20 to 25mA.
Fig. 4.13. Idd waveform for C432 example over 150 input samples.
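The random input sequence described above (consecutive patterns differing in roughly 20% of the 116 inputs) can be generated with a few lines of code. The sketch below is one plausible way to construct such a sequence; it is not the authors' actual test bench.

```python
# Generate 150 input patterns for a 116-input block such that consecutive
# patterns differ in roughly 20% of the bits (the "input activity").
# This mimics the stimulus described in the text; it is not the original
# test bench used by the authors.
import numpy as np

N_INPUTS, N_PATTERNS, ACTIVITY = 116, 150, 0.20
rng = np.random.default_rng(42)

patterns = np.zeros((N_PATTERNS, N_INPUTS), dtype=np.uint8)
patterns[0] = rng.integers(0, 2, N_INPUTS)
n_flip = int(round(ACTIVITY * N_INPUTS))        # ~23 bits toggle per cycle

for k in range(1, N_PATTERNS):
    flip = rng.choice(N_INPUTS, size=n_flip, replace=False)
    patterns[k] = patterns[k - 1]
    patterns[k, flip] ^= 1                      # toggle the chosen inputs

toggles = np.abs(np.diff(patterns.astype(int), axis=0)).mean(axis=1)
print("average per-cycle activity: %.1f%%" % (100 * toggles.mean()))
```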
In order to explore the dependence of the power on manufacturing variations, we performed the identical simulation using six distinct sets of IBM’s 0.13µm MOSFET device parameters, downloaded from the MOSIS web site at www.mosis.org, and corresponding to a time period from February to October 2005. To make the comparison between the impact of manufacturing variations and that of the input pattern, we post-processed the Idd waveforms to determine the peak current for each of the 150 patterns. This was done for each of the six sets of technology parameters, giving us six values for each of the 150 patterns. We computed the average, minimum and maximum value for each of the 150 patterns, and created the plot in Figure 4.14. From the figure, we observe that the change between one pattern and another is significantly larger than the range (minimum to maximum, denoted by the vertical whiskers on each point in the plot) of the current variation due to process settings for any given pattern. In order to quantify this empirical observation, we assemble the 150 x 6 matrix containing the peak per-cycle power supply current for each pattern and for each process setting. We then compute the mean of each row and column, giving us two vectors. The first is the vector µp , which denotes the mean current for each pattern over the process settings, and is of dimension 150. The second one is the vector µs which denotes the mean current for each
Fig. 4.14. Average and range (minimum to maximum) of Idd for 150 patterns applied to the C432 benchmark circuit.
process setting across all patterns, and is of dimension 6. From these two vectors, we compute two variability metrics: σp is the standard deviation of µp and was 3.88µA, while σs is the standard deviation of µs and was 0.52µA. This factor of 7.5 difference is expected from the plot above, and confirms that the input pattern plays a far stronger role in determining power dissipation than normal fluctuations in the manufacturing process. In order to allow comparison with previous examples, where we had explored the dependence of power supply current on power supply voltage and on temperature, it is useful to normalize σp and σs by the current mean in order to express them as percentages. The mean current was 9.98µA, which means that the variability caused by input pattern is of the order of 38%, while the variability caused by manufacturing variations is of the order of 5%. This compares with 18% for power supply voltage and 5% for temperature for the ring oscillator explored previously.
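The two variability metrics above are straightforward to compute from the 150 × 6 matrix of peak per-cycle currents; the sketch below shows the bookkeeping on synthetic placeholder data (the real values come from the SPICE runs described in the text).

```python
# Compute the pattern-wise and process-wise variability metrics from a
# 150 x 6 matrix of peak per-cycle supply currents. The matrix here is
# synthetic placeholder data; in the text it comes from SPICE simulation.
import numpy as np

rng = np.random.default_rng(7)
n_patterns, n_process = 150, 6
base = 10.0 + 4.0 * rng.random(n_patterns)          # pattern-driven spread, uA
skew = 1.0 + 0.05 * rng.standard_normal(n_process)  # mild process skew
i_peak = np.outer(base, skew)                       # shape (150, 6), in uA

mu_p = i_peak.mean(axis=1)     # mean over process settings, one per pattern
mu_s = i_peak.mean(axis=0)     # mean over patterns, one per process setting
sigma_p, sigma_s = mu_p.std(), mu_s.std()
mean_i = i_peak.mean()

print("sigma_p = %.2f uA (%.0f%% of mean)" % (sigma_p, 100 * sigma_p / mean_i))
print("sigma_s = %.2f uA (%.0f%% of mean)" % (sigma_s, 100 * sigma_s / mean_i))
```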
4.4 SYSTEMATIC ANALYSIS OF TEMPERATURE VARIABILITY
Temperature variability is similar to voltage supply variability in many respects. It arises due to the distributed nature of an integrated circuit, and due to the fact that some components dissipate more power than others. In the previous section we saw that the common power supply grid causes the voltage drop at one point of the grid to be a function of the power dissipation (and hence power demand) at nearby points on the grid. In a similar fashion, the common silicon substrate causes heat generated by power dissipated at one point to spread and cause a temperature rise at nearby points within the chip. Previously, we had found that temperature has a modest impact on circuit delay, having about a 4% impact on the frequency of the ring oscillator simulated in Section 4.2. Temperature also has an impact on metal resistivity, and hence the resistance of wires on a chip. The resistivity of a metal can be expressed as:

ρ = ρ0 + α (T − Tref)    (4.10)

where ρ0 is the resistivity at the reference temperature Tref, and α is the temperature coefficient. The ratio of α to ρ0 is an indication of the strength of the dependence of resistivity on temperature, and that ratio is about 1 to 2 parts per thousand for aluminum and copper, the two materials commonly used to fabricate on-chip wires. So here, again, we see that the impact of temperature on wire resistivity is relatively small. The impact of temperature on circuit leakage, however, is quite strong, as we saw earlier in Section 4.2. Research in the area of on-chip temperature variability had been relatively sparse when leakage was a small proportion of overall power demand. Recently, however, with the dramatic increase in leakage brought about by aggressive technology scaling, there has been renewed interest in temperature variability [239]. A discussion about temperature variability within an integrated circuit must start with the package in which the chip is placed. The package is essential in three different roles. First, it protects the integrated circuit from the environment. Second, it provides electrical connectivity between the integrated circuit and the external world, usually a printed circuit board on which the chip is mounted. Third, it provides thermal connectivity between the integrated circuit and the external world, thereby allowing heat generated within the chip to be safely dissipated. The wide variety of chip packages is far beyond the scope of this book. But in order to provide a framework within which to examine thermal variability, we will describe briefly one type of package, the plastic ball grid array (PBGA) commonly used for mid-power level integrated circuits. An example of such a package is illustrated in Figure 4.15. The figure shows the chip (lower right) connected to the package via gold bond wires, the package wiring, and the C4
Fig. 4.15. A Plastic Ball Grid Array (PBGA) package, showing the chip, the wires, and the C4 solder balls.
solder bumps used to connect the package to the board. It is probably easier to understand how the package looks from the simplified schematic cross-section illustrated in Figure 4.16. Armed with this understanding of how chips are mounted in packages, let us examine the manner in which heat flows in the system illustrated in Figure 4.16. To do that, we start with the general concept of thermal conductivity, a material property measured in W/(m·K) that describes the ability of a material to conduct heat. Table 4.1 shows the thermal conductivity of materials commonly occurring in the system illustrated in Figure 4.16.

Table 4.1. Thermal conductivity of materials commonly used in integrated circuits.
Material        | Thermal Conductivity in W/(m·K)
Copper          | 400
Aluminum        | 205
Silicon         | 150
Silicon Dioxide | 1
Fig. 4.16. Schematic cross-section of a chip mounted in a PBGA package and fitted with a heatsink.
We see that there is a wide range of thermal conductivities, and that silicon itself is quite a good thermal conductor, while silicon dioxide (which we experience in everyday life as glass) is essentially an insulator. For a component with a specific material and geometry, it is often convenient to determine the thermal resistance of the component, expressed in °C/W. This allows a quick determination of the temperature rise as a function of heat flow, and is often applied to heatsinks. In fact, much of the work on thermal design and analysis focuses on the design of efficient heatsinks, enclosures and fans to ensure thermal stability for an overall system. From a variability point of view, however, we are interested in a somewhat different problem localized to the chip, and focused on temperature variations within the chip itself. To do that, we will turn our attention to a much simpler thermal system illustrated in Figure 4.17. The equation describing the steady state temperature distribution within a uniform isotropic material is:
k ∇²T(x, y, z) + P(x, y, z) = 0        (4.11)
Fig. 4.17. Simplified integrated circuit thermal analysis schematic.
where T denotes the temperature at location (x, y, z), P denotes the power density at location (x, y, z), and k is the thermal conductivity. Note that in the general case k is a function of temperature, but we will overlook this dependence for our current purposes. It is possible to extend Eq. 4.11 to model the transient (time domain) behavior of the thermal system, but we will restrict our attention to the steady state behavior since the typical time constants associated with thermal systems are orders of magnitude larger (slower) than those associated with integrated circuits. The system illustrated in Figure 4.17 has adiabatic (or insulating) interfaces along the sides and the top, which means that no heat can flow in or out through those interfaces. Thus all heat generated at the top of the substrate must flow down through the heatsink (modeled by a distributed resistance) and on to the outside environment, which is considered to be of infinite heat capacity and therefore at a constant temperature. The heat diffusion partial differential equation expressed in Eq. 4.11 is commonly solved by spatial discretization followed by the approximation of the derivatives by finite differences. Consider, for example, the discretization illustrated in Figure 4.18; we can approximate the second derivative along the
Fig. 4.18. Local spatial discretization along x and z dimensions.
x direction as:
∂²T/∂x² ≈ (Ti+1,j,k − 2 Ti,j,k + Ti−1,j,k) / ∆x²        (4.12)
where the subscripts denote discretization along the x, y, and z coordinate directions. The y and z derivatives are approximated similarly. This results in a linear equation written for the temperature at one location (i.e. Ti,j,k) as a function of the temperature at neighboring locations (i.e. Ti+1,j,k, Ti−1,j,k and so on) and of the power generated at that location. In addition, points along the insulating boundary must satisfy boundary conditions, which can also be written as linear equations. For example, we can express the boundary condition along the left side of the chip as:
∂T/∂x = (T0,j,k − T1,j,k) / ∆x = 0        (4.13)
If we discretize the chip along the x, y and z directions into Nx , Ny , and Nz points, we will have a total of N = Nx ×Ny ×Nz points, and the overall system of linear equations will have N equations (one for each node). Such a system can be solved using specialized tools [243], but many have observed that there is a natural electrical equivalent to this thermal system where temperature can be modeled as voltage, power as current, and the thermal resistance as electrical resistance. This results in the equivalent circuit discretization illustrated in Figure 4.19, where (assuming a uniform spatial discretization) the resistances are
Fig. 4.19. Equivalent electrical circuit to thermal discretization in Figure 4.18.
calculated as:
Rx = ∆x/(k ∆y ∆z),   Ry = ∆y/(k ∆x ∆z),   Rz = ∆z/(k ∆x ∆y)        (4.14)
Such a circuit can easily be solved using SPICE [76]. Now that we have explained a procedure for estimating the temperature variations within a chip, we will perform a simple simulation study to examine the impact of several sources of temperature variation. The vehicle for this simulation example is a chip with an area of 1cm by 1cm and a thickness of 1mm. The chip is mounted on a heatsink with a total thermal resistance of 20°C/W. The number of grid points along x and y was 21, and the number along z (depth) was 11. When we simulate the chip with a 1W source in the bottom left corner, we get the temperature distribution at the surface of the chip that is illustrated in Figure 4.20. We see in this simple example that the largest temperature variation between different parts of the chip is 9°C. We performed the same simulation varying the size of the load from 0.1W to 2W and monitored the range of temperature as a function of the load. The result is shown in Figure 4.21, where it is clear that the temperature differential is linearly related to the load. This is expected from the linear nature of the thermal equivalent circuit. The slope of the line in Figure 4.21 is approximately 8.3°C/W and can be used to get a first order estimate of the temperature difference between two points on this chip vs. the difference in power density at the two points.
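The equivalent-resistance network of Eq. (4.14) can be assembled and solved with any linear solver, not only SPICE. The sketch below is a minimal Python/SciPy illustration of that procedure for the chip described above (1 cm × 1 cm × 1 mm, 21 × 21 × 11 grid, 20°C/W heatsink); the uniform distribution of the heatsink conductance over the bottom-face nodes and the treatment of boundary cells are simplifications, so the numbers it prints are only indicative.

```python
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

k = 150.0                      # W/(m*K), thermal conductivity of silicon
Lx = Ly = 1e-2                 # 1 cm x 1 cm die
Lz = 1e-3                      # 1 mm thick
Nx = Ny = 21                   # grid points along x and y
Nz = 11                        # grid points along z (depth)
R_sink = 20.0                  # total heatsink resistance, degC/W

dx, dy, dz = Lx / (Nx - 1), Ly / (Ny - 1), Lz / (Nz - 1)
Gx, Gy, Gz = k * dy * dz / dx, k * dx * dz / dy, k * dx * dy / dz  # 1/Rx, 1/Ry, 1/Rz

def idx(i, j, kk):             # flatten the 3-D grid index
    return (kk * Ny + j) * Nx + i

N = Nx * Ny * Nz
G = lil_matrix((N, N))         # thermal conductance matrix (temperature ~ voltage)
P = np.zeros(N)                # injected power (heat ~ current)

def stamp(a, b, g):            # conductance g between nodes a and b
    G[a, a] += g; G[b, b] += g
    G[a, b] -= g; G[b, a] -= g

for kk in range(Nz):
    for j in range(Ny):
        for i in range(Nx):
            n = idx(i, j, kk)
            if i + 1 < Nx: stamp(n, idx(i + 1, j, kk), Gx)
            if j + 1 < Ny: stamp(n, idx(i, j + 1, kk), Gy)
            if kk + 1 < Nz: stamp(n, idx(i, j, kk + 1), Gz)
            if kk == 0:        # bottom face: heatsink to ambient, split uniformly
                G[n, n] += 1.0 / (R_sink * Nx * Ny)

P[idx(0, 0, Nz - 1)] = 1.0     # 1 W source in one corner of the top (device) surface
T = spsolve(G.tocsr(), P)      # temperature rise above ambient, degC

top = T.reshape(Nz, Ny, Nx)[-1]
print(f"peak rise {top.max():.1f} degC; spread across the top surface "
      f"{top.max() - top.min():.1f} degC")
```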
Fig. 4.20. Temperature distribution on the surface of a 1cm x 1cm x 1mm chip in response to a 1 W source in the lower left portion of the chip.
Armed with this temperature difference, we can then evaluate its impact on circuit delay, circuit leakage, and wire resistance. To verify the simple model we created, we performed a Monte Carlo analysis where we simulated our chip again with four distinct heat sources located at the four corners, and with each of the loads drawn randomly from the uniform distribution on [0, 1] W. For each of these simulations, we measured: (a) the temperature difference between the hottest and coldest locations, and (b) the power density difference between the highest and lowest of the four sources. The results are plotted in Figure 4.22. A least squares fit to the data in the plot has a slope of 8.16°C/W, which is quite close to the 8.3°C/W predicted earlier and shows that this type of simple analysis is indeed valid for making engineering approximations.
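The Monte Carlo sampling and the least-squares extraction of the sensitivity can be sketched as follows. The 4 × 4 coupling matrix used here is a hypothetical stand-in for the full grid solver (its entries are invented for illustration); only the sampling and fitting steps mirror the analysis described above.

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-in for the full thermal solver: a hypothetical coupling matrix (degC/W)
# giving the temperature rise at each corner per watt applied at each corner.
R = np.array([[10.0,  2.0,  2.0,  1.0],
              [ 2.0, 10.0,  1.0,  2.0],
              [ 2.0,  1.0, 10.0,  2.0],
              [ 1.0,  2.0,  2.0, 10.0]])

dT, dP = [], []
for _ in range(500):                      # Monte Carlo over random corner loads
    p = rng.uniform(0.0, 1.0, size=4)     # four loads drawn from U[0, 1] W
    t = R @ p                             # corner temperature rises by superposition
    dT.append(t.max() - t.min())          # hottest minus coldest corner
    dP.append(p.max() - p.min())          # highest minus lowest load

slope, intercept = np.polyfit(dP, dT, 1)  # least-squares fit of dT versus dP
print(f"fitted sensitivity: {slope:.2f} degC/W")
```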
Fig. 4.21. Temperature range vs. applied heat load for simple chip example.
Fig. 4.22. Temperature range vs. loading range for the Monte Carlo analysis of the simple chip example.
4.5 OTHER SOURCES OF VARIABILITY We close this chapter with a review of some other forms of environmental variability for which we did not attempt to provide a full treatment. One of the most widely studied such behaviors is the interconnect capacitive coupling, where a signal on one wire can capacitively or inductively couple to another wire causing an unwanted signal there [240]. Technology scaling has caused an increase in wire density and in wire cross-section aspect ratio, which has made capacitive noise a major source of errors in integrated circuits. Similarly, increasing operating frequencies have caused an increase in on-chip inductive coupling. Much has been published about this area and we merely note here that these phenomena can be thought of as a form of environmental variability. Another significant coupling mechanism is the substrate coupling, where switching in one region of a chip can resistively couple through the common substrate to cause unwanted noise, to which a sensitive analog circuit can be highly susceptible [241]. Though widely researched, such coupling can be thought of as performance variability resulting from the environment in which the chip operates. An important mechanism that can be thought of as affecting the environmental parameters is the Single Event Upset (SEU), where an energetic particle passing through one or more devices can cause charge creation and, possibly, a change in the state of a memory or a latch [242]. Technology scaling has reduced the amount of charge held at a node making SEU a phenomenon of growing impact. Though traditionally the SEU is treated as a reliability problem, it can also be thought of as a source of variability that comes from the environment.
4.6 SUMMARY In this chapter we reviewed the sources, impact, and analysis of environmental variability. We focused on two major types: power supply voltage variations and temperature variations. For each of these, we showed how this variability can be analyzed, how its impact on the operation of a circuit can be assessed, and we presented relevant illustrative examples to help the reader.
5 TEST STRUCTURES FOR VARIABILITY
No amount of experimentation can ever prove me right; a single experiment can prove me wrong. Albert Einstein
In order to arrive at a quantitative understanding of most of the variability mechanisms discussed in Chapters 2 and 3, they need to be characterized empirically for a specific semiconductor process. A large number of features of a semiconductor process influence the magnitude and the specific behavior of variability mechanisms, making it impossible to predict them from first principles. The growing number and complexity of variability mechanisms increase the importance of methods for their empirical characterization. In this chapter, we study the measurement techniques for the characterization of variability. This is followed by a discussion of the statistical analysis and modeling methods crucial for properly interpreting the results of the measurements.
5.1 TEST STRUCTURES: CLASSIFICATION AND FIGURES OF MERIT
The measurements of the various parameters are carried out with the help of on-chip test structures. The test structures are circuits that are added onto a wafer to help control, understand, and model the behavior of the active (MOSFET) and passive (interconnect, via) components. The test structures can be classified according to the ultimate objective of the measurements. We can distinguish two classes of test structures. (a) Test structures for process control: These are test structures that are used for monitoring and controlling the fabrication line. Historically, these were small circuits placed in the scribe line (the area between adjacent chips on a wafer that is sacrificed when the wafer is sliced up into individual chips); these circuits consisted of simple test
structures that allow the measurement of current and voltage (I − V ) characteristics of MOSFETs, and of the resistivity of wires and vias. More recently, they have become more complex as the need to monitor various systematic phenomena has increased. These structures exist on all wafers fabricated in a line, and are therefore capable of modeling the history of the line. (b) Test structures for modeling: These tend to be larger structures that are part of a full-reticle test chip, typically composed of multiple such test structures. These test structures will typically have a much richer variety of structures, and are used to generate the fundamental data needed to create models of the fabricated components. These test structures (also commonly referred to as test chips) are fabricated infrequently, typically early in the life of the line. In evaluating test structures it is useful to consider five major figures of merit: (1) Number of entities: In order to increase the statistical quality of collected data, the test structures need to collect a large amount of information. (2) Cost: Cost includes area and test time that need to be small in order to control the overall cost and ensure that the structure gets tested often. (3) Generality or generalizability: These refer to the ability of the structure to help predict the performance of something other than itself. (4) Accuracy: The test structures are ultimately characterized by the physical resolution of measurements that they allow. (5) Specificity: Some measurements provide an unambiguous indication of the behavior of a specific physical parameter (e.g. resistivity of a line); other types of measurements convolve several factors in producing a response (e.g. the ring oscillator frequency). Test structures can also be classified based on the type of measurements they perform: (1) Electrical (transistor) measurements: These test chips contain arrays of transistor structures permitting the measurements of I − V characteristics of transistors, as well as resistances of the various components. Simple resistivity measurements as well as four-point measurements fall into this category [111]. All above measurements are analog in nature, and their measurement is subject to well-known analog signal measurement challenges, such as noise, bandwidth limitations, thermal effects, and instrumentation drift. Transistor arrays [113][116] tend to allow high measurement accuracy, and are effective in terms of their generality: because of that they are used to generate the BSIM models which is then relied upon to predict the performance of all other circuits. However, transistor arrays tend to occupy a large area, have a fairly small number of distinct devices, and have a low measurement throughput. The attractiveness of transistor arrays as test structures has led to recent efforts [113] that aim to come up with new ways to create optimized structures with fast measurement, sufficient replication, and good generality by relying on multiplexed transistor arrays with high-density access to multiple devices. (2) Digital (frequency) measurements: An alternative to the first class of measurements is to convert an analog signal to a more robustly measurable quantity since doing so simplifies the requirements of the test equipment and environment. Frequency measurements using on-chip circuitry can help with signal-to-noise and bandwidth problems and provide
a minimally invasive probing strategy. Ring oscillators have been successfully used to measure a great number of different physical parameters. Ring oscillator-based approaches [114] are effective for test time, and are good general purpose indicators of digital performance, even though they typically cannot help predict the performance of dissimilar circuits (e.g. a PLL). An additional factor is that frequency measurements have low specificity because frequency is impacted by a large number of factors, confounding the impact of specific parameters. We now provide a more detailed discussion of the various test structures classifying them by the type of information they supply.
5.2 CHARACTERIZATION USING SHORT LOOP FLOWS For process characterization, specialized test structures are often needed to discover the particular variational dependencies. This is coupled with a need to collect very large amounts of information in order to increase statistical significance of data. Electrical (resistive or capacitive) measurements are the least expensive methods for collecting large amounts of information. They provide a rapid, low-cost end-of-process metrology for both material properties and specific process information. In these tests, electrical measurements of current, voltage, or charge imply other characteristics. With sufficient ingenuity, they can be used to extract information about a wide range of parameter behaviors. For example, electrical measurements have found the following uses: resistance measurements to determine the line width; resistance measurements to find the placement of the feature; resistance measurements for metal step coverage over topography; capacitive measurements for wire-to-wire spacing and inter-level dielectric thickness over patterned lines; resistance measurements for wire edge-taper width and layer-to-layer alignment [110] [111] [112]. Electrical measurements have been shown to have sufficient accuracy for processcontrol applications and can provide good resolution in terms of geometric values of the parameters [110]. For the process characterization applications, we want to be able to electrically measure and characterize structures resulting from a single process step. This would enable us to precisely assign the cause of error or deviation to a process step. However, to make electrical measurements possible, the structures must be fabricated through at least three steps in a process sequence: film deposition, lithographic patterning, and etching [108]. This leads to some statistical confounding but, as we will see in the next chapter, statistical filtering techniques can be used to analyze this information and derive the individual error contributions from the process steps [121]. Increasing the specificity of measurements, as well as reducing their cost, requires the test structures to be as simple as possible. This is the idea behind the “short-loop”
characterization flows whose basic objective is to make the structures as simple as possible and as informative as possible. The structures must be designed in a way that enhances their sensitivities to the relevant parameters. After the data is collected, the decomposition into error components contributed by each process step is carried out. Process characterization based on short loop sequences has been used to characterize the behavior of both front-end and back-end processes [108]. We discuss some of the important test structure design issues addressed by these experiments below. The complexity of variability trends and dependencies makes the design of test chips a challenging task. This is especially so due to the interaction between the layout (design) and the manufacturing sequences. In order to properly understand such interactions, test structures must contain a large number of possible layout and neighborhood configurations. The layout-process interactions affect both the front-end and back-end flows. In the front end flow, we are primarily interested in characterizing the dependencies introduced by lithography and etch into the patterning of the polysilicon layer. Specifically, the proper capture of Lgate behavior requires good characterization of the optical proximity behavior, the impact of asymmetry of the layouts, the impact of shape orientation, the spatial across-reticle behavior as well as the impact of the surrounding poly density [108][117]. The within-field Lgate variability has recently become significant to characterize. This component of variation is largely systematic, rather than random, resulting in a distinct spatial Lgate trend-surface. The systematic intra-field variation of gate Lgate is impacted by many factors that include: stepper induced variations (illumination, imaging non-uniformity due to lens aberrations), reticle imperfections, and resist induced variations (coat non-uniformity, resist thickness variation). For full characterization of pattern-dependent variability, in [117] all the gates are classified into 18 categories depending on their orientation in the layout (vertical or horizontal), and the spacing to the nearest neighboring gate. To capture the coma effect, the relative position of the surrounding gates, i.e. the neighbor on the left vs. the neighbor on the right, was also distinguished. The Lgate variability was characterized using electrical measurements. On the 22x22mm2 reticle field a grid of 5x5 test modules was placed to discover the spatial structure of intra-field variability. Each module contained long and narrow polysilicon resistors, with a variety of distances to adjacent polysilicon lines. The polysilicon resistors were manufactured with the same process steps as polysilicon gates, including poly CVD, resist coating, exposure, development, and gate definition by plasma etching. Special attention was paid to minimizing the confounding of Lgate variability due to photolithography with other sources of variation. One possible source of undesired variability is the variation in the sheet resistance across the reticle field that would confound the measurement results. In order to eliminate this component, each module contained a test structure to calibrate the sheet resistance. Another possible source of variation is the silicide resistance, which is known to cause a large
standard deviation in the sheet resistance of thin lines. In order to avoid this source of variation, the polysilicon resistors were not silicided. Finally, a third source of variation would be due to variability in the widths of lines on the test chip mask. This component of variation could not be eliminated and is confounded with the measurement results. A series of F -tests were carried out to verify that the generated topological maps of Lgate variability over the reticle field are statistically significant, i.e. that the level of variation is large in comparison with the random Lgate noise. These spatial maps are shown in Figure 2.7. Analysis showed that for all the gate categories, the extracted Lgate maps are in fact statistically significant. In other words, variation within the reticle field cannot be modeled as purely random. The Lgate maps for some categories exhibit quite distinct spatial behaviors. This is due to the interaction between the global lens aberration, and the pattern-dependent optical proximity effect. The result is that, at least for some gate categories, distinct spatial models have to be used for modeling and correction. Data confirms that regardless of spatial variability, a statistically significant bias exists between the Lgate of gates belonging to different categories. This means that the polysilicon lines will have a systematic difference based on the location of a feature, its proximity to its neighbors, its orientation, and even the relative spacing of its features (e.g. the asymmetry effect). The complexity of systematic dependencies in the back-end process also requires a sophisticated test structure design. For back-end short loop characterization [109] introduces methods for the rapid characterization of the CMP process, empirical modeling, and comparison of pattern dependencies as a function of processes, consumables, or equipment options. This is achieved in two ways: First, each mask is targeted toward an individual source of pattern dependent variation. To this end, four separate single-layer masks have been designed to probe structure area, pattern density, line pitch, and structure aspect ratio effects, respectively. The masks are shown in Figure 5.1. These characterization masks can be utilized to investigate pattern dependencies in a variety of CMP process applications, including traditional back-end, trench isolation, and damascene or inlaid metal processes. Pattern-dependent issues include both “dishing” of features being polished and “erosion” or regions with lower or higher density of features. Second, the masks support simplified metrology tools and techniques including optical film thickness and profilometry measurements. The area mask has patterned structures with different areas across a variety of pattern densities achieved by altering the fill pattern inside each structure. In addition to the area structures, there are also structures to test the role of geometric orientation (horizontal lines versus vertical lines). The pitch mask is the second mask. The density of each structure is fixed at 50% (equal linewidth and linespace), and the pitch is varied from 2µm to 1000µm for a total of 36 structures for each die. There are also spatial replicates for many of the structures so that pitch effects can be separated from spatial location
Fig. 5.1. Mask design for the ILD test chip.
effects. In the density mask, the pattern density (the ratio of raised metal area in each structure to the total area of each structure) is varied systematically from 4% to 100%. The aspect ratio (perimeter/area) mask is designed to explore the role of aspect ratio (the ratio of the length of the structure to the width of the structure) and the ratio of perimeter to area. This mask targets any systematic edge/corner effects which may be present. The results of the experiments are captured in Figure 5.2, where a range of ILD thickness dependencies on area, density, pitch, and aspect ratio can be seen. So far we only mentioned the electrical capacitance and line resistance measurements that can be used to investigate and identify the potential factors contributing to interlevel dielectric (ILD) thickness nonuniformity. However, the masks designed for electrical probing require several masking steps,
Fig. 5.2. The systematic dependencies discovered through the ILD test chip experiment. Data is shown for two different wafers.
with the resulting confounding of data, and thus do not allow the immediacy of observation that is achievable by some other metrology techniques. Beyond electrical measurements, several other metrology tools and techniques can be used for experimental analysis, including optical interferometry and profilometry techniques as well as atomic force microscopy (AFM) and scanning electron microscopy (SEM). Optical interferometry is a commonly used technique for directly measuring film thicknesses. It offers reasonably high throughput as well as an absolute thickness measurement assuming the tool is properly calibrated. Automated optical metrology is limited to structures with linewidths greater than 10µm. For smaller structures (down to about
4µm), optical metrology can still be used, but only in a manual mode and with less reliable results. In profilometry, a sharp stylus is dragged across the surface of interest, and deflections of the stylus are measured. Profilometry can be used in three-dimensional (3-D) mode to generate an entire die map with high throughput, or in two-dimensional (2-D) mode to generate planarization information over centimeter-scale distances. In profilometry, measurements are susceptible to stage tilt and bias as well as wafer bow and warp. This problem can be compensated for by using a combined optical/profilometry technique: several points are selected on the die which are measured using both optical interferometry and profilometry. AFM and SEM are also useful metrology tools for use with the characterization masks. In each case, detailed information about planarization can be obtained, and small structures which cannot be measured using optical interferometry can also be examined. Since the effective throughput of these techniques is extremely low, the application of these tools is limited to only a few key measurements. An additional drawback of SEM techniques is that they are destructive.
5.3 TRANSISTOR TEST STRUCTURES Test structures for statistical MOSFET characterization rely on the traditional test structures for characterizing MOSFET I − V behavior. The challenge is in performing this characterization efficiently. The ability to collect very large amounts of information is essential. It is especially challenging to characterize intra-chip variability since the test structures in this case have to be placed very densely with each device being measured individually. Recently an on-chip current-voltage characterization system that allows for rapid characterization of a large, dense array of multiplexed devices, eliminating the effects of switch resistances and allowing for very high-resolution current measurements from 100 nA (minimum resolvable current) to 3 mA (full-scale range) was demonstrated [113]. The block diagram of the on-chip measurement system is shown in Figure 5.3. It measures the current-voltage characteristics of the associated transistor array shown in Figure 5.4. Test structures for this array include devices of various sizes, of varying orientations (horizontal or vertical), in the presence or absence of parallel dummy poly, and with gates covered or uncovered by first-level metal. The layout is the transistor array, containing 80 x 20 NMOS devices with a total area of 2.8 mm2 in a 0.25µm CMOS process. A device-under-test (DUT) is selected for measurement with row-select and column-select scan chains. The gate voltage is applied directly to the DUT, while a force-sense technique is used to apply the drain voltage. Separate Force and Sense leads connect to each transistor channel in the array. This allows the current mirror to mirror the device current (isense) as itest, while ensuring, through negative feedback of the current mirror, that the drain voltage of the selected transistor is at vforce despite voltage drops across the switches. The cascoded current mirror is necessary
to boost the output resistance of the mirror and avoid significant gain errors. Data conversion is implemented with an integrating amplifier and high-gain comparator. The ADC conversion has a linearity which exceeds 10 bits for all input current ranges and has 8 bits of absolute accuracy including current mirrors [113].
Fig. 5.3. Architecture of the test chip for efficient I-V characterization of large transistor arrays.
A much higher spatial resolution of measurements can be achieved if an already density-optimal circuit, such as SRAM, is used for direct measurements of process parameters. In addition, SRAM circuits are known to be especially vulnerable to variations. Therefore, characterization of their behavior under variability is important. Here we consider a test structure for statistical characterization of intrinsic parameter fluctuations in MOSFET devices [116].The test structure features a large array of densely populated SRAM sized devices. It allows fast and precise measurement of electrical characteristics of each individual device. Figure 5.5 shows the schematic of the test structure. The structure contains a 1250µm × 110µm array of small sized devices arranged in an individually addressable fashion, and was fabricated in a 65nm SOI process. The array contains a total of 96,000 devices placed in 1,000 columns, with 96 devices in each column. To minimize parasitic effects, the gate-line and the drain-line of each column can be driven from both the top and bottom end of the column. The small height of the structure ensures that the worst case parasitic drop in a column line does not exceed 1 mV. The gate and drain-lines can also be sensed from both ends, which enables the measurement of voltages at the output of the column drivers. The architecture of the test chip and the array is shown in Figure 5.5. Once a column is selected, the current steering circuit steers the current of the device under test (DUT) to the measuring pin and the currents of the remaining rows are steered to the sink pin. The current steering devices are made of thick oxide to reduce the gate leakage current. These steering devices lie in series between the source
Fig. 5.4. The organization of transistor arrays.
terminal of the DUT and the ground, causing the row voltages to rise slightly above the ground. The parasitic resistance of the wire also adds an additional resistance between the source node of the DUT and the steering device. The impact of channel doping on random dopant fluctuation is studied by including devices with different Vth implants in the array [116]. The uniquely high spatial resolution of measurements enabled by the described test chip architecture has been successfully used to study the fine-grain spatial behavior of Vth . The experiment has concluded that while there are systematic within field variations, the spatial correlation of the truly stochastic component is negligible. We discuss the statistical techniques that enabled such analysis in the next chapter of the book.
5.4 DIGITAL TEST STRUCTURES Test structures based on digital (frequency) measurements offer an important alternative to other characterization strategies considered so far. Such digital
Fig. 5.5. The SRAM-based test chip for Vth spatial characterization.
measurements are attractive for a number of reasons: the cost, the noise immunity, and the ease of automation [114] [115]. A large number of different ring oscillator (RO) structures can be instantiated, and their frequency can be measured through the multiplexing circuitry. RO frequency measurements can be used to characterize both device and interconnect properties. Essentially, we need to solve the inverse problem: given the measurements of a function (frequency), we need to find the values of the parameters (device and interconnect geometries), with the complication that there are many possible mappings from the function space to the parameter space. In practice, the problem is solved by (i) differentiating between different capacitive components (Cplate, Ccoupling, Cfringe) and (ii) decoupling the capacitive and resistive impact of geometries on delay in some way. This can be done by laying out a structure in a way that minimizes the delay dependence on, say, interconnect resistance by making the resistance negligible and extracting the capacitance. Then, a delay measurement based on RC delay can be used to compute the resistance (given that the capacitance value is already known). There is a finite resolution to the RO frequency measurements (e.g. on the order of 100 kHz). The resolution sets the limit on the ability of the method to detect small variations in the values of parameters.
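As a concrete, first-order illustration of this two-step decoupling, the sketch below backs out a wire capacitance and then a wire resistance from three hypothetical ring-oscillator frequencies. The assumed driver resistance and the 0.69·RC delay model are rough approximations introduced for illustration; they are not taken from the test chips discussed in this chapter.

```python
# First-order extraction of wire C and R from ring-oscillator frequencies.
# A minimal sketch: the frequencies and effective driver resistance below are
# hypothetical, and the 0.69*R*C stage-delay increment is a coarse model.
N_STAGES = 9
R_DRV = 2.0e3        # assumed effective driver resistance, ohms

def stage_delay(freq_hz):
    return 1.0 / (2.0 * N_STAGES * freq_hz)   # f = 1 / (2 * N * t_stage)

f_ref = 400.0e6      # unloaded (FEOL) ring oscillator
f_cap = 250.0e6      # stages loaded by a wide, low-resistance wire (C only)
f_rc  = 230.0e6      # stages loaded by the same C through a narrow wire (R and C)

t_ref, t_cap, t_rc = map(stage_delay, (f_ref, f_cap, f_rc))

c_wire = (t_cap - t_ref) / (0.69 * R_DRV)     # extract capacitance first
r_wire = (t_rc - t_cap) / (0.69 * c_wire)     # then attribute the extra delay to R
print(f"C_wire ~ {c_wire * 1e15:.0f} fF, R_wire ~ {r_wire:.0f} ohm")
```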
In [115] the authors demonstrate a test chip for ring oscillator frequency measurements that is sensitive to device and interconnect parameters. The test chip allows the study of different layout practices to understand variation impact. The test chip enables relatively simple measurement and evaluation of timing variation resulting from process and layout-induced variation. The fundamental test structure is a nine-stage ring oscillator. The basic ring oscillator structure is shown in Figure 5.6. A frequency-divided readout of the RO frequency serves as a clearly defined measure of circuit speed. The key element of the test structure methodology is a scan-chain architecture enabling independent operation and readout of replicated ring oscillator test structures. In the test chip, which was designed and fabricated in 0.25µm technology, over 2000 ring oscillators per chip can be measured using simple digital control and readout circuitry interfaced to the packaged chip. Additional test chip design elements include separate ring oscillator and control logic power grids, so that the frequency dependence of the ring oscillators on power supply voltage can also be measured, enabling separation of channel length and threshold voltage variation contributions [115]. Each ring oscillator structure is made sensitive to a particular device or interconnect variation source. Front-end-of-line (FEOL) or device variation sensitive structures enable examination of channel length variation as a function of different layout practices, including gate length (finger width), spacing between multiple fingers, orientation (vertical or horizontal), and density of poly fill. The FEOL ring oscillators are ROs consisting only of inverters with no additional load between inverter stages. The stage layouts that correspond to the different layout practices are shown in Figure 5.7.
Fig. 5.6. The basic structure of the digital test chip is the 9-stage ring oscillator circuit.
Back-end-of-line (BEOL) or interconnect sensitive structures enable examination of variation in dielectric or metal thickness at different metal levels and impact on interconnect capacitance. These test structures consist of ROs with metal load dominating the output frequency. The interconnect style
Fig. 5.7. The test chip contains ring oscillator layouts with different poly-to-poly spacings and different poly densities.
is chosen to accentuate a variation of a specific capacitance, i.e. fringing, planar, and coupling. This is shown in Figure 5.8.
Fig. 5.8. The test chip contains ring oscillator layouts that emphasize different wire capacitance components.
The results of the experiments and of the data analysis are summarized in Figure 5.9 and 5.10. Figure 5.9 shows the dependence of the ring oscillator speed on the proximity or spacing between fingers. We observe that mean frequency is decreasing as the spacing between the polylines increases. This specific result can be used to conclude that the Lgate increases as the spacing increases and to quantify the strength of the dependence. The effect of the proximity is quite noticeable and is comparable in magnitude to the wafer-level variation (0.2MHz vs. 0.3MHz). At the same time, variance is not strongly impacted by line spacing, i.e. the within chip variation is the same for different RO spacings. Figure 5.10 shows the dependence of the ring oscillator speed on poly density. We observe that mean frequency is decreasing for higher surrounding global poly density. The effect is substantial, resulting in a ∼ 0.1MHz decrease in the mean frequency. Similar to the previous example, the variance is not strongly impacted by global poly density. In summary, the results of this extensive experiment indicate that the RO-based test chip can detect susceptibility to variations related to layout. Scan-chain control architecture can be used to obtain replicated information to extract sources of variation. All the important variation sources can be studied. Both chip-to-chip and within chip spatial trends can be mapped. The
results also indicate that the interconnect variations are more challenging to detect than the FEOL variations using frequency based measurements.
5.5 SUMMARY
Development of an effective statistical design methodology requires a thorough understanding of the patterns of variability. Both systematic and random patterns of variability need to be characterized via empirical measurements. Variability characterization requires the collection of a very large amount of data, which calls for test structures that are inexpensive in terms of area and test time. In this chapter we discussed the figures of merit for assessing different measurement strategies. We also provided a thorough discussion of the three types of test structures classified by the type of information they supply: the test structures for process characterization based on short loop flows, the test structures based on I − V measurements in transistor arrays, and the test structures based on digital measurements of ring oscillator frequency. In the next chapter, we discuss statistical analysis and modeling methods crucial for properly interpreting the results of the measurements.
Fig. 5.9. Ring oscillators exhibit dependence of frequency on the proximity or spacing between fingers.
Fig. 5.10. Ring oscillators exhibit dependence of frequency on the poly density.
6 STATISTICAL FOUNDATIONS OF DATA ANALYSIS AND MODELING
Facts are stubborn things, but statistics are more pliable. Mark Twain
A rigorous utilization of statistical design and design for manufacturability is impossible without a certain level of statistical rigor. The techniques required are often different from those that are familiar to most circuit designers. In this chapter, we discuss some specific statistical concepts useful for statistical design. Along the way we will attempt to develop a more refined view of what variability is, and how to describe it in useful, formal, and unambiguous terms. Statistical techniques are required for understanding the spatial and temporal signatures of variation components, for decomposing the variability into causal and spatial sources, and for separating systematic variability from random. Misidentification of the systematic component as random can lead to dramatically higher yield losses compared to when it is truly random. Because causal sources of uncertainty at different scales are distinct, knowing something about one scale does not tell us much about the variation at a different scale. Data collected in the course of experiments typically confounds multiple scales and signatures of variability, so that it may look “random” if taken as a whole. Thus, separation of repeatable, deterministic (systematic) components from a raw data set is crucial for proper analysis and design. Let us begin by defining the useful notion of parameter variability since the usage of the term is sometimes overloaded. The notion of uncertainty, or unpredictability, is clearly central to our discussion of variability. The important aspect is that we cannot predict a value of a parameter exactly. This may be either because the behavior is (1) unknown or un-modelable, or (2) because it is truly random. The example of the first sense of variability is environmental variability of supply voltage which is entirely deterministic, and could be predicted given infinite computational resources. The example
of the second sense of variability is threshold voltage variation due to random dopant atoms which is determined by a fundamentally stochastic nature of the dopant implantation process. On the other hand, to include known functional dependencies in the discussion of variability is unreasonable.
6.1 A BRIEF PROBABILITY PRIMER
It is clearly impossible to do justice to complex probabilistic notions within such a short overview. Our objective is to briefly introduce the most useful concepts and help the reader to identify the areas of probability theory that need to be studied rigorously. Another objective is to establish common definitions of the terms that we use in the book. We start by defining a random variable as any function that assigns a numerical value to each possible outcome of some experiment [118]. If the set of possible values is countable, then the random variable is a discrete random variable. Examples of discrete random variables include the number of dopant atoms in a region of a FET channel. If the possible values fill an entire interval, then the random variable is continuous. An example of a continuous random variable is the value of the gate length Lgate. A discrete random variable, X, is characterized by the probability distribution, which is the set of the possible values of X and their probabilities. We now describe several specific discrete random variables that are encountered in IC manufacturing and yield analysis. A common case is where X can have one of two values. This is a Bernoulli random variable. The Binomial random variable describes the number, X, of successes in a series of n identical independent trials that have only two possible outcomes, and succeed with the same probability p. Binomial random variables are often used in estimating catastrophic yield loss due to random defects. For example, if every module in a system has the same probability of failing, then the number of working modules is a Binomial random variable. The probability distribution of a Binomial random variable is:
P(X = x) = (n choose x) p^x (1 − p)^(n−x),   x = 0, 1, . . . , n        (6.1)
with mean µ = np and variance σ² = np(1 − p). When modeling the number of events that have occurred within a given interval of time or space, the Poisson random variable can be used. For example, the number of dopant atoms in a given region of a FET channel can be modeled as a Poisson random variable. Let the mean number of events in a given interval be λ. Then, the probability distribution of a Poisson random variable is:
P(X = x) = e^(−λ) λ^x / x!,   x = 0, 1, 2, . . .        (6.2)
with mean µ = λ and variance σ² = λ.
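These two distributions are available in standard statistical libraries. The short example below, with made-up failure-probability and dopant-count numbers, shows how the corresponding probabilities might be evaluated in practice.

```python
from scipy.stats import binom, poisson

# Binomial: probability that at least 98 of 100 nominally identical modules
# are functional when each fails independently with probability p_fail = 0.01.
p_fail = 0.01
print("P(at least 98 of 100 modules work) =", binom.sf(97, 100, 1 - p_fail))

# Poisson: dopant count in a small channel region with an illustrative
# average of lam = 40 atoms.
lam = 40
print("P(fewer than 30 dopant atoms) =", poisson.cdf(29, lam))
print("mean, variance =", poisson.mean(lam), poisson.var(lam))
```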
Several continuous random variables are encountered in statistical design applications, including normal, uniform, and log-normal random variables. A continuous random variable is described by a probability density function (pdf) or cumulative distribution function (cdf), which is the integral of the pdf. The most common continuous random variable is the normal, or Gaussian, random variable. It is common to assume that a great variety of physical parameters follow the normal distribution. The reason for the prevalence of normal behavior is the Central Limit Theorem, which proves that the sum of arbitrarily distributed independent random variables asymptotically converges to a normal distribution as the number of variables increases. In reality, convergence occurs relatively fast – for 10-15 components. This justifies describing the variation of many physical process parameters encountered in IC manufacturing as normal. The pdf f(x) and cdf F(a) of the normal random variable are:
f(x) = (1/(σ√(2π))) e^(−(1/2)((x − µ)/σ)²)        (6.3)
P{x ≤ a} = F(a) = ∫_{−∞}^{a} (1/(σ√(2π))) e^(−(1/2)((x − µ)/σ)²) dx        (6.4)
The cdf of a normal random variable has no closed-form solution and is typically tabulated as Φ(·), where Φ(a) = P{z ≤ a} and z ∼ N(0, 1). The mean and variance are sufficient for a complete description of the normal random variable. A simple transformation, often referred to as normalization, can map an arbitrary normal random variable to the standard normal variable z, such that:
z ≡ (x − µ)/σ   and   P{x ≤ a} = P{z ≤ (a − µ)/σ} ≡ Φ((a − µ)/σ)        (6.5)
Probability density functions and cumulative distribution functions fully characterize a random variable. A partial description of random variables can be given in terms of the moments of their probability distribution. The first moment is the mean (µ) of the random variable, and the second central moment is its variance (σ²). When the full cdf of a random variable is not known, it is still possible to give a bound on the probability of a random variable deviating from its mean value. The Chebyshev inequality provides a bound on the possible spread of any discrete or continuous random variable regardless of the distribution:
P{|X − µ| ≥ t} ≤ σ²/t²        (6.6)
Because of the universality of this inequality, the mean and the variance are the most important moments that need to be known about a random variable. Often we need to analyze several random variables simultaneously. The full description of such a random vector is the joint cdf and pdf. For vectors of normal random variables, the complete description is given by the correlation
or covariance matrix that defines pair-wise correlations between variables. Covariance is a measure of how much two random variables vary together. It is formally defined as: cov(x, y) = E[(x − µx )(y − µy )]
(6.7)
Correlation is a normalized measure of co-dependence. It can be determined from the moments of the individual variables and their covariance: cor(x, y) = cov(x, y)/σx σy
(6.8)
It is often necessary to analyze distributions of functions of random variables. Computing a full pdf or cdf of a function of random variables may be difficult, but estimating the moments of the new random variable can be done more easily. Often a linear function of random variables (x) needs to be evaluated: y = aᵀx. In vector form, we can show that E(y) = aᵀE(x) and Var(y) = aᵀ Σx a, where Σx is the covariance matrix of x. For example, if y = a1 x1 + a2 x2, then E(y) = a1 E(x1) + a2 E(x2) and Var(y) = a1² σx1² + 2 a1 a2 ρ σx1 σx2 + a2² σx2². A key property of a linear combination of normal random variables is that it also follows a normal distribution. If the function is non-linear in the random variables, then its distribution is not normal and, in fact, may be difficult to find analytically. In many engineering applications, an approximate method, based on a first-order Taylor-series expansion of the function, can be used if the span of the random variables is not large and the function is nearly linear in that small range.
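The moment formulas for a linear combination are easy to check numerically. The sketch below uses illustrative values of a, the means, the standard deviations, and the correlation, and compares the analytic moments with Monte Carlo estimates.

```python
import numpy as np

# Mean and variance of y = a^T x for correlated normal x, checked by sampling.
a = np.array([1.0, 2.0])
mu = np.array([10.0, 5.0])
sigma = np.array([0.5, 0.8])
rho = 0.6
cov = np.array([[sigma[0]**2,               rho * sigma[0] * sigma[1]],
                [rho * sigma[0] * sigma[1], sigma[1]**2]])

mean_y = a @ mu                    # E(y)   = a^T E(x)
var_y = a @ cov @ a                # Var(y) = a^T Sigma a

rng = np.random.default_rng(3)
x = rng.multivariate_normal(mu, cov, size=200_000)
y = x @ a
print(mean_y, var_y)               # analytic moments
print(y.mean(), y.var(ddof=1))     # Monte Carlo estimates agree closely
```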
6.2 EMPIRICAL MOMENT ESTIMATION In practice, any information about the properties of random variables will have to be estimated from observed data. Data is collected from test structures and conclusions must be drawn on magnitude of variation and average values of parameters. The theory of statistical estimation posits and tries to solve two major problems in estimation. One is to determine the distribution of the data: e.g. should we model the variation in Lgate as normal or as uniform? The other major problem is to determine the parameters of that distribution: assuming that a variable is normal, what are its mean and variance? Because of the limited amount of data that is accessible to us and because of our ignorance of the precise mechanism that “generates” data, both of these questions are not trivially answered. Estimation of the distribution is typically done using the method of maximum likelihood estimation. (For a more detailed treatment see [120], for example). In engineering practice, we typically have a good guess about which distributional family to use, and simply impose it on the data, often without exploring other possible distributional options. Moment estimation, however, is routinely done in practice. Most typically, it is limited to the estimation of the mean and of the variance of a population.
The values of the sample mean and sample variance are used as estimators of the population mean (µ) and variance (σ²). The sample mean is x̄ = (1/n) Σ_{i=1}^{n} xi, and the sample variance is s² = (1/(n − 1)) Σ_{i=1}^{n} (xi − x̄)².
(xi − x ¯) . Notice that a factor of
(1/(n − 1)), rather than (1/n) has to be used in the computation of sample variance in order to produce an unbiased estimator of the population variance [120]. The fundamental premise of empirical data analysis is that the quality of estimators depends on the amount of data used in producing the estimates. By increasing the sample size, the estimation error can be reduced. Confidence intervals quantify the amount of uncertainty about an estimate, and thus are useful for drawing reliable conclusions. We first consider constructing confidence interval for the mean. Often the population variance can be assumed to be known a priori, such as when it is known from earlier experience and can be assumed to be unchanged. In this case, according to the central limit theo √ 2 σ rem the sample mean is distributed as x ¯ ∼ N µ, ( / n) . Using this sample distribution, we can construct confidence intervals: x ¯ ± z ∗ √σn , where z ∗ is the value on the standard normal curve that contains the probability of α of being between −z ∗ and z ∗ . For example, if we know that the standard deviation of oxide thickness is σ = 2nm, and after performing n = 15 measurements the estimated sample mean is x = 20nm, we can construct a confidence interval that will contain the true value of the population mean with the probability of, say, 95%. For 95%, z*=1.96, so µ = 20 ± 1.96 √215 = 20 ± 1.01. In other words, we now know that with high confidence µ ∈ [18.99, 21.01]. When there is no prior knowledge of the variance, the variance of the population also has to be estimated from the sample. This is the case, for example, if the manufacturing process has been changed, and the old values of variance are no longer representative of the process spread. In this case, the sample mean is not normally distributed. Instead, it follows a Student tdistribution. The t-distribution is parameterized by the number of degrees of freedom, and the values of t-distribution can be found in statistical tables, or computed by most mathematical packages. Specifically, the following sample −µ . statistic has a t-distribution with (n − 1) degrees of freedom: tn−1 = sx¯√ / n Given this, the confidence interval for the population mean can now be constructed as: µ = x ¯ ± t∗ √sn , where t∗ is the value of the variable that gives the confidence level of α, e.g. 95%. Repeating calculations for the example above, suppose that the sample standard deviation is s = 2nm. Using the table of t-distribution we find that for n = 15 and α=95%, t∗ = 2.13 and µ = 20 ± 2.13 √215 = 20 ± 1.1 or µ ∈ [18.9, 21.1]. The interval for the population mean is now larger; this is a consequence of our not knowing the sample variance.
Similarly, confidence intervals for variance of the population can be constructed based on the sample variance. This can be done on the basis of the χ2 distribution. This distribution is derived from the sum of the squared deviations of standard normal random variables from their means. Formally, if n x2i follows a χ2n , or chi-squared, Xi ∼ N (0, 1), then y = x21 + x22 +. . . x2n = i=1
distribution with n degrees of freedom. This distribution can be used directly for constructing the confidence intervals for population variance based on: (n−1)s2 ∼ χ2n−1 . However, it also plays a role in defining the F -distribution σ2 essential for analysis of variance that we discuss next.
6.3 ANALYSIS OF VARIANCE AND ADDITIVE MODELS Variability in IC parameters often reflects the contributions of multiple factors. In order to develop proper analysis and compensation schemes, we must be able to assign variability to specific factors. This gives us an understanding of the amount of variability contributed by the factor and its relative significance compared to other factors. What do we mean when we say that the impact of a specific attribute is large? The formal answer can be provided via the notion of the statistical significance of differences between groups of data that have different values of the factor. Such analysis is formally known as analysis of variance (ANOVA). This analysis also permits a formal answer to the question of whether there is a systematic difference (“systematic variability”) between parameter values with different attributes. For example, we may use analysis of variance techniques to check whether there is a systematic (statistically significant) difference between the gate length values of transistors that are laid out at different pitches or with different orientation. The basic objective of statistical data modeling is (i) to determine if a change in an attribute results in a statistically significant difference of a response variable, and (ii) to find the amount of change in the response variable caused by the attribute. Analysis and decomposition of variance begins by formulating a model of variability. A statistical model formalizes one’s interpretation of data and of mechanisms that contribute to variation in data. Initially, a model is just (i) a list of distinct components of variation into which the data is to be decomposed and (ii) the form of the model. The key consideration in defining the model is separability or decomposability of data which is needed for a meaningful identification of some terms. The selection of the model is also driven by its intended use: if the separation cannot be utilized, there is no point in making the model more complex. For example, if separating the Lgate variation due to exposure bias and tilt from the random error term is not possible, a simpler model for Lgate variation would lump these two terms together.
6.3 ANALYSIS OF VARIANCE AND ADDITIVE MODELS
107
A typical characterization experiment is designed such that multiple measurements of a response variable (e.g. Lgate ) are taken on devices that have different combinations of attributes. Suppose we want to analyze Lgate variation due to orientation (vertical and horizontal) and pitch (dense and isolated). Then an experiment will measure Lgate from four groups of data (verticaldense, vertical-isolated, horizontal-dense, and horizontal-isolated). The goal of model construction is to check whether the differences in attributes result in statistically different values of Lgate , that is, that the differences can not be attributed to noise. The number of measurements needed for detecting statistical significance in the presence of noise grows significantly if the number of factors is high and also if factors can take on many values. The simplest statistical models are additive in contributions of different factors to variation in a response variable. The general technique for analyzing additive models is formally called the analysis of variance (ANOVA). ANOVA is a technique that allows us to compare the means of several groups of data simultaneously and check if any of the groups have distinct means. An important advantage of ANOVA over the pair-wise group comparisons that can be carried out with t-tests is that ANOVA techniques are more efficient: they have better resolution and require fewer samples. This is because ANOVA effectively tests each factor while controlling for the impact of all others. Because of the importance of the ANOVA technique for data analysis and modeling, we briefly outline its formal theory in the context of one-factor statistical additive modeling. For models with two factors, two-way ANOVA is used, and its theory is a fairly simple extension of the basic one. For more complex models, the theory of factorial designs can be used to construct models and analyze them. The theory we derive is known as one-way ANOVA because it permits the analysis of between-group differences that are different in only one factor (e.g. pitch). (Notice, however, that ANOVA can be very effectively used for analysis of models with multiple factors.) Let their be I distinct values of an attribute and I resultant groups of data, with J samples taken within each group. Let Yij be the jth measurement in the ith group. The analysis relies on the following additive model: (6.9)
Yij = µ + αi + εij
where µ is the overall mean level, αi is the differential offset of the group mean from the overall mean, and εij is the random error. The errors are assumed to be independent and identically distributed: εij ∼ N(0, σ²). Let us assume that I = 3, e.g. that poly-lines can be spaced at isolated, dense, or intermediate pitch values. Based on the definition of µ and αi, Σi αi = 0.
The basic question to answer is whether there is a statistically significant difference in the response variable (Y ) depending on the value of the factor, in our example, depending on pitch. The test is formulated using the hypothesis testing strategy. The test is to check the null hypothesis Ho : αi = 0, for all i =
1 . . . I. To enable a formal hypothesis test, we perform several transformations. First we observe that the following holds:

Σ_{i=1..I} Σ_{j=1..J} (Yij − Ȳ)² = Σ_{i=1..I} Σ_{j=1..J} (Yij − Ȳi)² + J Σ_{i=1..I} (Ȳi − Ȳ)²    (6.10)

where Ȳ = (1/IJ) Σ_{i=1..I} Σ_{j=1..J} Yij and Ȳi = (1/J) Σ_{j=1..J} Yij.
This expression can be more compactly written as the sum of squares decomposition, using the following definitions:

SST = Σ_{i=1..I} Σ_{j=1..J} (Yij − Ȳ)²,  SSW = Σ_{i=1..I} Σ_{j=1..J} (Yij − Ȳi)²    (6.11)

SSB = J Σ_{i=1..I} (Ȳi − Ȳ)²    (6.12)
These refer to the total sum of squares (SST), the within-group sum of squares (SSW), and the between-group sum of squares (SSB). SSW measures variability within the groups, while SSB is a measure of how much variability there is between the group means and the overall mean. Notice that while ANOVA operates with variances, its true purpose is to identify differences between the mean values of groups. It can be shown [120] that if the null hypothesis Ho is true (αi = 0, for all i = 1 . . . I), then the ratio [SSB/(I − 1)] / [SSW/(I(J − 1))] should be close to 1. If it is false, then the statistic will tend to be much greater than 1. Formal testing is possible because the ratio follows an F-distribution, which can be shown by observing that SSB/(I − 1) ∼ χ²_{I−1} and SSW/(I(J − 1)) ∼ χ²_{I(J−1)}. The ratio of two chi-squared variables, each divided by its degrees of freedom, follows the F-distribution: (y1/n1)/(y2/n2) ∼ F_{n1,n2} if y1 ∼ χ²_{n1} and y2 ∼ χ²_{n2}. The F-distribution is a powerful tool for the analysis of data and for comparing multiple groups of data. Formally, the statistic

F_{I−1, I(J−1)} = [SSB/(I − 1)] / [SSW/(I(J − 1))]

is used to test the hypothesis Ho: α1 = α2 = · · · = αI = 0, and the values of the F-statistic can be found in tables or computed by mathematical packages. The results of such an analysis are summarized in the formal ANOVA table. This table lists, in convenient form, the contributions of between-group variability (with groups corresponding to different values of attributes) and within-group variability to the total variability. In addition to the values of the sums of squares, the table lists the numbers of degrees of freedom (df) and the values of the mean square terms, computed as the ratio of SS and df. Finally, the table also contains the value of the F-statistic and the probability of observing this value under the null hypothesis. In the next section, we use the ANOVA framework for a series of analyses of a large data set.
Table 6.1. ANOVA table is a standard way of summarizing the ANOVA analysis.

Source of Variation    df          Sum of Squares (SS)   Mean Square            F              p-value
Between (Factor)       I − 1       SSB                   MSB = SSB/(I − 1)      F = MSB/MSW    P(F > Fcalc)
Within (Error)         I(J − 1)    SSW                   MSW = SSW/(I(J − 1))
Total                  IJ − 1      SST
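To make the mechanics of Table 6.1 concrete, the short Python sketch below computes SSB, SSW, the F-statistic, and the p-value for a one-way layout directly from the definitions above. The three groups of Lgate values are synthetic stand-ins rather than the data set analyzed in this chapter, and scipy's built-in routine is shown only as a cross-check.

# One-way ANOVA by direct computation of the sums of squares (a sketch;
# the Lgate values below are synthetic stand-ins, not the book's data).
import numpy as np
from scipy import stats

# I groups (e.g. pitch categories), J samples per group, values in nm
groups = np.array([
    [152.1, 149.8, 151.3, 150.7, 148.9, 151.9],   # isolated
    [143.5, 144.2, 142.8, 145.0, 143.9, 144.6],   # intermediate
    [135.2, 136.8, 134.9, 136.1, 135.7, 134.4],   # dense
])
I, J = groups.shape

grand_mean = groups.mean()
group_means = groups.mean(axis=1)

ssb = J * np.sum((group_means - grand_mean) ** 2)      # between-group SS
ssw = np.sum((groups - group_means[:, None]) ** 2)     # within-group SS

df_b, df_w = I - 1, I * (J - 1)
msb, msw = ssb / df_b, ssw / df_w
F = msb / msw
p_value = stats.f.sf(F, df_b, df_w)                    # P(F > Fcalc)

print(f"SSB={ssb:.2f}  SSW={ssw:.2f}  F={F:.2f}  p={p_value:.2e}")
# scipy's built-in test gives the same F and p:
print(stats.f_oneway(*groups))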
6.4 CASE STUDIES: ANOVA FOR GATE LENGTH VARIABILITY

We now use the techniques of ANOVA to explore a range of statistical models of varying complexity. We are studying the variations in a set of Lgate measurements collected across multiple spatial locations and for gates with different layout characteristics. The simplest model would describe all variability as purely random variation around the mean:

L = µ + N(0, σ²) = 143.2 + N(0, 15.6²)    (6.13)
where µ and σ are expressed in nm. Such a model is constructed by simply finding the mean and the variance of the data. This model would result if we were unaware of the possible systematic impact of gate layout on the value of Lgate or of the presence of a systematic spatial bias. Under this model, the worst-case values of L (which we define as Lmax = µ + 3σ and Lmin = µ − 3σ) would be: Lmax = 190nm and Lmin = 96.4nm. We will later compare these values with those predicted by other, more refined models. Let us suppose that the measurements were, in fact, coming from gates of four different layout types: vertical-dense, vertical-isolated, horizontal-dense and horizontal-isolated. Because of such a grouping of layout categories (spacing / orientation), we can use a one-way ANOVA model to test whether the layout has any predictable impact on Lgate. The single factor layout has four values corresponding to the four layout types listed above. The model becomes:

L = µ + ∆Llayout + N(0, σ²)    (6.14)
where ∆Llayout = {∆LHD , ∆LHI , ∆LV D , ∆LV I }T . From the ANOVA table for the model, we see that the p-value is nearly zero; therefore we reject the null hypothesis and conclude that layout configuration does have a predictable systematic impact on the value of Lgate . We further extract the parameters of the model, i.e. the systematic biases to Lgate depending on the layout configuration. The computed model is:
Table 6.2. ANOVA table for the model testing the significance of the impact of layout configuration on Lgate.

Source of Variation   SS         df    MS         F         p-value    Fcrit
Layout                0.003785   3     0.001262   11.3009   4.2E-06    2.739502
Error                 0.007592   68    0.000112
Total                 0.011376   71
L = 143.2 + {10, 2, −2, −10}^T + N(0, 10.6²)    (6.15)
The model tells us that vertical-isolated devices have the smallest Lgate and that horizontal-dense devices have the largest Lgate. The systematic difference between these categories is about 14%. Also notice that identifying the systematic bias helps to reduce the variance of the random residual from 15.6² to 10.6² nm²! Let us re-compute the values of Lmax and Lmin under this model. Consider the max Lgate first. We know that horizontal-dense devices have systematically the largest Lgate: Lmax = µ + ∆LHD + 3σ = 185nm. In a similar fashion, Lmin = µ + ∆LVI − 3σ = 101.4nm. Notice that for the horizontal-dense devices Lmin = 121.4nm and for the vertical-isolated devices Lmax = 165nm. Compared to the previous model, the max (min) Lgate is smaller (larger) for all devices. But for the vertical-isolated devices, Lmax is about 15% less than predicted by the simple model. Such an overestimation would lead to much wasted overdesign.

Another useful perspective on the danger of missing systematic dependencies is found by examining the impact on parametric yield due to gate length variation. Suppose that some company carried out process characterization and used the simple model for representing statistical information to the designer. The designer implements the layout such that all the transistors are vertical and isolated. Suppose that in this process, subthreshold leakage is such that Lgate cannot go below 127.6nm. The designer uses the simple model and assumes that all variability is random. He estimates that the parametric yield is 1 − P(L ≤ 127.6) = 1 − Φ((127.6 − 143.2)/15.6) = 1 − Φ(−1) = 0.84. In reality, since all transistors are vertical-isolated and their Lgate ∼ N(133.2, 10.6²), the yield is given by 1 − P(L ≤ 127.6) = 1 − Φ((127.6 − 133.2)/10.6) = 1 − Φ(−0.53) = 0.69. It's a 15% difference in yield! Now we can see the great advantage of distinguishing systematic components of variability from random ones.

The above model, however, does not permit making conclusions about which factor contributes more to the systematic bias of Lgate - orientation or spacing. From the designer's point of view, it would be crucial to know which factor is more important. For example, given that leakage is less in horizontal and dense devices, should the redesign priority be to choose an alternative orientation or spacing? The previous model is based on the one-way ANOVA - a model in which variability is explained by a single factor. But
the ANOVA framework can also be extended to models with multiple factors. The extension for a model with two factors - the 2-way ANOVA - is especially simple. We now set up such a model to decompose the variability in Lgate based on two factors: line spacing (isolated or dense) and orientation (vertical or horizontal):
L = µ + {αI or αD} + {βV or βH} + ε    (6.16)
where µ, αD , αI , βH , βV are the model coefficients to be determined from the data and ε ∼ N (0, σ 2 ). For both spacing and orientation, the p-value is nearly zero and the null hypothesis is rejected. The 2-way ANOVA table below validates the statistical significance of both factors, and thus the need to have a systematic model of Lgate dependence on spacing and geometrical orientation. The extracted model coefficients are (in nm):
L = 143.2 + {αI = −4.0 or αD = +4.0} + {βV = −6.0 or βH = +6.0} + N(0, 10.6²)    (6.17)
With this model we can see that the orientation of the transistor has a larger impact on the gate length compared to the spacing. Also, we find that the above two models are consistent in that we can reconstruct the bias for a composite attribute by adding the biases for the two attributes individually; e.g. the mean of the vertical-isolated category can be computed by adding the model coefficients αI and βV: −4 + (−6) = −10 = ∆LVI. The same is true for all other combinations. This is supported by the ANOVA's conclusion that the effect of the interaction between orientation and spacing is insignificant (the p-value is high), supporting a statistical model without interactions.

Table 6.3. 2-Way ANOVA for the model testing the significance of gate spacing and orientation.

Source of Variation   SS         df    MS         F          p-value    Fcrit
Spacing               0.001166   1     0.001166   10.44821   0.001895   3.981896
Orientation           0.002618   1     0.002618   23.45443   7.72E-06   3.981896
Interaction           5E-09      1     5E-09      4.48E-05   0.99468    3.981896
Error                 0.007591   68    0.000112
Total                 0.011376   71
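For completeness, a table such as Table 6.3 can be reproduced with standard statistical software; the sketch below uses the statsmodels formula interface on a synthetic data set whose offsets loosely mimic Equation (6.17). The column names (lgate, spacing, orientation) and the generated values are illustrative assumptions, not the book's measurements.

# Two-way ANOVA with an interaction term (a sketch; the column names and
# the random data are illustrative only).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
spacing = np.repeat(["dense", "isolated"], 36)
orientation = np.tile(np.repeat(["horizontal", "vertical"], 18), 2)
# Additive offsets (nm) loosely mimicking Eq. (6.17), plus random noise
offset = {"dense": 4.0, "isolated": -4.0, "horizontal": 6.0, "vertical": -6.0}
lgate = (143.2
         + np.array([offset[s] for s in spacing])
         + np.array([offset[o] for o in orientation])
         + rng.normal(0.0, 10.6, size=72))

df = pd.DataFrame({"lgate": lgate, "spacing": spacing, "orientation": orientation})
fit = smf.ols("lgate ~ C(spacing) * C(orientation)", data=df).fit()
print(sm.stats.anova_lm(fit, typ=2))   # SS, df, F, and p-value per factor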
Notice that the variance of the residual is unchanged compared to one-way ANOVA: this is reasonable since we have not removed any new systematic components of variability. The only difference between the models is that we can more precisely allocate the contributions of layout factors that were
already taken into account. The residual can be further reduced, however, if we identify additional systematic terms. Data used in the preceding model was taken from a single spatial location. On the other hand, the simple model was based on measurements taken at different spatial locations, and it is possible that the data exhibits a systematic spatial signature. For example, lens aberrations in the stepper can contribute to a systematic spatial signature across the die or reticle field. We now use one-way ANOVA to check if the spatial variation of Lgate within the field is statistically significant regardless of the layout configuration:

L = µ + ∆Lx,y + N(0, σ²)    (6.18)
where ∆Lx,y are the systematic location-dependent Lgate offsets. The ANOVA confirms that the spatial pattern is significant. Because in this model we ignore the impact of layout configuration, the variance of the residual is 15.3², which is substantially higher than the error of the prior layout-dependent models. This indicates that the systematic layout-dependent bias has a much bigger impact on the overall variance than the spatial bias.

Table 6.4. ANOVA table for the model testing the significance of the impact of spatial location on Lgate.

Source of Variation   SS         df      MS         F          p-value    Fcrit
Location              0.018503   24      0.000771   3.275203   1.58E-07   1.523453
Error                 0.41782    1775    0.000235
Total                 0.436323   1799
Having established that both the spatial signature and the layout configuration have a systematic impact on Lgate, we now study a model that considers the layout-dependent variability and the spatial dependence simultaneously:

L = µ + ∆Llayout + ∆Lx,y + N(0, σ²)    (6.19)
The model vector is ∆Llayout = {9.9, 0, −1.6, −8.3}^T. It is different from the previous vector of layout biases because now all spatial locations are taken into account. The spatial vector has 25 values corresponding to the measurement sites, and is given by: ∆Lx,y = (3.7, 0.5, 4.7, 0.1, 2.4, 0.9, −1.9, −4.4, −2.3, 2.8, 3.8, −4.1, −6.7, −3.5, −0.3, 0.8, −2.4, −4.9, −2.2, 3.6, 5.5, 0.7, −0.2, 0.2, 3.3). Importantly, the residual variance is now 13.9² nm². We can see that the variance is smaller compared to the previous model (15.3²). Let us again re-compute the values of Lmax and Lmin using the new model. Consider the max Lgate first. We know that horizontal-dense devices have systematically the largest Lgate. We can also find the location at which the Lgate bias is largest, max(∆Lx,y) = 5.5.
Then, Lmax = µ + ∆LHD + max(∆Lx,y) + 3σ = 200.4nm. In a similar fashion, Lmin = µ + ∆LVI + min(∆Lx,y) − 3σ = 86.6nm. Notice that the new worst-case values are worse than those computed from the simplest model (190nm and 96.4nm), indicating that at least in some locations transistors would not satisfy specs. This is further illustrated by re-computing the value of yield, assuming the gates are laid out as vertical and isolated, and considering the location with the worst bias. Under these conditions, the yield is 1 − P(L ≤ 127.6) = 1 − Φ((127.6 − 128.3)/13.9) = 1 − Φ(−0.05) = 0.52. Again, we see from this example that without uncovering the systematic components of variation, we can significantly under-estimate the max value and over-estimate the min value, while being too conservative for most points. This may have fatal consequences for the quality of yield estimation.

Table 6.5. 2-Way ANOVA table for the model that simultaneously tests the significance of the impact of layout configuration and spatial location on Lgate.

Source of Variation   SS         df      MS         F          p-value    Fcrit
Layout                0.077104   3       0.025701   132.2681   5.18E-77   2.610137
Location              0.018503   24      0.000771   3.967603   4.15E-10   1.523725
Interaction           0.010385   72      0.000144   0.74226    0.947496   1.297607
Error                 0.330331   1700    0.000194
Total                 0.436323   1799
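The worst-case and yield arithmetic of this case study is easy to script. The sketch below simply re-evaluates the corner values and the yield estimate quoted above using the standard normal CDF; the constants are those given in the text, and small differences from the quoted numbers are due to rounding.

# Worst-case Lgate corners and parametric yield under the refined model
# (constants taken from the case study in the text; results may differ
# from the quoted values by rounding).
from scipy.stats import norm

mu, sigma = 143.2, 13.9                 # nm; residual sigma of the combined model
dL_HD, dL_VI = 9.9, -8.3                # layout biases (nm)
dxy_max, dxy_min = 5.5, -6.7            # extreme spatial biases (nm)
L_limit = 127.6                         # leakage-driven lower bound on Lgate (nm)

L_max = mu + dL_HD + dxy_max + 3 * sigma
L_min = mu + dL_VI + dxy_min - 3 * sigma
print(f"Lmax = {L_max:.1f} nm, Lmin = {L_min:.1f} nm")

# Yield for vertical-isolated gates at the worst spatial location
mean_vi_worst = mu + dL_VI + dxy_min
yield_vi = 1.0 - norm.cdf((L_limit - mean_vi_worst) / sigma)
print(f"Yield = {yield_vi:.2f}")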
6.5 DECOMPOSITION OF VARIANCE INTO SPATIAL SIGNATURES

Another common need in data analysis is to decompose variability into different spatial scales. Because of the specifics of the semiconductor manufacturing process, variability in key parameters is caused by several spatial signatures. It is of great interest to identify the sources and magnitudes of variation at each scale - to find out how much variability can be attributed to lot-level, wafer-level, intra-die, and wafer-to-wafer components. This decomposition is also intimately linked with the modeling task of describing the systematic variations and their explicit dependencies on specific design attributes. Neglecting a spatial signature may skew the ANOVA model based on the layout and pattern characteristics, such as orientation and spacing.

In performing spatial analysis, we may assume that the data set contains multiple measurements of a parameter (Lgate, ILD thickness) across the chip and across the wafer, repeated over multiple wafers. This is essential for performing accurate analysis and is practically possible to implement during
technology characterization. The number of repeated measurements at the same spatial location does not have to be exceptionally large: often 15-20 repeated samples are sufficient for drawing conclusions. We assume that each measurement is associated with a known spatial location (within the wafer, field, and die). Decomposition of spatial variability can be done in several ways, depending on the intended use of the model. First we consider a decomposition in terms of four components: systematic wafer-level variation, systematic within-die variation, systematic wafer-die interaction, and random residuals. There are distinct causal mechanisms contributing to Lgate variability at each scale: (i) the field signature of the stepper (reticle) that is identical for each field, (ii) a variation due to the exposure that produces a bias or tilt in the Lgate values within the field, (iii) wafer-level variability due to wafer processing steps that include resist spin, develop, poly CVD, and poly etch, and (iv) a random error that cannot be attributed to any identifiable component. The model can again be a linear additive model:

L = Lw(x, y) + Ld(x, y) + Lwd(x, y) + ε    (6.20)
where x, y are the spatial coordinates, Lw(x, y) is the wafer-level variation, Ld(x, y) is the within-die variation, Lwd(x, y) is the wafer-die interaction term, and ε ∼ N(0, σ²). The use of (x, y) coordinates emphasizes that the model seeks to find systematic spatial profiles of variation attributable to each scale. It lumps all the remaining unaccounted-for variability into the residual term, which represents purely random stochastic variability. This variance decomposition is difficult because of the confounding of contributions from different spatial scales of variation. If some components are known, however, this helps find the other components, since under the additive model we can sequentially remove the identified dependencies from the raw data. The flow of such variance decomposition is shown in Figure 6.1.

The first step is to identify the wafer-level trends. Data shows that the parameter variation across the surface of the wafer is typically smooth, exhibiting low spatial frequency, with a characteristic length scale somewhat larger than the field size. This makes nearby parameter values highly correlated. The wafer-level variations can be attributed to equipment design (temperature gradients in the furnace, non-uniform gas flow in the chamber) and constraints due to the machine operation (non-uniform slurry flow in CMP). The wafer-level component of variation is often characterized by symmetric patterns such as radial ("bull's eye"), "salad bowl", slanted planes, or their combinations. It is typically assumed that they are independent of the layout.

Estimation of wafer-level trends relies on the assumption that the variation is smooth and gradual. The simplest technique for extracting the wafer-level trend is to use a single measurement location for each die. This approach is limited because it does not utilize the spatial information contained in the measurements within the die, and implicitly assumes that the spatial distance over which the wafer-level component changes is greater than the die size. This
Fig. 6.1. Flow of variance decomposition.
may not be true, especially for dies that are near the wafer edge. A more potent technique is based on moving averages: a spatial window is defined within which the points are averaged to produce an estimate. Near the wafer edge the simple moving-average procedure will give poor estimates since the window will not be fully inside the wafer. An approach that has demonstrated better behavior is a modification of the simple moving average, known as the down-sampled moving average estimator (DSMA). Researchers have also studied other analytical techniques for estimation based on the meshed spline method and on regression using assumed parametric forms [119].

Once the wafer-level component has been extracted, the difference between the raw data and the systematic wafer-level trend is generated. If the wafer-level variability is negligible within a stepper field, or if it behaves like random error, the within-die component can be found by computing, for each location within the die, a simple average over the measurements available from n different dies:

L̂(x, y) = (1/n) Σ_{i=1..n} Li(x, y)    (6.21)
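A minimal sketch of the per-site averaging in Equation (6.21) is shown below; the array shapes and the synthetic Lgate values are assumptions for illustration, and subtracting each die's mean stands in for the removal of the wafer-level trend described above.

# Estimate the systematic within-die component by averaging the same
# within-die site over n dies, as in Eq. (6.21).  Shapes and values are
# assumed for illustration.
import numpy as np

rng = np.random.default_rng(1)
n_dies, ny, nx = 100, 5, 5                     # 100 dies, 5x5 sites per die
true_die_signature = np.outer(np.linspace(-3, 3, ny), np.ones(nx))   # nm
lgate = 143.2 + true_die_signature + rng.normal(0, 10.6, (n_dies, ny, nx))

# Subtract each die's mean (a stand-in for removing the wafer-level trend),
# then average each (x, y) site over the dies
per_die_mean = lgate.mean(axis=(1, 2), keepdims=True)
L_hat = (lgate - per_die_mean).mean(axis=0)    # estimate of Ld(x, y)

print(np.round(L_hat, 1))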
This approach was used to extract the spatial intra-field dependence used for the ANOVA models of the previous section. It is typically sufficient for the fields in the center of the wafer in a well-controlled process. It is not always sufficient, however, since edge dies often exhibit peculiar, atypical profiles. A more reliable and accurate alternative is based on the use of spatial filtering in the frequency domain. The intra-die component can be estimated by utilizing the field-level periodicity of the variability and using frequency-domain analysis to extract the components that occur with the proper spatial frequency (of the repeated reticle field). The difference between the raw data and the wafer-level component is transformed into the frequency domain using a 2-D Fast Fourier
Fig. 6.2. Extracted spatial terms.
Transform (FFT) algorithm. A spatial filter is then utilized to pass the component near the field frequency. An inverse Fourier transform finally produces the sought die-level variation. This approach has proven to be more robust than the computation based on the moving average [108]. The final systematic component posited by the additive model above is the interaction between the wafer-level and die-level components. This term is non-zero if the die-level pattern is attenuated or accentuated depending on the die location on the wafer. This factor may be quite significant for dies on the edge of the wafer. It has been shown that techniques based on spatial FFT analysis and the spline method can also be used to extract this component of spatial dependence [85]. The results of decomposing variance to identify the three systematic spatial signatures of Lgate variation are shown in Figure 6.2.

In practical uses of statistical design, a frequent concern is the absolute and relative magnitudes of variability at different spatial scales. From
the designer's point of view, the most important distinction is between the intra-die and die-to-die variability. Here we decompose variability into field-to-field and within-field components. (The amount of intra-chip variability depends on the size of the chip compared to the reticle size. It may be further useful to decompose the field-to-field variance into within-wafer and wafer-to-wafer components. This becomes a challenging problem that requires techniques for the analysis of nested variance [122].) Table 6.6 shows the decomposition for several categories of Lgate. The table contains information on the raw amount of variability, the fraction of total variability that can be attributed to either the intra- or the inter-field component, and also the relative magnitude of each component compared to the mean value of Lgate. As we saw earlier, layout-dependent bias contributes significantly, greatly inflating the total variance. When we examine the variance for each layout category individually, we find that it is almost 50% less than the overall variance. Spatial variation also contributes to the intra-field variance, and since we know that it is systematic, we can remove it. Then, for a specific layout type (vertical/dense), we can observe a significant reduction in intra-field variance. This variance is now the best measure of random intra-chip variability. We see that unless we remove the layout dependence of Lgate, 60% of the total variability is intra-field. For a single layout category, the ratio is reversed, with only 30% of the total variability attributable to the intra-field component. Finally, when the spatial dependency is removed, we discover that only 17% of the total variability is intra-field. At this point, the relative magnitude of the random intra-field component is only 3σ/µ = 9%.

Table 6.6. Analysis of relative contributions of intra- and inter-field Lgate variability.

                 All Categories Lumped           Vertical Dense,                  Vertical Dense,
                                                 With Spatial Signature           Without Spatial Signature
Variance         s² (nm²)  % of Total  3σ/µ      s² (nm²)  % of Total  3σ/µ       s² (nm²)  % of Total  3σ/µ
                           Var         (%)                 Var         (%)                  Var         (%)
Intra-Field      12.1²     60          25        6.1²      30          13         4.2²      17          9
Field-to-Field   9.8²      40          21        9.4²      70          20         9.4²      83          20
Total            15.6²     100         33        11.2²     100         23         10.3²     100         22
6.6 SPATIAL STATISTICS: DATA ANALYSIS AND MODELING

The previous sections have already suggested the importance of spatial behavior in integrated circuit design and manufacturing. Many variability patterns have some spatial characteristics. Intra-wafer, intra-field, and intra-chip variability may all exhibit non-trivial spatial structure. Spatial statistics is the area of statistics that deals with random variables and random processes defined on a set of spatial locations. In this section some relevant aspects of spatial statistics are reviewed.

6.6.1 Measurements and Data Analysis

Equation 6.20 posits a model that decomposes overall observable variability into several additive components. In the additive model specified by Equation 6.20, the intra-chip component of variability does not assume spatial correlation. In a more general case, the within-die variability can be spatially correlated. The term "spatially correlated", in this case, means that there is correlation between the values of the intra-chip component of variability of a parameter at different locations. The notion of spatial correlation is intuitively appealing in the context of semiconductor manufacturing, since it is based on the intuition that the parameter behavior at points close to each other is similar. In statistics, however, intuitions are often dangerous. Data analysis suggests that extreme care needs to be taken in the discussion of spatial correlation. Specifically, it is important to distinguish the true stochastic correlation of the intra-chip component of variation from the apparent correlation in the raw data that is, in reality, due to a systematic drift of the mean in the underlying parameter behavior. Several recent experiments have investigated such confounding of data and the resulting mis-interpretation of results [123][116].

In [123], a large number of Lgate measurements is collected using electrical linewidth measurements of poly-lines with the nominal dimension of 130nm. The experimental data has high spatial resolution: the measurement points are separated by 2.19mm (1.14mm) horizontally (vertically). A total of about 10,000 data points were collected, permitting a very reliable analysis of the underlying trends. If the decomposition of raw data into die-to-die and intra-die components is not performed, the apparent spatial correlation between data at different locations is very high. The correlation function, shown in Figure 6.3, was computed separately for the vertical and horizontal directions, and some anisotropy in the behavior can be discerned. The correlation decreases from the high of 0.75 at 1mm distance to 0.32 at 17mm distance for the horizontal direction. For the vertical direction, the correlation decreases from the high of 0.75 at 1mm to 0.12 at 10mm.

The correlation functions changed dramatically, however, after the known systematic components were removed from the raw data. Using the techniques
discussed earlier in this chapter, the systematic within-field component is first identified and then removed from the data. This component is often due to the lens aberrations. The removal of this component leads to a relatively small reduction in the correlation values. When only the systematic across-the-wafer component is identified and removed from the data, there is also a rather small reduction in the correlation values. Upon further study of this data set, it was observed, however, that an additional identifiable component of variation is the average field-to-field variation, most likely due to the shot dose variation. Finally, this component is also removed to reveal the truly stochastic within-field variability. When we now study its correlation function, shown in Figure 6.3, we see a drastic reduction of spatial correlation compared to the correlation function of the raw data. For the horizontal direction, the correlation decreases from 0.17 at 1mm to 0.05 at 5mm; and for the vertical direction, it decreases from 0.25 at 1mm to 0.05 at 3mm. Comparing these results to the much higher value of the apparent correlation cited earlier, we see that the true amount of correlation in the intra-chip component is nearly negligible.

In another experiment, the spatial distribution of Vth has been explored with an even higher spatial resolution: the device spacing was 5µm [116]. The experiment found that the spatial correlation coefficient is −0.09 for the horizontal direction and −0.12 for the vertical direction. In the analysis of the data using the variogram, an effective tool for the analysis of spatial random processes, it was found that 76% of all variance was contributed by a purely un-correlated component of variation and only 11% of the variance was due to the spatially correlated component [116]. The above two studies indicate that the spatial correlation of the truly stochastic component of intra-chip parameter variation is most likely very small.

6.6.2 Modeling of Spatial Variability

In order to make the discussion of spatial dependencies precise, there is a need to rigorously describe the spatial behavior of a variational component. In this section we briefly describe several tools that are useful for this. The fundamental notion for the study of spatial statistics is that of a stochastic (random) process. For our purposes it can be simply defined as a collection of random variables defined on a set of temporal or spatial locations. The spatial characteristics of stochastic processes can be captured in several ways. We saw how the correlation function was used in the previous section to describe the spatial correlation found in the Lgate data. The correlation function is only useful if the stochastic process has the property of being stationary [127]. We use the second-order stationary process as the working model, but other, stricter criteria of stationarity are possible. In second-order stationary processes only the first and second moments of the process remain
Fig. 6.3. True spatial correlation after the removal of wafer-level and field-level components. (Reprinted from [123], ©2005 SPIE).
invariant. This model implies that the mean is constant and that the covariance only depends on the separation h between any two points:

E[Z(x)] = m,  E[(Z(x) − m)(Z(x + h) − m)] = C(h)    (6.22)
where C(h) is the covariance function that depends only on the separation h, and m is the mean. Using the above definition, the correlation function can also be formally defined as the function that describes the correlation between any two points Z(x) and Z(x + h) separated by distance h:

ρ(h) = C(h)/C(0)    (6.23)
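For a parameter map sampled on a regular grid, ρ(h) can be estimated by correlating the map with shifted copies of itself. The sketch below does this for horizontal separations only, on a synthetic uncorrelated map; with real data the same estimator would be applied to the residuals after the systematic components have been removed.

# Empirical correlation function rho(h) along one axis of a gridded
# parameter map (synthetic data; a sketch of Eq. (6.23)).
import numpy as np

rng = np.random.default_rng(2)
z = rng.normal(0.0, 1.0, size=(64, 64))        # stand-in intra-die residual map

def corr_at_lag(z, h):
    """Correlation between Z(x) and Z(x+h) for a horizontal lag of h cells."""
    a, b = z[:, :-h].ravel(), z[:, h:].ravel()
    return np.corrcoef(a, b)[0, 1]

for h in (1, 2, 4, 8):
    print(f"h={h:2d}  rho={corr_at_lag(z, h):+.3f}")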
The correlation function ρ(h) has a maximum at the origin and is an even function. The covariance and correlation functions capture how the co-dependence of random variables at different locations changes with the separation h. It is possible to show that a very popular model for the matching of transistor properties developed by M. Pelgrom [125] can be derived from the properties of stationary processes described by a given correlation function [126]. Pelgrom's model describes the standard deviation of the difference between the properties of two transistors of area WL that are separated by a distance D:

σ²(∆p) = Ap²/(WL) + Sp²·D²    (6.24)

where Ap and Sp are technology-dependent coefficients to be determined empirically, W and L are the device width and length, and D is the distance between the devices. This model has been successfully used in analog design. Pelgrom's model fundamentally relies on the assumption of spatial correlation of parameters. The usefulness of Pelgrom's model, however, is limited for describing the mismatch at larger distances [126]. This is because at longer distances, the device behavior is dominated by the systematic wafer-level and field-level components of variation. In this case, the assumption of stationarity of the stochastic spatial process is not justified, and a more useful model is a non-stationary model that relies on explicit modeling of the systematic component (also known as drift in the area of spatial statistics). When the process has a systematic shift in the mean (formally known as the drift), the stochastic process is non-stationary. For example, the process may have a linear drift, which can be described as:

E[Z(x + h) − Z(x)] = a · h    (6.25)
where a is a constant of the linear drift of the mean, and h is the separation. The covariance and correlation functions are unambiguously defined only for stationary processes. For example, the random process describing the behavior of the Lgate is stationary only if there is no systematic spatial variation of the mean Lgate . If the process is not stationary, as in the analysis in
the previous section, the correlation function is not a reliable measure of co-dependence and correlation. Recall that once the systematic wafer-level and field-level dependencies were removed, thereby making the process stationary, the true correlation was found to be negligibly small. In order to describe the spatial co-dependence in non-stationary processes, we need a substitute for the covariance and correlation functions. The variogram offers a more powerful tool to describe the behavior of a non-stationary random process. The variogram captures the degree of dissimilarity between Z(x) and Z(x + h). The variogram function γ(h) is defined as

2γ(h) = Var[Z(x + h) − Z(x)]    (6.26)
The advantage of the variogram function is that, in contrast to the covariance function, it does not require knowing the mean and does not require the mean to be constant. Thus, the variogram is more general than the covariance. This has made the variogram a central tool for describing spatial random functions in spatial statistics and its application areas, such as geostatistics. The variogram of a process with strong spatial correlation will be an increasing function, usually saturating beyond a certain point. An example of a variogram that shows spatial correlation is in Figure 6.4, which represents the spatial properties of the IR voltage drop within a power grid [124]. In the already-mentioned study of the spatial structure of Vth, the spatial properties of the threshold voltage were also studied using the variogram [124]. The variogram for this data set is shown in Figure 6.5. The steady behavior of the variogram indicates the absence of spatial correlation.

The notion of drift and non-stationarity allows us to clearly distinguish process variability that exhibits systematic variability across the die from a spatially correlated stationary stochastic process. The term "systematic spatial variability" refers to the presence of a drift of the mean. Using this term to refer to a stationary random process (e.g. one with a constant mean) with spatial correlation hides its fundamental difference from a random process with drift (i.e. a non-stationary process).
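An empirical variogram can be computed in the same spirit directly from Equation (6.26); in the sketch below, which again uses a synthetic map, an essentially flat γ(h) near the field variance is the signature of negligible spatial correlation, as in Figure 6.5.

# Empirical variogram gamma(h) along one axis (a sketch of Eq. (6.26),
# on synthetic data).
import numpy as np

rng = np.random.default_rng(3)
z = rng.normal(0.0, 1.0, size=(64, 64))        # stand-in parameter map

def semivariogram(z, h):
    """0.5 * Var[Z(x+h) - Z(x)] estimated from all horizontal pairs at lag h."""
    d = z[:, h:] - z[:, :-h]
    return 0.5 * np.var(d)

for h in (1, 2, 4, 8, 16):
    print(f"h={h:2d}  gamma={semivariogram(z, h):.3f}")
# An uncorrelated field gives a flat variogram near the field variance;
# a spatially correlated field gives small gamma at short lags that grows with h.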
6.7 SUMMARY

In this chapter we reviewed several important concepts related to probability and statistical data analysis. We first provided a primer of basic probabilistic terms and assumptions. The basic issues of empirical moment estimation were then discussed. We introduced additive statistical models and their analysis using ANOVA techniques, and offered a series of case studies employing ANOVA models of varying complexity. A strategy for identifying spatial signatures using variance decomposition was also discussed. Finally, we introduced some formal tools useful for the study of spatial statistics.
Fig. 6.4. Variogram of IR voltage drop which shows a strong degree of spatial correlation. (Reprinted from [124], ©2006 ACM).
Fig. 6.5. Variogram for the experiment investigating spatial Vth variation. Spatial correlation is low. (Reprinted from [124], ©2006 ACM).
7 LITHOGRAPHY ENHANCEMENT TECHNIQUES
You can’t have a light without a dark to stick it in. Arlo Guthrie
Weather forecast for tonight: dark. Continued dark overnight, with widely scattered light by morning. George Carlin
Much variability in key process parameters is highly systematic. The sources and mechanisms of variability are known, and can be described by precise functional dependencies. In front-end processes, the use of sub-wavelength photolithography has led to severe difficulties in performing predictable pattern transfer. An important manifestation is the variation of the polysilicon gate critical dimension as a function of the 2-D layout neighborhood. Such critical dimension variability has a tremendous impact on the timing and power consumption of digital ICs. It can also lead to functional failures due to shorts and opens caused by lithographic non-idealities. Once the systematic variability-generating mechanism is identified, corrective measures can be taken, both at the time of lithography and in the design phase. The purpose of this chapter is to give designers a basic understanding of how the idealized layout that they produce gets distorted by advanced lithography and of what can be done about it. We review the palette of reticle enhancement and design for manufacturability techniques that are currently required for ensuring the quality of the pattern transfer.
7.1 FUNDAMENTALS OF LITHOGRAPHY

In the realm of lithography, the reduction of minimum printed feature size required by Moore's Law has traditionally been accomplished through the use of illumination sources that utilize progressively smaller wavelengths of light. Over the last decade, however, this trend has slowed down (Figure 7.1). While in the past the wavelength of light was smaller than the minimum feature size, it is now 4-5× larger. This is primarily due to the long delay in the introduction of 157nm illumination, and the continued reliance on the 193nm illumination source that this necessitates. The reasons for the continued use of 193nm light are both technical and financial. It appears both too expensive and technologically very challenging to produce high-quality lenses and photoresist for 157nm light. At the time of writing in 2007, lithographers believe that 157nm illumination will not become mainstream.
Fig. 7.1. Scaling of minimum feature size, illumination wavelength, numerical aperture, and k1 factor. (Reprinted from [128], ©2003 IEEE).
7.1.1 Optical Resolution Limit

The move into the regime of sub-wavelength lithography is one of the defining features of nanometer-scale CMOS process and design engineering, and it creates multiple challenges. The primary implication is that optical lithography approaches the fundamental limit to its resolution, which is the minimum feature size that can be reliably patterned.
Fig. 7.2. (a) Simplified view of the illumination system. (b) Different diffraction orders emerging from the mask plane. (Reprinted from [129], ©2003 IEEE).
To understand the challenges to printability brought about by scaling, we now review the basics of lithography. A simplified view of a modern lithography system is shown in Figure 7.2. The illumination source (laser) produces a coherent light wave of a certain wavelength, currently 193nm. The light illuminates the mask containing the layout pattern. To construct a model of illumination, the mask openings can be thought of as point light sources located in the center of each opening. The light diffracts from the mask openings and forms a series of diffracted beams at a finite number of angles that are dependent on the wavelength. It can be shown that the optical resolution is defined by a spatially periodic pattern. Interestingly, there is no theoretical limit to the printability of single isolated openings. For a spatially periodic pattern with pitch p, the permitted angles of diffraction are given by Bragg's condition: nλ = p·sin α, where n is an integer that describes a specific diffracted
beam, called the diffracted order by lithographers. Bragg's condition implies that there is only a finite, discrete number of diffracted beams, because the beams at any other angle suffer destructive interference. The beam with n = 0 passes through the center of the lens, but higher-order beams pass closer to the periphery of the lens. A smaller pitch increases the angle of deflection for any given order. The angle of deflection is given by α = sin⁻¹(nλ/p), and for some small value of pitch, it will be deflected so much that the lens will not capture it even for n = 1. Since a practical mask contains features at different pitches, we can represent the mask patterns in terms of their spatial frequency spectrum. From this point of view, we can say that light that carries information about high-frequency spatial components passes near the periphery of the lens, while the low-frequency spatial components are transmitted near the center. The diffracted beams that pass through the lens are combined at the wafer (image plane) and form an interference pattern. The maximum angle, θ, of diffracted light that a projection lens can capture is a crucial property of the lens. The sine of this angle is defined as the numerical aperture (NA) of the lens: NA = sin θ. In order to produce a useful image, the zero-order beam must be combined with the beams from at least one other diffracted order (n = 1, −1). Without that combination, the wafer-plane image contains no spatial modulation from the mask, and patterning is thus impossible. Therefore, the limit of printability is determined by the ability of the lens to capture the n = 1 diffracted-order beam. We can now find the minimum pitch that can be resolved:

p = λ/NA    (7.1)
Notice that the theoretical limit defined by this equation depends only on the pitch, not on the minimum dimension! Adopting the common assumption that the minimum dimension is 1/2 of the pitch (Rmin = p/2), we arrive at a familiar expression, known as the Rayleigh resolution limit:

Rmin = 0.5λ/NA    (7.2)
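The counting argument behind Equations (7.1) and (7.2) is easy to check numerically: for a given pitch, list the diffraction orders whose Bragg angle still fits within the lens aperture. The 193nm wavelength and NA of 0.75 used below are simply the example values mentioned in this chapter; the pitch values are illustrative.

# Which diffraction orders (n >= 0; the -n orders are symmetric) does the
# lens capture for a given pitch?  A sketch of Bragg's condition
# n*lambda = p*sin(alpha) and of Eq. (7.1)-(7.2).
lam = 193.0      # nm, illumination wavelength
NA = 0.75        # numerical aperture (sine of the maximum capture angle)

def captured_orders(pitch_nm):
    orders = []
    n = 0
    while True:
        s = n * lam / pitch_nm          # sin(alpha) for order n
        if s > 1.0:                     # no propagating beam beyond this
            break
        if s <= NA:                     # the lens aperture captures this order
            orders.append(n)
        n += 1
    return orders

for pitch in (600.0, 300.0, 260.0, 240.0):
    print(f"pitch={pitch:5.0f} nm  captured orders 0..{max(captured_orders(pitch))}")

print(f"minimum resolvable pitch  p = lambda/NA = {lam/NA:6.1f} nm")
print(f"Rayleigh half-pitch       Rmin = 0.5*p  = {0.5*lam/NA:6.1f} nm")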
In an ideal, infinite lens, all diffraction components would be deflected back and used in reconstructing an interference pattern at the image plane. An ideal reconstruction of the sharp intensity profile at the mask would be possible. When only the n = +1/−1 diffraction order passes through the lens, the image does contain some amount of spatial modulation of the intensity. But because all higher-order diffraction beams have been lost, the wafer-level intensity cannot have sharp transitions. Instead, the ideally square intensity profile resembles a sinusoidal function, with the difference between the low and high regions of intensity being smoothed out. The transformation of the intensity profile caused by the loss of the higher diffracted orders can be thought of as the application of a low-pass filter that does not pass the higher-order spatial mask features
through the optical system. This low-pass filter behavior of the lens has different effects on mask (layout) features depending on their spatial frequency spectrum. Since higher frequencies are lost, it is obvious that mask features that need such frequencies will suffer a greater degree of image degradation. The most vulnerable patterns are the inner and outer corners of geometric shapes. This analysis helps to see why the push to resolve progressively smaller dimensions takes us closer to the resolution limit and also results in severe challenges to reliable pattern transfer.

A practical way to quantify the severity of the printability problems is by measuring how close any given process is to the resolution limit. This is done using the Rayleigh factor k1, k1 = CD·NA/λ, which becomes a measure of the difficulty of imaging and resolving a specific dimension with the given lithographic process. In this expression, CD is the actually enabled critical dimension of the process. Patterning fidelity decreases with lower k1. A study of the recent trends in lithography makes it clear that despite the reduction in λ and the increase in NA, the difficulty of lithography, as measured by the value of k1, has been increasing. k1 decreased from 0.9-0.8 at the 0.5µm technology node to below 0.5 for the 130nm node, Figure 7.1. Rayleigh's criterion sets the limit on k1 at 0.5. To enable Rmin scaling and to ensure pattern fidelity at low k1, a growing number of resolution enhancement techniques are needed. These techniques now permit lithography far below the k1 = 0.5 resolution limit. Among the most common are optical proximity correction, sub-resolution assist features (SRAF), alternating phase-shifted masks, and off-axis illumination. Most of these RETs help improve fidelity at the given resolution limit. Out of this palette of tools, only off-axis illumination and alternating PSM can help go beyond Rayleigh's resolution limit all the way to the ultimate resolution limit of k1 = 0.25 [130].

Another key characteristic of a lithographic flow is its depth of focus (DOF), which is the ability of lithography to reliably reproduce the image despite a shift in the focal plane. This shift in the focal plane is also known as defocus or focus error. At the resolution limit, the image is formed by the interference of the 0th order, which passes through the center of the lens in a perpendicular direction, and the 1st diffracted order. A vertical displacement of the image plane creates a pathlength difference between the orders. This leads to a phase difference between the two beams that depends on the displacement. The depth of focus is defined as the maximum change of focus for which the image will still retain quality and the pattern will still be printed within specifications. Rayleigh's criterion defines the point at which the image becomes blurred, i.e., the depth of focus, as the vertical displacement at which the phase difference is 90°, or the pathlength difference is λ/4. Using this definition, the DOF can be expressed as

DOF = λ/(2·NA²)    (7.3)
This expression indicates that DOF is decreasing with the scaling of λ and due to an inverse quadratic dependence on NA. To enable Rmin reduction, the numerical aperture of the lenses has increased from 0.28 to the current values of 0.75-0.85. The growing NA, however, leads to a quadratically smaller DOF. For a 193nm system with NA = 0.75, the DOF is 116 nm. While DOF decreases, it becomes more difficult to control the amount of vertical image-plane displacement due to such factors as wafer topography, non-uniformity introduced by the modern planarization techniques based on chemical-mechanical polishing, non-flatness of the mask, and focus setting error. The decreasing DOF is a serious challenge to the existing and future low-k1 lithography processes. As we discuss later, it leads to the shrinking of the process window and growing variability.

The resolution limit of optical systems derived above is characteristic of ideal, or diffraction-limited, optics. That is, the limitations are due to the fundamental wave properties of light, and are defined by the way a diffraction pattern is formed. In real systems, this limit is hard to achieve because of the non-idealities of the lenses. The multiple manifestations of non-ideal behavior are known in optics as aberrations. The aberrations are an unavoidable feature of real-life lenses, especially those with high numerical aperture and a large image field size. Thus, the very requirements needed for low-k1 lithography also bring more significant aberrations. Furthermore, some of the resolution enhancement techniques (specifically, phase-shift masking and off-axis illumination) that we discuss later in this chapter also make lens aberrations more pronounced. The most important examples of aberration effects are shifts in the image position, image asymmetry, reduction of the process window, and the appearance of undesirable imaging artifacts.

We begin by defining the aberrations in terms of geometric optics. In an ideal optical system, all possible optical beams leaving an object point have the same optical path length. The optical path is the distance along the ray multiplied by the index of refraction of the medium. Aberrations occur when the optical paths between beams are different. More generally, the aberrations can be thought of as phase deviations. They are measured in units of wavelength, and for modern lenses the errors are on the order of 20 milliwaves, which is about 4nm for 193nm exposure tools [132]. The optical path difference (OPD) for each beam is defined with respect to the zero-diffraction-order beam passing through the center of the lens, and is equal to the difference of the optical paths of the two beams. The aberrations are completely characterized by the OPD surface across the lens. A common approach is to represent the OPD surface using the spherical coordinates, and to decompose it in terms of the contributions of different types of aberrations using orthonormal Zernike polynomials:

OPD(ρ, φ) = Σj aj λ Zj(ρ, φ)    (7.4)
In this decomposition, the coefficient aj determines the contribution of the jth Zernike term, measured in units of the wavelength. The advantage of such a decomposition is that it permits the analysis of the individual contributions and a better qualitative understanding of the signatures of different terms. The theory of aberrations, which goes beyond the scope of this book, can be used to show that each term contributes an error of a specific profile [133]. While the total number of Zernike terms is rather high (more than 30), here we consider the impact of several of the most important terms. The first component Z1 adds a constant phase shift across the image plane and thus does not affect the image. Z2 (Z3) represents a tilting of the OPD that is manifested as a shift of the image in the plane of the wafer along the X (Y) direction. The magnitude of this shift does not depend on the shape of the pattern. Typically, however, the values of a2 and a3 vary across the lens field, resulting in overlay errors. The aberration component Z4 introduces defocus. It thus adds to the many other sources of focus variation such as wafer non-flatness, leveling errors, lens heating, and autofocus errors. The impact of Z4 is somewhat different in that the coefficient a4 often varies across the image field, which reduces the overall usable depth of focus and reduces the process window. The terms Z5 and Z6 refer to the aberration known as astigmatism, which leads to a focus difference between lines of different orientation. Astigmatism creates OPD profiles such that the focus shift is positive for mask patterns laid out in one direction and negative for patterns in the perpendicular direction. Because shapes of different orientation are present in traditional layouts, the common process window, defined by the overlap of the process windows of distinct shapes, is reduced. Below we discuss process window analysis in greater detail. The terms Z7 and Z8 capture the coma effect, which causes an image shift that depends on the pupil radius and typically manifests itself in image asymmetry. For example, when a pattern with three polysilicon fingers is printed, the left line will have a different linewidth compared to the right one. The term Z11 corresponds to spherical aberration, which results in a focus shift that depends on the pupil radius [133].

So far we have discussed the distortions introduced by advanced optics into intensity profile formation. The lithographic patterning process is a combination of optical behavior and photoresist chemistry. A simple model of resist action is the constant threshold resist (CTR) model, which assumes that the resist chemistry is activated when the delivered amount of light intensity exceeds a threshold. The actual geometric profile of the pre-etch resist will deviate from the threshold intensity location by a constant bias [134]. Both the threshold and the bias are resist-specific and can be found either through simulation or empirical characterization.
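A minimal sketch of the constant threshold resist model is given below: a toy aerial-image intensity profile is cut at the resist threshold and a constant bias is added to obtain the printed CD. The Gaussian-blurred profile, the threshold of 0.5, and the 5nm bias are illustrative assumptions, not calibrated resist parameters.

# Constant threshold resist (CTR) model applied to a toy aerial image:
# printed CD = width of the region where intensity stays below the
# threshold (the unexposed line), plus a constant resist/etch bias.
# The blurred profile, threshold, and bias are illustrative assumptions.
import numpy as np
from scipy.stats import norm

x = np.linspace(-300.0, 300.0, 2001)   # nm across a single dark (line) feature
target_cd = 130.0                      # nm drawn linewidth
blur = 60.0                            # nm Gaussian blur standing in for the lens low-pass

# Aerial image of a dark line on a clear field: 1 far from the line, dipping near it
half = target_cd / 2.0
intensity = 1.0 - (norm.cdf((half - x) / blur) - norm.cdf((-half - x) / blur))

threshold = 0.5      # assumed resist activation threshold (fraction of clear-field dose)
resist_bias = 5.0    # nm assumed constant resist/etch bias

below = x[intensity < threshold]       # region left unexposed -> resist (and poly) remains
printed_cd = (below.max() - below.min()) + resist_bias
print(f"printed CD ~ {printed_cd:.1f} nm for a drawn CD of {target_cd:.0f} nm")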
7.2 PROCESS WINDOW ANALYSIS

In this section we discuss analytical tools for precisely capturing the ability of a lithographic process to withstand a specific amount of variability in key parameters, such as dose and focus. In the previous section, we saw that vertical displacements due to non-uniform topography of the photoresist during exposure will lead to a change in the image plane. A variety of factors such as wafer topography, CMP-driven layer non-uniformity, non-flatness of the mask, and focus setting error lead to a lack of focus, or defocus, in the imaging of patterns. The nominal image intensity profile is defined as being in focus and printed with the nominal intensity dose. The amount of defocus determines the deviation of the printed geometry from the nominal geometry. Many patterns exhibit very high sensitivity to defocus. Another factor that is difficult to control but that exerts a strong influence on the printed geometry is the amount of energy delivered to the photoresist. The amount of energy, or dose, varies from one exposure shot to another, and from wafer to wafer. However, there are multiple other factors in the litho flow, such as post-exposure bake temperature, resist development time, and true shot-to-shot dose variations, that affect the image in a way equivalent to a variation in dose. These effects are typically treated as contributing to dose errors [135]. The advantage of such an approach is that the number of independent sources of variation is reduced and the analysis is greatly simplified.

The robustness of the imaging process, e.g., how much variability in dose and focus it can tolerate, can be captured by multiple measures. The simplest ones include the exposure latitude, the depth of focus, and their combined measure, the exposure-defocus window. All printed patterns are affected in some way by defocus and dose variation. We now discuss one particularly important mask pattern, the polysilicon gate conductor, and the key metric - the variation in the critical dimension (CD) of the polysilicon gate. The plot that represents CD linewidth as a function of defocus at different dose values is known as the Bossung plot, Figure 7.3. The same information, with the emphasis on the sensitivity of CD to dose variation, can be captured with the exposure latitude plot. In this plot a family of curves represents the CD dependence on dose variation at different levels of defocus. The sensitivity of CD to defocus also depends on the feature pitch, i.e., on the mask neighborhood of a shape. The Bossung plots for shapes at different pitches may exhibit opposite behavior. For example, it has been observed that densely spaced features bend upward at larger defocus, while isolated shapes bend downward [143]. The Bossung plot can be used to estimate the depth of focus (DOF), which is the maximum value of defocus at which a feature still prints within specifications. The typical required tolerance is 10% or 15% of the nominal CD. Flat curves in the Bossung plot imply low sensitivity to defocus, which is equivalent to a larger value of the depth of focus. In a way similar to DOF, the exposure latitude measures how much the illumination dose can vary without causing the CD to fall outside of specs. If the curves in the Bossung plot lie
next to each other, the sensitivity to dose variation is low, implying larger exposure latitude.
Fig. 7.3. Bossung plot represents CD linewidth as a function of defocus at different dose values. (Reprinted from [135], ©2006 SPIE).
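As a toy illustration of process-window extraction from Bossung-style data, the sketch below assumes a simple fitted CD(dose, defocus) response (the coefficients are invented for illustration), scans a dose-focus grid, and reports the focus range and the exposure latitude over which the CD stays within ±10% of target.

# Toy process-window scan: find where CD stays within +/-10% of target
# over a dose/defocus grid.  The CD(dose, focus) model below is an invented
# quadratic fit, not data from the text.
import numpy as np

cd_target = 130.0                                    # nm

def cd_model(dose, focus_nm):
    """Assumed fitted response: linear in dose error, quadratic in defocus."""
    return cd_target * (1.0 - 0.8 * (dose - 1.0)) + 4e-4 * focus_nm**2

dose = np.linspace(0.85, 1.15, 121)                  # relative exposure dose
focus = np.linspace(-250.0, 250.0, 101)              # nm defocus
D, F = np.meshgrid(dose, focus)
cd = cd_model(D, F)
in_spec = np.abs(cd - cd_target) <= 0.10 * cd_target # +/-10% CD tolerance

# Depth of focus at nominal dose, and exposure latitude at best focus
nom_dose = np.argmin(np.abs(dose - 1.0))
best_focus = np.argmin(np.abs(focus - 0.0))
dof = np.ptp(focus[in_spec[:, nom_dose]])
el = np.ptp(dose[in_spec[best_focus, :]])
print(f"DOF at nominal dose: {dof:.0f} nm;  exposure latitude at best focus: {el*100:.0f}%")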
The information contained in the Bossung plot and the exposure latitude plots can also be represented in terms of contour plots of constant CD for different exposure and defocus values. For a given value of CD tolerance (e.g. 10%), the range of focus and dose values for which the printed CD will be within specifications is bounded by two curves: these are the two double-dotted curves in Figure 7.4. For reliable manufacturing, the behavior of two other measures of resist quality as functions of defocus and dose variation needs to be taken into account. The first is the resist sidewall angle. The second is the resist thickness. The values of defocus and dose for which these two measures do not deviate from their specifications by more than the allowed tolerance level (e.g. 10%) can be described by contours within the same exposure-defocus plot. These are the solid and broken curves in Figure 7.4. The overlap of the three sets of curves establishes the exposure-defocus (E-D) window, Figure 7.4. The exposure-defocus process window is a key tool of lithographic control and provides a common criterion of robustness, since the exposure latitude is defined at a fixed defocus and, similarly, the depth of focus is defined at a fixed dose. From the E-D window, the maximum allowed focus variation at a fixed dose variation, and vice versa, can be deduced. Assuming that the errors in focus and dose are systematic, the maximum process latitude is given by a rectangle
Fig. 7.4. E-D process window plots contours of constant CD, resist sidewall, and resist thickness values in the exposure-defocus space. Their overlap establishes the process window, which is given in the figure by the inscribed rectangle or ellipse. (Reprinted from [135], © 2006 SPIE).
Fig. 7.5. Depth-of-focus vs. exposure latitude (DOF-EL) curve. It is used in common window analysis to analyze the impact of different error sources. (Reprinted from [135], © 2006 SPIE).
that can be fitted into the process window in Figure 7.2. The height (focus) and the width (dose) of the maximum rectangles that are contained in the E-D window can alternatively be captured by the depth-of-focus exposure latitude (DOF-EL) curve, Figure 7.2. Fitting a rectangle into the process window is only justified if the errors in dose and focus are systematic. In that case, every point in the rectangle can occur and the process must produce a reliable image under such conditions. If the errors are random, then the set of values that will occur with a given probability is an ellipse. An advantage of the ellipse model is that it rules out combinations of the extreme values of defocus and dose variation as highly improbable. The result is a substantially larger process latitude, which is reflected in Figure 7.2 in the fact that the ellipse curve is further away from the origin.

The DOF-EL curve is also convenient for analyzing the impact on the process window of other sources of CD variation, in addition to focus and dose variation. Important examples are the contributions of lens aberrations and mask CD errors, which are playing a growing role in defining the overall process window. We know that aberrations lead to focus variation across the image plane. This can be represented by a shifted process window at each specific location, and can be captured by a family of process window boundaries plotted in the E-D window. Mask errors and a large MEEF are a growing source of printability problems [144], as we saw in Chapter 3.1. Since the process latitude is given by the maximum rectangle that can be fitted between the E-D boundaries, the impact of the additional sources of variation is to reduce the common process window and shift the DOF-EL curve towards the origin.
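These relationships can be sketched numerically. The short Python fragment below uses a toy, uncalibrated CD(defocus, dose) model (all coefficients are invented for illustration, not taken from any real process) and reports, for several defocus values, the span of dose over which the CD stays within a ±10% specification; plotting these pairs gives a DOF-EL-style curve.

```python
import numpy as np

# Toy Bossung-style model, for illustration only: CD grows quadratically with
# defocus and shrinks with increasing dose.  Coefficients are invented.
def cd_model(defocus_um, dose_rel, cd_nominal=65.0, a=40.0, b=0.8):
    return cd_nominal * (1.0 + a * defocus_um**2 / cd_nominal) / dose_rel**b

cd_nom = cd_model(0.0, 1.0)
tol = 0.10 * cd_nom                       # +/-10% CD specification

defocus = np.linspace(-0.4, 0.4, 81)      # um
dose = np.linspace(0.8, 1.2, 81)          # relative dose (1.0 = nominal)
F, E = np.meshgrid(defocus, dose, indexing="ij")
in_spec = np.abs(cd_model(F, E) - cd_nom) <= tol

# For each defocus value, report the span of in-spec doses (the exposure
# latitude); for this simple model the in-spec dose values are contiguous.
for i in range(0, len(defocus), 10):
    ok = in_spec[i]
    el_pct = 100.0 * (dose[ok].max() - dose[ok].min()) if ok.any() else 0.0
    print(f"defocus {defocus[i]:+.2f} um -> exposure latitude ~ {el_pct:4.1f} %")
```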
7.3 OPTICAL PROXIMITY CORRECTION AND SRAFS

We now discuss a range of resolution enhancement techniques for enabling sub-wavelength lithography. Among the most successful and widely adopted measures for improving printability is the use of optical proximity correction (OPC). OPC is a technique based on pre-distorting mask patterns to make the printed silicon patterns appear as close to the targeted shapes as possible. This is achieved by either modifying the ideal features or introducing additional geometrical structures into the mask to correct for image distortion. OPC has proven itself an efficient way to (1) prevent functional failures due to poorly printed features, and (2) reduce across-the-chip linewidth variation. OPC has been routinely used beginning with the 0.25um technology node, and is now indispensable. It is routinely offered through the foundries, which perform the corrections in the mask-making step. Typically, OPC is performed when design data is prepared for mask fabrication, and thus is invisible to the designer.

There are two major ways to compute the mask correction amounts: rule-based and model-based. In the rule-based approach, the correction amounts
are captured in simple tables that pre-define bias amounts for layout shapes at each pitch. During the run-time of the OPC algorithm, only geometric operations are required to match specific layout pattern configurations to rules and to find the correction amounts in the rule tables. In early applications of OPC, when the number of patterns to correct was not large and only the nearest neighbor had to be considered, rule-based correction was sufficient. Rule-based correction becomes insufficiently flexible when the radius of optical influence (the optical interaction range) is much larger than the distance to the nearest neighbor. In that case, the correction amount for an edge will also depend on the distance between the nearest and second-nearest feature, or on even more distant features. Increasing the number of geometric configurations covered by distinct rules is possible, but the table size may become too large since it grows exponentially with the number of parameters. For 193nm illumination, the radius of influence is about 500 nm. At the 65nm node, this is equal to nearly half the height of a standard cell and thus would contain a great many shapes.

An alternative to rule-based OPC is model-based OPC. Model-based OPC is different in that it computes the silicon image of a specific mask during the run-time of the algorithm. It does this by performing optical simulation of the entire lithography flow: of the illumination source, of its interaction with the mask, of the diffraction of beams from the mask and their interaction with the lens, and of the eventual interference pattern forming the image-plane intensity profile. In addition to optical effects, the simulation may capture the effects of photoresist acid diffusion, flare, and pattern loading for reactive ion etching [136]. The correction amounts are computed after the silicon image is available for the specific geometric pattern. The model-based approach enables a superior correction capability since it does not reduce a very rich space of geometries to a finite set of rules.

Because in model-based OPC the generation of the silicon image is performed during the run-time of the algorithm, this class of OPC techniques is computationally quite demanding. This initially tended to favor the much faster rule-based algorithms. Significant progress in computational lithography has been made to speed up the evaluation of optical and resist models, enabling full-chip simulation in a reasonable time and thus making model-based OPC practical. Still, the simulation requires a trade-off between run time and accuracy. Completing the task in reasonable time (days) still requires running simulation on a large cluster of workstations involving up to hundreds or even thousands of CPUs. For example, full-chip model-based OPC for a 90nm chip with 15M gates, which required performing RETs on 8-12 layers, took 100 hours on a cluster of 50 CPUs [137].

Several 1-D and 2-D distortion patterns are effectively corrected by OPC. One of the main objectives of OPC is to perform selective linewidth biasing to correct for pitch-dependent ("through-pitch") linewidth variation. We saw in an earlier chapter that printed CD exhibits a strong dependence on pitch. The specific profile is process-dependent and must be characterized by either
test measurements or through optical process simulation. The correction of through-pitch linewidth dependence is achieved by increasing (or decreasing) the ideal-case dimension of a feature on the mask so that it is properly printed on the wafer. The need for accurate prediction of the silicon profile is especially severe when 2-D layout patterns are corrected. This is the case, for example, with such common distortions as line shortening and corner rounding. The degree of both shortening and corner rounding depends on the 2-D neighborhood and has to be determined by optical simulation. Prevention of substantial line shortening is important for avoiding yield losses due to overlay mismatch. The straightforward way to compensate for line shortening is to increase the line length. However, this leads to a loss of layout density. For this reason, emphasizing the line end with additional mask patterns, known as hammerheads, is more effective. Similarly, corner rounding can be compensated for by using serifs. For an inner corner a serif is an intrusion, while for an outer corner it is a protrusion, Figure 7.3.
Fig. 7.6. Common OPC distortions applied on the mask. (Reprinted from [128], © 2003 IEEE).
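As an illustration of the rule-based flavor described above, the sketch below looks up a per-edge bias from a pitch-indexed table; the pitch breakpoints and bias values are hypothetical, chosen only to show the table-lookup mechanics.

```python
# Hypothetical rule-based OPC bias table: pitch range (nm) -> edge bias (nm per edge).
BIAS_RULES = [
    (0,     200,          0.0),   # dense lines: no bias
    (200,   300,          2.0),
    (300,   500,          4.0),
    (500,   1_000,        6.0),
    (1_000, float("inf"), 8.0),   # isolated lines: largest bias
]

def rule_based_bias(drawn_cd_nm: float, pitch_nm: float) -> float:
    """Return the mask CD after applying the per-edge bias found in the rule table."""
    for lo, hi, bias in BIAS_RULES:
        if lo <= pitch_nm < hi:
            return drawn_cd_nm + 2.0 * bias   # bias applied to both edges
    raise ValueError("pitch not covered by rule table")

print(rule_based_bias(65.0, 180.0))   # dense     -> 65.0
print(rule_based_bias(65.0, 1200.0))  # isolated  -> 81.0
```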
The ability of OPC algorithms to improve uniformity by mask-level corrections can be used to correct for non-idealities introduced by non-optical effects, for example, etch microloading, mask effects such as mask corner rounding, and lens non-idealities (aberrations) [136]. Some authors refer to such a use of OPC as process proximity correction (PPC) since it compensates for optical as well as non-optical effects. A model of the complete pattern transfer process, including the resist process and etch, can be constructed through measurements or simulation. For example, such a PPC technique could help in reducing the impact of lens aberrations. Lens aberrations, which become more pronounced as optical photolithography is pushed to its limits, lead to systematic spatial variation of L across the chip. By making the correction bias dependent
on the location of the feature in the optical field, OPC could reduce this variation. Complete correction of the spatial dependency is not possible. One limiting factor is the finite resolution of mask-making equipment. Another is the non-zero random intra-field component of CD variation: correcting by an amount smaller than this noise level yields diminishing returns in intra-field variability reduction. It should be noted that while aberration-aware OPC could provide some real benefits, implementing such a flow would be expensive. One problem is that the specific set of lens aberrations is unique to a lens, and thus to a specific stepper machine. An aberration-aware OPC flow would therefore require making a stepper-specific mask, which is expensive from the point of view of manufacturing logistics. Lenses also have a tendency to "age", i.e., to change their aberration profile, as they go through thermal cycles. Without the periodic introduction of a new mask set, the effectiveness of aberration-aware OPC would be reduced.

The need to perform OPC requires changes to the CAD flows and practices [145]. Most typically, OPC is performed when design data is prepared for mask fabrication. In this case, designers rely on OPC to guarantee the ideal shapes, but the process is transparent to them. Some layouts may not be OPC-friendly in that the OPC algorithm cannot guarantee their printability. Therefore, one implication of foundry-side OPC is that the layout sent to the mask shop by the designer must be OPC-compliant. Within the standard cell flow this requires (1) that the intra-cell layouts are OPC-compliant, and (2) that any legitimate placement and routing is OPC-compliant. Making intra-cell layout OPC-compliant is easy because it can be done once, at the library creation stage. Ensuring that an arbitrary placement of cells is OPC-compliant requires care and, typically, paying some price in terms of area. The composability problem can be alleviated through the use of restricted design rules (RDR) and by adding sufficient empty space at the cell periphery [147].
7.4 SUBRESOLUTION ASSIST FEATURES

The OPC techniques discussed above, such as selective line biasing and serif and hammerhead insertion, are effective in ensuring that under nominal conditions the layout features are printed as desired. However, their impact on process latitude is small and they do not improve image robustness. For example, the distribution of linewidth due to defocus and dose variation is shifted by applying OPC, but its variance is not changed. The bias between isolated and dense patterns is eliminated; nonetheless, isolated lines still have a smaller process latitude (and a larger linewidth distribution) than dense patterns. Another pattern correction strategy that is often applied in conjunction with OPC is the insertion of sub-resolution assist features (SRAFs). Assist features, also known as sub-resolution assist bars or scattering bars, are different
in that they actually increase the process window, and are effective in improving pattern fidelity and reducing variability. Typically, SRAFs are additional mask patterns that are added along the edges of the core pattern, Figure 7.4. Their defining characteristic is that they are smaller than the minimum printable size, and thus will not be printed on the photoresist. However, their presence modifies the light intensity profile on the wafer: the diffraction pattern produced by an SRAF is similar to that produced by a mask pattern that is intended to be printed. Recall that the reason for the dense-iso bias is the difference in diffraction patterns and light intensity profiles. The inserted SRAF acts as an apparent neighbor, making the intensity profiles of features at different pitches more alike. Thus, SRAFs effectively re-create the environment of a dense pattern for the isolated mask features, Figure 7.4. This similarity gives both types of patterns similar sensitivities to defocus and dose variation. Their process windows become more alike and have a larger area of overlap, increasing their common process window.
Fig. 7.7. SRAFs are additional mask features with dimensions below the resolution limit that are added along the sides of the core shapes.
To be effective in making the light intensity profiles of layout patterns at different pitches look similar, multiple assist features are needed for pitch values beyond a certain point. This need presents the designer with difficulties. For assist features to remain below resolution, they cannot be made larger, and they also cannot be placed arbitrarily close to each other. As a result, it is not possible to insert an arbitrary number of SRAFs at any given pitch. Because of these constraints, the number of inserted SRAFs is a discontinuous ladder function of pitch. The resulting difficulty is a non-monotonic dependence of process latitude on pitch as the number of inserted assist features changes. In Figure 7.4 we see that without SRAFs the process latitude steadily decreases with increasing pitch. With the insertion of a discontinuously growing number of SRAFs, there are pitch values with much lower than maximum process latitude. In order to narrow the linewidth variation caused by low process latitude, it is desirable to forbid the use of pitches for which high process latitude cannot be guaranteed.
Fig. 7.8. SRAFs and the resulting intensity pattern. SRAFs make intensity profiles at different pitches more similar. (Reprinted from [129], © 2003 IEEE).
Fig. 7.9. At a given pitch, it is impossible to insert an ideally desired number of SRAFs. This lowers process latitude at some pitches, leading to the notion of forbidden pitches that should be avoided due to low process latitude. (Reprinted from [138], © 2001 IBM).
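The ladder-function behavior behind the forbidden-pitch notion can be made concrete with a few lines of Python. The geometric constraints below (SRAF width, SRAF-to-feature gap, SRAF-to-SRAF gap) are invented, not taken from any real technology; the point is only that the count of insertable SRAFs jumps discontinuously with pitch.

```python
# How many SRAFs of width w_sraf fit in the space between two main features,
# given a minimum SRAF-to-feature gap and SRAF-to-SRAF gap?  All values in nm
# and purely illustrative.
def sraf_count(pitch, line_cd=65, w_sraf=40, gap_feat=80, gap_sraf=100):
    space = pitch - line_cd - 2 * gap_feat          # room left for SRAFs
    if space < w_sraf:
        return 0
    # n SRAFs occupy n*w_sraf + (n-1)*gap_sraf
    return int((space + gap_sraf) // (w_sraf + gap_sraf))

for pitch in range(150, 901, 50):                   # prints a staircase of counts
    print(pitch, sraf_count(pitch))
```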
7.5 PHASE SHIFT MASKING

Another powerful method of ensuring better process uniformity in the era of sub-wavelength photolithography is phase-shift masking (PSM) technology [130]. Earlier in this chapter, we saw that the Rayleigh criterion sets the resolution limit at Rmin = 0.5λ/NA, making the minimum k1 = 0.5. It is remarkable that there is a way to overcome this limit. Fundamentally, this can be done by manipulating the phase of light to artificially create favorable patterns of destructive interference. The phase manipulation needed for resolution improvement can be done in two ways: one is illuminating the openings from an angle (off-axis illumination, OAI); the other is phase-shifting. Both create a path difference equivalent to a 180◦ phase shift for light coming from neighboring openings. As a result, the first diffracted order needed to produce an image passes closer to the center of the lens, Figure 7.5a. PSM technology creates beams with the appropriate path (phase) difference by modifying the traditional binary mask: special phase shifters are inserted to create a phase difference between beams from neighboring mask openings, Figure 7.5b. OAI achieves a similar result by using illumination at an angle, Figure 7.5c. Following the analysis of resolution carried out earlier in this chapter, it can be shown [129] that the ultimate resolution, in the sense of half-pitch, is now halved compared to the Rayleigh resolution limit: Rmin = 0.25λ/NA (with PSM or off-axis illumination)
(7.5)
In other words, with PSM or OAI, k1 can be as low as 0.25. It is also important that the illumination patterns of Figure 7.5(b) and (c) lead to destructive interference at angle 0 due to a 180◦ phase difference between the 0th and 1st diffraction order beams. This condition helps to significantly increase the depth of focus [128], and thus increase the process window and improve CD uniformity. The improvement in uniformity allows more aggressive feature size scaling and gives a significant performance benefit. Because of these benefits, PSM and OAI have become indispensable in ensuring the manufacturability of nanometer-scale technologies.

Phase-shifting technology comes in two most common flavors: alternating PSM and attenuating PSM. In alternating PSM, every pair of neighboring clear regions must have a 180◦ phase difference. The phase difference between neighboring clear areas leads to destructive interference and results in a sharp dark area. The phase shift of 180◦ is achieved, in practice, by selectively etching the mask substrate and creating the optical path difference of λ/4. Each critical feature has a 0◦ phase on one side and a 180◦ phase on the other side, which creates destructive interference and leads to a zero of intensity, improving sharpness, Figure 7.5. The ability to create an intensity profile with a sharp transition from high-intensity to low-intensity regions translates into finer resolution, a bigger exposure latitude, and a larger depth of focus, Figure 7.5.
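A quick numerical check of Equation (7.5), assuming a 193 nm wavelength and an illustrative numerical aperture of 0.93 (the NA value is an assumption, not taken from the text):

```python
# Half-pitch resolution Rmin = k1 * lambda / NA.
wavelength_nm = 193.0
NA = 0.93   # assumed value, for illustration

for k1 in (0.5, 0.25):
    print(f"k1 = {k1:.2f}: Rmin = {k1 * wavelength_nm / NA:.1f} nm half-pitch")
```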
Fig. 7.10. (a) Minimum resolution can be improved by reducing the angle of deflection of the 1st diffraction order. This can be done by (b) PSM or (c) OAI. (Reprinted from [129], © 2003 IEEE).
The use of AltPSM introduces two major challenges. One fundamental implication of AltPSM is that in every layout pattern that is to be printed at the minimum dimension, known as a critical pattern, the opaque mask region has to be surrounded by clear mask regions having opposite phases. It turns out that not every layout can be assigned phases in such a way that every pair of neighboring mask openings has opposite phases. Two common patterns that do lead to phase conflicts are shown in Figure 7.5a. Phase conflicts must be resolved, since otherwise critical features may not be printed with sufficient fidelity, or at all. Phase conflict resolution inevitably leads to a loss of layout density. The easiest way to resolve a phase conflict is to turn a critical feature into a non-critical one, which can be done by increasing its dimension. Other ways include expanding poly outside of the active area and moving features apart to create white space. Figure 7.5b shows several alternative solutions to the phase conflict in the top pattern of that figure. The specific layout constraints and objectives determine which solution is preferable in each case. It is more effective to avoid phase conflicts by generating layouts that are free of them, i.e., PSM-compliant layouts. While fully avoiding phase
Fig. 7.11. Light from openings with opposite phases has equal magnitude but opposite sign, resulting in destructive interference at the wafer plane and sharper transitions.
conflicts may not be possible, much work in the EDA community has been devoted to enabling the generation of layouts with as few conflicts as possible.
Fig. 7.12. Alternating PSM masks increase the process window compared to a binary (chrome-on-glass, COG) mask. (Reprinted from [136], © 2001 SPIE).
Fig. 7.13. (a) Two common layout patterns resulting in phase-assignment conflicts. (b) Multiple solutions exist to the pattern with a phase conflict shown at the top. (Reprinted from [129], © 2003 IEEE).
The second challenge related to PSM is the problem of a "residual image", which is an unwanted effect of introducing phase-shifted regions in the layout. This problem typically affects critical features, such as polysilicon gate conductors. Ideally, the boundary between mask regions of opposite phases falls onto the opaque mask region that implements the critical feature. An undesired image is created when the boundary between mask regions of opposite phases does not fall onto the opaque mask region, Figure 7.5. Because of the phase difference, destructive interference will occur and produce
a zero of intensity in a transparent region that was supposed to have high intensity. This produces an unintended dark region where no feature should be printed. In contrast to the problem of phase assignment, design-side solutions to the residual image problem do not exist, since nothing can be done on the design side to prevent phase transitions over the transparent regions. The solutions come from the manufacturing side. While multiple alternatives have been explored for getting rid of residual phase edge images, the most common strategy is to use a double-exposure process. In this process, an exposure with a bright-field mask (a clear mask substrate) is followed by a second exposure with a dark-field mask (an opaque mask substrate). The second mask, known as the trim mask, ensures that regions with residual phase edges are exposed, preventing unwanted shapes.
Fig. 7.14. Alternating PSM results in an unwanted image due to a phase transition that does not fall on the opaque mask region. (Reprinted from [129], © 2003 IEEE).
In addition to alternating-phase PSM, an important class of PSM techniques is attenuating PSM (AttPSM). Both techniques rely on the same fundamental principle: using destructive interference to improve intensity contrast. In this case, however, the regions that are made completely opaque in the traditional binary (chrome-on-glass) mask technology are made partially transmitting. Typically, about 7% of the total energy is transmitted. This amount of intensity is below the resist threshold and is thus not sufficient for resist exposure. The light transmitted through the attenuated regions has a 180◦ phase difference with respect to the light from the clear regions. Notice that, in contrast with AltPSM, all clear regions have the same 0◦ phase. While the energy transmitted through the "dark" regions is not sufficient for the area to be printed, the destructive interference between the clear area and the partially
Fig. 7.15. Impact of attenuating PSM on intensity profile. (Reprinted from [136], © 2001 SPIE).
transmitting area still occurs. This helps to improve the intensity (and thus, image) contrast. Figure 7.5 shows how the use of AttPSM creates a sharper transition from high to low intensity in the wafer intensity profile.

We now briefly discuss the implications of PSM on design practice. The use of PSM requires some modification of the design flow and leads to a reduction of layout density because additional design rules must be satisfied. From the designer's point of view, the ideal methodology would be based on a "black-box model" that does not require the designer to be aware of phase-shifting. In this model, the generation and validation of phase shapes is carried out in the mask shop, in a way similar to optical proximity correction and SRAF insertion. The designer requires only the design rules for the poly layer and design rule checking tools capable of verifying conformance to the new rules [130]. The black-box model presumes that the layout generated by the designer can be made phase-shiftable, i.e., that the phase assignment can be done in a conflict-free manner. Design experience with PSM indicates that this is possible only by adopting design rules that lead to unacceptable penalties in terms of layout density [130]. In other words, the price of having a layout free of phase conflicts is a severe relaxation of design rules. This indicates that the designer's involvement in making the layout AltPSM-compatible may be unavoidable. In this flow, phase shape generation is performed during layout creation, with designers relying on CAD tools that can check the design for PSM compliance.
7.6 NON-CONVENTIONAL ILLUMINATION AND IMPACT ON DESIGN

In order to enable lithography at very low k1, new illumination techniques have also been introduced. While the techniques themselves properly belong to the world of process engineering, their use has led to a number of crucial implications for designers. It is from this perspective that we briefly discuss these strategies here. Until the move towards very low k1 lithography, illumination systems used a single illumination configuration. The use of modified illumination techniques permits a significant improvement in resolution: it effectively reduces the resolution limit to k1 = 0.25 from 0.5, thereby doubling the resolution. The new strategies include off-axis illumination, such as dipole, quadrupole, and annular illumination configurations, and several levels of partial coherence.

The precise mechanisms that enable improved resolution through the use of the above techniques vary. There is, however, one fundamental implication of all these techniques for circuit designers: the improvement in resolution is highly selective. The improvement in image quality does not happen for every possible pattern. Moreover, while the image quality for some patterns is improved, for others the intensity contrast and, thus, image quality are reduced. The new illumination strategies fundamentally create limitations on the sizing, pitch (i.e., "forbidden" pitches), and orientation of layout patterns that can be reliably manufactured. The improvements allowed by advanced lithography can be enjoyed only if one avoids using certain patterns that might otherwise be chosen from the point of view of traditional design objectives: performance, power consumption, and area density. Thus the price of printability with finer resolution is smaller flexibility in design.

The specific patterns of improvement and degradation depend on the particular illumination scheme in use. Each illumination method favors a specific layout style. In a dipole illumination system, isolated features oriented along the axis of the dipole have larger process margins than the same features oriented perpendicular to this axis. Only a limited set of pitches along a single orientation, as well as feature sizes, can be used if they are to be printed with sufficient process latitude [139]. In a quadrupole system, the light falls onto the mask from four orientations. This enhances 0◦ and 90◦ geometries that lie on a rectangular (Manhattan) grid but makes imaging of some 45◦ shapes virtually impossible. In general, off-axis illumination significantly worsens the iso-dense bias (i.e., the bias between isolated and dense lines). This effect can be compensated for with OPC pre-biasing; however, the variation in depth of focus of poly lines at different spacings still remains. The use of SRAFs, although subject to the forbidden pitch problem, can help restore DOF uniformity and is routinely used together with off-axis illumination to improve pattern fidelity.

We saw earlier that the use of modern illumination leads to significant limitations on the designer's flexibility. In order to ensure printability, process
engineers force designers to follow a growing list of design rules that are introduced exclusively for the goal of establishing better manufacturability. This contributes to the problem of design rule explosion and makes design closure more difficult, requiring more extensive DRC and rule violation fixing. In response, an idea that has been gaining popularity is to rely on a radically restricted design rules (RDR) approach from the start, rather than incrementally modifying existing layouts to make them satisfy the design rules. In this strategy, the manufacturing requirements are taken as fundamental, and the possible layout configurations are then derived from them. The lithographer's dream layout turns out to be a regular grid, effectively a diffraction grating. At the critical layer, features would be allowed only at integer multiples of the minimum pitch (the contacted device pitch). Moreover, lines are permitted in only a single orientation. While it may seem that such a design style would be expensive in terms of layout area, early experiments suggest that if an arbitrary initial layout has to be made to satisfy all the design rules, it may incur an even more severe area penalty [129].
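A restricted, gridded layout style lends itself to very simple compliance checks. The sketch below flags poly rectangles that are not vertical or whose centerline does not sit at an integer multiple of an assumed contacted device pitch; the pitch value and the rectangle format are hypothetical, chosen only to illustrate the idea.

```python
# Each line is (x_left, y_bottom, x_right, y_top) in nm; vertical poly assumed.
CONTACTED_PITCH = 190          # assumed contacted device pitch, nm

def rdr_violations(lines):
    violations = []
    for i, (x0, y0, x1, y1) in enumerate(lines):
        if (x1 - x0) >= (y1 - y0):                # wider than tall: wrong orientation
            violations.append((i, "wrong orientation"))
        center = (x0 + x1) / 2.0
        if center % CONTACTED_PITCH != 0:          # centerline off the pitch grid
            violations.append((i, "off-grid placement"))
    return violations

lines = [(158, 0, 222, 1000),   # on-grid vertical line (center at 190)
         (400, 0, 464, 1000),   # vertical but off-grid
         (0, 500, 1000, 564)]   # horizontal line: not allowed in this style
print(rdr_violations(lines))
```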
7.7 NOMINAL AND ACROSS PROCESS WINDOW HOT SPOT ANALYSIS

The complexity of RET synthesis (OPC, SRAFs, PSM) and the large number of manufacturability design rules necessitate performing full-chip lithography manufacturing checking (LMC) after RET insertion. In contrast to traditional design rule checking, which can be reduced to geometric manipulations of polygons, LMC requires a full-chip silicon image to be generated in order to identify the areas of the layout that will not print properly. The layout areas in which the printed silicon, as predicted by lithography simulation, differs from the target layout by more than a pre-defined tolerance level are known as "hot spots". Hot spots are areas of the layout that may lead to yield failures.

Hot spot analysis has to be carried out via full-chip lithography simulation. The development in recent years of fast numerical methods has resulted in full-chip optical and resist simulators that can produce a silicon image for a whole chip in a very reasonable time. These simulation tools must be carefully calibrated to guarantee good accuracy. The amount of characterization data needed for calibration may be quite extensive, and calibration is made more complicated by the fact that the printability characterization must be two-dimensional; 1-D characterization is not sufficient. The silicon image of a feature is impacted by a large number of other features around it. At the 65nm node, the radius of influence is about 500 nm, which is nearly half the height of a standard cell. This also highlights the problem with manufacturability verification based on design rule checking: it is nearly impossible to construct effective design rules that refer to more than two or three neighboring edges [140].
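Conceptually, the last step of LMC is a simple comparison of simulated geometry against the target and a tolerance. The fragment below is a minimal sketch of that comparison; the site names, CD values, and the 10% threshold are placeholders standing in for the output of a real full-chip simulator.

```python
# Minimal hot-spot screen: compare simulated printed CDs against target CDs at a
# set of measurement sites and flag sites outside a 10% tolerance.
sites = {
    # site id : (target CD nm, simulated CD nm) -- placeholder values
    "gate_0012": (65.0, 66.1),
    "gate_0047": (65.0, 57.2),   # pinching candidate
    "m1_0333":   (90.0, 104.5),  # bridging candidate
}

TOL = 0.10
hot_spots = {name: (tgt, sim) for name, (tgt, sim) in sites.items()
             if abs(sim - tgt) > TOL * tgt}
print(hot_spots)
```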
Hot spot analysis is especially important when considering the impact of variation in focus and exposure dose. Focus and dose variation within the process window lead to variations in the silicon image. Because of the shrinking process windows required by low-k1 lithography, nominal verification may miss failures due to variation in these parameters. In order to ensure good yield, manufacturability verification must be performed at all the corners of the process window. Traditionally, OPC corrections are defined at the nominal process conditions; thus, there is no a priori guarantee of printability at the corners of the process window. Some layout patterns will print well under nominal conditions of focus and dose but will not print under process window variations. Figure 7.7 illustrates this by showing several patterns that exhibit poor through-focus printability. As lithography moves further into the low-k1 regime, not only post-OPC lithography manufacturing checking but also OPC itself has to comprehend variations in focus and dose. Specifically, doing OPC in a way that takes into account exposure and focus variations becomes unavoidable.

To enable both efficient process-window-aware lithography checking and OPC, a fast simulation technique that enables full-chip litho simulation over the entire process window is required. This can be done by creating a variational lithography model that predicts the intensity profile as an explicit function of defocus and exposure. The model can be constructed in two ways. The first option is to construct an empirical response-surface model of the final silicon profile in terms of the impact of focus and exposure using design-of-experiments techniques. Alternatively, a model can be constructed analytically by explicitly propagating focus and exposure variables through the optical and resist models [141]. Both models require calibration of parameters over the process window using experimental data. Basic models cover the image dependence on defocus and exposure only. More complex and powerful printability analysis schemes construct a response-surface model that takes into account a larger number of parameters, including resist effects, etch effects, mask error, and misalignment.

The variational lithography model can help identify layout patterns that will fail under specific configurations of process parameters. Lithographic manufacturability checking proceeds by simulating the layout at different points in the process window, typically near its boundary, using the model to speed up evaluation. Hot spots are identified at each such simulation. A variational litho-model also provides information on pattern sensitivities to defocus and exposure. By comparing silicon image contours at the nominal and corner conditions, the sensitivity of a layout pattern to defocus or exposure can be identified. This information is invaluable for automated hot spot fixing and optimization.
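A minimal sketch of the empirical response-surface option is shown below: a quadratic model CD(f, e) is fitted by least squares to (focus, dose, CD) samples and then evaluated at the process-window corners. The synthetic samples stand in for calibration data; a production model would include more terms and parameters.

```python
import numpy as np

# Fit CD(f, e) ~ c0 + c1*f + c2*e + c3*f^2 + c4*f*e + c5*e^2 to samples.
rng = np.random.default_rng(0)
f = rng.uniform(-0.15, 0.15, 60)          # defocus, um
e = rng.uniform(-0.05, 0.05, 60)          # dose deviation from nominal
cd = 65 + 120 * f**2 - 90 * e + 300 * f * e + rng.normal(0, 0.3, 60)  # synthetic data

X = np.column_stack([np.ones_like(f), f, e, f**2, f * e, e**2])
coef, *_ = np.linalg.lstsq(X, cd, rcond=None)

def cd_rsm(focus, dose_dev):
    x = np.array([1.0, focus, dose_dev, focus**2, focus * dose_dev, dose_dev**2])
    return float(x @ coef)

# Evaluate the model at the process-window corners for fast printability checks.
for corner in [(-0.15, -0.05), (-0.15, 0.05), (0.15, -0.05), (0.15, 0.05), (0.0, 0.0)]:
    print(corner, round(cd_rsm(*corner), 2))
```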
Fig. 7.16. Layout patterns exhibiting poor through-process printability. (Courtesy of A. Strojwas).
7.8 TIMING ANALYSIS UNDER SYSTEMATIC VARIABILITY

In this section we discuss methods for dealing with residual post-OPC variability patterns, specifically, with the CD dependence on pitch. Linewidth biasing is applied within OPC to remove this dependence. If OPC were fully successful, the post-OPC silicon would not exhibit any systematic through-pitch CD dependence, even though one would expect to see some residual random variability due to focus and dose variation. Measurements show, however, that existing techniques are not ideal in removing all the sources of systematic dependence. Experiments reveal that predictable systematic dependencies of critical dimension on pitch remain even after OPC [142][143]. There are several reasons for this residual systematic variability. Some of them stem from the mismatch between what the OPC algorithm prescribes and what can in fact be inserted in the mask. In some cases, mask rules prevent the insertion of features or the use of the specific CD biasing amount. The corrections
prescribed by the algorithm for different features may conflict with each other, such that the resulting mask profile is the best possible compromise. Finally, the fidelity of the models and the accuracy of the correction prescribed by the litho simulation used in OPC contribute a growing amount of error [146]. Additionally, current OPC tools do not perform correction that takes into account the spatial across-the-field non-uniformity due to lens aberrations, and they typically compute corrections only at nominal conditions.

When the systematic CD dependence on pitch is severe, it can lead to the emergence of hot spots, which can be addressed by the methods discussed in the previous section. Here we are concerned with the impact of residual systematic variability in CD on timing and leakage variability, and specifically with our ability to accurately predict timing and power in the context of design verification. Timing variability due to residual post-OPC through-pitch CD variation can be substantial. It has been shown that ignoring post-OPC variation leads to under-prediction of the average slack by 24% and the worst slack by 36% in a modern microprocessor block [142]. Path ranking in terms of criticality is also significantly impacted. As a result, both the parametric and functional yields are potentially affected. (It should be noted that the systematic OPC errors are going down with the adoption of restricted design rules and the advancements in OPC algorithms. Thus, the systematic timing errors cited above are likely to be too high for more modern flows [147].)

Notice that statistical STA is not a solution to this deficiency in modeling. Treating the systematic sources of variation by statistical means is both suboptimal and risky. Adopting a statistical model of intra-chip CD variation may lead to the failure of timing paths on all chips (i.e., 100% yield loss) if the error is truly systematic. We need a timing methodology that utilizes the systematic manufacturing information to perform improved timing analysis after OPC. Within a custom design flow, such a step can be implemented through location-specific litho simulation. The methodology involves performing a litho simulation to determine the impact of the layout neighborhood on the silicon CD, and then using layout extraction and circuit simulation to assess the impact on circuit performance. The modeling of the transistor I-V characteristics is complicated by the non-rectangular shape of the polysilicon gate: classical compact device models assume that the transistor gate is rectangular. Recent work has proposed techniques for accurately capturing the effect of gate non-ideality on its I-V characteristics [148].

However, a re-simulation approach is not compatible with currently used timing flows. This is because the standard flows rely on pre-characterized cell timing information for fixed cell footprints, and currently there is no easy way of using a post-OPC layout within a cell-based STA to perform delay estimation more accurately: by the time of GDSII creation, the connection between the polygons of the layout and the specific standard cell with its timing tables is lost. In order to update pre-characterized cell timing information for a given cell that was printed in a specific manner, new data models and tagging strategies need to be used.
Re-extraction of layout parasitics and re-simulation of each cell based on the actual silicon profile generated by litho simulation is too expensive in terms of STA runtime. Instead, parameterized cell timing models dependent on 2-D layout features may be used. These models relate deviations of key geometries to changes in cell timing, and can be constructed using the response surface method. Currently, CAD companies are actively modifying the existing cell-based timing flow methodology to allow efficient capture of the impact of the actual printed silicon geometries on timing.
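A parameterized cell timing model of this kind can be as simple as a least-squares fit of delay against the printed-CD deviation and the output load. The sketch below uses synthetic characterization data and a linear-plus-interaction form; both are assumptions made only for illustration.

```python
import numpy as np

# Synthetic characterization data: delay (ps) vs. printed-CD deviation (nm) and load (fF).
dCD  = np.array([-4, -2, 0, 2, 4, -4, -2, 0, 2, 4], dtype=float)
load = np.array([ 5,  5, 5, 5, 5, 20, 20, 20, 20, 20], dtype=float)
delay = 12 + 0.9 * load + 0.35 * dCD + 0.02 * dCD * load      # made-up ground truth

X = np.column_stack([np.ones_like(dCD), load, dCD, dCD * load])
k, *_ = np.linalg.lstsq(X, delay, rcond=None)

def cell_delay(dcd_nm, load_ff):
    return k[0] + k[1] * load_ff + k[2] * dcd_nm + k[3] * dcd_nm * load_ff

# STA would look up dCD for the cell's printed context (from litho simulation)
# instead of re-extracting and re-simulating the cell.
print(cell_delay(0.0, 10.0), cell_delay(3.0, 10.0))
```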
7.9 SUMMARY

In this chapter we discussed the reasons for the many challenges of lithography in nanometer-scale CMOS. We reviewed the basics of the optical behavior that explains the difficulty of achieving reliable pattern transfer. We then explored a range of reticle enhancement techniques that have become indispensable for ensuring high-quality manufacturing in nanometer-scale CMOS. The primary techniques on the design side include optical proximity correction, phase-shift masking, and subresolution assist features. On the process side, off-axis illumination is used to improve resolution and increase the process window.
8 ENSURING INTERCONNECT PLANARITY
To be able to fill leisure intelligently is the last product of civilization. Arnold Toynbee
Variations in the back end are often related to pattern dependencies, as discussed in Chapter 3. These include both feature-level variations, due to process interactions with features of different sizes, and regional or chip-scale variations, often due to pattern density differences at various locations within the chip. An important approach to reducing the occurrence or severity of pattern dependencies is to modify the layout to be more "regular" – that is, to have a more limited range of pattern density or a restricted set of feature sizes. In some cases, the circuit design might be modified or generated directly to achieve such improved layout regularity. In many cases, however, post-processing of the layout can be performed to insert "dummy fill," or additional layout geometries, which are non-functional from an electrical perspective but which serve to improve the layout regularity. Figure 8.1 shows a schematic illustration of three dummy fill options in the case of copper interconnect.

In this chapter, we discuss dummy fill from three perspectives. We begin with an overview of dummy fill strategies and issues for copper interconnect. The goal from a process physics perspective is discussed, together with strategies for understanding the possible negative impact of inserted dummy structures, primarily revolving around the additional capacitance that might result. We next review a number of algorithmic challenges and strategies related to dummy fill. Finally, additional dummy fill issues and opportunities for other process steps and layers are discussed, including copper electroplating and shallow trench isolation.
Fig. 8.1. Schematic illustration of copper dummy filling options. (a) Unfilled case, with large metal features in cross-hatch. (b) Between pattern dummy fill, consisting of a tiling of small features placed in the field regions, in near proximity to the designed copper feature. (c) Within pattern fill (also known as slotting), where dielectric structures are added to the interior of large copper structures.
8.1 OVERVIEW OF DUMMY FILL

An important early application of dummy fill was in aluminum-oxide interconnect, in order to improve the uniformity of oxide CMP process steps. In the planarization of interlevel dielectric (ILD) layers, the pattern density of raised oxide features on top of patterned aluminum lines is the dominant effect determining the final thickness of the ILD insulator between metal layers. The addition of non-functional aluminum lines or features is used to raise the pattern density in low-density regions, to avoid generating regions on the chip that are too thin after polishing. However, the additional metal features can create coupling capacitances both within the layer and from one layer to the next through overlap capacitance [244], [248].

Two different strategies for dummy fill emerge, depending on the nature and importance of the additional capacitance generated by the presence of the dummy fill. In the case of ASIC designs, design predictability is crucial; therefore, electrically grounded dummy structures may be preferred even though they tend to increase the coupling capacitance. Timing verification with grounded dummy fill is easier because the potential of grounded lines is known. In the case of highly tuned and customized designs, such as microprocessors, performance is paramount: the capacitance needs to be minimized at any cost, even if the timing verification problem becomes more severe. For this reason, floating dummy fill may be selected in that case. The within-layer coupling can also be minimized by ensuring that a minimum spacing or stand-off distance is maintained between functional metal lines and dummy structures.

In the case of copper interconnect, recall that there are two important pattern dependencies at work: dishing and erosion. Erosion tends to be worst in regions with high metal pattern density and where the feature sizes (particularly the dielectric spacing to nearby structures) are smallest. In these regions, arrays of lines and spaces tend to be substantially thinner than in other regions on the chip, particularly field regions or low pattern density areas. In order to reduce the range in final dielectric thickness, typical practice is to insert dummy copper structures in field regions (the low pattern density or more "empty" areas). The addition of metal causes these regions to increase in pattern density and forces them to also suffer increased erosion. In this sacrificial approach, then, erosion is also encouraged to occur in low-density regions of the chip, so that the overall surface topography after CMP is reduced across the chip.

Dishing, on the other hand, tends to be worst in areas where very large copper features are present. In this case, an alternative strategy can be pursued, in which dielectric structures are intentionally inserted into the large metal feature. If the dummy dielectric features are rectangular in shape, the practice is often referred to as "slotting," while if the dummy dielectric features are generally square in form factor, the term "cheesing" is sometimes used. Here we will simply refer to such practices as "in pattern" dummy fill.
We refer to the typical insertion of metal in the field regions between active metal features as "between pattern" dummy fill. These practices are illustrated in Figure 8.1. The addition of dummy fill or slotting features can have a dramatic effect in terms of changing the resulting distribution of copper wire thicknesses. Based on a simulation of a within-pattern dummy fill approach (or slotting) applied only to the large features of the MIT/SEMATECH 854 copper CMP test mask, the effective copper thickness distribution can be substantially tightened. Figure 8.2 shows the simulated distribution of "effective" copper thickness without and with fill, where the penalty created by the cross-sectional area lost to the added insulating dielectric features is included.
8.2 DUMMY FILL CONCEPT

Dummy fill in copper seeks to equalize both pattern density and feature size in the layout, in order to achieve as uniform a final copper thickness as possible across the chip. The pattern density dependency, in particular, is non-local, in that the pattern density depends on an average of the local layout density over some spatial distance. As such, simple "by hand" dummy fill insertion can be far from optimal. Improving the effectiveness of dummy fill is thus one driver for automation; in addition, the large numbers of structures and regions that need to be modified make automatic and algorithmic tools a necessity. In the following, we build up the concepts for dummy fill, together with algorithmic techniques, from the most simple to more sophisticated approaches.

The simplest conceptual approach to between-pattern dummy fill insertion is to insert a pre-defined template of dummy structures into any large open area on the layout. The dummy fill template is typically some structure with a known pattern density, which can be tiled or replicated easily to "fill" the empty layout region, again subject to constraints such as standoffs from active metal features. Conceptually, simple layout manipulations can be used to combine a dummy fill template with an existing layout layer to create a filled layout layer. The geometries in the metal layer of interest, call it MX here, might first be "bloated" by the amount of the required standoff (each rectangle grown by the standoff, creating a layer MB). This layer can then be combined with a dummy template layer, DT, to generate the design dummy fill insertion DX for layer X, as DX = AND(DT, NOT(MB)). The single masking layer is then simply OR(MX, DX).

Several disadvantages of the simple rule-based approach described above are immediately apparent [247]. First, dummy pattern density is added in all empty field regions that are large enough, irrespective of whether or not an empty field region sits in a larger region that already has a higher than average pattern density. That is to say, the dummy insertion is not directly driven by a target pattern density, or by a range constraint on the effective pattern density across the chip. An alternative "model based" approach, compared
Fig. 8.2. Simulated distributions of effective copper thickness (Å): (a) without fill, and (b) with in-pattern dummy fill structures. The benefit in reduced dishing is substantially greater than the loss in copper cross-sectional area, resulting in more uniform wire resistances.
schematically to the rule-based approach in Figure 8.3, can explicitly include a wide range of pattern-dependent effects, resulting in more specialized or customized dummy features to achieve a variety of objectives.
Fig. 8.3. Conceptual comparison of rule-based and model-based dummy fill approaches. (a) Original layout, with features shaded and exclusion zones shown as dashed lines. (b) A 25% dense tiling template for a rule-based approach. (c) The result of rule-based tiling after boolean operations. (d) A possible model-based tiling result for the same layout. (Reprinted from [247], © 2000 ACM).
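The boolean recipe DX = AND(DT, NOT(MB)) described above can be prototyped on a rasterized layout, where each layer is a boolean pixel grid. The grid size, feature, standoff, and 25%-dense template below are all hypothetical; production tools work on polygons rather than pixels.

```python
import numpy as np

# Rasterized sketch of the boolean recipe DX = AND(DT, NOT(MB)), then OR(MX, DX).
W = H = 60
MX = np.zeros((H, W), dtype=bool)
MX[20:40, 10:25] = True                    # one designed metal feature

STANDOFF = 3
MB = np.zeros_like(MX)                     # MX bloated by the standoff
MB[20 - STANDOFF:40 + STANDOFF, 10 - STANDOFF:25 + STANDOFF] = True

DT = np.zeros_like(MX)                     # hypothetical 25%-dense dummy template
DT[::4, :] = True                          # every 4th row of pixels

DX = DT & ~MB                              # dummy fill allowed outside the bloat
filled = MX | DX                           # single masking layer

print("pattern density before fill:", round(MX.mean(), 3))
print("pattern density after  fill:", round(filled.mean(), 3))
```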
8.3 ALGORITHMS FOR METAL FILL

Physical understanding of the source of the pattern dependencies guides improvements to the simple dummy fill strategy described above. Consideration of additional physical and electrical effects leads to more careful and elaborate strategies and algorithms. These may either be more careful rule-based approaches, or model-based approaches which incorporate explicit models for the CMP process as a function of the layout and dummy fill.
The most fundamental issue relates to pattern density and the spatial averaging inherent in both CMP processes and in the notion of pattern density [246]. Specifically, each specific CMP process has an inherent spatial interaction distance (related to the bending of the pad in response to raised regions); this distance suggests a natural window size or spatial extent (also known as the planarization length) that should be used to calculate the effective pattern density determining the local polishing rate. The conceptual approach is to imagine a square window of this dimension, and the area fraction or pattern density of all structures within the window (equally weighted) gives the effective pattern density at the center of that window. The simplest approach is to subdivide the entire chip into non-overlapping windows of this size, and assign a single effective pattern density everywhere within the window in this fashion. A better approach is to use a moving window or overlapping windows [246], as illustrated in Figure 8.4. One can also consider other weighting functions that more closely follow the physics of the CMP process [253].
Fig. 8.4. A layout partitioning approach for calculation of pattern density in the chip. The layout is divided into small tiles of dimension r × r. Windows (or "dissections"), here with a width w equal to four tile widths, then define a region for consideration of averaged layout pattern parameters. As shown here, the windows may overlap. (Reprinted from [246], © 1999 IEEE).
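A sketch of the overlapping-window density computation of Figure 8.4 is shown below, using an equally weighted (flat) kernel; the tile densities are synthetic, and a physically based weighting function could replace the flat average.

```python
import numpy as np

# Effective pattern density via overlapping windows.  `tile_density` holds the
# per-tile layout density (r x r tiles); the effective density of a tile is the
# equally weighted average over a w x w window centered on it.
def effective_density(tile_density, w_tiles):
    padded = np.pad(tile_density, w_tiles // 2, mode="edge")
    out = np.empty_like(tile_density, dtype=float)
    for i in range(tile_density.shape[0]):
        for j in range(tile_density.shape[1]):
            out[i, j] = padded[i:i + w_tiles, j:j + w_tiles].mean()
    return out

rng = np.random.default_rng(1)
rho = rng.uniform(0.0, 0.9, size=(32, 32))      # synthetic per-tile densities
eff = effective_density(rho, w_tiles=8)
print("raw density range:      ", round(rho.max() - rho.min(), 3))
print("effective density range:", round(eff.max() - eff.min(), 3))
```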
Once the relationship between pattern density and resulting CMP uniformity is known, then the algorithmic problem reduces to producing layout
fill (or slotting existing patterns) so as to minimize some property, such as the pattern density range across the chip, subject to appropriate constraints. Here we provide a brief and partial overview of contributions to the development of the metal fill area from different perspectives. Since the late 1990s, multiple algorithmic concerns have been addressed, including computational efficiency, CMP model accuracy, chip uniformity, mask fabrication cost based on the number of inserted features, and other issues. The definition of the fill problem was clearly set out in [246], including a number of different possible objective functions, such as elimination of extremal pattern density regions, or minimizing the range or variation in pattern density. Additional pattern dependencies, including perimeter or feature size effects, have also been considered, beginning with early explorations of dummy fill [245]. A number of overviews of dummy fill and its place in the larger context of DFM have appeared [250], [259], [272], [277]. The fundamental algorithmic formulations for dummy fill, from a CAD perspective, have been discussed in [249], [255], [257].

The electrical implications of dummy fill, and approaches that seek to minimize these effects, have also received substantial attention. Feature-scale and full-chip RC extraction issues introduced by the presence of dummy fill are studied in [251] to derive design rules to guide dummy fill. The electrical effects due to dishing were studied in the context of on-chip interconnect optimization in [266], and design strategies, such as splitting large wires to minimize dishing, were derived. Concerns about capacitance extraction when large numbers of dummy fill features are present are addressed in [267], and dummy filling methods that reduce interconnect capacitance, as well as the number of fill features, are proposed in [269]. The impact of floating fill on interconnect capacitance has been studied in [276]. From a CAD perspective, the performance concern has typically been a delay or delay variation penalty, rather than explicitly the added capacitance. Dummy insertion techniques with delay performance objectives and constraints have been explored in [261], [275], [264]. An alternative approach using "wire swizzling" seeks to reduce delay uncertainty due to capacitive coupling [268]. A complementary perspective, studied in [273], is the design of IC interconnects (sizing, spacing, buffer insertion) using accurate modeling of CMP, rather than simply post-layout dummy fill. Global routing using accurate CMP models has been studied in [274]. Consideration of the dummy fill problem, including dummy feature design from the process point of view, has also appeared [265].

As previously discussed, uniformity or flatness of the post-CMP surface is a key driver for the insertion of dummy fill. While often stated as a goal in itself, limited topography is clearly understood as a key requirement for subsequent photolithography steps. While dummy fill can reduce this topography, at present it cannot eliminate the remaining post-CMP topography. An opportunity exists, however, to couple information about the remaining topography across the chip into optical proximity correction that adjusts for the change in focus due to regional height differences on the chip. For example, wafer
topography-aware OPC can be developed to achieve better depth of focus and critical dimension control, based on topography prediction from CMP models [271]. Concern about mask cost, and the size of design files, also arises from the insertion of dummy fill or slotting structures. In dummy fill styles where small structures are inserted, an explosion in the number of rectangles or objects in the design can occur. The aim of reducing the data volume can be incorporated as an objective in the dummy fill generation algorithm [262]. The same problem can be addressed through the use of compression algorithms, to reduce the demands of dummy fill on VLSI layout data [260]. Recent work has also considered dummy fill and slotting as a means to improve the reliability of copper/low-k interconnect structures. In particular, the insertion of dummy structures can strengthen the low-k dielectric, and help reduce stresses and defect generation (cracking, peeling) during CMP [252].
8.4 DUMMY FILL FOR STI CMP AND OTHER PROCESSES

Any fabrication process that is subject to variation arising from pattern density effects is a candidate for dummy fill layout modifications. While we have focused on metal interconnect in the discussion above, other back end and front end processes are increasingly being improved with the use of dummy fill. These include the shallow trench isolation (STI) process used early in transistor formation in the front end, as well as dummy fill to improve plasma etch uniformity in both trench etch and gate etch, among other steps. Copper plating is also subject to such pattern dependencies, so that dummy fill to improve plating, or to jointly optimize the plating and CMP process, is also possible [278]. Another approach is to consider dummy fill in order to enable alternative planarization technologies, such as electropolishing rather than CMP [263].

Shallow trench isolation is an interesting case, in that the physical process includes interactions and complexities that afford additional optimization opportunities for dummy fill design [280], [254]. In particular, the STI process has twin pattern dependencies, in both the deposition process (typically done using a high density plasma CVD process) and in the CMP process [279]. The HDP-CVD deposition results in a small lateral "shrinkage" (a sloping sidewall) of the raised oxide over the nitride active areas, so that the pattern density of the resulting raised oxide over the nitride depends on both the layout pattern density and the size of the active region. During the CMP process, the oxide removal phase is determined by the raised oxide pattern density, while the overpolish stage (which determines the dishing and erosion of the STI structures) is strongly influenced by the underlying nitride pattern density. Thus, careful design of the dummy fill pattern provides the opportunity to improve both the oxide and nitride pattern densities, and overall STI uniformity [281].
8.5 SUMMARY

The variation in interconnect geometries related to pattern dependencies can be quite severe. Most of these dependencies are highly systematic and can therefore be modeled and predicted with high accuracy. The knowledge of systematic behavior also provides a way to mitigate the pattern dependencies by making the layout more "regular" through the techniques of metal filling and dummy feature insertion. The insertion of additional layout features has important implications for the electrical and timing properties of the integrated circuit. This chapter reviewed the process of metal fill and the computer-aided design tools that use accurate CMP models to produce optimal layouts that satisfy the various electrical and performance constraints.
9 STATISTICAL CIRCUIT ANALYSIS
There are three kinds of lies: lies, damned lies, and statistics.
Benjamin Disraeli
In the context of design for manufacturability, there is often confusion between circuit simulation, using tools such as the venerable SPICE [76], and circuit analysis, which uses circuit simulation as a core and necessary component. The confusion arises from the complex nature of the variability analysis task, and because of a lack of separation between analysis algorithms and simulation algorithms; for example, many modern versions of SPICE implement Monte-Carlo analysis as an integral part of the simulator. In this chapter we review circuit simulation, explain the procedures through which the basic parameters defining the circuit are derived from observed data, and present two strategies for statistical circuit analysis.
9.1 CIRCUIT PARAMETERIZATION AND SIMULATION

Statistical circuit analysis requires both a characterization of the variability in device behavior, and the ability to translate device variability to circuit performance variability, i.e. circuit simulation. We will examine these two topics, starting with circuit simulation, below.

9.1.1 Introduction to Circuit Simulation

Circuit simulation is arguably one of the earliest computer-aided design fields and emerged because of the impossibility of prototyping an integrated circuit, and thus the need for a predictive tool that will aid the designer in determining the performance of a paper design. Statistical circuit analysis represents a set
of techniques that leverage circuit simulation and an understanding of the inherent variability in the elements that constitute an integrated circuit in order to predict performance variations. We start with a necessarily brief look at circuit simulation.
Consider the simple circuit illustrated in Figure 9.1. The circuit has two nodes, a and b, and four elements, a DC voltage source of value V, two resistors R1 and R2, and a diode D.

Fig. 9.1. A simple DC non-linear circuit
Solving for the DC state of such a circuit is done by applying Kirchhoff's current and voltage laws (KCL and KVL). For example, writing KCL at node b results in:

$$\frac{V_b - V_a}{R_1} + \frac{V_b}{R_2} + f_D(V_b) = 0 \qquad (9.1)$$

where $f_D$ is a function that determines the current in the diode D as a function of the voltage across the diode. By inspection, we also have:

$$V_a = V \qquad (9.2)$$
This leaves us with a single non-linear equation for $V_b$, which can be solved using Newton's method, which works as follows. Say we are given a non-linear equation of the form:

$$f(x) = 0 \qquad (9.3)$$
Then Newton's method uses the following iteration to find a solution:

$$x_{i+1} = x_i - \frac{f(x_i)}{\left.\dfrac{\partial f(x)}{\partial x}\right|_{x_i}} \qquad (9.4)$$
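To make the iteration of Eq. 9.4 concrete, the following minimal sketch applies it to the nodal equation of Eq. 9.1 for the circuit of Figure 9.1. It is only an illustration: the element values, the diode saturation current, and the convergence tolerance are assumptions chosen for this example, not values taken from the text.

import math

# Assumed element values for the circuit of Fig. 9.1 (illustration only).
V, R1, R2 = 5.0, 1e3, 2e3          # source voltage and the two resistors
I_S, V_T = 1e-14, 0.025            # assumed diode saturation current and thermal voltage

def f(vb):
    # KCL at node b (Eq. 9.1): (Vb - Va)/R1 + Vb/R2 + fD(Vb) = 0, with Va = V (Eq. 9.2).
    return (vb - V) / R1 + vb / R2 + I_S * (math.exp(vb / V_T) - 1.0)

def dfdv(vb):
    # Derivative of f with respect to Vb (the 1x1 Jacobian).
    return 1.0 / R1 + 1.0 / R2 + (I_S / V_T) * math.exp(vb / V_T)

vb = 0.6                           # initial guess
for i in range(50):                # the Newton iteration of Eq. 9.4
    residual = f(vb)
    if abs(residual) < 1e-12:
        break
    vb = vb - residual / dfdv(vb)

print(f"Vb = {vb:.4f} V after {i} iterations")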
In Eq. 9.4, the subscript i denotes the iteration number, and the iterations terminate when f(x) is suitably small. Note that Newton's method requires the computation of both the function and its derivative with respect to the unknown x being solved for. In the generalization of Newton's method to a larger number of unknowns (which we will denote by M), the function value is a vector of dimension M and the derivative is an M × M matrix which is called the Jacobian of the function. The mechanization of forming Eq. 9.4 for an arbitrary circuit is explained in detail in works that focus on circuit simulation such as [77]. For our purposes, it suffices to know that the simulation of the circuit requires the computation of the current through every linear and non-linear device, and of the derivatives of that current with respect to the terminal voltages of the device. In addition, for transient (time domain) analysis, we also require the computation of the charge stored in the device, and the derivative of that charge with respect to the terminal voltages1. The equations that describe the current and charge of a device as a function of its terminal voltages are referred to as device models and we will discuss these in the next section.

9.1.2 MOSFET Devices and Models

The metal-oxide-silicon field effect device (MOSFET) is the workhorse of the semiconductor industry. At its simplest, it can be viewed as a variable resistor connected between two nodes (the source and the drain) with a control node (the gate) modulating the resistance between the source and the drain. A cross-section view of an N-channel MOSFET is illustrated in Figure 9.2. Viewing the gate, the gate oxide, and the channel (the area between the source and the drain) as a capacitor, we see that when the gate node is set to a positive potential, it will cause the channel to have a negative charge, i.e. electrons will collect on the silicon surface along the channel to balance the applied voltage. In that situation, if a voltage difference is applied between the source and the drain, it will cause the electrons to move, creating a current between the source and the drain. As the voltage on the gate is changed, the number of carriers in the channel is changed, thereby changing the current.
To illustrate how such a MOSFET operates as a variable resistor, we plot in Figure 9.3 the source-drain resistance of a 130nm N-channel MOSFET device as a function of the gate-to-source voltage VGS and of the drain-to-source voltage VDS. We observe that for small values of VGS, the resistance is large and little current flows through the device. As the gate voltage is increased, the resistance decreases until it reaches a plateau. At this point the device is turned on and conducting current. Note from the figure that the gate voltage plays a large part in determining the conductance, while the drain-to-source voltage VDS has a less significant influence. The reason for looking at the device in this somewhat non-traditional manner is to have the reader recognize that modern
Recall that the derivative of charge with respect to voltage is capacitance.
Fig. 9.2. Cross-section of an N-channel MOSFET.
short channel MOSFETs look less like the ideal current sources predicted by a typical first order analysis as would be found in an introductory VLSI book, and much more like variable resistors.
Fig. 9.3. Resistance vs. gate-to-source voltage for different drain-to-source voltages.
The behavior of a MOSFET for use in circuit simulation is defined by so-called Device Models. A device model is a representation of device behavior suitable for use in a circuit simulator, or for hand calculation. The most useful form for such a model is to express the current and charge of a device in terms of terminal voltages. Given the pervasive use of CMOS technology, MOSFET
device models have a long and rich history which is far beyond the scope of this work. An overview of such models can be found in [78]. In the examples which we will show later, we will use the SPICE Level 3 model [79], a simple semi-empirical MOSFET model, because it has a small number of parameters and relatively modest characterization needs. The full set of equations defining the device current is quite large and complex; here we show a simplified version where we removed the impact of the body node (sometimes also called the back-gate bias), and simplified the dependence on device dimensions. When this simplification is performed, the equations defining the device current $I_{DS}$ become:

$$I_{DS} = \beta\left(V_{GS} - V_{T0} - \frac{1 + F_B}{2}V_X\right)V_X \qquad (9.5)$$

where $F_B$ depends on device dimensions and back gate bias, and:

$$\beta = \frac{W}{L}\,\mu_{eff}\,C_{OX} \qquad (9.6)$$

$$\mu_{eff} = \frac{\mathrm{UO}}{1 + \mathrm{THETA}\,(V_{GS} - V_{T0}) + \dfrac{\mathrm{UO}\,V_{DS}}{\mathrm{VMAX}\,L}} \qquad (9.7)$$

$$V_X = \min(V_{DS}, V_{SAT}) \qquad (9.8)$$

$$V_{SAT} = V_a + V_b - \sqrt{V_a^2 + V_b^2} \qquad (9.9)$$

$$V_a = \frac{V_{GS} - V_{T0}}{1 + F_B} \qquad (9.10)$$

$$V_b = \frac{\mathrm{VMAX}\,L}{\mu_{eff}} \qquad (9.11)$$

$$V_{T0} = \mathrm{VTH} - \sigma V_{DS} \qquad (9.12)$$

$$\sigma = \mathrm{ETA}\,\frac{8.15\times 10^{-22}}{C_{OX}\,L^3} \qquad (9.13)$$

And finally:

$$C_{OX} = \frac{\epsilon_{OX}}{\mathrm{TOX}} \qquad (9.14)$$
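The simplified Level 3 equations above map directly into code. The following sketch evaluates Eqs. 9.5 through 9.14 for a single bias point; it is only an illustration of the equation structure. The oxide permittivity, the parameter values, the device dimensions, and the use of SI units throughout are assumptions, and the body-effect factor F_B is simply set to zero here.

import math

EPS_OX = 3.45e-11   # assumed SiO2 permittivity in F/m

def ids_level3(vgs, vds, W, L, VTH, UO, VMAX, THETA, ETA, FB=0.0, TOX=2e-9):
    # Simplified SPICE Level 3 drain current, Eqs. 9.5-9.14 (body node ignored).
    cox = EPS_OX / TOX                                      # Eq. 9.14
    sigma = ETA * 8.15e-22 / (cox * L**3)                   # Eq. 9.13
    vt0 = VTH - sigma * vds                                 # Eq. 9.12
    mu_eff = UO / (1.0 + THETA * (vgs - vt0)
                   + UO * vds / (VMAX * L))                 # Eq. 9.7
    va = (vgs - vt0) / (1.0 + FB)                           # Eq. 9.10
    vb = VMAX * L / mu_eff                                  # Eq. 9.11
    vsat = va + vb - math.sqrt(va**2 + vb**2)               # Eq. 9.9
    vx = min(vds, vsat)                                     # Eq. 9.8
    beta = (W / L) * mu_eff * cox                           # Eq. 9.6
    return beta * (vgs - vt0 - 0.5 * (1.0 + FB) * vx) * vx  # Eq. 9.5

# One assumed bias point (all quantities in SI units, placeholder parameter values).
print(ids_level3(vgs=1.0, vds=1.0, W=1e-6, L=65e-9,
                 VTH=0.36, UO=0.05, VMAX=1e5, THETA=0.89, ETA=0.0015))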
9.1.3 MOSFET Device Characterization

Device characterization is the procedure through which the parameters associated with a device model are determined in order to reproduce the behavior of a specific observed device. This task, referred to as the parameter extraction problem [80], is most commonly cast as a non-linear least squares optimization:

$$\min_{P}\;\|I - f(V, P)\| \qquad (9.15)$$
where I is a vector of observed currents, V is a matrix of applied voltages at which I was observed, P is a vector of device model parameters (e.g. VTH,
and $\mu_{eff}$ in the SPICE Level 3 model used above), and the function f() represents the device model equations. The minimization may be done empirically, by trial and error, or via the use of standard numerical analysis techniques [81].
Figure 9.4 shows the result of fitting the SPICE Level-3 model (i.e. solving Eq. 9.15) to the measured characteristics of a 65nm NMOS device. For this fit, the parameters generated were as follows:

.model nenh nmos level=3
+vto=0.363161 gamma=0.3 phi=0.3 rd=0 rs=0 cbd=0 cbs=0 is=1e-14
+pb=0.8 cgso=0 cgdo=0 cgbo=0 cj=0 mj=0.7 cjsw=0 mjsw=0.3 js=0
+tox=2e-09 nsub=7e+16 nss=1e+10 nfs=5e+12 tpg=1 xj=1e-09
+ld=1e-09 uo=515.619 vmax=1e+07 kf=0 af=1 fc=0 delta=0
+theta=0.891496 eta=0.00154978 kappa=0

Where parameters underlined are those that were extracted.
Fig. 9.4. Measured (line) and fitted (points) characteristics of a 65nm NMOS device.
In general, the minimization of Eq. 9.15 suffers from a problem which is common for large scale optimization, namely the uniqueness of the optimum. This is especially important when one or more of the following conditions hold: (1) The number of parameters, i.e. the dimension of the P vector, is large, with many parameters having similar effects on the model. (2) The model is empirical in nature, in which case there is poor physical basis to limit the
parameter values to a specific domain. (3) The model is not complete in that it may fit one region well but other regions poorly.
To illustrate this problem, we consider the behavior of the SPICE Level-3 model for small values of $V_{DS}$. In this case, the equation for the drain current simplifies to:

$$I_{DS} = \frac{W\,\epsilon_{OX}\,\mathrm{UO}\,(V_{GS} - \mathrm{VTH})^2}{L\,\mathrm{TOX}\,\left(1 + \mathrm{THETA}\,(V_{GS} - \mathrm{VTH})\right)} \qquad (9.16)$$
We will use Eq. 9.16 to calculate the sum of squares error between this model and the measured data in Fig. 9.4 for $V_{DS} = 50\,\mathrm{mV}$. This sum of squares E for the N measured points would be written as:

$$E = \sum_{i=1}^{N}\left(I_{DS}^{equation} - I_{DS}^{measured}\right)^2 \qquad (9.17)$$
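As an illustration of Eqs. 9.16 and 9.17, the short sketch below evaluates the sum-of-squares error E over a grid of (VTH, THETA) values, which is essentially the sweep described next and plotted in Fig. 9.5. The fixed constants and the "measured" currents are synthetic placeholders, so only the structure of the computation, not the numbers, should be taken from it.

import numpy as np

# Assumed constants and synthetic measured data (illustration only).
W, L, UO, EPS_OX, TOX = 1e-6, 65e-9, 0.05, 3.45e-11, 2e-9
vgs = np.linspace(0.45, 0.7, 20)        # gate voltages at VDS = 50 mV

def ids_eq916(vgs, vth, theta):
    # Low-VDS drain current of Eq. 9.16.
    return (W * EPS_OX * UO * (vgs - vth)**2) / (L * TOX * (1.0 + theta * (vgs - vth)))

i_meas = ids_eq916(vgs, 0.36, 0.89)     # stand-in for measured currents

# Sum-of-squares error E of Eq. 9.17 on a (VTH, THETA) grid.
vth_grid = np.linspace(0.30, 0.42, 61)
theta_grid = np.linspace(0.6, 1.2, 61)
E = np.array([[np.sum((ids_eq916(vgs, vth, theta) - i_meas)**2)
               for theta in theta_grid] for vth in vth_grid])

i_min, j_min = np.unravel_index(np.argmin(E), E.shape)
print("minimum at VTH =", vth_grid[i_min], " THETA =", theta_grid[j_min])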
We will set W, L, UO and TOX to the values corresponding to the optimum plotted in Fig. 9.4, but we will vary the threshold voltage VTH and the mobility degradation parameter THETA over a domain of values close to, but not at, the optimum. Fig. 9.5 shows the behavior of the sum of squares error E vs. VTH and THETA, and it is clear that the function is not very smooth and that there are many potential local minima that might trap traditional Newton-based minimization schemes [82]. While there exist a number of optimization methods that attempt to find global minima of functions [83], these tend to be quite slow in convergence and often require extensive user guidance.
Thus the state of the art in parameter extraction for device model characterization relies on a deep understanding of the model to help guide the minimization process. Several strategies are effective. (1) It is useful to fix parameters which are known or expected to be nearly constant. For example, the gate oxide thickness TOX is usually known with high accuracy. (2) It helps to limit the ranges of parameters to physically relevant ranges. For example, the low-field mobility UO is expected to be in the vicinity of 500 or so. (3) Finally, it is useful to start by extracting important first order parameters (e.g. the threshold voltage VTH) first, while fixing unimportant or second order parameters. Then, reversing the procedure, the second order parameters can be found. All of these empirical methods contribute to making the task of device model characterization a difficult one, resistant to automation, and requiring close expert interaction. With the ever increasing complexity of models, this unfortunate trend is likely to continue.

9.1.4 Statistical Device Characterization

The previous section showed how a single device model can be characterized from observed data. In order to model manufacturing variability, however, we need to develop a statistical device model which predicts a whole range of
Fig. 9.5. Least-squares error in Eq. 9.16 vs. THETA varying from 1.3 to 1.4 and VTH varying from 0.7 to 0.8.
device behaviors [84]. The characterization of such a model would obviously require measurements from a statistically significant number of devices. In fact, if we needed a variability model which includes the various components of spatial variability (within-die, within-reticle, within-wafer, within-lot, etc.) then we would require a corresponding distribution of observed devices. A spatial model of the type above extends the core device model by expressing each of the device model parameters as a nested or additive model. A nested model might express a model parameter p as:

$$p = \phi(\phi(\phi(\phi(\mu, \sigma_{lot}), \sigma_{wafer}), \sigma_{reticle}), \sigma_{die}) \qquad (9.18)$$

where $\phi(\mu, \sigma)$ denotes a random variable with mean $\mu$ and standard deviation $\sigma$. An additive model would express the same model parameter p as:

$$p = \phi(\mu_{lot}, \sigma_{lot}) + \phi(\mu_{wafer}, \sigma_{wafer}) + \phi(\mu_{reticle}, \sigma_{reticle}) + \phi(\mu_{die}, \sigma_{die}) \qquad (9.19)$$

A number of such models have been proposed in recent years (see for example [85] and [86]). But regardless of the model used, the core statistical characterization task can be defined as the generation of a joint probability density function describing the variability at a specific spatial level of abstraction. Layered upon this core task is the distribution of variability across the various spatial levels, which is described in [85] and [86].
Let us turn our attention to the core statistical characterization task. In the previous section we outlined a number of problems that can occur with
nominal device characterization, such as multiple minima, unrealistic values of parameters, and so on. These problems become far more serious for statistical characterization since they play havoc with the resulting parameter statistics, and make it difficult to separate out genuine variability from noise introduced in the extraction process itself.
To illustrate how such problems can occur, we performed two parameter extraction experiments. Both experiments use the same data set consisting of the current/voltage characteristics of 1600 65nm N-channel devices. The devices are measured from a single test structure and thus reflect the within-die component of variability. The procedure used for both experiments was as follows. For each of the 1600 characteristics, we first initialize the parameters to a known initial point, then solve the least squares problem of Eq. 9.15 and store the resulting parameters P. For the first experiment, Table 9.1 shows the parameters extracted, the initial value of each parameter, and the range (i.e. minimum and maximum allowed values) applied on the parameters2:

Table 9.1. Parameters extracted in the first statistical characterization experiment.
Parameter | Starting Value | Lower Bound | Upper Bound
VTH       | 0.378          | 0.1         | 1.0
UO        | 460            | 100         | 1000
VMAX      | 10^4           | 10^4        | 10^10
THETA     | 1              | 0.01        | 4
ETA       | 0.001          | 0           | 1
The result of the first parameter extraction is shown in Figure 9.6, which shows a pair plot of the 5 parameters. The figure shows that the resulting final values of the parameters are ill-behaved, with many parameters at the lower or upper bounds of their extraction ranges. This can be viewed as an indication that the parameter extraction problem is ill posed, resulting in noisy parameters that do not correctly reflect the underlying variability expected.
For the second experiment, we retained the same structure except that the VMAX parameter was fixed at a value of $10^7$. The result of the second parameter extraction is shown in Figure 9.7, which again shows a pair plot of the 4 parameters extracted. We now see that the resulting parameters are well behaved, showing strong correlation in some cases, and giving us confidence that they are a faithful reflection of the original variability present in the measured data.
There is currently no body of work on statistical characterization that researchers in the DFM area can look to for quantitative guidance on the
These bounds are applied in order to insure convergence of the non-linear least-squares minimization algorithm.
Fig. 9.6. Results of the first statistical characterization experiment where all 5 parameters were allowed to vary; note that many of the parameters are at their extremes!
Fig. 9.7. Results of the second statistical characterization experiment, where VMAX was held constant at $10^7$.
quality of a statistical model. The two examples shown above are an indication of the sort of problems one can run into, and the sort of results one may expect when things go well. The authors hope that this will be an area of fertile research in the near future, and that more exact techniques can be brought to bear to insure the quality of variability models.

9.1.5 Principal Component Analysis

It is rare that the sources of process variation are directly represented in the parameters of a device model. This is especially true for complex models including large numbers of empirical or semi-empirical parameters which have only a tenuous connection to the physical properties (e.g. dimensions, dopant concentration, etc.) of the device. Even when it may appear that a device parameter is indeed physical in origin, such as the gate oxide thickness TOX in Eq. 9.14, the actual value used in the model can be different in order to help tune
the model better3. Because of this fact, the device parameters end up representing multiple sources of variation, and are therefore statistically correlated because of this common dependence. Because it is far easier to deal with uncorrelated variables, since one can sample them completely independently, it is desirable to transform the correlated device parameters into a set of uncorrelated factors.
Principal Component Analysis (abbreviated PCA) is a formal statistical technique that performs exactly this task [87]. Given N samples from a set of correlated random variables P, PCA seeks a linear transformation of these variables into a new set of random variables Q which are maximally orthogonal, i.e. have zero correlation. There are a variety of techniques for accomplishing this task. Two considerations are essential: the sample size and the linearity. First of all, it is important to realize that the procedure requires a sample of the correlated variables in order to develop the linear transformation. Thus the size of the sample is critical to ensuring that the transformation is meaningful. Similarly, the breadth of the sample in terms of how much of the original parameter space it represents is also crucial to insuring correct results. Secondly, since the procedure results in a linear transformation, it is well suited to handling generally linear relationships amongst the input (correlated) variables. In cases where the relationships between the input variables are severely non-linear, PCA may not provide correct results.
We demonstrate the application of PCA to the set of device parameters produced in Section 9.1.4. A commonly applied version of PCA analysis begins by forming the correlation matrix amongst the input variables, which is shown in Table 9.2.

Table 9.2. Correlation matrix for the statistical MOSFET parameters extracted in Section 9.1.4.
      | VTH   | UO    | THETA | ETA
VTH   | 1.0   | 0.363 | 0.254 | 0.191
UO    | 0.363 | 1.0   | 0.816 | 0.050
THETA | 0.254 | 0.816 | 1.0   | 0.106
ETA   | 0.191 | 0.050 | 0.106 | 1.0
An eigenvalue decomposition [81] of the correlation matrix results in four eigenvalues ($\epsilon_1 \ldots \epsilon_4$) and four corresponding eigenvectors ($\nu_1 \ldots \nu_4$). The eigenvalues are sorted from large to small, and normalized by their sum into a percentage, i.e.

$$\epsilon_j \leftarrow 100\,\frac{\epsilon_j}{\sum_{i=1\ldots 4}\epsilon_i} \qquad (9.20)$$
For gate oxide thickness, as an example, the value used in the model might be referred to as effective or electrical thickness, to distinguish it from the physical thickness.
With this normalization we interpret each of the normalized eigenvalues as the amount of overall variability that its corresponding eigenvector explains. In our case, the raw and normalized eigenvalues are shown in Table 9.3. Using the table, we see that if we use the top three eigenvalues/vectors, we explain 95.72% of the overall observed variation. This illustrates a major benefit of PCA, dimension reduction! By reducing the number of variables4, the complexity of many downstream analysis techniques, such as Worst-Case Analysis which we will discuss next, is considerably reduced.

Table 9.3. Raw and normalized eigenvalues.
Index | Raw    | Normalized
1     | 2.0365 | 50.91%
2     | 1.0422 | 26.06%
3     | 0.7503 | 18.76%
4     | 0.1710 | 4.28%
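The relationship between Table 9.2 and Table 9.3 is a standard eigenvalue decomposition, and the short sketch below reproduces the mechanics of that computation using the correlation matrix of Table 9.2 as input; the printed values should come out close to those of Table 9.3 (small differences are possible because the correlation matrix is quoted to three digits).

import numpy as np

# Correlation matrix of Table 9.2 (order: VTH, UO, THETA, ETA).
R = np.array([[1.000, 0.363, 0.254, 0.191],
              [0.363, 1.000, 0.816, 0.050],
              [0.254, 0.816, 1.000, 0.106],
              [0.191, 0.050, 0.106, 1.000]])

eigvals, eigvecs = np.linalg.eigh(R)      # eigh is appropriate for a symmetric matrix
order = np.argsort(eigvals)[::-1]         # sort from largest to smallest
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

normalized = 100.0 * eigvals / eigvals.sum()   # Eq. 9.20
for k in range(4):
    print(f"{k + 1}: raw = {eigvals[k]:.4f}   normalized = {normalized[k]:.2f}%")
print(f"top three factors explain {normalized[:3].sum():.2f}% of the variation")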
Now that we have decided on the number of independent factors we will consider, PCA will define a transformation from these factors back to our original space. In our case the transformation is computed to be:

$$\begin{bmatrix} \mathrm{VTH} \\ \mathrm{UO} \\ \mathrm{THETA} \\ \mathrm{ETA} \end{bmatrix} = \begin{bmatrix} \mu_{\mathrm{VTH}} \\ \mu_{\mathrm{UO}} \\ \mu_{\mathrm{THETA}} \\ \mu_{\mathrm{ETA}} \end{bmatrix} + \begin{bmatrix} \sigma_{\mathrm{VTH}} \\ \sigma_{\mathrm{UO}} \\ \sigma_{\mathrm{THETA}} \\ \sigma_{\mathrm{ETA}} \end{bmatrix} \odot \left( \begin{bmatrix} 0.410 & 0.384 & 0.818 \\ 0.643 & -0.242 & -0.099 \\ 0.624 & -0.236 & -0.305 \\ 0.170 & 0.860 & -0.477 \end{bmatrix} \begin{bmatrix} \delta_0 \\ \delta_1 \\ \delta_2 \end{bmatrix} \right) \qquad (9.21)$$

where $\odot$ denotes element-by-element multiplication. The variables $\delta_i$ represent zero-mean unit-variance independent normally distributed random variables which form the new uncorrelated space from which we can generate samples of the device parameters. Looking at the values in the transformation matrix, we observe that $\delta_0$ is dominated by UO and THETA, which were correlated as we saw in Figure 9.7. We also see that $\delta_1$ is dominated by ETA and $\delta_2$ by VTH. This provides an interesting opportunity to generate a PCA transformation which, while being less optimal from the mathematical point of view, may be a better match to common engineering understanding, and therefore more likely to be used by a practitioner. Such a transformation, which is basically an adjustment of Eq. 9.21, might be written as:

$$\begin{bmatrix} \mathrm{VTH} \\ \mathrm{UO} \\ \mathrm{THETA} \\ \mathrm{ETA} \end{bmatrix} = \begin{bmatrix} \mu_{\mathrm{VTH}} \\ \mu_{\mathrm{UO}} \\ \mu_{\mathrm{THETA}} \\ \mu_{\mathrm{ETA}} \end{bmatrix} + \begin{bmatrix} \sigma_{\mathrm{VTH}} \\ \sigma_{\mathrm{UO}} \\ \sigma_{\mathrm{THETA}} \\ \sigma_{\mathrm{ETA}} \end{bmatrix} \odot \left( \begin{bmatrix} 0.0 & 0.0 & 1.0 \\ 0.5 & 0.0 & 0.0 \\ 0.5 & 0.0 & 0.0 \\ 0.0 & 1.0 & 0.0 \end{bmatrix} \begin{bmatrix} \delta_0 \\ \delta_1 \\ \delta_2 \end{bmatrix} \right) \qquad (9.22)$$
For our toy example, a reduction of 1 parameter may not seem too impressive; but in larger more realistic scenarios, it is not uncommon for PCA to reduce the dimension of the parameter space by factors of 2, 3 or more.
which forces full correlation between THETA and UO while making the remaining two parameters independent. The main advantage of such an approach is that the relationship between the device parameters and the independent variables is now clearer and suitable for hand or analytical work.
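Either transformation can be applied mechanically to generate statistically consistent parameter sets from independent normal samples. The sketch below does this for the rotation matrix of Eq. 9.21; the means are taken from the nominal model card shown earlier, while the standard deviations are placeholders, since they are not quoted in the text.

import numpy as np

rng = np.random.default_rng(0)

# Rotation matrix of Eq. 9.21; rows ordered VTH, UO, THETA, ETA.
A = np.array([[0.410,  0.384,  0.818],
              [0.643, -0.242, -0.099],
              [0.624, -0.236, -0.305],
              [0.170,  0.860, -0.477]])

mu    = np.array([0.363, 515.6, 0.891, 0.00155])   # nominal values from the model card
sigma = np.array([0.010, 20.0,  0.050, 0.00020])   # assumed standard deviations

def sample_parameters(n):
    # Draw n correlated (VTH, UO, THETA, ETA) sets from independent delta factors.
    delta = rng.standard_normal((n, 3))            # zero-mean, unit-variance deltas
    return mu + sigma * (delta @ A.T)              # Eq. 9.21 applied row-wise

print(sample_parameters(3))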
9.2 WORST CASE ANALYSIS

Worst-case or corner analysis is the most commonly used technique for dealing with manufacturing process fluctuations at the circuit level. The technique was initially applied to discrete and hybrid circuits (see for example [88]) where a tolerance was associated with each discrete component in the parameter space, and the values of the discrete components were assumed to be independent. The purpose of worst-case analysis is to identify the unique point in parameter space which corresponds to the worst circuit performance. Since the tolerances (e.g. ±10% on a resistor value) are finite and bounded, such a problem is well posed.
Fig. 9.8. Example of worst-case analysis.
To illustrate, consider the simple voltage divider circuit implemented using discrete resistors shown in Fig. 9.8. Since the two discrete resistors R1 and R2 have tolerances associated with them, they are modeled as arbitrarily distributed random variables that together define a box in parameter space. If the performance associated with the circuit is the output voltage Vout , then worst-case analysis will identify the point (or points) in parameter space, which cause the performance to take on its extreme (minimum or maximum) values. The two corner points marked in the figure correspond to the absolute maximum and minimum output voltages, and are thus the worst-cases for the circuit.
9.2.1 Worst Case Analysis for Unbounded Parameters

As we saw previously in the context of device nominal and statistical characterization, device parameters for integrated circuits are not generally bounded or truncated, as were the values of the resistance in Figure 9.8. While some process parameters may have truncated distributions due to wafers or chips being out of specification windows, this is relatively rare in a well controlled manufacturing process. While device parameters are usually highly correlated, as were the values of the device parameters extracted in Section 9.1.4, we will assume that principal component analysis will be applied in order to create an equivalent uncorrelated parameter space where we will perform the analysis. Worst case analysis with correlated parameters is treated in Chapter 5 of [89].
Given the above, it is no longer possible to define a strict extremum of circuit performance since the probability density functions that describe the device parameters extend, in theory, to ±∞. Thus in order to apply this technique, the worst performance is redefined to be that value of performance which bounds some desired percentage of performances (e.g. the 95th percentile). This definition couples two important concepts. First, in parameter space, the probability density function describing the N dimensional uncorrelated principal components results in spherical equi-probability contours. Second, circuit performance is, obviously, a function of the device parameters. Assuming that the dependence is monotonic, the worst case parameters can be estimated by finding the points on the equi-probability contours which have the extreme performances. Figure 9.9 shows a graphical example of how this type of worst case analysis is performed.
Fig. 9.9. Worst-case analysis with unbounded variables.
Key to the use of worst case analysis for integrated circuits is the observation that the worst case parameter set for one performance of one circuit is likely to be the same as that for the same performance of another similar
circuit. For example, the set of parameters that cause the worst case delay for a CMOS inverter are likely to be the same as those that cause the worst case delay for a CMOS NAND gate. This assumed similarity makes worst case analysis very efficient since the parameter sets can be generated once off-line, making the cost of applying the analysis to a new circuit equal to one circuit simulation.
Note, however, that in the unbounded variable case, worst-case analysis does not provide an estimate of the yield of the circuit; instead it provides the circuit designer with a single-ended test for the variability in any chosen performance measure. So if we assume that a performance z has a specification $z^{spec}$ associated with it, if the 95% worst case of z, $z^{wc}$, is better than $z^{spec}$ then we know that we have a yield of at least 95%. If $z^{wc}$ is worse than $z^{spec}$ then nothing is known about the yield.
In summary, worst case analysis for integrated circuits is based on the following basic notions. (1) Process fluctuations manifest themselves as variations in device model parameters. (2) For a representative circuit, for which the distribution of some measure or measures of performance can be determined, it is possible to determine some set of extreme device parameters, which, when simulated, cause the performance of the circuit to be at an extreme (e.g. bounding 95% of the population). (3) The same set of extreme device parameters, when used to simulate a different circuit, would result in a performance measure with similar statistical qualities (i.e. bounding 95% of the population).

9.2.2 Worst Case Analysis Algorithm

We start by expressing the performance of a circuit, denoted by the random scalar z, as a function of the uncorrelated results of principal component decomposition of extracted statistical device parameters, which we will denote by the random vector variable P, as:

$$z = f_{circ}(P) \qquad (9.23)$$
For multiple performances, the worst case analysis algorithm would be repeated as needed (recognizing that some performances may have the same worst case parameters of course). In a well-controlled manufacturing process, the variations in device parameters P are expected to be small, say of the order of 15%. Under such conditions, it is possible to approximate Eq. 9.23 by a truncated Taylor series expansion:

$$z \approx z_{nom} + \lambda^T (P - P_{nom}) \qquad (9.24)$$

where $\lambda$ can be viewed as a vector of sensitivities of z with respect to each component of P. Since $f_{circ}$ in Eq. 9.23 is usually implemented using SPICE [76], $\lambda$ must be estimated using finite differencing or linear regression.
Knowing that the parameters P are normally distributed, independent, and normalized (i.e. with a mean of zero and unit standard deviation), we recognize that $P_{nom} \equiv 0$ and that we can write the probability density function (pdf) of z as:

$$z = N\!\left(z_{nom}, \sqrt{\lambda^T\lambda}\right) \qquad (9.25)$$

Without loss of generality, we assume that increasing z corresponds to making it worse. Thus, given a confidence level $\rho$, the worst case value of z, $z^{wc}$, is defined as that value for which $Prob(z \leq z^{wc}) = \rho$, and can be calculated from:

$$z^{wc} = z_{nom} + \Phi^{-1}(\rho)\sqrt{\lambda^T\lambda} \qquad (9.26)$$

where $\Phi$ is the cumulative distribution function of the standard normal random variable. Once $z^{wc}$ is known, Eq. 9.24 defines a hyper-plane in P space:

$$\lambda^T P - \Phi^{-1}(\rho)\sqrt{\lambda^T\lambda} = 0 \qquad (9.27)$$

Since any combination of parameters on this hyper-plane produces $z^{wc}$, an additional condition must be added to uniquely identify $P^{wc}$. Intuitively, the choice is made such that $P^{wc}$ should be the most probable point on the hyper-plane. For a multi-variate normal pdf the most probable point is the one closest to the mean, thus $P^{wc}$ is the point on the hyper-plane closest to the origin. After some algebraic manipulation, the worst case point is found to be simply:

$$P^{wc} = \frac{\Phi^{-1}(\rho)\,\lambda}{\sqrt{\lambda^T\lambda}} \qquad (9.28)$$

The analysis above relies on two primary assumptions. The first assumption is that $f_{circ}$ is well approximated by a truncated Taylor expansion, i.e. a linear function, which further implies monotonicity between the device parameters and circuit performances. The second assumption is that the parameters P are well characterized by a multi-variate normal distribution. When either of these assumptions is violated, the analysis becomes somewhat more difficult; some solutions are presented in Chapter 5 of [89].

9.2.3 Corner-Based Algorithm

One commonly applied approximation to the worst case analysis algorithm presented above is the so-called corner based analysis [90], which applies in the case where the parameters P are uncorrelated and the monotonicity assumption holds. Strictly speaking, the phrase corner-based should only be applicable for the case of bounded variables, where the worst case does indeed fall in a corner. The phrase, however, is in common usage in this interpretation, and therefore we will use it here. In this simplification of the worst case analysis algorithm, first a crude estimate of the $\lambda$ vector is generated, and then (only) the signs of the entries in the vector are noted. A positive sign for the ith
entry implies that z increases with $P_i$ and determines the direction in which $P_i$ must be adjusted to make z worse. The second step determines a scale factor $\nu$ used to adjust all components of P such that the combined probability is $\rho$:

$$\nu = \Phi^{-1}\!\left(1 - (1 - \rho)^{\frac{1}{n}}\right) \qquad (9.29)$$
from which the worst case set of parameters, which we will denote by $P^{cn}$ to distinguish it from the parameters estimated using the exact method above, is then simply:

$$P^{cn} = \nu\,\frac{\mathrm{sign}(\lambda)}{\sqrt{\lambda^T\lambda}} \qquad (9.30)$$

By ignoring the values in $\lambda$ and using the sign only, this shortcut estimates the worst case to be in the corner of the hypercube defining the parameter space. This is valid if the range of values in $\lambda$ is small, which is not usually the case. The two methods are compared graphically in Figure 9.10, which shows one quadrant of parameter space and compares the worst case point derivable from Eq. 9.28 (labeled point A in the figure) and that from Eq. 9.30 (labeled point B in the figure).
Fig. 9.10. Worst case vs. Corner-based analysis.
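Both constructions are easy to evaluate once a sensitivity vector is available. The sketch below implements Eq. 9.28 and Eqs. 9.29 and 9.30 for a hypothetical λ and a confidence level ρ, following the sign convention of the text (increasing z is worse); the numerical values of λ are assumptions chosen for illustration.

import numpy as np
from scipy.stats import norm

def worst_case_point(lam, rho):
    # Most probable point on the worst-case hyper-plane, Eq. 9.28.
    lam = np.asarray(lam, dtype=float)
    return norm.ppf(rho) * lam / np.sqrt(lam @ lam)

def corner_point(lam, rho):
    # Corner-based approximation, Eqs. 9.29 and 9.30.
    lam = np.asarray(lam, dtype=float)
    nu = norm.ppf(1.0 - (1.0 - rho) ** (1.0 / lam.size))   # Eq. 9.29
    return nu * np.sign(lam) / np.sqrt(lam @ lam)          # Eq. 9.30

lam = [0.057, -0.078, -0.202]      # hypothetical sensitivities to three PCA factors
rho = 0.95
print("worst-case point  :", worst_case_point(lam, rho))
print("corner-based point:", corner_point(lam, rho))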
9.2.4 Worst Case Analysis Example

We will make use of the device parameters generated in Section 9.1.4 to illustrate worst case analysis, recognizing that the models represent only one component of manufacturing process variability, i.e. die-to-die variations of N-channel devices. In a realistic setting, many more sources of variability will typically be included.

Delay of a CMOS Inverter

For this first experiment, we will perform worst case analysis on the Fan-Out-of-Four (FO4) pair delay of a CMOS inverter, which we will denote by $D_{FO4}$. This delay is defined as the delay of a pair of inverters, each loaded by four copies of itself; the circuit used to measure the pair delay is shown in Figure 9.11, where the delay is measured between nodes a and b in the figure. Note that the pair delay includes both the rising and falling delays of the inverter since the waveform at the node between a and b is the complement. Representative waveforms obtained from SPICE for nodes a and b of the circuit in Figure 9.11 are shown in Figure 9.12. We define the delay to be the 50% crossing points for the waveforms.
Fig. 9.11. Circuit to measure FO4 pair delay.
Our first step is to build a model for the delay as a function of the device parameters. Making use of the PCA transformation defined earlier (Eq. 9.21) we perform a full factorial experiment [91] setting each of the principal components to integral values from −2 to 2. Recall that the principal components are normalized such that they have zero mean and unit sigma, so −2σ to 2σ accounts for approximately 95% of the range of each variable. We take all possible combinations for a total of $5^3 = 125$ simulations. We plot the
Fig. 9.12. Waveforms for the circuit in Figure 9.11 at nodes a and b.
resulting delay as a function of the three principal components in Figure 9.13, noting that the strongest dependence is with respect to the third PCA factor, while the dependence on the first two is less pronounced. This might appear somewhat counter-intuitive since the PCA factors are ordered by how much of the overall variation in device parameters they explain. The reason this occurs is that the PCA analysis does not take into account the sensitivity of the circuit performance to the parameters.
Fig. 9.13. FO4 pair delay vs. principal component factors.
With this data in hand, we begin the worst case analysis by performing linear regression to fit the delay to the model parameters, i.e. calculating znom and λ in Eq. 9.31. The linear model generated, which is shown in Figure 9.14,
Fig. 9.14. Modeled FO4 pair delay and residual vs. measured delay.
has a fit correlation of 0.989 but the residuals show a pronounced trend that indicates that the model is not as predictive as one might hope5. At this point, we have several options: (a) Ignore the lack of fit in the model and go ahead with the analysis. (b) Reduce the domain of the inputs from ±2σ to ±1σ and hope that the linear approximation holds better over the smaller domain. (c) Use a higher order regression model, recognizing that we will then have a more difficult task identifying the worst case parameters. (d) Find a simple transform on the performance that would allow a linear model.
We opted to go with the last option, replacing the delay with its inverse, which can be interpreted as a frequency F. It turns out that the inverse delay produces an excellent linear fit with far fewer artifacts. Figure 9.15 shows the fit, which had a correlation coefficient of 0.996; this is not much higher than the 0.989 for the delay, but the plot is much better behaved. The equation describing the fit is:

$$F = \frac{1}{Delay} = 4.2659 + 0.05726\,P_0 - 0.07751\,P_1 - 0.2022\,P_2 \qquad (9.31)$$
We want the maximum delay, which corresponds to the minimum frequency, so we choose a value of $\rho$ of 5%, corresponding to a probability value $\Phi^{-1}(\rho) \approx -1.645$. The corresponding worst case parameters computed using Eq. 9.28 are $P^{wc} \approx [-0.4205, 0.5692, 1.4850]$, and the worst case value predicted by Eq. 9.31 would be $F^{wc} = 3.8974$. Had we used the corner
Making plots such as that in Figure 9.14 is always a good idea in order to make sure that the generated model can serve its purpose.
Fig. 9.15. Modeled FO4 inverse delay and residual vs. measured values.
based algorithm, we would have computed $\nu \approx 0.336$ and the corresponding worst case parameters would have been $P^{cn} \approx [-1.5, 1.5, 1.5]$, which results in $F^{cn} = 3.7604$ and is quite far from the estimate above.
To verify the worst case analysis above, we estimated the distribution of the FO4 delay by performing a 5000 run Monte-Carlo simulation. The results of the MC simulation are illustrated in Figure 9.16, which shows a histogram as well as a quantile-quantile plot of the measured inverse delay F. From the data we estimate $\mu_F = 4.2832$ and $\sigma_F = 0.22212$. Thus the estimate of the worst (minimum) frequency would be $F^{MC} = 3.9178$. Our earlier estimate of $F^{wc} = 3.8974$ is about 0.5% in error in terms of predicted value. Alternatively, we can measure the probability associated with $F^{wc}$ given our estimates of $\mu_F$ and $\sigma_F$, and we get a cutoff probability $\rho$ of approximately 95.5%. In comparison, the corner based estimate has $\rho \approx 99\%$, so obviously one must be careful when applying this simpler construction method.

Application to other Circuits and Performances

As mentioned earlier, the efficiency of worst case analysis derives from reusing the parameters generated for one circuit for another. Historically, this has been the genesis of the so-called corner cases (e.g. fast/fast, nominal/nominal, slow/slow, and other combinations) used to represent the variability in the process. We will test this reuse principle by applying the worst case analysis methodology outlined above to several other circuits and performances. Instead of performing the full analysis, however, we will simply generate the $\lambda$
Fig. 9.16. Results of 5000 run MC analysis of inverter FO4 delay.
vector for each of these cases and compare it to the one we previously generated for FO4 delay. Since the $\lambda$ vector uniquely determines the worst case point, this is a valid and efficient method for comparison. We performed the same type of full factorial experiment as before, i.e. setting each of the three principal components to integral values from −2 to 2 for a total of 125 runs. We simulated the following circuits: (1) A CMOS inverter with a fan-out of 1 (FO1), where we measured the inverse delay as we had done for the FO4 case above. (2) A CMOS inverter loaded with a long wire and with a fan-out of 1 (FOW), where we measured the inverse delay as we had done for the FO4 case above. (3) A 3-input CMOS NAND gate, with all inputs tied together (NAND), where we measured the inverse delay as we had done for the FO4 case above. (4) The unit-gain input voltage points of a CMOS inverter (VIL and VIH).
Table 9.4 shows the $\lambda/\sqrt{\lambda^T\lambda}$ vectors generated for these circuits and performances. We see that the four delay components, as expected, have very similar sensitivities to the canonical FO4 case presented above, with other circuits showing very small differences. The two unit-gain points, however, are quite different, indicating that the worst case parameters for these important noise propagation parameters are quite different from those generated for the FO4 delay of an inverter. Since the sensitivities of different circuits and performances vary widely, the applicability of simple worst case analysis across all of them is questionable. This points to the need for more robust methods for statistical analysis, which is the subject of the next section.
Table 9.4. Sensitivity (λ) of various performances.
Performance | λ0    | λ1     | λ2
FO4         | 0.256 | -0.346 | -0.903
FO1         | 0.258 | -0.366 | -0.895
FOW         | 0.226 | -0.288 | -0.931
NAND        | 0.253 | -0.495 | -0.831
VIH         | 0.120 | -0.046 |  0.992
VIL         | 0.223 |  0.445 |  0.867
9.3 STATISTICAL CIRCUIT ANALYSIS

Worst case analysis is one of a number of techniques available for the analysis of variability in circuits. Its efficiency is the main reason that it finds widespread application, but the requirement that the circuit under analysis be somehow similar to the canonical circuit used to generate the worst case parameters limits the applicability of worst case analysis. In cases where we have a circuit whose sensitivities to manufacturing variations are not known, the computational cost of performing circuit-specific worst case analysis becomes higher than other techniques. In this section we will explore a number of these alternative techniques, showing examples of their application and comparing them in terms of accuracy and computational cost. In order to put the various techniques on a common footing, we will use the same circuit example for all of them. The example is that of a CMOS Static Random Access Memory (SRAM), which is known to be highly sensitive to within-die variations [92], and whose yield and performance are of paramount importance to modern designs.

9.3.1 A Brief SRAM Tutorial

Here we will give a necessarily brief and quite incomplete review of SRAM operation in order to prepare the reader for the statistical circuit analysis examples we will present in the next sections. For more information on SRAM and its detailed implementation and operation, we refer the reader to [93]. An SRAM is made up of a two dimensional array of cells which share vertical bit lines and horizontal word lines. The SRAM is addressed one horizontal row (word) at a time. A single cell is depicted in Figure 9.17. The two back-to-back CMOS inverters in the cell have two stable states for the nodes L and R, (L=0,R=1) and (L=1,R=0), thus the cell can store a single bit of information. An SRAM supports two main operations: the Read and the Write. In a Read operation, the bit-line and bit-line complement are preset to true and then disconnected from their drivers; the word line is then activated (made true), and the cell causes one of the two bit-lines to discharge, thus allowing its value to be read out. In a Write operation, the bit-line and bit-line complement are set to the value to be stored in the SRAM cell, and then the word line is activated, forcing the value from the bit-lines onto the cell.
With the ever increasing desire for on-chip storage, designers naturally strive to make SRAM cells as small as possible, often pushing the various design and lithography rules beyond the minimum requirements. The small size of SRAM devices makes them especially sensitive to manufacturing variations, especially the impact of random dopant fluctuations, which causes variations in the threshold voltage (see for example [94]). A major concern for SRAM cells is their stability with respect to noise, since any such susceptibility can cause the stored content of the SRAM to become corrupted. A first order measure of this stability is the noise margin of the SRAM, which has been the subject of much work due to its extreme importance in modern VLSI SRAMs [95].
A simple metric of SRAM noise margin, and hence stability, can be derived from the so-called butterfly curve plotted in Figure 9.18. The curve has the voltages on nodes L and R on the x and y axes, and plots the input/output transfer curves of the two inverters (A and B in Figure 9.17). The widths of the two rectangles inscribed in the upper and lower lobes of the butterfly curve are a measure of the noise margin on each of the two storage nodes of the cell. The minimum of the two widths is a measure of the overall noise immunity of the cell. Noise can occur for multiple reasons (see [92]) and, for the purposes of our example below, we will somewhat arbitrarily assert that an SRAM cell operates correctly if its noise margin is above 150mV.
Fig. 9.17. A schematic of a conventional SRAM cell.
Fig. 9.18. Butterfly curves corresponding to the SRAM circuit in Figure 9.17.
9.3.2 Monte-Carlo Analysis

It is easy to forget that the classical simple Monte-Carlo (MC) [96] technique is a viable candidate for statistical circuit analysis, but it is indeed a candidate for our consideration and, under certain conditions, may even be the best or only candidate. Furthermore, MC is often the basis for other more complex analysis methods, as we shall see below.
For our SRAM example, we simulated the DC transfer characteristics of the two inverters that make up the storage cell and determined the (negative) unity gain points. Figure 9.19 shows how these points are defined. We denote the two input voltages that produce the two unity gain points for inverters A and B by $V_{AL}$, $V_{AH}$, $V_{BL}$ and $V_{BH}$. By inspection, the two noise margins defined in the butterfly curve of Figure 9.18 would be $NM_1 = V_{BH} - V_{AL}$ and $NM_2 = V_{AH} - V_{BL}$; the overall noise margin would be the minimum of the two, $NM = \min(NM_1, NM_2)$.
We performed a 10000 run MC analysis of the circuit, calculating the noise margin as described above. In each run, we assigned the two N-channel devices a randomly selected sample of the 1600 device parameters we generated in Section 9.1.4, while holding the P-channel devices constant. The resulting distribution of noise margins is illustrated in Figure 9.20, which shows a histogram of the measured noise margin as well as a quantile-quantile (QQ) plot of the same data. The QQ plot shows significant departure from normality (we will come back to this point in the next section).
Fig. 9.19. Inverter transfer characteristics showing the unity gain points.
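The structure of the Monte-Carlo experiment described here is summarized in the following sketch. The function unity_gain_points() is a placeholder standing in for the SPICE DC sweeps that would return V_AL, V_AH, V_BL and V_BH for a given pair of device parameter samples, and the numbers it produces here are fabricated; only the bookkeeping around it follows the text.

import random

def unity_gain_points(params_a, params_b):
    # Placeholder for the SPICE DC sweeps of the two inverters; returns
    # (VAL, VAH, VBL, VBH). The values below are fake, for illustration only.
    val = 0.20 + 0.02 * random.gauss(0, 1)
    vah = 0.45 + 0.02 * random.gauss(0, 1)
    vbl = 0.20 + 0.02 * random.gauss(0, 1)
    vbh = 0.45 + 0.02 * random.gauss(0, 1)
    return val, vah, vbl, vbh

device_samples = list(range(1600))     # stand-in for the 1600 extracted parameter sets
runs, failures = 10000, 0
for _ in range(runs):
    pa = random.choice(device_samples)         # N-channel device of inverter A
    pb = random.choice(device_samples)         # N-channel device of inverter B
    val, vah, vbl, vbh = unity_gain_points(pa, pb)
    nm1, nm2 = vbh - val, vah - vbl            # the two noise margins
    if min(nm1, nm2) < 0.150:                  # 150 mV stability criterion
        failures += 1

print("estimated failure probability:", failures / runs)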
As defined above, the SRAM is considered to be operational if the noise margin is above 150mV. There were 12 failing samples in the 10000 run MC analysis, so our estimate of the failure probability of the cell is $P_f \approx 1.2\times10^{-3}$, or, equivalently, our estimate of the yield of the cell is 99.88%. This yield number may appear comfortingly high, but if we consider that we have an array with N cells, each of which must work for the array to work as a whole6, then the yield of the array would be:

$$y_{array} = (y_{cell})^N \qquad (9.32)$$
From which we can see that the yield of a small 1024 bit array would be only 29.2%. The sheer size of modern SRAMs, which can be as large as tens of millions of bits, means that the failure probability per cell must be vanishingly small in order to insure overall array yield.
This last comment brings us to the Achilles' heel of the MC method, which is its slow rate of convergence for small failure rates [97]. Suppose we want to estimate the yield of a design, and that we have a performance metric f(x) and a corresponding critical value $f_0$ such that the design passes if $f(x) \geq f_0$. If we perform N simulations and find that $N_F$ of them fail the test, then the estimated failure probability would be:

$$P_f = \frac{N_F}{N} \qquad (9.33)$$
In practical SRAMs, redundancy is implemented in order to increase array yield, which we are ignoring in this computation.
Fig. 9.20. Results of 10000 run MC analysis of SRAM noise margin.
And the variance of this estimate would be [98]:

$$\sigma_{P_f}^2 = \frac{P_f(1 - P_f)}{N} \qquad (9.34)$$
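For the SRAM numbers quoted above (12 failures out of 10000 runs), Eqs. 9.33 and 9.34 can be evaluated directly, as in the short sketch below; the one-standard-deviation relative error it prints already hints at the convergence problem discussed next.

import math

N, N_F = 10000, 12                  # sample size and failure count from the SRAM example

P_f = N_F / N                       # Eq. 9.33
sigma_Pf = math.sqrt(P_f * (1.0 - P_f) / N)   # square root of Eq. 9.34

print(f"P_f = {P_f:.4g}")
print(f"sigma(P_f) = {sigma_Pf:.2g}, i.e. about {sigma_Pf / P_f:.0%} of the estimate")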
Under several assumptions, which we discuss at greater length in the next chapter, it can be shown that the relative error of this estimate, at the 95% confidence level, is:

$$\Phi^{-1}(0.95)\,\sqrt{\frac{1 - P_f}{P_f\,N}} \qquad (9.35)$$

So we see that ultimate convergence depends on both $P_f$ and N and that for values of $P_f$ near zero or 1, the number of simulations grows quickly. In fact, for the example above, the error in our estimate is:

$$\sqrt{\frac{0.9988}{0.0012\times 10^{4}}} \approx 0.29 \qquad (9.36)$$

which indicates that we need a significantly larger number of simulations (perhaps a factor of 100 more) before we can be confident that our estimate is reliable.

9.3.3 Response-Surface Analysis

The major cost of running large Monte-Carlo analyses comes from the need to perform circuit simulation for thousands or even millions of sets of device
parameters. Recently, there have been a number of alternative simulation methods proposed which all claim the same accuracy as SPICE with reduced computational cost (see for example [99, 100, 101, 102]). But most if not all of these alternative methods target digital logic circuits, which tend to be relatively forgiving in terms of error. SPICE [76] and its equivalents remains the only trusted simulator and the benchmark to which others are compared, especially for difficult analog circuits. One method to speed up MC analysis with SPICE is to use some type of surrogate model instead. The surrogate model is first generated, from accurate SPICE simulations, then used as a replacement for SPICE in order to get the large number of MC samples needed. The surrogate model is often referred to as a response surface model and hence the name of this type of analysis [103, 104, 105]. The variety of surrogate models that can be generated is large and their full coverage is beyond the scope of this work. A significant literature exists (e.g. [103]) in statistics on how such models can be created, tested, and used. In the remainder of this section we will extend the previous SRAM example to show how response surfaces can aid in accelerating MC analysis. Recall that for our SRAM example, we simulated the DC transfer characteristics of the two inverters that make up the storage cell and determined two noise margins (see Figure 9.19) defined by N M1 = VBH − VAL and N M2 = VAH − VBL . The overall noise margin was defined as the minimum of the two N M = min(N M1 , N M2 ). For our first experiment, we ran a standard MC of only 500 samples7 , where for each run, we assigned the two N-channel devices a randomly selected sample of the 1600 device parameters generated in Section 9.1.4, and calculated N M1 , N M2 and N M . We then built a linear regression model for the noise margin N M vs. the device parameters. The resulting fit is illustrated in Figure 9.21 which shows a rather poor fit with a correlation factor of only 49.7%. Clearly such a model is not sufficient to replace or even guide our SPICE simulations. When faced with an ill-fitting model such as the one we just generated, one might be tempted to explore more complex regression schemes, using a second order (quadratic) model for example. Often, however, the reason for the lack of fit has more to do with how the data was generated rather than the modeling scheme employed, and it is always wise to look at our data generation methodology with a critical eye before automatically using more complex models. In our case, the noise margin was defined as the minimum of two noise margins. The minimum function is inherently discontinuous and approximating its behavior via a linear function is bound to result in large errors. In fact, we noticed this earlier in Figure 9.20 where we saw that the distribution of N M departed significantly from normality. Therefore instead 7
In fact, we simply used the first 500 samples of the 10000 run MC we performed for the previous section.
Fig. 9.21. Results of modeled SRAM noise margin and residual vs. measured noise margin.
of creating an approximation of the noise margin NM as a function of the device parameters P in the following form:

$$NM = \mathcal{F}(P) \qquad (9.37)$$
we simply generate a separate response surface model for each of the two noise margins, i.e.:

$$NM = \min\left(\mathcal{F}_1(P), \mathcal{F}_2(P)\right) \qquad (9.38)$$
which removes the discontinuous minimum operator from the function. In fact, when we generate the $\mathcal{F}_1$ and $\mathcal{F}_2$ functions, we observe excellent fits with correlations of about 99%. Those fits are illustrated in Figure 9.22. In fact, one can achieve even better results by modeling the unity gain points defined in Figure 9.19, but we leave that as an exercise for the reader.
Using the linear models generated and Eq. 9.38 we can propose several ways forward. First of all, we can use the linear models with a much larger MC sample than the previous one with much lower cost. In fact, the total CPU time required for the 500 samples we started the response surface with was around 2 minutes, while the time to evaluate the linear models 100 thousand times was only 10 seconds8. This may sound like a good idea, but if we look at the model in detail, we see that the error in the estimates it provides can be as high as 5mV and it would be difficult to gauge how much of that error would pollute our yield estimate. Secondly, we can generate a much larger
This was an implementation using the AWK scripting language, the time would have been even shorter using a compiled language such as C.
Fig. 9.22. Results of modeling the two SRAM noise margins.
Secondly, we can generate a much larger MC sample and use the linear models simply to filter out those samples which are certain to work, e.g. samples for which the model predicts a noise margin larger than the 150mV threshold plus our 5mV estimate of model error. The samples that remain can then be simulated using SPICE to ensure good accuracy. With an MC sample of size one million, the linear models filtered out all but 4368 samples, resulting in a speedup of more than 200x over a naive implementation.

9.3.4 Variance Reduction and Stratified Sampling Analysis

In the last section we attempted to speed up Monte-Carlo analysis by using a surrogate model. An alternative method to increase MC efficiency is based on the observation that many of the samples generated by MC are close to each other and therefore do not provide proportionally increased insight (i.e. N samples close to each other do not provide us with N times more information about the behavior of the function). This motivates the use of alternative sampling techniques to seed the analysis.

To illustrate the problem, consider the plots in Figure 9.23, which show two thousand samples in a two dimensional space generated by: (1) uniform random generators with a range of 0 to 1 (left plot); (2) Gaussian random number generators with a mean of 1/2 and a standard deviation of 1/6 (center plot); and (3) Latin hypercube sampling [106] (LHS) with a range of 0 to 1 (right plot). A full introduction to the Latin hypercube sampling technique is beyond the scope of this work; suffice it to say that it is a pseudo-random sampling technique. For N total samples, LHS divides each variable's range into N bins and aims to have each bin sampled exactly once, thus ensuring that the full range of each variable is covered; a minimal sketch of this scheme is given below. From the plots in Figure 9.23 we can see that the uniform samples and the ones from Latin hypercube sampling span the full domain of the data, while the Gaussian samples are clustered in the center. The difference appears dramatic in two dimensions, but is somewhat less so in higher dimensions.
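To make the comparison of Figure 9.23 concrete, the sketch below (an illustrative reconstruction, not the authors' original script) draws two-dimensional samples from a uniform generator, a Gaussian generator, and a simple Latin hypercube scheme; the bin-and-shuffle implementation of LHS shown here is one common variant.

import numpy as np

rng = np.random.default_rng(0)
n = 2000  # number of samples, matching the experiment in the text

# (1) Uniform samples on [0, 1)^2
uniform = rng.uniform(0.0, 1.0, size=(n, 2))

# (2) Gaussian samples with mean 1/2 and standard deviation 1/6
gauss = rng.normal(loc=0.5, scale=1.0 / 6.0, size=(n, 2))

# (3) Latin hypercube samples: divide each axis into n bins and
#     sample each bin exactly once, shuffling bins independently per axis
def latin_hypercube(n_samples, n_dims, rng):
    samples = np.empty((n_samples, n_dims))
    for d in range(n_dims):
        # one point per bin, placed at a random offset inside the bin
        pts = (np.arange(n_samples) + rng.uniform(size=n_samples)) / n_samples
        rng.shuffle(pts)
        samples[:, d] = pts
    return samples

lhs = latin_hypercube(n, 2, rng)

# Every axis bin of width 1/n contains exactly one LHS point
print("max points per bin (LHS):",
      np.histogram(lhs[:, 0], bins=n, range=(0.0, 1.0))[0].max())      # -> 1
print("max points per bin (uniform):",
      np.histogram(uniform[:, 0], bins=n, range=(0.0, 1.0))[0].max())  # usually > 1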
Fig. 9.23. Random samples generated with Uniform, Gaussian, and Latin hypercube generators.
In traditional MC, we assume that the probability of occurrence for each sample is identical. So when we estimated the failure probability (or equivalently the yield) of the SRAM cell in Section 9.3.2, we simply computed the probability of fail Pf as the ratio:

Pf ≈ Nfail / NTotal    (9.39)

When we use alternative sampling techniques, the occurrence probability for each sample is different. If we denote that occurrence probability by ω, then we can modify the previous equation to get:

Pf ≈ (Σfailures ω) / (Σall samples ω)    (9.40)
So we simply add the probabilities in an a posteriori fashion, i.e. after the analysis is performed.
For problems of small dimensionality, a specialized sampling method becomes possible in which the complete operating region is gridded uniformly, and each resulting grid location is evaluated to find out if it passes or fails [107]. Naturally, such a scheme is exponential in the number of variables being considered and rapidly becomes impractical even for 4 or 5 dimensional problems.

We demonstrate the use of these variance reduction techniques on our SRAM example above. In doing this, we make use of the PCA transformation we generated in Section 9.1.5, which allows us to generate valid device model parameters from a set of three independent normally distributed random variables. We also rely on Latin hypercube sampling in a 6 dimensional space (3 dimensions for each of the two N-channel devices in our SRAM circuit of Figure 9.17), where each of the variables was sampled in the range −4 . . . +4, reflecting ±4σ variation in each of the factors. We generated a sample of 2000 circuits, and for each sample computed the probability of occurrence as the product of the probabilities of occurrence of each of the six parameters (since they are independent, of course). Each of the probabilities was computed using the standard Normal probability density function, and thus the overall probability of the sample can be written as:

ω = ∏i=1...6 √(2/π) · e^−(Pi/2)²    (9.41)

We simulated all 2000 circuits and found that 422 failed, a comfortingly large number given that we only had 12 failures in the 10000 run MC performed earlier. Applying Eq. 9.40 we find:

Pf ≈ (Σfailures ω) / (Σall ω) = 8.1854 × 10−4 / 0.3902 = 0.002488    (9.42)

which is about twice the earlier estimate, in spite of the fact that the utilized PCA factors explain just 95% of the overall variability! The equivalent number of MC simulations needed to observe the same number of failures would have been about 170 thousand, so we have achieved a speedup factor of roughly 80x.

To show how this smaller sample achieved better results, it is helpful to look at the dependence of the noise margins on the PCA variables. Figure 9.24 shows NM1 vs. P3 (the third principal component). It is clear (a) that there is a very strong dependence between the two quantities, and (b) that by sampling P3 densely over the ±4σ domain many more samples were generated in the failure region, contributing to the improvement in our estimate of the failure probability.
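A minimal sketch of this re-weighting computation is shown below. The fails() predicate is a hypothetical stand-in for the SPICE noise-margin check (here just a threshold on one factor), and the weight follows Eq. (9.41) as reconstructed above; only the structure of the calculation, Eq. (9.40), is the point of the example.

import numpy as np

rng = np.random.default_rng(1)
n_samples, n_dims = 2000, 6        # 3 PCA factors for each of the two N-channel devices

# Stand-in for LHS samples stretched over the +/-4 sigma range of each factor
P = rng.uniform(-4.0, 4.0, size=(n_samples, n_dims))

# Occurrence probability of each sample, Eq. (9.41): product over the six factors
omega = np.prod(np.sqrt(2.0 / np.pi) * np.exp(-(P / 2.0) ** 2), axis=1)

def fails(p):
    # Hypothetical pass/fail criterion standing in for the SPICE simulation
    return p[2] > 3.2

is_fail = np.array([fails(p) for p in P])

# Eq. (9.40): weighted failure probability
p_fail = omega[is_fail].sum() / omega.sum()
print(f"weighted failure probability estimate: {p_fail:.4g}")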
Fig. 9.24. Dependence of the first noise margin NM1 on the third PCA variable P3, as obtained from a 10000 run Monte-Carlo analysis (the failure region is marked at the low end of the NM1 axis).

9.4 SUMMARY

In this chapter we reviewed statistical circuit analysis, starting from the basics of how circuit simulation is performed, and explaining how device models
are used. We then showed how device characterization is performed on a single device, and how the characterization of a statistically valid sample of devices can be used to get the distribution of device parameters. Since device parameters are expected to be correlated, we then showed how principal component analysis can be used to generate a new set of uncorrelated factors that are related to the parameters via a linear transformation. With the ability to simulate circuit variability in place, we then showed two examples of statistical analysis, namely worst-case (corner) analysis and Monte-Carlo analysis, along with various speedup techniques for the latter.
10 STATISTICAL STATIC TIMING ANALYSIS
The generation of random numbers is too important to be left to chance. Robert R. Coveyou
Information about parameter variability must be propagated to the system level. In this chapter we discuss methods for timing analysis of digital systems using statistical techniques. At the system level, timing analysis is concerned with ensuring that all sampled memory elements of a circuit have the proper logical value at the end of each clock cycle. Verifying this property first requires that under no circumstances does the computation of the correct logical value take longer than the clock cycle of the device. Secondly, it requires that the latching of the correct logical value not be pre-empted by any rapid propagation of the results of the previous clock cycle.

Fundamentally, timing analysis can be performed dynamically or statically. In dynamic analysis, a circuit is analyzed for a particular time interval by simulating its response to a fully specified set of input patterns. Dynamic analysis can be performed using the circuit simulator SPICE, fast circuit simulators, or gate-level simulators. Static timing analysis, in contrast, verifies the timing of a circuit for all possible vectors.

Static timing analysis (STA) has a number of vital advantages over dynamic analysis that have made it the industrial workhorse for verifying the timing properties of synchronous digital integrated circuits. The accuracy, computational efficiency, and reliability of STA have made it the natural choice. A static approach eliminates the need for vector-set generation and, more importantly, provides vector-independent worst-case modeling. This approach obviates the concern that a key critical-path-sensitizing vector sequence may have been overlooked. Computing the longest and shortest paths without considering functionality can be done using linear time algorithms. Thus, static timing checks can be carried out efficiently, in time linear in the circuit size (the number of gates and interconnect segments), which is necessary
for full-chip timing verification. After decades of research, false-path-eliminating function-dependent timing [149] has also been brought within affordable computational limits. Most importantly, STA has been reliable. With solid cell models and wiring delay models fortified by back-annotated capacitances, STA has been able to provide reliable, i.e. consistently conservative, bounds on the delay of a circuit. It should be pointed out that the above description gives a simplified view of what STA does, and ignores many practical complications of real-life STA, such as handling transparent latches, cycle stealing, time borrowing, and multicycle operation.

Each of the traditional strengths of STA also exposes a potential weakness. Reliability has enforced conservatism, and the requirement of conservatism has led to worst-case delay estimation using an elaborate series of worst-case assumptions in delay modeling, delay calculation, and path traversal. Such a worst-case approach to timing is beginning to fail for several reasons: (1) conservatism is sacrificing too much performance; (2) under certain "worst-case" conditions timing may in fact not be conservative, so that reliability is jeopardized; (3) the computational cost of verifying timing with corner approaches becomes prohibitive because of the exponential dependency on the increasing number of independent sources of variation that must be considered; and (4) traditional STA does not provide a method for estimating yield.

The limitations of traditional STA techniques have led to calls for the creation of a timing verification methodology based on explicit probabilistic models of uncertainty in the process parameters, and on algorithms that can efficiently perform timing analysis using fully probabilistic timing descriptions. In this chapter we review the models and algorithms that have recently been developed to enable fully probabilistic timing verification.
10.1 BASICS OF STATIC TIMING ANALYSIS

Static timing analysis is concerned with ensuring the timing correctness of synchronous digital circuits [157]. A verification methodology relying on static analysis is based on the separation of timing and functional correctness of a design. Static timing analysis replaces a dynamic (input-dependent) simulation with an input-independent evaluation of circuit delay. While simulation is more accurate for a specific input vector, the overwhelmingly high number of test vectors, which grows exponentially with the number of inputs, makes it prohibitively expensive for large circuits. Since static timing analysis does not rely on specific vectors, it can ensure that the circuit is correct for any vector, but to do this it is essential that it be built upon conservative delay models of gates and interconnects covering the entire set of inputs and a range of process parameters and operating conditions.

A synchronous digital circuit is timing-correct if the signal computed by the combinational circuit is captured at the latching (destination) flip-flop on every latching edge of the clock signal, Figure 10.1. This condition
Fig. 10.1. A combinational circuit with source and destination flip-flops.
can be expressed in terms of two constraints that must be met for every combinational circuit. First, the signal must stabilize at the destination flip-flop by the time the latching clock edge arrives. The arrival time at the destination flip-flop, Tmax, is given by the path with the longest delay. The minimum margin required between the signal arrival time and the clock edge is the setup time (Tsetup). The safety margin must be further increased if the arrival times of the clock edge at the launching and destination flip-flops are not synchronized. The worst situation is when the clock arrives latest at the source flip-flop while arriving earliest at the destination flip-flop. The maximum difference between clock arrival times is known as the clock skew, Tskew. If the clock period is T, then the setup timing check is done by verifying that:

Tmax ≤ T − Tsetup − Tskew    (10.1)
This analysis is known as max delay analysis and must be done using the slowest (worst-case) delays of gates and interconnects to ensure that the inequality reliably holds. The second requirement is that the correct signal keep its value for an additional period of time after the clock edge arrives at the destination flip-flop. This extra margin is known as the hold time, Thold. A problem may occur if a new signal for the next clock cycle propagates through a short (fast) path before the hold time is over and corrupts the logical value to be latched by the flip-flop. A hold time violation will not happen if the minimum delay through any circuit path (Tmin) is greater than the hold time. If the clock arrives early at the launching flip-flop and late at the destination flip-flop, the minimum margin must be effectively increased. Denoting the maximum difference in clock arrival times by Tskew, the hold time check is:

Tmin ≥ Thold + Tskew    (10.2)
This check is known as the min delay check and must be done using the fastest (best-case) delays of gates and interconnects.
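As a minimal illustration of the two checks in Eqs. (10.1) and (10.2), with arbitrary placeholder values in nanoseconds (not taken from the text):

def setup_ok(t_max, t_clk, t_setup, t_skew):
    # Max-delay (setup) check, Eq. (10.1)
    return t_max <= t_clk - t_setup - t_skew

def hold_ok(t_min, t_hold, t_skew):
    # Min-delay (hold) check, Eq. (10.2)
    return t_min >= t_hold + t_skew

print(setup_ok(t_max=4.2, t_clk=5.0, t_setup=0.15, t_skew=0.20))  # True: 4.2 <= 4.65
print(hold_ok(t_min=0.30, t_hold=0.10, t_skew=0.20))              # True: 0.30 >= 0.30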
A crucial aspect of timing a sequential digital circuit is determining the timing of the combinational blocks. In this chapter, we take the major algorithmic role of static timing to be the computation of the minimum and maximum delay through the combinational circuit. For each of the checks, a circuit is represented by a timing graph. There is a vertex in the graph for each primary input and output, and for each gate input and output. A directed edge connects two vertices if a transition on the former may cause a timing transition on the latter. Thus, there is a weighted edge for each pin-to-pin transition within a gate and for every interconnect segment (net). The edge weight is the delay of that edge. (The delay can be either the maximum or the minimum delay through the gate, depending on the type of timing check.) Timing graphs for combinational circuits are assumed to be directed acyclic graphs (DAGs). For DAGs, the computation of max (and min) propagation delays can be performed using a graph traversal following a topological order. For a graph with |V| vertices and |E| edges, this can be done in O(|V| + |E|) time, that is, in time linear in the circuit size.

It is obvious that the quality of STA depends on the timing numbers that are used in processing the timing graph. There are two types of edges, corresponding to gate delay and interconnect delay, and the computation of delay numbers for these edges is done very differently. STA can be used in standard-cell flows, for custom design, and for transistor-level timing. In standard-cell flows, the libraries contain timing tables or analytical equations that capture the cell pin-to-pin propagation delays for different capacitive loads and input transition times. These tables are generated by performing exhaustive circuit simulation with SPICE after the extraction of the layout parasitics for cells in the library. In transistor-level STA, the delays of the timing edges are extracted dynamically. Delay tables or analytical models capture cell delay as a function of such parameters as output load and input slew rate.

Cell delay is a non-linear function of a number of parameters, and to guarantee that the circuit operates under any conditions, the SPICE characterization is performed at a variety of extreme environmental and process conditions (corners). Among the process parameters that have a large impact on delay are channel length, threshold voltage, and oxide thickness. The environmental parameters include temperature and supply voltage. In order to capture the impact of process, voltage and temperature variations, the simulation is repeated using a SPICE model that contains the corresponding points in the process-voltage-temperature (PVT) space. While the raw number of corners can be high, in traditional practice a small number of lumped corner sets are created. In the simplest case, three corners - fast, slow, and typical - are used. These are composed by considering variations in the environmental factors (temperature and supply voltage) and process parameters. For the environmental parameters, the slow corner corresponds to low voltage and high temperature, while the fast corner is measured at high voltage and low temperature.

Interconnect delay calculation is performed following the back-end parasitic extraction and model-order reduction steps. Because the interconnect nets may contain millions of RLC elements, model-order reduction strategies, such as PRIMA [177], are essential to modern interconnect analysis given the enormous complexity of full system analysis.
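The topological longest-path traversal sketched earlier in this section (arrival-time propagation over a DAG in O(|V| + |E|) time) can be written compactly as below; the three-gate netlist and the delay values are invented for illustration.

from collections import defaultdict, deque

def max_arrival_times(edges, sources):
    """Late-mode arrival times on a DAG via a topological (Kahn-style) traversal.

    edges: list of (u, v, delay) timing arcs; sources: primary inputs."""
    graph, indeg = defaultdict(list), defaultdict(int)
    nodes = set(sources)
    for u, v, d in edges:
        graph[u].append((v, d))
        indeg[v] += 1
        nodes.update((u, v))
    arrival = {n: 0.0 for n in nodes}
    queue = deque(n for n in nodes if indeg[n] == 0)
    while queue:
        u = queue.popleft()
        for v, d in graph[u]:
            arrival[v] = max(arrival[v], arrival[u] + d)  # late-mode max-plus update
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return arrival

# Hypothetical timing graph: two inputs, two gates, one output (delays in ns)
arcs = [("a", "g1", 1.0), ("b", "g1", 1.2), ("g1", "g2", 0.8),
        ("b", "g2", 0.5), ("g2", "out", 0.3)]
print(max_arrival_times(arcs, sources=["a", "b"]))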
Industrial STA methodologies also need to be concerned with many other practical issues that we do not discuss here, such as the distinct evaluation of delays for rise and fall transitions, dealing with clock domains, handling false paths, and others. The reader is advised to refer to the excellent treatments of STA in [188][190]. Here we concentrate on the aspects of STA that relate to the impact of variability.
10.2 IMPACT OF VARIABILITY ON TRADITIONAL STATIC TIMING VERIFICATION

Essential to the success and sufficiency of traditional STA is the ability to check timing constraints under the worst- and best-case conditions, but to do so efficiently and without imposing unreasonable margins. The rise of variability discussed in previous chapters strains the ability of static timing to be an effective design tool. The growing magnitude and complexity of variability lead to greater conservatism that wastes too much performance. Parametric yield optimization is not possible with traditional STA because it is unable to give the designer a sense of the distribution of performance metrics. The computational cost of verifying timing with corner approaches becomes prohibitive with the increasing number of independent sources of variation that must be considered. Further, the complex dependencies of circuit performances on process and environmental parameters make it difficult to identify the proper corners; as a result, the "worst-case" timing may not actually be worst and will thus fail to identify some timing violations. These challenges have spurred much recent interest in extending static timing analysis to a fully probabilistic or statistical formulation.

10.2.1 Increased Design Conservatism

Industrial experience shows that timing estimates produced by existing STA tools are often overly conservative: the gap between the worst-case timing constraints predicted by the tools and the final silicon performance is sometimes as high as 30% [150]. This gap is due to many reasons: conservative cell delay models that do not comprehend simultaneous signal transitions on cell inputs, conservatism due to noise analysis, variability in process parameters, ad hoc insertion of timing margins by designers, general modeling inaccuracy in creating compact models from simulation, as well as inaccuracy of the process files. An important contribution to this gap is the impact of process variability.

In late mode analysis, deterministic static timing analysis performs the longest path computation by setting the delay of each timing arc in the timing graph to its maximum value. The assumption that makes this procedure meaningful is that delay variations in all delay elements are perfectly correlated with each other. This condition can be translated into two more specific assumptions: (i) intra-chip parameter variability is negligible
compared to inter-chip variability, and (ii) all types of digital circuits, e.g. all cells and blocks, behave statistically similarly (i.e. in a correlated manner) in response to parameter variations. Both of these assumptions held up well in the past. However, as the earlier chapters argued, the intra-chip component of variation is steadily growing. This is driven by systematic Lgate variability patterns due to photolithography, interconnect variability due to chemical-mechanical polishing, as well as random contributions such as those from random dopant fluctuations. The second assumption is also not completely reliable. Specifically, different cells have different sensitivities to process variations [200]. This is especially true of cells with different aspect ratios and tapering, and of cells designed within multi-threshold and multi-VDD design methodologies. Also, interconnect and gate delays are not sensitive to the same parameters, and with an increasing portion of delay being attributed to interconnect, intra-chip delay variation of interconnect segments also needs to be taken into account. The failure of these two assumptions leads to intra-chip variation of gate and wire delays, i.e., they begin to vary in an uncorrelated manner under the impact of variability. The inability of deterministic STA to comprehend such uncorrelated timing behavior results in an increased level of conservatism, lost performance and over-design, because designers operate under a distorted view of the system's timing.

The conservatism can be defined as the difference between the clock period at a high percentile of the speed binning distribution (for example, the 95th percentile) and the estimate produced by deterministic STA. In Figure 10.2, the typical relationship between the two estimates is shown. Statistical analysis shows a tighter distribution with smaller spread but with the mean clock period shifted, indicating an average degradation of achievable Fmax.

The reduction of conservatism and the amount of corresponding improvement in circuit speed that can be enabled by SSTA is a matter of some dispute. At the time of this writing, SSTA has not yet become widely adopted for industrial design and there is insufficient data for gauging its actual benefits. The achievable reduction of conservatism depends strongly on assumptions regarding the path length, the number of critical paths, and the magnitude and composition of variability. For relatively short artificial paths the numbers vary from 6-11% [151]. For a popular set of combinational benchmarks, under a different set of assumptions, the average difference at the 99.7th percentile is 6% [151]. A very simple analysis can show that the relative difference between the worst-case timing estimate and SSTA at best scales as:

φ⁻¹(y) · (σ/µ) · (1 − 1/√N)    (10.3)

where φ⁻¹(·) is the inverse of the Gaussian cdf, y is the value of the timing yield distribution at which the comparison is performed, σ is the standard deviation and µ is the mean delay of a single gate, and N is the number of stages in the path. Only the intra-chip portion of variation needs to be
considered, as it exclusively acts to introduce conservatism. For paths that are typical of modern microprocessors, the number of stages is N = 9-12. Assuming that the normalized standard deviation of a gate delay is σ/µ = 5%, we get an improvement of 10% at the 99.7th percentile, and 6.7% at the 95th percentile. For application-specific integrated circuits (ASICs), the typical number of stages is substantially higher, N = 35-50, and thus the reduction of conservatism can also be larger [191]. Over-conservatism also makes the probability of finding a chip with the characteristics assigned to it by worst-case timing analysis very small. Deterministic STA does not have any way to quantify the likelihood of observing a specific timing behavior because of its non-probabilistic formulation. Because STA tools are incapable of predicting the parametric yield of a circuit, they do not allow design for yield as an active strategy. In the next chapter we discuss how the coupling between timing yield and leakage variability further raises the importance of accurately predicting timing yield (e.g. the speed binning curve) in nanometer silicon technologies [153].
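Evaluating Eq. (10.3) for the values quoted above is straightforward (a minimal sketch; it reproduces the roughly 10% figure at the 99.7th percentile, while the exact percentile convention behind the 6.7% figure is not spelled out in the text):

from statistics import NormalDist

def conservatism_gap(y, sigma_over_mu, n_stages):
    # Relative gap between the worst-case estimate and SSTA, Eq. (10.3)
    return NormalDist().inv_cdf(y) * sigma_over_mu * (1.0 - 1.0 / n_stages ** 0.5)

for n in (9, 12, 35, 50):
    print(f"N = {n:2d}: {conservatism_gap(0.997, 0.05, n):.1%}")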
Fig. 10.2. Traditional timing underestimates typical (mean) Tclock and overestimates worst-case Tclock .
10.2.2 Cost of Full Coverage and Danger of Missing Timing Violations

To validate chip correctness, verification must ensure that timing requirements (both setup and hold time) are met under all relevant realizations of process and environmental parameters. As the number of independent sources of variability increases, the number of corners at which performance needs to
be checked increases exponentially. Another challenge is to identify the appropriate "worst-case" parameter combinations under increasingly complex dependencies of performance variables on process parameters.

The space of parameter variability is a very high-dimensional one, even if intra-chip variability is ignored. Because individual metal layers are processed at different times and using different equipment, the parameter variations affecting interconnect layers are independent. There are several parameters per layer that need to be taken into account. For each routing layer, four variables should be included for complete coverage - metal width, metal height, interlayer dielectric height, and via resistance. Some of these parameters exhibit correlation; for example, metal height, dielectric height, and via resistance tend to be correlated. For M metal layers, there are 4M variables to consider. Thus, just for interconnect timing verification, in a ten metal layer process, the dimensionality of the parameter space is 40 [154]. (The presence of intra-chip variability further introduces an almost infinite number of independent sources of variation.) With N parameters, where N is the dimensionality of the parameter space, the number of corners is 2^N. So for the interconnect parameter variation space, this would translate into 2^40 ≈ 10^12 corners (and this still ignores transistor variability and environmental factors). This is clearly impractical.

Help may come in the form of abstraction, which uses a priori reasoning to reduce the number of independent variables, cutting down on the number of corners that need to be checked. In essence, a new composite variable is created and a set of real parameters is lumped into this new fictitious parameter. One solution, for example, is to lump the parameters for each layer into a combination for slow and fast interconnect timing response, which would lead to 2^10 = 1024 corners, still an unacceptably high number. The next level of simplification is to lump the parameters of all metal layers into one interconnect parameter, which can be slow or fast. Then for setup time checks the slow cell and slow interconnect corners are used, whereas for hold time checks the fast cell and fast interconnect corners are used. These approaches are obviously overly conservative most of the time because they ignore the possible lack of perfect correlation between parameters lumped into a single variable. On the other hand, abstracting out the variation of individual parameters may make the analysis unreliable. Such an analysis presumes that the timing constraint checks can be performed at the corners of the new variables rather than of the individual variables. This approach essentially relies on a priori identification of the potentially limiting original process parameter corners. One specific concern is that the major cause of hold time violations in digital design arises when the delays of the data nets and of the clock nets mis-track under process variability. Thus, an improvement to the basic strategy is to identify groups of signals whose independent behavior may cause problems. Notice that this would not have to be done if we simply tested the 2^40 corners over all metal layers - in this case, both clock and data signals would automatically be treated distinctly.
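The corner counts quoted above follow directly (a trivial check):

params_per_layer, layers = 4, 10
print(f"full interconnect space: 2**{params_per_layer * layers} = "
      f"{2 ** (params_per_layer * layers):.2e} corners")
print(f"per-layer lumping      : 2**{layers} = {2 ** layers} corners")
print("single lumped variable : 2 corners")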
An approach known as 4 corner analysis reduces conservatism while capturing clock-data mis-tracking by distinguishing the data and clock nets. For each group of nets (clock and data), all the parameters are grouped. In this technique, one performs the hold time and setup time checks under four corner conditions: fast clock / fast data, fast clock / slow data, slow clock / fast data, and slow clock / slow data. But since they depend on similar gates and wires, the clock and data speeds are unlikely to be completely independent, and a less conservative coverage can be achieved by a so-called 6 corner analysis, with corners selected in the clock-data speed plane as shown in Figure 10.3. Nonetheless, a danger of missing conditions leading to timing violations still exists [154]. Suppose a data line is dominated by metal-1 delay and a clock line by metal-2 delay. Then the worst case for setup is when metal-1 is slow and metal-2 is fast, and the worst case for hold time is when metal-1 is fast and metal-2 is slow. In our attempt to reduce conservatism, we got rid of the full slow/fast and fast/slow corners. These two cases would not be covered by the 6 corner analysis, and thus there is still a danger of a timing violation.
Fig. 10.3. 6 point corner analysis.
Another difficulty of corner analysis is its inability to adequately deal with interior points of the parameter space at which failures may occur. This happens due to the non-monotonic dependence of delay on the process and environmental parameters. If the delay is monotonic, the minimum and maximum values of delay are to be found at the corners of the hypercube in the process space. Introducing the notion of an acceptance region will clarify this difficulty of corner analysis. The traditional approach to timing verification effectively assumes that the acceptance region in the process space is convex, so that verifying acceptable performance at the vertices (corners) of the parameter spread (PS) guarantees that the entire PS lies within the acceptance region. It is clear that if the assumption of convexity is violated, timing correctness cannot be guaranteed. In Figure 10.4, simulations at the four corners indicate that the circuit performances are acceptable. Assuming
convexity, the apparent (perceived) acceptance region is a rectangle. If, however, the true yield body, the region in the process space where performances are within specifications, is not convex, then some yield escapes are inevitable. The set of points at which timing is checked thus must provide a minimal coverage of the acceptance region regardless of its shape.
Fig. 10.4. If the true yield body is not convex, then some yield escapes are inevitable under the assumption of a convex acceptance region.
An example of a phenomenon that leads to the breakdown of simple corner analysis is the so-called inverted temperature dependence of delay [155]. Normally, delay increases at lower supply voltage and higher temperature. But at low supply voltages, temperature impacts both mobility and threshold voltage; as a result, the cell delay may be at its worst in the interior of the parameter spread. Identifying the worst-case behavior requires an iterative computation, and as a result it is impossible to analytically determine the parameter settings (voltage and temperature) that will lead to worst-case cell and path delays. Thus the modeling and characterization methodology used with traditional gate-level STA tools cannot guarantee that the resulting timing estimates are reliable. To solve this problem by non-statistical means, one would have to find a set of process parameters that would always lead to the worst-case circuit behavior. Due to the complex patterns of coupling between gates and interconnects, finding such a point in the space of process parameters is extremely difficult. From this perspective, one of the benefits of statistical timing analysis is that it naturally manipulates explicit parametric functions of delay in terms of the process parameters. This removes the need to identify the worst-case conditions a priori.
10.3 STATISTICAL TIMING EVALUATION

The fundamental problem of traditional static timing analysis is that it is essentially formulated in a non-probabilistic manner. The delays of gates and paths are treated as fixed numbers, not random variables. The inherently probabilistic problem is reduced to a purely arithmetical one, and once this transition is made, the ability to probabilistically quantify the likelihood of timing estimates is irreversibly lost. A different formulation of the timing problem is required that does justice to its probabilistic nature. This has become the focus of the recent work on statistical STA.

The development of statistical static timing analysis poses a number of challenges of an algorithmic and modeling nature. Ideally, a statistical timing tool should keep the high capacity and fast runtime that are characteristic of deterministic STA. It should not pose an undue burden on the overall design methodology in terms of characterization effort and increased data volumes. It should have the ability to perform incremental timing so that it can be used in optimization. It should not require specific (e.g. linear) delay models or restrict the distribution of process parameters (e.g. to Gaussian), and it should handle delay correlation due to all possible sources, including path reconvergence and spatial correlation in process parameters. Some of these challenges have been addressed through the large amount of work in recent years, but many challenges remain. This section discusses the algorithmic aspects of implementing an SSTA flow.

10.3.1 Problem Formulation and Challenges of SSTA

The statistical timing information for a combinational circuit is encoded in the form of a probabilistic timing graph in which the edges represent the gate and wire delays and the connections between gates and wires are represented by the vertices. This representation differs from the timing graph of deterministic STA in that the delay values of the edges are now random variables. In an ideal scenario, the joint pdf of the delays of the edges is given. The objective of SSTA is to compute a statistical description of the maximum and minimum propagation time through the graph. Such a description may be equivalently represented in terms of the cumulative distribution function, the probability density function, or the moments of the max and min delay. In the discussion that follows, we focus on the distribution of the maximum delay through the timing graph.

Most of the deterministic approaches to STA are based on the topological longest-path algorithm, which is also known as the critical path method. In this algorithm, each node and edge of a timing graph is explored in topological order and is visited once, in a single traversal of the graph. During the graph traversal, the maximum arrival time to a given node is recorded and then updated as all the incoming edges are explored [156][188]. The two fundamental arithmetic operations that have to be carried out in the course of the graph traversal are the max and the sum operators:
D(Z) = max{D(X) + dX→Z, D(Y) + dY→Z}    (10.4)

where D(X), D(Y), and D(Z) are the arrival times at the vertices X, Y, and Z respectively, and dX→Z and dY→Z are the timing arc delays. Deterministic STA applies this procedure recursively, propagating the result toward the circuit primary outputs. In the statistical setting, the above timing quantities become random variables. Most of the non-trivial algorithmic features of SSTA stem from the difficulty of evaluating the above equation in the statistical setting in ways that are (i) efficient, (ii) accurate, and (iii) allow the propagation of an invariant timing expression through the graph.

One specific challenge is to find a representation of random delays and arrival times which would remain invariant under the above evaluations and could be propagated through the graph. As a way to illustrate the problem, let us begin by considering the simplest possible model. Assume that the delays of the gates (vertices of the graph) are normal random variables that are distributed independently. The sum can be computed both accurately and efficiently: it is normally distributed, and its mean and variance are easily calculated. However, the maximum of two normal random variables is not distributed normally. This makes subsequent timing evaluations difficult, since the operands do not remain normally distributed. Ideally, the stochastic maximum operation should have the property of closure, in that the result of the stochastic max should be expressed in the same form as its inputs. Evaluating the stochastic maximum accurately in a way that is conducive to repeated evaluations is one of the major problems of SSTA.

Performing statistical timing graph evaluation in an efficient manner is also difficult. In a deterministic setting the maximum operator permits making a clear-cut decision on which path will be responsible for the longest delay to a given node. However, the maximum of two random variables is not uniquely defined. The problem fundamentally reduces to the difficulty of making decisions in a stochastic setting and the resulting inability to prune out sub-critical timing paths. There are several alternative ways to compare random variables. The strongest criterion is known as simple stochastic ordering. For two random variables X and Y, it can be said that X is stochastically smaller than Y if

P{X > t} ≤ P{Y > t} for all t    (10.5)

Under this definition, if X is stochastically less than Y, then X is less likely than Y to take values that are larger than t. But even this strongest criterion does not imply P(X < Y) = 1. There is no guarantee that one path will be greater than another in all chip instances - there is always a probability that another path is longer. That makes it difficult to prune out sub-critical paths, and puts a burden on the efficiency of the algorithm. Even though the identity of the longest path is now random, it is possible to quantify the probability, P(X < Y), that one path is greater than the other. This probability is known as the tightness probability.
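A quick Monte Carlo experiment (with made-up path-delay means and sigmas) illustrates both observations: the sum of two Gaussian delays remains Gaussian while their max is skewed, and the tightness probability P(X < Y) is easy to estimate empirically.

import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# Hypothetical path delays in picoseconds
x = rng.normal(100.0, 8.0, n)   # X ~ N(100, 8^2)
y = rng.normal(104.0, 12.0, n)  # Y ~ N(104, 12^2)

s = x + y             # sum of Gaussians: still Gaussian
m = np.maximum(x, y)  # max of Gaussians: not Gaussian (right-skewed)

def skewness(v):
    v = v - v.mean()
    return float((v ** 3).mean() / v.std() ** 3)

print("skewness of sum :", round(skewness(s), 3))   # close to 0
print("skewness of max :", round(skewness(m), 3))   # noticeably positive
print("tightness P(X < Y):", round(float((x < y).mean()), 3))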
For the case of normally distributed X and Y, the tightness probability can be computed analytically using the expressions derived by Clark [158], and it is used in some of the popular approaches to SSTA. Two classes of algorithms have been proposed to deal with the above challenges: block-based and path-based algorithms. Each technique has its own strengths and weaknesses, which we discuss in the remainder of this chapter.

10.3.2 Block-Based Timing Algorithms

Block-based, or node-based, approaches to SSTA aim at developing techniques that allow statistical delay evaluation in a way similar to the traditional topological longest-path algorithm, in which each node and edge of a timing graph is explored in topological order with a breadth-first search strategy, and is visited once in a single traversal of the graph. Block-based approaches preserve the attractive computational property of deterministic STA: linear runtime complexity in terms of circuit size. They are also inherently more suitable for incremental timing analysis, which is essential for optimization. It is highly desirable to be able to perform timing verification in an incremental mode, that is, to re-use most of the timing checking and limit additional analysis to the small region that is affected by localized changes to the structure of the netlist during synthesis, or localized changes of the layout during placement and routing. The need for operators having the property of closure when processing random variables is inherent in block-based approaches. This requires a representation of timing quantities in a form suitable for further propagation through the graph.

Block-based algorithms can be classified based on how they represent random delays and on their ability to handle delay correlations. Random edge delays can be represented by discrete or continuous random variables. If the number of outcomes is finite, then the random variable is a discrete random variable. A discrete random variable X is characterized by its probability distribution, or probability mass function, which is a set of the possible values of X and their probabilities. If the number of outcomes is infinite, then the random variable is continuous. A continuous random variable X can be characterized by its probability density function or cumulative distribution function. A useful distinction that we need to introduce here is between random variables described compactly using parametric formulas and those described using histograms or entire probability mass functions. The familiar examples of continuous random variables, such as the normal, uniform, and exponential, are parametric in the sense that their probability density functions can be described by formulas with only a small number of parameters (two, two, and one, respectively, for the distributions above). Similarly, such familiar discrete random variables as the Binomial and Poisson can also be described parametrically - by compact formulas with two and one parameters, respectively.
Representations of random variables that involve specifying a full probability mass function or a histogram are not compact in the above sense. A histogram is a way to graphically and computationally represent empirically observed data, which can be either discrete or continuous. Indeed, any legitimate probability mass function defines a unique random variable, and similarly for probability density functions. The non-compact representations are more flexible in that they can define a random variable of arbitrary distribution, while parametric representations limit the random variables that can be captured to a small class. However, non-compact representations (histograms, probability mass functions) are expensive in terms of data volume, and the algorithms handling them often suffer from exponential run-time [162]. Representations based on continuous parametric distributions are more efficient.

Another distinction is the way the algorithms handle correlation between node delays due to spatial correlation of process parameters and/or correlation between node arrival times due to path reconvergence. The early work in SSTA ignored both types of correlations and handled the contributions of inter-chip and intra-chip variability to delay separately [160][161]. Later, strategies to address delay dependencies due to path reconvergence were explored [162][163]. More recent work has tried to include both types of correlations in the computation [151][152][164].

Separable Treatment of Intra- and Inter-Chip Variation

The distribution of circuit delay depends on intra- and inter-chip parameter variations impacting gate delays simultaneously. Thus an accurate computation of this distribution requires treating the intra- and inter-chip variation components jointly. A single variable which is the sum of the two components (Pinter + Pintra) will exhibit apparent spatial correlation between values at different locations on the chip. An algorithmically simple strategy for SSTA is to handle the impact of Pinter and Pintra separately. The impact of inter-chip variation can then be captured with case-based characterization: at cell library characterization time, cell delay values would be defined at parameter values that correspond to a specific quantile of the distribution of the inter-die portion of variability. This cell delay value (defined by the impact of inter-chip variation) is treated as "nominal" within the SSTA engine, with the variance due to intra-chip variability taken into account statistically. Intra-chip variability is handled by the SSTA engine. The separate treatment gives credence to the assumption that the delay variability of individual gates is uncorrelated, and also makes the algorithmic task much more tractable.

Using a normal distribution to model the process parameters and a first-order delay model, delay is also normally distributed. Since most STA flows rely on delay tables containing delay as a function of load and input slew [157], the simplest approach to SSTA would directly operate with delays as unmediated random variables [159][160]. If Pintra can be assumed
to have no substantial spatial correlation, then SSTA can be done in a way that models node delays as uncorrelated. If, in addition, delay correlation due to reconvergence is ignored, the algorithm reduces to repeated application of the sum and max operations to uncorrelated normal random variables. This is the strategy adopted by the first practical SSTA algorithm, proposed by M. Berkelaar [160]. This work observed that the sum of two normal variables is distributed normally, but the max of two normal variables is not normal. One can, however, approximate the resulting random variable by a normal distribution. A common strategy is to ensure that the means and the variances of the exact and approximate max match. The mean and variance of the max can be computed analytically as shown in [158], and therefore the resulting algorithm exhibits good run-time. The moment-matching strategy is discussed at length in the following section.

If we choose to ignore delay correlations, then it is possible to handle even non-Gaussian random variables in a computationally efficient manner. Under the assumption of separable treatment of the contributions of inter- and intra-chip variation and of uncorrelated delay variability, the computations can be done efficiently if random arrival times are represented directly with piecewise linear cdfs and the delays are described by piecewise constant pdfs. Using this representation, max and summation can be efficiently implemented as operations on the linearized cdf and pdf [162]. If we consider a two-input gate with arrival times Ai and Aj, and two pin-to-pin delays Di and Dj, then the arrival time at the output is:

Ao = max(Ai + Di, Aj + Dj)    (10.6)
The pdf of Ai + Di is given by the convolution of the pdf of Ai and the pdf of Di. It is more efficient, however, to work directly with the cdf of Ai + Di, which is given by the convolution of the pdf of Ai and the cdf of Di (PDFAi ⊗ CDFDi), or, alternatively, as CDFAi ⊗ PDFDi. Since the delays through the two timing arcs are assumed to be uncorrelated, the cdf of the output arrival time is given by the product of the cdfs of the two operands:

CDFout = (CDFAi ⊗ PDFDi)(CDFAj ⊗ PDFDj)    (10.7)
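A numerical version of Eqs. (10.6) and (10.7) can be sketched by representing each random quantity as a probability mass function on a shared time grid: the pmf of Ai + Di is a convolution, and the cdf of the max of independent operands is the product of their cdfs. The grid resolution and the Gaussian shapes below are arbitrary illustrative choices.

import numpy as np

STEP = 1.0                      # time-grid resolution (ps); grid starts at t = 0
t = np.arange(0, 400, STEP)

def gaussian_pmf(mean, sigma):
    p = np.exp(-0.5 * ((t - mean) / sigma) ** 2)
    return p / p.sum()          # discretized, normalized pmf on the grid

def max_of_two(arr_i, d_i, arr_j, d_j):
    # pmf of A + D is the convolution of the two pmfs (operands of Eq. 10.6)
    pmf_i = np.convolve(arr_i, d_i)[: len(t)]
    pmf_j = np.convolve(arr_j, d_j)[: len(t)]
    # cdf of the max of independent variables is the product of cdfs (Eq. 10.7)
    cdf_out = np.cumsum(pmf_i) * np.cumsum(pmf_j)
    return np.diff(cdf_out, prepend=0.0)      # back to a pmf

a_i, d_i = gaussian_pmf(80, 6), gaussian_pmf(40, 4)   # illustrative arrival/delay pmfs
a_j, d_j = gaussian_pmf(90, 5), gaussian_pmf(25, 3)
pmf_o = max_of_two(a_i, d_i, a_j, d_j)
print("mean of output arrival time:", round(float((t * pmf_o).sum()), 2))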
Structural correlations due to path reconvergence can be handled using common node identification. A dependency list is kept for each vertex of the graph during the forward propagation. The list enumerates all the vertices on which the arrival time at a given vertex depends. Common node removal is done in the following way:

Aout = max(Ar + D1 + Di, Ar + D2 + Dj)
     = Ar + max(D1 + Di, D2 + Dj)
CDFout = CDFr ⊗ [(CDFD1 ⊗ PDFi)(CDFD2 ⊗ PDFj)]′    (10.8)
The limitation of this technique is that the dependency list can become very large, significantly slowing the computation. This imposes serious constraints on the applicability of the technique to large problems.
Algorithms Based on Explicit Modeling of Delay as a Function of Process

The SSTA techniques considered above at least implicitly rely on a separate treatment of delay variability due to inter- and intra-chip variation. The only justifiable use of such techniques is to treat the inter-chip variability in a corner-based manner and use a dedicated statistical timing engine to account for intra-chip variability. The above techniques assume that timing dependencies arise only due to path reconvergence; they cannot handle delay dependencies due to spatially correlated intra-chip variation. Later work in SSTA departs from the assumption that variability can be treated in a separable manner, and seeks to treat both sources of variability simultaneously. These algorithms also rely on explicit rather than abstracted modeling of the delay dependence on process parameters (abstracted representations being, for example, histograms, or the cdf or pdf of delay). In recent work, delay is described by an explicit parametric function of the process parameters. A linear delay model is often assumed to be sufficient to describe the dependence of the pin-to-pin delay of a cell on the parameters in their typical range of variation [164][165][170]. It is possible that higher-order models may have to be used for larger ranges of parameter variations; however, the evidence for this remains inconclusive [166]. A linear model of delay on process parameters is, effectively, a first-order Taylor-series expansion of the delay function:

d = do + Σi=1..n (∂d/∂Pi) ΔPi    (10.9)
where there are n components of variation Pi, and ΔPi = Pi − Po = Pinter + Pintra is the departure from the nominal value Po of the ith component of variation.

Block-based algorithms propagate arrival times through the timing graph. We want a timing model that can represent both delays and arrival times in a single canonical form, such that the arrival time model remains invariant in the course of multiple applications of the add and max operators. This model must also allow capturing correlation due to path reconvergence and spatial correlation. It seems that to exactly capture structural correlation, path history must be stored. But we also want to represent random arrival times in a compact form, similar to a single number in deterministic STA. These requirements are conflicting, and the problem of structural correlations, i.e., capturing the history of signal propagation, is difficult to handle in block-based analysis. A trade-off between accuracy (keeping a path history) and run-time and data-volume efficiency (having a compact representation) seems to be unavoidable. Because the number of global components of variation is small, the main challenge comes from handling the intra-chip component of variation. This is
because it may add a unique random component of variation to each node delay. Each node thus contributes a share of random variation to the path delay, and since uncorrelated random variables are distinct random variables, storing history would require carrying a lot of terms through the graph. One solution is to lump the contributions of all sources of random variation into a single delay term and represent the arrival time (and delays) as:

ao + Σi=1..n ai ΔPi + an+1 ΔRa    (10.10)
where ΔPi = Pi − Po is the deviation from the mean value of the ith global variation component, ai is the sensitivity of the gate (interconnect) delay to the global parameter Pi, ΔRa is a standard normal variable, and an+1 is the scaling coefficient that represents the magnitude of the lumped random intra-chip component of delay variation. Assigning the overall variance to one term of the canonical expression has tremendous computational advantages. But collapsing the random contributions of nodes into a single term makes it difficult to capture correlation between paths that share history. If a common node is shared between two paths, then its random contribution has to be captured, and that cannot be done precisely if the canonical model contains only one random term. Consider the example shown in Figure 10.5, in which the global parameters of variation are ignored for simplicity. A random term coefficient ai is associated with each edge of the graph. Then, under the lumped model, A = a1 Ra, B = (a1² + a2²)^1/2 Rb, and C = (a1² + a3²)^1/2 Rc, so that Cov(B, C) = 0. However, the correct computation, which keeps a separate random component corresponding to each timing edge, would give Cov(a1 Ra + a2 Rb, a1 Ra + a3 Rc) = a1².
Fig. 10.5. A simple graph to illustrate the challenge of capturing delay correlation due to paths having common history.
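The covariance lost by lumping can be verified numerically (a sketch with arbitrary edge coefficients):

import numpy as np

a1, a2, a3 = 0.7, 0.5, 0.4
rng = np.random.default_rng(3)
n = 1_000_000

ra, rb, rc = rng.standard_normal((3, n))

# Exact model: B and C share the random contribution of edge 1
B_exact = a1 * ra + a2 * rb
C_exact = a1 * ra + a3 * rc
print("exact Cov(B, C) :", round(float(np.cov(B_exact, C_exact)[0, 1]), 3))  # ~ a1**2 = 0.49

# Lumped canonical model: each arrival time gets a single fresh random term
B_lumped = np.hypot(a1, a2) * rng.standard_normal(n)
C_lumped = np.hypot(a1, a3) * rng.standard_normal(n)
print("lumped Cov(B, C):", round(float(np.cov(B_lumped, C_lumped)[0, 1]), 3))  # ~ 0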
The more general canonical delay expression that can accurately model path correlation due to random delays of shared edges contains a term for each node in the graph [167]:

A = ao + Σi=1..n ai ΔPi + Σj=1..M aj ΔRj    (10.11)
where M is the number of nodes in the timing graph, ΔPi is the ith global source of variation, and ΔRj is the random intra-chip component of delay variation contributed by timing edge j. This expression permits capturing the history of the signal: if a path does not pass through a node, the corresponding coefficient aj is zero. Thus, in contrast to the canonical form of [164], such a representation allows correlations due to sharing of nodes to be captured. The complexity of handling this timing model may be overwhelming, however, because the number of nodes (M) may be too high. We will later discuss an approach based on the application of the dimensionality-reduction technique PCA to reduce the complexity of this analysis.

The state-of-the-art algorithmic techniques tend to choose efficiency over accuracy and ignore the impact of path reconvergence by relying on the canonical delay model. Assuming that process variations are Gaussian, this model permits efficient operations of addition and subtraction. Most importantly, the result can be captured and propagated in canonical form. This model permits a seamless representation of the correlation between the delays of any two nodes due to the impact of inter-chip variation. Consider two signals:

A = ao + Σi=1..n ai ΔPi + an+1 ΔRa
B = bo + Σi=1..n bi ΔPi + bn+1 ΔRb    (10.12)
Their sum C = A + B can be represented straightforwardly by:

C = (ao + bo) + Σi=1..n (ai + bi) ΔPi + cn+1 ΔRc    (10.13)
with ΔRc ∼ N(0, 1) and cn+1 = (an+1² + bn+1²)^1/2. Since the maximum of two normal variables, Z = max(A, B), is not distributed normally, an approximation is employed to map the result onto the canonical form. One challenge is how to ensure that the new timing quantity Z accurately represents the built-in correlation with A and B. Mapping Z onto a canonical model, e.g. C, permits preserving the correlation between C and any other edge in the graph, which is essential for future operations. The approximation is based on matching the first two moments of the exact distribution of Z, as well as its covariance with each inter-chip component of process variation, to those of a normal random variable. The mean and variance of the maximum of two correlated normal random variables can be computed analytically using Clark's formulas [158]. Then, the computed mean and variance are matched to a result represented in canonical form.

Both the variances of A and B, σA² and σB², and their correlation, ρ, can be computed from their canonical delay models. Let φ(x) = (1/√2π) exp(−x²/2) be the normal pdf, Φ(t) = ∫−∞..t φ(x) dx be the normal cdf, and θ ≡ (σA² + σB² − 2ρσAσB)^1/2.
The value of the tightness probability P(A > B) plays a key role in the derivations that follow and can be compactly computed as

TA = P(A > B) = Φ((ao − bo)/θ)    (10.14)

The mean and variance of max(A, B) can be expressed as functions of the tightness probability:

E(max(A, B)) = ao TA + bo (1 − TA) + θ φ((ao − bo)/θ)    (10.15)

Var(max(A, B)) = (σA² + ao²) TA + (σB² + bo²)(1 − TA) + (ao + bo) θ φ((ao − bo)/θ) − [E(max(A, B))]²    (10.16)
It can be shown that a suitable approximation that matches the first two moments is based on a linear combination of the "properties" of A and B, but also requires an additional independent variable Δ for variance matching:

Z = max(A, B) ≈ C = TA·A + (1 − TA)·B + Δ    (10.17)
subject to (1) E(max(A, B)) = E(C) and (2) Var(max(A, B)) = Var(C). The above implies that the leading coefficients of the canonical form are:

co = TA ao + (1 − TA) bo + θ φ((ao − bo)/θ)    (10.18)

ci = TA ai + (1 − TA) bi   for i = 1...n    (10.19)
n
ci ∆Pi )
(10.20)
i=1
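To make the add and max operations concrete, the following Python sketch (not from the text) implements them for a small canonical-form container. The class name, the use of NumPy/SciPy, and the numeric values in the example are illustrative assumptions.

import numpy as np
from scipy.stats import norm

class CanonicalDelay:
    """Canonical form: d = a0 + sum_i a[i]*dP_i + a_r*dR, with dP_i, dR ~ N(0,1)."""
    def __init__(self, a0, a, a_r):
        self.a0 = float(a0)              # nominal (mean) delay
        self.a = np.asarray(a, float)    # sensitivities to the global sources dP_i
        self.a_r = float(a_r)            # coefficient of the independent random term

    def var(self):
        return float(np.dot(self.a, self.a) + self.a_r ** 2)

def add(A, B):
    """Sum of two canonical forms (Eq. 10.13): global terms add, random terms add in RMS."""
    return CanonicalDelay(A.a0 + B.a0, A.a + B.a, np.hypot(A.a_r, B.a_r))

def cmax(A, B):
    """Clark's moment-matched approximation of max(A, B) in canonical form (Eqs. 10.14-10.20)."""
    var_a, var_b = A.var(), B.var()
    cov_ab = float(np.dot(A.a, B.a))             # random terms are independent
    theta = np.sqrt(max(var_a + var_b - 2.0 * cov_ab, 1e-12))
    alpha = (A.a0 - B.a0) / theta
    T = norm.cdf(alpha)                          # tightness probability P(A > B), Eq. 10.14
    phi = norm.pdf(alpha)
    mean = T * A.a0 + (1.0 - T) * B.a0 + theta * phi                     # Eq. 10.15
    var = ((var_a + A.a0 ** 2) * T + (var_b + B.a0 ** 2) * (1.0 - T)
           + (A.a0 + B.a0) * theta * phi - mean ** 2)                    # Eq. 10.16
    c = T * A.a + (1.0 - T) * B.a                # Eq. 10.19: global sensitivities
    c_r2 = max(var - float(np.dot(c, c)), 0.0)   # Eq. 10.20: leftover variance -> random term
    return CanonicalDelay(mean, c, np.sqrt(c_r2))

# Example: two correlated arrival times sharing the global sources dP1, dP2 (made-up numbers)
A = CanonicalDelay(100.0, [5.0, 2.0], 3.0)
B = CanonicalDelay(98.0, [4.0, 3.0], 2.0)
Z = cmax(A, B)
print(Z.a0, np.sqrt(Z.var()))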
In the prior discussion, it was assumed that the random terms of the canonical delay model are completely uncorrelated. An important consideration to be taken into account is that the random components of node and arrival time delays may be correlated spatially, as a result of the gates being in close proximity. It is known that some sources of variation may exhibit spatial correlation; for example, transistor gate length shows spatial patterns of variation due to lens aberrations. One way to model spatial dependence is by introducing an explicit continuous correlation function:

cor(∆P_intra^i, ∆P_intra^j) = f(x_i − x_j, y_i − y_j)      (10.21)
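As an illustration, one possible continuous correlation function is an isotropic exponential decay with distance; the functional form and the correlation length in this Python sketch are assumptions for illustration, not values prescribed by the text.

import numpy as np

def spatial_corr(xy_i, xy_j, corr_length=200.0):
    """Illustrative isotropic f(xi - xj, yi - yj): exponential decay with distance.
    corr_length (um) is an assumed fitting parameter."""
    d = np.hypot(xy_i[0] - xy_j[0], xy_i[1] - xy_j[1])
    return np.exp(-d / corr_length)

# Correlation matrix of intra-chip Lgate variation for a few gate locations (um)
gates = np.array([[0.0, 0.0], [50.0, 0.0], [300.0, 400.0]])
R = np.array([[spatial_corr(gi, gj) for gj in gates] for gi in gates])
print(np.round(R, 3))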
Such continuous correlation functions are widely used in other areas, such as geophysics. An alternative is to introduce a correlation function defined
on a grid that assumes perfect correlation (ρ = 1) for gates belonging to a certain locality (one grid-cell), high correlation for gates in the neighboring grid-cells, and a decreasing degree of correlation for gates belonging to more distant grid-cells. This can, for example, be implemented using a simple rectangular grid [151]. A covariance matrix captures the parameter covariance between the grid-cells. Another alternative is to capture correlation between two parameters implicitly via an additive model. In this case, correlation can be captured by establishing a hierarchy of scales of variation ("grid model"). The grid model decomposes the overall intra-chip variability into a set of variables [152][168]. Spatial correlation between intra-chip random components of variation increases the computational complexity of evaluating the statistical add and max operators described above. Working with uncorrelated parameters is preferred. It is possible, however, to use principal component analysis (PCA) to transform the original space into a space of uncorrelated principal components. PCA is a technique from multivariate statistics that uses a rotation of the axes of a multidimensional space in a way that makes the variation projected on the new set of axes behave in an uncorrelated fashion. The computational techniques for performing PCA are available as standard routines in statistical packages, for example, S-Plus. Operating with uncorrelated parameters is advantageous because the calculation of (i) the variance of a canonical delay expression and (ii) the covariance between the delays of two canonical delay expressions can now be performed by efficient dot-product vector operations. The covariance calculations between paths can also be computed by simple dot products [152]. It is also known that if a multivariate random distribution exhibits strong correlations, then the effective dimensionality of this distribution can be reduced through PCA. To illustrate this transformation, let us suppose that the random contribution to gate delay, a_i ∆R_i, is only due to intra-chip L_gate variation, so that ∆R_i ≡ L_i. Now let L_g = {L_1, ..., L_M} be a vector of L_gate variations for each gate in the timing graph, and let Σ be its covariance matrix. If there is spatial correlation, then Σ is not diagonal and PCA will result in dimensionality reduction. As a result, the vector L_g can be expressed as a linear combination of a smaller set of independent components:

L_g^i = b_{i1} p_1 + ... + b_{ik} p_k      (10.22)
where p_i ∼ N(0, 1) are the principal components and b_{ij} are the coefficients computed by the PCA algorithm. If the original data was correlated, then M ≥ k. Thus, an additional advantage of using PCA is that it reduces the number of independent sources of variation that need to be carried through the timing graph during the computation. The presence of spatial correlation may actually help in reducing the memory complexity of representing the random delay contributions from individual nodes needed for properly modeling path sharing. Thus it may give more flexibility in using the extended canonical model of [167], which explicitly includes as many terms as the number of random delay components in the timing graph. Note that if the parameters are not spatially correlated, then dimensionality reduction does not occur.
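A minimal sketch of this dimensionality reduction, assuming a made-up covariance matrix for the intra-chip L_gate variation of four gates and using a plain eigendecomposition in place of a statistical package:

import numpy as np

# Assumed covariance of intra-chip Lgate variation for M = 4 gates (strong spatial correlation)
Sigma = np.array([[1.0, 0.9, 0.8, 0.7],
                  [0.9, 1.0, 0.9, 0.8],
                  [0.8, 0.9, 1.0, 0.9],
                  [0.7, 0.8, 0.9, 1.0]])

# PCA = eigendecomposition of the covariance matrix, largest eigenvalues first
eigvals, eigvecs = np.linalg.eigh(Sigma)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Keep the k components explaining, say, 99% of the total variance
explained = np.cumsum(eigvals) / np.sum(eigvals)
k = int(np.searchsorted(explained, 0.99) + 1)

# Coefficients b such that Lg ~ B @ p with p ~ N(0, I_k)  (cf. Eq. 10.22)
B = eigvecs[:, :k] * np.sqrt(eigvals[:k])
print(k, "components retained out of", Sigma.shape[0])
print(np.round(B @ B.T, 3))   # approximately reproduces Sigma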
10.3.3 Path-Based Timing Algorithms

Another class of algorithms for SSTA is based on path enumeration and the construction of the circuit delay distribution based on the distributions of paths and their interdependencies. While block-based approaches are more computationally efficient and are capable of incremental analysis, path-based algorithms also have a number of advantages. First, it is much easier to handle accurately the delay dependency on the slope of the driving signal, and to propagate the statistical slope information within path-based methodologies. Second, there is no need to use an approximation to the max operator required in block-based analysis. Third, delay correlations introduced from both path sharing and spatial parameter correlations can be taken into account. Path variance and path-to-path covariance can be directly evaluated by path tracing. Fourth, designers are often more comfortable with circuit debugging based on the specification of individual paths, and in the design of high-performance custom chips, path-based design methodologies are widely used. Path-based analysis does not imply the analysis of a single path or a one-path-at-a-time analysis. The objective of the path-based SSTA algorithm is to find the distribution of the maximum of delays of a set of paths. Ensuring that the probability of each path violating the constraint is acceptably small does not guarantee that the entire circuit will meet its constraints with the same low probability, since in general [169]:

P{max(D_1...D_N) > t} > max_{i∈Paths} P{D_i > t}      (10.23)
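A toy Monte Carlo experiment (with made-up numbers) illustrates the inequality: even when every individual path meets the constraint with 99% probability, the probability that the maximum of many partially correlated path delays violates it is far larger.

import numpy as np

rng = np.random.default_rng(0)
n_paths, n_samples, rho = 50, 100_000, 0.5
# Equally correlated Gaussian path delays generated through a shared global component
g = rng.standard_normal(n_samples)
D = np.sqrt(rho) * g[:, None] + np.sqrt(1 - rho) * rng.standard_normal((n_samples, n_paths))
t = np.quantile(D[:, 0], 0.99)                    # t chosen so P(D_i > t) ~ 1% per path
print("max_i P(D_i > t) ~", (D > t).mean(axis=0).max())
print("P(max_i D_i > t)  =", (D.max(axis=1) > t).mean())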
Thus, a set of paths must be simultaneously considered to derive the overall circuit delay distribution. The complete description of a set of path delays is given by the cumulative probability function of max{D1 . . . DN }: F (t) = P {max{D1 ...DN } ≤ t} = P {D1 ≤ t, D2 ≤ t, . . . , DN ≤ t}
(10.24)
where Di is the delay of the ith path in the circuit and F(t) is the cumulative probability function defined over the circuit delay probability space. The cumulative probability function can theoretically be computed by direct integration:

F(t) = ∫_{−∞}^{t} ... ∫_{−∞}^{t} f(D_1, D_2, ..., D_N) dD_1 dD_2 ... dD_N      (10.25)
where f (D1 , D2 , ...DN ) is the joint probability density function (jpdf ) of {D1 , ..., DN }. Unfortunately, the direct evaluation of an N -dimensional integral for an arbitrary f (D1 , D2 , ...DN ) is extremely difficult for large N . Given
that it is impractical to solve the above integral directly for large N, we are faced with the task of finding the distribution of max{D_1...D_N} by some other means. (Monte Carlo methods discussed later in the chapter provide a different way to evaluate the above integral.) The disadvantage of path-based methods is that in the worst case, the number of paths in a general circuit (which is a DAG) can be exponential in the number of nodes. While this worst-case behavior can occur in arithmetic circuits (e.g. multipliers), in many cases the number of paths is quite manageable [166], and path-based methods may behave reasonably well. We can identify two classes of path-based algorithms depending on their treatment of paths. The first group of algorithms begins by extracting a set of top K critical paths using a deterministic STA. The selection of paths is based on the nominal delay. The statistical analysis is then applied on this subset of paths [190]. The second class of algorithms includes all paths in a fully statistical analysis flow. The obvious challenge is to statistically handle an enormous number of paths (millions) [166]. Here we concentrate on the algorithms that statistically analyze a manageable subset of paths.

Computing Distributions of Individual Path Delays

Path-based SSTA is effectively statistical analysis of a random vector of path delays. While the full description is given by the joint pdf, the analysis begins by defining the individual (or marginal) distributions of each path. The early work in path-based SSTA concentrated on describing distributions of individual paths. For tractability of analysis, it assumed stochastic independence of path delays. In order to arrive at the circuit delay distribution, an argument would be made that ensuring that a path meets timing constraints with a certain likelihood gives a specific probability that the whole circuit will meet its timing constraints. For example, if each path meets the constraints with probability α, P{D_1 ≤ t} = α, and all N paths are assumed to be independent, then the circuit yield is given by P{∩_i (D_i ≤ t)} = α^N. This model obviously ignores the important fact of path sharing and spatial intra-chip parameter correlation which, in reality, lead to significant correlation between path delays. One must, therefore, be very careful in drawing any conclusions that are based on this model. The distribution of a single path delay can be estimated using a linear gate delay model [170]:

D = D_o + Σ_{i=1}^{n} (∂D/∂P_i) ∆P_i      (10.26)
where ∂D/∂Pi is the composite path delay sensitivity to a change in the parameter of variation ∆Pi and n is the number of parameters being considered. The path delay sensitivity coefficient can be computed by summing up the
contributions of each cell, including the dependency of the input slew on the parameter variation. Let m_j be the function capturing the input slew for cell j and ∂d_j/∂P_i be the sensitivity of cell j to parameter i; the aggregate path delay sensitivity for a path containing M cells can be computed as [170]:

∂D/∂P_i = Σ_{j=1}^{M} [ ∂d_j/∂P_i + (∂d_j/∂m_j)(∂m_j/∂P_i) + Σ_{k=3}^{j} (∂d_j/∂m_j) (Π_{l=k}^{j} ∂m_l/∂m_{l−1}) (∂m_{k−1}/∂P_i) ]      (10.27)
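The accumulation in (10.27) can also be carried out stage by stage rather than in closed form. The Python sketch below (not from the text; all sensitivity arrays are hypothetical) propagates the total slew sensitivity along the path with the chain rule and adds each cell's direct and slew-mediated delay contributions.

import numpy as np

def path_delay_sensitivity(dd_dP, dd_dm, dm_dP, dm_dmprev):
    """
    Total sensitivity dD/dP of a path of M cells to a single parameter, accumulating the
    slew-propagation terms recursively.
    dd_dP[j]     : direct sensitivity of cell j's delay to the parameter
    dd_dm[j]     : sensitivity of cell j's delay to its input slew m_j
    dm_dP[j]     : direct sensitivity of slew m_j to the parameter
    dm_dmprev[j] : sensitivity of slew m_j to the previous slew m_{j-1}
    """
    total = 0.0
    dm_total = 0.0                                       # total derivative of the current slew w.r.t. P
    for j in range(len(dd_dP)):
        dm_total = dm_dP[j] + dm_dmprev[j] * dm_total    # slew sensitivity chains along the path
        total += dd_dP[j] + dd_dm[j] * dm_total          # delay: direct term + term via the input slew
    return total

# Made-up numbers for a 3-cell path
print(path_delay_sensitivity(dd_dP=[2.0, 1.5, 1.8],
                             dd_dm=[0.3, 0.25, 0.2],
                             dm_dP=[0.5, 0.4, 0.45],
                             dm_dmprev=[0.0, 0.6, 0.5]))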
Using the linear expansion, the distribution of the path delay can be easily computed. Generally ∆P_i includes both intra-chip and inter-chip components of variation. Given that the path can be spatially spread out across the chip, we have distinct values of the intra-chip component of variation. As an example, let us assume that only L_gate variability is being considered. Then, the set of distinct process parameters ∆P_i includes a single L_inter and an L_i for each gate on the path: P_i = {L_inter, L_1, ..., L_M}. If we ignore spatial correlation, the intra-chip components are independent of each other. Assuming that the process parameters are independent and normal, the distribution is:

D ∼ N( D_o, ( Σ_{i=1}^{M+1} (∂D/∂P_i)^2 σ_{P_i}^2 )^{1/2} )      (10.28)

Using the assumption of path independence is an oversimplification which inevitably ignores important correlations between path delays and their impact on the distribution of circuit delay. It is necessary to consider a set of stochastically correlated near-critical paths as simultaneously determining the distribution of D_max. Because the paths are neither independent nor perfectly correlated, a sophisticated analysis is required to evaluate the circuit delay distribution. The path covariance matrix can be computed on the basis of pair-wise gate delay covariances:

Cov{D_i, D_j} = Σ_{k∈G_i} Σ_{l∈G_j} cov(d_k, d_l)      (10.29)
where G_i (G_j) is the set of gates along path i (j), and D_i, D_j are the delays of paths i and j, respectively. Similarly to the earlier analysis, this equation can also be modified to account for the effect of gate delay dependence on the output slew of the previous gate within the path using the chain rule. There is a significant cost to this direct computation of the path-delay covariance matrix, since this operation is O(N^2) in the number of paths N. Because of the additive effect of intra- and inter-chip components of variation, even when we have spatially uncorrelated intra-chip components, the overall gate length (L_inter + L_intra^i) is correlated with (L_inter + L_intra^j), where i and j are gates on a path. Principal component analysis can be used to speed up the computation of path covariances by de-correlating process parameter
vectors. (It should be pointed out that when the number of variables is large, principal component analysis may become prohibitively expensive because of the cost of computing the eigenvalue decomposition of large matrices.) PCA represents the delay of every node as a linear function of the principal components. The delay of a path is then also a linear function of the principal components. Instead of being dependent on M process parameters, path delays are now functions of m principal components (m << M):

D_i = D̄_i + k_{i1} p_1 + ... + k_{im} p_m
D_j = D̄_j + k_{j1} p_1 + ... + k_{jm} p_m      (10.30)
In these expressions, D̄_i (D̄_j) are the mean path delays, and k_{il} is the coefficient of the principal component p_l for path i; it is the sum of the coefficients for this principal component at each node on the path. Because all the components p_l are orthogonal to each other, the path-to-path covariance simplifies to a simple dot product of the vectors of coefficients. The covariance between two paths i and j can then be computed as:

cov(D_i, D_j) = Σ_{l=1}^{m} k_{il} k_{jl} = K_i · K_j      (10.31)
where K_i, K_j are the vectors of coefficients of the principal components for paths i and j. An essential advantage is that, to compute the entire covariance matrix, every path has to be traversed only once to compute its vector of coefficients; its covariance with any other path is then computed directly. We now have a complete covariance matrix of the path delay vector. However, reaching the objective of evaluating the distribution of the normal multivariate vector with a non-trivial covariance matrix is still extremely difficult. The exact distribution of max{D_1...D_N} is intractable for an arbitrary dependence structure. The exact path criticality probabilities would also be intractable for the same reason [169]. Even if all node delays are assumed independent, the structural correlations due to path sharing would render this problem intractable.

Bounding Circuit Delay Distribution in Path-Based Timing Analysis

Given the intractability of the exact solution, seeking bounds on the distribution of max{D_1...D_N} is a natural answer. The bounds enable greater flexibility in modeling complex sources of uncertainty, and are consistent with the existing timing paradigm, i.e. the worst-case timing is also a bound, albeit a poor one. An early technique for computing the bounds relied on solving a convex optimization problem [169]. This approach takes the given marginal distributions of individual paths and assumes that the correlation structure is
not even known. It bypasses the need for knowing the exact correlation structure by finding the worst-case distribution of P{max(D_1...D_N) > t} over all possible correlation structures. The technique, which relies on solving a convex optimization problem, is computationally expensive and is also likely to be overly pessimistic. Recent work has utilized the theory of majorization and the theory of stochastic Gaussian processes to produce bounds on the cdf of max{D_1...D_N} [176]. The analysis is carried out in terms of the Gaussian path delay vector, D. The cdf of a Gaussian vector with an arbitrary mean vector and covariance matrix can be expressed in terms of the distribution of a standard multivariate normal vector with an arbitrary correlation matrix:

P_Σ( ∩ {D_i ≤ t} ) = P_Σ'( ∩ {Z_i ≤ t'_i} )
where t'_i = (t − µ_{D_i})/σ_{D_i} and Z_i ∼ N(0, 1). Note that the vector t' that determines the set over which the probability content is being evaluated is not equicoordinate, i.e. the components of the vector are not equal (t'_i ≠ const). Also note that the correlation matrix Σ' that characterizes the path delay vector is populated arbitrarily, i.e. it has no special structure. Both of these factors make the immediate numerical evaluation of the above probability impossible. To enable numerical evaluation, a set of transformations can be performed. These transformations will lead to the bounding of the circuit delay probability by probabilities expressed in the form of equicoordinate vectors with well-structured correlation matrices. Multivariate Gaussian distributions have the unique property that their probabilities are monotonic with respect to the correlation matrix. A specific result, known as the Slepian inequality, says that by increasing the correlation between the members of a Gaussian vector, their probability content over the sets of interest increases [170][171]. Specifically, let X be distributed as N(0, Σ), where Σ is a correlation matrix. Let R = (ρ_ij) and T = (τ_ij) be two positive semidefinite correlation matrices. If ρ_ij ≥ τ_ij holds for all i, j, then

P_{Σ=R}( ∩_{i=1}^{N} {X_i ≤ a_i} ) ≥ P_{Σ=T}( ∩_{i=1}^{N} {X_i ≤ a_i} )      (10.32)
holds for all a = (a_1, ..., a_N)^T. For example, consider a vector X with N = 4, and the two correlation matrices given by

R = [ 1    0.9  0.9  0.8 ]        T = [ 1    0.8  0.9  0.8 ]
    [ 0.9  1    0.8  0.7 ]            [ 0.8  1    0.8  0.7 ]
    [ 0.9  0.8  1    0.8 ]            [ 0.9  0.8  1    0.8 ]
    [ 0.8  0.7  0.8  1   ]            [ 0.8  0.7  0.8  1   ]

The only difference between the two correlation matrices is in the value of the correlation coefficient ρ_12 (ρ_21). The inequality above then ensures:

P_{Σ=R}[ ∩_{i=1}^{4} {X_i ≤ a_i} ] ≥ P_{Σ=T}[ ∩_{i=1}^{4} {X_i ≤ a_i} ]      (10.33)
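The inequality for this example can be checked numerically; the Python sketch below assumes SciPy's multivariate normal CDF and an arbitrary, made-up coordinate vector a.

import numpy as np
from scipy.stats import multivariate_normal

R = np.array([[1.0, 0.9, 0.9, 0.8],
              [0.9, 1.0, 0.8, 0.7],
              [0.9, 0.8, 1.0, 0.8],
              [0.8, 0.7, 0.8, 1.0]])
T = R.copy()
T[0, 1] = T[1, 0] = 0.8                     # only rho_12 differs

a = np.array([1.0, 1.2, 0.8, 1.5])          # an arbitrary coordinate vector
p_R = multivariate_normal(mean=np.zeros(4), cov=R).cdf(a)
p_T = multivariate_normal(mean=np.zeros(4), cov=T).cdf(a)
print(p_R, p_T, p_R >= p_T)                 # Slepian: larger correlation -> larger probability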
The above transformation expressed the circuit delay probability in terms of correlation matrices with identical off-diagonal elements. Still, they require evaluating the probability content of a standard multi-normal vector over a non-equicoordinate set, which is numerically expensive. To enable a more efficient numerical evaluation of these probabilities, we can use expressions in terms of the equicoordinate probability. That can be achieved by comparing (establishing a partial ordering of) the mean and variance vectors, using the techniques of the theory of majorization. The notions of strong and weak majorization can be used to compare random variables and their distributions [170]. Let us introduce several useful concepts. The components of a real vector a = (a_1, ..., a_N)' can be arranged in the order of decreasing magnitude a_[1] ≥ ... ≥ a_[N]. A relationship of majorization is defined between two real vectors, a and b. We say that a majorizes b, in symbols a ≻ b, if Σ_{i=1}^{N} a_i = Σ_{i=1}^{N} b_i and Σ_{i=1}^{r} a_[i] ≥ Σ_{i=1}^{r} b_[i] for r = 1, ..., N. If only the second of the previous conditions is satisfied, then we resort to the notion of weak majorization. We say that a weakly majorizes b, in symbols a ≻≻ b, if Σ_{i=1}^{r} a_[i] ≥ Σ_{i=1}^{r} b_[i] for r = 1, ..., N − 1.
The theory of stochastic majorization establishes that for certain distributions stochastic inequalities can be established on the basis of ordinary deterministic majorization. Specifically, if t = (t_1, ..., t_N) and t̃ = (t̄, ..., t̄), where t̄ = (1/N) Σ_{i=1}^{N} t_i, then the following is true:

t ≻ t̃  →  P( ∩ {Z_i ≤ t_i} ) ≤ P( ∩ {Z_i ≤ t̄} )      (10.34)
which establishes an upper bound on the circuit delay probability. We are also interested in a lower bound on the probability distribution of the path delay vector. In this case, we need to resort to weak majorization. For t = (t_1, ..., t_N), t_min = min(t_1, ..., t_N) and t̃_min = (t_min, ..., t_min), the following holds:

t ≻≻ t̃_min  →  P( ∩ {Z_i ≤ t_min} ) ≤ P( ∩ {Z_i ≤ t_i} )      (10.35)
We have finally bounded the original cumulative probability by cumulative probabilities expressed in terms of an equicoordinate vector, a correlation matrix with identical off-diagonal elements, and the standard multivariate normal vector:

P_{Σmin}( ∩ {Z_i ≤ t_min} ) ≤ P( max{D_1...D_N} ≤ t ) ≤ P_{Σmax}( ∩ {Z_i ≤ t̄} )      (10.36)

This is a well-structured object whose probability content is amenable to numerical evaluation. The numerical evaluation can be done by using pre-generated look-up tables for a wide range of dimensionalities, coordinates, and
correlation coefficients of the correlation matrix with identical off-diagonal elements. An example of the bounds on the true distribution of delay for a benchmark circuit and the comparison with the Monte Carlo-generated distribution is shown in Figure 10.6.
Fig. 10.6. The exact distribution of the delay of a combinational circuit produced by Monte Carlo simulation and the bounds based on majorization.
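When the correlation matrix has identical non-negative off-diagonal elements, the equicoordinate probabilities appearing in (10.36) can also be evaluated through a one-dimensional integral based on a single shared factor, instead of look-up tables. The Python sketch below assumes this one-factor identity and uses illustrative values of t, N, and ρ.

import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

def equicorrelated_orthant(t, n, rho):
    """P(Z_1 <= t, ..., Z_n <= t) for standard normals with common correlation rho >= 0.
    Uses Z_i = sqrt(rho)*U + sqrt(1-rho)*E_i to reduce the n-dimensional probability
    to a one-dimensional integral over the shared factor U."""
    f = lambda u: norm.pdf(u) * norm.cdf((t - np.sqrt(rho) * u) / np.sqrt(1.0 - rho)) ** n
    val, _ = quad(f, -10.0, 10.0)
    return val

# Example: an equicoordinate bound P(intersection {Z_i <= 2.0}) for 500 paths, rho = 0.6
print(equicorrelated_orthant(2.0, n=500, rho=0.6))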
10.3.4 Parameter Space Techniques

The statistical STA techniques described in the previous sections extend the basic ideas and terminology of STA. In evaluating timing yield they construct the distribution working directly with the notions of edge delays, arrival times, and slacks. They typically adopt a graph-theoretical view of the problem, effectively seeking to construct the distribution of the maximum delay through a probabilistic timing graph. In these techniques, the circuit constraints are defined in the performance domain, and the yield is given directly by the cdf of the maximum delay, P(max{D_1...D_N} ≤ t) = P(D_ckt ≤ t). This problem can be interpreted as integrating the joint probability density function of path delays over an N-dimensional cube of size t. The jpdf is computed from the process parameters but ultimately the analysis is in terms of path (node) delays. The timing constraints determine an acceptability region that, in the circuit domain, is typically a simple multidimensional cube with side t:
A_d = {D_ckt | t_min < D_ckt ≤ t} = ∩_i {D_i | t_min < D_i ≤ t}      (10.37)

Y = P(D_ckt ∈ A_d)      (10.38)

If the number of independent sources of variation is not large, an alternative approach is to map the constraints imposed at the circuit level to the process domain [203][206]. Statistical timing analysis techniques that are based on this view are known as parameter space techniques. The parameter space is a Cartesian space with axes corresponding to process parameters. The region of acceptability in the parameter space is the set of process realizations that will lead to a circuit meeting its constraints:

A_p = {p | t_min < D_ckt(p) ≤ t}      (10.39)
Fig. 10.7. Acceptability region in (a) the parameter space, and (b) in the performance space.
The acceptability regions in the circuit and process domains are linked by a mutual transformation. However, for a general non-linear dependency of a circuit function on process parameters, the acceptability region A_p is difficult to find, and, in general, is not convex. This general formulation has been widely used in the traditional parametric yield maximization literature [203][204][205][206] dealing with the analog design problem, where the functional mappings are highly non-linear. In statistical timing analysis, where a linear delay model is uniformly accepted, the transformation can be derived analytically. Let C be the topological path matrix, where the (i, j)th entry is 1 if path i contains edge j, and 0 otherwise. If there are N paths and m timing edges in the timing graph, then C is an N × m matrix. Let the edge delay d be a linear function of the process parameters p, and let p = p_o + ∆p, with p_o being the nominal value. Then, with S being the matrix of edge delay sensitivities, the vector of edge delays is d = d_o + S∆p. The vector of path delays is given by [172]:
D = Cd_o + CS∆p = D_o + R∆p      (10.40)
Now, the acceptability region in the performance domain can be mapped to the acceptability region in the process domain. Concentrating on the max delay computation, let A_d = {D | D ≤ t} and D_o + R∆p ≤ [t, ..., t]^T. Combining these two matrix expressions, we can re-express the timing inequalities in the space of process parameters as:

Rp ≤ [l_1, ..., l_k]^T      (10.41)
where [l_1, ..., l_k]^T = [t, ..., t]^T − D_o + Rp_o. This expression defines a linear system of inequalities in the parameter space. Each inequality is a hyperplane, and together all of the inequalities define a convex acceptability region:

A_p = {p | Rp ≤ [l_1, ..., l_k]^T}      (10.42)
Fig. 10.8. Feasibility region defined by hyperplanes as in [172].
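A small Python sketch of the parameter-space view, using a hypothetical two-parameter, three-path example: the polytope membership test of (10.42) and a sampling-based evaluation of the yield over the acceptability region. (Sampling is used here only for illustration; the integration techniques discussed below avoid it.)

import numpy as np

rng = np.random.default_rng(1)

# Hypothetical example: 3 paths, 2 global parameters, linear delay model D = Do + R*dp
Do = np.array([95.0, 92.0, 90.0])            # nominal path delays (ps), assumed
R = np.array([[4.0, 1.0],                    # path sensitivities to the two parameters
              [3.0, 2.0],
              [2.0, 3.0]])
t = 100.0
l = t - Do                                   # right-hand sides of R*dp <= l

def in_acceptance_region(dp):
    """Membership test for the convex polytope Ap (cf. Eq. 10.42)."""
    return np.all(R @ dp <= l)

# Yield = probability mass of the parameter jpdf inside Ap, estimated here by sampling
dp_samples = rng.multivariate_normal(mean=[0.0, 0.0], cov=np.eye(2), size=100_000)
yield_est = np.mean([in_acceptance_region(dp) for dp in dp_samples])
print("estimated timing yield:", yield_est)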
Now, if one is given the jpdf f(p) of the process parameters, the yield can be calculated in the parameter space as:

Y = P(p ∈ A_p) = ∫_{A_p} f(p) dp      (10.43)
The parameter space methods thus seek to evaluate the yield integral in an equivalent way. In contrast to the performance space methods, the dimensionality of the space that can be handled is typically much lower. Because intra-chip variation would introduce too many independent sources of variation, the process variables are the global sources of variation. This limits
the usefulness of this class of techniques. On the other hand, the acceptance region can be somewhat more complex, and more refined timing models can be used. Numerical integration in high-dimensional spaces is a difficult computational task. The famous "curse of dimensionality" means that the cost of computing, or approximating, the value of the integral grows exponentially with the number of dimensions [173]. If integration is over a special multidimensional region, such as a cube, sphere, or ellipsoid, then a variety of techniques exist. Integration over general multidimensional regions is significantly more difficult. Thus, integrating over a general polytope, such as one defined by the intersection of hyperplanes in the parameter space, is challenging. The common solution is to resort to approximate integration, by approximating the actual region of integration by one, or many, "nice" regions. The literature on general numerical integration is enormous. The use of specialized integration techniques for timing yield estimation has been explored. One technique relies on approximating the polytope by the maximum ellipsoid that can be inscribed into the acceptance region. Once this is done, standard integration techniques can be used to integrate the jpdf of the process parameters over this ellipsoid. The complexity of the algorithm for fitting the maximum ellipsoid is at least cubic in the number of constraints (e.g. the number of paths). Therefore, significant path pruning would have to be performed. The method requires the use of linear delay models but can handle arbitrary distributions of process parameters [174]. Another technique uses an algorithm for calculating the volume of a given polytope to compute the "weighted volume", which is, of course, the required integral [174]. This algorithm finds a lower bound on the volume by fitting a set of parallelepipeds to fill the volume of the polytope. Because of the convexity of the parallelepiped, ensuring that every vertex of the parallelepiped satisfies the linear constraints, which is easily checked, ensures that the entire volume of the parallelepiped is inside the acceptance region. If some vertices are outside, the parallelepipeds are recursively divided until all their vertices are inside. A weighted sum of the volumes of the parallelepipeds gives the value of the yield. The apparent attraction of such an approach is the flexibility in delay modeling, which can achieve a high degree of accuracy. This technique, however, has exponential complexity in the number of dimensions and linear complexity in the number of paths considered (which determines the number of hyperplanes) [174].

10.3.5 Monte Carlo SSTA

Despite the foreign name, the Monte Carlo technique is seemingly the most familiar way of performing statistical STA. In general, the Monte Carlo method is any method which relies on random sampling to find a solution. In its most straightforward implementation, SSTA via Monte Carlo generates random samples in the process domain and then performs N full methodology runs
(extraction, delay calculation, and timing flow) to produce the full circuit delay distribution. The Monte Carlo based methods have a number of attractive properties: they are considered very accurate, conceptually easy to implement, and are well understood by designers. At the same time, multiple disadvantages are often cited as proof of the inferiority of Monte Carlo methods. The major one is the computational cost. A theoretically sound estimate of N will be introduced below, but it is typically quite large, on the order of 5000-10000, if any reasonable level of accuracy is required. Clearly, performing 10^5 runs of the standard STA flow would be computationally prohibitive, and it is for that reason that the Monte Carlo based SSTA methods get bad press and are considered, almost by default, to be the worst possible approach. Other presumed deficiencies include a methodological problem of extracting paths to be analyzed by Monte Carlo and the limited role that it can play in circuit optimization, as it does not provide sensitivity information [154]. Thus, using Monte Carlo techniques in the circuit optimization loop is difficult, and this truly is a serious limitation. A careful look reveals a more nuanced reality of the pros and cons of Monte Carlo methods and the alternatives. A number of improvements to the basic methodology can be made to deal with the computational cost. Extraction and delay calculation can be done in a way that preserves the explicit dependence on the process variations and can be used to speed up the Monte Carlo flow [165]. The analysis does not have to include the entire timing graph but only the paths that have a high probability of being critical. Reducing the size of the timing graph will lower the cost of a single timing run. Theoretically, the number of paths is exponential in the number of nodes, in the worst case. However, in practice such situations rarely occur. As an example, a study of some typical designs has shown that the number of paths that had to be taken into account is about 1% of the number of cells in the design [154]. The problem of selecting the path set to be included in such a simulation is non-trivial. The challenge of the procedure is that it is difficult to know which paths to analyze, as different paths will be critical under different process conditions. (For the sake of this discussion, the term "critical" may refer to a path within a small deviation, e.g. 5%, from the longest path.) The selection of paths can be performed based on a mean delay criterion. For a path π_i with delay D(π_i), the maximum delay under nominal conditions D_max^nom, and the delay deviation ∆, the path set is

Π = {π_i | D_max^nom − D(π_i) ≤ ∆}      (10.44)
However, with a certain probability there will be paths that have much shorter mean delay but higher variance, and thus can become critical. An alternative approach would select paths based on the corner cases, with the assumption that the critical paths would reveal themselves when the circuit is analyzed at corners. The deterministic STA runs can be performed under several corner cases, e.g. min / max interconnect delay case, min / max cell delay, and the nominal. Then the union of such paths can be formed:
Π = ∪_j {π_i | D_max(P_j) − D(π_i | P_j) ≤ ∆}      (10.45)
where P_j are the corner parameter sets. Given a sufficiently large number of corner cases, the procedure can be quite effective. However, given the uncorrelated behavior of the different metal layers, the number of corners can, even under a simple model, be nearly 1000 [154]. Interestingly, the problem of path selection for Monte Carlo analysis can be efficiently solved by random sampling. Suppose that a given path can become critical only for a given value of the process parameter set, X_i. Now suppose that the probability of getting such an X_i is p. Then, if a single simulation with a random sample from X is performed, with probability p the given path will be critical. After repeating the random sampling experiment N times, the probability of such a path being critical at least once, which is the probability of finding the correct path, is:

P(critical at least once) = 1 − P_miss      (10.46)
where P_miss is the miss probability, i.e., the probability that the path is not critical in any of the N runs. This probability can be computed using the Binomial distribution, giving the probability of getting x "successes" in N trials:

B(x; N, p) = C(N, x) p^x (1 − p)^(N−x)      (10.47)

The probability of missing a potentially critical path that has likelihood p of manifesting itself in N runs of STA under the randomly sampled process parameters is:

P_miss = (1 − p)^N      (10.48)
For a given miss probability, we can compute the required number of runs:

N = ln(P_miss) / ln(1 − p)      (10.49)

Using this formula, it can be seen that the number of runs that needs to be performed to identify, with 99% probability, all the paths that have at least a 1% chance of being critical (i.e. a coverage probability of 99%) is 458. And performing only 100 simulations will, with 99.5% coverage probability, identify all the paths that have at least a 5% probability of ever being critical. Once the set of critical paths, Π, is selected, the full Monte Carlo analysis can be performed on it to find the distribution of D_ckt = max{D_1...D_N}. There exist further ways to speed up the analysis. That can be done by first constructing a response-surface model. Because a linear model may be sufficiently accurate [73], typically a fairly small number of runs is needed:

D_ckt(P) = D_ckt(P_o) + a^T ∆P      (10.50)
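A sketch of the response-surface-based Monte Carlo flow in Python: once (10.50) has been fitted, each sample reduces to a dot product. The nominal delay, sensitivities, and clock target below are invented for illustration.

import numpy as np

rng = np.random.default_rng(2)

# Hypothetical linear response-surface model fitted from a handful of STA runs:
# Dckt(P) = Dckt(Po) + a^T dP, with dP the normalized parameter deviations.
D_nominal = 500.0                        # ps, assumed
a = np.array([12.0, 7.0, 3.0, 1.5])      # assumed sensitivities to 4 global parameters

# Each Monte Carlo sample is now just a dot product instead of a full timing run
dP = rng.standard_normal((100_000, a.size))
D = D_nominal + dP @ a

t_clock = 530.0
print("timing yield P(Dckt <= t):", np.mean(D <= t_clock))
print("99th percentile of Dckt  :", np.quantile(D, 0.99))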
Once the response surface model is available, the value of delay for each random sample of P can be computed by a dot product, speeding up the computation. In Monte Carlo based estimation methods, a crucial question of interest is how many random simulations need to be performed to get an accurate estimate of the cumulative probability of circuit delay, i.e. the circuit timing yield. The Monte Carlo method typically is described as a way to estimate the probability of an event by sampling from a properly chosen random process. In the context of SSTA, the event is the meeting of timing constraints by a circuit whose behavior is impacted by a vector of random disturbances. Let D_ckt = max{D_1...D_N} be the circuit delay, and let it be a random function of the process parameters P. The event probability, i.e. timing yield, is expressed by

P(D_ckt ≤ t) = P(D_ckt ∈ (0, t])      (10.51)

The Monte Carlo method proceeds to draw N instances of the random vector P from the joint probability density function f(P). The delay is evaluated on each sample: D_ckt(i) = D_ckt(P_i). Let [175]

ξ_i = 1 if D_ckt(i) ≤ t,  ξ_i = 0 otherwise      (10.52)

Then, the sought probability is estimated by the frequency of the event:

p̂_t = (1/N) Σ_{i=1}^{N} ξ_i      (10.53)
This frequency of occurrence of the event D_ckt ∈ (0, t] is a random variable with expectation:

E(p̂_t) = (1/N) Σ_{i=1}^{N} E(ξ_i) = N p_t / N = p_t      (10.54)
where the expected value of the Bernoulli variable ξ_i is equal to p_t, the probability of the event D_ckt ∈ (0, t]. The variance of the estimate is:

Var(p̂_t) = (1/N^2) Σ_{i=1}^{N} Var(ξ_i) = N p_t (1 − p_t) / N^2 = p_t (1 − p_t) / N      (10.55)
What is the error |p̂_t − p_t| of this estimate? According to the Law of Large Numbers, the frequency p̂_t of occurrence of the event D_ckt ∈ (0, t] asymptotically approaches the probability of the event p_t. Formally, it is possible to find how many samples are needed to ensure that with a required high confidence of (1 − ε) a specific accuracy δ = |p̂_t − p_t| is reached. Using the Chebyshev inequality, we can show that:

δ ≤ ( p_t (1 − p_t) / (εN) )^{1/2}      (10.56)
For reasonably large N and p (i.e. for events whose probability is not excessively small, or alternatively, for experiments with pN >> 1), a better bound can be derived. Since p̂_t is a sum of a large number of random variables, by the Central Limit Theorem it can be described by a normal distribution. Then,

δ ≤ Φ^{-1}(1 − ε) ( p_t (1 − p_t) / N )^{1/2}      (10.57)

and the relative error is given by:

d = δ / p_t ≤ Φ^{-1}(1 − ε) ( (1 − p_t) / (p_t N) )^{1/2}      (10.58)
For example, the common three-sigma rule can be recovered by choosing ε = .003, Φ^{-1}(1 − ε) ≈ 3. The number of samples to achieve a specified accuracy can then be computed by:

N ≈ 9 (1 − p_t) / (p_t d^2)      (10.59)

Suppose the yield is to be estimated at a high quantile of p_t = .95, and the required accuracy is d = 0.01, i.e. within 1% of the true yield. Then N ∼ 6000. Thus, performing a Monte Carlo experiment with 6000 samples will, with a probability of 99.4%, produce an estimate of the yield that is no more than 1% off the true value of the timing yield. The accuracy of Monte Carlo analysis can be increased by increasing the number of samples. It is also possible to increase its efficiency by exploiting the fact that many of the samples generated by Monte Carlo are close to each other and therefore do not provide a proportionally increased amount of information. Instead, more effective sampling strategies can be used. These strategies, which generally come under the term variance reduction techniques [96], are discussed in more detail in the previous chapter.
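A one-line Python helper, under the three-sigma convention used above, for the sample-size estimate of (10.59); the printed value reproduces the order of magnitude of the example in the text.

import numpy as np

def mc_sample_size(p_t, d, z=3.0):
    """Samples needed so the relative yield error stays below d with roughly 'z-sigma'
    confidence: N ~ z^2 (1 - p_t) / (p_t d^2), i.e. Eq. (10.59) for z = 3."""
    return int(np.ceil(z * z * (1.0 - p_t) / (p_t * d * d)))

# The setting discussed above: yield near 0.95, 1% relative accuracy
print(mc_sample_size(p_t=0.95, d=0.01))   # a few thousand samples, same order as N ~ 6000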
10.4 STATISTICAL GATE LIBRARY CHARACTERIZATION

This section addresses the problem of capturing statistical gate delay information for SSTA. The challenges related to statistical circuit simulation of standard cells in circuit simulators, such as SPICE, are addressed in another chapter. In traditional STA, the timing information for every cell in the standard cell library is captured in terms of a 2-D look-up table or in terms of fitted polynomial functions. Both the table look-up method and the polynomial fitting represent the propagation time and the output transition time for each gate as functions of only two variables, the capacitive load and the input transition time. In industrial timers, separate tables are used to represent
timing information for rise and fall times, and for different input-output timing arcs, for multiple-input / single-output cells. Consider an individual logic gate with n inputs and one output. Let us begin with the simplifying assumption that one will model the input/output relation from each input to the output individually; this implies that one does not model the case where more than one input switches simultaneously. Such an assumption does not materially impact the overall presentation and, in fact, was the state-of-the-art in timing model generation until recently. Both the propagation time and the rise/fall times of each cell depend on the input slew and the capacitive load:

T_out = f(T_in, C_l)
T_t = f(T_in, C_l)      (10.60)
The tables (functions) are derived from the results of more detailed simulation, typically performed using a circuit level simulator such as SPICE. In addition to the information contained in the deterministic models described above, a statistical static cell-delay model should capture the cell delay variation in response to both inter-chip and intra-chip sources of variation. It also must do so in a way that is compatible with the statistical timing evaluation algorithms. The simplest possible such model would be:

T_out = f(T_in, C_l) + δ(T_in, C_l)
T_t = f(T_in, C_l) + δ(T_in, C_l)      (10.61)
where δ(T_in, C_l) ∼ N(0, σ^2) would represent the random delay variation component. Such a model would abstract away the process variability, and capture its impact on timing in a lumped way, at library characterization time. This could be done, for example, by performing a Monte Carlo simulation during library characterization to directly find the variance of the random term. Such an approach would be sufficient if used with the S-STA algorithm of [159], where only a lumped delay variance term is required from a library. This is clearly a very restricted model which does not provide an explicit link to the process parameters. That link is required for several reasons. First, a model should be general enough to be used not just with statistical timing analysis but with any type of variational timing analysis. In statistical modeling, the source of variation is by definition assumed to be stochastic, whereas a variational model may deal with systematic sources of process variation that are predictable. Second, the proper accounting for the different impacts of inter- and intra-chip parameter variation on delay does not seem to be possible with a lumped model. Third, the canonical timing representations that are becoming popular in the S-STA algorithmic community are formulated as explicitly dependent on the process-level sources of variation. In such models,
the statistical behavior of the delay appears as a function of random process parameters, not as a lumped random delay term. It is also difficult to account for correlation in delay elements when using the lumped model. The simplest desired model is of a form consistent with the canonical model:

T_out = f(T_in, C_l) + a^T(T_in, C_l) × ∆P = a_o + a^T ∆P = a_o + Σ_{i=1}^{n} a_i ∆P_i + a_{n+1} ∆R_a      (10.62)
where the vector a is the vector of sensitivities (derating coefficients) evaluated for the specific value of T_in and C_l. Derating coefficients are needed to describe the cell's delay and output slew as linear functions of a selected set of variations (processing parameters and environmental variations). The cell derating coefficients can be obtained by performing SPICE simulations of the cell using different values of the parameters and performing a linear regression between the cell delay or output slew and the selected parameters. The model above is sufficient if the within-gate variability due to intra-chip variability is negligible. With the increase of purely random components of variation (e.g. random dopant fluctuation), a more elaborate model is needed. For one, it should provide a compact way to account for random delay variation from gate to gate. But it should do that in a way that accounts for intra-gate uncorrelated variability. The simple model is still sufficient for the inverter because it has only one transistor on its pull-up and pull-down paths. In this case, the impact of intra-chip variation on delay is correctly represented by the last term a_{n+1} ∆R_a, where ∆R_a is a standard normal variable. In a gate with multiple transistors on a pull-down or pull-up path, such as a NAND gate, the random contributions of the individual transistors get averaged out in their contribution to the term a_{n+1} ∆R_a, which now has to be adjusted appropriately. Note that there are several options for determining the cell derating coefficients in terms of input slew and load capacitance using SPICE simulations and linear regression. The simplest method is to choose a single value of input slew and load capacitance, and perform all SPICE simulations at that point. A second, more advanced, and potentially more accurate method would be to perform the simulations and regressions for several values of input slew and load capacitance and get each derating coefficient as a function of input slew and load capacitance. The second option would continue by using the actual value of input slew and load capacitance (as given for each cell instance in the slack report) to calculate the derating coefficient for each instance of the cell. A third method, which may be more computationally demanding, would be to perform the simulations and regression fits for each value of input slew and load capacitance seen for all instances of the cell in the slack report. It is possible to mix the three methods on a given design to optimize accuracy versus speed tradeoffs.
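A Python sketch of the simplest characterization option (a single input slew and load point). The perturbed-parameter samples and delays are synthetic stand-ins for SPICE results; the derating coefficients are recovered with an ordinary least-squares fit.

import numpy as np

rng = np.random.default_rng(3)

# Hypothetical characterization data for one cell at a fixed (Tin, Cl) point: each row of dP
# is one simulation at perturbed, normalized process parameters (e.g. dL, dVth, dTox), and
# delays holds the corresponding simulated cell delays. Values below are synthetic.
dP = rng.uniform(-1.0, 1.0, size=(50, 3))
true_sens = np.array([6.0, 4.0, 1.0])                          # "unknown" sensitivities (ps per sigma)
delays = 100.0 + dP @ true_sens + rng.normal(0.0, 0.2, 50)     # stand-in simulator output + noise

# Linear regression of delay against the parameters gives a0 and the derating vector a
X = np.column_stack([np.ones(len(dP)), dP])
coef, *_ = np.linalg.lstsq(X, delays, rcond=None)
a0, a = coef[0], coef[1:]
print("nominal delay a0 =", round(a0, 2), " derating coefficients a =", np.round(a, 2))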
An essential component of all modern static timing verification flows is timing analysis of on-chip interconnect. An interconnect net is a linear system, and most techniques of interconnect analysis aim at finding a set of time constants of that linear system. Currently, most timing engines utilize model-order reduction techniques, such as PRIMA [177]. Because of the enormous complexity of full system analysis of industrial interconnect nets, which contain millions of RLC elements, model order reduction strategies are essential to modern interconnect analysis. As the discussion in the previous chapters has shown, interconnect parameters exhibit substantial variability, both chip-to-chip and intra-chip. Thus, enabling statistical interconnect timing analysis is important for a complete SSTA flow. The area of statistical interconnect analysis is currently being actively developed. However, a detailed mathematical discussion of the modern techniques for model order reduction and their extensions under variability is outside the scope of this book. A reader is encouraged to consult the books on interconnect timing analysis [177][178] and the growing literature on variational model order reduction [179][180][181][182][183][184][185][186][187].
10.5 SUMMARY

This chapter has reviewed the work on the modeling and algorithmic aspects of statistical static timing analysis. A family of node-based and path-based SSTA algorithms has been analyzed. Monte Carlo simulation remains a popular alternative because of its conceptual simplicity and the ease of implementation. The parameter-space methods tend to be accurate but limited to a very small number of sources of variation.
11 LEAKAGE VARIABILITY AND JOINT PARAMETRIC YIELD
All power corrupts, but we need the electricity. Unknown
Power consumption is also strongly affected by manufacturing variability. This is especially true of leakage power because of the exponential dependencies of the subthreshold and gate leakage currents on several process parameters. In this chapter we give an overview of leakage power variability and provide analytical methods for modeling it. This chapter also highlights the challenge of parametric yield loss due to the simultaneous impact of timing and power variability. Both timing and power variation need to be accounted for in estimating the overall parametric yield. This chapter introduces a quantitative yield model that can accurately handle the correlation between timing and power variability.
11.1 LEAKAGE VARIABILITY MODELING

The previous chapters have been concerned with the impact of variability on timing and the resulting timing-limited parametric yield loss. The growth of standby leakage power as device geometries scale down has become an extremely urgent issue. At the 65nm node, subthreshold and gate leakage power accounts for 45% of total circuit power consumption [192]. There are several reasons for increased leakage power consumption. The two primary sources of leakage power are the channel (subthreshold) leakage current from the drain to the source of a MOSFET, and the gate tunneling current through the thin oxide film of the MOSFET. Increase in subthreshold current is fundamentally due to the short-channel behavior of scaled MOSFETs. It is exacerbated by the reduction of threshold voltage brought about by scaling. CMOS scaling trends require the reduction of Vth along with the reduction in supply voltage
in order to keep high circuit speed, i.e., to maintain sufficient gate overdrive voltage. Supply voltage reduction is an effective way to contain the growth of switching power because of its quadratic dependence on Vdd. However, threshold voltage reduction causes an exponential increase in subthreshold channel leakage current. The reduction of effective channel length, intrinsic to scaling, also causes an exponential increase in subthreshold leakage. At the same time, aggressive scaling of gate oxide thickness leads to an exponential increase in gate oxide tunneling current, such that at the 65nm node the total gate tunneling current may exceed the total subthreshold leakage. Gate leakage current is significantly lower when high-K dielectric materials are used as the gate insulating material, instead of silicon dioxide. A complete discussion of the physics behind the increase in subthreshold and gate leakages is beyond the scope of this text. For a thorough treatment of the subject see [93]. The above analysis identifies an important fact: both channel and gate leakage currents have exponential dependencies on several key process parameters. As a result, they exhibit a large spread in the presence of process variations. Experiments show that the subthreshold channel current is more sensitive to process variations than the gate tunneling current. The standard deviation of subthreshold leakage can be as high as 350% of the mean value. For gate leakage this number can be as high as 115% [195]. Experiments indicate that subthreshold leakage variability is driven primarily by Leff and Vth, and gate leakage variability is primarily driven by the oxide thickness, Tox. As we saw in the earlier chapters, Leff exhibits a large amount of variability from several causes, mainly in the lithographic sequence. Threshold variability due to implant dose variation and random dopant fluctuation is also significant. Historically, the thickness of the oxide film, Tox, has been a well-controlled parameter, and did not result in large leakage variability. However, this is rapidly changing as technology approaches the limits of thin film scaling. Overall, it has been reported that the maximum full-chip leakage due to process variation can be 20× the minimum leakage current [193]. This has significant implications for chip design as current designs are often power-constrained. One reason is that many products are battery operated. Another is the large cost of a package that is needed to accommodate a chip with high power. The cost of a cooling solution increases rapidly beyond a certain point: a 15% increase in power can lead to a 3.5× increase in the cost of the cooling system [194]. The dramatic increase in leakage power with scaling, and the strong dependence of leakage on highly varying process parameters, raise the importance of statistical leakage and parametric yield modeling. Leakage power variability may lead to yield losses due to the violation of power limits set by system constraints and packages. We now describe the analytical tools that can be used to quantify the amount of leakage variability and evaluate the power-limited parametric yield of a chip. Such models are needed both for yield prediction and in statistical optimization to guide the design towards statistically feasible and preferable solutions.
We start with subthreshold leakage current. It can be shown that for transistors in the weak inversion region, the subthreshold current can be described as:
I_sub = k_n (m − 1) (kT/q)^2 e^{q(V_gs − V_th)/(mkT)} (1 − e^{−qV_ds/kT})      (11.1)
where k_n = µ_eff C_ox (W/L) is the transistor transconductance, V_gs and V_ds are the gate- and drain-to-source voltages, kT/q is the thermal voltage, and m is the body-effect coefficient (a typical value is m = 1.3) [93]. This equation is the basis of subthreshold leakage analysis in the compact device model BSIM3. We can use a circuit simulator to find the sensitivity of I_sub to the relevant set of process parameters. In [195], the impact of the flat-band voltage V_fb, oxide thickness (T_ox), effective channel length (L_eff) and the doping concentration in the halo implant (N_pocket) is shown to be especially severe, with the impact due to V_fb and L_eff being by far the most significant (see Table 11.1). The variation in the channel doping (N_dep) and in the width (W) has a relatively minor effect compared to the other sources. The use of the flat-band voltage in place of the threshold voltage is caused by the need to separate the impact of L_eff and N_dep on the threshold voltage from all the other contributions, which get lumped into V_fb. In long-channel MOSFETs, the threshold voltage is independent of L_eff. In short-channel devices, the effect of the so-called V_th roll-off is to make the threshold voltage decrease exponentially as L_eff is reduced beyond a certain value [93]. The amount of V_th reduction compared to the long-channel threshold voltage depends on L_eff, and is given by

k ( φ_bi (φ_bi + V_ds) )^{1/2} e^{−L_eff/l}      (11.2)
where k and l are device-dependent coefficients, φ_bi is the built-in pn-junction potential, and V_ds is the source-drain voltage. The circuit simulator internally accounts for the short-channel effect, i.e. for the impact of L_eff on V_th. The variability in V_th due to random dopant fluctuation or implant dose variation can then be captured through V_fb. This justifies treating the parameters as stochastically independent. We conclude from the analysis above that leakage variability is primarily driven by the variation in L_eff and V_th. While the analytical model of (11.1) gives the accuracy and generality required for circuit simulation, it is not very convenient for direct statistical analysis and design because of its complexity and because it contains only an implicit dependence of the subthreshold current on L_eff. An empirical equation that directly captures this dependence is more useful while being sufficiently accurate. It aims to capture explicitly the impact of variation in L_eff and the variation in threshold voltage V_th due to doping concentration, in a way that permits treating them as stochastically independent [196]. For statistical analysis and design, we are primarily interested in the variance of the leakage current around a fixed nominal process point.
Table 11.1. The impact of process parameters on subthreshold leakage current. The nominal current value is 3.72 nA. (Reprinted from [195], © 2003 IEEE).

Parameter        % Variation (3σ)   Mean (nA)   S.D. (nA)   S.D. / Mean
V_fb             10%                7.09        11.7        1.67
V_dd             20%                3.74        0.46        0.124
N_pocket         20%                4.44        2.91        0.655
N_dep            20%                3.78        0.72        0.191
L_eff            20%                6.97        13.62       1.96
T_ox             20%                4.51        3.17        0.704
W                10%                3.73        0.38        0.101
All parameters   10%                9.11        19.18       2.106
All parameters   20%                17.55       61.43       3.5
Because we are interested in the "delta" model, we can lump the various contributions to the nominal leakage into a single term, I_sub^o. From (11.1) we can conclude that the general form of an empirical leakage model is exponential in L_eff and V_th. Regression methods can be used to find the best possible coefficients for fitting the model. Experiments show that a model needs to have a polynomial dependence of the exponent on L_eff and a linear dependence on V_th [199]. Specifically, the following empirical equation provides reasonable accuracy:

I_sub = I_sub^o · e^{c_1 ∆L + c_2 ∆L^2 + c_3 ∆V}      (11.3)
where ∆L is the deviation of the effective channel length of the device from the nominal value, ∆V is the deviation of the threshold voltage from the nominal, and c_1, c_2, c_3 are the fitting coefficients that can be found via circuit simulation. Figure 11.1 shows a scatterplot produced by comparing the predictions of the above model with the values produced by the circuit simulator. We can see that the fit is reasonably good, justifying the use of the model for statistical analysis. We now turn to the model of the gate leakage current. There are two main mechanisms for gate tunneling: Fowler-Nordheim tunneling, in which electrons first tunnel into the conduction band of the oxide layer, and direct tunneling, in which electrons tunnel straight into the conduction band of the gate [93]. In typical device operation, Fowler-Nordheim tunneling is negligible compared to direct tunneling. The discussion of the first-principles physical theory of gate tunneling current is outside the scope of this book; see [93] for more. While there exist semi-empirical models for predicting gate current as a function of the various device parameters [197][198], these models are targeted for use with circuit simulation, such as SPICE, and are not convenient for hand analysis. The general dependence can be captured by:
I_gate = W L_sde A_g (V_dd/T_ox)^2 e^{−B_g (1 − V_dd/φ_ox)^{3/2} T_ox / V_dd}      (11.4)

Fig. 11.1. Comparison of the normalized leakage of an inverter predicted by SPICE and the analytical leakage model.
where φ_ox is the barrier height for the tunneling electron or hole, T_ox is the oxide thickness, L_sde is the length of the source and drain extensions, and A_g and B_g are material- and device-dependent parameters [93]. Similar to the case with I_sub, for statistical leakage analysis a fully empirical gate current model is easier to handle. The gate tunneling current can be modeled via an empirical model constructed around a set of nominal process conditions. Through simulation, we can assess the impact of several key parameters on I_gate. In a study of several process parameters (T_ox, W, L) that have some impact on gate leakage (Table 11.2), it has been found that T_ox has by far the dominant impact [195]. The table indicates that the supply voltage also has a large impact on I_gate, but it is only 20% of the impact of T_ox. This analysis justifies a fairly simple model for variational analysis that makes the deviation of I_gate from its nominal value dependent only on the oxide thickness, T_ox:

I_gate = I_gate^o · e^{−T/β_1}      (11.5)
where T is the oxide thickness, and β1 is the empirical fitting coefficient that can be extracted via a circuit simulation.
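As a concrete illustration, the sketch below fits the coefficients of the empirical subthreshold model (11.3) by linear regression on the logarithm of sampled leakage. The "simulator" data here are synthetic: the spreads of ∆L and ∆V and the stand-in coefficients are assumptions made only for this example; in practice the samples would come from SPICE runs of the library cells.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for simulator output; the coefficients below are made up.
dL = rng.normal(0.0, 2.0, 500)                   # ∆L samples (nm), assumed spread
dV = rng.normal(0.0, 0.01, 500)                  # ∆V samples (V), assumed spread
I_nom = 3.72                                      # nominal leakage (nA), Table 11.1
I_sub = I_nom * np.exp(-0.15 * dL + 0.004 * dL**2 - 35.0 * dV)
I_sub *= np.exp(rng.normal(0.0, 0.01, 500))       # small fitting noise

# log(I_sub / I_nom) = c1*∆L + c2*∆L^2 + c3*∆V  ->  linear least squares
A = np.column_stack([dL, dL**2, dV])
c, *_ = np.linalg.lstsq(A, np.log(I_sub / I_nom), rcond=None)
print("fitted c1, c2, c3:", np.round(c, 4))

The same one-line regression applied to log(I_gate) versus ∆T would recover the coefficient β1 of (11.5).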
11.2 JOINT POWER AND TIMING PARAMETRIC YIELD ESTIMATION

Parametric yield losses due to variability in chip timing and power are fundamentally related. Both are caused by the variability in the common process
Table 11.2. The impact of process parameters on gate leakage current. The nominal current value is 8.43 nA. (Reprinted from [195], ©2003 IEEE).

Parameter        % Variation   Mean (nA)   S.D. (nA)   S.D./Mean
Vdd              20%           8.54        1.71        0.2
Tox              10%           9.3         4.43        0.47
Tox              20%           12.5        14.2        1.14
Lsde             20%           8.44        0.56        0.067
W                10%           8.43        0.28        0.033
All parameters   10%           9.4         4.55        0.484
All parameters   20%           12.89       14.82       1.152
parameters, primarily Lgate, Vth, and Tox. Thus, it is crucial to predict parametric yield considering the simultaneous effect of variability on leakage power and timing.

Leakage power varies inversely with chip frequency. The same parameters that reduce gate delay (shorter channel length, lower threshold voltage, thinner gate oxide) also increase the leakage. As a result, slow die have low leakage, while fast die have high leakage, Figure 11.2. While leakage power exhibits exponential dependencies on process variables, chip frequency has a near-linear dependency on most parameters [196]. Because of that, as the data in Figure 11.2 show, a 1.3× variation in delay between fast and slow die could potentially lead to a 20× variation in leakage current [193]. Moreover, the spread in leakage grows as chips become faster. We will see in the following pages that delay variation is primarily due to Leff. At lower Leff, and thus in the fast bins, the variability in the other process parameters (Vth and Tox) makes leakage variation much more pronounced compared to larger Leff (slow bins). Because of this, a substantial portion of the chips in the fast bins have unacceptably high leakage power consumption. This makes power the yield-limiting factor, which indirectly affects the achievable maximum clock frequency. This is quite undesirable with respect to the economics of high-performance chip manufacturing, since the chips in the fast bins can be sold at a much higher price than the slow chips. For a recent Intel microprocessor, a 2.8GHz part sold for $1.6, while a 3.8GHz part sold for $6.5: in other words, a 36% faster part had a 400% price difference [227]. This is further illustrated in Figure 11.3.

Switching power is relatively insensitive to process variation. In the absence of substantial leakage power, parametric yield is determined by the maximum possible clock frequency. The combination of leakage and switching power may exceed the power limit determined by the cooling and packaging constraints. If a package is selected based on power values at nominal conditions, then the exponential dependence of leakage on process spread may force the total power to go above the cooling limit even below the maximum possible chip frequency. Previously, the challenge for high-performance chips was that of minimizing the fraction
of chips that are not fast enough. Now yield is limited both by chips that are too slow and by chips that are too fast, because the latter are too leaky. As we will see later, this loss due to a double squeeze on yield is substantial: in one study, the yield loss in the fastest bin is 56% if the power limit is set at 175% of the nominal power [196].
Fig. 11.2. Exponential dependence of leakage current on 0.18µm process parameters results in a large spread for relatively small variations around their nominal values. (Reprinted from [193], ©2003 ACM).
The severity of joint power and timing parametric yield loss has to be estimated early in the design phase, when it is useful both for technology and for design optimization. Here we describe a high-level technique to estimate parametric yield at different power levels and frequency bins for a given process technology and level of variability. For early planning, this can be done based on a small number of chip parameters: the total chip area, the number of devices, the nominal and statistical technology parameters, and the supply and threshold voltages [196]. Because the magnitudes of I_sub and I_gate become comparable, the analysis includes both subthreshold and gate oxide leakage components. Effectively, the technique finds the joint distribution of chip frequency and leakage power.

The modeling strategy follows [196]. For every frequency value, we will find the distribution of power across multiple chips. This will give us a measure of power yield at each frequency bin. The probability of a chip falling into the bin sets the timing yield. The model can be significantly simplified if we assume that the distribution of chip frequency is primarily determined by the chip-to-chip variation of Leff, which typically follows a normal distribution. The simulation of a ring oscillator shows that the change of path delay due to Leff is 15% (within the 3σ range), while the change due to threshold voltage is about 2%, and is even smaller for oxide thickness. The reasonableness of this
model is also supported by the common industrial practice in which frequency bins are tightly linked to a specific value of Leff [196]. It should be acknowledged that this modeling strategy ignores the impact of intra-chip variation of Leff on the clock period. While this is a simplification (recall our discussion of the shift in the Fmax distribution as a result of intra-chip variability), in numerical terms the error is minor. The model could be enhanced by introducing a correction term to account for the impact of intra-chip variation on Fmax; it would not materially change the analysis to follow. The only difference would be a slight (<5-6%) shift of the clock period.
Fig. 11.3. Inverse correlation between leakage power and frequency contributes to parametric yield loss. The maximum frequency is reduced because chips in the “fast” bin exceed power limits.
We can model the subthreshold leakage current of individual transistors using the empirical polynomial exponential function of (11.3) [199]. Both Leff and Vth are modeled as independent normal random variables. The total variability in L and Vth is decomposed additively into intra-die and inter-die components: L = L_o + L_intra + L_inter. For compactness of the derivation that follows, we change the usual notation to L_intra = L_l and L_inter = L_g, and similarly for Vth. Because the variation of path delay is primarily defined by the inter-chip L variation, when estimating yield it will be convenient to express I_sub as an explicit function of the inter-chip L_g. For that reason, the impact of intra- and inter-chip variability on the leakage distribution is evaluated separately.

The chip leakage variation due to intra-chip variability is a sum of current contributions from all devices on the chip. The normalized standard deviation of the full-chip subthreshold current I_c,sub due to intra-chip variability, i.e., at a fixed value of the inter-chip L_g and V_th, behaves as σ(I_c,sub)/E(I_c,sub) ∼ 1/√N. A heuristic assumption is adopted that, because of the enormous number of independent sources of variation that contribute to the total chip leakage, the standard deviation of the total leakage due to intra-chip variation, at a fixed value of L_g, is
small compared to the mean and can be ignored. Another way to state this assumption is by using the quantile function, which we formally define here as q^β(X) = min{x | Pr(X ≤ x) ≥ β}. (The notation used here for the quantile function is not universal but is convenient for our discussion. A more common notation is Q_X(β).) Then, our assumption implies that the ratio of the leakage at a high quantile of the spread caused by intra-chip variation to the mean value is nearly unity:

q^{0.95}[I_c,sub | L_g, V_g] / E[I_c,sub | L_g, V_g] ≈ 1    (11.6)
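A quick Monte Carlo experiment makes this assumption concrete: summing many independent log-normal per-device leakage contributions drives the ratio of the 0.95 quantile to the mean toward one. The per-device log-spread and the device counts below are arbitrary illustrative choices, not technology data.

import numpy as np

rng = np.random.default_rng(1)

# Per-device intra-chip leakage factors are log-normal under the exponential
# model; the chip total is their sum. sigma is an assumed per-device log-spread.
sigma = 0.8
for N in (10, 1_000, 20_000):                    # devices per chip
    chips = rng.lognormal(mean=0.0, sigma=sigma, size=(500, N)).sum(axis=1)
    print(f"N = {N:6d}:  q0.95 / mean = {np.quantile(chips, 0.95) / chips.mean():.3f}")
# The ratio approaches 1 as N grows, so at a fixed Lg and Vg the intra-chip
# spread of the chip total can be neglected relative to its mean.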
Thus, the impact of intra-chip variation on leakage is assumed to be completely captured by the impact on the mean. The expected (mean) value of subthreshold leakage current of a unit width device at fixed Lg and Vg can be described as:
E[I_sub | L_g, V_g] = I(L_g, V_g) ∫∫ exp( -(λ1·L_l + λ2·L_l² + λ3·V_l) ) f(L_l, V_l) dV_l dL_l    (11.7)
where λ1, λ2, and λ3 are process-dependent coefficients, f(L_l, V_l) is the Gaussian joint pdf of L_l and V_l, and

I(L_g, V_g) = I^o_sub · exp( c1·L_g + c2·L_g² + c3·V_g )    (11.8)
where I^o_sub is the leakage evaluated at the nominal process values. The integral in (11.7) can be evaluated in closed form and reduces to a product of two functions, S_L(σ²_Ll) and S_V(σ²_Vl), that depend on the variances of the intra-chip components L_l and V_l [196]:

S_L(σ²_Ll) = (1 + 2λ2·σ²_Ll)^{-0.5} · exp( λ1²·σ²_Ll / (2 + 4λ2·σ²_Ll) )   and   S_V(σ²_Vl) = exp( λ3²·σ²_Vl / 2 )    (11.9)
where σ_Ll and σ_Vl are the standard deviations of the intra-chip components L_l and V_l. It is easy to see that S_L ≥ 1 and S_V ≥ 1, and that both become equal to one if intra-chip variation is absent. Thus, the impact of intra-chip variability on the mean value of the leakage of a unit-width device is to multiplicatively increase the nominal leakage at a given value of L_g:

E[I_sub | L_g, V_g] = S_L(σ²_Ll)·S_V(σ²_Vl)·I(L_g, V_g)    (11.10)

Using Equation (11.5) to compactly model the gate tunneling current, and following the same decomposition into intra- and inter-chip components, by similar arguments the gate leakage can be represented as

I_gate ≅ E[I_gate | T_g] = S_T(σ²_Tl)·I_Tg    (11.11)
Fig. 11.4. Intra-chip variability increases mean chip leakage. This can be modeled by a multiplicative function dependent on intra-chip variance.
where S_T is the scaling factor accounting for the intra-chip variation of Tox:

S_T = exp( σ²_Tl / (2β1²) )   and   I_Tg = I^o_gate · exp( -∆T_g/β1 )    (11.12)
and where ∆T_g is the inter-chip component of the variation in Tox. The total chip leakage can be obtained by summing up the subthreshold and gate leakage components and by weighting the leakage contribution of individual transistors by their gate widths (w_d):

I_c,tot(L_g) = Σ_d w_d · ( S_L·S_V·I(L_g, V_g) + S_T·I_Tg )    (11.13)
This equation is evaluated at a specific value of L_g, while explicitly keeping the dependence on the inter-chip variation of V_th and Tox. At every fixed L_g, variability in V_g and T_g leads to a spread in I_c,tot. In Equation (11.13), the total leakage current is a sum of two log-normal distributions which are independent, since one is a function of V_g and the other is a function of T_g. I_c,tot(L_g) can be further approximated by a single log-normal using moment matching. The pdf and cdf of I_c,tot(L_g) can then be evaluated numerically, which enables the estimation of the leakage yield corresponding to a specific leakage constraint or, conversely, of the leakage current corresponding to a quantile value.

Using the above model, the impact of variability in Leff, Vth, and Tox on the distribution of frequency, leakage power, and the resulting parametric yield loss can now be quantitatively assessed. Below are the results for the impact of variability in a 100nm CMOS technology with typical industrial estimates of the magnitudes of variability. The analysis is for an adder circuit [196]. The impact of intra-chip variability on leakage power is to increase the mean
11.3 SUMMARY
249
by about 15%, and this is the same for all frequency bins. Both the median and the high quantiles of leakage power grow towards the fast bins. The median increases by 4×, but the total spread in leakage is as much as 14×, Figure 11.5. Moreover, at every fixed value of L_g, there is a 2.5-3× spread of leakage due to the variation of threshold voltage and oxide thickness. A substantial fraction of chips that are fast, and thus have good timing yield, may have to be discarded because of high power. As expected, this loss of power yield is worse in the fast bins (Table 11.3). We can see, for example, that at the fastest bin the yield loss is 56% if the power limit is set at 175% of the nominal power, and even if the power is allowed to exceed 250% of the nominal power there will be a 24% yield loss. For a bin with clock periods that are only 5% longer, these numbers are 27.4% and 6.7%, respectively. These numbers clearly indicate the severity of the dependence of power on frequency, and the need to perform accurate analysis.
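The bin-by-bin yield computation behind numbers such as those in Table 11.3 can be sketched as follows: the leakage at a fixed L_g is treated as a single log-normal (after moment matching), switching power is held constant, and the yield in a bin is the probability that total power stays under the cap. All numerical values in this sketch (switching power, per-bin mean leakage, log-spread) are invented for illustration and are not the data of [196].

import numpy as np
from scipy import stats

P_sw = 0.7                                             # switching power, normalized
P_nom = 1.0                                            # nominal total power, normalized
mean_leak = {"85%": 0.90, "93%": 0.45, "103%": 0.25}   # assumed mean leakage per bin
sigma_ln = 0.5                                         # assumed log-sigma of leakage at fixed Lg

for cap in (1.0, 1.75, 2.5):                           # power limit as multiple of nominal
    row = []
    for b, m in mean_leak.items():
        mu = np.log(m) - sigma_ln**2 / 2               # log-normal mu giving mean m
        leak = stats.lognorm(s=sigma_ln, scale=np.exp(mu))
        budget = cap * P_nom - P_sw                    # leakage budget under this cap
        y = leak.cdf(budget) if budget > 0 else 0.0    # yield = P(P_sw + leakage <= cap)
        row.append(f"{b} bin: {100 * y:5.1f}%")
    print(f"P_lim = {cap:4.2f} x nominal  ->  " + ",  ".join(row))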
Fig. 11.5. Scatter plot showing the distribution of the total circuit leakage. Variability in V_g and T_g is responsible for the "intra-chip" spread in leakage. (Reprinted from [193], ©2003 ACM).
11.3 SUMMARY

In this chapter, we have analyzed the impact of variability on power and its consequences for circuit performance and yield. In the recent past, it was sufficient to model the impact of variability on timing. In recent technologies,
Table 11.3. Power yields (%) at different caps on power, shown for several clock period bins.

Plim    Clock Periods in Different Bins (%)
        85%     90%     93%     100%    103%
1.00    6.4     21.1    44.8    68.5    84.8
1.75    43.6    72.6    90.5    97.5    99.4
2.50    76.0    93.3    98.7    99.8    99.9
leakage variability has had a substantial impact on parametric yield, since it greatly increases the maximum power consumption of chips coming out of the production line. This has implications for power management and packaging solutions. Leakage variability and timing variability are inversely correlated, which causes yield losses both in slow and in fast frequency bins. We described an analytical model for predicting the overall yield taking into account the correlation between the timing and power variabilities.
12 PARAMETRIC YIELD OPTIMIZATION
When defeat is inevitable, it is wisest to yield. (Quintilian)

The policy of being too cautious is the greatest risk of all. (Jawaharlal Nehru)
The previous chapters have been concerned with the modeling and analysis of the impact of process variability on circuit timing, leakage, and parametric yield. We also discussed techniques to compensate for the impact of systematic sources of variation. In this chapter we discuss optimization strategies for minimizing the impact of random variability on parametric yield. Specifically, we explore the emerging methods for full-chip statistical optimization with the purpose of minimizing parametric yield loss. Traditional circuit optimization methods are not sufficient for dealing with large-scale variability because they lack the explicit notion of parameter variance. We discuss the possible formulations of optimization under stochastic variability using robust and chance-constrained optimization formulations. These techniques permit efficient statistical circuit tuning using gate and transistor sizing and the use of multiple threshold voltages.
12.1 LIMITATIONS OF TRADITIONAL OPTIMIZATION FOR YIELD IMPROVEMENT

The previous chapter considered the analysis methods used to assess the impact of variability on parametric yield. We now turn to a discussion of the optimization strategies that can be employed to improve parametric yield. The question we seek to answer is whether it is possible to use computer-aided design
(CAD) techniques to improve parametric yield in high-frequency bins by shifting the power and delay distributions through post-synthesis circuit-level tuning via gate sizing, the use of multiple threshold voltages, and other techniques.

The traditional optimization techniques are insufficient for the purpose of parametric yield improvement in nanometer-scale integrated circuits. These traditional optimization methods are deterministic: they hide variability through the abstraction of corner models that correspond to the so-called worst-case, nominal, and best-case operating corners. Such deterministic methods have become too restrictive with the increased magnitude of variability and more complex optimization objectives that now include both timing and power. In contrast, statistical optimization operates with functions that expose variability in process parameters and with explicit variational metrics.

There are several principal reasons for the need for statistical optimization. The analysis of these reasons will also point to conditions under which statistical optimization can be expected to outperform a purely deterministic optimization strategy.

Different sensitivities of cell responses to process variations: Optimization tools rely on the power and timing metrics contained in standard cell libraries. Consider an algorithm to minimize a quantile power value. We wish to guarantee that the leakage variability caused by Lgate variation meets the package power limit at the 95th percentile of the distribution:

q^{.95}( Σ_{i∈N} P_i(L_intra + L_inter) ) ≤ P_limit    (12.1)
A deterministic algorithm operates with point estimates of random power quantities, for example, with q^{.95}(P_i). It will properly track the true quantile estimate of the power value if:

q^{.95}( Σ_{i∈N} P_i ) = Σ_{i∈N} q^{.95}(P_i)    (12.2)
Deterministic optimization makes the tacit assumption that the circuit performances of different cells behave in a correlated fashion in the absence of intra-chip variation. As we argued previously, the impact of intra-chip variation on leakage is negligible. Even then, Equation 12.2 will not hold exactly because the sensitivities of different cells to process parameters are not identical [201]. When power (or delay) depends on two or more process variables, and the sensitivities are not identical, the cell responses will not be perfectly correlated even if there is no intra-chip variation. The subthreshold leakage strongly depends on several process parameters (Leff, Vfb, Tox, and Npocket), so perfect correlation between cells' power values would require identical sensitivities to process variables. This lack of correlation between cell power (and timing) responses makes it impossible to reduce the evaluation of stochastic objective functions to algebraic operations with point estimates of random variables. Figure 12.1 shows a scatter plot of leakage current values for two cells, when the subthreshold current is described by Equation 11.3, and the two sets of
coefficients are c (cell A) =(1, 0.2, 0.6) and c (cell B) = (1.2, 0.3, 0.3). The lack of correlation observed in this figure jeopardizes the case-file approach to optimization since it is impossible to come up with a case file that will guarantee a specific yield point and permit reliable evaluation of the cost metric.
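The effect is easy to reproduce numerically. The sketch below uses the two sensitivity vectors quoted above for cells A and B, a shared global (∆L, ∆V) sample with no intra-chip component, and assumed variation magnitudes; it shows that the 95th percentile of the summed leakage differs from the sum of the per-cell 95th percentiles, which is why Equation (12.2) does not hold in general.

import numpy as np

rng = np.random.default_rng(2)

cA, cB = (1.0, 0.2, 0.6), (1.2, 0.3, 0.3)   # sensitivity vectors of cells A and B (from the text)
dL = rng.normal(0.0, 0.5, 100_000)           # shared global ∆L sample, assumed spread
dV = rng.normal(0.0, 0.5, 100_000)           # shared global ∆V sample, assumed spread

def leak(c):
    # normalized cell leakage under the exponential model of Equation (11.3)
    return np.exp(c[0] * dL + c[1] * dL**2 + c[2] * dV)

PA, PB = leak(cA), leak(cB)
print("q95(PA + PB)      =", round(float(np.quantile(PA + PB, 0.95)), 2))
print("q95(PA) + q95(PB) =", round(float(np.quantile(PA, 0.95) + np.quantile(PB, 0.95)), 2))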
Fig. 12.1. Leakage power values of cells may not track even in the absence of intra-chip variation when the sensitivities to process variables are different.
Impossibility of reducing stochastic comparisons to deterministic comparisons: The job of optimization is to make optimal decisions, and this requires comparing alternatives. The second reason why statistical optimization is needed is that it is impossible to ensure consistent deterministic decision-making when operating with intrinsically random variables. It is possible that one alternative is preferred at a lower quantile but is inferior at a higher quantile: that happens when its mean is lower but its variance is higher than that of the alternative. Figure 12.2 shows an example of such behavior based on the comparison of leakage power distributions for two different cells. This discussion and example raise the problem of comparing random variables. The simplest formal way to compare random variables is through stochastic dominance. This is a simple stochastic ordering defined in terms of cumulative distribution functions. We say that X is stochastically larger than Y if:
P(X ≤ t) ≤ P(Y ≤ t), or alternatively P(X > t) ≥ P(Y > t), ∀t    (12.3)

Clearly, if X stochastically dominates Y, then q^α(X) ≥ q^α(Y), which also implies that Q_β(X) ≥ Q_β(Y). This implication holds only when there is stochastic dominance. In such cases, deterministic decision-making that is based on comparisons of quantile values is sufficient. But in the general case, when the dominance relationship cannot be established, it is impossible to reduce stochastically aware decision-making to comparisons based on point estimates of random variables.
Fig. 12.2. Some cells have better leakage at lower quantiles, while others are better at higher quantiles.
Dynamic variance updating: Another issue is that the variability of circuit-level parameters is often directly dependent on the decision variables. For example, the standard deviation of the threshold voltage is inversely proportional to the square root of the transistor area: σ_Vth ∼ 1/√(L_eff·W_eff). It is difficult to see how an algorithm
that does not explicitly account for variance can properly model such behavior.

Sub-optimality of the path delay histogram formed by deterministic tools: In design methodologies for high-performance and low-power digital circuits, the end result of multiple optimization steps, e.g., gate and transistor sizing and the use of multiple Vth, is a high concentration of path delays near the critical path delay value. An un-tuned or poorly tuned circuit may have a gradually falling distribution of path delays. All circuit tuning tools essentially rely on modifying the distribution of timing slack, which is the amount of timing margin that a path has. A circuit tuner achieves two effects. It is able to speed up some of the slow paths, but it can also slow down other paths by trading their slack for area or power reduction. Because fast paths become slower, while slow paths become faster, the result of tuning is a distribution of slack that has a large concentration of path delays (a "wall" of paths) near the critical delay value. This is shown in Figure 12.3.
Fig. 12.3. Circuit tuning creates a large number of paths near the delay target. A distribution in terms of nominal delays is shown.
The effect of variability on such well-tuned circuits is a substantial spreading of the timing paths away from the "wall" of critical paths, since some paths become slower while others become faster. This is illustrated in Figure 12.4a. The circuit delay, which is defined by the maximum of all path delays, is increased as a result, degrading the overall performance, Figure 12.4b. The detrimental impact of variability in pushing out the performance gets larger as the number of paths near the critical delay value increases, since a larger number of paths can go above the critical value.

The expected spreading of the wall of paths can be modeled quantitatively. Let the delay target used by the optimization tools be D_o. The intra-chip variability leads to path delay variation around D_o. If the path delay standard deviation due to intra-chip variation is σ_D, and the number of paths is N, the mean clock period becomes:

E[T_clock] = D_o + η(N)·σ_D    (12.4)
where η(N) = φ^{-1}((N - 1)/N), and φ^{-1}() is the inverse cdf of the normal distribution. The path delay variance depends on the path length. Some typical numbers for the expected circuit delay degradation are shown in Figure 12.5.

The path delay wall is a result of mathematical optimization. In the absence of variability and under the conditions of perfect modeling, the solution produced by a mathematical optimizer is optimal for a convex problem. Both optimization theory and practice show, however, that often a deviation from the conditions of the problem drastically worsens the quality of the solution; in a formal sense, the solution is said to be not robust. Because the solution is not robust to a violation of the assumptions, the optimal value of the maximum delay is very sensitive to variability. We see that this leads to an increase in the expected circuit delay, shifting the path delay distribution and degrading timing yield.
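Equation (12.4) is easy to evaluate; the short sketch below tabulates the expected clock period for a few wall sizes N, using an assumed delay target and per-path sigma.

import numpy as np
from scipy.stats import norm

D_o = 1.0           # delay target (normalized), assumed
sigma_D = 0.03      # per-path delay sigma from intra-chip variation, assumed

for N in (10, 100, 1_000, 10_000):
    eta = norm.ppf((N - 1) / N)                 # eta(N) = phi^-1((N-1)/N)
    print(f"N = {N:5d}: E[Tclock] ~ {D_o + eta * sigma_D:.3f}")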
Fig. 12.4. (a) Variability acts to spread out path delays. (b) The overall distribution is shifted due to intra-chip variability.
In the language of optimization, we need to make the solution more robust, that is, less sensitive to uncertainty and variability in parameters. This can be done by avoiding the creation of the wall of paths, which reduces their spreading beyond the target. In the above simplified analysis, we neglected the fact that the paths and their delay variances are not identical. Paths with high variance are more likely to exceed the target delay than those with small variance. Thus, more importantly, statistical optimization needs to ensure that high-variance paths are not pushed against the delay target. This will lead to higher timing yield. Figure 12.6 contrasts the µ-σ path delay scatter plots of a circuit optimized by deterministic and by statistical methods. The figure plots the standard deviations and mean delays for all paths in a 32-bit LF adder [202]. A deterministic tuner optimizes the circuit such that there is a high number of near-critical paths regardless of their variance. In contrast, a statistical algorithm creates a path distribution with fewer high-variance paths near the target delay [202]. We observe that the wall is clearly absent and also that the mean delay is
Fig. 12.5. Intra-chip variability will lead to the shift of the mean circuit delay and overall speed degradation.
increased while the maximum standard deviation is reduced. This behavior is characteristic of statistical methods in contrast to deterministic algorithms that lack the notion of parameter variance and parametric yield, preventing design for yield as an active design strategy.
12.2 STATISTICAL TIMING YIELD OPTIMIZATION

12.2.1 Statistical Circuit Tuning: Introduction

We first explore statistical optimization for improving timing yield. Later, we study techniques that handle timing- and power-limited parametric yield simultaneously. Given the exponential dependence of leakage power on the highly variable transistor channel length and threshold voltage, statistical power optimization is likely to have a more significant impact on circuit performance and parametric yield. Optimization strategies for timing yield loss minimization are, however, simpler, and their review provides clues as to the range of possible options in statistical optimization.

Optimization for performance and power improvement is applied at multiple points in the RTL synthesis flow. In this chapter, we are concerned almost exclusively with post-synthesis circuit tuning, which can be considered part of physical design or post-placement optimization. Several tuning strategies are effective at this stage, the primary ones being gate and transistor sizing, dual Vth assignment, and Lgate biasing.

Statistical optimization as an area in CAD for digital IC design is relatively new. In the past, significant efforts have been aimed at developing techniques
Fig. 12.6. (a) Deterministic optimization may result in high-variance paths pushed to critical delay. (b) Robust optimization ensures that high-variance paths have lower mean delay. (Reprinted from [202], ©2005 IEEE).
for yield improvement based on design centering [203][204][205][206]. These techniques originally appeared in the context of analog design. While many of the concepts are also used in statistical design for digital circuits, the specific optimization techniques differ due to the vast difference in the size of the optimization problem. Methodologically, there are two approaches to statistical design. One group of methods essentially utilizes traditional optimization but uses statistical timing estimates generated by statistical STA tools to drive the optimization. An example is a modification of Lagrangian-relaxation-based sizing to use a statistical timing target instead of the deterministic target [224]. In this example, gate sizing is performed iteratively while updating the required arrival time constraint using an SSTA tool. The algorithm strategy is not changed materially: only the value of the target is changed. The second group of sta-
tistical optimization techniques does modify the cost function formulation and/or the solution strategy. Among the examples are gate sizing formulated as a general non-linear programming (NLP) problem [208], a sensitivity-driven iterative algorithm for sizing [212], sizing by robust geometric programming [209], and sizing and dual Vth assignment by robust linear programming [211]. We concentrate here on the latter group.

Gate and transistor sizing offers a simple strategy for yield optimization. In a deterministic setting, gate and transistor sizing is well developed theoretically and fast practical algorithms are available. It is natural, therefore, to explore the effectiveness of statistical optimization methods through statistical gate sizing. Deterministic sizing as a CAD strategy seeks to minimize total circuit area by manipulating transistor widths. The objective is to find the minimum dimensions of transistors that will provide enough current drive for the circuit to meet timing constraints. The gate (cell) sizing problem reduces the complexity by assuming that within a cell all transistors are scaled identically, and the objective of the optimization is to find a set of gate scaling factors. Compared to full transistor sizing, this is a computationally easier problem. It is, however, practically very important in the context of standard cell library-based flows, in which there is limited flexibility in sizing transistors individually.

Timing yield optimization strategies aim at ensuring that the circuit meets its timing constraints with a specific probability. In the case of cell sizing, the most general formulation is:

min Σ_i S_i    (12.5)
s.t. P(T_max ≤ T) ≥ η    (12.6)
where Si are the gate (cell) sizes, Tmax is the longest delay through the circuit, T is the target clock period, and η is the required timing yield probability level. This problem falls into a class of chance-constrained problems since the constraints are required to be met only with a specific probability, usually of less than 100%. Solving the chance-constrained formulation of the sizing problem for general dependencies of delay on process parameters and sizes can be prohibitively expensive from the point of view of computational cost. Thus, the primary challenge of statistical optimization is how to handle variational information in a computationally efficient manner. Optimizing the parametric yield metric directly is computationally very difficult because of its numerical properties. Yield is a value of the cumulative distribution function which is an integral of the pdf of the immediate circuit function. For that reason, most yield-improvement strategies map the yield metric into approximate metrics that are more convenient computationally. Another possibility is to adopt a delay model that makes chance-constraint problems solvable. We explore this strategy later in this chapter.
It is possible to indirectly improve timing yield by attacking the negative implications of forcing path delays to be concentrated near the delay target. The reduction of the height of the path delay "wall" will improve timing yield since fewer paths will have a chance of exceeding the maximum delay once the chip is manufactured. Importantly, the wall spreading can be achieved without explicit accounting of path delay variance. The attraction of this approach is that it can be implemented by reusing most of the existing sizing tools and flows. Industrial sizing tools are very effective at tuning a circuit for minimum delay [207]. They rely on accurate non-linear delay models to speed up the slow paths while down-sizing the gates on the non-critical paths to improve area. A common way to set up a deterministic sizing problem is:

min T_max    (12.7)
s.t. T_max ≥ AT_i  ∀i ∈ PO    (12.8)

where PO is the set of primary circuit outputs. Robustness of the solution to this problem can be improved by introducing a penalty term that helps prevent the path build-up [207]:

min ( T_max + k·Σ_{i∈PO} penalty(T_max - AT_i) )    (12.9)
s.t. T_max ≥ AT_i  ∀i ∈ PO    (12.10)
where penalty(T_max - AT_i) is an increasing function that penalizes arrival times that are too close to the maximum arrival time at the primary output. Note that this strategy uses only nominal (non-random) path delays. It does not explicitly account for the path delay variance, and nothing would prevent it from pushing out (slowing down) paths with low variance while making paths with high variance near-critical. Since the actual yield losses depend on the variance of the delays of the individual paths, a full solution must include the explicit treatment of path delay variance.

One way to account for the impact of delay variability is to formulate the sizing optimization in terms of the mean circuit delay, E[T_max]. For a simple statistical delay model, gate and node delay variance can be expressed in closed form and used within a general non-linear gate sizing problem. Let us assume that gate delay variations are independent and normal. The first two moments of the maximum of two Gaussian random variables can be computed analytically, and can be used to approximate the true distribution of the maximum arrival time with another normal [208]. The mean, µ_i(S_i), and the standard deviation, σ_i(S_i), of the delay of every cell in the netlist can be described as an explicit function of the gate size S_i. Recursively, a system of analytical equations can describe E[T_max] as:

E[T_max] = f[ µ_o(S_o), σ_o(S_o), ..., µ_N(S_N), σ_N(S_N) ]    (12.11)
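The moment computation mentioned above goes back to Clark's formulas for the maximum of two jointly Gaussian variables; a small self-check against Monte Carlo is sketched below with made-up arrival-time statistics.

import numpy as np
from scipy.stats import norm

def max_moments(m1, s1, m2, s2, rho=0.0):
    """Clark's formulas for the mean and std of max(X, Y) for jointly Gaussian
    X ~ N(m1, s1^2), Y ~ N(m2, s2^2) with correlation rho (assumes theta > 0)."""
    theta = np.sqrt(s1**2 + s2**2 - 2 * rho * s1 * s2)
    alpha = (m1 - m2) / theta
    mean = m1 * norm.cdf(alpha) + m2 * norm.cdf(-alpha) + theta * norm.pdf(alpha)
    second = ((m1**2 + s1**2) * norm.cdf(alpha)
              + (m2**2 + s2**2) * norm.cdf(-alpha)
              + (m1 + m2) * theta * norm.pdf(alpha))
    return mean, np.sqrt(second - mean**2)

rng = np.random.default_rng(4)
x = rng.normal(1.00, 0.05, 1_000_000)            # hypothetical arrival time 1
y = rng.normal(0.98, 0.08, 1_000_000)            # hypothetical arrival time 2
m, s = max_moments(1.00, 0.05, 0.98, 0.08)
print(f"analytic   : mean={m:.4f}  std={s:.4f}")
print(f"monte-carlo: mean={np.maximum(x, y).mean():.4f}  std={np.maximum(x, y).std():.4f}")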
Now the sizing problem can be formulated as a non-linear programming problem and solved directly using a general non-linear solver [208]:

min_{S_i} E[T_max]    (12.12)
Unfortunately, this function and many constraints, not shown here for compactness of expression, contain a number of highly non-linear dependencies. The methods of general non-linear optimization tend to be excessively slow. As an example, the above formulation can statistically size a circuit of 1690 nodes in 3 hours and 40 minutes [208]. For comparison, the state-of-the-art deterministic optimization based on Lagrangian relaxation takes less than 1 minute to optimize a similar circuit.

Because the cost of statistically sizing a circuit is high and grows with circuit size, the runtime of a non-linear statistical optimization algorithm would be significantly improved if the number of nodes to consider statistically were reduced. This can be done by excluding the paths that do not contribute to timing yield loss, leaving us with the paths that dominate the statistical timing behavior. Formally, we can say that paths D_1 ... D_N form a set of dominant paths if

P( max(D_1, ..., D_N) ≤ T ) ≅ P( D_ckt ≤ T )    (12.13)

If such stochastically dominant paths could be identified, then hybrid optimization could be used. We could size an entire circuit deterministically and do statistical sizing of only the dominant paths. Finding the set of dominant paths is very challenging, however, because of the basic mathematical difficulty of comparing random variables. As discussed above, the statement that one random variable is greater than another is not easy to define since there is no unique way to compare random variables. The simple stochastic ordering based on the comparison of cumulative probabilities can be used. Then, if X is stochastically larger than Y, dropping Y from the analysis will not have a large impact on the accuracy and optimality of the computation. This simple way of comparing random variables is often too restrictive. It is common that one variable will dominate at low probabilities, but the relation will be reversed for higher probabilities, Figure 12.7. This can be fixed by performing the comparison at multiple points along the cdf, but this would be expensive. For this reason, in addition to simple stochastic ordering, other ordering principles, such as convex and hazard-rate orderings, have been defined [224]. However, in a circuit setting their evaluation is not computationally feasible. A further difficulty is that path delay distributions are stochastically correlated, which makes finding the dominant set of paths a formidable challenge. More basically, in order to use stochastic ordering, the cumulative distribution function has to be computed, since it is not immediately available. Computing the cdf requires imposing distributional assumptions on the random variables, for example, assuming that the path delays are normal. A distribution-free evaluation is more flexible. Given the restrictiveness of using
Fig. 12.7. Simple stochastic dominance is defined in terms of cumulative distribution functions.
stochastic dominance and of computing the cdf, operations with moments of distributions can be used. In other words, moment-based orderings can be used to draw inferences about distributions. It can be shown that a stochastic ordering implies some form of moment ordering. Unfortunately, the reverse is not true: there is no provable connection between orderings based on moments and stochastic orderings [226]. Empirically, it is observed that statistical optimization based on moment ordering is effective. Thus, the use of moment orderings is forced by computational concerns but remains somewhat heuristic. A computationally attractive option is to use a linear combination of moments of the random variable. One especially simple cost function is the sum of the first and second moments of the random variable:

C = E[x] + E[x²] = µ + µ² + σ²    (12.14)
where µ = E[x] and σ² is the variance. In the context of statistical gate sizing, this moment-based ordering has been used as a way to mimic the stochastic ordering of path delays to identify the dominant set of paths [223]. For every path delay, the following cost function can be employed for estimating how yield-limiting a path is:

C_i = E[D_i] + E[D_i]² + Var[D_i]    (12.15)
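A minimal sketch of this ranking step is shown below; the per-path means and sigmas are hypothetical, and the cost is exactly the metric of (12.15).

import numpy as np

# Hypothetical (mean, sigma) path delay statistics, e.g. from SSTA.
paths = {
    "p1": (0.95, 0.010),   # near-critical, low variance
    "p2": (0.90, 0.060),   # shorter mean delay but high variance
    "p3": (0.97, 0.030),
    "p4": (0.80, 0.020),
}

def cost(mean, sigma):
    # C_i = E[D_i] + E[D_i]^2 + Var[D_i]
    return mean + mean**2 + sigma**2

ranked = sorted(paths.items(), key=lambda kv: cost(*kv[1]), reverse=True)
for name, (m, s) in ranked:
    print(f"{name}: mean={m:.2f} sigma={s:.3f} cost={cost(m, s):.4f}")
# Paths at the top of this list are the candidates for the dominant set on
# which statistical sizing is focused.

With these illustrative numbers the near-critical paths p3 and p1 head the list.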
Paths with a high value of this metric are most likely to cause timing yield loss. Finding the subset of paths with the highest values of the cost function identifies the set of paths on which statistical sizing should be performed. The statistical sizing is applied to the set of dominant paths, and the algorithm itself can be based on the non-linear formulation considered above. Alternatively, the moment-based cost function can be directly used to drive the non-linear optimization itself. Since the cost function incorporates both the mean and the variance, minimizing the cost over the dominant path set should improve yield [223]. Specifically, a sizing solution under which the mean delay meets
timing constraints while minimizing the variance of paths will also implicitly lead to a reduction of parametric yield loss:

min Σ_{i∈N} C_i   s.t.   max_{p∈P} E[D_p] ≤ T_req    (12.16)
In this formulation, the objective only indirectly minimizes yield loss, but it is fairly easy to evaluate. Yet, since such a formulation still relies on a general non-linear solver, it will remain computationally very expensive.

The hope of breaking the computational bottleneck of statistical optimization is likely to come from the adoption of specialized optimization techniques. Efficient formulations based on geometric programming have been explored. In [202], the fact that sizing problems have fairly flat maxima is exploited to search for a robust solution. The technique also utilizes the notion of a "soft" maximum of edge delays as a way to introduce variance awareness into the problem. Statistical STA is then used to guide the optimization in the right direction. Figure 12.6 shows the results of using this optimization. We see that the path delay wall is reduced. While the deterministic solution has a high number of near-critical paths with high variance, the statistical algorithm creates a path distribution with fewer high-variance paths near the target delay.

The technique based on geometric programming presented in [209] models parameter variations using an uncertainty ellipsoid, and proceeds to construct a robust geometric program, which is solved by convex optimization tools. One very special case is a robust equivalent of linear programming. It is an extension of linear programming to cases when the coefficients belong to an uncertainty set. Importantly, if the uncertainty set is ellipsoidal, then a special second-order conic program can be formulated. Because of its importance for statistical optimization, we now provide a detailed analysis of this paradigm, and then briefly discuss how it can be used for statistical sizing.

12.2.2 Linear Programming under Uncertainty

Substantial improvements in computational performance can be achieved if we adopt optimization methods requiring cost functions (objective and constraint functions) to have special structure. It has long been known that optimization of linear functions can be done efficiently by linear programming (LP). Linear programming is powerful in that it can guarantee global optimality of the solution. There have been fast algorithms for some special non-linear functions, e.g., for posynomial functions via geometric programming. In the last decade, advances in applied mathematics have significantly expanded the class of convex functions that can be efficiently optimized. It has been shown that many non-linear convex functions can be optimized using fast algorithms based on interior-point methods, similar to the ones available for linear programming [217]. The development of such interior-point methods made convex optimization a practical tool for a variety of optimization needs. Flexible optimization
packages based on these methods are now widely available, for example, MOSEK [221]. Of specific interest to our discussion is the class of second-order conic functions. Optimization with second-order conic functions enables formulating a linear program under uncertainty when the uncertainty is ellipsoidal [217]. An optimization problem containing second-order conic constraint and objective functions is known as a second-order conic program (SOCP). The theoretical worst-case complexity of interior-point methods for solving SOCP is polynomial. Similar to LP, however, it has been widely observed that the average complexity on most practical problems is much better. In many cases, runtime grows linearly in the size of the problem [218]. Traditional (certain) linear programs minimize a linear objective function over a set of linear constraints:

min_x { c^T x | a_i^T x ≤ b_i }    (12.17)
If the actual values of the coefficients of the LP deviate from the assumed values, the solution produced by the traditional LP may become infeasible or significantly suboptimal. Robust optimization seeks to find solutions that will be insensitive to small deviations of the parameters. Another way of looking at robust optimization is that it minimizes the objective function for any possible realization of the problem coefficients (parameters). In robust optimization we formally capture variability in the parameters using the concepts of set theory rather than probability. We say that a parameter vector represents not a single point in a multi-dimensional space, but belongs to (forms) a set of points. The set can be a multidimensional cube, ball, ellipsoid, or a much more complex set. Unfortunately, if the uncertainty set to which the parameters belong is of arbitrary form, the optimization problem is NP-hard. For linear programs, if the uncertainty set is ellipsoidal, then a robust counterpart of the LP can be solved efficiently. For simplicity, let us assume that the uncertainty is in the constraint vectors a_i, while the objective vector c is certain. The simplest ellipsoid is a ball: a_i ∈ ε_i = {ā_i + r·u : ||u||₂ ≤ 1}, see Figure 12.8. More generally, an ellipsoid is given by ε_i = {ā_i + P_i^{1/2}·u : ||u||₂ ≤ 1}, where the positive-semidefinite matrix P_i describes the orientation and the lengths of the axes of the ellipsoid. The robust counterpart of the LP is then

min c^T x :  ā_i^T x + ||P_i^{1/2} x|| ≤ b_i,  i = 1..m    (12.18)
where ||·|| denotes the second (Euclidean) norm of a vector. It is remarkable that in important cases there is a simple link between robust (set-based) optimization under uncertainty and statistical (chance-constrained) optimization [217]. We are primarily interested in statistical interpretations of robust LP because they are of immediate use in parametric yield optimization problems. Consider a chance-constrained linear program:
Fig. 12.8. Illustration of ellipsoidal sets used in formulating robust LP. The sets of equal probability of a Normal distribution are also ellipsoids. This provides the connection between the robust LP and the chance-constrained LP.
min c^T x    (12.19)
s.t. P(a_i^T x ≤ b_i) ≥ α_i,  i = 1..m    (12.20)

where a_i ~ N(ā_i, Σ_i). The sets of equal values of the Gaussian cdf (equiprobability sets) are ellipsoids whose axes and orientation are given by the covariance matrix Σ_i. Let u = a_i^T x. Then ū = ā_i^T x and σ = (x^T Σ_i x)^{1/2}. The constraint P(a_i^T x ≤ b_i) ≥ α_i can be re-written as:

P( (u - ū)/σ ≤ (b_i - ū)/σ ) ≥ α_i    (12.21)

Notice that (u - ū)/σ ∼ N(0, 1), thus the above probability is equal to φ((b_i - ū)/σ), where φ() is the cdf of the standard normal random variable. We can transform the probability constraint into (b_i - ū)/σ ≥ φ^{-1}(α_i). Plugging in the expressions for ū and σ, and re-arranging the terms, we get:

ā_i^T x + φ^{-1}(α_i)·(x^T Σ_i x)^{1/2} ≤ b_i    (12.22)
When α_i > 0.5 the constraint set is a second-order cone and is convex. This finally leads to the following SOCP equivalent of the robust LP:

min c^T x    (12.23)
ā_i^T x + φ^{-1}(α_i)·(x^T Σ_i x)^{1/2} ≤ b_i    (12.24)
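The equivalence between (12.20) and (12.22) can be checked numerically. In the sketch below, ā_i, Σ_i, b_i, α_i, and the test point x are arbitrary made-up values; the deterministic left-hand side of (12.22) is compared against the empirical probability of satisfying the original linear constraint.

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

a_bar = np.array([1.0, 2.0])                 # mean constraint vector (assumed)
Sigma = np.array([[0.10, 0.02],
                  [0.02, 0.05]])             # covariance of the constraint vector (assumed)
b, alpha = 4.0, 0.95
x = np.array([0.8, 1.1])                     # arbitrary fixed decision vector

lhs = a_bar @ x + norm.ppf(alpha) * np.sqrt(x @ Sigma @ x)      # left side of (12.22)
a_samples = rng.multivariate_normal(a_bar, Sigma, size=200_000)
prob = np.mean(a_samples @ x <= b)                              # empirical version of (12.20)

print(f"deterministic lhs = {lhs:.3f} (constraint: <= b = {b})")
print(f"empirical P(a^T x <= b) = {prob:.4f} (required: >= {alpha})")

Here the deterministic left-hand side is below b, and the sampled probability is correspondingly above α.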
An efficient algorithm based on second order conic programming has been recently proposed for statistical gate sizing [210]. While the most natural model of gate delay dependence on its size is posynomial [216], with reasonable accuracy we could approximate it by a piecewise linear model. Then a robust linear program can be used for sizing. We adopt the following linear gate delay model:
d_i = a_i - b_i·s_i + c_i·Σ_{j∈fo(i)} s_j    (12.25)
This model captures the dependence of gate delay on the size of the gate itself (s_i) and on the sizes of the gates that load its output (Σ s_j). Here, a_i, b_i, and c_i are fitting coefficients that can be empirically determined through circuit simulation for each cell in the library. For more accurate delay modeling, a piecewise-linear model can be used. The accuracy of the model is reasonable; for example, the average error is less than 5% for a model based on three regions of linearization. Under this linear delay model, the deterministic sizing problem in terms of path delay constraints can be written as:

min Σ_i s_i
D_p = Σ_{i∈p} ( a_i - b_i·s_i + c_i·Σ_{j∈fo(i)} s_j )    (12.26)
s.t. D_p ≤ T  ∀p ∈ P
where D_p is the delay of a path p, and P is the set of all paths. The effect of process variability on gate delay can be captured by making the coefficients of the linear delay model (b_i and c_i) random variables.

We now want to set up a chance-constrained problem for circuit sizing. The overall probabilistic constraint is in terms of circuit delay: P(D_ckt ≤ T) ≥ α. To formulate an analytically convenient optimization problem, the probabilistic constraint on circuit delay must first be translated into a set of path-based constraints, in the form of P(D_p ≤ T) ≥ η, such that the resulting set of constraints well approximates, and ideally guarantees, the specified circuit timing yield level. In general, mapping a circuit timing yield value α into a set of path-dependent yield (probability) values η is a difficult task, which would require knowing and estimating the conditional probabilities of individual path delays. The delays of paths through a combinational block typically exhibit high positive correlation. Because of that, one simple choice that works reasonably well in practice is to let η = α. Experiments with the described optimization strategy have suggested that the quality of the solution is very weakly dependent on the exact choice of the yield mapping from the circuit to path constraints. Still, there is significant ongoing work on how to best guide statistical optimization using the notions of path and node criticalities [228] and yield budgeting [229]. In what follows, we simply assume that every path has to meet its timing constraint with probability η, without elaborating on how to choose the specific value of η. The chance-constrained linear gate sizing problem in terms of path delay constraints is:

min Σ_i s_i
D_p = Σ_{i∈p} ( a_i - b_i·s_i + c_i·Σ_{j∈fo(i)} s_j )    (12.27)
s.t. P(D_p ≤ T) ≥ η  ∀p ∈ P
We can now exploit the fact that the delay is linear in the gate sizes. Under the assumption of Gaussian process variability, the path delays are also Gaussian.
For compactness of expression, we can introduce the following definitions. Let the nominal gate delay be

d̄_i = a_i - b̄_i·s_i + c̄_i·Σ_{j∈fo(i)} s_j    (12.28)

and the variance of the gate delay be

σ²_di = s_i²·σ²_bi + ( Σ_{j∈fo(i)} s_j )²·σ²_ci    (12.29)
Using the equivalence between the probability constraint and the second-order cone constraint, discussed above, we can write down the robust counterpart of the linear gate sizing problem in terms of path delay constraints as an SOCP:

min Σ_i s_i
D_p = Σ_{i∈p} d̄_i + φ^{-1}(η)·( Σ_{i∈p} σ²_di )^{1/2}    (12.30)
s.t. D_p ≤ T  ∀p ∈ P
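For a single path the robust constraint is just a nominal delay plus a Gaussian margin; the sketch below evaluates it for a toy three-gate path. All coefficients, sizes, and sigmas are made-up numbers used only to show the bookkeeping of (12.28)-(12.30).

import numpy as np
from scipy.stats import norm

a    = np.array([0.20, 0.25, 0.30])     # intrinsic delays (assumed)
b    = np.array([0.05, 0.06, 0.04])     # nominal self-size coefficients (assumed)
c    = np.array([0.02, 0.03, 0.02])     # nominal load coefficients (assumed)
s    = np.array([2.0, 1.5, 1.0])        # gate sizes along the path
load = np.array([3.0, 2.5, 1.0])        # sum of fanout sizes for each gate
sig_b = 0.1 * b                          # assumed sigma of b_i
sig_c = 0.1 * c                          # assumed sigma of c_i
eta, T = 0.95, 1.0

d_bar = a - b * s + c * load                              # nominal gate delays, (12.28)
var_d = (s * sig_b) ** 2 + (load * sig_c) ** 2            # gate delay variances, (12.29)
Dp = d_bar.sum() + norm.ppf(eta) * np.sqrt(var_d.sum())   # robust path delay, (12.30)
print(f"robust path delay = {Dp:.3f}, target T = {T}, feasible: {Dp <= T}")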
The above optimization problem is in terms of paths. For a circuit of practical interest the number of paths is very large, resulting in an enormous number of constraints and the need to enumerate the set of paths. It is desirable to find a node-based restatement of the problem. The challenge is in mapping the path-based constraint

D_p = Σ_{i∈p} d̄_i + φ^{-1}(η)·( Σ_{i∈p} σ²_di )^{1/2} ≤ T    (12.31)
into a set of node-based constraints. The difficulty is in extracting the individual gate variance terms out of the single term for the standard deviation of the path delay, ( Σ σ²_di )^{1/2}. In [230], it is shown that the Cauchy-Schwarz inequality can be used to achieve this goal. The inequality states that for any u_i, α_i ∈ R,

( Σ_i u_i² )^{1/2} ≥ Σ_i α_i·u_i   for   Σ_i α_i² ≤ 1    (12.32)

We can therefore bound φ^{-1}(η)·( Σ_{i∈p} σ²_di )^{1/2} as:

φ^{-1}(η)·( Σ_{i∈p} σ²_di )^{1/2} ≥ φ^{-1}(η)·Σ_{i∈p} α_i·σ_di    (12.33)
This means that the path-based constraint can be replaced by a set of node-based constraints with the margin coefficient for node i given by φ^{-1}(η)·α_i. Multiple choices of α_i, satisfying the condition Σ α_i² ≤ 1, exist [230]. One example is α_i = l_max^{-1/2}, where l_max is the logic depth of the circuit. Another possibility is to set α_i = l_i^{-1/2}, where l_i is the length of the longest path
that contains node i. Practical experiments with the proposed optimization strategy indicate that the specific choice of the α_i introduces only a small error with respect to the quantile function of path delay. This appears to be the case because the circuit yield is a fairly weak function of the margin coefficients. Now we can finally formulate the node-based second-order conic program for sizing as:

min Σ_i s_i
AT_k ≥ AT_i + d̄_i + φ^{-1}(η)·( σ²_bi·s_i² + σ²_ci·(Σ_{j∈fo(i)} s_j)² )^{1/2},  for ∀i ∈ FI(k)
AT_o ≤ T,  for ∀o ∈ PO    (12.34)

Here AT_i is the arrival time at node i. This problem is still a second-order cone program, as the constraints are second-order cones of dimension 2. The second-order conic program for gate sizing formulated above can now be solved by a dedicated interior-point solution method. The algorithm allows efficient and superior area minimization under statistically formulated timing yield constraints. The algorithm was implemented using the commercially available conic solver MOSEK [221]. The runtime is an order of magnitude faster than the previously explored formulations of statistical circuit optimization. The runtime grew sub-quadratically in circuit size. The area savings were 22-30% at the same timing yield. The area-delay trade-off becomes extremely unfavorable at tighter performance constraints and higher yield levels, as shown in Figure 12.9. Figure 12.10 shows the dependence of the achievable area on the required yield level at different performance targets. This observation supports the expectation that statistical optimization is more important in the design of very high-performance parts. Figure 12.11 also indicates the strong dependence of the results on the assumed magnitude of process variability. It further shows that at loose timing constraints, the benefit of statistical optimization is not significant.
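A toy version of this conic formulation can be written down directly with an off-the-shelf convex modeling package. The sketch below sizes a two-gate chain; it assumes the open-source cvxpy package (not the MOSEK setup used in [210]), and every numerical coefficient is invented for illustration.

import cvxpy as cp
import numpy as np
from scipy.stats import norm

a, b, c = 0.5, 0.2, 0.05          # linear delay model coefficients, as in (12.25) (assumed)
sb, sc = 0.02, 0.01               # assumed sigmas of b and c
k = norm.ppf(0.95)                # phi^-1(eta) for a 95% per-node confidence margin
T, C_out = 0.8, 4.0               # clock target and fixed output load (assumed)

s1, s2 = cp.Variable(), cp.Variable()
AT1, AT2 = cp.Variable(), cp.Variable()

d1 = a - b * s1 + c * s2          # gate 1 is loaded by gate 2
d2 = a - b * s2 + c * C_out       # gate 2 drives the fixed output load
cons = [
    s1 >= 1, s1 <= 3, s2 >= 1, s2 <= 3,
    AT1 >= d1 + k * cp.norm(cp.hstack([sb * s1, sc * s2])),
    AT2 >= AT1 + d2 + k * cp.norm(cp.hstack([sb * s2, sc * C_out])),
    AT2 <= T,
]
prob = cp.Problem(cp.Minimize(s1 + s2), cons)
prob.solve()
print("sizes:", round(s1.value, 3), round(s2.value, 3), "area:", round(prob.value, 3))

The solver sizes the gates up just enough for the margin-padded arrival time at the output to meet the target, which is the behavior the node-based constraints of (12.34) are designed to produce.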
12.3 TECHNIQUES FOR TIMING AND POWER YIELD IMPROVEMENT

As we argued above, in the nanometer regime timing yield alone is not sufficiently predictive of overall parametric yield, as it ignores variability in leakage power. We observe that yield is especially low in high-frequency bins because of power variability. This necessitates the development of computationally efficient statistical optimization techniques to minimize the parametric yield loss resulting from power and delay variability [211][212]. Our strategy is to extend techniques that are effective in a deterministic setting to a statistical formulation. We again focus on post-synthesis optimization, where multiple CAD techniques have been explored as ways to minimize leakage: they include transistor
Fig. 12.9. The area-delay curves at different yield levels. Statistical optimization does uniformly better than the deterministic optimization at the same yield level.
Fig. 12.10. The sensitivity of area to yield level. The area-delay trade-off becomes extremely unfavorable at tighter clock period targets and higher yield levels.
Fig. 12.11. The minimum achievable area goes up for a higher magnitude of parameter variation, especially for tight timing constraints.
stacking, gate length biasing, gate and transistor sizing, and multi-threshold voltage optimizations. Larger savings in leakage can possibly be achieved if optimization is moved into synthesis, e.g., technology mapping, where there is more structural flexibility to select optimal cells to implement the function. While relying on different implementation strategies, all circuit tuning techniques essentially trade the slack of non-critical paths for power reduction. Here we concentrate on post-synthesis sizing and dual-Vth allocation, which are effective in reducing leakage and have been used effectively in a deterministic setting [213][214][215]. Circuit experiments suggest that the optimal allocation of high-Vth transistors on non-critical paths can give a 1.3-3.6× reduction in leakage power compared to a circuit utilizing only low-Vth transistors. The leakage reduction can be as high as 3.1-6.2× if high-Vth allocation is done jointly with transistor sizing [214].

Reduction in leakage can be achieved by either downsizing the transistors or setting them to a higher Vth. Table 12.1 shows the changes in delay and leakage power when the transistor threshold voltage is changed from low Vth to high Vth, measured for a 130nm technology. Values of leakage and delay at the 99th percentile of their distributions are represented by the quantile function, e.g., q^{.99}(P). We observe that low-Vth devices exhibit a higher leakage spread, while high-Vth devices exhibit a higher delay spread. Importantly, leakage variability strongly depends on both sizing changes and Vth assignments. Downsizing reduces gate area, increasing delay and reducing mean leakage.
12.3 TECHNIQUES FOR TIMING AND POWER YIELD IMPROVEMENT
271
But reduction in transistor width and area also increases the standard deviation of Vth, which leads to a larger leakage variance.

Table 12.1. Low Vth devices exhibit a higher leakage spread while high Vth devices exhibit a higher delay spread.

                   Delay                               Leakage
                   Nominal   Q(0.99)   Increase        Nominal   Q(0.99)   Increase
Low Vth (0.1 V)    1         1.15      115%            1         2.15      215%
High Vth (0.2 V)   1.2       1.5       125%            0.12      0.2       170%
The objective of introducing statistical optimization is to take variance into account while making decisions to change Vth or gate size. From the viewpoint of trading slack for power, slack can now be assigned more optimally: the priority for allocating slack should be given to the paths that increase the leakage variability in a minimal way. This will lead to an improvement of power-limited parametric yield.

The general problem of gate sizing and dual Vth assignment is an NP-hard problem [219], as is the extension to multiple threshold voltages. A computationally feasible approach to optimize circuits of any significant size will have to be based on approximating techniques. One approach is to extend to a statistical formulation the optimization techniques that proved effective in deterministic optimization. Sensitivity-driven iterative algorithms are flexible, and easy to implement and apply, as long as the sensitivities of the cost function to alternative solutions can be computed. The classic iterative coordinate-descent algorithm for sizing is TILOS [216]. The algorithm makes changes to the circuit in the order given by the sensitivity list. This strategy has been extended to an algorithm for sizing and dual Vth assignment that proceeds as follows [214]. With all the gates initially at low Vth, timing constraints are met by gate sizing alone. Then a sensitivity measure is used to assess the value of changing a gate from low Vth to high Vth to reduce power: s_power = ∆P/∆D, where ∆P < 0 and ∆D > 0 are the changes in power dissipation and delay of the gate if the gate Vth is changed from low to high. The gate with the maximum sensitivity is then swapped with a high-Vth version of the gate. If, after the change, the circuit does not meet timing, a new sensitivity measure for upsizing gates to reduce delay is defined as s_delay = ∆D/∆P, where ∆D and ∆P are now the changes in power and delay due to an upsizing of a given gate [214]. This sensitivity metric is used to determine the order of gate upsizing. The algorithm is iterative and proceeds by performing greedy changes based on the ordered sensitivity values. The algorithm is easy to implement. Its greedy decision-making may lead to sub-optimal decisions, however.

A statistical formulation of this algorithm models both leakage and delay as random variables. The algorithm aims to minimize the leakage power
measured at a given quantile point while meeting a timing constraint with some probability. The first enhancement to the deterministic technique is to use statistically defined timing constraints computed by statistical timing analysis. The challenge is in how to make statistically aware decisions. Variability in delay and power makes the sensitivity metrics (s_delay and s_power) random variables. One way to enable statistical decision-making is to rank the preference for raising Vth or upsizing using the quantile values of the random sensitivities [212]. The power savings enabled by this algorithm, as compared to its deterministic counterpart, appear to be quite substantial. Across different benchmark circuits, the savings range from 15% to 35%. This technique has the advantage of being based on reliable coordinate-descent algorithms known to be effective in gate sizing. Yet, an extension of the deterministic algorithm to the statistical setting causes the run-time to grow considerably. We again turn to robust linear programming to show the possibility of statistical yield enhancement techniques that achieve high computational efficiency while treating both timing and power metrics probabilistically. The computational properties of the robust LP (and SOCP), and its ability to handle variability explicitly, make it a natural vehicle for parametric yield maximization. Specifically, we use SOCP for gate sizing and dual Vth assignment to maximize power yield under timing yield constraints. Delay, or slack, budgeting is a general paradigm for timing optimization, and can be used for power minimization with sizing and dual Vth assignment [213]. Delay budgeting can be formulated as a linear programming problem, and the advantage of this formulation is that circuit modifications can be driven by global, rather than greedy, decision-making. To enable using slack as a decision variable, we can work directly with the power-delay curves that characterize the possible configurations of the individual gates in the circuit. More specifically, the algorithm uses a gate configuration space formed by any valid assignment of sizes and threshold voltages to transistors in a library gate. For any fixed load, a set of Pareto points in the power-delay space can be identified among all the possible configurations (Figure 12.12). When the configuration space is continuous, and delay is a monotonic and separable function, such a procedure is optimal for small increments of slack assignments [220]. The deterministic algorithm for power minimization by delay budgeting is a two-phase iterative relaxation scheme. It is an interleaved sequence of (i) slack redistribution using linear programming, and (ii) a search over the gate configuration space to identify a configuration that will absorb the assigned slack. The input to the first phase is a circuit sized for maximum slack using a transistor (gate) sizing algorithm, such as TILOS, with all the devices set to low Vth. This circuit has the highest possible power consumption of any realization. LP is used to distribute the available slack to gates optimally, with weights given by the power-delay sensitivities. The slack is allocated in a way that maximizes the power reduction. A linear measure of a gate's power-delay sensitivity is power reduction per unit of added delay: s = −ΔP/ΔD. A unit
Fig. 12.12. The power-delay space for a NAND2 gate driving two different capacitive loads. The Pareto frontier is depicted by the dashed gray lines.
of slack added to a node with a higher sensitivity leads to a greater power reduction. The use of the power-delay sensitivities to guide optimization is similar to their use in [212] in the iterative descent scheme discussed above. The difference is that instead of making greedy localized choices, a linear program is used to assign the added delays, which globally maximizes the power reduction. A minimum power solution will contain only the Pareto-optimal gate configurations. These configurations are the ones maintained in the library, along with the power-delay sensitivity value for every pair of configurations on the Pareto frontier. A linear program for power-weighted slack allocation can be expressed as:

\max \; \sum_i s_i d_i \qquad (12.35)
\text{s.t.}\;\; t_i \ge t_j + d_i^0 + d_i, \quad \forall j \in FI(i) \qquad (12.36)
t_k \le T_{max}, \quad \forall k \in PO \qquad (12.37)
Here t_i is the arrival time at node i, T_max is the required arrival time at the primary output, d_i^0 is the delay of gate i in the gate configuration from the last iteration (initially obtained by sizing for maximum slack), and d_i is the added slack. Because the use of the linear sensitivity vector (s_i) presumes a first-order linear relationship between delay and power, it is only accurate within a narrow delay range. This requires moving towards the solution in small slack increments. The second phase consists of a search among gate configurations in the library, such that the slack assigned to gates in the previous phase is optimally utilized for power reduction.
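As a concrete illustration of the LP (12.35)-(12.37), the sketch below allocates slack on a toy two-gate chain using scipy.optimize.linprog; the delays, sensitivities, and timing constraint are made-up numbers, and a real implementation would build the constraint matrix from the circuit graph and the library's Pareto curves.

```python
import numpy as np
from scipy.optimize import linprog

# Toy instance of the slack-allocation LP (12.35)-(12.37) for a two-gate chain
# (gate 1 drives gate 2).  All numbers are illustrative placeholders.
d0 = np.array([1.0, 1.2])    # current gate delays d_i^0
s  = np.array([3.0, 1.0])    # power-delay sensitivities s_i = -dP/dD
Tmax = 3.0                   # required arrival time at the primary output

# Decision vector x = [d1, d2, t1, t2]; linprog minimizes, so negate s.
c = np.array([-s[0], -s[1], 0.0, 0.0])
A_ub = np.array([
    [1.0, 0.0, -1.0,  0.0],   # d1^0 + d1 <= t1  (primary input arrives at 0)
    [0.0, 1.0,  1.0, -1.0],   # t1 + d2^0 + d2 <= t2
    [0.0, 0.0,  0.0,  1.0],   # t2 <= Tmax
])
b_ub = np.array([-d0[0], -d0[1], Tmax])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4)
print("added slack per gate:", res.x[:2])   # expect the full 0.8 budget on gate 1
```

Because gate 1 has the larger power-delay sensitivity, the LP gives it the entire slack budget, which is exactly the global (rather than greedy) behavior the formulation is meant to provide.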
We can now set up a statistical equivalent of the power minimization problem under variability, reformulated as a robust linear program. As indicated previously, this will enable a highly computationally efficient solution using interior-point methods. The notion of variability in delay and power due to process variations is explicitly incorporated into the optimization. To allow that, the delay budgeting phase is re-formulated as a robust version of the power-weighted linear program that assigns slacks based on power-delay sensitivities of gates. The robust LP is then cast into a second-order conic program that can be solved efficiently. As before, the slack assignment is interleaved with the configuration selection, in which gates absorb the assigned slack to minimize the total power. Let us consider the statistical power minimization problem under the variability of effective channel length (L_eff) and threshold voltage (V_th). These parameters have significant impact on timing (L_eff) and leakage power (V_th). An additive statistical model that decomposes the variability of both L_eff and V_th into intra-chip and chip-to-chip components can be used. Based on empirical data, it is typically safe to model both L_eff and V_th as Gaussian random variables. Then, under the leakage models described in Chapter 11, the leakage power follows a log-normal distribution. With a linear model of delay, delay variation is normally distributed. The variability affects the power and gate delay metrics, and thus the sensitivity vector. It can be shown that the vector of power-delay sensitivity coefficients also follows a log-normal distribution. When formulating the statistical power minimization problem, an equivalent formulation, which places the power-weighted slack vector into the constraint set, is more convenient. Its statistical equivalent is formulated as a chance-constrained linear program:

\min \; \sum_i d_i \qquad (12.38)
\text{s.t.}\;\; P\{\textstyle\sum_i s_i d_i \ge P_{max} - P_{const}\} \ge \eta \qquad (12.39)
P(AT_o \le T_{max}) \ge \zeta, \quad \forall o \in PO \qquad (12.40)
AT_i \ge AT_j + d_i^0 + d_i, \quad \forall j \in FI(i) \qquad (12.41)
The chance constraints refer, respectively, to the power-limited parametric yield η and the timing-limited parametric yield ζ. Based on the formulation of the model of uncertainty, they capture the uncertainty due to process parameters via the uncertainty of the power and delay metrics. The constraints are now reformulated as second-order conic constraints. Notice that in formulating the node-based probabilistic timing constraints we again face the problem of determining the margin coefficients for each node. It is possible to deal with this challenge using principles similar to the ones used for the gate sizing problem under the linear delay model. Using the fact that sensitivity is a log-normal random variable, we can transform the power constraint into one which is linear in the mean and variance of s_i d_i:
\min \; \sum_i d_i \qquad (12.42)
\text{s.t.}\;\; \bar{s}^T d + \kappa(\eta)\,(d^T \Sigma_s d)^{1/2} \le \ln(\Delta P)/\lambda(\eta) \qquad (12.43)
AT_i \ge AT_k + d_i^0 + k_i \sigma_{d_i^0} + d_i, \qquad AT_o \le T_{max} \qquad (12.44)
Here s ∼ LN(s̄, Σ_s) is the log-normal sensitivity vector with mean s̄ and covariance matrix Σ_s, and λ(η) and κ(η) are fitting functions dependent on η. An implementation of the algorithm based on the SOCP solver MOSEK [221] results in a run-time that is roughly linear in circuit size. The algorithm was run on a dual-core 1.5 GHz AMD Athlon machine with 2GB of RAM. The run-time of the statistical algorithm is about 25% longer than that of a deterministic version of the same optimization flow. It took 4.5 minutes to optimize a circuit containing 2704 nodes. On the common circuit benchmarks, the algorithm is almost 30× faster than the previously described TILOS-like algorithm [212]. This speedup is due to the special structure of the SOCP program, which is not available to general nonlinear problem solvers. The potential of statistical optimization for yield improvement, and for power and delay improvement at the same yield, can be ascertained by applying the algorithm to a number of benchmark combinational circuits. The fundamental reason for the reduction in power enabled by statistical optimization is its ability to explicitly account for the variance of the constraint and objective functions, and to allot slack more effectively. One manifestation of the superiority of statistical optimization is that it can assign more transistors to a high Vth. In one example, under otherwise identical conditions, the number of transistors set to high Vth by the statistical algorithm is 20% larger than the corresponding number for the deterministic algorithm. Compared to its deterministic counterpart, statistical optimization reduces both the mean and the spread of the leakage distribution at the same timing yield (Figure 12.13). As a result, the new algorithm, on average, reduces the static power by 31% and the total power by 17% without loss of parametric yield when used on ISCAS'85 benchmark circuits. These power reductions are at the same timing points, as validated by SSTA. The improvement strongly depends on the underlying magnitude and structure of physical process variation. The above numbers assumed that the magnitudes of L_eff and V_th variability are 8% and 7% (σ/µ), respectively, with an equal breakdown of L_eff variability into intra- and inter-chip components. As the amount of uncorrelated variability increases, i.e., the intra-chip component grows relative to the chip-to-chip component, the power savings enabled by statistical optimization increase. In addition to uniformly improving power at the same yield, statistical yield optimization tools enable trading yield for power and delay. This can be seen in Figure 12.14, which shows a set of power-delay curves for one of the benchmarks. It can be observed that at tight timing constraints the difference in power optimized for different yield levels is significant. For the same yield, the trade-off between power and arrival time
is much more marked at tighter timing constraints. For example, at a delay of 0.6ns, it is possible to trade only a 5% timing yield decrease for a 25% power reduction.
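To make the conic reformulation concrete, the sketch below sets up a simplified robust slack-allocation problem in cvxpy. It uses a Gaussian chance constraint on the power savings (a mean term minus a z-score times a norm term), which is a standard second-order cone construction, rather than the log-normal constraint with fitted κ(η) and λ(η) of (12.43); all numbers, including the margin coefficients, are placeholders.

```python
import numpy as np
import cvxpy as cp

# Simplified robust slack allocation for a two-gate chain: minimize total added
# slack subject to (i) a Gaussian chance constraint that the power savings
# s^T d meet a required reduction with high probability, written as a
# second-order cone constraint, and (ii) margined timing constraints.
# All numeric values are illustrative placeholders.
s_bar   = np.array([3.0, 1.0])          # mean power-delay sensitivities
Sigma_s = np.diag([0.20, 0.05])         # covariance of the sensitivities
z_eta   = 1.645                         # standard-normal quantile for eta = 95%
dP_req  = 1.5                           # required power reduction
d0      = np.array([1.0, 1.2])          # nominal gate delays
sig_d0  = np.array([0.05, 0.08])        # std. dev. of the gate delays
k       = 3.0                           # per-node timing margin coefficient
Tmax    = 3.6                           # required arrival time at the output

d  = cp.Variable(2, nonneg=True)        # added slack per gate
AT = cp.Variable(2)                     # arrival times at the two nodes
L  = np.linalg.cholesky(Sigma_s)        # ||L.T @ d||_2 = sqrt(d^T Sigma_s d)

constraints = [
    s_bar @ d - z_eta * cp.norm(L.T @ d, 2) >= dP_req,   # robust power savings
    AT[0] >= d0[0] + k * sig_d0[0] + d[0],               # margined timing arcs
    AT[1] >= AT[0] + d0[1] + k * sig_d0[1] + d[1],
    AT[1] <= Tmax,
]
prob = cp.Problem(cp.Minimize(cp.sum(d)), constraints)
prob.solve()
print("added slack per gate:", np.round(d.value, 3))
```

The structure mirrors (12.42)-(12.44): the variance of the sensitivity vector shows up only through the norm term, which is what an interior-point conic solver exploits for efficiency.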
Fig. 12.13. The probability distribution functions of static (leakage) power produced by a Monte Carlo simulation of the benchmark circuit (c432) optimized by the deterministic and statistical algorithms.
Fig. 12.14. Power-delay curves at different timing yield levels for one of the benchmark circuits. At larger delay, power penalty for higher yield is smaller.
12.4 SUMMARY

Traditional deterministic optimization methods are not sufficient for improving parametric yield, and new methods based on the explicit manipulation of variance need to be developed. This requires the adoption of new analysis and optimization methodologies for minimizing the parametric yield loss due to both timing and power variability. Statistical optimization is inherently very expensive computationally. Novel techniques based on robust linear programming and robust geometric programming can be used to perform statistical optimization efficiently. Several techniques for efficient parametric yield improvement have been discussed in this chapter.
13 CONCLUSIONS
This book has presented an attempt at a comprehensive treatment of topics related to the emerging disciplines of design for manufacturability and statistical design. The central premise of the book is that variability must be rigorously described as either random or systematic before meaningful measures can be taken to mitigate its impact on design procedures. This requires an understanding of the physical causes of variability in advanced semiconductor processes, techniques for measuring the relevant characteristics of data, and ways of describing the results of experiments in the rigorous language of statistics. Once the patterns of variability affecting a specific process module are understood, corrective measures can be taken. If the variability is systematic, that is, it can be traced to a clear functional dependency, such as the feature-to-feature spacing or wire density, then it can be modeled in the design flow and its impact on design precisely accounted for. If the variability is random, then statistical or worst-case techniques must be used to deal with it. The topics covered by each chapter of this book are complex enough to each deserve a book of their own. It is unfortunately impossible to do full justice to every concept discussed here. Our intent was to provide the foundation for a rigorous study of variability and its impact on the design process. We conclude by identifying additional resources that readers can use to further investigate the questions discussed in the book.
On semiconductor physics and technology

Y. Taur and T. H. Ning, Fundamentals of Modern VLSI Devices. Cambridge, UK: Cambridge University Press, 1998.
W. R. Runyan and K. E. Bean, Semiconductor Integrated Circuits Processing Technology. Addison-Wesley, 1990.
W. Maly, Atlas of IC Technologies: An Introduction to VLSI Processes. Benjamin/Cummings, 1987.
S. Wolf and R. Tauber, Silicon Processing for the VLSI Era, vol. 1, Process Technology. Sunset Beach, California: Lattice Press, 1986.
On lithography and reticle enhancement techniques

C. A. Mack, Field Guide to Optical Lithography. SPIE Press, 2006.
P. Wong, Resolution Enhancement Techniques in Optical Lithography. SPIE Press, 2001.
H. Levinson, Lithography Process Control. Bellingham, Washington: SPIE Press, 1999.
W. Glendinning and J. Helbert, eds., Handbook of Microlithography: Principles, Technology and Applications. Noyes Publications, 1991.
On circuit simulation

W. J. McCalla, Fundamentals of Computer-Aided Circuit Simulation. Kluwer Academic Publishers, 1987.
D. Foty, MOSFET Modeling with SPICE - Principles and Practice. Prentice Hall, 1997.
On timing analysis

S. Sapatnekar, Timing. Kluwer Academic Publishers, 2004.
S. Hassoun and T. Sasao, Logic Synthesis and Verification. Kluwer Academic Publishers, 2002.
A. Srivastava, D. Sylvester, and D. Blaauw, Statistical Analysis and Optimization for VLSI: Timing and Power. Springer, 2005.
On design for yield

S. W. Director and W. Maly, eds., "Statistical Approach to VLSI," in Advances in CAD for VLSI. North-Holland, 1994.
W. Maly, S. Director, and A. Strojwas, VLSI Design for Manufacturing: Yield Enhancement. Kluwer Academic Publishers, 1989.
R. Spence and R. Soin, Tolerance Design of Electronic Circuits. Reading, MA: Addison-Wesley, 1988.
J. Zhang and M. Styblinski, Yield and Variability Optimization of Integrated Circuits. Boston, Massachusetts: Kluwer Academic Publishers, 1996.
A APPENDIX: PROJECTING VARIABILITY
Many aspects of the emerging methods of DFM and statistical design depend on both the magnitudes of parameter variations and the specific types and patterns of variability. To a significant degree, the innovation is required because of the rise of a specific type of variability - intra-chip variability. The other scales of variability can, to a large extent, be handled by traditional methods. Once this is understood, the next question to address is how much variability of a specific type we expect to see. Having answers to these questions is crucial for practical engineers and researchers alike. The ability to project the expected amounts of variability into the future is particularly important for researchers, since these numbers shape research priorities. The standard reference for much numerical data relating to trends in the semiconductor industry, including variability projections, is the International Technology Roadmap for Semiconductors (ITRS). The roadmap contains multiple references to variability and manufacturability because many steps in the semiconductor process (e.g. lithography, front end, and back end) contribute to the overall variability budget. Table A.1 below contains a summary of variability projections with respect to some key process (Vth, Lgate) and circuit (delay, power) level parameters. This table comes from the 2006 Edition of ITRS [153]. The task of projecting the magnitudes of variability into the future is quite involved. In an attempt to clarify some of the questions that may arise in interpreting these projections, the following section is organized as answers to a set of questions that a user of the roadmap may have.
Table A.1. The amount of variability in key parameters predicted by the latest edition of ITRS (adapted from [153]).

Year of Production                 '05   '06   '07   '08   '09   '10   '11   '12   '13
DRAM 1/2 Pitch (nm)                 80    70    65    57    50    45    40    36    32
% VDD Variability                  10%   10%   10%   10%   10%   10%   10%   10%   10%
% VTH Variability (doping)         24%   29%   31%   35%   40%   40%   40%   58%   58%
% VTH Variability (all sources)    26%   29%   33%   37%   42%   42%   42%   58%   58%
% CD Variability                   12%   12%   12%   12%   12%   12%   12%   12%   12%
% Delay variability                41%   42%   45%   46%   49%   50%   53%   54%   57%
% Power variability                55%   55%   56%   57%   57%   58%   58%   59%   59%
Methodology of Predictions

• It is difficult enough to predict the nominal values of parameters for future technologies. How can we predict the future magnitudes of variability?
Expert practitioners make it their business to precisely follow the history of various technology "capabilities", and to be constantly aware of new developments in manufacturing equipment, processes, materials, and methods that can influence said capability. These are the people who make the ITRS predictions. Since these predictions are reviewed by a large number of practitioners, they represent the best information we have about nominal values as well as the expected amount of variability in the future. The expectation is that near-term values are quite precise, but that values looking out two or more technology nodes are less so. Basically, a number of experts get together for (many) meetings and converge on the numbers presented in the document. Essentially, the predictions are based on looking at the empirical data so far and extrapolating into the future.

• Do the variability magnitude numbers in the roadmap represent targets or limits?
There are two ways to think about the variability numbers. The first view is that these are the desired magnitudes of variability, i.e. targets, such that if variability is worse than they predict the design yield will be unacceptable. The second is that these are the physically defined limits on the ability to control the parameters. The variability projections in the roadmap represent the collective solution to the conflict between the above viewpoints. They represent a compromise between the maximum amount of variability that the designers can tolerate and the tightest tolerances that the process engineers will be able to deliver. This compromise is achieved through the expert consensus that takes into account the innovations that are likely to occur in the future.
• How do the numbers take into account the evolution of process tolerances over time?
Over the lifetime of any given semiconductor manufacturing technology, two phenomena occur that act to improve its overall performance. First, the staff running the fabrication facility gain more experience with the technology, getting a better understanding of the sensitivity of various important parameters to machine settings and procedures. This natural process, often referred to as "yield learning", is well documented and is "built in" to the technology assumptions, in the sense that the specification for the technology is typically tighter than what can be achieved early in its life cycle; the idea is that by the time designs enter the fab in volume, the technology will have been refined to the point that it can achieve tighter tolerances than at the beginning. Second, manufacturing equipment makers will come up with new and better machines, with improved tolerances, throughput, and uniformity. Similar to the yield learning trend above, this means that at some intermediate point in the lifetime of a technology, the underlying equipment is updated, enabling tighter tolerances. The upshot is that the numbers reported tend to take these phenomena into account; they thus represent process capability "after" the necessary learning and updates have occurred.

• Are variability projections based on theoretical understanding or empirical extrapolation?
For some variability mechanisms, the scaling of variance can be predicted from first principles. This is the case, for example, for Vth variation due to random dopant fluctuation (RDF), known to be governed by Poisson statistics. Since we know the properties of Poisson statistics, we know that if the target number of atoms that we want to implant into the channel is N, then the standard deviation of the actual number of dopants is N^{0.5}. Thus, for this specific mechanism we can predict the magnitude of variation for future technologies. For many other variability mechanisms, projections will rely significantly on empirical extrapolation in combination with an understanding of the physical principles involved. For example, the experts who make the predictions about the evolution of lithography have a firm understanding of the physics and theory involved. Given the uncertainty in technology directions, machinery, and innovations constantly coming up, there will necessarily be some empirical component to the predictions. Even in the case of RDF, however, the empirical component is not negligible. While the variance of the number of dopant atoms is exactly predicted from theory, the impact on Vth depends on the details of device design, such as the 2-D doping profile. New device design ideas may reduce the severity of Vth variation; there is interest, for example, in using undoped channels to avoid the Vth fluctuations that arise from RDF.
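As a simple worked instance of the Poisson argument (the dopant count here is only an illustration):

\sigma_N = \sqrt{N} \;\;\Rightarrow\;\; \frac{\sigma_N}{N} = \frac{1}{\sqrt{N}}, \qquad \text{e.g. } N = 100 \;\Rightarrow\; \frac{\sigma_N}{N} = 10\%,

so halving the number of channel dopants with scaling increases the relative fluctuation by a factor of about 1.4.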
• The roadmap contains several CD variability numbers, including the total 3σ variation and the dense/isolated bias. How are we to
put these two categories together to arrive at the final variability assessment?

Example: The answer depends on the phase of the design process at which variability is being considered. The first scenario is when the analysis occurs before physical design. In this case, no information is available about the layout implementation, and thus whether a device is in an isolated or dense environment is not known. The second scenario is after physical design. In this case, the complete layout is determined, and each device can be tagged as being isolated or dense. Suppose that we are told that the overall CD variation is expressed as a normal distribution with mean µ and standard deviation σ:

CD = N(\mu, \sigma^2)

Suppose further that we are told that dense devices have a "bias" over isolated devices of a factor b, i.e.,

CD_{iso} = N(\mu, \sigma^2), \qquad CD_{den} = N(\mu + b, \sigma^2)

Then it is clear that before physical design (PD), one would use a model for the overall CD of

CD = N(\mu, \sigma^2) + b\,U(r)

where U(r) is a random number generator which returns 0 r% of the time, and 1 (100−r)% of the time. After PD, one would use the equations for CD_iso and CD_den directly, of course.
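A minimal sketch of the two usage modes just described, drawing CD samples before physical design (environment unknown) and after physical design (each device tagged); µ, σ, b, and the isolated fraction r are placeholder values.

```python
import numpy as np

# Sketch of the pre- vs. post-physical-design CD models described above.
# mu, sigma, b and the isolated fraction r are illustrative placeholders.
rng = np.random.default_rng(1)
mu, sigma = 32.0, 1.5          # nominal CD and its standard deviation (nm)
b = 2.0                        # dense-over-isolated CD bias (nm)
r = 0.4                        # fraction of devices in an isolated environment
n = 100_000

# Before PD: environment unknown, so mix the two populations with U(r).
env_is_dense = rng.random(n) >= r                 # U(r): 0 with prob r, 1 otherwise
cd_pre_pd = rng.normal(mu, sigma, n) + b * env_is_dense

# After PD: each device is tagged, so use CD_iso or CD_den directly.
cd_iso = rng.normal(mu, sigma, n)
cd_den = rng.normal(mu + b, sigma, n)

print("pre-PD mean/std:", cd_pre_pd.mean(), cd_pre_pd.std())
print("iso    mean/std:", cd_iso.mean(), cd_iso.std())
print("dense  mean/std:", cd_den.mean(), cd_den.std())
```

The pre-PD population shows a larger overall spread than either post-PD population, since the unknown environment adds a bimodal component on top of σ.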
Decomposition into Spatial Scales

• Does the roadmap show the breakdown of variability into inter-chip and intra-chip variability components?
In the newest roadmap (the 2007 version) there is an attempt to properly categorize variability components as within-die, within-mask, within-wafer, and so on. Older versions did not do this directly in all relevant places. Where it is not clear, the assumption should be that the variability reported is the total variability.

• How much of the total variability is chip-to-chip and how much of it is intra-chip?
This number varies depending on many factors. A prominent factor is the size of the die. Even for one technology, a small die will have less within-die variability than a large die. Also, a "uniform" die made up of fairly similar parts will exhibit less spread than a heterogeneous die composed of parts with very different layout characteristics. In general, assuming that roughly half of the total variability is exhibited within die is a good starting point. Recall, however, that much of that variability will be "systematic" in nature and therefore must be treated as such. For a specific example of a variability decomposition the reader may consult section 6.5 of the book.
How to Interpret and Use the Roadmap

• Does the variability number in an ITRS table refer to the ultimate un-correctable variability of that parameter?
We know that multiple corrective steps are now routinely used in a semiconductor process to reduce variability; examples are optical proximity correction and dummy fill insertion. Unless clearly stated otherwise, any tolerance in the ITRS tables should be understood as the total variability remaining after all corrective measures have been applied. Since the design rules for a specific technology are generated with a specific manufacturing flow in mind, designers do not have the option of "opting out" of corrective processes such as dummy metal fill or OPC. Thus there is little value in quoting raw variability, since no design will encounter it.

• The variance of some parameters depends on the gate area. What device size is assumed for the variability numbers in ITRS?
The standard deviation of some parameters depends on the area. Most notably, the standard deviation of Vth is inversely proportional to the square root of gate area. The standard deviation of Vth in ITRS is for minimum-width devices. The ITRS refers to DRAM devices, which are usually small, and to microprocessor and ASIC devices, which are generally large. A good rule of thumb is to assume that small devices have a W/L ratio of around 2, while a large device would have a W/L ratio of more than 5.
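A common first-order way to express the area dependence quoted above (a Pelgrom-style scaling assumption, not an ITRS prescription) is

\sigma_{V_{th}}(W, L) \;\approx\; \sigma_{V_{th}}(W_{min}, L_{min}) \sqrt{\frac{W_{min} L_{min}}{W L}},

so quadrupling the gate area roughly halves the Vth standard deviation; this applies to the random spread of Vth, while shifts in the nominal value, as discussed in the next question, are a different matter.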
• How does one scale these numbers for other device sizes?
This is not generally possible. For example, Vth vs. Lgate is a non-linear curve, and if one were to change Lgate it is not immediately clear what the change in Vth would be. Due to the nature of the lithography process, it is also not guaranteed that the variance of Lgate will be the same for different values of nominal Lgate. In cases like this it is probably best to rely on a technology-predictive model instead.
• Which chapters in the roadmap contain information about the variability of parameters?
The following chapters in the 2005 ITRS are relevant for DFM.

• Design: this chapter has a section explicitly on DFM.
• Lithography: one of the main sources of information on systematic and random variability related to the photolithography process; many of the tables have CD control tolerances in them.
• Interconnect: the main source of information on wire variability; many of the tables report tolerances on dimensions, sheet resistivity, and contact resistance.
• Yield Enhancement: a defect-centric view of yield; one of the major areas from a DFM perspective.
• Metrology: a small section on electrical characterization is here, useful in connection with the characterization of systematic and random variability.
• Modeling & Simulation: a useful chapter for understanding the linkage between variability models and the prediction of overall integrated circuit performance.
Projecting CD Variability: A Case Study

In this section we illustrate how the ITRS roadmap can be used to make first-order variability predictions by building a complete model for critical dimension (CD) variability based on the 2005 roadmap. Our goal is to demonstrate how an understanding of the roadmap and its assumptions can be leveraged to make first-order predictions of parametric variability. This is needed because the roadmap contains many entries pertaining to the final CD variability, and there is a need to integrate them in order to arrive at an overall assessment. Ultimately, such predictions can be folded into a performance model which can be used to evaluate the impact of this variability on important circuit metrics like delay and power (the Design chapter in the 2007 update is slated to contain exactly such a performance model). The data we need to make these predictions comes from the Lithography section of the ITRS, tables 77a and 78a, from which we get the following data:

Table A.2. CD tolerance data from the 2005 ITRS Roadmap.

Year                                   '05   '06   '07   '08   '09   '10   '11   '12   '13
MPU Physical Gate Length (nm)           32    28    25    23    20    18    16    14    13
Line width roughness (3σ) (nm)         4.2   3.8   3.4     3   2.7   2.4   2.1   1.9   1.7
Overlay (3σ) (nm)                       15    13    11    10     9     8     7     6     5
MEF (isolated)                         1.4   1.4   1.6   1.8     2   2.2   2.2   2.2   2.2
CD uniformity (isolated) (3σ) (nm)     3.8   3.4   2.6   2.1   1.7   1.3   1.2   1.1     1
MEF (dense)                              2     2   2.2   2.2   2.2   2.2   2.2   2.2   2.2
CD uniformity (dense) (3σ) (nm)        7.1     6   4.8   4.3   3.8   3.4     3   2.7   2.4
CD linearity (nm)                       13    11    10     9     8   7.2   6.4   5.6   5.1
CD mean to target (nm)                 6.4   5.6   5.2   4.6     4   3.6   3.2   2.8   2.6
In the table, certain numbers are underlined to reflect the fact that they represent red fields in the ITRS table. Briefly, red is used to denote situations where no known solutions exist to meet the desired level of capability; in a sense, it points to where innovation is required in order to keep the roadmap on track. We begin by explaining how the terms in the table are to be interpreted. The physical gate length is simply the desired size of the gate. Line width roughness describes the CD deviation for a device of typical width. Overlay denotes the possible misalignment between masks and usually impacts structures such as vias. In the context of CD control, however, and in combination with the corner rounding phenomenon, overlay exhibits itself as CD variability, as shown in Figure A.1. Corner rounding is discussed in greater detail in Chapter 2, and Figure 2.5 shows an example of this behavior. Based on qualitative analysis, we assign 5% of the overlay to CD variation.
Fig. A.1. Illustration of how overlay, combined with corner rounding, can impact critical dimension.
MEF is the mask error factor, also known as the mask error enhancement factor, which we also discussed in Chapter 2. CD uniformity refers to variations in dense and isolated features due to etch and related phenomena. The CD uniformity entries in the table are mask-level numbers. (The CD linearity and CD mean to target are also defined at the mask level.) Equation 2.1 is used to translate these quantities into polysilicon CD variations. We assume that
the dense/isolated bias is zero, which is equivalent to assuming that the final CD variations for dense and isolated structures both have a mean of zero. CD linearity refers to the maximum length deviation for structures with constant spacing but with non-identical sizes. (In the context of restricted design rules, such a situation happens when, for example, device length adjustments are used to reduce leakage.) Finally, the CD mean to target term refers to the overall bias between drawn and fabricated features. The classic sum-of-variances expression is used to compute the total variance. The CD linearity and CD mean to target terms are defined as range variables, rather than random variables; we take them to correspond to a 3σ deviation for the purpose of this computation. Then, the overall formula to calculate CD variability from the data above is:

\sigma_{tot}^2 = \sigma_{iso}^2 + \sigma_{den}^2 + \sigma_{lin}^2 + (0.05\,\sigma_{ovr})^2 + \sigma_{ler}^2 \qquad (A.1)
where σ_iso is the standard deviation of CD variation in isolated structures, σ_den is the standard deviation of CD variation in dense structures, σ_lin is the computed standard deviation of the linearity term, σ_ovr is the standard deviation of the overlay term, and σ_ler is the standard deviation of the line edge roughness term. The result, for the data presented above, is a constant 3σ = 23% CD tolerance over the time horizon for which the tables are defined. We noted, however, that many of the numbers in the table are red, meaning that they are currently beyond our capability. One way to bound the overall variability, then, is to assume that these red numbers will not scale, i.e., that we will do no better than what we currently know we can do. In this case, the data table would look like this:

Table A.3. CD tolerance data from the 2005 ITRS Roadmap, assuming that red values are not possible.

Year                                   '05   '06   '07   '08   '09   '10   '11   '12   '13
MPU Physical Gate Length (nm)           32    28    25    23    20    18    16    14    13
Line width roughness (3σ) (nm)         4.2   3.8   3.4   3.4   3.4   3.4   3.4   3.4   3.4
Overlay (3σ) (nm)                       15    13    11    11    11    11    11    11    11
MEF (isolated)                         1.4   1.4   1.6   1.8     2   2.2   2.2   2.2   2.2
CD uniformity (isolated) (3σ) (nm)     3.8   3.4   3.4   3.4   3.4   3.4   3.4   3.4   3.4
MEF (dense)                              2     2   2.2   2.2   2.2   2.2   2.2   2.2   2.2
CD uniformity (dense) (3σ) (nm)        7.1     6   4.8   4.8   4.8   4.8   4.8   4.8   4.8
CD linearity (nm)                       13    11    10    10    10    10    10    10    10
CD mean to target (nm)                 6.4   5.6   5.2   4.6     4     4     4     4     4
In this case, the overall CD tolerance would rise from 23% in 2005 to 45% in 2013; Figure A.2 illustrates the trend in question. One can view these two cases (constant tolerance vs. rapidly increasing tolerance) as the two extremes, and assume that the actual tolerance will lie somewhere in between. As innovation occurs and some, hopefully most, of the difficult problems indicated by the red numbers are solved, the expected amount of variability will be correspondingly reduced.
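As a sketch of how a number like the 23% figure is assembled, the snippet below evaluates equation (A.1) for a single year. The σ inputs are placeholders standing in for wafer-level 1σ values (the 3σ entries of Table A.2 divided by 3, after the mask-to-wafer translation of Equation 2.1, which is not reproduced here); they were chosen only so that the result lands near the 23% level quoted above.

```python
import numpy as np

# Evaluate equation (A.1) for one technology year.  The sigma inputs are
# placeholder wafer-level 1-sigma values, not the roadmap entries themselves.
def total_cd_sigma(sig_iso, sig_den, sig_lin, sig_ovr, sig_ler):
    return np.sqrt(sig_iso**2 + sig_den**2 + sig_lin**2
                   + (0.05 * sig_ovr)**2 + sig_ler**2)

gate_length = 32.0                     # nm, the 2005 MPU physical gate length
sig_tot = total_cd_sigma(sig_iso=1.3, sig_den=1.2, sig_lin=0.9,
                         sig_ovr=5.0, sig_ler=1.4)
print(f"3-sigma CD tolerance: {3 * sig_tot / gate_length:.1%} of gate length")
```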
Fig. A.2. Lower and upper bounds on CD control as predicted by the 2005 ITRS document.
References
1. M. Mani, A. Devgan, and M. Orshansky, “An efficient algorithm for statistical minimization of total power under timing yield constraints,” in Proceedings of Design Automation Conference, 2005, pp. 309-314. 2. G. de Veciana, M. Jacome and J.-H Guo, “Hierarchical algorithms for assessing probabilistic constraints on system performance,” in Proceedings of Design Automation Conference, 1998, pp.251-256. 3. S. Martin, K. Flautner, T. Mudge, and D. Blaauw, “Combined dynamic voltage scaling and adaptive body biasing for lower power microprocessors under dynamic workloads,” in Proceedings of International Conference on Computer Aided Design, 2002, pp. 721-725. 4. A. Keshavarzi et al, “Effectiveness of reverse body bias for leakage control in scaled dual Vt CMOS ICs,” in Proceedings of the International Symposium on Low Power Electronics and Design, 2001, pp. 207-212. 5. J. Rabaey, A. Chandrakasan, B. Nikolic, Digital Integrated Circuits: A Design Perspective, Second Edition, Prentice Hall, 2003. 6. K. Bowman, and J. Meindl, “Impact of within-die parameter fluctuations on the future maximum clock frequency distribution,” in Proceedings of Custom Integrated Circuits Conference, 2001, pp. 229-232. 7. K. Bowman, S. Duvall, and J. Meindl, “Impact of within-die and die-to-die parameter fluctuations on maximum clock frequency,” IEEE Journal of SolidState Circuits, vol. 37, no.2, pp. 183-190, February 2002. 8. A. Asenov, G. Slavcheva, A. R. Brown, J. H. Davies, and S. Saini, “Quantum mechanical enhancement of the random dopant induced threshold voltage fluctuations and lowering in sub 0.1 micron MOSFETs,” in Proceedings of International Electron Devices Meeting, 1999, pp. 535–538. 9. V. V. Zhirnov, R. K. Cavin, Fellow, “On Designing Sub-70-nm Semiconductor Materials and Processes,” IEEE Transactions on Semiconductor Manufacturing, vol. 15, no.2, pp. 157-168, May 2002. 10. C.T. Liu, F.H. Baumann, A. Ghetti, and H. H. Vuong, “Severe thickness variation of sub-3 nm gate oxide due to Si surface faceting, poly-Si intrusion, and corner stress,” in Procedings of VLSI Tech Digest, 1999, pp. 75-76. 11. H.-S. Wong, D. J. Frank, P. Solomon, C. Wann, and J. Wesler, “Nanoscale CMOS,” in Proceedings of the IEEE, vol. 87, no. 4, 1999, pp. 537-570.
12. J. A. Croon, G. Storms, S. Winkelmeier, and I. Pollentier, “Line-Edge Roughness: Characterization, Modeling, and Impact on Device Behavior,” in Proceedings of International Electron Devices Meeting, 2002, pp. 307-310. 13. P. Oldiges, Lin Qimghuang, K. Petrillo, M. Sanchez, M. Leong, and M. Hargrove, “Modeling line edge roughness effects in sub 100 nanometer gatelength devices,” in Proceedings of the International Conference on Simulation of Semiconductor Processes and Devices, 2000, pp.131-134. 14. A. Asenov, S. Kaya, and J.H. Davies, “Intrinsic threshold voltage fluctuations in decanano MOSFETs due to local oxide thickness variations,” IEEE Transactions on Electron Devices, vol.49, no.1, pp.112-19, January 2002. 15. V. K. De, X. Tang, and J. D. Meindl, “Random MOSFET parameter fluctuation limits to gigascale integration(GSI),” in Proceedings of Symposium on VLSI Technology, 1996, pp. 198-199. 16. A. Asenov, G. Slavcheva, A. R. Brown, J. Davies, and S. Saini, “Increase in the random dopant induced threshold fluctuations and lowering in sub-100 nm MOSFETs due to quantum effects: a 3-D density-gradient simulation study,” IEEE Transactions on Electron Devices, vol. 48, no. 4, pp. 722-729, April 2001. 17. T. Mizuno, J. Okamura, and A. Toriumi, “Experimental study of threshold voltage fluctuation due to statistical variation of channel dopant number in MOSFET’s,” IEEE Tranactions on Electron Devices, vol. 41, pp. 2216–2221, November 1994. 18. M. Orshansky, L. Milor, M. Brodsky, L. Nguyen, G. Hill, Y. Peng, and C. Hu, “Characterization of spatial CD variability, spatial mask-level correction, and improvement of circuit performance” in Proceedings of SPIE, vol. 4000, pp. 602-611, July 2000. 19. D. Gerold, presentation at Sematech Technical Conference, Lake Tahoe, NV, 1997. 20. D. G. Chesebro, J. W. Adkisson, L. R. Clark, S. N. Eslinger, M. A. Faucher, S. J. Holmes, R. P. Mallette, E. J. Nowak, E. W. Sengle, S. H. Voldman, and W. Weeks, “Overview of gate linewidth control in the manufacture of CMOS logic chip,”IBM Journal of Research and Development, vol. 39, no. 1, pp. 189-200, January 1995. 21. C. Yu, “Integrated Circuit Process Design for Manufacturability Using Statistical Metrology,” Ph.D. dissertation, UC Berkeley, 1996. 22. J. Cain and C. Spanos , “Electrical linewidth metrology for systematic CD variation characterization and causal analysis”, in Proceedings of SPIE, vol. 5038, 2003, pp. 350-361. 23. D. Schurz, W. W. Flack, S. Cohen, T. Newman, K. Nguyen, “The Effects of Mask Error Factor on Process Window Capability,” in Proceedings of SPIE, vol. 3873, pp. 215-225, December 1999. 24. G. S. Chua, C. J. Tay, C. Quan, and Q. Lin, “Performance improvement in gate level lithography using resolution enhancement techniques,” Microelectronic Engineering, vol. 75, no. 2, pp. 155–164, August 2004. 25. R. A. Gottscho, C. W. Jurgensen, and D. J. Vitkavage, “Microscopic uniformity in plasma etching,” Journal of Vacuum Science & Technology B: Microelectronics and Nanometer Structures, vol. 10, no. 5, pp. 2133-2147, September 1992. 26. B. Morgan, C. M. Waits, and R. Ghodssi , “Compensated aspect ratio dependent etching (CARDE) using gray-scale technology,” Microelectronic Engineering, vol. 77, no. 1, pp. 85-94, January 2005.
27. M. Ercken, L.H.A. Leunissen, I. Pollentier, G.P. Patsis, V. Constantoudis, and E. Gogolides, “Effects of different processing conditions on line-edge roughness for 193nm and 157nm resists,” in Proceedings of SPIE, vol. 5375, pp. 266-275, May 2004. 28. V. Constantoudis, G. P. Patsis, L. H. A. Leunissen, and E. Gogolides, “Line edge roughness and critical dimension variation: Fractal characterization and comparison using model functions,” Journal of Vacuum Science & Technology B, vol. 22, no. 4, pp. 1974-1981, July 2004. 29. J. Croon, G. Storms, S. Winkelmeir, I. Pollentier, M. Eracken, S. Decoutre, W. Sansen, and H.E. Maes, “Line-edge roughness: characterization, modeling and impact on device behavior,” in Proceedings of International Electron Devices Meeting, 2002, pp. 307-310. 30. M. Pelgrom, A. Duinmaijer, and A. Welbers, “Matching properties of MOS transistors,” IEEE Journal of Solid-State Circuits, vol. 24, no. 5, pp. 14331439, October 1989. 31. N. Hakim, “Statistical Performance Analysis and Optimization of Digital Circuits,” presented at Design Automation Conference, Anaheim, CA, June 2005. 32. T. Gan, “Modeling of Chemical Mechanical Polishing for Shallow Trench Isolation,” Ph.D. dissertation, MIT, 2000. 33. T. Speranza, Wu Yutong, E. Fisch, J. Slinkman, J. Wong, K. Beyer, “Manufacturing optimization of shallow trench isolation for advanced CMOS logic technology,” in Proceedings of Advanced Semiconductor Manufacturing Conference, 2001, pp. 59-63. 34. S. Roy and A. Asenov, “Where Do the Dopants Go,” Science, vol. 309, no. 5733, pp. 388-390, July 2005. 35. R. W. Keyes, “The effect of randomness in the distribution of impurity atoms on FET thresholds,” Appied. Physics A: Materials Science & Processing, vol. 8, no. 3, pp. 251–259, November 1975. 36. D. J. Frank, R. H. Dennard, E. Nowak, P. M. Solomon, Y. Taur, and H.S. P. Wong, “Device Scaling Limits of Si MOSFETs and Their Application Dependencies,” in Proceedings of IEEE, vol. 89, no. 3, pp. 259-288, March 2001. 37. D. Lee, W. Kwong, D. Blaauw, D. Sylvester, “Analysis and minimization techniques for total leakage considering gate oxide leakage,” in Proceedings of Design Automation Conference, 2003, pp. 175-180. 38. Y.-C. Yeo, “Direct tunneling gate leakage current in transistorswith ultra thin silicon nitride gate dielectric,” IEEE Electron Device Letters, pp. 540-542, November 2000. 39. S. M. Goodnick, D. K. Ferry, C.W. Wilmsen, Z. Liliental, D. Fathy, and O. L. Krivanek, “Surface roughness at the Si(100) - SiO interface,” Phys. Rev. B, vol. 32, pp. 8171–8186, 1985. 40. E. Cassan, P. Dollfus, S. Galdin, and P. Hesto, “Calculation of direct tunneling gate current through ultra-thin oxide and oxide/nitride stacks in MOSFETs and H-MOSFETs,” Microelectronics Reliability, vol. 40, no. 4, pp. 585-588, April 2000. 41. G. Scott, J. Lutze, M. Rubin, F. Nouri, and M. Manley, “NMOS Drive Current Reduction Caused by Transistor Layout and Trench Isolation Induced Stress”, International Electron Devices Meeting, 1999, pp. 827-830.
42. J. L. Hoyt, H. M. Nayfeh, S. Eguchi, I. Aberg, G. Xia, T. Drake, E. A. Fitzgerald, and D. A Antoniadis, “Strained silicon MOSFET technology”, in Proceedings of International Electron Devices Meeting, 2002, pp. 23-26. 43. P. M. Fahey, S. R. Mader, S. R. Stiffler, R. L. Mohler, J. D. Mis, and J. A. Slinkman, “Stress-induced dislocations in silicon integrated circuits,” IBM Journal of Research and Development, vol. 36, no.2, pp. 158, 1992. 44. N. Shah, “Stress modeling of nanoscale mosfet a thesis presented to the graduate school,” Ph.D. dissertation, University of Florida, 2005. 45. Y. M. Sheu, C.S. Chang, H.C. Lin, S.S. Lin, C.H. Lee, C.C. Wu, M.J. Chen, and C.H. Diaz, “Impact of STI mechanical stress in highly scaled MOSFETs,” in Proceedings of the International Symposium on VLSI Technology, Systems, and Applications, 2003, pp. 269- 272. 46. D. Hisamoto, W.-C. Lee, J. Kedzierski, H. Takeuchi, K. Asano, C. Kuo, T.-J. King, J. Bokor, and C. Hu, “FinFET—A self-aligned double-gate MOSFET scalable beyond 20 nm,” IEEE Transactions on Electron Devices, vol. 48, no. 5, pp. 880–886, December 2000. 47. A. Pirovano, A. L. Lacaita, G. Ghidini, and G. Tallarida, “On the correlation between surface roughness and inversion layermobility in Si-MOSFETs,” IEEE Electron Device Letters, vol. 21, no. 1, January 2000. 48. A. Asenov, S. Kaya, A. R. Brown, “Intrinsic parameter fluctuations in decananometer MOSFETs introduced by gate line edge roughness,” IEEE Transactions on Electron Devices, vol. 50, no. 5, pp. 1254-1260, May 2003. 49. K. Bernstein, D. J. Frank, A. E. Gattiker, W. Haensch, B. L. Ji, S. R. Nassif, E.J. Nowak, D. J. Pearson, and N. J. Rohrer, “High-performance CMOS variability in the 65-nm regime and beyond,” IBM Journal of Research and Development, vol. 50, no. 4, July 2006. 50. S. Eneman, P. Verheyen, R. Rooyackers, F. Nouri, L.Washington, R.Degraeve, B. Kaczer, V. Moroz, A. De Keersgieter, R. Schreutelkamp, M. Kawaguchi, Y. Kim, A. Samoilov, L. Smith, P.P. Absil, K. De Meyer,M. Jurczak, and S. Biesemans, “Layout impact on the performance of a locally strained PMOSFET,” in Proceddings of the 2005 Symposium on VSLI Technology, 2005, pp.22Ű23. 51. J. Cobb, F. Houle, and G. Gallatin, “The estimated impact of shot noise in extreme ultraviolet lithography,” Proceedings of SPIE, vol. 5037, no. 397, pp. 397-405, June 2003. 52. T. Tugbawa, T. Park, D. Boning, L. Camilletti, M. Brongo, and P. Lefevre, “Modeling of pattern dependencies in multi-step copper chemical mechanical polishing processes,” in Proceedings of Chemical-Mechanical Polish for ULSI Multilevel Interconnection Conference, 2001, pp. 65-68. 53. S. Lakshminarayanan, P. J. Wright, and J. Pallinti, “Electrical characterization of the copper CMP process and derivation of metal layout rules,” IEEE Transactions on Semiconductor Manufacturing, vol. 16, no. 4, pp. 668-678, November 2003. 54. Maex, M. R. Baklanov, D. Shamiryan, F. Iacopi, S. H. Brongersma, and Z. S. Yanovitskaya, “Low dielectric constant materials for microelectronics,” J. Appl. Physics, vol. 93, no. 11, pp. 8793-8841, June 2003. 55. M. Morgen, E. T. Ryan, J.-H. Zhao, C. Hu, T. Cho, and P. S. Ho, “Low dielectric constant materials for ULSI interconnects,” Annual Review of Materials Science, vol. 30, pp. 645-680, 2000.
56. M. W. Lane, X. H. Liu, and T. M. Shaw, “Environmental effects on cracking and delamination of dielectric films,” IEEE Trans. on Device and Materials Reliability, vol. 4, no. 2, pp. 142-147, June 2004. 57. R. H. Havemann and J. A. Hutchby, “High-performance interconnects: an integration overview,” Proc. of the IEEE, vol. 89, no. 5, pp. 586-601, May 2001. 58. P. Kapur, J. P. McVittee, and K. C. Saraswat, “Technology and reliability constrained future copper interconnects. I. Resistance modeling,” IEEE Trans. on Electron Devices, vol. 49, no. 4, pp. 590-597, April 2002. 59. W. Steinhoegl, G. Schindler, G. Steinlesberger, M. Traving, and M. Englehardt, “Scaling Laws for the Resistivity Increase of sub-100 nm Interconnects,” SISPAD 2003, pp. 27-30, Sept. 2003. 60. K. Banerjee, S. Im, and N. Srivastava, “Interconnect modeling and analysis in the nanometer Era: Cu and beyond,” in Proceedings of Advanced Metallization Conference, 2005. 61. S. Im, N. Srivastava, K. Banerjee, and K. E. Goodson, “Scaling analysis of multilevel interconnect temperatures for high-performance ICs,” IEEE Transactions on Electron Devices, vol. 52, no. 12, pp. 2710-2719, December 2005. 62. L. H. A. Leunissen, W. Zhang, W. Wu, and S. H. Brongersma, “Impact of line edge roughness on copper interconnects,” Journal of Vacuum Science & Technology B, vol. 24, no. 4, pp. 1859-186, July 2006. 63. W. Steinhoegl, G. Schindler, G. Steinlesberger, M Traving, and M. Englehardt, “Impact of line edge roughness on the resistivity of nanometer-scale interconnects,” Microelectronic Engineering, vol. 76, no. 1, pp. 126-130, October 2004. 64. M. Nihei, M. Horibe, A. Kawabata, and Y. Awano, “Carbon nanotube vias for future LSI interconnects,” in Proceedings of IEEE International Interconnect Technology Conference, 2004, pp. 251-253. 65. A. Naeemi and J. D. Meindl, “Monolayer metallic nanotube interconnects: promising candidates for short local interconnect,“ IEEE Electron Device Letters, vol. 26, no. 8, pp. 544-546, Aug. 2005. 66. F. Kreupl, A. P. Graham, M. Liebau, G. S. Duesberg, R. Seidel, and E. Unger, “Microelectronic interconnects based on carbon nanotubes,” in Proceedings of Advanced Metallization Conference, 2004. 67. H. Stahl, J. Appenzeller, R. Martel, Ph. Avouris, and B. Lengeler, “Intertube coupling in ropes of single-wall carbon nanotubes,” Physical Review Letters, vol. 85, no. 24, pp. 5186-5189, 2000. 68. N. Srivastava, R. V. Joshi, and K. Banerjee, “Carbon nanotube interconnects: implications for performance, power dissipation, and thermal management,” in Proceedings of International Electron Devices Meeting, 2005, pp. 249-252. 69. N. Srivastava and K. Banerjee, “Performance analysis of carbon nanotube interconnects for VLSI applications,” in Proceedings of the International Conference on Computer Aided Design, 2005, pp. 383-390. 70. Y. Massoud and A. Nieuwoudt, “Modeling and design challenges and solutions for carbon nanotube-based interconnect in future high performance integrated circuits,” ACM Journal on Emerging Technologies in Computing Systems, vol. 2, no. 3, pp. 155-196, July 2006. 71. K. O. Abrokwah, P. R. Chidambaram, and D. Boning, “Pattern Based Prediction for Plasma Etch,” IEEE Trans. on Semiconductor Manufacturing, vol. 20, no. 2, pp. 77-86, May 2007.
72. M. Yamamoto, H. Endo, and H. Masuda, “Development of a Large-Scale TEG for Evaluation and Analysis of Yield and Variation,” IEEE Trans. on Semiconductor Manufacturing, vol. 17, no. 2, pp. 111-122, May 2004. 73. S. Nassif, “Delay variability: sources, impact and trends,” in Proceedings of International Solid-State Circuits Conference, 2000, pp. 368-369. 74. B. Sheu, D. Scharfetter, P.-K. Ko, and M.-C. Jeng, “BSIM: Berkeley shortchannel IGFET model for MOS transistors,” IEEE Journal of Solid-State Circuits, vol. 22, no. 4, pp. 558-566, August 1987. 75. W. Maly, Atlas of IC Technologies: An Introduction to VLSI Processes. Benjamin/Cummings, 1987. 76. L.W. Nagel, SPICE2: A Computer Program to Simulate Semiconductor Circuits. PhD thesis, University of California, Berkeley, 1975. 77. W.J. McCalla, Fundamentals of Computer-Aided Circuit Simulation. Kluwer, 1987. 78. D. Foty, MOSFET Modeling with SPICE - Principles and Practice. Prentice Hall, 1997. 79. L.M. Dang, “A simple current model for short-channel IGFET and its application to circuit simulation,” IEEE Transactions on Electron Devices, vol.26, no. 4, pp. 436- 445, April 1979. 80. M. Sharma and N. Arora, “OPTIMA: A nonlinear model parameter extraction program withstatistical confidence region algorithms,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 12, no. 7, pp. 982-987, July 1993. 81. W. Press, B. Flannery, S. Teukolsky, and W. Vetterling, Numerical Recipes. Cambridge University Press, 1986. 82. P.E. Gill, W. Murray, and M.H. Wright, Practical optimization. Academic Press, 1981. 83. S. Kirkpatrick, C.D. Gelatt, and M.P. Vecchi, “Optimization by simulated annealing,” Science, vol. 220, no. 4598, pp. 671-680, May 1983. 84. K. Krishna, ” Statistical Parameter Extraction,” Ph.D. dissertation, CarnegieMellon University, 1995. 85. B.E. Stine, D.S. Boning, and J.E. Chung, “Analysis and decomposition of spatial variation in integrated circuit processes and devices,” IEEE Transactions on Semiconductor Manufacturing, vol. 10, no. 1, February 1997. 86. M. Orshansky, C. Spanos, and C. Hu, “Circuit Performance Variability Decomposition”, in Proceedings of Workshop on Statistical Metrology, 1999, pp.10-13. 87. I.T. Jolliffe, Principal Component Analysis, Springer, 2002. 88. R. Spence and R.S. Soin, Tolerance Design of Electronic Circuits, AddisonWesley, 1988. 89. S.W. Director and W.Maly, eds., Statistical Approach to VLSI, Advances in CAD for VLSI, vol. 8, North-Holland, 1994. 90. S.R. Nassif, A.J. Strojwas, and S.W. Director, “A Methodology for WorstCase Analysis of Integrated Circuits,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 5, no. 1, pp. 104-113, January 1986. 91. A.C. Atkinson and A.N. Donev, Optimum Experimental Designs. Oxford, 1992. 92. K. Agarwal and S.R. Nassif, “Statistical analysis of SRAM cell stability,” in Proceedings of Design Automation Conference, 2006, pp. 57-62. 93. Taur, Y. and Ning, T. H., Fundamentals of Modern VLSI Devices, Cambridge Univ. Press, 1998.
94. A.J. Bhavnagarwala, X. Tang, and J.D. Meindl, “The impact of intrinsic device fluctuations on CMOS SRAM cellstability,” IEEE Journal of Solid-State Circuits, vol. 36, no. 4, pp. 658-665, April 2001. 95. E. Seevinck, F. List, and J. Lohstroh, “Static-noise margin analysis of MOS SRAM cells,” IEEE Journal of Solid-State Circuits, vol. 22, no. 5, pp. 748-754, October 1987. 96. R.Y. Rubinstein, Simulation and the Monte-Carlo Method. Wiley, 1981. 97. R. Kanj, R. Joshi, and S.R. Nassif, “Mixture importance sampling and its application to the analysis of SRAM designs in the presence of rare failure events,” in Proceedings of Design Automation Conference, 2006, pp. 69-72. 98. W.G. Cochran, Sampling Techniques. Wiley, 1977. 99. Y.H. Shih, Y. Leblebici, and S.M. Kang, “ILLIADS: a fast timing and reliability simulator for digital MOScircuits,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 12, no. 9, pp. 1387-1402, September 1993. 100. A. Devgan and R. Rohrer, “Adaptively controlled explicit simulation,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 13, no. 6, pp. 746-762, June 1994. 101. M. Rewienski and J. White, “A trajectory piecewise-linear approach to model order reduction and fast simulation of nonlinear circuits and micromachined devices,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 22, no. 2, pp. 155-170, February 2003. 102. L. Vidigal, S.R. Nassif, and S.W. Director, “CINNAMON: coupled integration and nodal analysis of MOS networks,” in Proceedings of Design Automation Conference, 1986, pp. 179-185. 103. G. Box and N. Draper, Empirical model-building and response surfaces. Wiley, 1998. 104. K. Singhal, C. McAndrew, S. Nassif, and V. Visvanathan, “The center design optimization system,” AT&T Technical Journal, vol. 68, no. 3, pp. 77-92, May 1989. 105. S.G. Duvall, “Statistical circuit modeling and optimization,” in Proceedings of International Workshop on Statistical Metrology, 2000, pp. 56-63. 106. M. McKay, R. Beckman, and W. Conover, “A comparison of three methods for selecting values of input variables in the analysis of output from computer code,” Technometrics, vol. 42, no. 1, pp. 55-61, May 1979. 107. D.S. Gibson, R. Poddar, G.S. May, and M.A. Brooke, “Statistically based parametric yield prediction for integrated circuits,” IEEE Transactions on Semiconductor Manufacturing, vol. 10, no. 4, pp. 445-458, November 1997. 108. C. Yu, T. Maung, C. Spanos, D. Boning, J. Chung, H. Liu, K. Chang, and D. Bartelink, “Use of short-loop electrical measurements for yield improvement,” IEEE Transactions on Semiconductor Manufacturing, vol. 8, no. 2, pp. 150159, May 1995. 109. B.E. Stine, D.O. Ouma, R.R. Divecha, D.S. Boning, J.E. Chung, D.L. Hetherington, C.R. Harwoo, O.S. Nakagawa, and Oh Soo-Young, “Rapid characterization and modeling of pattern-dependent variation in chemical-mechanical polishing,” IEEE Transactions on Semiconductor Manufacturing, vol. 11, no. 1, pp. 129-140, February 1998. 110. E.G. Colgan, R.J. Polastre, M. Takeichi, and R.L. Wisnieff, “Thin-filmtransistor process-characterization test structures,” IBM Journal of Research and Development, vol. 42, no. 3, pp. 481-490, May-July 1998.
111. M. G. Buehler, “Microelectronic Test Chips for VLSI Electronics,” in VLSI Electronics: Microstructure Science, Vol. 6, N. G. Einspruch and G. B. Larrabee, Eds. New York: Academic Press, Inc., 1983, Chapter 9. 112. L. W. Linholm, R. A. Allen, and M. W. Cresswell, “Microelectronic Test Structures for Feature Placement and Electrical Line Width Metrology,” in Handbook of Critical Dimension Metrology and Process Control, Vol. CR52, Critical Reviews of Optical Science and Technology, K.M. Monahan, Ed. Bellingham, WA: SPIE Optical Engineering Press, 1994, pp. 91-118. 113. V. Wang, K. L. Shepard, and S. Nassif, “On-chip transistor characterization macro for variability analysis,” unpublished manuscript, 2007. 114. M. Bhushan, A. Gattiker, M.B. Ketchen, and K.K. Das, “Ring oscillators for CMOS process tuning and variability control,” IEEE Transactions on Semiconductor Manufacturing, vol. 19, no. 1, pp. 10-18, Feb. 2006. 115. D. Boning, J. Panganiban, K. Gonzalez-Valentin, S. Nassif, C. McDowell, A. Gattiker, and F. Liu, “Test structures for delay variability,” in Proceedings of the International Workshop on Timing Issues in the Specification and Synthesis of Digital Systems, 2002, pp. 109-109. 116. K. Agarwal, F. Liu, C. McDowell, S. Nassif, K. Nowka, M. Palmer, D. Acharyya, and J. Plusquellic, “A test structure for characterizing local device mismatches,” in Proceedings of 2006 Symposium on VLSI Circuits, 2006, pp. 67-68. 117. M. Orshansky, L. Milor, and C. Hu, “Characterization of spatial intrafield gate CD variability, its impact on circuit performance, and spatial mask-level correction,” IEEE Transactions on Semiconductor Manufacturing, vol. 17, no. 1, pp. 2-11, February 2004. 118. I. Miller and J. Freund, Probability and Statistics for Engineers, 3rd edition. Englewood Clis, NJ: Prentice-Hall, Inc., 1985. 119. B. E. Stine, D. S. Boning, and J. E. Chung, “Analysis and Decomposition of Spatial Variation in Integrated Circuit Processes and Devices,” IEEE Transactions on semiconductor manufacturing, Vol. 10, No. 1, February 1997. 120. J.A. Rice, Mathematical Statistics and Data Analysis. Duxbury Press, 1994. 121. B.E. Stine, D.S. Boning, J.E. Chung, D.A. Bell, and E. Equi, “Inter- and intradie polysilicon critical dimension variation,” in Proceedings of SPIE Symposium on Microelectronic Manufacturing, vol. 2874, pp. 27-35, October 1996. 122. J. Jin and H. Guo, “ANOVA Method for Variance Component Decomposition and Diagnosis in Batch Manufacturing Processes,” International Journal of Flexible Manufacturing Systems, vol. 15, no. 2, pp. 167-186, April 2003. 123. P. Friedberg, Y. Cao, J. Cain, R. Wang, J. Rabaey, and C. Spanos, “Modeling within-field gate length spatial variation for process-design co-optimization,” in Proceedings of SPIE, vol. 5756, pp. 178-188, May 2005. 124. F. Liu, “How to construct spatial correlation models: A mathematical approach,” in Proceedings of International Workshop on Timing Issues in the Specification and Synthesis of Digital Systems, 2007, pp. 106-111. 125. M.J.M. Pelgrom, A.C.J. Duinmaijer, and A.P.G. Welbers, “Matching properties of MOS transistors,” IEEE Journal of Solid-State Circuits, vol. 24, no. 5, pp. 1433-1439, October 1989. 126. U. Gruebaum, J. Oehm, and K. Schumacher, “Mismatch modeling and simulation-a comprehensive approach,” Analog Integrated Circuits and Signal Processing, vol. 29, no.3, December 2001.
References
299
127. J. P. Chiles and P. Delfiner, Geostatistics: Modeling Spatial Uncertainty, New York, NY: John Wiley and Sons, 1999. 128. A.K. Wong, “Microlithography: trends, challenges, solutions, and their impact on design,” IEEE Micro, vol. 23, no. 2, pp. 12-21, March/April, 2003. 129. L.W. Liebmann, “Layout Impact of Resolution Enhancement Techniques: Impediment or Opportunity?”, in Proceedings of International Symposium on Physical Design, 2003, pp. 110-117. 130. M.D. Lenevson, N.S. Viswanathan, and R.A. Simpson, “Improving resolution in photolithography with a phase-shifting mask,” IEEE Transactions On Electron Devices, vol. 29, no. 12, pp.1828-1836, December 1982. 131. L.W. Liebmann, J. Lund, F.-L. Heng, and I. Graur, “Enabling alternating phase shifted mask designs for a full logic gate level,” in Proceedings of Design Automation Conference, 2001, pp. 79-84. 132. B. Wong, A. Mittal, and Y. Cao, Nano-CMOS Circuit and Physical Design, John Wiley & Sons, 2004. 133. T.A. Bruner, “Impact of lens aberrations on optical lithography,” Journal of IBM Research and Development, vol. 41, no. 1, pp.57-67, January 1997. 134. D. Fuard, M. Besacier, and P. Schiavone, “Validity of the diffused aerial image model: an assessment based on multiple test cases,” in Proceedings of SPIE, vol. 5040, pp. 1536-1543, June 2003. 135. C.A. Mack, Field Guide to Optical Lithography, SPIE Press, 2006. 136. P. Wong, Resolution Enhancement Techniques in Optical Lithography, SPIE Press, 2001. 137. Aprio, Application Notes, http://www.aprio.com/downloads/halo_opc_ datasheet.pdf, 2006. 138. L.W. Liebmann, S.M. Mansfield, A.K. Wong, M.A. Lavin, W.C. Leipold, and T.G. Dunham, “TCAD development for lithography resolution enhancement,” IBM Journal of Research and Development, vol. 45, no.5, pp. 651-666, 2001. 139. I. Matthew, C.E. Tabery, T. Lukanc, M. Plat, M. Takahashi, A. Wilkison, “Design restrictions for patterning with off-axis illumination,” in Proceedings of SPIE, vol. 5754, pp. 1574-1585, May 2004. 140. M. Cote and P. Hurat, “Layout Printability Optimization using a Silicon Simulation Methodology,” in Proceedings of ISQED, 2004, pp. 159-164. 141. Y. Zhang, M. Feng, H.Y. Liu, “Focus exposure matrix model for full chip lithography manufacturability check and optical proximity correction,” in Proceedings of SPIE, vol. 6283, May 2006. 142. J. Yang, L. Capodieci, and D. Sylvester, “Advanced timing analysis based on post-OPC extraction of critical dimensions,” in Proceedings of Design Automation Conference, 2005, pp. 359-364. 143. P. Gupta and F-L. Heng, “Toward a Systematic Variation Aware Timing Methodology,” in Proceedings of Design Automation Conference, 2005, pp. 321326. 144. A. K. Wong, R. A. Ferguson, S. M. Mansfield, “The Mask Error Factor in Optical Lithography,” IEEE Transactions on Semiconductor Manufacturing, vol. 13, no. 2, pp. 235-242, May 2000. 145. F.M. Schellenberg, O. Toublan, L. Capodieci, and B. Socha, “Adoption of OPC and the Impact on Design and Layout”, in Proceedings of Design Automation Conference, 2001, pp. 89-92. 146. Jim Wiley, “Future challenges in computational lithography”, Solid State Technology Magazine, http://sst.pennnet.com/, May, 2006.
300
References
147. P. Gupta (private communication), 2007. 148. R. Singhal, A. Balijepalli, A. Subramaniam, F. Liu, S. Nassif, and Y. Cao, “Modeling and analysis of non-rectangular gate for post-lithography circuit simulation,” in Proceedings of Design Automation Conference, 2007, pp. 823828. 149. S. Devadas, K. Keutzer, and S. Malik, “Delay computation in combinational logic circuits: theory and algorithms,” in Proceedings of International Conference on Computer Aided Design, 1991, pp. 176-179. 150. D. Chinnery, and K. Keutzer, “Closing the gap between ASIC and custom: an ASIC perspective”, in Proceedings of Design Automation Conference, 2000, pp. 637-642. 151. M. Orshansky, and K. Keutzer, “A Probabilistic Framework for worst case Timing Analysis,” in Proceedings of Design Automation Conference, 2002, pp. 556-561. 152. H. Chang and S. Sapatnekar, “Statistical timing analysis considering spatial correlations using a single pert-like traversal,” in Proceedings of International Conference on Computer-Aided Design, 2003, pp. 621-625. 153. Semiconductor Industry Association, International Technology Roadmap for Semiconductors, 2001. 154. L. Scheffer, “The Count of Monte Carlo,” presented at International Workshop on Timing Issues in the Specification and Synthesis of Digital Systems , Austin, Texas, 2004. 155. A. Dasdan, and I. Hom, “Handling inverted temperature dependence in static Timing analysis,” ACM Transactions on Design Automation of Electronic Systems, vol. 11 , no. 2, pp. 306-324, April 2006. 156. N. Maheshwari and S. Sapatnekar, Timing Analysis and Optimization of Sequential Circuits, Kluwer Academic Publishers, 2001. 157. S. Hassoun, T. Sasao, Logic Synthesis and Verification, Kluwer Academic Publishers, 2002. 158. C. E. Clark, “The greatest of a finite set of random variables,” Operations Research, vol. 9, no. 2, pp. 145-162, March 1961. 159. R. Hitchcock, “Timing verification and the timing analysis program,” in Proceedings of Design Automation Conference, 1982, pp. 594-604. 160. M. Berkelaar, “Statistical delay calculation”, presented at International Workshop on Logic Synthesis, Lake Tahoe, CA, 1997. 161. H.-F. Jyu, S. Malik, S. Devadas, and K. Keutzer, “Statistical timing analysis of combinational logic circuits,” IEEE Transactions on VLSI Systems, vol.1, no.2, pp.126-137, June 1993. 162. A. Agarwal, D. Blaauw, V. Zolotov, and S. Vrudhula, “Computation and refinement of statistical bounds on circuit delay,” in Proceedings of Design Automation Conference, 2003, pp. 348-353. 163. A. Devgan and C. Kashyap, “Block-based static timing analysis with uncertainty,” in Proceedings of International Conference on Computer-Aided Design, 2003, pp. 607-614. 164. C. Visweswariah, K. Ravindran, K. Kalafala, S. G. Walker, and S. Narayan, “First-order incremental block-based statistical timing analysis,” in Proceedings of Design Automation Conference, 2004, pp. 331-336. 165. L. Scheffer, “Explicit computation of performance as a function of process variation,” in Proceedings of International Workshop on Timing Issues in the Specification and Synthesis of Digital Systems, 2002, pp. 1-8.
References
301
166. A. Ramalingam, A. K. Singh, S. Nassif, G.-J. Nam, D. Pan, and M. Orshansky, “An accurate sparse matrix based framework for statistical static timing analysis,” in Proceedings of International Conference on Computer-Aided Design, 2006, pp. 231-236. 167. L. Zhang, W. Chen, Y. Hu, and C. Chung-Ping Chen, “Statistical timing analysis with extended pseudo-canonical timing model,” in Proceedings of Design, Automation and Test in Europe, 2005, pp. 952-957. 168. A. Agarwal, D. Blaauw, and V. Zolotov, “Statistical timing analysis for intradie process variations with spatial correlations,” in Proceedings of International Conference on Computer-Aided Design, 2003, pp. 900-907. 169. A. Nadas, “Probabilistic PERT,” IBM Journal of Research and Development, vol. 23, no. 3, pp. 339-347, May 1979. 170. A. Gattiker, S. Nassif, R. Dinakar, C. Long, “Timing yield estimation from static timing analysis,” in Proceedings of International Symposium on Quality Electronic Design, 2001, pp. 437-442. 171. Y. L. Tong, Probability Inequalities in Multivariate Distributions, Academic Press, 1980. 172. P. Naidu, “Tuning for Yield”, PhD dissertation, Eindhoven University of Technology, The Netherlands, 2004. 173. E. Novak, K. Ritter, “The curse of dimension and a universal method for numerical integration,” in: G. Nüneberger, J. Schmidt, G. Walz (Eds.), Multivariate Approximation and Splines, ISNM, 1997, pp. 177-188. 174. J. Jess, K. Kalafala, S. Naidu, R. Otten, and C. Visweswariah, “Statistical timing for parametric yield prediction of digital integrated circuits,” in Proceedings of Design Automation Conference, 2003, pp. 932-937. 175. Shreider, The Monte Carlo Method, New York: Pergamon Press, 1966. 176. W. S. Wang and M. Orshansky, “Path-based statistical timing analysis handling arbitrary delay correlations: theory and implementation,” in IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 25, no. 12, pp. 2976-2988, December 2006. 177. M. Celik, L. Pillegi, and A. Odabasiouglu, IC Interconnect Analysis, Kluwer Academic Publishers, 2002. 178. C.-K. Cheng, J. Lillis, S. Lin, and N. Chang, Interconnect Analysis and Synthesis, New York: Wiley-Interscience, 2000. 179. Y. Liu, L. Pileggi, and A. Strojwas, “Model order-reduction of RC(L) interconnect including variational analysis,” in Proceedings Design Automation Conference, 1999, pp. 201-206. 180. K. Agarwal, D. Sylvester, D. Blaauw, F. Liu, S. Nassif, and S. Vrudhula, “Variational delay metrics for interconnect timing analysis,” in Proceedings of the Design Automation Conference, 2004, pp. 381-384. 181. C. L. Harkness and D. P. Lopresti, “Interval methods for modeling uncertainty in RC timing analysis,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 11, no. 11, pp. 1388-1401, November 1992. 182. J. Wang, P. Ghanta, and S. Vrudhula, “Stochastic analysis of interconnect performance in the presence of process variations,” in Proceedings of International Conference on Computer-Aided Design, 2004, pp. 880-886. 183. L. Daniel, O. C. Siong, L. S. Chay, K. H. Lee, and J. White, “Multiparameter moment-matching model-reduction approach for generating geometrically parameterized interconnect performance models,” IEEE Transactions on Computer-Aided Design, vol. 23, no. 5, pp 678-693, May 2004.
302
References
184. J. Ma and R. Rutenbar, “Interval-valued reduced order statistical interconnect modeling,” in Proceedings of International Conference on Computer-Aided Design, 2004, pp. 460-467. 185. P. Heydari and M. Pedram. “Model reduction of variable-geometry interconnects using variational spectrally-weighted balanced truncation,” in Proceedings of International Conference on Computer Aided-Design, 2001, pp. 586-591. 186. J. R. Phillips, “Variational interconnect analysis via PMTBR,” in Proceedings of International Conference on Computer-Aided Design, 2005, pp. 872-879. 187. X. Li, P. Li, and L. T. Pileggi, “Parameterized interconnect order reduction with explicit-and-implicit multi-parameter moment matching for inter/intradie variations,” in Proceedings of International Conference on Computer-Aided Design, 2005, pp. 806-812. 188. S. Sapatnekar, Timing, Kluwer Academic Publishers, 2004. 189. M. Orshansky and A. Bandyopadhyay, “Fast statistical timing analysis handling arbitrary delay correlations,” in Proceedings of Design Automation Conference, 2004, pp.337-342. 190. S. Hassoun and T. Sasao, Logic Synthesis and Verification, Kluwer Academic Publishers, 2002. 191. D. Chinnery and K. Keutzer, editors, Closing the Gap Between ASIC and Custom: Tools and Techniques for High-Performance ASIC Design, Springer 2006. 192. R. Brodersen, M. Horowitz, D. Markovic, B. Nikolic, and A. Stojanovic, “Methods for true power minimization,” in Proceedings of International Conference on Computer-Aided Design, 2002, pp. 35-40. 193. S. Borkar, T. Karnik, S. Narenda, J. Tschanz, A. Keshavarzi, and V. De, “Parameter variation and impact on circuits and microarchitecture,” in Proceedings of Design Automation Conference, 2003, pp. 338-342. 194. S. H. Gunther, F. Binns, D. M. Carmean, and J. C. Hall, “Managing the Impact of Increasing Microprocessor Power Consumption,” Intel Technology Journal, vol. 5, no. 1, pp. 1-9, February 2001. 195. S. Mukhopadhyay and K. Roy, “Modeling and estimation of total leakage current in nano-scaled CMOS devices considering the effect of parameter variation,” in Proceedings of International Symposium on Low Power Electronics and Design, 2003, pp. 172-175. 196. R. Rao, A. Devgan, D. Blaauw, and D. Sylvester, “Parametric Yield Estimation Considering Leakage Variability,” in Proceedings of Design Automation Conference, 2004, pp. 442-447. 197. C.-H. Choi, K.-H. Oh, J.-S. Goo, Z. Yu, and R. W. Dutton, “Direct tunneling current model for circuit simulation,” in Proceedings of International Electron Devices Meeting, 1999, pp. 735-738. 198. W.-C. Lee and C. Hu, “Modeling CMOS tunneling currents through ultrathin gate oxide due to conduction- and valence-band electron and hole tunneling,” IEEE Transactions on Electron Devices, vol. 48, no. 7, pp. 1366-1373, July 2001. 199. R. Rao, A. Srivastava, D. Blaauw, D. Sylvester, “Statistical estimation of leakage current considering inter- and intra-die process variation,” in Proceedings of International Symposium on Low-Power Electronics Design, 2003, pp. 88-89. 200. S. Nassif, “Statistical worst-case analysis for integrated circuits,” in Statistical Approach to VLSI, New York: Elsevier, 1994, pp. 233-253.
References
303
201. J. Chen, C. Hu, and M. Orshansky, “A statistical performance simulation methodology for VLSI circuits,” in Proceedings of Design Automation Conference, 1998, pp. 402-407. 202. D. Patil, S. Yun, S.-J. Kim, A. Cheung, M Horowitz, and S. Boyd, “A new method for design of robust digital circuits,” in Proceedings of International Symposium on Quality of Electronic Design, 2005, pp. 676-681. 203. A. Strojwas, “Statistical design of integrated circuits,” in IEEE Special Selections of Advances in Circuits and Systems, Piscataway: IEEE Press, 1987. 204. W. Maly, S. Director, and A. Strojwas, VLSI Design for Manufacturing: Yield Enhancement, Kluwer Academic Publishers, 1989. 205. S.W. Director, P. Feldmann, and K. Krishna, “Statistical integrated circuit design,” IEEE Journal of Solid-State Circuits, vol. 28, no. 3, pp. 193-202, March 1993. 206. P. Feldmann, and S. W. Director, “Integrated circuit quality optimization using surface integrals,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 12, no. 12, pp. 1868-1879, December 1993. 207. X. Bai, C. Visweswariah, P. Strenski, and D. Hathaway, “Uncertainty aware circuit optimization,” in Proceedings of Design Automation Conference, 2002, pp. 58-63. 208. E. Jacobs, and M. Berkelaar, “Gate sizing using a statistical delay model,” in Proceedings of Design Automation Conference, 2000, pp. 283-290. 209. J. Singh, V. Nookala, Z.-Q. Luo and S. Sapatnekar, “Robust gate sizing by geometric programming,” in Proceedings of Design Automation Conference, 2005, pp. 315-320. 210. M. Mani and M. Orshansky, “A new statistical optimization algorithm for gate sizing,” in Proceedings of International Conference on Computer Design, 2004, pp. 272-277. 211. M. Mani, A. Devgan, and M. Orshansky, “An efficient algorithm for statistical minimization of total power under timing yield constraints,” in Proceedings of Design Automation Conference, 2005, pp. 309-314. 212. A. Srivastava, D. Sylvester, and D. Blaauw, “Statistical optimization of leakage power considering process variations using dual-Vth and sizing,” in Proceedings of Design Automation Conference, 2004 pp. 773 – 778. 213. D. Nguyen, A. Davare, M. Orshansky, D. Chinnery, B. Thompson, and K. Keutzer, “Minimization of dynamic and static power through joint assignment of threshold voltages and sizing optimization,” in Proceedings of International Symposium on Low Power Electronics and Design, 2003, pp. 158 – 163. 214. S. Sirichotiyakul, T. Edwards, C. Oh, J. Zuo, A. Dharchoudhury, R. Panda, and D. Blaauw, “Stand-by power minimization through simultaneous threshold voltage selection and circuit sizing,” in Proceedings of Design Automation Conference, 1999, pp. 436-441. 215. Q. Wang, and S. Vrudhula, “Static power optimization of deep submicron CMOS circuit for dual Vth technology,” in Proceedings of International Conference on Computer-Aided Design, 1998, pp. 490-496. 216. J. Fishburn and A. Dunlop, “TILOS: A posynomial programming approach to transistor sizing,” in Proceedings of International Conference on ComputerAided Design, 1985, pp. 326-328. 217. S. Boyd and L.Vandenberghe, Convex Optimization, Cambridge University Press, 2004.
304
References
218. H. D. Mittelmann, “An independent benchmarking of SDP and SOCP solvers,” Mathematical Programming, vol. 95, no. 2, pp. 407-430, February 2003. 219. S. Chakraborty and R. Murgai, “Complexity of minimum-delay gate resizing,” in Proceedings of International Conference on VLSI Design, 2000, pp. 425-430. 220. V. Sundararajan, S.S. Sapathekar, and K.K. Parhi, “Fast and exact transistor sizing based on iterative relaxation,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 21, no. 5, 2002, pp.568-581, May 2002. 221. MOSEK ApS. The MOSEK optimization tools version 3.2 (Revision 8), User’s manual and reference, http://www.mosek.com/documentation.html#manuals. 222. A. Prekopa, Stochastic Programming. Kluwer Academic Publishers, 1995. 223. S. Raj, S.B. Vrudhula, and J. Wang, “A methodology to Improve Timing Yield,” in Proceedings of Design Automation Conference, 2004, pp. 448-453. 224. S. Hoon Choi, B.C. Paul, and K. Roy, “Novel sizing algorithm for yield improvement under process variation in nanometer technology,” in Proceedings of Design Automation Conference, 2004, pp. 454-459. 225. A. W. Marshall, I. Olkin, Inequalities: Theory of Majorization and Its Applications, Academic Press, 1979. 226. P. C. Fishburn, “Stochastic dominance and moments of distribution,” Mathematics of Operations Research, vol. 5, pp. 94-100, 1980. 227. Datta, A., Bhunia, S., Choi, J. H., Mukhopadhyay, S., and Roy, K, “Speed binning aware design methodology to improve profit under parameter variations,” in Proceedings of Asia South Pacific Design Automation Conference, 2006, pp. 712-717. 228. X. Li, J. Le, M. Celik and L. Pileggi, “Defining statistical sensitivity for timing optimization of logic circuits with large-scale process and environmental variations,” in Proceedings of International Conference of Computer-Aided Design, 2005, pp. 844-851. 229. A. Srivastava and D. Sylvester, “ŞA general framework for probabilistic lowpower design space exploration considering process variatio,” in Proceedings of International Conference on Computer-Aided Design, 2004, pp 808-813. 230. S. J. Kim, S. Boyd, S. Yun, D. Patil, and M. Horowitz, “A heuristic for optimizing stochastic activity networks with applications to statistical digital circuit sizing, ” Optimization and Engineering, Springer, May 2007. 231. R. Patel, S. Rajgopal, D. Singh, F. Baez, G. Mehta and V. Tiwari, “Reducing power in high-performance microprocessors,” in Proceedings of Design Automation Conference, 1998, pp. 732-737. 232. J. P. Halter, and F. N. Najm, “A gate-level leakage power reduction method for ultra-low-power CMOS circuits,” in Proceedings of Custom Integrated Circuits Conference, 1997, pp. 475-478. 233. A. Dharchoudhury et. al., “Design and analysis of power distribution networks in PowerPC microprocessors,” in Proceedings of Design Automation Conference, 1998, pp. 738-743. 234. S. R. Nassif, and J. N. Kozhaya, “Fast power grid simulation,” in Proceedings of Design Automation Conference, 2000, pp. 156-161. 235. R. Panda, D. Blaauw, R. Chaudhry, V. Zolotov, B. Young and R. Ramaraju, “Model and analysis for combined package and on-chip power grid simulation,” in Proceedings of International Symposium on Low Power Electronics and Design, 2000, pp. 179-184.
References
305
236. D. Kouroussis and F. N. Najm, “A static pattern-independent technique for power grid voltage integrity verification,” in Proceedings of Design Automation Conference, 2003, pp. 99-104. 237. H. H. Chen and D. D. Ling, “Power supply noise analysis methodology for deepsubmicron VLSI chip design,” in Proceedings of Design Automation Conference, 1997, pp. 638-643. 238. F. N. Najm, “A survey of power estimation techniques in VLSI circuits,” IEEE Transactions on VLSI Systems, vol. 2, no. 2, pp. 446-455, December 1994. 239. H. Su, Y. Liu, A. Devgan, E. Acar, S. Nassif, “Full chip leakage estimation considering power supply and temperature variations,” in Proceedings of International Symposium on Low Power Electronics and Design, 2003, pp. 78-83. 240. A. Devgan, “Efficient coupled noise estimation for on-chip interconnects,” in Proceedings of International Conference on Computer-Aided Design, 1997, pp. 147-151. 241. R. Gharpurey, and R. Meyer, “Modeling and analysis of substrate coupling in integrated circuits,” IEEE Journal of Solid State Circuits, vol. 31, no. 3, pp. 344-353, March 1996. 242. P. E. Dodd, and L. W. Massengill, “Basic mechanisms and modeling of singleevent upset in digital microelectronics,” IEEE Transactions on Nuclear Science, vol. 50, no. 3, part 3, June 2003. 243. FLOMERICS Corp., Application notes, “http://www.flomerics.com/flotherm”, 2007. 244. B. E. Stine, D. S. Boning, J. E. Chung, L. Camilletti, F. Kruppa, E. R. Equi, W. Loh, S. Prasad, M. Muthukrishnan, D. Towery, M. Berman, A. Kapoor, “The physical and electrical effects of metal-fill patterning practices for oxide chemical-mechanical polishing processes,” IEEE Transactions on Electron Devices, vol. 45, no. 3, pp.665-679, March 1998. 245. A. B. Kahng, G. Robins, A. Singh, H. Wang, and A. Zelikovsky, “Filling and slotting: analysis and algorithms,” in Proceedings of International Symposium on Physical Design, 1998, pp. 95-102. 246. A. B. Kahng, G. Robins, A. Singh, and A. Zelikovsky, “Filling algorithms and analyses for layout density control,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 18, no. 4, pp. 445-462, April 1999. 247. R. Tian, D. F. Wong, and R. Boone, “Model-based dummy feature placement for oxide chemical-mechanical polishing manufacturability,” in Proceedings of Design Automation Conference, pp. 667-670, 2000. 248. J.-K. Park, K.-H. Lee, J.-H. Lee, Y.-K. Park, and J.-T. Kong, “An exhaustive method for characterizing the interconnect capacitance considering the floating dummy-fills by employing an efficient field solving algorithm,” in Proceedings of Simulation of Semiconductor Processes and Devices, 2000, pp. 98-101. 249. Y. Chen, A. B. Kahng, G. Robins, and A. Zelikovsky, “Hierarchical dummy fill for process uniformity,” in Proceedings of Asia and South Pacific Design Automation Conference, 2001, pp. 139-144. 250. A. B. Kahng, “Design technology productivity in the DSM era (invited talk),” in Proceedings of Asia and South Pacific Design Automation Conference, 2001, pp. 443-448.
306
References
251. K.-H. Lee, J.-K.Park, Y.-N. Yoon, D.-H. Jung, J.-P. Shin, Y.-K. Park, and J.-T. Kong, “Analyzing the effects of floating dummy-fills: from feature scale analysis to full-chip RC extraction,” in Proceedings of International Electron Devices Meeting, 2001, pp. 685-688. 252. K. Mosig, T. Jacobs, K. Brennan, M. Rasco, J. Wolf, and R. Augur, “Integration challenges of porous ultra low-k spin-on dielectrics,” Microelectronic Engineering, vol. 64, no. 1, pp. 11-24, October 2002. 253. D. O. Ouma, D. S. Boning, J. E. Chung, W. G. Easter, V. Saxena, S. Misra, and A. Crevasse, “Characterization and modeling of oxide chemical-mechanical polishing using planarization lenght and pattern density concepts,” IEEE Transactions on Semiconductor Manufacturing,” vol. 15, no. 2, pp. 232-244, May 2002. 254. R. Tian, X. Tang, and M. D. F. Wong, “Dummy-feature placement for chemicalmechanical polishing uniformity in a shallow-trench isolation process,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 21, no. 11, pp. 63-71, January 2002. 255. Y. Chen, A. B. Kahng, G. Robins and A. Zelikovsky, “Area fill synthesis for uniform layout density,” IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 21, no. 10, pp. 1132-1147, October 2002. 256. Y. Chen, A. B. Kahng, G. Robins, and A. Zelikovsky, “Monte-Carlo methods for chemical-mechanical planarization on multiple-layer and dual-materials models,” in Proceedings of SPIE, vol. 4692, pp. 421-432, 2002. 257. Y. Chen, A. B. Kahng, G. Robins, and A. Zelikovsky, “Closing the smoothness and uniformity gap in area fill synthesis,” in Proceedings of International Symposium on Physical Design, 2002, pp. 137-142. 258. C. Hess, B. E. Stine, L. H. Weiland, and K. Sawada, “Logic characterization vehicle to determine process variation impact on yield and performance of digital circuits,” in Proceedings of International Conference on Microelectronic Test Structures, 2002, pp. 189-196. 259. P. Gupta and A. B. Kahng, “Manufacturing-aware physical design,” in Proceedings of International Conference on Computer-Aided Design, 2003, pp. 681-687. 260. R. B. Ellis, A. B. Kahng, and Y. Zheng, “Compression algorithms for dummyfill VLSI layout data,” in Proceedings of SPIE Conference on Design and Process Integration for Microelectronic Manufacturing, vol. 5042, pp. 233-245, 2003. 261. Y. Chen, P. Gupta, and A. B. Kahng, “Performance-impact limited dummy fill insertion,” in Proceedings of SPIE Conference on Design and Process Integration for Microelectronic Manufacturing, vol. 5042, pp. 75-86, 2003. 262. Y. Chen, A. B. Kahng, G. Robins, A. Zelikovsky, and Y. H. Zheng, “Data Volume Reduction in Dummy Fill Generation,” in Proceedings of Design, Automation and Testing in Europe, pp. 868-873, 2003. 263. J. Pallinti, S. Lakshminarayanan, W. Barth, P. Wright, M. Lu, S. Reder, L. Kwak, W. Catabay, D. Wang, and F. Ho, “An overview of stress free polishing of Cu with ultra low-k(k<2.0) films,” in Proceedings of International Interconnect Technology Conference, pp. 83-85, 2003. 264. Y. Chen, P. Gupta, and A. B. Kahng, “Performance-impact limited area fill synthesis,” in Proceedings of Design Automation Conference, 2003, pp. 22-27. 265. M. M. Nelson, “Optimized pattern fill process for improved CMP uniformity and interconnect capacitance,” in Proceedings of University Government Industry Microelectronics Symposium, 2003, pp. 374- 375.
References
307
266. R. Chang, Y. Cao, and C. Spanos, “Modeling the electrical effects of metal dishing due to CMP for on-chip Interconnect Optimization,” IEEE Transactions on Electron Devices, vol. 51, no. 10, pp. 1577-1583, October 2004. 267. Kurokawa, A., Kanamoto, T., Kasebe, A., Inoue, Y., Masuda, H., “Efficient capacitance extraction method for interconnects with dummy fills,” in Proceedings of Custom Integrated Circuits Conference, pp. 485-488, October 2004. 268. P. Gupta and A. B. Kahng, “Wire swizzling to reduce delay uncertainty due to capacitive coupling,” in Proceedings of International Conference on VLSI Design, 2004, pp. 431- 436. 269. A. Kurokawa, T. Kanamoto, T. Ibe, A. Kasebe, C. W. Fong, T. Kage, Y. Inoue, and H. Masuda, H., “Dummy filling methods for reducing interconnect capacitance and number of fills,” in Proceedings of International Symposium on Quality of Electronic Design, 2005, pp. 586-591. 270. G. Xu, R. Tian, D. Z. Pan, and M. D. F. Wong, “CMP aware shuttle mask floorplanning,” in Proceedings of Asia South Pacific Design Automation Conference, 2005, pp. 1111-1114. 271. P. Gupta, A. B. Kahng, C.-H. Park, K. Samadi, and X. Xu, “Wafer topographyaware optical proximity correction for better DOF margin and CD control,” in Proceedings of SPIE Conference on Design and Process Integration for Microelectronic Manufacturing, vol. 5353, pp. 844-854, 2005. 272. D. Z. Pan and M. D. F. Wong, “Manufacturability-aware physical layout optimizations,” in Proceedings of Integrated Circuit Design and Technology, 2005, pp. 149-153. 273. L. He, A. B. Kahng, K. H. Tam, and J. Xiong, “Design of integrated-circuit interconnect with accurate modeling of chemical-mechanical planarization,” in Proceedings of SPIE Conference on Design and Process Integration for Microelectronic Manufacturing, vol. 5756, pp. 109-119, 2005. 274. M. Cho, D. Z. Pan, H. Xiang, and R. Puri, “Wire density driven global routing for CMP variation and timing,” in Proceedings of International Conference on Computer-Aided Design, 2006, pp. 487-492. 275. H. Xiang, I. M. Elfadel, and R. Puri, “Wire variation and dummy fill impacts on RC and delay,” IBM Research Report, RC23843 (W0601-058), January 2006. 276. A. B. Kahng, K. Samadi, and P. Sharma, “Study of floating fill impact on interconnect capacitance,’ in Proceedings of International Symposium on Quality Electronic Design, pp. 691-696, 2006. 277. A. Balasinski, “Question: DRC or DfM? Answer: FMEA and ROI,” in Proceedings of International Symposium on Quality Electronic Design, pp. 789-794, 2006. 278. H. Cai and D. Boning, “In-pattern dummy design and copper ECD/CMP process co-optimization,” in Proceedings of Chemical-Mechanical Planarization for ULSI Multilevel Interconnect Conference, 2007. 279. J. T. Pan, D. Ouma, P. Li, D. Boning, F. Redecker, J. Chung, and J. Whitby, “Planarization and integration of shallow trench isolation,” in Proceedings of VLSI Multilevel Interconnect Conference, pp. 467-472, 1998.
308
References
280. B. Lee, D. Hetherington, and D. Boning, “Using smart dummy fill and selective reverse etchback for pattern density equaliziation,” in Proceedings of ChemicalMechanical Planarization for ULSI Multilevel Interconnect Conference, 2000, pp. 255-258. 281. A. B. Kahng, P. Sharma, and A. Zelikovsky, “Fill for shallow trench isolation CMP,” in Proceedings of International Conference on Computer-Aided Design, 2006, pp. 661-668.
Index
F-distribution, 108 Lgate bias iso-dense, 17 based on asymmetry, 18 based on orientation, 18 iso-dense, 18 Lgate variation impact of local neighborhood, 18 Tox variability, 32 Tox variation, 34 Vth variation, 27, 96, 102 area dependence, 31 due to local oxide thickness, 13 due to random dopants, 12, 29 Poisson model, 30 k1 factor, 17, 18, 128, 131 t-distribution, 105 aberrations, 13, 16–18, 20, 88, 112, 119, 132, 133, 137, 139, 140, 153 acceptance region, 209, 230 across-chip linewidth variation (ACLV), 16 additive, 106, 107, 114, 116, 118, 122, 174, 220, 223, 274 additive model, 107, 114, 118 aerial image, 23, 24 aging, 39, 40 alternating phase-shift mask, 143 anneal, 28 annular illumination, 149 ANOVA, 106–113, 115, 122 2-way model, 111 ANOVA factors, 106
aperture, 17, 128, 130, 132 aspect ratio, 89 etch dependence of, 22 CMP dependence of, 53 astigmatism, 133 atomic-scale randomness, 12, 28 attenuated PSM, 147 autocorrelation function, 34 back end, 3, 11, 88, 155, 163 back gate device, 37 back-end-of-line (BEOL), 43, 44, 52, 54, 88, 89, 96, 155 barrier metal, 44, 46 barrier metal, 46 Bernoulli random variable, 233 bias, 89 ANOVA, 110 applied during OPC, 138 etch microloading, 22 iso-dense, 140, 141, 149 layout-dependent, 112, 117 resist model, 133 spatial, 112 spatially dependent, 139 binary mask, 143 Binomial random variable, 102, 213, 232 block-based SSTA, 213, 216, 221 body bias, 171 Bossung plot, 134, 135 bound on circuit delay distribution, 225 on circuit delay probability, 226
on distribution of random variable, 103 Bragg's condition, 129 butterfly curve, 191 canonical form, 216, 218, 219 capacitive, 273 capacitive coupling, 82 capacitive load, 204, 234, 235 capacitive measurements, 87 carbon nanotube interconnects, 57 catastrophic yield, 1 catastrophic yield loss, 102 Cauchy-Schwarz inequality, 267 Central limit theorem, 105, 234 chance-constrained optimization, 251, 259, 265, 266, 274 channel leakage, 33, 240 Chebyshev inequality, 103 chemical vapor deposition, 48 chemical-mechanical polishing, 3, 27, 43–46, 50, 57 chrome on glass, 145, 147 circuit simulation, 153, 167, 169, 199, 234, 241, 242, 266, 280 device models, 170 for response surface analysis, 194 circuit yield, 222 clock period, 203, 206, 250, 255, 259, 269 closure, 213 CMP, 44–46, 50, 51, 55, 157, 158, 160–162 coherence, 20 coherent illumination, 129 coma effect, 18, 88, 133 common window analysis, 136 comparing random variables, 261 comparison of random variables, 226 confounding, 87, 114, 118 continuous random variable, 102, 103, 213 contrast, 4, 17, 23–25, 122, 147, 149, 150, 218, 229, 252, 256 conventional, 13, 32, 149, 191 convex optimization, 263, 265 convex ordering, 261 convolution, 215
corner analysis, 180, 209, 210 corner rounding, 17–19, 27, 55, 139 correlation, 24, 94, 103, 118–123, 180, 195, 208, 218, 246, 252, 253 between node delays, 214 between path delays, 211 coefficient, 227 due to path reconvergence, 216, 217 matrix, 225, 226 of delay in SSTA, 222 of device parameters, 175 of node delays, 218 of path delays, 217, 218, 236 of regression fit, 187 spatial, 24–26, 214, 215, 220, 223 unknown structure, 224 correlation coefficient, 119, 225 correlation function, 219 correlation length, 24, 34, 35 interface roughness, 34 local oxide thickness variation, 34 measured values of LER, 24 of line-edge roughness, 24 of oxide-silicon interface, 12 Pelgrom model, 24 correlation matrix use in PCA, 178 coupling capacitance, 53 covariance, 104, 121, 122, 218, 220, 221, 223–225, 265, 275 critical dimension, 15, 17, 127, 131, 134, 152 critical feature, 143, 144 critical path, 201, 211, 232 criticality, 153, 224 cumulative distribution function, 103, 211, 253 damascene process, 27, 46 defocus, 20, 131, 134–137, 140, 141, 151 demagnification, 18 dense feature, 17, 134 depletion layer, 29 depth of focus, 131, 133–135, 149 design rule, 148, 150 design rule checking, 150 device models, 1, 5, 40, 169, 170 device wearout, 39 die-to-die, 185
dielectric variation, 54 diffraction, 13, 17, 129, 130, 132, 138, 141, 143, 144, 150 diffraction order, 130 diffusion, 2, 27, 44, 54, 138 dipole, 149 direct tunneling, 242 directed acyclic graph (DAG), 222 discrete, 103 discrete random variable, 102, 213 dishing, 14, 27, 44–47, 50, 51, 89, 157, 162, 163 dissimilar structure variability, 13 distortion, 17, 138 distribution, 140, 224, 227, 245, 246, 248, 249, 252, 254–256, 259–263, 265, 275, 276 across spatial scales, 174 angular, 54 arbitrary, 214 lower bound on, 226 model used in SSTA, 214 of a representative circuit, 182 of a single path, 222 of circuit delay, 214, 232 of clock frequency, 206 of delay, 188 of device characteristics, 174 of device parameters, 200 of discrete random variable, 213 of impurities, 29 of max delay, 221, 222, 224 of multivariate normal, 225 of noise margins, 192, 195 of path delays, 221 of threshold voltage, 28, 29, 35 upper bound on, 226 with strong correlations, 220 dose, 17, 27, 119, 134, 135, 137, 140, 141, 151, 152, 240, 241, 271 drift of non-stationary spatial process, 118, 121 dual Vth optimization, 257, 259 dummy fill model-based, 158 rule-based, 158 effective channel length, 12, 15, 31, 240–242, 274
eigenvalue decomposition, 178 electromigration, 40, 54 electroplating, 43, 44, 46, 48, 52 Ellipsoid, 137 ellipsoid, 230, 263, 264 emerging devices, 37 energy level quantization, 12 erosion, 14, 44–47, 50, 51, 55, 89, 157, 163 etching, 11, 15, 20, 22, 138, 143 exposure, 14, 15, 20, 23, 46, 88, 106, 114, 132, 134–137, 143, 147, 151 exposure field, 13, 127, 154 exposure latitude, 134, 135 exposure-defocus window, 134, 135 fab, 12 film thickness, 32, 34 FinFET, 37, 38 flare, 2, 138 focus, 17, 52, 131–137, 143, 149, 151, 152, 169, 211, 268 forbidden pitch, 149 Fowler-Nordheim tunneling, 242 frequency measurements, 94, 95 for BEOL characterization, 97 for FEOL characterization, 96 impact of layout, 97 front end, 3, 11 front-end, 41, 88, 127 front-end-of-line (FEOL), 11, 96 full factorial experiment, 185, 189 gate dielectric, 27, 32 gate leakage, 33, 93, 239, 240, 242–244, 247, 248 gate length, 12–15, 22, 31, 102, 106, 109–111, 219, 223, 270 gate length bias, 110 gate width variability, 26–28 Gaussian random variable, 103 geometric programming, 259, 263, 277 hafnium oxide, 33 hammer head, 139, 140 heat dissipation, 74 heatsink, 76 high-K, 240 hold time, 203, 207–209
hot carrier effect, 40 hot spot checking, 150, 151 hot spots, 150, 151, 153 illumination, 16, 20, 88 annular, 149 dipole, 149 dose, 134 interactions with design, 149 non-conventional, 149 off-axis, 131, 132, 143, 149, 154 pattern, 143 quadrupole, 149 reduction of wavelength, 128 source, 129, 138 system, 129 wavelength, 128, 138 illumination system, 16, 149 illumination wavelength, 13, 16, 129 image contrast, 24 image distortion, 17 image quality quantification, 131–134 implantation, 11, 15, 27–29, 102 integrated circuit, 1, 3, 11, 118, 168 intensity, 130, 133, 134, 138, 141–143, 147–149, 151 interior-point methods, 263, 264 intra-die, 246 intrinsic randomness, 7 inverted temperature dependency, 210 isolated feature, 17 isolated features, 134 Jacobian, 169 KCL, 168 kurtosis, 35 KVL, 168 Lagrangian relaxation, 258 Latin hypercube sampling, 199 lattice stress compressive, 36 tensile, 36 layout manufacturing checking (LMC), 150 leakage current, 15, 56, 239–241, 244–248, 252 lens, 16–18, 20, 88, 112, 119, 130–133, 137–140, 143, 153, 219
line edge roughness model, 23 line-edge roughness, 12, 13 line-end shortening, 17 linear programming, 259, 261, 263, 272 linear regression, 195, 236 linewidth, 16–18, 20, 55, 89, 118, 133–135, 137, 138, 140, 141, 152 linewidth variation, 17 lithography, 2, 3, 13, 14, 17, 27, 43, 44, 52, 53, 56, 127–129, 131, 132, 138, 149–151, 154, 191, 280 lognormal random variable, 274 low-K dielectric, 43 low-k imaging, 17 low-pass filter, 17, 27, 130 macroloading, 22 magnitude of variability, 11, 14, 252 mask, 2, 12, 15, 17, 18, 22, 26, 27, 89, 129–134, 137–141, 143–149, 151, 152 mask error enhancement factor (MEEF), 18 mask error factor (MEF), 18 Matrix, 264, 275 matrix, 104, 169, 171, 178, 179, 220, 223–228, 265 max of random variables, 260 max of two random variables, 212 maximum of normal random variables, 218 metrology atomic force microscopy, 91 based on frequency measurements, 95 electrical test, 87 optical interferometry, 91 scanning electron microscopy, 91 short loop flows, 87 microloading, 22 mobility, 36 mobility enhancement, 36 moment estimation, 104, 122 moment-based optimization, 262 Monte carlo, 222, 227, 230–233, 235, 276 Monte-Carlo analysis, 192 MOSFET, 2–4, 34, 38, 85, 169–171, 178, 239
moving average, 115, 116 multilevel interconnect, 52 negative bias temperature instability, 40 negative bias temperature instability (NBTI), 40 nested variance, 117 Newton's method, 168 Node, 239 node common node, 217 common node identification, 215 criticalities, 266 delay correlations, 224 node delay correlations, 214, 215 node-based formulation, 267, 268, 274 noise margins, 192, 199 non-linear programming, 259 non-stationary random process, 122 normal, 183, 199, 245, 246, 248 cumulative distribution function, 255 normal distribution, 214 Central limit theorem, 234 normal random variable, 103, 212, 213, 215, 217 sets of equal probability, 265 NP-hard, 271 numerical aperture, 17, 128, 130, 132 off-axis illumination, 143, 149 optical interaction range, 138 optical path difference, 132 optical proximity correction (OPC), 137–140, 149–154 model-based, 137 rule-based, 137 optical resolution, 16, 128, 129 optical simulation, 138 optimization deterministic, 252 robust, 263 sensitivity-driven, 259 stochastic, 254 over-polish, 50 oxidation, 11, 33, 36 oxynitride, 33 package, 240, 244, 252 parallelepiped, 29, 230
parameter space, 178–181, 184, 208, 209, 227–229 parametric, 153, 239, 240, 243–246, 248, 250, 251 parametric functions explicit dependence of delay on process, 210 parametric representations, 214, 216 parametric yield, 7, 8, 11, 14, 271 parametric yield loss, 263, 268 impact of gatelength variability, 15 impact of gatelength variation, 110 impact of power variability, 251 joint impact of power and timing variability, 257 parametric yield metric difficulty of computation, 259 parametric yield optimization, 257 chance-constrained formulation, 274 design centering, 228 need for SSTA, 205, 207 robust LP, 264, 272 SOCP, 272 Pareto optimal solutions, 272 Pareto optimality, 273 partial coherence, 20 path reconvergence, 211 path-based, 266, 267 path-based SSTA, 213, 221, 222, 224, 237 pattern correction, 140 pattern density, 22, 27, 46–53, 89 patterning, 22, 50, 52, 54–56, 87, 130, 131, 133 Pelgrom model, 24, 26, 121 percolation model, 30 phase conflict, 144, 146 phase-shift masking (PSM), 143 photolithography, 16–18, 20, 88, 127, 139, 143, 206 photoresist, 16, 17, 20, 128, 133, 134, 138 pitch, 12, 17, 47, 89, 107, 129, 130, 134, 138, 141–143, 149, 150, 152, 153 plasma etch, 16, 53 Poisson random variable, 29, 102, 213 polishing pad, 44, 46 positive-semidefinite, 264 posterior probability, 198
posynomial, 263, 265 power estimation, 68, 71, 241 power grid, 122 PRIMA, 204, 237 principal component analysis (PCA), 177–179, 181, 185, 199, 200, 218, 220, 223, 224 probability density function, 102, 211 probability distribution, 102, 104 process latitude, 135, 140–142, 149 process variation, 177, 235, 240 process window, 53, 132–137, 141, 143, 145, 150, 151 proximity effect, 17 PVT space, 204 quadrupole, 149 quadrupole illumination, 149 quantile-quantile plot (QQ plot), 188, 192 random, 88, 137, 140, 152, 174, 178–180, 182, 190, 191, 197–199, 240, 241, 246, 279 random arrival time, 216 random component, 101, 206, 220, 235, 236 random delays, 212 random dopant fluctuation, 12, 14, 24, 28, 35, 37, 94, 102 random dopant placement, 12, 30 random error, 106, 107, 115 random function, 122 random gate length variation, 22 random intra-chip variability, 117 random node delay component, 217 random process, 118, 119, 122 random residual, 110, 114 random sampling, 230, 232, 233 random simulation, 233 random variability, 4–7, 14, 25 impact on yield, 251 random variable Binomial, 102 decision-making, 253 definition, 102 discrete, 102 point estimates, 252
random variables, 103, 104, 119, 178, 212, 213, 217, 218, 234, 253, 261 random vector, 222 Rayleigh depth of focus criterion, 131 Rayleigh resolution criterion, 130 refractive index, 17 Regression, 115 regression, 186, 195, 236, 242 reliability, 7, 54, 201, 202 representations of random variable, 214 residual image, 147 residuals, 187 resist, 13, 16, 17, 20, 23, 88, 114, 133–136, 138, 139, 147, 150, 151 resist coating, 16 resist sidewall angle, 135 resistivity copper, 54, 55 resistivity via, 55 resolution, 2, 16, 86, 107, 118, 119, 128–132, 137, 140, 141, 143, 144, 149, 154, 280 resolution enhancement techniques (RET), 137, 140, 143, 147, 149 resolution limit, 143, 149 response surface, 194 response surface analysis, 194 response surface model, 195, 233 response surface modeling, 194 restricted design rules (RDR), 140 restrictive design rules (RDR), 150 reticle, 4, 13, 16–18, 86, 112, 114, 115, 117, 127, 154, 174, 280 roadmap (ITRS), 281–285 runtime OPC algorithm, 138 sample mean, 105 sample variance, 105 scales of variability, 13 chip-to-chip variation, 13, 117 intra-chip variation, 13, 117 scan chain, 92 scattering bars, 140 second-order conic programming, 263–265, 267, 272, 275 separable treatment of variability, 214 serif, 139
setup time, 203, 208, 209 shallow trench isolation (STI), 27, 28, 36, 37 short-channel effects, 37 silicon dioxide, 32–34 similar structure variability, 13 simulation, 53, 280 fast lithography simulation, 151 for leakage modeling, 243 for library characterization, 202, 204, 234 lithography, 150, 153 Monte Carlo, 188, 227, 232, 235, 276 of exposure, 23 resist model, 133 single event upset (SEU), 82 sizing, 149, 251, 252, 254, 257–263, 265, 266, 268, 270–274 slack budgeting, 254, 270 Slepian inequality, 225 slurry, 44, 46, 114 source and drain, 30, 37 spatial correlation, 24–26, 94, 118–123, 211, 214, 219, 220, 223 spatial distribution, 119 spatial frequency of LER, 23 spatial random process, 119 spherical aberrations, 133 SRAM overview of operation, 190 statistical analysis of, 190 standard cell library, 259 static timing analysis, 201, 202, 211 for systematic variability, 152 objective, 202 stationary random process, 119 statistical static timing analysis, 211 statistical timing analysis, 228 formulation, 211 stepper, 13, 16 stochastic dominance, 253 stochastic majorization, 225, 226 stochastic ordering, 253, 261 strain, 36 stratified sampling, 197 stress, 36 subresolution assist feature (SRAF), 131, 140, 141
substrate, 27, 29, 30, 34, 59, 60, 74, 77, 82 substrate coupling, 82 subthreshold leakage, 110, 240–242, 246, 247, 252 surface profilometry, 45, 49 systematic, 6, 8, 13, 16–18, 53, 55, 86, 114, 152, 206, 235, 251, 279 systematic component, 101, 111, 115–118 systematic difference, 106 systematic drift, 121 systematic impact of layout, 109, 112 systematic model, 111 systematic spatial signature, 112 systematic spatial variation, 25, 121, 122, 139 systematic variability, 2, 4, 5, 14, 113, 121 lattice stress, 35 through-pitch, 152 systematic variation definition, 25 taxonomy of variability, 2, 4, 11–13 temperature variability, 60, 61, 63, 64 test structures, 85 figures of merit for, 85 multiplexed transistor arrays, 86 thermal conductivity, 75, 77 threshold voltage, 12 tightness probability, 213, 219 TILOS, 271, 272, 275 Timing, 280 timing, 207, 216, 239, 243, 245, 249–252, 254–262, 266, 268, 270–272, 274–276 basics of STA, 202 deterministic, 207 operations on random variables, 212 uncorrelated behavior, 206 verification, 202 worst-case approach, 202, 206, 207 timing analysis, 201 deterministic, 211 timing constraint, 41, 208, 222 timing graph, 205, 211, 220, 228, 231 timing model generation, 235
timing models, 230 timing optimization, 272 timing variability, 11, 96 timing verification deterministic, 209 timing violation, 40, 209 timing yield, 206, 207, 227, 259, 260, 266 timing yield constraint, 268 timing yield estimation, 230, 233 timing yield optimization, 259 tolerance, 39, 134, 135, 150, 180, 280 topography, 45, 49, 50 of multilevel interconnect, 50 trench, 44, 53, 54, 89 tri-gate device, 37 trim mask, 147 tunneling current, 33, 34, 239, 242, 243, 247 unbiased estimator, 105 uncertainty set, 263, 264 variance decomposition, 114, 117, 122 variance reduction, 197, 199, 234 variation scale die-to-die, 117, 118 intra-die, 113, 115, 117, 118 variogram, 119, 122, 123 via hole, 23 via resistance, 208 voltage variability, 66, 67
wafer, 2, 4, 12–14, 16–18, 22, 25, 26, 44, 46, 48, 49, 53–55, 85, 113–122, 130, 132–134, 141, 145, 148, 174 wall of paths, 254 wavelength of light, 128 wire density, 279 within-die, 114, 115, 118, 174, 175, 190 worst-case analysis, 4, 180–183, 185, 186, 188–190 yield, 11, 14, 15, 139, 150, 151, 153, 233, 239, 240, 243–246, 248, 250, 280 economic importance, 6 error of estimate, 196 impact of gate length variation, 110 impact of timing and power, 257 importance of modeling, 101 parameter space estimation, 229 SRAM, 190, 193, 198 yield body, 210 yield escape, 210 yield estimation integration techniques, 230 yield loss, 261–263 yield optimization, 252, 257, 275 chance-constrained problem, 264 design centering, 228 gate sizing, 259 joint power and timing optimization, 268 yield point, 253 yield-limiting paths, 262 Zernike polynomials, 132
Continued from page ii
Statistical Analysis and Optimization for VLSI: Timing and Power
Ashish Srivastava, Dennis Sylvester, and David Blaauw
ISBN 978-0-387-26049-9, 2005