Advanced Production Testing of RF, SoC, and SiP Devices
For a complete listing of the Artech House Microwave Library, turn to the back of this book.
Advanced Production Testing of RF, SoC, and SiP Devices
Joe Kelly
Michael Engelhardt
artechhouse.com
Library of Congress Cataloging-in-Publication Data
A catalog record for this book is available from the U.S. Library of Congress.
(Artech House microwave library)
Includes bibliographical references and index.
ISBN 1-58053-709-X (alk. paper)
British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library.
Cover design by Igor Valdman
ISBN 10: 1-58053-709-X ISBN 13: 978-1-58053-709-4
© 2007 ARTECH HOUSE, INC. 685 Canton Street Norwood, MA 02062
All rights reserved. Printed and bound in the United States of America. No part of this book may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, without permission in writing from the publisher. All terms mentioned in this book that are known to be trademarks or service marks have been appropriately capitalized. Artech House cannot attest to the accuracy of this information. Use of a term in this book should not be regarded as affecting the validity of any trademark or service mark.
10 9 8 7 6 5 4 3 2 1
To Kathleen, who has supported me with all of this —Joe Kelly
To my two children, Franz and Katrin Engelhardt, whose curiosity and open mind help me to enjoy life —Michael Engelhardt
Contents
Preface xvii
Acknowledgments xix
1 Concepts of Production Testing of RF, SoC, and SiP Devices 1
1.1 Introduction 1
1.2 Test and Measurement 2
1.2.1 Production Testing 2
1.2.2 Characterization Testing 3
1.3 Production Test Systems 3
1.3.1 Rack-and-Stack System 3
1.3.2 Automated Test Equipment 4
1.4 The Peripherals of Production Testing 4
1.4.1 The Test Floor and Test Cell 4
1.4.2 Handlers 5
1.4.3 Load Boards 6
1.4.4 Contactors 6
1.4.5 Wafer Probers 7
1.5 The Test Program 7
1.6 Calibration 8
1.7 Reducing Test Costs 9
1.7.1 Multisite Testing 9
1.7.2 Outsourcing of Production Testing 9
1.7.3 Built-In Self-Tests 10
1.8 Testing RF, SoC, and SiP Devices 10
1.8.1 Testing RF Low-Noise Amplifiers 11
1.8.2 Testing RF Power Amplifiers 11
1.8.3 Transceivers 12
1.8.4 Testing Receivers 14
1.8.5 Testing Transmitters 15
1.8.6 Testing PLLs and VCOs 15
1.8.7 Testing Modern Standards 16
1.8.8 Summary 18
References 18
2 Tests and Measurements I: Fundamental RF Measurements 21
2.1 S-Parameters 21
2.1.1 Application of S-Parameters in SoC Testing 23
2.2 PLL Measurements 24
2.2.1 Divider Measurements 25
2.2.2 VCO Gain Measurement 26
2.2.3 PLL Settling Time 26
2.3 Power Measurements 27
2.3.1 RF Output Power Measurement 28
2.3.2 Spur Measurements 29
2.3.3 Harmonic Measurements 30
2.3.4 Spectral Mask Measurements 31
2.4 Power-Added Efficiency 31
References 33
3 Tests and Measurements II: Distortion 35
3.1 Introduction 35
3.2 Linearity 36
3.3 Distortion in SoC Devices 36
3.4 Transfer Function for Semiconductor Devices 37
3.5 Harmonic Distortion 38
3.5.1 Measuring Harmonic Distortion 40
3.6 Intermodulation Distortion 42
3.6.1 Second-Order Intermodulation Distortion 42
3.6.2 Third-Order Intermodulation Distortion 44
3.6.3 Higher-Order Intermodulation Distortion Products 46
3.6.4 Example of Harmonic and Intermodulation Distortion Products 46
3.6.5 Intermodulation Distortion Products of a ZIF Receiver 48
3.7 Measuring Intermodulation Distortion 48
3.7.1 The Intercept Point, Graphically 48
3.7.2 The General Intercept Point Calculation 49
3.7.3 Input- and Output-Referencing of Intercept Points 50
3.7.4 Example: Calculating the IP3 of an RF LNA 52
3.8 Source Intermodulation Distortion 52
3.9 Cross Modulation 53
3.10 Gain Compression 54
3.10.1 Conversion Compression in Frequency-Translating Devices 56
3.11 Minimizing the Number of Averages in Distortion Measurements 56
References 56
4 Tests and Measurements III: Noise 59
4.1 Introduction to Noise 59
4.1.1 Power Spectral Density 59
4.1.2 Types of Noise 61
4.1.3 Noise Floor 65
4.2 Noise Figure 66
4.2.1 Noise Figure Definition 66
4.2.2 Cascaded Noise Figure 68
4.2.3 Noise Power Density 69
4.2.4 Noise Sources 69
4.2.5 Noise Temperature and Effective Noise Temperature 69
4.2.6 Excess Noise Ratio 70
4.2.7 The Y-Factor 71
4.2.8 Mathematically Calculating Noise Figure 72
4.2.9 Measuring Noise Figure 72
4.2.10 Direct Measurement of the Noise Figure 73
4.2.11 Measuring Noise Figure Using the Y-Factor Method 73
4.2.12 Measuring Noise Figure Using the Cold Noise Method 77
4.2.13 Noise Figure Measurements on Frequency-Translating Devices 78
4.2.14 Calculating Error in Noise Figure Measurements 79
4.2.15 Equipment Error 79
4.2.16 Mismatch Error 80
4.2.17 Production Test Fixturing 80
4.2.18 External Interfering Signals 81
4.2.19 Averaging and Bandwidth Considerations 81
4.3 Phase Noise 82
4.3.1 Introduction 82
4.3.2 Phase Noise Definition 84
4.3.3 Spectral Density-Based Definition of Phase Noise 86
4.3.4 Phase Jitter 86
4.3.5 Thermal Effects on Phase Noise 87
4.3.6 Low-Power Phase Noise Measurement 87
4.3.7 High-Power Phase Noise Measurement 87
4.3.8 Trade-Offs When Making Phase Noise Measurements 88
4.3.9 Making Phase Noise Measurements 88
4.3.10 Measuring Phase Noise with a Spectrum Analyzer 90
4.3.11 Phase Noise Measurement Example 91
4.3.12 Phase Noise of Fast-Switching RF Signal Sources 93
4.3.13 Measuring Phase Noise Using the Delay Line Discriminator Method 93
References 94
Selected Bibliography 95
5 Advances in Testing RF and SoC Devices 97
5.1 Introduction 97
5.2 System-Level Testing 98
5.3 RF Wafer Probing 99
5.4 SiP Versus SoC Architectures 99
5.5 Designers’ New Responsibilities 100
5.6 RF Built-In Self-Test (BIST) 102
5.7 Test System Architecture 103
5.8 Testing Wide Bandwidth Devices 104
5.8.1 New Test Methodologies 105
5.8.2 Calibration 106
5.9 Conclusion 106
References 107
6 Production Test Equipment 109
6.1 Introduction 109
6.2 Tuned RF Receivers Utilizing a Digitizer 110
6.2.1 Description of Tuned RF Receivers Utilizing a Digitizer 110
6.2.2 Comparison to Benchtop RF Instruments 111
6.2.3 Tuned RF Receiver Parameters 112
6.3 Modern IC Power Detectors 114
6.3.1 Overview of IC Power Detectors 114
6.3.2 Basic IC Power Detector Circuit Operation 115
6.3.3 Quantitative Comparison of IC Power Detectors 120
6.3.4 Types of Power Detectors Used in Production 120
6.4 Production Testing Using Digital Channels and PMU 121
6.4.1 Digital Channel and PMU Components 122
6.4.2 Using a Digital Pin as a Crystal Reference Frequency 124
6.5 Digitizers (ADCs) 125
6.5.1 Digitizer Components 126
6.6 Arbitrary Waveform Generators 126
6.6.1 Overview of Arbitrary Waveform Generator 126
6.6.2 Creating AWG Waveform Files 128
6.7 Use of DSP in Production Test Equipment 130
6.8 Communicating with ATE Hardware 131
6.8.1 General-Purpose Interface Bus 131
6.8.2 VMEbus eXtensions for Instrumentation 132
6.8.3 Peripheral Component Interconnect eXtensions for Instrumentation 133
6.8.4 Summary of ATE Communication Interface Standards 134
6.8.5 LAN eXtensions for Instruments 134
6.9 Summary 136
References 136
7 Cost of Test 139
7.1 Introduction 139
7.2 Parameters Contributing to the COT 141
7.2.1 Shifts and Hours Per Shift 141
7.2.2 Utilization 141
7.2.3 Yield 141
7.2.4 Depreciation of the Test System 141
7.2.5 Test Time 142
7.2.6 Handler or Prober Index Time 142
7.2.7 Additional Cost Parameters 142
7.3 Basic COT Model 143
7.4 Multisite and Ping-Pong COT Models 145
7.4.1 Ping-Pong Testing 146
7.4.2 Multisite Testing 148
7.4.3 Additional Variables for Multisite and Ping-Pong Testing 151
7.5 COT Considerations When Using Test Houses 152
7.5.1 Guaranteed Volume or Usage 153
7.5.2 Availability of Testers 153
7.6 Accuracy and Guardbands 153
7.7 Summary 156
References 157
8 Calibration 159
8.1 Overview 159
8.1.1 Calibration Methods 161
8.2 Calibration Procedures 164
8.2.1 DPS Calibration 164
8.2.2 Digital Calibration 165
8.2.3 Analog Calibration 168
8.2.4 RF Calibration 169
References 173
9 Contactors 175
9.1 Introduction 175
9.2 Types of Contactors 177
9.2.1 Spring Pin Contactor 178
9.2.2 Elastomer/Interposer Contactor 178
9.2.3 Cantilever Contactor 178
9.2.4 Short Rigid Contactor 179
9.2.5 Summary of Contactor Types and Their Properties 179
9.3 Contactor Properties 180
9.3.1 Electrical Properties 180
9.3.2 Thermal Properties 188
9.3.3 Mechanical Properties 191
9.4 Load Board Considerations 195
9.5 Handler Considerations 195
9.6 Overall Equipment Effectiveness 196
9.6.1 The Contactor and OEE 197
9.7 Maintenance and Inspection of Contactors 198
9.7.1 Contactor Cleaning 198
9.8 Manual Hold-Downs 199
9.9 Cost Considerations 199
Acknowledgments 199
References 200
10 Handlers 201
10.1 Introduction 201
10.2 Handler Types 202
10.2.1 Gravity-Feed Handlers 202
10.2.2 Pick-and-Place Handlers 202
10.2.3 Turret Handlers 203
10.2.4 Strip Test Handlers 204
10.3 Choosing a Handler Type 205
10.4 Throughput 207
10.4.1 Number of Sites 208
10.4.2 Index Time 209
10.5 Testing at Various Temperatures 212
10.5.1 Tri-Temp and Slew Time 212
10.5.2 Methods of Heating and Cooling 213
10.5.3 Thermal Soaking of Devices 213
10.5.4 Handler Design Considerations for Thermal Testing 213
10.6 Contacting the Device to the Load Board 214
10.7 Handler Footprint 215
10.8 Tester Interface Plane 215
10.9 Device Input and Output 215
10.9.1 Binning 216
10.9.2 Loading and Unloading of Devices 218
10.10 Conversion and Changeover Kits 218
References 219
11 Load Boards 221
11.1 Introduction 221
11.2 Materials 223
11.2.1 Material Properties 223
11.2.2 The Test Engineer’s Role in Material Selection 229
11.2.3 Layers 229
11.2.4 Hybrid Load Boards 230
11.3 Electrical 231
11.3.1 Signal Routing and Traces 231
11.3.2 Grounding 234
11.3.3 Device Power Supplies 237
11.3.4 Components 238
11.3.5 Connectors 244
11.3.6 Cables 245
11.3.7 Vias 245
11.4 Mechanical Design Considerations for Load Boards 246
11.4.1 Keep-Out Areas 247
11.4.2 Other Mechanical Design Considerations 247
11.5 Thermal Design Considerations for Load Boards 248
11.6 Load Board Verification 248
11.6.1 Time Domain Reflectometry 248
11.7 General Debugging and Design Considerations 249
11.7.1 Probe Points 249
11.7.2 Reference Designators 249
11.7.3 Component Layout 250
11.7.4 Schematic and Layout Reviews 250
11.7.5 Start with an Evaluation Board 250
References 250
12 Wafer Probing 253
12.1 RF Wafer Probing 254
12.2 Yield of MCM Justifies Wafer Probing 254
12.3 Probe Cards 255
12.4 Types of Probe Cards 256
12.4.1 Cantilever Needle Probes 256
12.4.2 Coplanar Probes 257
12.4.3 Membrane Probes 258
12.5 Selecting a Probe Card 258
12.5.1 Frequency Range 258
12.5.2 Number of Pins 259
12.5.3 Impedance Control 259
12.5.4 Decoupling and Current Limitations 259
12.5.5 Inductance 260
12.6 Tester to Wafer Prober Interface 260
12.6.1 Soft Docking 260
12.6.2 Hard Docking 261
12.6.3 Direct Docking 262
12.7 Calibration Methods for Measurements with Wafer Probing 262
12.7.1 Scalar Loss Calibration 262
12.7.2 S-Parameter-Based Calibration 262
12.7.3 Calibration with Calibration Substrates 263
References 263
Appendix A: Power and Voltage Conversions 265
Appendix B: VSWR, Return Loss, and Reflection Coefficient 271
Appendix C: RF Coaxial Cables 275
Appendix D: RF Connectors 277
Appendix E: Decimal to Hexadecimal and ASCII Conversions 283
Appendix F: Numerical Prefixes 287
About the Authors 289
Index 291
Preface

This book is intended to be a follow-up to Production Testing of RF and System-on-a-Chip Devices for Wireless Communications, by Keith Schaub and Joe Kelly (Artech House, 2004). That book was a first of its kind, covering many of the topics surrounding production radio-frequency (RF) and system-on-a-chip (SoC) testing. On publication of the book, numerous questions were received as to why more in-depth detail about some of the other less-known areas of testing, such as load boards and contactors, was not included. Frankly, these topics could be considered to be more generalized production testing items, necessary for all aspects of testing (digital, memory, and so forth). Looking into these inquiries, we realized that there were no books or complete works available that covered the advanced topics of RF and SoC testing and the peripherals associated with that testing. We agreed to take on this project together and we hope to create a source to help advance the area of production testing. To help with our endeavor, we received invaluable input from some industry-leading companies such as Verigy, Johnstech International, Delta, and Aetrium.

Like the 2004 publication, this book is intended for a wide variety of audiences including SoC applications engineers, engineering managers, product engineers, and students, although other disciplines can benefit as well. Because many of the topics on peripheral testing equipment and needs overlap with many different forms of semiconductor testing, the audience also includes test engineers involved with all types of semiconductor testing.

Chapter 1 is an overview of the concepts presented in the book. We designed the content of this chapter to enable a semitechnical reader to gain
knowledge about the topics that are presented in depth throughout the rest of the book.

Chapters 2 through 5 present many different aspects of production measurements and also provide enough background to build the reader’s knowledge base to a level of competence to implement these tests in a production environment as well as perform them on benchtop instruments to perform correlation.

Chapter 6 presents the many aspects of equipment that is used in both ATE test systems and rack-and-stack instrumentation. Instrumentation for all aspects of front-end RF/SoC testing is overviewed (RF receivers, digitizers, AWGs, and digital subsystems).

Chapter 7 discusses the topic of test costs and how recent changes in industrial models have impacted costs. Models that discuss many factors (beyond simply test time and number of sites) are presented.

Because a production measurement is only as good as the calibration of the test system hardware, Chapter 8 describes how calibration is performed on each of the pieces of hardware that make up a test system. Additional emphasis on RF measurement calibration is also presented.

Considered possibly the most important piece of the production testing setup, contactors are discussed in Chapter 9. Materials for and construction of contactors are presented, and the trade-offs that need to be considered when choosing a contactor for the various different types of devices that are being tested are discussed.

Chapters 10 and 12 discuss handlers and prober interfaces to the test setup.

An in-depth presentation of the requirements for developing and fabricating a load board to interconnect the DUT to the test system is found in Chapter 11. Materials, components, and circuit designs are discussed and provided. Often overlooked, the cost of producing a load board can be more than one may think. There are many reasons behind this, and this chapter aims to enlighten readers so that they become more aware of how project funds are being spent.

Appendixes are included to provide useful information on topics such as power and voltage conversions, descriptions of VSWR, return loss, and reflection coefficient, guides to RF coaxial cabling and connectors, and more.

We look forward to any feedback from you, the reader. Enjoy.
Acknowledgments

For their gracious help in bringing together the content of this book, the authors would like to thank Bert Brost of Johnstech International, Inc., Kevin Brennan of Delta, Orville Wright of Aetrium, and Lee Ritchey of Circuit Speed. We also thank Lawrence Roberts of Cree for writing Chapter 6 on production test equipment. Also, this book could not have been accomplished without the expert-quality reviews done by our coworker and friend, Linda Miquelon.

The authors would also like to thank the following people for their support throughout our careers (in alphabetical order).

• Agilent Technologies: Robert Bartz, Bill Cash, Bob Cianci, Ron Hubscher, Miklos Kara, Peggy Kelley, Doug Lash, John McLaughlin, Gene Mead, Darrin Rath, Jake Sanderson, Jason Smith, Phil Spratt, David We, Jeff Xu, and Kai Yick
• Conexant: Max Thornton
• Corad Technology: Michael Lugay and K. N. Chui
• DSP Group: Behrouz Halliyal
• Epcos: Mike Alferman, Ulrich Bauernschmitt, Stefan Freisleben, Joachim Gerster, and Wolfgang Till
• Intel: Udaya Natarajan, Leonid Sassoon, and Binh Truong
• Karsten Schefer
• Keithley Instruments: Mike Millhaem
• Maxim Semiconductor: Ted Sato
• Philips: Lan Ho, Tim Jones, Richard Myers, Sultan Sabuktagin, and Khoi Tran
• Qualcomm: Farzin Fallah, Osbaldo Oscala, Joerg Paulus, and Pat Sumner
• RF Micro Devices: Igor Emelianoff
• Rutgers University: Ahmed Safari and Daniel Shanefield
• Silicon Wave: Brian Pugh and Phong Van Pham
• University of Texas at Austin: Kimberly Tran
• U.S. Army Research Laboratory: Arthur Ballato and John Vig
• Verigy: Don Blair, Jeff Brenner, Scott Chesnut, Eric Chiu, Bill Clark, Greg Erdmann, Frank Goh, K. A. Goh, Troy Heistand, Thomas Herbst, Daniel Ho, Craig Kanetake, Hiroshi Kikuyama, Ginny Ko, Adrian Kwan, Edwin Lowery, H. L. Lye, Roger McAleenan, Kathleen Miller, Linda Miquelon, Steve Moore, Pam Myers, Roger Nettles, Satoshi Nomura, Laurent Ollivier, Don Ong, Ariana Salagianis, Bob Smith, Oscar Solano, Eng-Keong Tan, Tim Tan, Hubert Werkmann, Roger White, and Juergen Wolf
• Other: Keith Schuab and Karsten Schefer
1 Concepts of Production Testing of RF, SoC, and SiP Devices

1.1 Introduction

This book presents concepts surrounding production testing of radio-frequency (RF), system-on-a-chip (SoC), and system-in-a-package (SiP) devices. These devices have become the driving forces behind wireless and mobile communications for the consumer market. Because testing is the final stage before a semiconductor device becomes part of a product, an understanding of how to test these devices is necessary to reduce the overall cost of the semiconductor manufacturing process.

The common theme throughout this book is RF testing. Whether testing stand-alone RF devices, receivers, transmitters, or fully integrated transceivers, a thorough understanding of the basic concepts of RF and the details of testing with RF signals is a necessity.

Until the late 1980s when the pager (a basic RF receiver device) was introduced to the consumer market, the concept of production testing of RF devices was not of concern because of the low volumes and specialty markets for these devices. Since that time, however, the consumer communications market has grown tremendously and our understanding of how to make the most of production testing of the semiconductor devices used in these products has risen significantly. Add to that the fact that the cost of silicon processing has been reduced significantly, and it becomes apparent that production testing is not only a critical part of the overall fabrication process, but is now beginning to play a major role in the overall cost of manufacturing the devices.

In the early 1990s RF technology emerged in the form of cordless and wireless (cellular, mobile) phones. It was apparent that the industry was expanding, and as a result (or perhaps as the cause) the prices of semiconductor devices dropped significantly, especially when compared to the RF devices that had previously only been used for military applications [1]. It is exactly this phenomenon that mandated finding clever, low-cost methods of testing RF-containing devices.

This chapter provides an overview of the concepts involved in production testing of RF, SoC, and SiP devices to enlighten the reader about what is involved in setting up and running a test cell. Each of the topics highlighted in this chapter is covered in depth in the other chapters of this book.
1.2 Test and Measurement

The testing and measurement of parameters of electronic devices has been around since before the invention of the transistor. Traditionally, these measurements were performed on specialized benchtop setups (bench testing) in a laboratory environment. Testing of a device under test (DUT), sometimes also called a unit under test (UUT) in general production testing usage, can be performed in a number of ways. For the purpose of this discussion, testing of semiconductor devices will be grouped into two categories: production testing and characterization testing.

1.2.1 Production Testing

Production testing is a special case of test and measurement. In the context of this book, production testing of RF, SoC, and SiP devices is to be considered the act of performing numerous tests in a short amount of time on high volumes of parts. The primary objective is to have high throughput and low overhead, or low test costs, such that the production testing does not adversely impact the marketable value of the device [1]. Quality of testing and test coverage is also key to avoiding returns and avoiding false yield fallout, which causes trade-offs in the high-volume manufacturing environment.

In production testing, the optimum goal is to use the shortest test possible to pass the good parts and fail the bad parts. Many stages are often used to get a device to the fully mature production test stage, with each stage having fewer and fewer tests that target the most significant areas of concern or possible failure. When a test program reaches the full production testing stage, the minimum number of tests should be used that provide a high level of confidence in the final product.

1.2.2 Characterization Testing

In contrast to production testing, during the early stages of production and preproduction runs, the test program is often conservatively written, so that the part is overtested (redundant test coverage). This is attributed to the number of people who are involved in the development of the device, where each has a specified set of tests to run to satisfy their individual criteria. This methodology may initially give important feedback to the design engineers and help them build their confidence in the device, but as the test program matures (usually over a period of many weeks), tests are removed or test methodologies are changed, such that the final production test program may not resemble the initial test plan [1].

A large number of tests are used in a test program for other reasons also. In the early stages of the product life cycle, the design, product, and manufacturing engineers of the DUT seek awareness of potential production flaws and tolerances. This is best achieved by feeding back excessive quantities of information from the tests. Even as the product matures, and the test list is reduced, a test program may include provisions to periodically run extensive tests on a full lot or just every nth part [1]. Also during the design validation phase, characterization might require validation at certain specification limits such as operating at the power supply’s lower and upper levels; usually, however, production worst-case parameters can be chosen to limit the production test time.
1.3 Production Test Systems

On a laboratory bench, equipment is placed in a random fashion and wires are bundled in every manner in an effort to facilitate many different types of testing procedures. In contrast, production test systems, or testers, are an attempt to group numerous instruments into one locale, providing easy access through a common interface (the load board). These instruments are often optimized to work together and usually controlled by a computer or some form of common processor. The two types of test systems are rack-and-stack systems and automated test equipment (ATE).

1.3.1 Rack-and-Stack System

Similar to the laboratory configuration mentioned earlier is the rack-and-stack tester. This is a suitable configuration for a production tester during the characterization and prototype stages of a device because the equipment that is contained in the rack can be quickly reconfigured to meet changing needs. Rack-and-stack configurations are often custom to a specific part. This is an advantage and a disadvantage. The custom tailoring is advantageous in that it can enable the fastest possible test times and makes it easy to add a new instrument. It can also be a disadvantage, however, in that it reduces the flexibility of the architecture. Often, the tester has to be significantly rebuilt before another product can be tested. The computer programs that run the hardware can also be somewhat difficult to develop and maintain because the equipment may be interfaced via various buses or protocols [1].

1.3.2 Automated Test Equipment

ATE is a tester that is designed as a complete stand-alone solution for optimal production testing of devices. This is the primary advantage of ATE. Many of the larger test equipment manufacturers produce ATE systems. Optimally designed systems are flexible and, with respect to RF and SoC devices, can also test a multitude of parts. The manufacturers of ATE consider market factors when designing testers of this type. They focus on usability and flexibility in architecture, and ease of programming for the user.

A fundamental discriminating factor between ATE and rack-and-stack systems is that ATE often has card-based instruments. This eliminates the displays common to boxed instruments because, ideally, all interfacing is controlled through a common processor. By eliminating the instrument displays, the instruments can be more densely situated and optimally designed to eliminate the signal losses commonly associated with RF instrumentation.
1.4 The Peripherals of Production Testing

Once the test equipment is chosen, an efficient means to route the signals from the test equipment to the DUT must be determined. Many pieces fit into this puzzle, such as the test floor and test cell, handlers, load boards, contactors, and wafer probers. The following sections describe these key items.

1.4.1 The Test Floor and Test Cell

The test floor is where all of the production testing takes place. The test floor is usually a cleanroom or near-cleanroom environment, free of dirt as well as electrical noise, where electrostatic discharge (ESD) precautionary measures are taken to avoid prematurely damaging potentially good devices.

The term test cell refers to the area surrounding a test system, along with the peripheral equipment. At a minimum, an ideal test cell consists of the test system, a handler or wafer prober, an ESD-safe table for organizing tested and untested lots of devices, and provisions for air and vacuum (for running the handler or wafer prober). Additionally, if low-noise measurements are being performed, an electromagnetically shielding enclosure, or screen room, may be needed [1].
1.4.2 Handlers

When production testing of any packaged semiconductor device is performed, one of the major capital investments is the handler. The handler is a robotic tool for placing the DUT into position to be tested [personal communication with Kevin Brennan, director of marketing, Delta Design, 2006]. It communicates with the tester and provides the temperature stimulus and the means to handle the DUT while it is being tested. To demonstrate the significance of the handler, consider that in 2006 a test system could cost up to a few million dollars. Although the test handler may cost less than 10% of this amount, it is the handler that determines how much the tester will be used. Expressed differently, if a handler could offer twice the productivity, then only half the number of multimillion-dollar testers would be needed [2].

While the handler communicates with the tester, it also provides signals to inform the tester when the DUT is ready for test and receives binning information from the tester after the DUT is tested. The communication between handler and tester is controlled by specific software. After the test is performed, the handler then places the DUT into an appropriately selected pass bin or fail bin.

Modern test systems offer enhanced and plentiful resources to enable multisite, parallel, and concurrent testing. Modern handlers have to keep pace to be able to support these architectures. Otherwise, the multisite-capable tester is useless.

The two major handler types are gravity-feed and pick-and-place handlers. Gravity-feed handlers work best for packages that are mechanically robust and can withstand friction on a sliding surface. A gravity-feed handler usually feeds the DUTs into a slider via transportation tubes. When the DUT gets to the slider, it slides down to the load board by means of gravitational force. Because smaller, lighter packages pose a problem with friction, some handlers integrate air blowers into the channel along the gravity slider to assist in acceleration of the DUT onto the load board [1].

Pick-and-place handlers can work with almost all types of packages. Typically using suction, this handler moves the DUT from a transportation tray to the load board contactor. The precision movement in these handlers is controlled through stepper motors. Pick-and-place handlers often employ numerous vacuum solenoids, rather than electrically controlled switches, which minimizes the introduction of electrical noise to the production testing environment [1].
1.4.3 Load Boards

A load board is defined as a printed circuit board assembly that is used to route all of the tester resources to a central point, allowing the tester to drive and receive signals from the DUT. This assembly may also be referred to as a DUT interface board (DIB). The load board is independent of the tester and is almost always unique to each DUT that is tested due to factors such as required external circuitry or pin count and pin location.

One of the most time-consuming elements of developing a full production test solution is the design and fabrication of the load board. The DC power supply, digital control, mixed signal, and RF signal lines must all coexist and be routed among each other on a common board. This inevitably requires a multilayered load board to be fabricated. The process of creating a load board involves design, layout, fabrication, assembly and test, and possibly multiple iterations of each step. The load board fabrication process is very similar to the fabrication of the actual DUT, although not as complicated, and ample time for this effort should be included in the project schedule [1].

Many companies provide load board services that range from consulting to full turnkey delivery of load boards. Depending on your budget, it is often a wise investment to engage these companies, because you will benefit from their experiences with circuit design and the interfaces among the numerous test systems and peripherals.
1.4.4 Contactors

Contactors are the link between the DUT and the rest of the test system. Physically, they sit atop the load board. The load board routes signals to and from the test system, so the contactor can be considered an extension of the load board, routing signals between the load board and DUT. They perform the important task of providing a test site for the device in order for the critical performance characteristics of the device to be transferred to the test system. This information, ultimately, determines whether the device passes or fails. In addition, depending on the capabilities of the contactor, it may help determine “how good” the device is. Because of its nature as an interconnect, the test contactor as it relates to the system interface is frequently one of the key areas to consider for improvement.

There are various types of contactor technologies, corresponding to the style of package to be tested. Contactors are mechanical and therefore exercised with each DUT that is placed onto the load board and have a limited lifetime. A contactor is usually a removable assembly that is mounted on the load board. When selecting a contactor it is important to make sure that the contactor is easy and fast to replace, because it will need to be replaced frequently on the production test floor [1].

For engineering and characterization purposes, a contactor with a clamp, or hold-down, on it is desirable so that a test engineer can manually place a DUT onto the load board. This is critical during load board debugging because impedance matching can be performed on the load board without having to work around the handler [1].
1.4.5 Wafer Probers

Another method of interfacing to the DUT is via wafer probing equipment. Wafer probing ensures that the chip manufacturer avoids incurring the significant expense of assembling and packaging chips that do not meet specification by identifying flaws early in the manufacturing process [1].

In the area of RF testing, traditionally, wafer probing has been avoided at the production testing stage if at all possible. Early designs of wafer probes and wafer probe interfaces were unable to handle the parasitic capacitances and inductances seen at RF frequencies. Noise pickup was an additional problem. However, with the increasing costs of more complex packages, the advent of the SiP, and the sale of known-good die (KGD), it has become clear that probing is becoming more necessary. Furthermore, because various functioning die are incorporated into the final package, in a worst-case scenario, a low-yield inexpensive die could jeopardize the entire package, making more expensive die in the package (plus the package) useless. This need has driven the advancement of RF wafer probing technology. In the early 1990s, only production microwave and high-speed integrated circuits (ICs) for expensive modules or packages were being fully RF probed before assembly. By the late 1990s, consumer devices for wireless communications began to be wafer probed routinely [3].
1.5 The Test Program

A test program (also called a test plan or test flow) is a computer program that tells the test system how to configure its hardware to make the needed measurements. This program can be developed in many ways, ranging from low-level C/C++ code to a graphical interface for ease of use. Within this program, instructions to the hardware and information, such as how to determine if the DUT has passed or failed the test (known as limits), are provided [1].
The sequence of tests is arbitrary and normally reflects each particular company’s device test philosophy. For instance, most companies conduct all of the dc tests at the beginning of the test flow. Some companies, however, want to do all of the dc testing at the end because it is believed that if the device was stressed during previous ac tests, the later dc tests might show failures due to device damage as a consequence of those previous tests [1]. Another possibility that is seen frequently is to arrange the tests in such a sequence that those tests with the highest failure rates are conducted at the beginning of the test flow. This method results in shorter test times if the test program is developed so that the test flow execution is stopped at the first failure. This methodology has the most benefit for low-yielding devices.
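The sketch below illustrates these ideas in Python. It is not taken from any particular ATE environment; the DUT class, its measurement methods, and the limit values are all hypothetical placeholders, and a real test program would call the tester’s own programming interface instead.

```python
# Minimal sketch of a test flow with limits and stop-on-first-fail execution.
# The DUT class and its measurement methods are hypothetical placeholders.

class FakeDut:
    """Stand-in for a real device; returns canned measurement values."""
    def measure_idd(self):  return 12.0    # supply current, mA
    def measure_gain(self): return 15.2    # gain, dB
    def measure_nf(self):   return 1.4     # noise figure, dB

def run_test_flow(dut, tests, stop_on_first_fail=True):
    """Run tests in order and compare each result against its limits."""
    results = {}
    all_passed = True
    for name, measure, low, high in tests:
        value = measure(dut)                 # perform the measurement
        passed = low <= value <= high        # compare against the test limits
        results[name] = (value, passed)
        if not passed:
            all_passed = False
            if stop_on_first_fail:           # shortest test time for low-yielding parts
                break
    return all_passed, results

# Tests ordered so that the highest-failure-rate items run first.
flow = [
    ("supply_current_mA", lambda d: d.measure_idd(),  5.0, 25.0),
    ("gain_dB",           lambda d: d.measure_gain(), 14.0, 18.0),
    ("noise_figure_dB",   lambda d: d.measure_nf(),   0.0, 1.6),
]

print(run_test_flow(FakeDut(), flow))
```

Reordering the entries in `flow`, or setting `stop_on_first_fail=False`, is all it takes to express the different sequencing philosophies described above.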
1.6 Calibration

All measurements that are performed on any kind of equipment have errors due to inaccuracies in the measurement technique as well as the equipment itself. The purpose of calibration is to reduce (or in theory eliminate) the measurement error that is related to the measurement equipment.

Two kinds of errors contribute to measurement errors: random errors and systematic errors. Random errors can only be characterized with probabilities and by means of statistics. A good example is the contribution of thermal noise. Unless the measurement is executed at absolute zero (0 kelvin), the measurement will always include a contribution from thermal noise. Repeating the same measurement over and over will yield very similar results, plus or minus the random contribution of the noise. Obviously, random errors cannot be calibrated out because they are random. The test engineer, however, will have to consider the effects of random error when he or she evaluates the results of a measurement.

The other type of error in a measurement is a systematic error. Systematic errors can be corrected because it is possible to characterize the exact amount of their contribution to a measurement. For instance, when an RF measurement is performed, the loss between the test head and the digitizer is always the same for one specific frequency and therefore can be calculated out of the measurement result. Numerous papers and books have been written describing the multitude of methods used to calibrate for RF power measurements [4–6].

Another (but not always necessary) type of calibration is termed de-embedding. Although used mostly for wafer probing, it can also be performed for packaged part testing. De-embedding calibration requires the use of additional “standards” that are replicas of the device (wafer probing) or package (package testing). There are at least four standards consisting of a short, open, 50-Ω load, and through connections. With RF probing, it becomes necessary to perform this additional calibration to compensate for every component all the way to the probe tip. These standards can be readily produced through a combination of the device designer’s knowledge of the device and the help of probe card models supplied by the probe card manufacturer. In contrast, for packaged devices, special standards must be designed and fabricated in the “package type” that is used for the device. This is a custom and expensive operation that is not highly utilized for a final production solution because it adds another process step, which increases the already high cost of testing. Most ATE testers provide the ability to perform de-embedding calibration of both die and packaged parts [1].

Finally, ATE and rack-and-stack testers should be subject to an overall calibration. This is usually performed at an interval that is based on the ATE manufacturer’s processes and experience. Also, whenever periodic maintenance or replacement of any tester hardware occurs, it should be followed by calibration. At RF frequencies, the mistake of forgetting to torque a connector properly can make accurate assessment of DUT performance impossible [1].
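As a simple illustration of removing a systematic error, the sketch below applies a stored path-loss table to a raw power reading so that the result is referred back to the DUT plane. The frequencies and loss values are invented for the example; a real system would use the loss data gathered during its own calibration procedure.

```python
# Sketch: correcting a raw power reading with a stored path-loss calibration.
# The calibration table values below are made up for illustration.

import bisect

# Path loss (dB) from the DUT plane to the measurement instrument, per frequency (Hz),
# as characterized during system calibration.
cal_freqs_hz = [800e6, 900e6, 1.0e9, 1.9e9, 2.4e9]
cal_loss_db  = [1.1,   1.2,   1.3,   1.8,   2.1]

def path_loss_db(freq_hz):
    """Linearly interpolate the calibrated loss at an arbitrary frequency."""
    if freq_hz <= cal_freqs_hz[0]:
        return cal_loss_db[0]
    if freq_hz >= cal_freqs_hz[-1]:
        return cal_loss_db[-1]
    i = bisect.bisect_left(cal_freqs_hz, freq_hz)
    f0, f1 = cal_freqs_hz[i - 1], cal_freqs_hz[i]
    l0, l1 = cal_loss_db[i - 1], cal_loss_db[i]
    return l0 + (l1 - l0) * (freq_hz - f0) / (f1 - f0)

def corrected_power_dbm(raw_dbm, freq_hz):
    """Refer a raw instrument reading back to the DUT output plane."""
    return raw_dbm + path_loss_db(freq_hz)

print(corrected_power_dbm(-12.0, 1.95e9))  # raw reading plus ~1.8 dB of path loss
```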
1.7 Reducing Test Costs

Reducing the costs involved with testing is often considered to be the most important factor with production testing. The acronym COT, meaning cost of test, is widely heard among individuals in the semiconductor industry ranging from test engineers all the way up to CEOs. It is clear that it is a very important topic. Many factors impact COT and they are addressed in Chapter 7. A few key items that impact COT are presented in the following sections.

1.7.1 Multisite Testing

An easy way to reduce COT (i.e., increase throughput using the same number of test systems) is to perform multisite testing. In this case, multiple contactors are used on the load board to allow, at best, full parallel testing of multiple DUTs. Oftentimes, complete parallel testing is not possible, but rather a percentage of full parallelism is obtained due to tester hardware limitations or DUT operation. Regardless, it is worth exploring the possibilities of multisite testing as a means to possibly reduce COT.
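The simplified model below shows how partial parallelism limits the throughput gain from adding sites. It is only an illustration with made-up timing numbers; Chapter 7 develops the complete multisite cost-of-test models.

```python
# Simplified throughput model for multisite testing (illustration only; see
# Chapter 7 for full COT models). All timing numbers below are made up.

def touchdown_time(t_single_s, sites, parallel_fraction):
    """Test time for one handler touchdown.

    parallel_fraction is the share of the single-site test time that can run on
    all sites simultaneously; the remainder is repeated serially per site.
    """
    t_parallel = t_single_s * parallel_fraction
    t_serial = t_single_s * (1.0 - parallel_fraction)
    return t_parallel + t_serial * sites

def devices_per_hour(t_single_s, sites, parallel_fraction, index_time_s):
    period = touchdown_time(t_single_s, sites, parallel_fraction) + index_time_s
    return 3600.0 * sites / period

# 1.5-s single-site test, 70% of it parallelizable, 0.5-s handler index time:
for n in (1, 2, 4):
    print(n, "sites:",
          round(devices_per_hour(1.5, n, parallel_fraction=0.7, index_time_s=0.5), 1),
          "devices/hour")
```

The output shows the diminishing return: doubling the site count does not double throughput when 30% of the test must still run serially per site.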
1.7.2 Outsourcing of Production Testing
Traditionally called test houses, facilities that own a large variety of test systems and peripheral test equipment offer a pay-per-use business model allowing individual semiconductor manufacturers to reduce capital expenditures on test equipment, real estate, and support personnel. The products and services offered by these facilities have expanded significantly during the past few years. These companies now offer everything from assembly of SiPs in packages, to custom packages (as well as, of course, testing). In extreme cases, some facilities even offer complete services from semiconductor manufacturing all the way to production testing of packaged parts. This allows a company to thrive solely on the intellectual property (IP) of its chip design.

In addition to reducing capital costs for a semiconductor manufacturer, the use of outsourcing allows a very simple COT calculation for the manufacturer where the personnel responsible for operating and maintaining the test systems are absorbed into the hourly rate of the test facility. With this, a good portion of COT is simply an hourly rate. Chapter 7 also discusses this in detail.

It is important to note that there are some liabilities associated with outsourcing your testing program. Although the test facilities have personnel that are capable of test program development, it is still necessary to have at least some expertise inside a company that can establish contract specifications that make sense, ask appropriate questions, monitor progress, and work as a partner with the test house to overcome problems [1].
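When testing is billed by the hour, the per-device figure falls out of a short calculation like the one below. The rate, throughput, and utilization numbers are purely illustrative.

```python
# Back-of-the-envelope cost per tested device when testing is outsourced at an
# hourly rate (illustrative numbers only; Chapter 7 covers full COT models).

def outsourced_cost_per_device(hourly_rate_usd, devices_per_hour, utilization=1.0):
    """Hourly charge divided by the devices actually tested in that hour."""
    effective_uph = devices_per_hour * utilization
    return hourly_rate_usd / effective_uph

# A $90/hour tester running 4,000 devices/hour at 85% utilization:
print(round(outsourced_cost_per_device(90.0, 4000, 0.85), 4))  # about 2.6 cents per device
```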
1.7.3 Built-In Self-Tests
While built-in self-tests (BISTs) have been used for many years in digital circuit design and testing, the technique is in its infancy when applied to RF circuits. The focus of BIST is on transistor-level defects, a level of granularity not traditionally observed by RF test engineers. Through its use, test costs can be reduced by reducing the externally applied signals (requiring less external hardware). This opens up significant opportunities for multisite testing. BIST can also reduce the quantity of tests that are needed. For example, currently, in an SoC transceiver, digital signals of the device are monitored and analyzed to determine whether the device is in the transmitting or receiving state. A BIST designed into the device could potentially indicate status and eliminate the need for tests such as “turn-on time” or “lock time” [1].
1.8 Testing RF, SoC, and SiP Devices

The focus of testing in this book is on RF, SoC, and SiP devices, which make up the front-end transceiver architecture used in the modern communications equipment that is overtaking the market in volume. This includes the signals from the antenna to the analog baseband portion of the chip, and the pertinent tests needed to characterize these in a production environment. In the subsequent sections, the front end will be broken into sections that are still tested as DUTs, although they may even be in a package with other portions. Chapters 2 through 5 discuss the details of these measurements.
1.8.1 Testing RF Low-Noise Amplifiers
The RF low-noise amplifier (LNA) is an RF-to-RF device, meaning that it has an RF input and an RF output. The LNA is the first active component in the receiver chain, and it has a few critical parameters that can determine how the overall receiver chain will perform. The low-noise amplifier is often the most critical device in the receiver chain of a wireless device. The LNA must amplify the extremely weak signals received by the antenna with large amounts of gain, while simultaneously minimizing the amount of added noise. Because it is the first device that is “seen” by the incoming signal, it is critical that its additive noise be extremely low (see the Friis equation in Chapter 4; a brief numerical illustration appears at the end of this section). The noise figure of an LNA is likely the most significant measure of how well a receiver will work. Thus the noise figure (NF) of the LNA is often tested in production. From a design point of view, the difficult task is to provide high gain while minimizing the introduction of noise. These two requirements work against each other.

The most common tests performed on an LNA are gain, VSWR, third-order intercept point (IP3), and NF (see Chapters 2 through 4). Gain is important because it determines the amount of amplification the LNA will perform on the incoming, often low-level, signal. VSWR gives an indication of the amount of reflection that occurs at the input of the LNA. Reflected signals at the input of the LNA do one of two things, both of which are undesirable:

1. Reflect some of the incoming signal back into the duplexer where it can leak into the transmitted signal;
2. Reflect some of the incoming signal, reducing the expected signal strength.
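The Friis cascade calculation referenced above explains why the LNA’s gain and noise figure dominate the receiver’s overall noise performance. The short computation below uses illustrative stage values (not data for any particular device) and shows a cascade whose total noise figure stays close to that of the LNA itself.

```python
# Cascaded noise figure via the Friis equation. Stage gains and noise figures
# below are illustrative values, not data for any specific device.

from math import log10

def db_to_lin(db):
    return 10.0 ** (db / 10.0)

def cascaded_nf_db(stages):
    """stages: list of (gain_dB, noise_figure_dB) in signal-chain order."""
    f_total = 0.0
    gain_product = 1.0
    for i, (gain_db, nf_db) in enumerate(stages):
        f = db_to_lin(nf_db)
        f_total = f if i == 0 else f_total + (f - 1.0) / gain_product
        gain_product *= db_to_lin(gain_db)
    return 10.0 * log10(f_total)

# LNA (18 dB gain, 1.0 dB NF), mixer (-6 dB, 8 dB NF), IF amplifier (20 dB, 5 dB NF):
print(round(cascaded_nf_db([(18, 1.0), (-6, 8.0), (20, 5.0)]), 2))  # ~1.7 dB, dominated by the LNA
```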
1.8.2 Testing RF Power Amplifiers
The RF power amplifier (PA) is required at the output of the transmitter chain and, until recently, was the one device that remained a stand-alone discrete device. Now, due to advances in materials and design as well as many low-power applications, the PA is being integrated more frequently.

PAs are used at the output of a transmitter to boost the signal level so that it can reach its final destination, which may be far away. Testing of PAs is necessary to ensure that the signal levels are not too large and are able to be controlled. Common tests on a PA are gain, carrier and image suppression, and sometimes gain flatness, over frequency as well as power level. A PA sometimes has the ability to adjust the gain directly or through the use of a variable attenuator at its output. The most important test to be done on a PA is the adjacent channel leakage ratio (ACLR) test. Historically, this was also referred to as the adjacent channel power ratio. This test is important to ensure that the signal does not leak into the channels adjacent to its intended channel. This is required by the Federal Communications Commission and many other regulating agencies around the world.
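The sketch below shows the arithmetic behind an ACLR number: integrate the measured spectrum over the main channel and over the neighboring channel, then take the difference in decibels. The channel bandwidth and spacing are WCDMA-like values chosen only for illustration, and the spectrum is synthetic rather than a real capture.

```python
# Sketch of an ACLR calculation from a power spectrum (bin powers in dBm).
# A real tester would integrate a captured spectrum; the data here is synthetic.

import numpy as np

def band_power_dbm(freqs_hz, power_dbm, f_center, bandwidth):
    """Total power (dBm) in a bandwidth centered on f_center."""
    mask = np.abs(freqs_hz - f_center) <= bandwidth / 2.0
    watts = 10.0 ** (power_dbm[mask] / 10.0) * 1e-3     # dBm -> W per bin
    return 10.0 * np.log10(watts.sum() / 1e-3)          # total W -> dBm

def aclr_db(freqs_hz, power_dbm, f_carrier, ch_bw, ch_spacing):
    main = band_power_dbm(freqs_hz, power_dbm, f_carrier, ch_bw)
    adj = band_power_dbm(freqs_hz, power_dbm, f_carrier + ch_spacing, ch_bw)
    return main - adj   # positive dB means the adjacent channel sits below the main channel

# Toy spectrum: a -80-dBm floor with a bump at the 1.95-GHz carrier.
freqs = np.linspace(1.92e9, 1.98e9, 6001)
spec = -80.0 + 50.0 * np.exp(-((freqs - 1.95e9) / 2e6) ** 2)
print(round(aclr_db(freqs, spec, 1.95e9, ch_bw=3.84e6, ch_spacing=5e6), 1))
```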
1.8.3 Transceivers
A transceiver is a device that transmits and receives. Many current SoCs and SiPs are actually simply front-end transceivers. They may also be simply either only the receiver or only the transmitter. In any case, specific types of tests are performed on them. Furthermore, whether the final application of these devices is as a mobile phone or a wireless LAN device, the tests are fundamentally the same. Transceiver architectures are almost always either of the superheterodyne or the zero-IF type. Because they are frequency-translating devices, performing either upconversion or downconversion on input signals, the testing methodologies differ from those of traditional RF-to-RF devices such as LNAs.

1.8.3.1 The Superheterodyne Architecture

The superheterodyne transceiver is considered the classic radio architecture in which the received signal is downconverted to baseband frequency in two stages. The incoming RF signal is first downconverted to an intermediate frequency (IF). This allows image suppression and channel selection by filtering out any unwanted signals. The filtering is commonly accomplished by use of surface acoustic wave (SAW) or ceramic filters. The filtered IF signal is then further downconverted to the baseband frequency, which is then digitized and demodulated in a DSP. Because the radio has two stages of downconversion, it is generally more complex and more expensive due to the extra components such as discrete SAW filters and voltage-controlled oscillator (VCO)/synthesizers. Figure 1.1(a) shows the superheterodyne receiver [1].

1.8.3.2 The Zero-IF Architecture
In contrast, the homodyne, or zero-IF (ZIF), radio transceiver is a direct-conversion architecture, meaning that it utilizes one mixer stage to convert the desired signal directly to and from the baseband without any IF stages and without the need for external SAW filters. A block diagram of a ZIF radio is shown in Figure 1.1(b), where it can be noted that there are fewer components than in the superheterodyne radio. It is also common to integrate the LNA, VCO, and baseband filters onto one single die. ZIF transceivers are not a new concept and they have been used for years in cellular and pager applications. They are also beginning to emerge in WLAN applications, which play an important role in the SoC market [1].

Figure 1.1 (a) Superheterodyne transceiver architecture and (b) ZIF transceiver architecture.

1.8.4 Testing Receivers
The receiver is the portion of a transceiver that takes the incoming RF signal from the antenna all the way down to analog baseband (I and Q). Many of the traditional tests that are performed on LNAs are also performed on receivers. These are items such as gain, NF, IP3, and so forth. However, due to the downconverting architecture of receivers, there is frequency translation of the signal from RF to baseband. This can cause some anomalies in the measurement results. These anomalies are sometimes overlooked, however, and the results are taken to be analogous to their RF-to-RF counterparts.

An important test item for receivers is sensitivity. The term sensitivity is an extension of the discrete RF device noise figure. With the discrete RF device, noise figure is a measure of sensitivity. The higher the noise figure, the harder it will be for a device to receive low-level signals. The all-encompassing term sensitivity is used with a receiver because it is possible to measure the full functionality of the SoC device. One such example sensitivity test is to provide modulated RF signals to the input of the receiver. The baseband output is then analyzed at different input power levels to determine when the data becomes corrupted because of noise introduced during the receiver downconversion process.
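One common way to automate that sensitivity test is a simple power-level search, sketched below. The stimulus and error-rate functions are placeholders for whatever instrument calls the test system provides, and the limit and step values are arbitrary examples.

```python
# Sketch of a sensitivity search: lower the RF input level until the demodulated
# data no longer meets the error-rate limit. The instrument calls are placeholders.

def measure_sensitivity_dbm(set_rf_level, measure_ber, start_dbm=-60.0,
                            stop_dbm=-110.0, step_db=1.0, ber_limit=1e-3):
    """Return the lowest input power (dBm) at which the BER still meets the limit."""
    level = start_dbm
    last_passing = None
    while level >= stop_dbm:
        set_rf_level(level)          # program the stimulus source
        ber = measure_ber()          # demodulate the baseband output and count errors
        if ber <= ber_limit:
            last_passing = level
            level -= step_db         # keep reducing the power
        else:
            break                    # data is corrupted; the previous level was the sensitivity
    return last_passing

# Toy demo: pretend the BER degrades sharply below -97 dBm.
_level = {"dbm": 0.0}
set_level = lambda dbm: _level.update(dbm=dbm)
fake_ber = lambda: 1e-6 if _level["dbm"] >= -97.0 else 0.1
print(measure_sensitivity_dbm(set_level, fake_ber))   # -> -97.0
```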
1.8.5 Testing Transmitters
The counterpart to the receiver, the transmitter, performs upconversion on analog baseband signals to produce an RF output from a device. As noted earlier, the transmitter is followed by a power amplifier to produce the necessary power for transmission of the signal to a base station [mobile telephony, local-area network (LAN), and so forth]. Similar to a receiver, the transmitter architecture can be either superheterodyne or ZIF, although the ZIF architecture is more prevalent today. Many of the same tests that are performed on PAs are also performed on transmitters. However, usually the tests are more extensive because the complete transmitter portion of a transceiver often has much more control of output power levels and these must be tested.
1.8.6 Testing PLLs and VCOs
The phase-locked loop (PLL) and VCO are the low- and high-frequency components of the circuit used to generate the local oscillator (LO) signal that is used in upconversion and downconversion. The PLL circuit is a frequency synthesis and control circuit. In general, it provides multiple, stable frequencies on a common time base within the same system. A basic PLL consists of a reference oscillator, phase detector, loop filter, and a VCO. While discrete PLL devices are available, the context within this book will be limited to testing of PLLs and VCOs that are contained within a transceiver.

The key items within a PLL/VCO circuit that are tested in production are the N or R dividers, the VCO, and the response time of the PLL circuit. Although direct access to the dividers is not necessary for normal operation of the PLL, on some devices, the output of the N divider is routed to the package to allow testing of the divider output. Because of the increase of multiple chips in a single package and the limitations on available package pins, this is becoming less common.

To test the VCO for proper operation, either VCO gain (KVCO) or direct RF frequency measurement is performed. Direct RF frequency measurement can be somewhat time consuming and is therefore typically reserved for characterization testing only.

The overall response time of the PLL circuit is often a critical parameter. In production SoC testing, a measure of this “lock” time, called synthesizer lock time, is a common measurement.

Finally, a critical measure of the quality of the output of this circuit is the measure of VCO phase noise, which measures the amount of noise that may be introduced to the device during upconversion and downconversion via the LO.
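Two of these measurements reduce to small calculations once the tester has captured the data. The sketch below estimates KVCO from two tuning-voltage points and extracts a lock time from a frequency-versus-time capture; all of the data and tolerance values are invented for illustration.

```python
# Sketches of two PLL/VCO calculations: VCO gain (KVCO) from two tuning-voltage
# points, and synthesizer lock time from a frequency-vs-time capture.
# All numbers here are synthetic examples.

def kvco_hz_per_volt(v1, f1_hz, v2, f2_hz):
    """VCO gain estimated from two (tuning voltage, frequency) measurements."""
    return (f2_hz - f1_hz) / (v2 - v1)

def lock_time_s(t_s, f_hz, f_target_hz, tol_hz):
    """First time after which the frequency stays within tol_hz of the target."""
    for i in range(len(t_s)):
        if all(abs(f - f_target_hz) <= tol_hz for f in f_hz[i:]):
            return t_s[i]
    return None  # never settled within the capture window

print(kvco_hz_per_volt(1.0, 2.400e9, 1.1, 2.402e9))       # 20 MHz/V

# Toy settling trace: frequency error decays toward the 2.45-GHz target.
times = [i * 1e-6 for i in range(200)]                     # 0 to 200 microseconds
freqs = [2.45e9 + 5e6 * (0.9 ** i) for i in range(200)]    # decaying frequency error
print(lock_time_s(times, freqs, 2.45e9, tol_hz=1e3))       # time to settle within 1 kHz
```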
1.8.7 Testing Modern Standards
Even though there are different approaches to production testing, at this time most device testing is typically done with continuous-wave (CW) signals (instead of modulated signals). Such tests are basically the same for all different standards. The only differences in testing different standards are in the stimulus parameters such as power level, frequency, tone spacing, and so forth. The requirements of power levels, tone spacing, and so on are specified in the test plan document with inputs from the systems engineer to make sure that that specific device is following at least the IEEE standards. Most device manufacturers go slightly beyond the IEEE standard to provide some kind of margin in their devices.

Table 1.1 Various Protocols Requiring Modulated Signals During Production Testing

Standard | Frequency Range (MHz) | Channel Bandwidth (MHz) | Data Rate (Mbps) | Modulation Format
802.11b (WLAN) | 2,400–2,500 | 22 | 11 | CCK
802.11a/g (WLAN) | 2,400–2,500 (g); 5,000–6,000 (a) | 16.8 | 54 | OFDM, 52 subcarriers (4 pilots and 48 data channels)
802.16a (WIMAX) | 2,000–11,000; three most common: 2,500, 3,400, 5,800 | 1.25–20 | Up to 75 | OFDM, 256 subcarriers (200 actually used; 192 of them are data channels)
802.15 (UWB) | 3,100–10,600 | 528 | 53.3–480 | OFDM
GSM | 3 bands: 890–960, 1,710–1,880, 1,850–1,990 | 0.200 | 0.270 | GMSK
CDMA2000 | 450, 800, 1,700, 1,900, 2,100 | 1.25 | 0.060 to 0.100 | CDMA
Bluetooth | 2,400–2,500 | 1 | 1 | FSK
Even though currently it is sufficient to test most devices with CW signals to guarantee the device performance in its final application, each one of the standards that is listed in Table 1.1 [7–14] typically has one or more tests that make the standard unique and might require modulated signals. For instance, the IEEE 802.11b spec specifies a spectral mask test, which requires complementary code keying (CCK) modulated signals as well as a wide bandwidth receiver. Other standard specific tests that require modulated signals are ACLR [commonly performed for code division multiple access (CDMA) devices], bit error rate tests (commonly performed on Bluetooth devices), or error vector magnitude (EVM), which is frequently measured on devices that use orthogonal frequency division modulation (OFDM) such as Worldwide Interoperability for Microwave Access (WiMAX).

A device-specific test is a test that is not performed because it is required according to the IEEE specification, but performed by the vendor to guarantee correct design of the device. In most cases those tests are added after the device has been fully characterized and it was determined that the device had deficiencies on some of the design parameters. A good example for such a test would be the “image reject test,” which is commonly performed on the downconverting mixer of the receive chain.

Table 1.2 Production Tests on RF, SoC, and SiP Devices

Parameter | LNA | PA | Transmitter | Receiver | PLL/VCO
ACPR/ACLR | — | X | X | — | —
Bandwidth | X | X | X | X | —
Carrier suppression | — | — | X | — | —
Charge pump current | — | — | — | — | X
Dynamic range | — | X | — | X | —
Error vector magnitude (EVM) | — | — | X | X | —
Gain | X | X | X | X | —
Gain flatness over frequency | — | X | X | X | —
Gain flatness over power | — | X | X | X | —
Insertion loss | — | — | — | X | —
Isolation | X | — | — | — | —
I/Q amplitude balance | — | — | — | X | —
I/Q dc offset | — | — | — | X | —
I/Q phase balance | — | — | — | X | —
LO frequency | — | — | — | — | X
N-/R-counter frequency | — | — | — | — | X
Noise figure | X | — | — | X | —
Output power | — | — | X | — | —
Phase noise | — | — | — | — | X
Power-added efficiency (PAE) | — | X | — | — | —
Power/gain compression (e.g., P1dB) | X | X | X | X | —
Power/gain linearity | — | X | — | — | —

Additional parameters listed in Table 1.2 are return loss, RF-LO rejection, second-order intercept point (IP2), spurious output, third-order intercept point (IP3/TOI), and total harmonic distortion.
Summary
Table 1.2 lists tests that are commonly performed on the various devices or portions of devices that have been introduced. These tests make up the majority of tests performed on modern RF, SoC, and SiP devices. As always there are exceptions. As new technologies arrive, new tests or variations of older tests are encountered. The important thing is to understand the fundamentals of test and measurement and with that, all tests can be efficiently and effectively implemented.
References [1]
Schaub, K., and J. Kelly, Production Testing of RF and System-on-a Chip Devices for Wireless Communications, Norwood, MA: Artech House, 2004.
[2]
Gray, K., “Current Trends in Test-Handler Technology,” Evaluation Engineering, Vol. 36, No. 5, 1997.
[3]
Gahagan, D., “RF (Gigahertz) ATE Production Testing On-Wafer: Options and Tradeoffs,” Proc. 1999 Int. Test Conf., 1999, p. 388.
[4]
Wong, K., and R. Grewal, “Microwave Electronic Calibration: Transferring Standards Lab Accuracy to the Production Floor,” Microwave Journal, Vol. 37, No. 9, 1994, pp. 94–105.
[5]
Dunsmore, J., “Techniques Optimize Calibration of PCB Fixtures and Probes,” Microwaves & RF, Vol. 34, No. 11, 1995, pp. 93–98.
[6]
Fitzpatrick, J., “Error Models for Systems Measurement,” Microwave Journal, Vol. 22, No. 5, 1978, pp. 63–66.
Concepts of Production Testing of RF, SoC, and SiP Devices
19
[7] Agilent Technologies, “Ultra Wideband Communication RF Measurements,” Application Note 1488, 2004. [8] Agilent Technologies, “WIMAX Concepts and Measurements,” Application Note, 2004. [9] Agilent Technologies, “WIMAX Signal Analysis.” Application Note, 2004. [10] Agilent Technologies, “Wireless Test Solutions,” Application Note 1313, 2002. [11] OFDM Alliance, “Multi Band OFDM Physical Layer Proposal for IEEE 802.15 Task Group 3A,” 2004. [12] OFDM Alliance, “Ultrawideband: High Speed, Short Range Technology with FarReaching Effects,” 2004. [13] Scourias, J., “A Brief Overview of GSM,” University of Waterloo, 1995. [14] CDMA Development Group, http://www.cdg.org/technology/cdma_technology/a_ross/ index.asp, 1996.
2 Tests and Measurements I: Fundamental RF Measurements 2.1 S-Parameters One of the most frequently used methods to describe the functionality of RF devices is to use scattering parameters, or in short S-parameters. S-parameters are used to describe how a device alters voltage waves that are applied to its ports. For instance, an amplifier should increase the amplitude of the voltage wave that was applied to the input of the device. Therefore, the S-parameter measurement of an amplifier’s gain compares the output amplitude of the amplifier to the amplitude of the voltage wave that was applied to stimulate the amplifier. First, let’s look into how S-parameters are defined [1]. Figure 2.1 shows the signals that are applied to obtain S-parameters in the case of a two-port device. When we talk about SoC testing, in most cases we are talking about two-port devices, that is, devices in which we observe one output relative to one input. The simplest example of an active two-port device would be an amplifier. The a1 signal in Figure 2.1 is the voltage wave that is applied to the input of the device. Likewise, the a2 signal is a voltage wave that is applied to the output of the device; b1 is the voltage wave that can be measured at the input of the device; and b2 is the voltage wave that can be measured at the output of the device. For two-port devices we have four S-parameters, which are defined as follows:
21
22
Advanced Production Testing of RF, SoC, and SiP Devices
b2
a1 Two port device
a2
b1
Figure 2.1 S-parameter definition for a two-port device showing incident and reflected signals.
S 11 =
b1 with a2 = 0 a1
(2.1)
S 21 =
b2 with a2 = 0 a1
(2.2)
S 12 =
b1 with a1 = 0 a2
(2.3)
S 22 =
b2 with a1 = 0 a2
(2.4)
Under the assumption that the output of the two-port device is terminated (a2 = 0), S11 is called the input reflection coefficient. Under the same assumption, S21 is called the forward transmission coefficient. Also, S22 is the output reflection coefficient if the input of the device is terminated (a1 = 0) and S12 is the reverse transmission coefficient if a1 is assumed to be zero. The term terminated or matched means that the load that is provided to the input of the second port is the complex conjugate of its input impedance. Likewise, the load that is provided to the output of the second port has to be the complex conjugate of the output impedance. For instance, if the impedance of port 2 is Z2 = R + jX, then the load that is applied to that port should have the characteristics of ZL = R − jX. In other words, the load has to be selected so that there is a maximum transfer of power from the two-port device to the load. It is important to note that S-parameters are not scalar numbers but vectors; since we are applying voltage waves and observing the device’s response by
Tests and Measurements I: Fundamental RF Measurements
23
measuring the reflected or transmitted waves, we have to compare the magnitudes and phase information in the calculations. Therefore, S-parameters can be listed in a magnitude–phase notation or in real–imaginary notation. The more common way is to specify magnitude and phase. In RF and SoC testing, the phase is typically not of importance, and the test engineer concentrates only on the magnitude portion of the measurement. Because S-parameters are obtained by dividing two numbers of the same units (volts), S-parameters themselves do not have a unit. However, instead of simply showing S-parameters as a number without units we translate those ratios into a logarithmic scale and therefore talk about S-parameters in decibels (dB). As mentioned earlier, S-parameters are obtained by calculating the ratios between reflected to incident and transmitted to incident voltage waves. To transform those unitless ratios into decibels, we have to use the following equations:
2.1.1
S 11 [dB] = 20 × log S 11
(2.5)
S 21 [dB] = 20 × log S 21
(2.6)
S 12 [dB] = 20 × log S 12
(2.7)
S 22 [dB] = 20 × log S 22
(2.8)
Application of S-Parameters in SoC Testing
2.1.1.1 Input Match
Probably the most widely used application for S-parameters in SoC testing is to determine the input match of the device. The input match of the device indicates how much of the power that is applied to the device is reflected. Obviously this number should be low because the goal is to get as much of the applied power into the device as possible. Assuming that the rms value of the applied voltage wave is 0.1V and the rms value of the reflected voltage wave is 0.02V (if the device output is perfectly matched), the magnitude of the input reflection coefficient is S 11 = 0.02 01 . = 0.2
(2.9)
To get this number into decibels, we have to calculate S 11 = 20 × log (0.2 ) = −14 dB
(2.10)
24
Advanced Production Testing of RF, SoC, and SiP Devices
Considering that most device data sheets specify a maximum input match of around –10 dB this would be an acceptable number for most devices. 2.1.1.2 Output Match
On modern SoC devices, an output match measurement is performed to measure the output matching of the power amplifier. The point of this measurement is to determine how much of the power is actually absorbed by the load and not reflected. Values less than –10 dB are typically considered acceptable as an output match. Often, the output match measurement is simply called the S22 measurement. 2.1.1.3 Gain Measurement
In many cases the S21 measurement can be used as a gain measurement. There are various definitions of gain in the RF world, and if some conditions of matching are met, the differences between those various gain definitions can be neglected. For instance, if the input and output of a device are matched, S21 can be used as a gain measurement because all of the input power is absorbed by the device, and the output power coming from the device is applied to the load. Gain is one of the key parameters of amplifiers. If a single amplifier is measured, the gain value is typically on the order of 20 dB even though there are applications where this number could be significantly lower or higher. Another key parameter on top of the gain measurement is the gain flatness of the device. For instance, a Bluetooth device works over a bandwidth of 100 MHz. It is important to make sure that the gain of the transmitter is the same whether the measurement is taken at 2.400 or at 2.499 GHz. The maximum allowed gain deviation over frequency is called gain flatness. In the preceding example of a Bluetooth transmitter, the typical gain flatness would be specified as, for instance, 1 dB. That means that if the gain over the whole 100-MHz band is observed, the difference between the highest gain and the lowest gain must not be more than 1 dB.
2.2 PLL Measurements The purpose of the PLL in each SoC device is to provide the mixer in the receiver chain with the correct frequencies to downconvert the received signals into the baseband and to provide the mixer of the transmitter chain with the correct frequencies to upconvert baseband signals into the device’s transmitting spectrum. Before we talk about measurements that test the functionality of a PLL in a SoC device, it is important to understand the basic functionality of a PLL. Figure 2.2 shows a basic block diagram of a PLL.
Tests and Measurements I: Fundamental RF Measurements
fref ref
/R
Phase Phase Detector detector
fout=
Loop Filter filter
VCO
25
Nfref R
/N
Figure 2.2 PLL circuit used to generate LO frequencies used in upconversion and downconversion in SoC devices.
As can be seen in Figure 2.2, a PLL consists of two programmable dividers, a phase detector, a loop filter, a VCO, and a feedback loop. The phase detector frequency can be calculated by dividing the reference frequency by R: f PD =
f ref R
(2.11)
The RF output frequency of the PLL, which is often termed fout, can be calculated as, f out =
N ⋅ f ref R
(2.12)
More theory about PLLs can be found in [2]. We now focus on some key measurements of PLLs in SoC devices. 2.2.1
Divider Measurements
As can be seen in (2.12), the output frequency of a PLL is determined by the values of the two dividers N and R. Depending on the device and the application, the range of those dividers can be quite large, for instance, from 1 to 65,535. Stepping through all of those values is obviously not practical in a production test. To guarantee that all divider settings work, in most cases test modes have been built into the device that allow the test engineer to verify the functionality of the dividers with just a few tests. A test mode is a device state in which the test engineer can access functions that are not intended for use in the final application. The purpose of a test mode is to allow measurements that are otherwise not possible to perform, for instance, by routing an internal signal to a pin that is used for other functions during normal operation. Another frequently applied approach in divider testing is to program the divider into one state that will generate a certain RF frequency. After verifying that this setting indeed generates an RF tone at the expected frequency, the divider is reprogrammed by flipping all bits and the RF tone is measured again
26
Advanced Production Testing of RF, SoC, and SiP Devices
at a different frequency. For instance, the divider might first be programmed to the value 011111111111, which corresponds to the divider setting 2047 and generates a frequency f1. Next, the divider is reprogrammed to the value 2048 (binary number 1000000000), which generates a frequency at f2. This approach makes sure that all bits in the dividers are toggled once and can be verified with two RF power measurements whose frequencies can be calculated using (2.12). 2.2.2
VCO Gain Measurement
The VCO has to be able to cover the whole frequency band in which the device operates. For instance, a Bluetooth radio covers the frequency band from 2.4 to 2.48 GHz. That means that the PLL in the Bluetooth radio has to be able to cover a frequency range of at least 80 MHz under the worst operating conditions such as the supply voltage being at the lower limit. PLL gain measurements can be performed by programming the dividers such that the PLL tunes to the lower frequency limit. Because the voltage that controls the VCO is dc in the case for which the VCO has settled, a simple voltage measurement is performed at the input of the VCO. Next, the dividers are programmed such that the PLL tunes to the upper frequency where the voltage at the input of the VCO is measured again. The change in RF frequency divided by the change in dc voltage is defined as the gain of the VCO,1 or KVCO: K VCO =
∆f ∆V
(2.13)
where ∆f is the change in frequency and ∆V is the change in voltage applied to the VCO. As an example, assume that a VCO operates between 850 and 900 MHz. The frequencies in Table 2.1 are measured in response to the applied voltages in Table 2.1. Using (2.13), the VCO gain is calculated to be 27.8 MHz/V. Assuming that the VCO is linear in this operating range, this means that the output frequency of the VCO changes by 27.8 MHz for every 1-V change at its input. 2.2.3
PLL Settling Time
The PLL settling time is the time that it takes the PLL to change from one frequency to another. Obviously this is a very important parameter for standards 1. Some modern SoC devices have VCOs that are designed with tunable capacitors in order to cover a wider frequency band. If that is the case, the test engineer has to make sure that the capacitor setting is not changed while the VCO gain is measured with the method described earlier.
Tests and Measurements I: Fundamental RF Measurements
27
Table 2.1 Applied Voltage and Measured Frequency at VCO Output for a Common VCO VCO Parameter
State 1
State 2
Voltage applied to VCO (V)
0.4
2.2
Measured VCO output frequency (MHz)
850
900
that require frequency hopping such as Bluetooth. Measuring the PLL settling time can be challenging and is always dependent on the architecture of the test system that is used. It requires powerful capabilities in terms of trigger requirements for the digital, analog, and RF subsystem and, depending on the rate of frequency changes, the bandwidth of the test system digitizer and the tester’s ability to change the frequencies of the tester local oscillator. Reference [3] explains in detail one way to perform a PLL settling time measurement.
2.3 Power Measurements Most parameters that are used to specify a SoC receiver are related to RF power. The exception is the reflection coefficient, as discussed earlier, in which S-parameters are used. The specification of the receiver includes minimum detectable RF power as well as the maximum allowed input power, power levels of interfering tones, and so forth. Likewise, the output of the transmitter is specified mostly in terms of RF power: the maximum RF output power under certain conditions, the change in RF power if gain steps are part of the SoC transmitter, the distribution of RF power over the frequency band, and so forth. Reference [3] gives an excellent overview of how RF power is defined. It is important for the test engineer to understand the concept of measuring and applying RF power in dBm. The dBm unit is another example of translating unitless numbers into the logarithmic scale. The letter m indicates that the RF power is referenced to 1 mW. To convert power from the linear scale (watts) into dBm, the following formula has to be applied: P [W ] P [dBm ] = 10 × log 0.001W
(2.14)
Let’s assume that we have an RF amplifier that has an output power of 3W; we can calculate the power in dBm using (2.14):
28
Advanced Production Testing of RF, SoC, and SiP Devices
P = 10 × log
3W = 34.8 dBm 0.001W
(2.15)
When RF power is measured, it is important to specify the measurement bandwidth. The higher the measurement bandwidth specified, the larger the frequency band that can be measured. However, the downside is that the wider the measurement bandwidth chosen, the more noise integrated into the measurement. This might not be a problem for measurements that have strong signals (i.e., signals that are way above the noise floor of the measurement equipment). However, for signals that are weak, special care must be taken in the correct selection of the measurement bandwidth. As a rule of thumb, the sensitivity of an RF power measurement goes up 3 dB if the measurement bandwidth is cut in half. This might make it tempting to always select the smallest possible measurement bandwidth. It is important to know, however, that the smaller the measurement bandwidth, the longer the time required to execute the measurement. Therefore, because a short test time is one of the key objectives for every test engineer, the right compromise has to be found between small measurement bandwidth in order to perform accurate measurements of low-power signals and a wider measurement bandwidth to save test time. The following sections focus on basic measurements that involve measuring RF power on SoC devices. More advanced concepts of measuring RF power such as distortion measurements are covered in Chapter 3. 2.3.1
RF Output Power Measurement
When a test program for a transmitter or an amplifier is developed, the RF output power measurement is most likely the test that is implemented first. Without an output signal from the device, no other tests can be developed. Most RF measurements in SoC devices are performed with CW tones, which means that the RF power is concentrated into an infinitely small bandwidth (at least in theory). Remember the earlier discussion about measurement bandwidth selection. In the case of a power amplifier or transmitter output, the power level will be relatively high, say, between –10 and +35 dBm, so a good starting point for the test engineer might be the default bandwidth of the measurement equipment.2 When modulated power has to be measured, it is important to understand the minimum required bandwidth for the measurement. For instance, the channel bandwidth for a modulated 802.11b signal is 22 MHz. This is a system-specific number that can be found in the data sheet of the device. If the measurement 2. Care has to be taken when high-power levels are measured. To avoid overdriving the measurement receiver, the RF signal might have to be attenuated or an attenuator might have to be switched into the measurement path of the test system.
Tests and Measurements I: Fundamental RF Measurements
29
bandwidth of the test system is less than that of the signal, the result of the measurement will be a lower reading than anticipated. On top of that, of course, it is mandatory that all required measurement parameters, such as sampling rate, are set up correctly to allow the measurement to be performed. Many modern modulation formats use pulsed power. This is the case, for instance, for the IEEE 802.11a standard, which uses OFDM. In the case of modulated power, the measurement will yield typically just an average power. To determine the pulsed power, additional parameters such as duty cycle or measurement filter type have to be specified. More specific information about measuring modulated or pulsed power can be found in [3].
2.3.2
Spur Measurements
As mentioned in Chapter 1, most tests in RF SoC devices are still performed with CW signals despite the test system’s ability to apply and measure modulated or pulsed power. A typical measurement that falls into this category is a spur measurement. A spur is an RF tone that is in the frequency band of interest even though it is not intended to be there. For instance, in the transmitting spectrum of an RF SoC device, there are typically more spectral components than desired or expected after applying the correct stimulus at the input of the device. Many factors can contribute to the creation of spurs. One main contributor is the PLL in the system where a phase detector operates at one specific frequency. Even though the design engineer tries to minimize the spurs that are created by the phase detector, they can never be completely avoided. Other spurs that are commonly seen are the spectral components from the reference frequency or a fraction of the reference frequency. Every wireless standard has some form of requirement in terms of what bandwidth it is allowed to occupy and how much energy it is allowed to leak into other channels, or the maximum allowed power per spur. Figure 2.3 shows an example where the modulated spectrum can be seen on the left side. However, a powerful spur in the spectrum can also be seen. This spur was created by the reference input crystal oscillator and its impact was not considered during the design phase. Even though all other parameters were within specification, this spur in the transmitting spectrum required a redesign of the device. Finding the exact frequency of a spur is normally done during characterization of the device. During production testing, a simple power measurement is then performed at exactly that frequency and, according to the power level, the device is then judged as either good or bad. A more serious problem is the occurrence of random spurs. Random spurs are spurs that are present at some frequencies in one DUT but are not present at the same frequencies in another DUT. One of the reasons they are there is due
30
Advanced Production Testing of RF, SoC, and SiP Devices −10 −20 −30 −40 −50 −60 −70 −80
0
200
400
600
800
1000 1200 1400
1600
Figure 2.3 Spurs in the transmitting spectral output of an SoC device.
to manufacturing variations. During device characterization the engineer has to decide whether the existence of the spurs is serious enough that they have to be measured during production testing or if their power levels are so low that they are not a problem. If random spurs have to be measured during production testing, the whole spectrum has to be scanned and evaluated for spurs. Because random spurs are nondeterministic, the test engineer has no other choice than to scan the whole frequency spectrum where the device radiates RF power. Needless to say, this is a lengthy test and the goal is always to avoid searching for random spurs. 2.3.3
Harmonic Measurements
Harmonic measurements and spur measurements have in common that they are done by performing power measurements of unwanted tones. For spur measurements, however, the whole spectrum has to be scanned and characterized in order to determine the frequencies where those measurements should be performed. For harmonic measurements, the frequencies are known to the test engineer without characterizing the spectrum since they are multiples of the main tone. Even though there are exceptions, harmonic measurements are in most cases done for the transmitter only. The creation of the harmonics is due to fact that the output amplifier of the transmitter is driven close to saturation or eventually even into saturation. This can be described as compression in the frequency domain or clipping in the time domain. Harmonics are created every time the transmitting amplifier is compressed, and the power level of those harmonics increases when the amplifier is driven more into saturation. If harmonic measurements are performed, the test engineer focuses in most cases on odd harmonics because odd harmonics
Tests and Measurements I: Fundamental RF Measurements
31
have significantly higher power levels than even harmonics in the case that an amplifier is driven into compression. Assuming that a device has its fundamental tone at 2.5 GHz, the third harmonic is at 7.5 GHz and the fifth harmonic is at 12.5 GHz. The even harmonics are at 5 GHz (second harmonic) and 10 GHz (fourth harmonic). Because most testers that are used to test SoC devices cannot measure frequencies above 10 GHz and the main focus is on odd harmonics, the harmonic measurement in the preceding example would be performed at 7.5 GHz to measure the power level of the third harmonic. 2.3.4
Spectral Mask Measurements
Whenever a device is designed, it is the goal of the design engineer to concentrate the power of the signals into the bandwidth of interest. This is not only due to the desire to be as energy efficient as possible, but also because the amount of power that can be transmitted outside of the desired bandwidth is limited by the specification for each standard. The idea is to reduce interference by limiting radiation of RF frequencies beyond the necessary bandwidth [4]. For instance, the IEEE 802.11b standard specifies that the maximum power level in a frequency band that is 11 to 22 MHz away from the carrier has to be at least 30 dB below the carrier power [5].3 Figure 2.4 shows the spectral mask plot for an 802.11b modulated carrier. The IEEE 802.11b standard also specifies that for offsets that are greater than 22 MHz, the power levels have to be 50 dB below the carrier level. In the case of Figure 2.4, we can see that the device is passing the criterion that the frequency band be between 11 and 22 MHz, but is failing the criterion that the frequency band be 22 MHz or more away from the carrier: At an offset of 22 MHz, the difference between the carrier power and the power at the offset frequency is only 47.55 dB.
2.4 Power-Added Efficiency Power-added efficiency (PAE) is a measurement that is done almost exclusively on power amplifiers and describes how efficiently the device is working. PAE is the change in RF power divided by the dc power consumption of the device. It is defined as follows: PAE =
∆P RF P DC
(2.16)
3. There are more conditions attached, for instance, in terms of the measurement bandwidth, which has to be 100 kHz when the spectral mask measurement is performed.
32
Advanced Production Testing of RF, SoC, and SiP Devices
Figure 2.4 A spectral mask plot for a WLAN 802.11b modulated carrier, as output from the transmitting portion of an SoC device.
As an example, assume that a DUT (RF power amplifier) is subject to the following conditions: Input power is +2 dBm and it has a dc operating voltage of 3V. As a result, the DUT draws 2,250 mA of current and outputs +34 dBm of RF power. As mentioned earlier, the dBm unit is the common way of specifying power in the RF world. Because the dBm unit is in the logarithmic scale we will have to calculate the corresponding power in watts: P in
P out
2 10
W
= 0.001 × 10
= 0.0015W
W
= 0.001 × 10 10 = 2.5118 W
34
P DC = 3 V × 2.25A = 6.75W PAE =
2.5118 − 0.00158 = 37.2% 6.75
(2.17)
(2.18) (2.19) (2.20)
This means that about 37% of the dc power is actually used to amplify the RF signal. The remaining 63% is dissipated as heat. Because the power amplifier is the device that has the highest current consumption in a mobile phone, for instance, systems and design engineers pay a lot of attention to PAE because this
Tests and Measurements I: Fundamental RF Measurements
33
is the main factor that causes the battery to drain and therefore reduces a mobile phone’s usable time before it has to be recharged.
References [1]
Agilent Technologies, “S-Parameter Design,” Application Note 154, 2000.
[2]
“Phase-Locked Loops,” http://www.uoguelph.ca/~antoon/gadgets/pll/pll.html.
[3]
Schaub, K., and J. Kelly, Production Testing of RF and SoC Devices for Wireless Communications, Norwood, MA: Artech House, 2004.
[4]
“Spectral Mask,” http://en.wikipedia.org/wiki/Spectral_mask.
[5]
http://standards.ieee.org/getieee802/download/802.11b-1999.pdf.
3 Tests and Measurements II: Distortion 3.1 Introduction A lot of changes have been made to the methodologies used for testing for distortion in modern RF-containing SoC devices. Many excellent resources are available describing the types of distortion and how to test them. With the recent integration of RF front ends into SoC transceivers, especially in the area of wireless communications, some changes have been made to the fundamentals of distortion testing. In particular, the rise of homodyne, or ZIF, architectures has led to the importance of distortion mechanisms and products that have traditionally been ignored due to past architecture types. This chapter is aimed at enlightening the SoC production test engineer to distortion testing techniques required as the levels of integration continue to increase. However, it can also serve as an updated review of the fundamentals of distortion and distortion testing for all electronic devices in general. This chapter takes concepts from traditional RF and traditional mixed-signal testing and unites them in one discussion. With the integration levels of today’s SoC devices for wireless communications, it is necessary to have a full understanding of how these traditional analog measurements are performed, regardless of whether they are at RF frequencies or baseband frequencies. The concepts used in performing RF frequency distortion measurements are the same as those used in performing lower frequency distortion tests. Nonlinear properties such as harmonic and intermodulation distortion occur in all real devices. The methods used to determine these properties, and other distortion properties, in devices will be shown in this chapter. Numerous papers have been written on various types of distortion tests ranging from audio frequencies to several gigahertz, but when one considers the 35
36
Advanced Production Testing of RF, SoC, and SiP Devices
basic phenomenon of distortion, it all leads to the same result: degradation of desired signal.
3.2 Linearity Distortion occurs due to the nonlinear behavior of a device. All devices, whether RF or otherwise, exhibit nonlinear behavior. At times it is part of proper operation, as in the case of a high-efficiency power amplifier, mixer, or frequency doubler. At other times, nonlinear behavior is undesired and a problem that deteriorates the intended performance of a DUT. Fundamentally, the linearity of a system has two requirements [1]: 1. All frequencies in the output of a system will be relative to the input by a proportionality, or weighting factor, independent of power level. 2. No frequencies will appear in the output, that were not present in the input. However, because semiconductor devices are based on diodes and transistors, there is nonlinear device behavior between the input and output signals. This is known as distortion.
3.3 Distortion in SoC Devices When signals are sent through a device, the occurrence of distortion is inevitable. The problem with distortion—and it does not matter which type of distortion because it nets the same result—is that distortion products take away from the intended, or fundamental, signal. For example, assume all of the desired power coming from a DUT was contained in a single tone at the fundamental frequency when the device was operating at low power levels. When the device power level is increased and distortion occurs, the power begins to be seen at the distortion products (e.g., second and third harmonics), taking away from the power intended to be at the fundamental frequency. Distortion can occur in any of the following most common forms: harmonic distortion, intermodulation distortion, or gain compression. Testing techniques for the presence of distortion consist of the application of single-tone (gain compression and harmonic distortion), two-tone (intermodulation distortion), and multitone (cross modulation) stimuli to the DUT while analyzing the output spectrum.
Tests and Measurements II: Distortion
37
3.4 Transfer Function for Semiconductor Devices Because semiconductor devices are made of diode structures, a derivation of voltage behavior in diode-based devices is presented here and will be used in the subsequent harmonic and intermodulation distortion explanations [2]. Most literature starts from the statement that the transfer function can be represented as a power series, but this discussion will begin with fundamentals. The definition of the current through a diode is I out = I S (e αV in − 1)
(3.1)
where IS is a constant (saturation) current of a diode, α is a constant dependent on temperature and the design of the diode structure, and Vin is the combined ac and dc voltage across the diode. If the total input voltage is generalized to contain both dc and ac components, then V in = V 0 + v in
(3.2)
)
I out = I S (e α (V 0 +v in ) − 1
(3.3)
where V0 is a dc voltage and vin is a small signal ac voltage. Because vin is small, a Taylor Series (or power series) expansion can be used to rewrite (3.3) as I 0 = I 0 + v in
dI out dv in
+ V0
1 2 d 2 I out v in 2 dv in2
+K
(3.4)
V0
From (3.3), dI out dv in
= αI S e αV 0
(3.5)
V0
and each successive derivative is just a constant, α, multiplying the exponential, d N I out dv inN
= α N I S e αV 0 V0
(3.6)
38
Advanced Production Testing of RF, SoC, and SiP Devices
It is often easier to work in terms of voltages rather than currents since they are simpler to measure. If the current through a diode is measured as a voltage across some resistance R, then from Ohm’s law, (3.4) becomes v out = a 0 + a 1 v in + a 2 v in2 + a 3 v in3 + K
(3.7)
where a0, a1, a2, a3, …, are constants that have absorbed R and the derivatives. The term a0 is a dc term describing the dc parameters of a diode. An amplifier, when working in the linear region, is described by the linear term a1. The higher order terms are used to describe either the proper operation of nonlinear devices, or the undesirable, nonlinear distortion found in many SoC devices. Equation (3.7) is the fundamental equation that has been used to describe all effects of distortion on devices ranging from audio frequencies to RF applications for many years. The two common distortion tests, for harmonic distortion and for the various forms of intermodulation distortion, involve applying a single-tone sinusoid and two combined sinusoids to the device and analyzing the response. Mathematically this is described by acquiring the solutions to (3.7) for each case. These are discussed in the following sections.
3.5 Harmonic Distortion Consider what happens if the input voltage waveform to a DUT is a single-tone frequency, v in = A cos( ωt )
(3.8)
v out = a 0 + a 1 A cos ωt + a 2 A 2 cos 2 ωt + a 3 A 3 cos 3 ωt + K
(3.9)
Then (3.7) becomes
Considering only the first three components and applying trigonometric identities show that each higher-order term can be rewritten as a multiple of the fundamental frequency ω; for example, cos 2 ωt = and
1 (1 + cos 2 ωt ) 2
(3.10)
Tests and Measurements II: Distortion
cos 3 ωt =
39
1 (3 cos ωt + cos 3 ωt ) 4
(3.11)
Therefore, v out = a 0 + a 1 A cos ωt + a3 A 3 4 v out
(3.12a)
(3 cos ωt + cos 3 ωt )
3a 3 A 3 a2 A 2 a A2 cos 2 ωt + = a0 + + a1 A + cos ωt + 2 2 4 2 (3.12b)
a3 A 3 4
a2 A 2 (1 + cos 2 ωt ) + 2
cos 3 ωt
Amplitude
Harmonic distortion occurs when some of a DUT’s intended power is transferred from a desired frequency to a higher order multiple of the fundamental frequency. These higher order terms are called harmonics and are classified by their order. The order is an integer and is taken to be m. All of the higher order terms can be written in terms of the fundamental frequency and from that it is immediately noticed that each higher order term is really the fundamental frequency ω multiplied by the order (e.g., 2ω, 3ω, and so forth) of the term. Thus, for any vin = cos(ωt), the output will consist of all harmonics, mω, where m is an integer going from minus infinity to infinity. Figure 3.1 shows the first few harmonic distortion products for a fundamental signal having frequency, f1, where f = ω/2π. Note that if the frequency axis could be extended infinitely, the harmonic distortion components would continue indefinitely, equally spaced, but decreasing in amplitude. Harmonic distortion typically occurs at higher power levels, but because no devices are perfect, harmonic distortion can even be generated at low power
DUT …
f1
f1 Frequency
2f 1
3f 1
Figure 3.1 Harmonic distortion products due to a single-tone input, f1, to a DUT.
40
Advanced Production Testing of RF, SoC, and SiP Devices
levels. It has been rarely tested for in traditional, pure-RF devices for wireless communications because RF frequencies are so high already and the second and third harmonics are far from the frequency band of interest. On receiver devices, the harmonics will be filtered due to the finite bandwidth of the receiver. However, on transmitting devices, there is a little more concern, because it is important to ensure that signal transmission is minimal at other frequencies that may be used for other purposes. Harmonic distortion is defined and tested by application of a single-tone (frequency) sinusoidal waveform, as in the preceding derivation. Even-order harmonics result from αj with even j. In (3.12b) it is important to note that the amplitude of the nth harmonic consists of a term proportional to An. 3.5.1
Measuring Harmonic Distortion
The two primary ways to quantify the harmonic distortion content of a DUT are (1) total harmonic distortion and (2) signal, noise, and distortion, both of which are discussed in the following sections. 3.5.1.1 Total Harmonic Distortion
A standardized measure of harmonic distortion is total harmonic distortion (THD). THD is the relative power contained in all harmonics of a signal expressed as a percent of the fundamental signal power. It is a measure of how well the device converts energy to the desired fundamental signal versus the undesired harmonic signals. Harmonic distortion is specified (and tested) at a specified output power of the DUT. It is defined as follows: THD(% ) =
P 2 + P 3 + P4 + K P fundamental
× 100%
(3.13a)
where P2, P3, P4, … are the power, in watts, of the second, third, fourth, and so on harmonics, respectively. Pfundamental is the power of the desired fundamental signal. Alternatively, in units of volts, as when measuring a digitized analog baseband signal,
(V 2 ) 2 + (V 3 ) + (V 4 ) + K 2
THD(% ) =
V fundamental
2
× 100%
(3.13b)
where V2, V3, V4, … are the voltage amplitudes of the second, third, fourth, and so on harmonics, respectively. Vfundamental is the voltage amplitude of the desired
Tests and Measurements II: Distortion
41
fundamental signal. In either case, an ideal device with no distortion would have 0% THD. If a measurement receiver, or digitizer, does not have adequate bandwidth, THD measurements are measured by making several simple power measurements because the fundamental and harmonic frequencies are often far apart. 3.5.1.2 Signal, Noise, and Distortion
Signal, noise, and distortion (SINAD) is a measure of the quality of a received signal and is really just another variation of THD. The definition of SINAD in decibels is: S + N + D SINAD[dB] = 10 log 10 N +D
(3.14)
where S is the signal power (watts), D is the distortion power (watts), and N is the noise power (watts). Ideally, the distortion and noise powers would be zero. For zero noise and zero distortion (or noise and distortion that approach zero), the SINAD equation would reduce to: S +0+0 SINAD[dB] = 10 log 10 = small + small
(3.15)
10 log 10 ( VeryBigNumber )
and the end result would be a large number that would indicate that the device converts energy very efficiently, has almost zero distortion, and adds almost zero noise. If the measured distortion value of one DUT versus another device is higher, then the overall SINAD result will be lower, indicating that the first device is not as efficient. This happens because the distortion is both added to the numerator, but then divided by the denominator. The same thing happens for the noise. As an example, let S = 1 and consider that there is zero noise and that the distortion power is 1/10 of the signal power; then .S S + 0 + 01 SINAD[dB] = 10 log 10 = 0 + 01 .S
10 log 10 (11) = 10.4
Now, doubling the distortion to 1/5 of S yields
(3.16)
42
Advanced Production Testing of RF, SoC, and SiP Devices
S + 0 + 0.2S SINAD[dB] = 10 log 10 = 0 + 0.2S
(3.17)
10 log 10 (6 ) = 7.78
Equation (3.17) yields a result that is smaller than (3.16) by 2.6 dB. This gives a good indication that the distortion plus noise power has increased by approximately two times or that the fundamental power has decreased by two times. In any case, the efficiency has been reduced in terms of power by a factor of two [3].
3.6 Intermodulation Distortion The single-tone description of the previous section only yields harmonic distortion products and only reveals part of the distortion story for wireless communication systems and SoC devices. Modern wireless systems use multiple tones and multiple modulation formats to squeeze as much information as possible into the channel bandwidth. In a communications system this means that signals in one channel can cause interference with signals in adjacent channels. As the spectrum becomes busier and the channels become more tightly spaced, minimizing intermodulation distortion becomes more important [4]. Consider a more complicated input waveform placed into (3.8), say, a two-tone signal: v in (t ) = A cos ω1t + B cos ω 2t
(3.18)
where ω1 and ω2 are two arbitrary frequencies. Then (3.9) becomes v out = a 0 + a 1 ( A cos ω1t + B cos ω 2t ) + a 2 ( A cos ω1t + B cos ω 2t ) + 2
(3.19)
a 3 ( A cos ω1t + B cos ω 2t ) + K 3
The following sections will expand the various terms in (3.19). 3.6.1
Second-Order Intermodulation Distortion
In this case of a two-tone sinusoid, the a0 and a1 terms are straightforward. Because the expansion of the higher order terms of (3.19) becomes quite lengthy, each of the components will be treated separately, then grouped by frequency and combined afterward. First, however, the following additional
Tests and Measurements II: Distortion
43
trigonometric identity is needed to obtain individual single-frequency cosine functions: cos α cos β =
1 [cos( α + β) + cos( α − β )] 2
(3.20)
The second-order term of (3.19) is expanded as follows: v in2 (t ) = ( A cos ω1t + B cos ω 2t )
2
v in2 (t ) = A 2 cos 2 ω1t + 2 AB cos ω1t cos ω 2t + B 2 cos 2 ω 2t
(3.21a) (3.21b)
Using trigonometric identities to restate the frequencies as multiples of ω1, we obtain v in2 (t ) =
A2 (1 + cos 2 ω1t ) + AB cos( ω1 + ω 2 )t + 2 B2 AB cos( ω1 − ω 2 )t + (1 + cos 2 ω 2t ) 2
(3.21c)
The result in (3.21c) describes both harmonic and intermodulation distortion. When expanded, it contains single-frequency terms (harmonic distortion) and terms with multiple frequencies (intermodulation distortion). Intermodulation distortion is the nonlinear product caused by application of multiple input frequencies to a device interacting with each other. It has a more pronounced effect at elevated power levels. As with harmonic distortion, intermodulation distortion occurs at different output frequencies than those put into the device. In communication systems the end result is that intermodulation distortion from signals in one channel can cause interference in other channels. Characterizing intermodulation distortion becomes more important as channels become more tightly spaced within the frequency spectrum. Note that with second-order intermodulation distortion there are four distortion products, at the following frequencies: 2ω1, 2ω2, ω1 − ω2, ω1 + ω2. The term second order comes from the fact that there are four combinations of the coefficients of ω1 and ω2 that, when added, give the value of two. Figure 3.2(a) shows second-order distortion products resulting from the application of two tones to a device. Traditionally, second-order intermodulation products have been of little concern for wireless communications
Advanced Production Testing of RF, SoC, and SiP Devices
Amplitude
44
DUT
f1−f2 f2−f1
f1 f2
…
f1 f 2
2f1 2f2
Amplitude
Frequency (a)
DUT
f1 f2
3f1−2f2 f1 f2 3f1−2f2 2f1 2f2 2f1−f2 2f1−f2 Frequency (b)
… 3f1
3f2
Figure 3.2 Intermodulation distortion products due to a two-tone input, f1 and f2, to a DUT [5]: (a) second-order products and (b) third-order products.
devices because of their architecture. The superheterodyne architecture (see Chapter 1) that has been used from the beginning of wireless communications devices converts the frequencies that are input to lower frequencies, but far from dc, thereby never having to worry about the second-order product interference. More recently, the homodyne, or ZIF, architecture has eliminated the intermediate frequency, converting the received RF signals to near-dc frequencies. This means that closely spaced frequencies at RF will be closely spaced after they are converted via a homodyne receiver. This holds for second-order products, as well as any other higher, even-ordered intermodulation products, although for most communications devices, orders higher than two are relatively insignificant. 3.6.2 Third-Order Intermodulation Distortion
The third-order term of (3.19) is expanded as follows: v in3 (t ) = ( A cos ω1t + B cos ω 2t ) v in3 (t ) = ( A cos ω1t + B cos ω 2t )
3
( A 2 cos 2 ω1t + 2 AB cos ω1t cos ω 2t + B 2 cos 2 ω 2t )
(3.22a)
(3.22b)
Tests and Measurements II: Distortion
v in3 (t ) =
3 A 2B A3 (3 cos ω1t + cos 3 ω1t ) + 4 2 (1 + cos 2 ω1t ) cos ω 2t + 3 AB 2 (1 + cos 2 ω 2t ) cos ω1t + 2 B3 (3 cos ω 2t + cos 3 ω 2t ) 4
45
(3.22c)
A3 v (t ) = [3 cos ω1t + cos 3 ω1t ] + 4 B3 [3 cos ω 2t + cos 3 ω 2t ] + 4 (3.22d) 3 A 2B 1 cos ω 2t + (cos( 2 ω1 + ω 2 )t + cos( 2 ω1 − ω 2 )t ) + 2 2 3 in
3 AB 2 2
1 cos ω1t + 2 (cos( 2 ω 2 + ω1 )t + cos( 2 ω 2 − ω1 )t )
3 A 3 3 AB 2 3A3 + v in3 (t ) = cos 3 ω1t + cos ω1t + 2 4 4 3 A 2 B 3B 3 3B 3 + cos 3 ω 2t + cos ω 2t + 4 4 2 2
(3.22e)
2
3A B 3A B cos( 2 ω1 + ω 2 )t + cos( 2 ω1 − ω 2 )t + 4 4 3 AB 2 3 AB 2 cos( 2 ω 2 + ω1 )t + cos( 2 ω 2 − ω1 )t 4 4 Figure 3.2(b) graphically shows the distortion products of (3.22e) arising from two fundamental signals being put into a device. There are six third-order distortion products, as shown. A few of these products are far from the fundamental (desired) frequencies and it is common practice to design filtering into a device to remove these products. Two terms of (3.22e), 2ω1−ω2 and 2ω2−ω1, however, are very close to the fundamental input frequencies. It is these two terms that have traditionally (in heterodyne device architectures) been the most troublesome and therefore tested exhaustively for. Specifically, the third-order products occur at the following frequencies: 3ω1, 3ω2, 2ω1 + ω2, 2ω2 + ω1, 2ω1 − ω2, and 2ω2 − ω1 (notice the six terms). Note that the two products that are only dependent on a single frequency (only
46
Advanced Production Testing of RF, SoC, and SiP Devices
ω1 or only ω2) are third-order harmonic distortion products. The first four terms are again relatively far away from the fundamental frequencies ω1 and ω2, so they are often outside of the normal frequency response of the device or can easily be filtered. 3.6.3
Higher-Order Intermodulation Distortion Products
Second- and third-order intermodulation distortion products have been discussed. These are the most prevalent types tested for in communications front ends. Although the products may be small, there are an infinite number of intermodulation distortion products. Sometimes intermodulation distortion products having orders greater than three may be of interest. This is true mainly for high-power applications such as baseband power transmitter devices. Another possibility for which higher order intermodulation products may be of concern is that as device performance is moved to lower signal levels through lower device noise floors, the higher order intermodulation distortion products may become visible and impact the low-level signals. As with the even products, the higher-order odd products follow the same behavior as third-order products, making troublesome interference in many device architectures. A term called spectral regrowth is sometimes used to describe intermodulation distortion [6]. 3.6.4
Example of Harmonic and Intermodulation Distortion Products
From (3.21c), it can be stated that for any two-tone input waveform, vin(t) = A cos ω1t + B cos ω2t, the output can be written in terms of all harmonics of the form mω1+nω2, where both m and n are positive and negative integers. The “order” of the distortion products can then be defined by order = m + n
(3.23)
To demonstrate the impact of harmonic and intermodulation distortion, an example is provided in which two test tones (fundamentals), f1 = 100 MHz and f2 = 101 MHz are input to a DUT. (These are just two arbitrarily chosen, close-spaced frequencies, and could have been in any frequency range, that is, 1 MHz, 1 GHz.) The same methodology follows for all frequencies. Table 3.1 summarizes all of the harmonic and intermodulation distortion products that have been discussed. Table 3.2 shows the distortion products that arise due to the chosen test frequencies discussed earlier. Depending on the type of architecture of the device, this table shows how the various types of distortion impact it.
Tests and Measurements II: Distortion
47
Table 3.1 Two-Tone Harmonic and Intermodulation Distortion Products and Their Locations
Number of Distortion Products Order
Distortion Product Frequencies (Relative to Fundamental Two-Tone Input, f1 and f2)
Total Harmonic Intermodulation Harmonic Intermodulation
2
4
2
2
2f1, 2f2
f 1 + f 2, f 2 – f 1
3
6
2
4
3f1, 3f2
2f1 ± f2, 2f2 ± f1
4
8
2
6
4f1, 4f2
2f1 ± 2f2, 2f2 – 2f1, 3f1 ± f2, 3f2 ± f1
5
10
2
8
5f1, 5f2
3f1 ± 2f2, 3f2 ± 2f1, 4f1 ± f2, 4f2 ± f1
6
12
2
10
6f1, 6f2
3f1 ± 3f2, 3f2 – 3f1, 5f1 ± f2, 5f2 ± f1, 4f1 ± 2f2, 4f2 ± 2f1
7
14
2
12
7f1, 7f2
4f1 ± 3f2, 4f2 ± 3f1, 5f1 ± 2f2, 5f2 ± 2f1, 6f1 ± f2, 6f2 ± f1
N
2N
2
2N – 2
Nf1, Nf2
Table 3.2 Location of Distortion Products for Input Tones of 100 and 101 MHz Frequency of Distortion Product (MHz) Order Harmonic
Intermodulation
2
200, 202
1, 201
3
300, 303
99, 102, 301, 302
4
400, 404
2, 199, 203, 401, 402, 403
5
500, 505
98, 103, 299, 304, 501, 502, 503, 504
6
600, 606
3, 198, 204, 399, 400, 405, 601, 603, 604, 605
7
700, 707
97, 104, 298, 305, 499, 506, 701, 702, 703, 704, 705, 706
Harmonic distortion products are found at much higher frequencies and, as discussed earlier, are mostly only of concern to neighboring channels or frequency bands. If necessary, filtering can remove their presence. Intermodulation distortion products affect devices quite differently. In the case of heterodyne transceiver architectures, the odd-order intermodulation products are of concern. In the case of more recent usage of homodyne (ZIF) architectures, the even-order products are of more concern because they arise in the baseband signals (near dc) where filtering is not often possible for the intended signal would be filtered out.
48
3.6.5
Advanced Production Testing of RF, SoC, and SiP Devices
Intermodulation Distortion Products of a ZIF Receiver
Consider a two-tone test signal applied to the input of an RF-to-baseband front-end ZIF receiver. In this DUT, the input RF signal is downconverted directly to a baseband analog signal. In the case of second-order intermodulation distortion, Table 3.1 can be used to show that a second-order product exists at f 1MD2 = f tone1 − f tone2
(3.24)
For the calculations showing the impact of third-order intermodulation distortion products, it is now necessary to consider the LO frequency used in downconverting the signals. As an example, one of the third-order products falls at f 1MD3 = 2 f tone2 − f tone1 − f LO
(3.25)
This tone falls in the baseband region and could possibly cause interference with the desired operation in a multichannel environment, if not adequately characterized.
3.7 Measuring Intermodulation Distortion A figure of merit known as the intercept point has been established to describe and quantify intermodulation distortion. It is the point at which the intermodulation distortion product power level equals (intercepts) that of the fundamental. Almost always, the intercept point is beyond the linear operation of the device and, therefore, the intercept point is a fictitious point. The various intercept points are each related to the order of distortion being discussed. For example, the third-order intercept point quantifies third-order intermodulation distortion. The intercept point of a device cannot be measured directly, because it is typically at a very large power level. Instead, the measurement is performed at lower, typical operational power levels and extrapolated to determine the intercept point. The intercept point is always referenced to either the input or output power. This is discussed in Section 3.7.3. 3.7.1
The Intercept Point, Graphically
Figure 3.3 is a plot of the output power from a DUT versus the input power applied to it. The small-signal gain, second-order intercept point (IP2), and third-order intercept point (IP3) are shown on the graph.1 It is of fundamental 1. Traditional RF measurement theory often refers to the third-order intercept point with the abbreviation TOI.
Tests and Measurements II: Distortion
49
IP2
e =3) er (slop
3rd ord
orde r (s l 2nd
Lin
ea
r(
slo
pe
ope
=
=2)
1)
Ouput power (dBm)
IP3
Input power (dBm)
Figure 3.3 Output power versus input power, demonstrating the concept of intercept points.
importance is to observe that, in Figure 3.3, the slope of the small-signal gain is 1. The slope of the second-order intermodulation distortion product power level is 2, and that of the third-order product is 3. This means that with a 1-dB reduction of the input power, the fundamental tone will reduce by 1 dB, whereas the third-order product power will reduce by 3 dB. (The converse is also true.) Notice that it is physically impossible to measure either of the intercept points directly. As the input power is increased toward either fictitious intercept point, the DUT becomes nonlinear. The output signal starts clipping, and the extra energy is diverted into the higher order harmonics. The linear portion of the small-signal gain line must be extended to find the crossing point of the second- and third-order products. Notice that the IP3 point intercepts the linear curve before the IP2 point. The graph highlights that a high IP3 number is desired. The higher the IP3 number, the less distortion the device exhibits under normal operating power levels. 3.7.2
The General Intercept Point Calculation
In a general sense, for any order of intermodulation distortion product, the intercept point (dBm) is calculated by measuring the power levels of the output of a DUT resulting from the application of a two-tone signal. There are many variations of the calculation, but they are all interrelated, as described in next section. One such calculation is
50
Advanced Production Testing of RF, SoC, and SiP Devices
IPN = P Fundamental,Output +
(P
Fundamental,Output
− P IMDN
N −1
)
(3.26)
where PFundamental,Output is the power (dBm) of either of the two input tones, N is the order of the distortion product, and PIMDN is the power level (dBm) of the distortion product as measured at the respective frequency at the output of the DUT. Sometimes the value in parentheses in (3.26) is represented as a single variable. Whichever way it is represented, it is simply a difference in power, having units of dBc (dB below “carrier”). Assuming that the frequency response of the device is flat across the frequency spacing of the two tones, the output power of either of the two tones could be used as the value for PFundamental,Output. Extending (3.26), the equation for calculating the second-order intercept point is IP2 = P Fundamental,Output + (P Fundamental,Output − P IMD2
)
(3.27)
and for the third-order intercept point is IP3 = P Fundamental,Output +
(P
Fundamental,Output
2
− P IMD3
)
(3.28)
Keep in mind that the title of this section uses the term calculation. This is done because the intercept point is an indirect measurement where the intermodulation distortion product power level is what is measured, then the intercept point is calculated from that value. Note that PFundamental,Output is the power level of one of the output tones. This assumes that the two tones have power levels equal to each other at the output of the DUT. Often this is not the case. To handle this situation of uneven output power levels of the two tones, the average power between the two tones at the output can be used. Alternatively, both can be used to arrive at two different values of IP3 and then the lower, or worse, value of IP3 is reported.
3.7.3
Input- and Output-Referencing of Intercept Points
Equations (3.26) to (3.28) used the output power levels of the device as the reference point of their calculation. This is the common approach to this calculation. When done in this fashion, the intermodulation products are termed output-referenced. For completeness, these equations can be rewritten as
Tests and Measurements II: Distortion
51
OIP2 = PFundamental,Output + (PFundamental,Output − PIMD2)   (3.29)
and
OIP3 = PFundamental,Output + (PFundamental,Output − PIMD3)/2   (3.30)
The only difference between these two sets of equations is the name on the left-hand side of the equations. The standard convention (which is almost always the one that is used) is that the intercept point (regardless of order) for any intermodulation distortion measurements on the transmitter/upconversion chain of a DUT (such as a power amplifier) is output referenced, and for any receiver/downconversion chain of a DUT (such as an LNA) measurements, it is input referenced. The interesting thing is that for a given intercept point measurement, the only difference between input referencing and output referencing is the small-signal gain of the DUT. In practice, the gain can simply be measured, or the equations can be rearranged. In this case gain is represented as
G = PFundamental,Output − PFundamental,Input   (3.31)
where G is the gain of the DUT (dB) and PFundamental,Input is, again, the power (dBm) of either of the two tones, but now as applied to the input of the DUT. Thus,
IIP2 = PFundamental,Output + (PFundamental,Output − PIMD2) − G   (3.32)
or
IIP2 = PFundamental,Input + (PFundamental,Output − PIMD2)   (3.33)
and
IIP3 = PFundamental,Output + (PFundamental,Output − PIMD3)/2 − G   (3.34)
or
IIP3 = PFundamental,Input + (PFundamental,Output − PIMD3)/2   (3.35)
where IIP indicates input referencing. It must be emphasized that the gain value used in (3.32) and (3.34) has to be the small-signal gain (measured during linear mode of operation of the DUT). If this gain is measured when the device is in compression and used in the calculation of the intercept point, then the intercept points will mistakenly be reported to be worse than their actual values.
3.7.4 Example: Calculating the IP3 of an RF LNA
Consider an RF low-noise amplifier that has had its gain measured as 20 dB. The first step in the distortion measurement is to apply a two-tone signal to the input of the DUT. Consider the two tones to be 2,140.10 and 2,140.30 MHz, with both having a power level of –30 dBm. Using the equations for determining the frequencies of second-order intermodulation distortion products in Table 3.1, these input tones will generate a product at 200 kHz, which is very far away from the operational capability of this device, so there is no need to measure IP2. Again, using Table 3.1, a third-order intermodulation distortion product falls at 2,140.50 MHz, which is in the operational bandwidth of the LNA and must be measured. The next step is to measure the power of the third-order intermodulation product, PIMD3. In this example, accept that it was measured to be –84 dBm. Because this is an LNA, it would be most appropriate to represent the result as input referenced. Using (3.35), the result is
IIP3 = −30 + (−10 − (−84))/2 = +7 dBm   (3.36)
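As a quick numerical cross-check of this example, the sketch below evaluates (3.35) directly; the helper name is illustrative and not part of the text.

```python
def iip3_dbm(p_fund_in_dbm, p_fund_out_dbm, p_imd3_dbm):
    """Input-referenced third-order intercept point per (3.35)."""
    return p_fund_in_dbm + (p_fund_out_dbm - p_imd3_dbm) / 2.0

# Two -30-dBm input tones, 20 dB of small-signal gain (so -10-dBm output
# tones), and a measured third-order product of -84 dBm:
print(iip3_dbm(-30.0, -10.0, -84.0))   # 7.0 dBm, matching (3.36)
```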
3.8 Source Intermodulation Distortion
The residual intermodulation distortion that is due to the hardware involved in sourcing the two-tone signal (sources, tone combiner circuitry, and so forth) is called source intermodulation distortion (SIMD). Most often, any SIMD contribution comes from poor isolation between the two sources that supply the input tones. It is important to be aware of the amount of this contributed distortion from the measurement setup. To measure the SIMD, simply remove the DUT and connect the two-tone source directly to the measurement equipment and
measure the power at the frequencies where the intermodulation distortion products are expected [7]. One would think that this is only of concern in LNA and PA testing where the output of the device is in the same frequency range as the input signals. However, keep in mind that for a frequency-translating device like a front-end receiver, any SIMD will also be downconverted. In many intermodulation distortion measurement setups where multiple power levels need to be applied to the DUT (as in characterizing power-out versus power-in to establish the nonlinear characteristics), attenuators are often used between the two-tone output and the DUT. The reason for this is so that the sources can stay at a constant power level and, hence, a constant value of SIMD. The power levels can be adjusted simply by adjusting the attenuators. This eliminates changes in the source settings and eliminates the possibility of the SIMD changing. The contribution of error due to SIMD can be calculated from the following formula [7]:
error = 20 log10(1 ± 10^((SIMD − MIMD)/20))   (3.37)
where SIMD and MIMD are the relative (dBc) values of intermodulation distortion products at the expected product frequencies for the source and measurement (DUT), respectively. Using this equation, an error of approximately ±0.3 dB leads to the rule of thumb that the SIMD should be at least 30 dB below that of the expected DUT IMD. In practice, a SIMD of more than 40 dB below that to be measured is better.
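As an illustration of (3.37), the following sketch (the names and the example margins are assumptions for illustration only) computes the worst-case error bounds for a given SIMD-to-MIMD margin.

```python
import math

def simd_error_bounds_db(simd_dbc, mimd_dbc):
    """Worst-case measurement error (dB) from (3.37).

    simd_dbc, mimd_dbc -- source and DUT intermodulation products relative to
    the carrier (dBc) at the expected product frequency. Valid only when the
    source product lies below the DUT product (SIMD < MIMD).
    """
    ratio = 10.0 ** ((simd_dbc - mimd_dbc) / 20.0)
    return (20.0 * math.log10(1.0 - ratio),   # products partially cancel
            20.0 * math.log10(1.0 + ratio))   # products add

# A 30-dB margin gives roughly -0.28/+0.27 dB of error; a 40-dB margin
# shrinks it to about +/-0.09 dB.
print(simd_error_bounds_db(-80.0, -50.0))
print(simd_error_bounds_db(-90.0, -50.0))
```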
3.9 Cross Modulation
Cross modulation, sometimes called XMOD, is a type of distortion caused by the intermodulation/interaction between more than two tones in the same operational bandwidth. Historically, this measurement was not too common except in cable television devices, such as line amplifiers where up to hundreds of simultaneous signals are transmitted across the same wide bandwidth of operation (which can be greater than 1 GHz). Recent multicarrier digital modulation formats, such as orthogonal frequency division multiplexing (OFDM) for WiMAX or WLAN, use multiple carriers within the same bandwidth. This makes them susceptible to the effects of cross-modulation distortion products. Reference [8] provides an in-depth analysis of how cross modulation impacts the performance of CDMA receivers based on transmitter leakage through duplexers into the LNA front end.
The measurement of cross modulation is performed by turning on all tones/carriers except one, and then measuring the power at the frequency of the carrier that is not turned on. Any power at this frequency is due to cross modulation between all other carriers.
3.10 Gain Compression
Figure 3.4 Output power versus input power, demonstrating the concept of gain compression.
The a1 term of (3.9) is a linear term corresponding to the gain of a DUT. Under lower power level (i.e., small-signal) conditions, the output of the DUT is related to the input by the proportionality factor, or gain a1. As the power level is increased, a distortion mechanism termed gain compression can come into play where the output begins to saturate, no longer following the linear gain. If only the a1 (linear) term of (3.9) is considered and it is converted to logarithmic scale and plotted as in Figure 3.4, the slope of the trace is unity. This plot, however, is that of a real device, which does not follow (3.9) at higher input power levels. At some point, the output power deviates from the unity sloped curve, moving to a saturation region (the dashed line shows the extrapolation of the linear trace). The measure of saturation, which is sometimes called first-order distortion, is gain compression and it is described by the standardized
measure called the 1-dB compression point, or P1dB. While mixers exhibit compression, the measurement was traditionally most often made on amplifiers; hence, the term "gain" compression. Wireless devices must operate over a wide dynamic range. The upper bound of the dynamic range is often specified with the 1-dB compression point. P1dB can be referenced to the input power level or the output power level (the projections onto the input or output axes of Figure 3.4). These are termed input referred and output referred, respectively. The P1dB of receivers are usually input referred, and the P1dB of transmitters are usually output referred. The equation describing the gain, in dB, at the 1-dB compression point is
G1dB = G0 − 1   (3.38)
where G0 is small-signal gain. The output power can be rewritten in terms of the compression as follows:
P1dB(output) − P1dB(input) = G1dB = G0 − 1   (3.39)
Given (3.39), the 1-dB compression point can be found by measuring the difference in the output power minus the input power. When that difference is 1 dB less than the small-signal gain, the 1-dB compression point has been determined. For production testing, test time must be considered. The 1-dB compression point can be found using a brute-force approach whereby the input power starts at a low level and is linearly swept upward in small steps, until the 1-dB compression point is found. A much more efficient method is the one in which, first, the gain is measured at a power level where the DUT is known to be linear. Then, a binary search routine is used to vary the input power to find the 1-dB compression point within some stated resolution. A variation on gain compression that is often used in production testing of wireless SoC devices is to operate the DUT at the P1dB point and then to perform another type of measurement. An example, using a Bluetooth device, is to overdrive the receiver to the 1-dB compression point and perform a bit error rate (BER) test under this condition to ensure integrity. To show that the input-referred and output-referred compression points are related, consider a DUT with nominal small-signal gain of 28 dB that has had the input-referred P1dB point determined to be –19 dBm. Rearranging (3.39),
P1dB(output) = P1dB(input) + G1dB = −19 + 28 − 1 = 8 dBm   (3.40)
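The binary-search approach described above can be sketched as follows. The measure_pout_dbm callable stands in for whatever tester routine returns DUT output power for a requested input power, and the soft-limiting amplifier model exists only so the example runs standalone; neither is from the text, and the search limits are assumed values.

```python
import math

def find_p1db_input_dbm(measure_pout_dbm, g0_db, p_lo_dbm=-40.0,
                        p_hi_dbm=0.0, resolution_db=0.1):
    """Binary search for the input-referred 1-dB compression point.

    Assumes the DUT is still linear at p_lo_dbm and at least 1 dB compressed
    at p_hi_dbm; g0_db is the measured small-signal gain.
    """
    lo, hi = p_lo_dbm, p_hi_dbm
    while hi - lo > resolution_db:
        mid = (lo + hi) / 2.0
        gain_db = measure_pout_dbm(mid) - mid   # gain at this drive level
        if gain_db > g0_db - 1.0:               # less than 1 dB compressed
            lo = mid
        else:                                   # 1 dB or more compressed
            hi = mid
    return (lo + hi) / 2.0

def model_pout_dbm(p_in_dbm, g0_db=28.0, p_sat_dbm=9.0):
    """Toy soft-limiting amplifier used only to exercise the search."""
    p_lin = p_in_dbm + g0_db
    return p_lin - 10.0 * math.log10(1.0 + 10.0 ** ((p_lin - p_sat_dbm) / 10.0))

# Roughly -24.9 dBm for this model (output-referred P1dB of about +2.1 dBm).
print(round(find_p1db_input_dbm(model_pout_dbm, 28.0), 1))
```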
3.10.1 Conversion Compression in Frequency-Translating Devices
A mixer, although considered a nonlinear device, has the same compression behavior. The only difference is that the input and output of the mixer, taken to be RF and IF, respectively (for example), are at different frequencies. The same algorithms apply, using power measurements at the RF and IF ports of the DUT. As RF input power is increased, IF output power increases. However, at some power level, the IF output power begins to increase at a lesser rate than the RF input power, and eventually the IF power level deviates from its linearly expected value by 1 dB. This point is the conversion compression point.
3.11 Minimizing the Number of Averages in Distortion Measurements
Many distortion measurements involve measuring low-level signals and comparing them to a high-level (e.g., carrier) signal. A common mistake in production measurements is to set up the entire measurement hardware for the needs of the low-level signal acquisition. Consider that a low-level distortion signal such as the third-order product can, at times, be very near to the measuring equipment's noise floor. Often, the inclination is to set up the entire measurement to accurately acquire the low-level signal. This can require multiple averages and oversampling. If N is taken to be the number of averages, then the test time can be increased, linearly, up to N times the single acquisition test time. For the low-level signal, it could be necessary to do this. However, doing this for the acquisition of the high-level signal, which is significantly above the measurement noise floor, leads to wasted test time and, ultimately, increased cost.
References
[1] Oliver, B., "Distortion and Intermodulation," Hewlett-Packard Application Note 15.
[2] Pozar, D. M., Microwave Engineering, Reading, MA: Addison-Wesley, 1990.
[3] Schaub, K., and J. Kelly, Production Testing of RF and System-on-a-Chip Devices for Wireless Communications, Norwood, MA: Artech House, 2004.
[4] "Theory of Intermodulation Distortion Measurement (IMD)," Maury Microwave Application Note 5C-043, 1999.
[5] Texas Instruments, "Understanding and Enhancing Sensitivity in Receivers for Wireless Applications," Technical Brief SWRA030, 1999.
[6] Bain, D., "RF Distortion: Reducing IM Distortion in CDMA Cellular Telephones," RF Design, December 1996, pp. 46–53.
[7] Barkley, K., "Two-Tone IMD Measurement Techniques," RF Design, June 2001, pp. 36–52.
[8] Ko, B., et al., "A Nightmare for CDMA RF Receiver: The Cross Modulation," Proc. First IEEE Asia Pacific Conf. on ASICs, August 23–25, 1999, pp. 400–402.
4 Tests and Measurements III: Noise
4.1 Introduction to Noise
Noise is unwanted fluctuations superimposed on a desired signal. Noise determines the accuracy and repeatability with which we can measure the signal. During the past few years, improvements in the performance of wireless communications systems have led to the need for tighter specification limits on noise and, thus, a better understanding of it. Noise is an unfortunate entity that will always be present when performing measurements; for example, an amplifier's output power level is dominated by the noise of the amplifier at very low input power levels [1]. Typically, noise is associated with being undesirable, and that is the case when noise interferes with a particular parameter that one is attempting to measure, such as a current or voltage signal. In this case noise disrupts the accuracy of the measurement. However, when working with very low-level signals in wireless communications, the need to measure noise levels makes understanding noise desirable. Noise figure and phase noise are two parameters of wireless and SoC devices that warrant an understanding of the behavior of noise.
4.1.1 Power Spectral Density
Noise, being a random process, is characterized as nondeterministic. As a result, when analyzing noise in either the time or frequency domain, statistical approaches must be used. At RF frequencies the analysis is best accomplished using frequency-domain analysis; hence, this discussion will focus on that. The seemingly obvious approach to characterizing noise in the frequency domain is to simply take the Fourier transform of the noise signal. However,
this is not possible because the random noise waveform cannot be defined as a simple exact time-domain function. To solve this problem, the power spectral density (PSD) is introduced as
Sx(f) = lim(T→∞) E[|XT(f)|²]/(2T)   (4.1)
where E is the expected value and XT(f) is the Fourier transform of a random noise waveform, x(t), evaluated over the time interval –T < t < T [2]. An alternative definition states the PSD as being the plot of power of a signal as a function of frequency as shown in Figure 4.1. The power within a certain frequency range is calculated as follows:
P(f1, f2) = ∫ from f1 to f2 of Sx(f) df   (4.2)
The total power in a signal is calculated by integrating over all frequencies:
Ptotal = ∫ from −∞ to ∞ of Sx(f) df = x²(t)   (4.3)
where x²(t) denotes the mean-square value of the voltage or current signal x(t), and Ptotal is stated as the power across a 1-Ω resistor. As a result of this definition, the units of PSD are volts²/Hz (or more commonly, dBm/Hz when discussing RF frequencies, specifically), which makes specifying the bandwidth a necessity when stating the power of a noise waveform. PSD will be used in the following sections to describe the characteristics of different types of noise. The concepts of PSD are also used when discussing noise figure and phase noise measurements.
Figure 4.1 Power spectral density for a Gaussian distributed signal.
4.1.2 Types of Noise
Noise can arise for many reasons. However, within the context of making electronic measurements, noise can be grouped into two types, fundamental and nonfundamental. Fundamental noise consists of that known as "white" noise, thermal noise, shot noise, quantization noise, and 1/f noise. Additionally, in test and measurement systems, nonfundamental noise can arise from electromagnetic coupling, cooling-induced current flow in semiconductors, ground loops due to differing potential reference points, or oscillations in amplifiers. The principal difference between these two noise types is that nonfundamental noise can be reduced or eliminated, but fundamental noise cannot. The figure of merit, noise figure, when measured at RF frequencies, encompasses mainly shot and thermal noise. Noise exists in many forms, but within the context of testing, the following are the dominant types; their relevance to testing is discussed briefly below:
• Thermal noise;
• Shot noise;
• 1/f noise;
• Quantization noise;
• Quantum noise;
• Plasma noise.
4.1.2.1 Thermal Noise
Thermal noise (Johnson noise) is broadband noise resulting from the random motion of electrons due to temperature. The kinetic energy of this random motion is proportional to temperature. This random motion of electrons (charge) produces a voltage across a resistance. It is usually the dominant fundamental noise found in circuits at room temperature. It was discovered by Johnson [3] and the mathematical description was derived by Nyquist [4]. As line a in Figure 4.2 shows, thermal noise is of the general class of "white" noise described by equal PSD per hertz and flat energy across the entire frequency spectrum. White noise gets its name from the analogy with white light, which also has equal power density across all frequencies in the optical band.
Figure 4.2 Power spectral density of various types of noise: (a) thermal noise, (b) shot noise, and (c) 1/f noise.
True white noise cannot exist, because by definition it would require infinite bandwidth, which would also imply infinite energy. A practical description of white noise considers the noise to have a flat power density over some finite bandwidth. Because the power density of white noise is flat, it is by definition independent of frequency. This means that white noise signal power, for a given bandwidth, does not vary, no matter what center frequency is chosen, across the entire frequency spectrum. Therefore, white noise in one part of the spectrum is uncorrelated with white noise in another part of the spectrum. A few fundamental equations describing thermal noise must be introduced at this point. These are the foundation for noise figure measurements to be discussed later. In 1928 Nyquist derived a formula to describe thermal noise:
v² = 4hfBR/[e^(hf/kT) − 1]   (4.4)
where v² is the mean-square open-circuit thermal noise voltage across a resistor, h is Planck's constant (6.626 × 10–34 J-sec), f is frequency (in hertz), k is Boltzmann's constant (1.38 × 10–23 J/K), T is absolute temperature (in kelvins), R is resistance (in ohms), and B is the bandwidth (in hertz) over which the noise is measured. The derivation of this equation involves extensive statistical thermodynamics and is beyond the scope of this book. Equation (4.4) is valid for any frequency; however, it is often tedious to work with. Considering that, at microwave frequencies, hf << kT, the first two terms of a Taylor series expansion can be substituted into (4.4) as
e^(hf/kT) − 1 ≈ hf/kT   (4.5)
Substituting (4.5) into (4.4) leads to
v² = 4kTBR   (4.6)
which is no longer valid over the entire frequency spectrum, due to the approximation of (4.5). However, for most microwave/RF work, the approximation and (4.6) are valid and a lot easier to work with. As a worst-case example, consider the case where f = 100 GHz and T = 100K. In this case, hf (6.6 × 10–23) is still roughly 20 times less than kT (1.4 × 10–21). Almost all RF noise calculations are based on (4.6), which is called the Rayleigh-Jeans approximation and is valid unless very high frequencies or very low temperatures are used. Consider a noise resistor delivering some noise power, Pn, to a load resistor of equal resistance (for maximum power transfer) as shown in Figure 4.3. Using the voltage in (4.6), the noise power, in bandwidth B, delivered to the load resistor is calculated as follows:
Pn = i²R   (4.7a)
Pn = [v/(2R)]²R   (4.7b)
Pn = kTB   (4.7c)
Solving (4.7c) for Pn at room temperature (typically accepted to be T = 290K) gives Pn = 400.2 × 10–23 W in a 1-Hz bandwidth. Placing this into more useful units gives Pn = –174 dBm in a 1-Hz bandwidth, or Pn = –174 dBm/Hz.
Figure 4.3 Lumped-element noisy resistor circuit.
This is theoretically the lowest possible noise level of any system at room temperature because this value is based solely on kinetic energy due to thermal agitation of the molecules that make up matter. Note also that (4.7c) is completely independent of frequency. Thermal noise power is dependent only on temperature and bandwidth.
4.1.2.2 Shot Noise
Shot noise, also known as Schottky noise (because Schottky described it mathematically in 1928 [5]), is noise due to random fluctuations of charge carriers across a potential barrier in electronic devices. Typically electrical current charge carriers are electrons, which can be considered moving in a flow on a microscopic level. Because electrical current flow can be considered to be comprised of discrete particles, there is some random fluctuation in their movement through an electronic device. The power spectral density of shot noise is approximately broadband and flat as shown in Figure 4.2(b). The word approximately is used because there is a roll off of this type of noise at approximately 10^15 Hz because the charge carriers have a finite travel time within the device. For the frequencies of interest in RF and SoC testing (<20 GHz) the roll off is negligible.
4.1.2.3 1/f Noise
The 1/f noise (flicker noise, "pink" noise) is found in many electronic devices and its origin is not fully understood. It was discovered early after the invention of the transistor when scientists were trying to eliminate noise from audio transistors. It is believed that 1/f noise arises from inherent defects in the substrates of semiconductors and unfortunately it cannot be eliminated. Some devices have exhibited 1/f noise down to frequencies much less than 1 Hz, where the noise merges with the natural drift of the device. Flicker noise measurements at these low frequencies are very difficult because of the long measurement times required. Flicker noise has a power spectral density that varies as the inverse of frequency:
Pflicker ∝ 1/f   (4.8)
In Figure 4.2(c), it is apparent that the contributions of 1/f noise are most significant at low frequency, and its effects become negligible at high frequency. Unfortunately, based on this definition, the noise power density goes to infinity as frequency goes to zero. For that reason, this definition is not valid at dc, nor have any experimental observations traced 1/f noise to such low frequencies. A practical description would consider 1/f noise to have linear power density over
a given bandwidth for lower frequencies. As frequencies become higher, flicker noise levels off to a flat power spectral density like that of thermal or white noise.
4.1.2.4 Quantization Noise
Quantization noise is noise arising from the difference between the true analog value and the quantized versions of a signal in analog-to-digital converters, which are often used in test equipment. This noise is dominant in systems that have a limited dynamic range. The effects of quantization noise can be minimized or made negligible by careful design of the test system.
4.1.2.5 Quantum Noise
Quantum noise is broadband noise that results due to the quantized nature of charge carriers. It is typically only seen in systems that are either cold (near 0K) or operating at very high bandwidth (greater than 10^15 Hz). Therefore, it is of little concern in production testing.
4.1.2.6 Plasma Noise
Plasma noise is noise that results from the random motion of charges in an ionized gas. Gases are those such as plasma or ionospheric gases. Ionization can occur in production test equipment. For example, a sparking electrical contact could create locally ionized air (plasma). A likely place that this could occur is at the contactor when testing power amplifiers. However, in general, plasma noise is not of concern when performing production tests.
4.1.3 Noise Floor
The concept of the noise floor is important to understand. It is important to know the noise floor of the test equipment with which noise measurements are being made. The noise floor is the level of power below which a desired signal cannot be detected. At this minimum power level, a desired signal is said to “fall through the noise floor.” Figure 4.4 shows two signals. One of the signals is above the noise floor and one is below. If one were to use the peak hold function of a spectrum analyzer, the trace would look like the solid line and no evidence of the desired signal would even be seen. This is why understanding the noise floor of the measurement instrumentation is critical. In practice, measuring a device with high gain, for example, 20 dB (when using a spectrum analyzer to perform noise measurements), would be no problem because the gain of the device would allow the noise signal to rise through the noise floor of a spectrum analyzer. However, for devices with low gain, additional components have to be added to the noise measurement setup. Typically, a low-noise, high-gain amplifier is added in the path between the DUT and the test equipment.
Figure 4.4 (a, b) DUT signal in relation to the noise floor of the tester.
Because noise is random, when measuring noise signals from a DUT, the internal noise of the test equipment must be lower than the noise being measured. With tuned receiver-based measurement equipment, reducing the resolution (or IF) bandwidth will reduce the amount of noise in the measurement system. When doing this, however, the time to perform the measurement increases, so as with many measurements, trade-offs must be made, especially at low power level measurements near the noise floor. When the noise power level of the DUT approaches the noise floor of the test equipment, errors will be introduced to the measurement. In a worst-case scenario, when the DUT noise power is equal to the noise floor power of the tester, the measurement will appear to be 3 dB above the noise floor, that is, 3 dB in error [2].
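The 3-dB worst case quoted above follows from adding the two uncorrelated noise powers in watts. The small sketch below (an assumed helper of ours, with arbitrary levels) shows how the error shrinks as the DUT noise rises above the tester's noise floor.

```python
import math

def measured_level_dbm(dut_noise_dbm, tester_floor_dbm):
    """Level read by the tester when DUT noise and tester noise add.

    Assumes the two contributions are uncorrelated, so powers add in watts.
    """
    total_mw = 10.0 ** (dut_noise_dbm / 10.0) + 10.0 ** (tester_floor_dbm / 10.0)
    return 10.0 * math.log10(total_mw)

floor_dbm = -90.0
# DUT noise equal to the floor reads about 3 dB high; 10 dB above the floor
# the error drops to roughly 0.4 dB.
print(round(measured_level_dbm(-90.0, floor_dbm) - (-90.0), 2))   # ~3.01
print(round(measured_level_dbm(-80.0, floor_dbm) - (-80.0), 2))   # ~0.41
```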
4.2 Noise Figure
4.2.1 Noise Figure Definition
Sensitivity (often synonymous with signal-to-noise ratio) in wireless device receiver front ends is extremely important, because it enables detection and resolution of the weak signals (levels at or below –90 dBm) commonly used in communications and personal/local networks. A term called noise factor, or F, has been defined to quantify the impact a device has on the signal-to-noise ratio:
F = (Si/Ni)/(So/No), evaluated at T = T0 = 290K   (4.9)
This equation states that noise factor is the ratio of input signal-to-noise ratio to output signal-to-noise ratio at T = T0, commonly accepted to be 290K (room temperature) [6]. In words, noise factor is the degradation of the signal-to-noise ratio at T0. It is well known, however, that the magnitude of degradation is difficult to measure directly. Figure 4.5 depicts (4.9) showing the input power level of an amplifier (DUT) and the increased noise at the output of the amplifier resulting in a decreased signal-to-noise ratio. Note that the signal power is higher at the amplifier’s output than that of the signal before entering the amplifier. However, because the amplifier adds noise (via the mechanisms described earlier in this chapter) the noise floor at the output is raised significantly. Thus, the signal-to-noise ratio at the output is less than that of the input. The figure of merit, noise figure (NF), is used more readily throughout the industry. Noise figure is simply noise factor in units of decibels: NF = 10 log ( F )
(4.10)
Figure 4.5 Signal-to-noise degradation of a signal passing through a semiconductor device.
Many engineers use these terms interchangeably, or incorrectly in speech, which is typically not a problem, because the correct understanding is inferred from the context of the discussion. However, inadvertently mixing these two terms up in calculations can have adverse effects. Keep in mind that a perfect DUT that adds no noise would have a noise factor of F = 1 and a noise figure of NF = 0 dB. Thus, the potential values for noise factor and noise figure are
F ≥ 1   (4.11)
and
0 ≤ NF < ∞   (4.12)
Building on (4.9), at T = 290K (the accepted temperature of usage of wireless devices) a direct correlation exists between receiver sensitivity and noise figure:
NF = 10 log[(Si/Ni)/(So/No)]   (4.13a)
NF = 10 log(Si/Ni) − 10 log(So/No)   (4.13b)
NF = ∆(S/N) dB   (4.13c)
In words, (4.13c) states that a 1-dB noise figure reduction provides a 1-dB increase of receiver sensitivity, and vice versa [6]. This should make it apparent why noise figure is such a critical parameter. Noise figure measurements inherently involve the characterization of low-level signals. This requires that extra attention be paid to the details of the test setup to make accurate measurements. However, for production testing, compromises often need to be made because problems such as impedance mismatch due to DUT-to-DUT impedance variations can arise. This requires the engineer to understand all of the facts that come into play. The most important item to consider is that the noise of the equipment performing the measurement must be significantly lower than the noise that is being measured.
4.2.2 Cascaded Noise Figure
In a receiver front end, the LNA is the most critical stage with respect to noise figure. From the Friis equation [7],
Fsys = F1 + (F2 − 1)/G1 + (F3 − 1)/(G1G2) + … + (Fn − 1)/(G1G2…Gn−1)   (4.14)
it can be seen that the noise figure performance of the first stage of a cascade is the most significant (the subscripts denote the stages). System designers try to design the first, or preamp, stage of a receiver such that the noise figure is low
and the gain is high. With the high gain, G1, the large value carries through in the denominator of each of the subsequent terms in (4.14), making their contribution to overall noise figure less significant, but not insignificant. Typically LNAs of wireless receivers have noise figures of 1.5 dB or better. One may argue that the LNA is not the first element of a receiver chain. It is often the antenna or a bandpass filter. That is true, and the antenna and filter do contribute to the overall noise figure. However, they are passive, lossy devices that offer no gain. The LNA is the first component in the chain that offers gain.
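A short Python sketch of (4.14) (the function name and the example line-up are illustrative assumptions, not from the text) shows how a high-gain first stage suppresses the later contributions.

```python
import math

def cascaded_noise_figure_db(stages):
    """Cascaded noise figure per the Friis equation (4.14).

    stages -- list of (nf_db, gain_db) tuples, first stage first.
    """
    f_total = 0.0
    g_product = 1.0
    for i, (nf_db, gain_db) in enumerate(stages):
        f = 10.0 ** (nf_db / 10.0)                  # noise factor, linear
        f_total = f if i == 0 else f_total + (f - 1.0) / g_product
        g_product *= 10.0 ** (gain_db / 10.0)       # running gain product G1*G2*...
    return 10.0 * math.log10(f_total)

# An assumed line-up: LNA (1.5 dB NF, 18 dB gain), mixer (8 dB NF, -7 dB
# conversion gain), IF amplifier (4 dB NF, 20 dB gain) -> about 2.1 dB.
print(round(cascaded_noise_figure_db([(1.5, 18.0), (8.0, -7.0), (4.0, 20.0)]), 2))
```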
4.2.3 Noise Power Density
Often noise is expressed in the form of noise power density, or power spectral density, with the units of dBm/Hz. It is therefore essential to understand this quantity and to understand how to derive noise power from it. Understanding that noise power is specified in a bandwidth is the key to understanding why this convention is used. While many engineers do not refer to it by its formal terminology, noise power density, it is commonly inferred.
4.2.4 Noise Sources
A noise source is a one-port device that provides a known amount of noise to a DUT so that noise figure can be calculated. The simplest (and traditional) noise source is a resistor held at a fixed temperature. The electrons within the resistor have random motion that provides kinetic energy proportional to temperature. The energy is translated into a random voltage signal having a zero average value, but a nonzero rms value given by (4.4). If more noise power than a temperature-stabilized resistor can provide is desired, active noise sources can be used. Typical active noise sources are gas-discharge tubes or avalanche diodes. Diodes are more common in a production test equipment environment. In its on, or hot, state the avalanche breakdown mechanism of the diode produces the noise power. The 346B style of diode noise source has been available for many years and is still predominantly used today. Newer designs, many of which are surface mounted, are evolving that take advantage of newer technology that allows for small-sized modules to be used on the production load board. As a rule of thumb, the noise power levels of the hot and cold states of a noise source must differ by at least 10 dB.
4.2.5 Noise Temperature and Effective Noise Temperature
Noise power is linear with temperature. It therefore stands to reason that temperature is used to characterize noise. Noise temperature is defined as the temperature at which a resistor would have to be placed to have the same available
noise power spectral density as the actual noise source. This definition is based on the calculation of thermal noise power in (4.7) as
Ta = Na/(kB)   (4.15)
where the subscript a denotes "available" and Na is the available noise power (in watts) of the actual source. A slightly different, but more useful value is effective noise temperature:
Tne = Ne/(kB)   (4.16)
where Ne is the emerging power under the assumption that the power spectral density is constant across the measurement bandwidth. Effective noise temperature is calculated from the power emerging from the noise source when it is terminated in a nonreflecting and nonemitting load. Effective noise temperature is related to noise temperature by
Tne = Ta(1 − |Γ|²)   (4.17)
where Γ is the reflection coefficient of the one-port noise source. Although these two quantities vary only in verbal definition, it is effective noise temperature, Tne, that is used in calculations surrounding noise figure in this chapter.
4.2.6 Excess Noise Ratio
Excess noise ratio (ENR) is a term used to describe the output of a noise source when it is used as an input stimulus to a circuit. The definition of ENR is as follows:
ENR = (noise power difference between hot and cold source)/(noise power at T0)   (4.18a)
ENR = k(Th − Tc)B/(kT0B)   (4.18b)
ENR = (Th − Tc)/T0   (4.18c)
where Th is the equivalent noise temperature of the noise source in the on, or hot, state (in kelvins), Tc is the equivalent noise temperature of the noise source in the cold state, and T0 is the reference temperature (assumed to be the standard 290K). Most often, in production testing of DUTs, Tc is simply T0, making (4.18c)
ENR = Th/T0 − 1   (4.19)
The preceding definitions provide ENR in linear units. In test equipment at RF frequencies it is more common to use logarithmic values; hence,
ENRdB = 10 log(ENR)   (4.20)
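As a small worked illustration of (4.18c) and (4.20), assuming the cold state sits at T0, an avalanche-diode source with a hot equivalent temperature near 9,460K corresponds to roughly 15 dB of ENR; the helper below is ours, not from the text.

```python
import math

T0_K = 290.0  # standard reference temperature

def enr_db(t_hot_k, t_cold_k=T0_K):
    """ENR in dB from hot/cold equivalent noise temperatures, (4.18c) and (4.20)."""
    return 10.0 * math.log10((t_hot_k - t_cold_k) / T0_K)

print(round(enr_db(9460.0), 2))   # ~15.0 dB
```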
The ENR for typical noise sources used in production and benchtop testing of wireless devices is about 15 to 20 dB. If possible, it is desirable to use a low-ENR noise source when the noise figure is low enough to be measured under those conditions. Because a low ENR indicates lower noise power levels, the tester, or measuring equipment, will require minimal dynamic range and be less likely to be operating in the nonlinear range. Additionally, a low-ENR noise source has a more constant impedance between its on and off states. This is because a low-ENR noise source is typically a high-ENR noise source with an attenuator. Noise sources are calibrated by the National Institute of Standards and Technology (NIST), so as long as the calibration is legitimate and up to date, a lot of additional steps are not required. Some noise figure measuring equipment requires manual entry of ENR calibration tables. It is best to utilize automated data transfer from computer to tester, if available for this task, to avoid measurement errors caused by typographical mistakes.
4.2.7 The Y-Factor
The Y-factor is a ratio of hot to cold noise powers (in watts) and is defined as
Y = Nh/Nc   (4.21)
If the noise source is at room temperature and the cold state is that of a noise diode simply turned off, then Tc = T0 and (4.21) becomes
Y = Nh/N0   (4.22)
Because the Y-factor is a ratio of the measurement of two power levels, absolute accuracy of the test equipment is not the most critical issue. Of more importance is that the measurement be repeatable so that whether the diode is on or off, the test equipment measures under the same conditions. The Y-factor is the foundation for most modern noise figure measurements and calculations.
4.2.8 Mathematically Calculating Noise Figure
Now that a few terms have been introduced, the noise figure of a device can be calculated. Having acquired the ENR value of the noise source (usually provided by the manufacturer) and having measured the Y-factor, the noise factor is simply calculated as
F = ENR/(Y − 1)   (4.23)
Note that the ENR value and Y in (4.23) must be in linear units. However, the ENR value of a noise source is almost always supplied in decibels. Therefore, a more convenient calculation is
NF = 10 log(F)   (4.24a)
NF = 10 log(ENR) − 10 log(Y − 1)   (4.24b)
NF = ENRdB − 10 log(Y − 1)   (4.24c)
Although, upon immediate inspection, it looks as if there is no temperature dependence in this calculation of noise figure, note the inherent dependence of ENR on equivalent noise temperature due to (4.18).
4.2.9 Measuring Noise Figure
The various parameters needed to calculate noise figure can be acquired in several ways, but typically only three are used in practice for testing wireless and SoC devices. Those are termed the direct method, the Y-factor method, and the vector-corrected cold noise method [8]. The direct method is the simplest to implement for production but it is limited to devices with high gain. The Y-factor method is also a relatively straightforward technique to implement for production testing and is the foundation for most noise figure meters and analyzers. The cold noise method is a vector-corrected (S-parameter-based) method that is
designed for production testing because of the mismatches between the DUT and the contactor, load board, and so forth. Each technique has its advantages and disadvantages and it is up to the test engineer to make a choice based on his or her particular needs. The three methods are discussed next.
4.2.10 Direct Measurement of the Noise Figure
If the DUT has a large amount of gain, such as an SoC receiver, it may be acceptable to directly measure the noise and calculate noise figure. The reason why the DUT has to have a large gain is so that the noise power of the DUT is more than the noise floor of the measurement equipment (see Section 4.1.3 on the noise floor). This is also referred to as the cold noise method. The noise figure can be calculated from [6]
F = N0/(kT0BG)   (4.25)
This method can be very convenient and is likely the fastest method available. In (4.25), N0 is the noise power over a specified bandwidth, and B is the measurement bandwidth. The gain G of the DUT must also be measured, and it is likely that this measurement has already been done during some other part of the DUT testing. Finally, kT0 is simply the value –174 dBm/Hz. Having acquired N0, B, and G, the noise figure can be calculated by placing (4.25) into logarithmic values:
NF = N0 [dBm] − (−174 dBm/Hz) − B [dB] − G [dB]   (4.26)
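A minimal sketch of (4.26) follows; the function name and the example numbers are illustrative assumptions, not values from the text.

```python
import math

def direct_noise_figure_db(n0_dbm, bandwidth_hz, gain_db):
    """Direct-method noise figure per (4.26).

    n0_dbm       -- noise power measured at the DUT output over bandwidth_hz
    bandwidth_hz -- measurement bandwidth (Hz)
    gain_db      -- DUT gain (dB), measured elsewhere in the test flow
    """
    kt0_dbm_per_hz = -174.0                      # thermal floor at 290K
    b_db = 10.0 * math.log10(bandwidth_hz)       # bandwidth relative to 1 Hz
    return n0_dbm - kt0_dbm_per_hz - b_db - gain_db

# A receiver with 80 dB of gain whose output noise is -31 dBm in 1 MHz:
# NF = -31 + 174 - 60 - 80 = 3 dB.
print(direct_noise_figure_db(-31.0, 1e6, 80.0))
```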
4.2.11 Measuring Noise Figure Using the Y-Factor Method
Figure 4.6 is a plot of output noise power (in watts) versus source temperature (in kelvins). This plot will be used when implementing the Y-factor method for determining the noise figure of any two-port device. In this description, noise factor will be calculated and then converted to noise figure at the end. When the noise powers of two significantly different noise sources (hot and cold) are plotted against their equivalent noise temperatures, a lot of information can be gathered. The slope of the line is
m = (N2 − N1)/(T2 − T1)   (4.27a)
or
m = ∆N/∆T   (4.27b)
Figure 4.6 Output noise power of a semiconductor device versus temperature.
The gain G (whether greater or less than 1) of a DUT linearly multiplies with the noise power of (4.7c) to lead to N = kGBT
(4.28)
Using (4.27b) and (4.28), the slope of the line in Figure 4.6 is m = kGB
(4.29)
Furthermore, the line segment made between the two points can be extrapolated to the y intercept, which will be termed Na, or the noise added by the DUT. Therefore noise factor can be calculated as in (4.23). Figure 4.7 shows a typical Y-factor measurement setup. Note that this can be viewed as a cascade of two stages, where the DUT is the first stage and the tester (receiver) is the second stage. Taking the first two terms of (4.14) and rearranging them,
F1 = F12 − (F2 − 1)/G1   (4.30)
Figure 4.7 The Y-factor measurement setup.
provides a means to calculate the noise factor of the DUT (F1). Additionally, F12 is the overall noise factor of the DUT and tester, F2 is the noise factor of just the tester, and G1 is the gain of the DUT. The first step is calibration, as shown in Figure 4.8(a). During this calibration step, the noise source is connected directly to the tester receiver.
Figure 4.8 The Y-factor measurement procedure: (a) calibration stage and (b) measurement stage.
After the two power levels are measured corresponding to the applied hot and cold noise sources, the Y-factor and noise factor for the tester are calculated as
Y2 = Nh2/Nc2   (4.31)
and
F2 = ENR/(Y2 − 1)   (4.32)
where ENR is the excess noise ratio of the noise source in linear units. Next, with the DUT inserted, as in Figure 4.8(b), the hot and cold power measurements are taken again to determine the Y-factor, noise factor, and gain of the DUT and tester combination as
Y12 = Nh12/Nc12   (4.33)
and
F12 = ENR/(Y12 − 1)   (4.34)
In both the calibration stage and the measurement stage, the measurement accuracy and repeatability can be improved by having the hot and cold noise sources repeatedly cycled on and off. Taking multiple measurements will allow for averaging to obtain a better value for the Y-factor. Also, when performing the hot and cold measurements, it is essential to ensure that the hot and cold power levels of the noise source are linear with respect to each other. The only remaining value to determine is G1, the gain of the DUT. That can be found from the noise power values that have already been measured:
G1 = (Nh12 − Nc12)/(Nh2 − Nc2)   (4.35)
After obtaining all of these values, they are inserted into (4.30) to solve for the DUT noise factor, F1, and thus, from (4.24a), noise figure is NF1 = 10 log ( F1 )
(4.36)
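The procedure of (4.30) through (4.36) reduces to a few lines of arithmetic. The sketch below is ours, and the power readings in the example are made up solely to exercise it; it is not a tester API.

```python
import math

def y_factor_noise_figure_db(enr_db, nh2_w, nc2_w, nh12_w, nc12_w):
    """DUT noise figure and gain from the Y-factor procedure, (4.30)-(4.36).

    enr_db         -- noise source ENR (dB)
    nh2_w, nc2_w   -- hot/cold powers (watts) from the calibration step
    nh12_w, nc12_w -- hot/cold powers (watts) with the DUT inserted
    """
    enr = 10.0 ** (enr_db / 10.0)
    y2 = nh2_w / nc2_w                        # (4.31)
    f2 = enr / (y2 - 1.0)                     # (4.32) tester noise factor
    y12 = nh12_w / nc12_w                     # (4.33)
    f12 = enr / (y12 - 1.0)                   # (4.34) DUT + tester
    g1 = (nh12_w - nc12_w) / (nh2_w - nc2_w)  # (4.35) DUT gain, linear
    f1 = f12 - (f2 - 1.0) / g1                # (4.30) DUT alone
    return 10.0 * math.log10(f1), 10.0 * math.log10(g1)

nf_db, gain_db = y_factor_noise_figure_db(15.0, 2.0e-9, 0.2e-9, 3.0e-7, 0.2e-7)
print(round(nf_db, 2), round(gain_db, 1))   # about 3.5 dB NF, 21.9 dB gain
```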
Following along these steps should prove to be very straightforward, but to implement the Y-factor method in a production environment, some extra steps
have to be taken. As long as they are considered at the early stages and prior to load board design, all should be well. Note that the method entails performing a calibration step. This is done without the DUT in the measurement path. This is often implemented through the use of switches on the load board. They allow the noise source to be connected directly to the tester (receiver) for the calibration step and to the DUT, as in the measurement stage. If a noise diode is going to be used, consider that many noise sources use a 28-V supply as a standard (lower voltage noise diodes are available as well). The dc supplies that are to be used must be free of noise themselves, as well as being capable of allowing the noise source to be switched on and off, perhaps in a cyclic fashion if averaging is to be used as mentioned earlier.
4.2.12 Measuring Noise Figure Using the Cold Noise Method
The Y-factor method of measuring noise figure is the most accurate method as long as the DUT is perfectly impedance matched to the tester. Due to impedance variation from DUT to DUT, it is nearly impossible to have a perfect match for all DUTs, even those within the same lot. Because the Y-factor method uses only scalar measurements, it does not take into account the phase information that can be used to correct for the impedance mismatch. The S-parameter-based cold noise method of noise figure measurement has been created to account for the impedance mismatch between the DUT and tester. It is based on a full set of S-parameter measurements (e.g., magnitude and phase of all four S-parameters for a two-port device). The method is similar to the Y-factor method, but has the added advantage of using a correction algorithm to correct for mismatches between the DUT and tester. These algorithms are very computationally intensive, but with today's high-speed processors, it should add little to no overhead to the test time. The primary difference between this method and the Y-factor method is that in this method, the noise factors referred to in (4.30) are functions of the reflection coefficient Γ such that
F1 = F12(Γ) − [F2(Γ) − 1]/G1   (4.37)
The two primary steps of the cold noise technique are:
1. The calibration process is performed in a manner similar to that of the Y-factor method, with the difference being that the measurements are full S-parameter measurements. From this, correction factors for impedance mismatching are created.
2. With the DUT inserted (or switched in), full S-parameter measurements are made to find the true available gain of the DUT, rather than the "insertion" gain, as found from the Y-factor method. The available gain is used in conjunction with noise power and placed into (4.25).
The name cold noise arises because the only noise source at measurement time is a 50-Ω termination at the input of the DUT. Refer to [8] for more detailed calculations of the S-parameter-based cold noise technique. The information provided here should be enough to help a test engineer decide which method is best for the particular application. In general, if the DUT is well impedance matched to the tester (and has minimal impedance match variation between DUTs), then the Y-factor would be the best choice due to its simplicity. If there is a poor match between the DUT and tester, or a lot of variation between DUTs (such that a perfect matching network that meets all needs cannot be created on the load board), then the cold noise technique is more suitable.
4.2.13 Noise Figure Measurements on Frequency-Translating Devices
Up to now, most of this discussion has focused on measuring the noise figure of amplifiers, or two-port devices. When measuring the noise figure of frequency-translating devices, such as mixers, there are some differences in behavior that need to be addressed. One primary difference is that when measuring the noise figure of mixers, the noise source ENR is that of the microwave frequency, but the input of the measurement instrument is tuned to the IF of the device. To ensure that this is not a problem, a broadband noise source must be used that extends between the RF and IF or, more importantly, one that has the same ENR at both frequencies. Many mixers are passive and have loss (conversion loss) associated with them. This loss corresponds to the value of G in (4.26). Placing a linear gain value of less than 1 (corresponding to loss) into the Friis equation, (4.14), will show that the second stage of a cascade system can have large impact on the overall noise figure. If a mixer of this type is being used in the tester or noise figure analyzer (for example, as a downconverter to a system IF), then it can introduce significant measurement error. The two main types of mixers are single-sideband (SSB) and double-sideband (DSB) mixers. The noise figure can be measured for both; however, care must be taken and an understanding of the effects of the various noise power levels is important. If measuring with a noise figure analyzer, actual conditions are measured. That is, if the mixer rejects one sideband, a SSB result is displayed. Similarly, if the mixer converts both sidebands, DSB results are displayed. Thus, care should be taken when interpreting results, because confusion
can occur if DSB results are used to predict performance of a SSB system. For example, if DSB results are taken for a SSB mixer, the noise figure will be 3 dB lower than reality. This could potentially cause problems in that the mixer's noise figure is really 3 dB higher than reported. Additionally, measured gain will be 3 dB higher for DSB measurements of SSB mixers. This is because the measured bandwidth is twice the calibrated value.
4.2.14 Calculating Error in Noise Figure Measurements
Based on (4.14), the error that is introduced in noise figure measurements can be piecewise determined from the following equation [8]:
∆NF = √{[(F12/F1)·∆NF12]² + [(F2/(F1G1))·∆NF2]² + [((F2 − 1)/(F1G1))·∆G1 [dB]]² + [(F12/F1 − F2/(F1G1))·∆ENR]²}   (4.38)
The detailed derivation of this equation is given in [9, 10]. The first term, consisting of ∆NF12, accounts for mismatch between the noise source and the DUT and the overall instrument uncertainty. The instrument error is most often small as long as the user has chosen the best-fit test equipment for the given DUT. The second term represents error due to the tester noise figure. For example, accuracy and repeatability are lost when the tester has a high noise figure relative to the DUT. If a preamplifier is not added to the tester, then this term can be a significant contributor. The third term is dependent on the DUT gain. If the DUT gain is high, then note that the large number, G1, in the denominator of the third term reduces the overall effect of ∆G1. If, however, G1 is small, then ∆G1 becomes more significant. Finally, the fourth term accounts for any uncertainty or error due to the noise source, or ENR. This can range from bad calibration data to the impact on accuracy due to the cold noise temperature being assumed to be 290K, but actually being different.
4.2.15 Equipment Error
When making noise figure measurements it is important to be aware of the equipment or tester and the methods that are used to perform the measurement. If, for example, the tester employs a downconversion scheme in making the
power measurements, then it is necessary to know whether a DSB or SSB mixer is being used internally to the tester. The power in an unwanted sideband could be measured and added erroneously into the overall power measured. If the tester has a high noise figure itself, this limits the accuracy and repeatability with which noise figure can be measured. It is common practice to add a low-noise, high-gain preamplifier to the input of the tester. This preamplifier then becomes part of the tester and enables the tester noise figure to be reduced, as it becomes the first stage in (4.14). Nonlinearity is a problem in both the measurement equipment and the noise source. Any nonlinear effects within the detector will reappear in every calibration and measurement. Nothing one does to the DUT or external environment will change this. To minimize nonlinear effects, for example, if the device noise figure to be measured is quite low, then it is recommended that a low ENR source be used. The low ENR will require the detector to have less dynamic range, hence keeping the instrument in a linear mode of operation.
4.2.16 Mismatch Error
Impedance mismatch between the noise source and the DUT and between the DUT and the tester is perhaps the largest contributing source of error. As explained earlier, from an accuracy standpoint, it warrants the use of a full S-parameter-based noise figure measurement. The two primary problems arising due to mismatch are that noise power is lost at the mismatched interface and reflections of the noise power signal give rise to unpredictable effects. The noise source can impact mismatch error. Low-ENR sources with high internal attenuation are the best choice due to their lower VSWR (see Appendix B) and more consistent match between the on and off impedances. Measurement of noise figure on DUTs with high gain is less susceptible to the effects of mismatch, because higher gain reduces the relative contribution of a second-stage noise figure component from the instrument [see (4.14)]. If S-parameter-based measurements of noise figure are made (and done properly), the error associated with mismatch can be reduced significantly. However, note that this method could be computationally cumbersome. Because a full vector-based measurement of the DUT is performed, error correction terms can be applied to the noise figure measurement and provide a result similar to one in which the DUT had been in a perfectly matched environment.
4.2.17 Production Test Fixturing
When measuring noise figure in a production environment, there are even more sources of error. For production testing of packaged DUTs, a test fixture, or contactor, is used on a load board with a fixed matching network. Due to
variation between DUTs, the match between DUT and load board (and, ultimately, tester) will vary from DUT to DUT. For production noise figure measurements of wafers, a wafer probe is used. In either case, the means of contacting the DUT will introduce error. It will add loss and mismatch. Ideally, the measured output power of the DUT has to be corrected and the effect of the fixture or probes has to be removed. A production test fixture is a common place to look for noise being introduced into the system. The most difficult problem with shielding the production test fixture is finding a shielding means that can physically fit within the constraints of the DUT handler that is used. While it is nearly impossible to completely remove the fixturing effects from the noise figure measurement, a typical technique is to use a scalar correction that can compensate for the loss at the input of the device.
4.2.18 External Interfering Signals
With the proliferation of the use of mobile phones, pagers, and so forth in the vicinity of the test environment, unwanted interfering signals can degrade the performance of the noise figure measurement. It is not uncommon for wireless LAN networks or microwave ovens to produce interfering signals. From an electronics standpoint, it is common practice to locate an interference source and shield it to remove the "cause" of the interference. In the case of production noise figure measurements, it is not uncommon to place an entire test system within a screen room, or type of Faraday cage structure (but this can be very expensive and therefore should be avoided when possible). As a rule of thumb, shielding should reduce extraneous signal levels by 70 to 80 dB. Noise due to the measurement instrument itself is not usually a problem, because commercially available instrumentation is typically well shielded. However, be aware that an older computer integrated into the test setup can add noise, because shielding requirements were less stringent years ago.
4.2.19 Averaging and Bandwidth Considerations
Finally, a word must be noted about the residual jitter that is present simply due to the fact that noise is a random electrical signal. Repeatability errors will be introduced because the measurement is performed over a finite time. (Infinite time is required to acquire the true value of noise, but that is obviously not practical.) Measurement averaging should be used when possible. Of course, measurement averaging adds test time, but through observation, a compromise must be determined between the amount of averaging and the degree of repeatability desired. The general relationship of jitter in the signal, resulting from averaging is
Residual jitter ∝ 1/√N   (4.39)
where N is the number of averages. Alternatively, if the bandwidth of the measured noise is wide enough, enough noise data may be collected. The relationship between bandwidth B and residual jitter is
Residual jitter ∝ 1/√B   (4.40)
This means that either the noise measurement should be performed over a large bandwidth or with many measurement averages to obtain the best repeatability.
4.3 Phase Noise

4.3.1 Introduction
Phase noise is a parameter that measures the spectral purity of a signal. It is particularly referenced to a sinusoidal, or CW, waveform. It is associated with the term jitter. The principal difference between the two is that jitter is a property described best by relating it to the time domain, whereas phase noise is best described when related to the frequency domain. When viewing a pure sine wave in the frequency domain, it will look like an impulse function with all of the energy concentrated at exactly the carrier frequency. In reality, if the frequency space around the carrier frequency is explored, there will be energy located at the adjacent frequencies. This energy is due mostly to phase noise. Its behavior is that of 1/f noise. In practice, phase noise is represented in the frequency domain as shown in Figure 4.9. It measures the spectrum of phase deviation. This is its most common representation, as a single-sideband power measurement in a 1-Hz bandwidth, at some frequency away from, but relative to, a carrier power. For example, a phase noise specification for a VCO on a SoC device might look like this: Phase noise = −90 dBc/Hz at 10 kHz offset which means that the measured power, in a 1-Hz bandwidth, at 10 kHz away from a carrier signal, is 90 dB lower than the power of the carrier signal.
Figure 4.9 Graph of a typical phase noise specification showing a measurement bandwidth of 1 Hz taken at some frequency offset from a stated carrier frequency.
In wireless digital communications, the modulated signals contain information that is determined by the phase state of the signal. If the signal encounters too much phase noise, the relative and absolute positions of the information upon demodulation will be disturbed and the information cannot be extracted. Figure 4.10 shows an IQ constellation for a digitally modulated signal. The radial errors are due to amplitude noise, while the rotational errors are due to phase noise. The measurement of phase noise can also help to indicate other items such as BER and signal spreading. For example, in a GSM system, if phase noise is measured at a 200-kHz offset, the resultant value will tell how much energy is falling into the adjacent channel, because the 200-kHz offset is exactly the position of the adjacent channel. Any contributions from one channel to another channel can introduce such impairments as deteriorated BER [11].

Figure 4.10 Digital modulation I/Q constellation impaired by amplitude and phase noise.

4.3.2 Phase Noise Definition
A pure sine wave is typically represented by the following equation:

v(t) = V0 sin(2πf0t) (4.41)

where V0 is the peak voltage amplitude of the signal and f0 is the carrier frequency. The noise that can occur on this signal can exist in the form of either amplitude noise, phase (frequency) noise, or both. If the sine wave exhibits noise in the form of both amplitude and frequency, then (4.41) changes to

v(t) = [V0 + a(t)] sin[2πf0t + φ(t)] (4.42)
where a(t) is the amplitude noise and φ(t) is phase noise. The noise introduced by a(t) and φ(t) is shown in Figure 4.11 in both the time and frequency domains. Notice the spreading due to phase noise, as well as the amplitude modulation sidebands. The equation given in (4.42) also happens to be that of a signal having amplitude and phase modulation. The amplitude and phase modulating signals are those of random noise processes. Phase variations are caused by random processes giving rise to thermal fluctuations that modulate the pure signal.

Figure 4.11 Noise on a sinusoidal waveform exhibiting amplitude and phase modulation properties.

Because phase and frequency are directly related, it is equivalent to group the variations under the category of "phase." Just as with a phase-modulated signal, when viewing the noisy signal in the frequency domain, sidebands due to the noise are present. For this discussion of phase noise, it will be assumed that the amplitude noise contribution is much less than that of the phase noise contribution. The phase noise contribution, φ(t), could include both long- and short-term phase variation. In general, the long-term variation is considered to be frequency drift, whereas the term phase noise is reserved for short-term variations. Phase noise is the Fourier transform (or power spectral density) of the phase component of a sinusoidal signal scaled to dBc/Hz, where dBc refers to the power relative to the overall carrier power. Phase noise in the frequency domain can be expressed as
ᏸ(f) = Pn / Pc (4.43)
where Pn is the rms noise power in a 1-Hz bandwidth at a frequency f Hz away from the carrier and Pc is the rms power of the carrier. The units of (4.43) are dimensionless. The ᏸ(f) symbol, read as "script L," represents the frequency-domain notation of phase noise and is used almost universally throughout the literature. Often the units of ᏸ(f) are expressed in decibels, making (4.43) become

ᏸ(f) = Pn [dBm/Hz] − Pc [dBm] (4.44)
The units of (4.44) are dBc/Hz. As the frequency offset approaches zero, the variation is more appropriately termed frequency drift. Near-in (close-to-carrier) phase noise measurements often pose the difficult task of requiring equipment that can measure both the carrier power and the phase noise. On very high stability oscillators (DUTs), these measurements can push the limits of the dynamic range of the test equipment. In the frequency domain, a pure sine wave would be represented as an impulse waveform, or an infinitely narrow peak. In practice, there is always some sideband present. This is inherent in the fundamental physics: referring to both the Fourier transform and the Heisenberg uncertainty principle, the only way to have an infinitely narrow peak would be to measure the signal over the time period of –infinity to +infinity, which is obviously not possible (recall that the topic of this book is production testing). It is obvious that trade-offs have to be made in general phase noise measurements and, potentially, further trade-offs in production phase noise measurements to allow for cost-effective analyses.

4.3.3 Spectral Density-Based Definition of Phase Noise
Another method of defining phase noise is based on the one-sided power spectral density. Reference [2] defines this based on the random nature of the phase instabilities. Using the concept of phase noise being equivalent to phase modulation by a noise source, the spectral density is defined as

Sφ(f) = φ²(f)/B (4.45)
where B is the bandwidth (in hertz) and the units of Sφ(f) are radians²/Hz. Recall from (4.43) that the power spectral density describes the power distribution as a continuous function, expressed in units of energy within a given bandwidth. The short-term instability is measured as low-level phase modulation of the carrier and is equivalent to phase modulation by a noise source. The traditional definition of phase noise, as in (4.43), is the ratio of the power in one phase modulation sideband, per hertz, to the total signal power, usually expressed in decibels relative to the carrier power per hertz of bandwidth (dBc/Hz). This traditional definition may be confusing when the phase variations exceed small values, because it is possible to have spectral density values that are greater than 0 dB even though the power in the modulation sideband is not greater than the carrier power. IEEE Standard 1139 [12] has been modified to define phase noise as
ᏸ(f) = Sφ(f)/2 (4.46)
to eliminate any confusion [13].

4.3.4 Phase Jitter
Using the spectral density-based definition of phase noise, phase jitter [φ²(f)], defined as the total rms phase deviation within a specified bandwidth, is calculated as

φ²(f) [radians] = ∫_{f1}^{f2} Sφ(f) df (4.47)
or

φ²(f) [degrees] = (360/2π)² ∫_{f1}^{f2} Sφ(f) df (4.48)
Units of decibels may be used when phase jitter is relative to 1 radian (rms). Additionally, it is not possible to obtain Sφ(f ) from phase jitter unless the shape of Sφ(f ) is known.
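As a concrete, purely hypothetical illustration of (4.47), the short sketch below numerically integrates an assumed single-sideband phase noise profile, using Sφ(f) = 2ᏸ(f) from (4.46). The offset points and dBc/Hz values are invented for the example and do not describe any particular device.

```python
import numpy as np

def rms_phase_jitter(offsets_hz, l_dbc_hz, f1, f2):
    """Integrate an L(f) profile (dBc/Hz) from f1 to f2 and return the
    rms phase jitter in radians and degrees, using S_phi(f) = 2*L(f)."""
    f = np.logspace(np.log10(f1), np.log10(f2), 2001)
    # Interpolate L(f) in dB versus log-frequency (a simple, common model).
    l_db = np.interp(np.log10(f), np.log10(offsets_hz), l_dbc_hz)
    s_phi = 2.0 * 10.0 ** (l_db / 10.0)                      # rad^2/Hz
    area = np.sum(0.5 * (s_phi[1:] + s_phi[:-1]) * np.diff(f))  # trapezoid
    jitter_rad = np.sqrt(area)                                # total rms deviation
    return jitter_rad, np.degrees(jitter_rad)

# Hypothetical VCO profile (offsets in Hz, L(f) in dBc/Hz).
offsets = np.array([1e3, 1e4, 1e5, 1e6])
l_f = np.array([-70.0, -90.0, -110.0, -130.0])
rad, deg = rms_phase_jitter(offsets, l_f, 1e3, 1e6)
print(f"rms phase jitter: {rad * 1e3:.2f} mrad ({deg:.3f} deg)")
```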
4.3.5 Thermal Effects on Phase Noise
Thermal noise can limit the extent to which phase noise can be measured. From the fundamental description of thermal noise, kTB, the noise power at room temperature (290K) is –174 dBm/Hz. Because phase noise and amplitude noise are uncorrelated [see (4.42)], each contributes equally to kTB. The phase noise power contribution to kTB is –177 dBm/Hz and the amplitude noise power contribution is –177 dBm/Hz. (Note that each is 3 dB less than the total thermal power.)
4.3.6 Low-Power Phase Noise Measurement
Measuring the phase noise of low-power signals can be difficult. However, an LNA can be used to boost the device carrier power to levels necessary for successful measurements. The theoretical phase noise measurement floor is then limited by the noise figure of the amplifier and the low power of the signal to be measured:

ᏸ(f) = −177 [dBm/Hz] + NF [dB] − PDUT [dBm] (4.49)
where –177 dBm/Hz is the theoretical noise power due to phase noise at room temperature, NF is the noise figure of the amplifier, and PDUT is the power of the signal from the DUT before it is amplified.
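Equation (4.49) translates directly into a one-line helper, sketched below. The 3-dB noise figure and –30-dBm carrier power in the usage line are illustrative values only, not taken from the text.

```python
def phase_noise_floor_dbc_hz(nf_db, p_dut_dbm):
    """Theoretical phase noise measurement floor per (4.49):
    L(f) = -177 dBm/Hz + NF - P_DUT, at room temperature."""
    return -177.0 + nf_db - p_dut_dbm

# Example: a 3-dB-NF LNA boosting a -30-dBm DUT carrier.
print(phase_noise_floor_dbc_hz(3.0, -30.0))   # -144.0 dBc/Hz
```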
4.3.7 High-Power Phase Noise Measurement
At high power levels (i.e., greater than 0 dBm), attenuators are often needed in the measurement setup. These reduce the signal-to-noise ratio and detract from the measurement. Therefore, it is recommended that these phase noise measurements be performed at lower power levels.
4.3.8 Trade-Offs When Making Phase Noise Measurements
Two trade-offs must be made when making phase noise measurements: (1) measurement speed versus measurement information and (2) measurement ease versus measurement sensitivity. Obviously, for production testing, it is desirable to have a fast, easy-to-perform measurement; however, compromise must be made for each application. A fast measurement will provide reduced test times, but at the expense of providing little information about the signal. If more detailed information is needed, it comes at the expense of longer test times because more data from the device is needed. Because phase noise measurement information is a spectral distribution of noise data, the foremost contributor to measurement speed is the offset frequency selected, because this frequency choice determines the longest time record or the narrowest resolution bandwidth. If the measurement equipment is using averaging (most often the case), it has the next highest contribution to measurement time. If using a spectrum analyzer, or spectrum analyzer–based equipment, then for offset frequencies that are far from the carrier, the resolution bandwidth can be much larger than the resolution bandwidth for offset frequencies that are near the carrier. The narrower the resolution bandwidth, the more time required to gather the measured data. An easy-to-perform measurement is often synonymous with a quick and custom measurement tailored to a specific device. This usually means that reconfiguring the measurement setup for another, different device takes more of an effort. Therefore, with measurement setups, it is desirable to have a setup that can meet the needs of multiple devices. Particularly for low-power signal phase noise measurements, if that type of sensitivity is needed, it often comes at the expense of difficult, or expensive, measurement setups.

4.3.9 Making Phase Noise Measurements
There are two critical items to be aware of when measuring phase noise or designing a phase noise measurement:

1. The measuring receiver must have a lower noise floor than the signal to be measured.
2. Any local oscillator in the measuring receiver must have better phase noise than that of the signal to be measured.

In order for a measurement receiver (or spectrum analyzer) to be able to measure a device's phase noise, it is imperative that the noise floor of the receiver
be lower than the phase noise to be measured. Figure 4.12(a) shows a legitimate measurement setup where the signal can easily be discerned from the noise floor of the receiver. Note that the lower powered signal of Figure 4.12(b) has its desired phase noise measurement point below the noise floor of the measurement receiver. If it turns out that the measurement is just that of the noise floor of the receiver, then a solution is to increase the carrier power of the device such that the desired measured signal overcomes the receiver noise floor.

Figure 4.12 Effects of the noise floor of the tester: (a) good, and (b) bad (hides desired measurement point).

Additionally, another concern is that the local oscillator in the measuring instrument's receiver must not contribute phase noise that will detract from the measurement. In almost any receiver, the input signal (the signal to be measured) is mixed with the measuring instrument's LO to produce a new (IF) frequency that is analyzed. If phase noise from the receiver's LO is introduced, it may be interpreted as that of the DUT. If it is significant and at the frequency of interest, it will introduce measurement error. Assuming that the receiver has a spectrally clean LO, this only becomes a concern when attempting to measure near-in phase noise. Unfortunately, low phase noise RF sources often come at the expense of slower switching. To avoid contributions from the local oscillator of the test system to the phase noise measurement of the DUT, it is important for the test system LO to have significantly better phase noise performance (at least 10 dB better) than the signal to be measured from the DUT. If the phase noise performance of the test system LO approaches that of the DUT, the tester will report a phase noise result 3 dB poorer than the actual value.

If the measured values are higher than expected and there is suspicion that the measured phase noise is being limited by the measurement system (receiver), then remove the device and take a raw phase noise measurement. A raw measurement is the phase noise of the receiver itself, and it can be used as an indication of how well a device's phase noise can be measured. Figure 4.13 demonstrates the effect of the phase noise of the receiver (tester). The dashed line represents the phase noise of the receiver. In Figure 4.13(a), the phase noise of the receiver falls beneath the phase noise of the device to be measured (solid line); thus, the receiver is not limiting the capability to measure the phase noise of the DUT. In Figure 4.13(b), the phase noise of the receiver hides the signal of the device. The case in Figure 4.13(b) should be avoided, because it prohibits measurement of the true phase noise of the DUT. Sometimes, phase noise cannot be measured directly on the device. In these cases, methods such as external quadrature mixing or clean amplification of the device's signal via an external LNA are required.

4.3.10 Measuring Phase Noise with a Spectrum Analyzer
The easiest way to measure phase noise, as well as the traditional method, is to use a spectrum analyzer. Note, however, that when using a spectrum analyzer, the measurement is of noise in general; it is not limited to just phase noise. Both amplitude and phase noise contributions are measured when using a spectrum analyzer, and their respective contributions cannot be separated. Referring to (4.42), the definition of the measurement of phase noise is the Fourier transform of only φ(t). However, a spectrum analyzer provides the Fourier transform of the entire waveform, v(t). If the proper conditions are met, then an accurate measure of phase noise can be obtained with a spectrum analyzer.
Figure 4.13 Phase noise of the tester: (a) Good—a proper phase noise measurement, and (b) bad—the phase noise of the tester hides the signal from the DUT.
The universal assumption, when measuring phase noise with a spectrum analyzer, is that phase noise is the dominant noise present. It is also a condition that the phase noise must be relatively good, or low level. Thus, from (4.42),

a(t)/V0 << φ(t)/2π << 1 (4.50)
If this is the case (as is typically assumed in practice), then the calculation of phase noise from a spectrum analyzer (or any test equipment that uses an IF filter, similar to a spectrum analyzer) is

ᏸ(f) = Poffset − Pcarrier − 10 log(BW) (4.51)

where Poffset and Pcarrier are the powers read from the spectrum analyzer display and BW is the resolution bandwidth (or IF filter) setting in hertz; the 10 log(BW) term is subtracted to normalize the measurement to 1 Hz.
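A minimal helper implementing (4.51) is sketched below; it simply applies the marker delta and the bandwidth normalization. The usage lines reproduce the numbers from the example in Section 4.3.11.

```python
import math

def phase_noise_dbc_hz(p_offset_dbm, p_carrier_dbm, rbw_hz):
    """Phase noise per (4.51): offset power minus carrier power,
    normalized from the resolution bandwidth to 1 Hz."""
    return (p_offset_dbm - p_carrier_dbm) - 10 * math.log10(rbw_hz)

# Marker delta of -67.86 dB in a 100-Hz RBW, then -77.86 dB in a 10-Hz RBW.
print(phase_noise_dbc_hz(-67.86, 0.0, 100))   # -87.86 dBc/Hz
print(phase_noise_dbc_hz(-77.86, 0.0, 10))    # -87.86 dBc/Hz
```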
4.3.11 Phase Noise Measurement Example
Measurement of phase noise in practice often leads to incorrect results because the engineer misinterprets the definition. As an example, a phase noise measurement will be made using a simple, very spectrally pure signal generator output sent to a spectrum analyzer [C. C. Kang, Agilent Technologies, personal communication, July 2003]. Figure 4.14(a) shows a spectrum analyzer display of the signal generator output. The spectrum analyzer resolution bandwidth (RBW) is set to 100 Hz, and marker 1 (2 kHz away from the carrier) is 67.86 dB lower than the carrier (marker 1R). From (4.51) the phase noise is calculated as

ᏸ(f) = −67.86 − 10 log(100) = −87.86 dBc/Hz (4.52)
Figure 4.14 (a–c) Phase noise measurement on a spectrum analyzer with different hardware settings.

The 10 log(100) term normalizes the 100-Hz resolution bandwidth to 1 Hz to be consistent with the definition of phase noise. The appropriate specification of this measurement is "the phase noise is −87.86 dBc/Hz at a 2-kHz offset from the carrier." In Figure 4.14(b) the same signal has been applied to the spectrum analyzer, but the resolution bandwidth of the spectrum analyzer has been reduced to 10 Hz. In this case the difference between the carrier and the signal at a 2-kHz offset is –77.86 dB. The narrower resolution bandwidth has seemingly provided lower phase noise. However, while the level of the display has been reduced, so has the normalization factor; hence, the calculated phase noise is once again

ᏸ(f) = −77.86 − 10 log(10) = −87.86 dBc/Hz (4.53)
Reducing the spectrum analyzer resolution bandwidth will never interfere with making a proper phase noise measurement; however, it will increase the measurement time. Note that problems can occur when one attempts to increase the resolution bandwidth. Once again, the same signal is applied to the spectrum analyzer in Figure 4.14(c). In this case the difference in the carrier and the signal at a 2-kHz offset is –24.35 dB. Calculating phase noise,

ᏸ(f) = −24.35 − 10 log(1000) = −54.35 dBc/Hz (4.54)
provides an unexpectedly different result, which is a consequence of the fact that the chosen resolution bandwidth was too wide and the phase noise is hidden beneath the IF filter skirt of the spectrum analyzer.

4.3.12 Phase Noise of Fast-Switching RF Signal Sources
Although it is not apparent on benchtop RF signal sources in the laboratory, the phase noise quality of an RF signal source is almost always inversely related to the amount of time it takes to change frequencies or power levels. Often, with production test systems or rack-and-stack hardware architectures, designers of the hardware aim to find RF sources that exhibit the fastest frequency and power switching speeds possible. It is a logical assumption to do this, and for many devices it is the appropriate choice. However, modern wireless and SoC devices require higher performance and tighter tolerances in the area of phase noise. It is often the case that the fast, available RF signal sources of a test system are inadequate to perform these stringent measurements. This results in measuring the phase noise of the tester or, more appropriately, the tester noise floor, as shown in Figure 4.13(b). When choosing a test system, the availability of a low phase noise (although most likely slower switching) RF signal source will add flexibility to the tester.

4.3.13 Measuring Phase Noise Using the Delay Line Discriminator Method
Another possibility for performing phase noise measurements is the delay line discriminator method. The previously presented phase noise measurement methods rely on a clean, low phase noise RF source that is used as a reference. The delay line discriminator method does not need a reference, which makes it an attractive choice, especially for phase noise measurements that are so challenging that it is difficult to obtain a suitable reference source. Figure 4.15 shows the basic setup that is required to perform phase noise measurements with the delay line discriminator method.

Figure 4.15 Phase noise measurement using the delay line discriminator method.

A splitter is used to divide the RF signal into two equal-level signals. One signal is applied directly to a mixer, whereas the second signal is routed through a delay line and then into the mixer. The delay line is either an electronically programmable phase shifter or simply a piece of cable of a specific length. The signals that are routed into the mixer have exactly the same frequency but obviously a different phase; the method works best if the phase difference between the two signals is exactly 90°. Because two signals with the same frequency are applied to the mixer, the resulting output signal of the mixer is a dc voltage. However, the amplitude of the dc voltage is a function of the phase difference between the two incoming signals. The change in phase of an electromagnetic wave is a function of the length of the delay line as well as the frequency of the signal: For higher frequencies there is a larger change in phase than for lower frequency signals. Applied to the delay line discriminator method, this means that a different voltage will be measured for larger offset frequencies than for smaller offset frequencies. The method thus enables measurement of voltage as a function of offset frequency, and the slope of that curve turns out to be 6 dB per octave. Even though this method seems very attractive because it does not rely on a separate RF source as a reference, it has several disadvantages as well. First, it can be difficult to find a mixer that has a dc-coupled IF output. Second, passive mixers require a minimum signal strength in order to work properly. Third, the method has a limited bandwidth because it requires delay lines that cover the frequency band of interest. Finally, it is difficult to automate and calibrate, so this method is not often used in SoC test equipment.
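The frequency dependence that the discriminator exploits can be illustrated with a short sketch of the phase shift through an ideal delay line (φ = 2πfτ). The 5-ns delay and the frequencies used below are illustrative values only; in a real setup the delay and the mixer's phase-to-voltage gain would be calibrated.

```python
import math

def delay_line_phase_deg(freq_hz, delay_s):
    """Phase shift through an ideal delay line: phi = 2*pi*f*tau,
    reported modulo 360 degrees."""
    return math.degrees(2 * math.pi * freq_hz * delay_s) % 360

# Illustrative numbers: a 5-ns delay line (roughly 1 m of cable).
for f in (900.0e6, 900.1e6, 901.0e6):
    print(f"{f / 1e6:8.1f} MHz -> {delay_line_phase_deg(f, 5e-9):7.2f} deg")
```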
References

[1] Pozar, D. M., Microwave Engineering, Reading, MA: Addison-Wesley, 1993, pp. 582–594.
[2] Witte, R. A., Spectrum & Network Measurements, Upper Saddle River, NJ: Prentice Hall, 1993.
[3] Johnson, J. B., "Thermal Agitation of Electricity in Conductors," Physical Review, Vol. 32, 1928, p. 97.
[4] Nyquist, H., "Thermal Agitation of Electricity in Conductors," Physical Review, Vol. 32, 1928, p. 110.
[5] Schottky, W., "Small-Shot Effect and Flicker Effect," Physical Review, Vol. 28, 1926, p. 74.
[6] "Fundamentals of RF and Microwave Noise Figure Measurements," Hewlett Packard Application Note 57-1, 1983.
[7] Friis, H. T., "Noise Figures of Radio Receivers," Proc. IRE, July 1944, pp. 419–422.
[8] "Noise Figure Measurement Accuracy: The Y-Factor Method," Hewlett Packard Application Note 57-2, 1992.
[9] Lance, A. L., W. D. Seal, and F. J. Bayuk, "Noise Measurement Uncertainty," J. Appl. Measurements, Vol. 2, 1974, pp. 70–75.
[10] Boyd, D., "Calculate the Uncertainty of NF Measurements," Microwaves & RF, October 1999, pp. 93–102.
[11] "10 Hints for Making Successful Noise Figure Measurements," Hewlett Packard Application Note 1341.
[12] IEEE Standard 1139, 1988.
[13] Ferre-Pikal, E. S., et al., "Draft Revision of IEEE Std 1139-1988: Standard Definitions of Physical Quantities for Fundamental Frequency and Time Metrology—Random Instabilities," 1997 Int. Frequency Control Symp. Proc., 1997.
Selected Bibliography

Adam, S., Microwave Theory and Applications, Upper Saddle River, NJ: Prentice Hall, 1969, pp. 490–502.
Chambers, D. R., "A Noise Source for Noise Figure Measurements," Hewlett Packard J., April 1983, pp. 26–27.
Hewlett Packard, "Understanding and Measuring Phase Noise in the Frequency Domain," Application Note 207, 1976.
IRE Standards on Methods of Measuring Noise in Linear Two-Ports, 1959, Proc. IRE, January 1960.
Noren, B., "Production Test Places New Requirements on Noise Figure Measurement Techniques," Agilent Technologies, 1999.
5 Advances in Testing RF and SoC Devices

5.1 Introduction

Industry has seen increasing levels of integration at the chip level with SoC devices for wireless communications. As a result, the production testing methodologies of the RF portions of these SoC devices have been impacted, introducing new challenges while at the same time offering advantages in terms of the cost of testing (COT). Some of the areas that need to be considered are new philosophies surrounding system-level testing, wafer probing of known-good die, more consideration of final test needs (such as concurrent testing) by chip design engineers, design for testability and built-in self-test, and how this increased integration drives the architectures of test systems. These areas will be addressed, and considerations for production testing that can align seamlessly with, and even take advantage of, this increased integration will be presented along with the associated implications for COT. The purpose of this chapter is to enlighten the reader to the impact that integration of electronic devices is having on production testing of these devices. In particular, major shifts in testing methodologies for RF devices are becoming available. A few key topics surrounding production testing are discussed in the following sections:

• System-level testing;
• RF wafer probing;
• SiP versus SoC architectures;
• Designers' new responsibilities;
• RF built-in self-test;
• Impact on test system architecture;
• Testing wide bandwidth devices.
5.2 System-Level Testing

Modern highly integrated chips/packages have an "RF-to-bits" or RF-to-analog baseband architecture. One of the largest impacts of this RF integration is that it provides testers with the opportunity to take advantage of a paradigm shift, enabling system-level testing, which provides the advantage of decreased test times. However, a hurdle to overcome is its current lack of industry-wide acceptance. In fact, system-level testing has been the subject of many debates. This technique involves testing the DUT as it is intended to be used. It is similar to a go/no-go test, using digital modulation to measure quantities such as BER and error vector magnitude (EVM). The applied test signals contain digitally modulated information and mimic the signals that are received at the antennas of wireless devices, or at the inputs to wired RF devices. Traditionally, CW or single- and two-tone tests have been the extent of RF testing. Due to the simple discrete RF device architecture (RF inputs and outputs), this has been the only methodology available. Now that these discrete structures are being integrated, the final chip architectures are becoming more crowded and complex. Some arguments against system-level testing are that not enough time has been spent at the R&D level to determine whether these all-encompassing tests are secure enough to capture all of the failing parts. In an effort to resolve this issue while still keeping test times low, these all-encompassing system-level tests are currently supplemented by some amount of more traditional functional testing. As the product matures and confidence in design and fabrication is gained, the number of traditional functional tests will be reduced. Alternative approaches to production test plans can be considered as a compromise to the all-encompassing, fully tested SoC devices [1]. That is, one could perform system-level (BER or EVM) tests as the normal production test plan, yet also conduct periodic characterization test plans, every 100th DUT or so, for example. This allows for an efficient means to test the parts, while still feeding process information back to the designers and fabrication engineers. Using this technique, an effective test time can be defined as follows:

Teffective = [(N − 1)Tproduction + Tcharacterization] / N (5.1)
For example, if the production test plan execution time is 2.0 seconds and every 50th DUT (N = 50) is run through a characterization test plan that takes 60 seconds, the effective test time is 3.16 seconds. As the product matures, and less process feedback is required, increasing N reduces the effective test time. If N is increased to 200, then the effective test time becomes 2.29 seconds.
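A small helper implementing (5.1) is sketched below; the usage lines reproduce the 2.0-second and 60-second example figures from the text.

```python
def effective_test_time(t_production, t_characterization, n):
    """Effective test time per (5.1) when every Nth DUT also runs
    a longer characterization test plan."""
    return ((n - 1) * t_production + t_characterization) / n

print(effective_test_time(2.0, 60.0, 50))    # 3.16 seconds
print(effective_test_time(2.0, 60.0, 200))   # 2.29 seconds
```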
5.3 RF Wafer Probing

Traditionally, wafer probing has been avoided if at all possible, especially in the area of RF testing. Early designs of wafer probes and wafer probe interfaces had the challenge of handling parasitic capacitances and inductances seen at RF frequencies. Noise pickup was also a problem. However, with the increasing costs of more complex packages, the advent of the SiP, and the sale of known-good die (KGD), it has become clear that probing is becoming more necessary. Furthermore, because various functioning die are incorporated into the final package, in a worst-case scenario, a low-yield inexpensive die could jeopardize the entire package, rendering more expensive die in the package (plus the package) useless. This situation has driven the advancement of RF wafer probing technology. The concept of SiPs also falls into the category of integration. With SiPs, testing can be performed at the package or on the wafer before the parts are integrated into the module. Often, probing the individual die that comprise the module is preferred over extensive packaged testing. This mandates testing at the wafer level, which has historically been avoided for RF devices. As a result of this, the KGD phenomenon is driving RF wafer test into the mainstream [2, 3]. See Chapter 12 for a detailed description and analysis of wafer probing.
5.4 SiP Versus SoC Architectures

The formal definition of a SoC is a system on a chip; recently, however, devices with multiple dies in a package, or SiPs, have been introduced. In SoC devices, cores are integrated at the silicon level. In SiPs, the integration is done at the package level. With SiPs, different intellectual properties (IPs) can be used in the same package; in fact, in some cases, different vendors' die can be used together. In a SoC, because the structures are all on the same chip, the semiconductor technology [e.g., SiGe or complementary metal oxide semiconductor (CMOS)] must be of only one type. If the performance requirement of one of the functions on the SoC is near the limit of that technology, it can cause the entire SoC to fail.
The term core must be defined. A core is a functional block, circuit, or stand-alone IP. The term core has been used for many years within the traditional SoC device design and testing community. The notion of the term is relatively new to RF test engineers. This is mainly because only recently have the discrete RF device functions (low-noise amplifier, mixer, and so forth) been included on the same die with complex digital or analog functional blocks. The difference between integration of RF cores into a SoC or into a SiP is mainly associated with the cost benefits of these two different types of devices as a function of the type of cores being integrated. Differences between the two types of integration include the expected yield of the cores and the product's packaging cost. Likewise, the decision of whether to test the individual cores or to test an entire SiP is also a function of the yield of the individual cores. Consider that the yield of the overall integrated SiP is dependent on the yields of the individual cores as well as the yield of the package (material quality, bond wires, and so forth):

YSiP = Ycore1 Ycore2 ⋯ YcoreN Ypackage (5.2)
Therefore, it is readily apparent that the more cores a SiP contains, the more dependent its overall yield becomes on the yields of the individual cores. It takes just one bad lot of one of the cores to cause many other good cores and packages to be scrapped. However, on the positive side, if the fabrication processes are well controlled and the yields are high, the reward for waiting to test until the die are packaged into a SiP can be a greatly reduced cost of test, especially if system-level testing is also implemented.
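Equation (5.2) translates directly into a one-line calculation, sketched here with invented core and package yields purely for illustration.

```python
import math

def sip_yield(core_yields, package_yield):
    """Overall SiP yield per (5.2): the product of the individual
    core yields and the package yield."""
    return math.prod(core_yields) * package_yield

# Illustrative values: four cores at 98% yield each, 99% package yield.
print(f"{sip_yield([0.98] * 4, 0.99):.3f}")   # about 0.913
```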
5.5 Designers' New Responsibilities

In traditional digital testing, the final test algorithms are usually created by the chip designer, often even programmed into the chip. Often, the designer and test engineer never even cross paths during the life cycle of the DUT. There are many items, such as this, that will change as integration levels increase. In the area of RF, the designers will need to begin to look ahead and plan strategies and chip architectures for new production testing methodologies, a practice that is not currently done. Independent of the cost accounting analysis and the managerial decision for an RF SoC or SiP, additional factors need to be considered [Ariana Salagianis, Agilent Technologies, personal communication, 2004]:
• Electronic design automation (EDA) tools for RF cores to address COT;
• Active communication between the design and test engineering disciplines for the creation of cost-effective, on-chip design-for-testability (DFT) structures;
• Collaborative test development teams for faster time to market.
Digital cores are easy to test using functional or structural tests. During the last couple of years, EDA companies have made major progress in the area of test cost reduction with the latest introduction of test program generators, data compression, and diagnostic capabilities. These capabilities allow for faster time to market, reduction of the production test time, and the utilization of lower cost testers. EDA competition in the digital domain has significantly increased, and analog BIST (see Section 5.6) structures have become the next EDA competitive advantage. It is estimated that some time will elapse before RF BIST structures become available for widespread use. This means that RF cores are likely to become the most expensive portion of the test cost in a SoC or SiP device. Even though both EDA and ATE companies are focusing on COT reductions, currently only the ATE companies can offer some form of test cost relief on integrated RF cores. Concurrent test execution is the needed ATE capability, and it must be addressed by both the design and testing communities cooperatively. Concurrent testing is parallel testing at the device level. It takes advantage of core integration on SoCs and SiPs and expands the parallel test definition from a multisite to a multicore test. Concurrent testing requires that the cores be independently accessible and controllable. This level of independence is available as a result of physical isolation of the RF core in SiP devices or IP core isolation in the design of SoCs. When the RF core can be tested in a stand-alone manner and concurrently with the other cores in the SoC or SiP, its test time can be hidden within the test time of other cores having comparable test times, resulting in a considerable decrease in overall test time. In a SiP, when there is physical isolation of the die, provided that the package interconnects do not compromise independent access and control, there is a great opportunity for a concurrent testing application without any impact on the device design cycle. Test engineers can apply parallel core execution with limited information from the design team. This increased level of integration and the drive toward reduced COT require a higher level of communication between the design and test engineering teams early in the process. Concurrent testing and successful application of RF DFT for core isolation require more direct communication between the test and design engineering teams. It is necessary to have a good understanding of the test time
benefits of the concurrent test methodology proposed and the design effort and time-to-market implications required to make the desired design changes. Prior to the advent of SoC devices, a test engineer was assigned to a device and was responsible for implementing all required tests as defined by designers and/or marketing requirements. It is not typical for a single test engineer to have the required level of expertise for testing all technologies (i.e., RF, mixed-signal, and digital) across the multiple cores within a SoC, nor is there time for a single test engineer to meet the strict time-to-market requirements. Today, multiple test engineers work as a team on the same device, and the various core test programs are integrated to form the final wafer and/or package test. This new organizational structure in test engineering calls for ATE tools capable of providing smooth test integration.
5.6 RF Built-In Self-Test (BIST)

While built-in self-test has been used for many years in digital circuit design and testing, it is in its infancy when being applied to RF circuits. The focus of BIST is on transistor-level defects, a level of granularity not traditionally observed by RF test engineers. More recently, studies on the implementation of BIST into RF devices have been appearing [4, 5]. Figure 5.1 shows a modern wireless ZIF radio transceiver architecture. Recent levels of integration have all functional blocks except for the power amplifier, duplexer, and antenna appearing on either the same silicon chip, or in the same package. The implementation of BIST at the baseband level in this example is achieved by a loopback between the analog-to-digital and digital-to-analog converters. Traditional BIST techniques are first performed on the baseband portion of the DUT, prior to enabling RF BIST [4]. Finally, to perform the RF BIST, the baseband DSP is used to generate a stimulus to be passed up through the transmitting chain, then through the test amplifier (TA), then back through the receiving chain to the baseband processor to be analyzed. The test amplifier is powered down during normal operation of the device. Furthermore, the engineer must consider the fact that the test amplifier could be defective, in which case, the decision to either scrap the entire DUT or choose an alternative test plan must be made. A typical generated test signal would be a pseudorandom bit sequence generated by the baseband stimulus. A typical BIST algorithm would be to generate the bit sequence, upconvert it, and pass it through the transmitting chain. Then after connecting to the receiving chain via the test amplifier and downconversion back to the baseband processor, a BER number is acquired. One shortcoming of this method is that the level of diagnosis of the problem is low.
Figure 5.1 BIST implementation in a SoC ZIF transceiver.
For example, a poor BER result could be the result of, at a minimum, low gain in the transmitting or receiving chain, nonlinear distortion in one of the amplifiers, or poor noise figure of any one of RF or mixed-signal cores. A few RF BIST implementations already have U.S. patents. U.S. Patent 20040148121A1 [6] describes a BIST technique that requires only a low-cost frequency counter as test equipment. This method has an additional frequency divider designed into the device and has the ability to tap off of the transceiver’s transmitter output RF signal. This enables a low-cost means for measuring the output frequency of the RF signal, but at low frequency, because the RF signal’s frequency has been reduced through the built-in frequency divider as shown in Figure 5.2.
Figure 5.2 An on-chip (BIST) means for measuring transmitter RF frequency requiring only a low-cost frequency counter for test equipment [6].

5.7 Test System Architecture

With the integration of RF onto chips already containing high-speed digital and mixed-signal circuits, single-point solution test systems are no longer able to test these devices. Numerous test systems are available on the market, with varying degrees of capabilities. The once RF-only tester topology is disappearing as market demands also drive tester integration levels. The added analog and digital
functionality that appears along with RF on the chip now must also be available on the automated test equipment. Test systems are tasked with the challenge to stay a step ahead of the market in their test coverage capability and to be flexible enough to cater to varying sectors of the market.
5.8 Testing Wide Bandwidth Devices

Lately, it seems as if the bandwidth utilization of communications devices is growing without bound. Wireless LAN devices are commonplace, with bandwidths from 18 MHz and up. Ultra-wideband (UWB) is another technology that requires enormous bandwidth [7]. Many other modern consumer devices use complex modulation techniques such as orthogonal frequency-division multiplexing (OFDM), mandating a wide bandwidth. With the increase in operational bandwidth comes the need to be able to test these wide bandwidths. The challenge of measuring the frequency response of wide bandwidth devices is not solely a technical one; it is also a matter of cost. Today's ATE with RF typically costs between $500,000 and $2 million. The COT is in the region of 1 to 3 cents per second of test time. As the average price of chips for the consumer market falls, it is key to the manufacturing process that the COT be kept
low. This is only possible if the test time is very short [8]. Sometimes, to combat costs, test times, and time to market, a load board testing solution may be the answer (building part of the test system on the load board as a focused solution).
5.8.1 New Test Methodologies
Reference [8] proposes a production load board measurement methodology to make these types of measurements. The proposed solution intends to overcome the limitation of typical ATE systems by using the arbitrary waveform generation (AWG) to modulate the RF output such that it causes the same effect as a swept frequency being applied to the DUT (the traditional way to accomplish this), instead of using multiple-tone frequencies or wideband modulated signals. In production testing, use of a swept frequency and downconversion in the test system is difficult and time-consuming. The difficulties arise from synchronizing the sweeping frequencies of the RF signal and the test system LO such that the downconverted frequency is constant. In addition, the amount of time required to download the data will be quite large [8]. To overcome this problem, a simple RF detector (e.g., Analog Devices AD8318 [9] or Linear Technologies LT5534 [10]) together with a buffer amplifier is placed on the load board and used to measure the output. The output of this detector circuit is a simple voltage, proportional to RF power, that can subsequently be measured with a digital pin contained within the SoC test system. The key to the speed of this measurement is the RF detector. The rapid advancement in the performance of RF detectors is due to the use of these devices in consumer products. Modern detectors have high linearity, wide operating frequencies, good dynamic range, are easy to use, and come in small packages. An RF power detector simply converts RF power to dc voltage (see Chapter 6 for a detailed description of power detectors). The main selection criteria for our purpose are linearity, flatness, and good return loss in the frequency of operation. Care should be taken to ensure that the DUT power output is within the linear region of the RF detector used [8]. Figure 5.3 shows the measurement setup. A filter is required between the DUT and the power detector to prevent unwanted spurs and noise outside the frequency range of interest from interfering with the measured value. For obvious reasons, the frequency response of the filter should be as flat as possible. The amplifier is acting as a buffer to ensure that the RF detector output is not affected by unexpected loading from the digital pin. The amplifier should have sufficient bandwidth and settling time to effectively track the detector output for the voltage measurement.
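As a rough sketch of how such a detector reading might be converted to power, the helper below assumes a logarithmic detector operating in its linear-in-dB region, where the output voltage is approximately slope × (Pin − intercept). The slope and intercept values in the usage line are illustrative placeholders only; in practice they must come from the detector data sheet or, better, a calibration sweep against a known source.

```python
def detector_voltage_to_dbm(v_out, slope_v_per_db, intercept_dbm):
    """Convert a logarithmic RF detector's output voltage to input power
    (dBm), assuming operation in the detector's linear-in-dB region."""
    return v_out / slope_v_per_db + intercept_dbm

# Illustrative (not data sheet) values: negative slope of -25 mV/dB,
# 20-dBm intercept, and a measured buffer-amplifier output of 1.10V.
print(detector_voltage_to_dbm(1.10, -0.025, 20.0))
```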
Figure 5.3 A low-cost test solution for testing wide bandwidth RF parameters in communications devices.
5.8.2 Calibration
The frequency response of the entire circuit should be flat enough that it does not affect the measurement. If this is not possible, it can be corrected for by measuring the external circuit frequency response once to establish a baseline, and then correcting for it during the measurement in production [8]. The actual accuracy of the circuit is also affected by the dc offset and linearity of the buffer amplifier, as well as the linearity of the detector. There will be a need to calibrate, or to find the threshold of failure of the devices being tested, using the calibrated RF source prior to testing. To account for drift of measurement results, routine automated calibration can be performed, for example, every hour [8].
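A minimal sketch of this baseline-correction idea is shown below, assuming the responses are stored in dB; the frequencies and loss values are invented purely for illustration.

```python
import numpy as np

def corrected_response_db(measured_db, baseline_db):
    """Remove the external circuit's frequency response from a production
    measurement by subtracting a stored baseline (in dB) taken once with a
    calibrated RF source driving the detector path."""
    return np.asarray(measured_db) - np.asarray(baseline_db)

# Illustrative sweep: baseline captured at calibration time, then reused.
freqs_mhz  = np.array([2400, 2410, 2420, 2430, 2440])
baseline   = np.array([-0.8, -0.9, -1.1, -1.2, -1.4])   # path response vs. frequency
production = np.array([-3.1, -3.3, -3.6, -3.8, -4.1])   # raw DUT readings
print(corrected_response_db(production, baseline))
```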
5.9 Conclusion

Chip architectures and the demand for COT reductions are changing production testing methodologies. Six key areas in which changes are occurring were highlighted in this discussion. RF integration into SoC (and SiP) devices is becoming the norm as driven by technological capabilities and market demands. Similar to the integration of analog, high-speed links, and digital cores, RF integration is driving the need for RF BIST advances to lead to further reduction of testing costs. At the hardware level, RF DFT becomes valuable and the ATE that is required for testing of modern SoC devices is one that can address multiple
technologies (i.e., RF, mixed-signal, baseband, memory, and power management) with the maximum and optimized level of parallel test execution.
References

[1] Lowery, E., "Integrated Cellular Transceivers: Challenging Traditional Test Philosophies," Proc. 28th IEEE/CPMT/SEMI Int. Electronics Manufacturing Technology (IEMT) Conf., New York: IEEE, July 16–18, 2003.
[2] Gahagan, D., "RF (Gigahertz) ATE Production Testing On-Wafer: Options and Tradeoffs," Proc. 1999 Int. Test Conf., 1999, p. 388.
[3] Lau, W., "Measurement Challenges for On-Wafer RF-SoC Test," Proc. Int. Electronics Manufacturing Technology Symp., New York: IEEE, 2002, pp. 353–359.
[4] Dabrowski, J., "BiST Model for IC RF-Transceiver Front-End," Proc. 18th IEEE Int. Symp. on Defect and Fault Tolerance in VLSI Systems, 2003.
[5] Lupea, D., et al., "RF-BIST: Loopback Spectral Signature Analysis," Proc. Design Automation and Test in Europe Conf. and Exhibition, 2003.
[6] Obaldia, E., et al., "On-Chip Test Mechanism for Transceiver Power Amplifier and Oscillator Frequency," U.S. Patent No. 20040148121A1, 2004.
[7] Agilent Technologies, "Ultra-Wideband Communication RF Measurements," Application Note 1488, 2004.
[8] Goh, F., et al., "Innovative Technique for Testing Wide Bandwidth Frequency Response," Wireless Broadband Forum 2004, Cambridge, U.K., 2004.
[9] "Data Sheet AD8318," Analog Devices, 2005.
[10] "Data Sheet LT5534," Linear Technologies, 2005.
6 Production Test Equipment

Lawrence Roberts, Cree, Inc.
6.1 Introduction

RF SoC devices have evolved from early discrete, lumped-element designs to high levels of integration. With the advent of new, more complex modulation and test requirements, test engineers are forced to make decisions about the hardware used to test these DUTs. Traditionally, these types of decisions were straightforward, due to the low complexity of DUTs, in that test engineers had to choose from standard commercial off-the-shelf instruments, akin to bench testing setups. This mode of operation worked for a long period of time, but when SoC DUTs started becoming increasingly more integrated, production test equipment had to undergo a paradigm shift [1]. As testing of older discrete DUTs shifted away from commercial off-the-shelf instruments, ATE became the platform most suited for testing complex SoC DUTs in a production environment. This paradigm shift forced both ATE vendors and test engineers to broadly consider the test equipment setup that provides the maximum test coverage for a wide variety of SoC DUTs. The focus of this chapter will be on the use, advantages, disadvantages, and setup of commonly used production test equipment hardware. This consists of the components of SoC ATE as well as hardware making up focused RF testers. Both discrete RF devices and highly integrated devices with RF front ends are discussed. Also illustrated are the complex factors that must be addressed to develop a successful production test equipment solution. After reading this chapter test engineers will be able to formulate a comparative ATE hardware
chart versus test requirement list to determine an optimal configuration for the hardware they will use to meet their testing needs.
6.2 Tuned RF Receivers Utilizing a Digitizer

6.2.1 Description of Tuned RF Receivers Utilizing a Digitizer
In SoC DUTs, it is often necessary to test the RF functionality. Modern ATE must provide a way to measure the RF output from a DUT in order to ensure that the SoC design functions as expected and meets FCC regulations. These measurements are typically implemented using tuned RF receivers in the test equipment. A tuned RF receiver is one that uses a LO to lock onto a desired RF frequency and translate that signal down to a lower frequency. Figure 6.1 shows a typical block diagram for an integrated single-channel tuned RF receiver module. This lower frequency can be an intermediate frequency, as in a heterodyne architecture, or a very low baseband frequency. Following this downconversion is usually a lowpass filter that limits the IF signal to a certain bandwidth while attenuating harmonics, distortion, noise, and, if making multitone measurements, intermodulation products. The filtered IF signal then feeds into an analog-to-digital converter (ADC), which digitizes it, and the digitized IF signal is finally processed by a digital signal processor (DSP). The DSP is a powerful computational engine that can extract magnitude and phase information from the digitized IF signal, in addition to performing digital filtering. The use of a tuned RF receiver lends itself to both narrowband and wideband applications in which a known-good signal coming from the DUT is measured.

Figure 6.1 Block diagram showing a basic single-channel tuned RF receiver as used in ATE.

Although current ATE vendors offer user-defined bandwidths,
wideband tuned RF receivers are often used to test RF SoC DUTs that can require more than 40 MHz of signal bandwidth for accurate, modulated measurements. Because most production testing is done with pure sinusoidal signals, this discussion will focus on narrowband applications.

6.2.2 Comparison to Benchtop RF Instruments
Spectrum analyzers are swept-LO instruments that measure unknown signals, harmonics, distortion, and phase noise, to name a few common parameters. Swept-LO instruments use a ramp generator to create the horizontal, "swept" movement across the display from left to right while tuning the LO so that its frequency change is in proportion to the ramp voltage. The spectrum analyzer is a scalar-based instrument that provides only frequency and amplitude information. No phase information is available for the signal because signal separation components are not used [2].

6.2.2.1 Signal Separation Devices
A question often asked by test engineers is "Do we really need phase information?" The answer is often yes. We need phase information in addition to magnitude information because time-domain characterization is very prevalent in SoC DUTs. Also, phase information is needed to perform the vector error correction required for calibration and other test requirements. Phase information can be obtained from a signal when signal separation is used. Signal separation is an important component of both swept RF receivers and tuned RF receivers. Using a directional coupler or directional bridge, a portion of the incident signal is measured and provides a reference ratio.
Figure 6.2 A signal separator as used in ATE for S-parameter-based measurements.
Figure 6.2 shows a signal separation device used in modern ATE. The main observable difference between using a directional bridge and a directional coupler is that the bridge will incur a higher insertion loss, relative to the coupler, due to its broadband nature. A directional bridge and a directional coupler both have the ability to separate signals flowing in opposite directions, along with the directivity and excellent reverse isolation needed to reduce leakage power at the coupler input. One disadvantage of directional couplers is their inherent highpass response, which makes them unusable below frequencies of about 50 MHz. Directional couplers have very low loss and excellent isolation, making them ideal for high-frequency measurements. Conversely, directional bridges operate down to dc. In summary, signal separation is used to measure both the incident (forward) and reflected (reverse) waves at the input or output of a DUT, respectively [3]. A signal separation component often employed by ATE vendors is the reflectometer. A reflectometer is a device with two directional couplers and two downconversion mixers. Using a reflectometer makes it possible for bidirectional signals to be on the same path, thus allowing for incident and reflected signal separation. This feature of reflectometers allows both scalar and vector measurements to be performed. Because a reflectometer is bidirectional, it allows a given port to transmit a stimulus from a signal source to the DUT or to act as a receiver, capable of measuring signals from the DUT.
6.2.3 Tuned RF Receiver Parameters
It is important to point out that in most ATE, swept-based and tuned RF receivers can both utilize the same signal separation schemes. A point of distinction is the receiver sensitivity and dynamic range. Typically, swept-based RF detectors utilize a broadband diode detector scheme, which has a medium sensitivity of around –60 dBm and a dynamic range of around 60 to 70 dB, depending on the detector type. Tuned RF receivers, on the other hand, provide better sensitivity and dynamic range. The tuned RF receiver also provides excellent harmonic and spurious-signal rejection due to the narrow IF, which can be filtered appropriately. An important highlight of an integrated tuned RF receiver is that the test engineer must make trade-offs between receiver settings, ADC selection, and DSP algorithms to achieve an optimal solution [4]. In particular, traditional RF/microwave instruments such as an RF frequency counter (CW and pulsed), spectrum analyzer, and modulation analyzer can be set up in an ATE environment using a judicious selection of components and DSP algorithms. An engineer designing a test system should be familiar with the DUT test requirements and be able to select the appropriate hardware.
Table 6.1 [4] gives traditional benchtop spectrum analyzer measurement parameters and places ATE RF receiver hardware into the same context. Each piece of hardware (tuned RF receiver, ADC, and DSP) and its associated measurement functionality corresponds to a traditional test setup parameter. For example, selecting a resolution bandwidth of 100 kHz corresponds to the following settings:

• Tuned RF receiver: IF bandwidth = 100 kHz;
• ADC: sample range, memory depth, and antialias filter BW;
• DSP: selection of the appropriate window type.
It should be clear from Table 6.1 that the test engineer and ATE manufacturer must factor in a wide variety of characteristics in order to achieve optimum test solution coverage that is configurable for a variety of SoC DUTs. Using Figure 6.1 as an example, we will look at the case of spurious testing requirements. The variable attenuator at the input needs to attenuate the RF input power to avoid forcing the mixer into compression. The phase noise, as well as
Table 6.1 ATE Receiver Hardware Requirements as They Relate to a Traditional Spectrum Analyzer

Parameter | Tuned RF Receiver | ADC | DSP
Frequency range | RF frequency range | N/A | N/A
Resolution BW | IF bandwidth | Sample range, memory depth, antialias filter BW | Window type, min/max point, fast Fourier transform (FFT)
Phase noise | LO phase noise | N/A | N/A
Spurious | LO spurious, downconverter (DC) image rejection, IF amplitude | IP3, antialias filter, number of bits | Finite impulse response (FIR) filter rejection
Sweep rate | Frequency switching speed | Retrigger time, data transfer speed | FFT computation time
Amplitude frequency response | DC response, input-leveling response | Antialias filter response | FIR filter response, window type

Source: [4].
the spurious noise, of the test system's downconverting LO is imparted directly on the IF output of the mixer. This forces the LO to be selected according to the phase noise and spurious specifications. The bandpass filter (BPF) must be chosen to sufficiently reject the mixer image response and higher-order harmonics. Looking at the ADC, the lowpass filter (LPF) ensures that the Nyquist criterion [5] is met prior to digitization of the IF signal. The signal-to-noise ratio (SNR) can be calculated as

SNRdB = 6.02N + 1.76    (6.1)
where N is the number of bits in the ADC. For quick calculations, a common rule of thumb is that the SNR is approximately equal to 6 dB times the number of bits. Finally, the DSP algorithm may use a finite impulse response (FIR) filter for further harmonic and image rejection [4]. In summary, it should be evident that the test engineer and systems integrator must ensure that each element in the integrated tuned RF receiver module is compliant with the spurious specifications. The test engineer also needs to pay attention when identifying a test solution to ensure maximum flexibility with current and future SoC DUT requirements.
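As a quick illustration, (6.1) can be evaluated for a few common ADC resolutions and compared against the rule of thumb; the following Matlab fragment is only a sketch and is not tied to any particular ATE:

% Ideal ADC SNR per (6.1) versus the 6-dB-per-bit rule of thumb
for N = [8 12 14 16]
    snr_dB = 6.02*N + 1.76;
    fprintf('%2d bits: SNR = %5.1f dB (rule of thumb: %3d dB)\n', N, snr_dB, 6*N);
end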
6.3 Modern IC Power Detectors

A power detector reduces a complex RF or other analog signal to a dc voltage. Modern IC power detectors have been around for many years now. They are used to measure RF power levels in order to verify the correct behavior of the DUT, to ensure the DUT complies with FCC regulations, and to ensure the DUT conforms to its intended market application. This section will discuss the various types of IC power detectors and also provide a quantitative comparison among the types of detectors.
6.3.1
Overview of IC Power Detectors
Modern production test equipment utilizes power detectors in a variety of configurations in both the sourcing and measuring paths. On the measuring side, the power detector's role is to measure the received power and to manage the wide dynamic range of the signals appearing at the receiver input. In the end, we want to present the ADC with a signal that has the highest possible SNR. ATE vendors typically accomplish this by using power detectors together with variable gain amplifiers (VGAs) in an automatic gain control (AGC) configuration to adjust the receiver's dynamic range and noise floor.
On the transmitting side, power detectors are used by the device to control the amount of power from the PA, thus preventing it from overheating and consuming excessive amounts of current while satisfying FCC standards for maximum emissions [6]. In general, a power detector measures the applied RF voltage and produces an output dc level that is proportional to the input. The power detector has three regions of operation: square-law, linear, and crossover [7]. The input RF voltage levels determine which detection range will be used. The square-law region is modeled using (6.2). This shows that the output voltage is proportional to the input power. The linear region is modeled using (6.3). This reveals the fact that the output voltage is proportional to the input voltage, where n and m are constants of proportionality empirically determined from the IC fabrication process of the power detector. The crossover region is modeled using either of the following equations:
Vout = nVin²/Rin = nPin (square-law region)    (6.2)

Vout = mVin (linear region)    (6.3)
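To make the two detection regions concrete, the following Matlab sketch plots the idealized responses of (6.2) and (6.3); the constants n and m are arbitrary values chosen for illustration only, and a real detector transitions gradually between the two curves in the crossover region:

% Idealized detector transfer curves based on (6.2) and (6.3)
% (n, m, and Rin are illustrative values, not data for a real detector)
n = 1.0;
m = 0.5;
Rin = 50;
Vin = logspace(-3, 0, 200);    % input amplitude from 1 mV to 1V
Pin = Vin.^2/Rin;              % input power into 50 ohms
Vout_square = n*Pin;           % square-law region, (6.2)
Vout_linear = m*Vin;           % linear region, (6.3)
loglog(Vin, Vout_square, Vin, Vout_linear);
xlabel('Input voltage (V)');
ylabel('Detector output (V)');
legend('Square-law, (6.2)', 'Linear, (6.3)');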
6.3.2
Basic IC Power Detector Circuit Operation
So far we have been discussing power detectors from a high-level point of view. We have looked at the regions of operation and how the output signal voltage corresponds to the input signal voltage. These next few sections will take a more in-depth look at power detectors from an electrical circuit point of view.

6.3.2.1 Single-Balanced Diode Detector
At this point, it is important to highlight the major types of IC power detectors. The majority of power detectors are diode based, transistor based, or combinations of both. IC power detectors can be implemented using different diodes such as Schottky, point contact, pressure contact, and planar-doped barrier diodes. In general, diode-based detectors are broadband devices that are easy to fabricate and are relatively inexpensive compared to the tuned RF receiver discussed previously. A typical single-balanced diode detector is shown in Figure 6.3(a). A test engineer may ask the question "How does the diode detector work?" The single diode detector in Figure 6.3(a) conducts whenever the input RF voltage exceeds the diode threshold voltage, rectifying the input signal during the positive half-cycle of the sinusoid. A capacitor at the detector output stores charge, holding a voltage nearly equal to the peak of the input sinusoid. A resistor placed in parallel with the capacitor provides a discharge path, and the end result is a steady-state dc voltage representative
Figure 6.3 Construction of diode detectors: (a) single balanced, (b) voltage divider enhanced BJT, and (c) logarithmic.
of the rectified ac input voltage. Diode detectors cannot recover frequency information; therefore, their use is limited to applications involving amplitude demodulation. Figure 6.4(a) shows a typical I-V curve for diode, also
Figure 6.4 Characteristic curves of diode-based power detectors: (a) current versus voltage (I-V) and (b) voltage versus time.
known as solid-state, detectors. Figure 6.4(b) shows the diode detector voltage–time curve. Like traditional diodes, once the minimum input voltage is established, conduction begins for any input voltage above this threshold level. Looking at Figure 6.4(a), the reader can see typical square-law and linear detection regions. Table 6.2, shown later in this chapter, lists the detection regions based on the diode detector. Up to this point, we have focused our discussion on detector architectures, but it is important to note that the actual detection is dependent on several parameters. A few parameters that affect the actual diode detector performance are the diode junction capacitance, minority carrier lifetime, load resistance, frequency, and dc bias, if necessary [8]. These are typical properties of semiconductor devices that are beyond the scope of this discussion. Before moving on to other detector architectures, we must discuss one of the more prevalent diode detectors: the Schottky barrier diode detector. The Schottky detector is an important, useful detector because it can be extended into the microwave frequency range. This detector does not suffer the same frequency limitations as the traditional diode detector because it is fabricated by interfacing a semiconductor region with a metal region. The semiconductor– metal interface is completely void of the minority carriers that are present in traditional diode detectors, thus enabling higher frequency usage. Schottky diode detectors, as with other diode detectors, come in many flavors, but the majority of ATE vendors, if using this type of detector, would use a zero-bias
Table 6.2 Quantitative Comparison of IC Power Detectors

Parameter | Schottky Barrier | Logarithmic Amplifier | Transistor
3-dB bandwidth (GHz) | 10 | 8 | 6
Max incident power (dBm, 50Ω) | –40 to +30 | –60 to +5 | –50 to +15
Dynamic range (dB) | 70 | 65 | 65
Detection range (dBm) | Square-law <–20, linear >–10 | –65 to +5 | Square-law <–20, linear >–16
Cost | Low (standard fabrication process) | Low (standard fabrication process) | Low (standard fabrication process)
Complexity | Low to moderate | Low to moderate | Low
Key points | Wide frequency range | High-temperature stability | Small size
Schottky detector. Basically, this type of detector overcomes the problem of having to supply a dc bias, which is required for traditional diode detectors, in order to detect low-level signals accurately. This is due to the fact that the Schottky detector is most sensitive at zero bias, when the saturation current is small, which corresponds to a large video resistance [9]. Section 6.3.3 provides a quantitative comparison of diode-, transistor-, and log-based detectors.

6.3.2.2 Transistor Power Detectors
The transistor power detector has applications up to 6 GHz and is fabricated using bipolar-junction-transistor (BJT) or field-effect transistor (FET) technologies. The basic premise of this device is to improve detection accuracy in the three detection regions shown in Figure 6.4(a), increase dynamic range, and extend frequency response into the microwave range and higher. Figure 6.3(b) shows an example of a voltage divider BJT power detector [10]. This circuit operates by applying a single-ended ac signal at the base of Q1. The output voltage for a large signal in the linear detection region is

Vout = (1 − β)Vac + 0.5VT ln β    (6.4)
where VT = 25.1 mV at room temperature and β is characteristic of the design of the transistor.
The second term in (6.4) introduces a small voltage offset [10]. The output voltage is still proportional to the square of the ac input signal amplitude. The output voltage for a small signal in the square-law detection region is [10]

Vout = (1 − β)²Vac² / (4VT)    (6.5)
The BJT power detector delivers excellent amplitude range while minimizing the crossover detection region. Note that for frequencies above 6 GHz, the power detector output voltage drops quickly due to parasitic capacitance in the transistor. Obviously, this is one disadvantage when attempting to measure harmonics of an 802.11a DUT. Another disadvantage is its limited low-frequency detection range, which is imposed by the input ac coupling capacitor; a larger capacitor would extend the low-frequency coverage, but because the trend in modern electronics is toward increased levels of integration, adding larger components introduces additional design constraints that must be addressed. Conversely, this type of detector has the advantages of wide bandwidth, lower cost, simplicity, dynamic range, temperature stability, low power, and detection sensitivity for both large and small input signals [10].

6.3.2.3 Logarithmic Amplifier Detector
Figure 6.3(c) shows an example of a logarithmic amplifier detector. Typically, three to five 10- to 20-dB amplifiers are cascaded with the output of each going through a diode detector. A summing circuit is used to add the outputs from the diode detectors and is then lowpass filtered, resulting in a dc voltage proportional to the ac input voltage. In other words, the dc voltage output is directly proportional to the dBm power input. Equation (6.6) describes this linear-in-decibels relationship and is a familiar straight-line fit for an ideal output voltage for a particular input power [11]:

Vout = m(Pin − Intercept)    (6.6)

where Pin is in dBm and m is the slope, or slew, in V/dB, defined as

m = (Vout,B − Vout,A) / (Pin,B − Pin,A)    (6.7)

where A and B denote two calibration points. The intercept, in units of dBm, is defined as

Intercept = Pin,A − Vout,A/m    (6.8)
This allows for a linear output signal when measuring devices with a wide dynamic range. Original applications of the log amp detector were in radar, amplitude shift keying (ASK), and time-division multiple access (TDMA), but they are quite suitable for use in ATE applications.
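As a simple illustration of how (6.6) through (6.8) might be applied, the following Matlab sketch performs a two-point calibration and then converts a measured detector voltage back into an input power; all numerical values are illustrative placeholders rather than data for any particular detector:

% Two-point log-detector calibration per (6.7) and (6.8), then power readback
Pin_A = -40; Vout_A = 0.50;    % calibration point A (dBm, V), example values
Pin_B = -10; Vout_B = 1.25;    % calibration point B (dBm, V), example values
m = (Vout_B - Vout_A)/(Pin_B - Pin_A);    % slope in V/dB, (6.7)
Intercept = Pin_A - Vout_A/m;             % intercept in dBm, (6.8)
Vout_meas = 0.90;                         % detector reading to convert (V)
Pin_meas = Vout_meas/m + Intercept;       % input power in dBm, inverting (6.6)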
6.3.3
Quantitative Comparison of IC Power Detectors
We conclude this section with a basic quantitative comparison of IC power detectors. Table 6.2 [10, 11] lists important parameters along with the typical values that a test engineer must weigh. These parameters are used to qualify different IC power detectors. The test engineer must decide which detection scheme to use based on the current and future SoC DUT requirements.
6.3.4
Types of Power Detectors Used in Production
This section will highlight three types of power detectors used in modern production testing. Because there is no perfect power detector to fit all testing needs, the advantages and disadvantages of each detector type will be discussed.

6.3.4.1 Broadband Power Detector Without Selectable Filter
A broadband power detector without a selectable filter can be implemented with any of the aforementioned power detector architectures, except the tuned RF receiver, with no filter placed between the DUT output and the power detector input. The advantages of this approach are that there is no need to switch different filters in and out in order to isolate frequencies of interest, which decreases system complexity and test time. In addition, the test engineer can measure frequencies up to the limits of the power detector, thus enabling broadband measurements. System calibration is made easier because there is only one path to compensate for, since all signals travel in one direction. A disadvantage is that all noise sources will pass through unattenuated and possibly degrade the tester's ability to measure low-level signals near the noise floor. In addition, distortion, harmonics, and image responses may increase the chances of making false measurements while decreasing measurement accuracy and repeatability.

6.3.4.2 Broadband Power Detector with Selectable Filter
A broadband power detector with a selectable filter can be any of the aforementioned detector designs, except the tuned RF receiver, with the addition of software-selectable filters placed between the power detector input and the output of the
DUT. This type of power detection scheme has wide application to the complex modulation formats typically encountered in modern production testing. The advantage of the selectable filter is that test engineers can program the tester to filter out specific frequencies while passing others. Using this setup, measurement accuracy and repeatability are increased while broadband noise, distortion, and harmonic products are decreased. The disadvantages of using selectable filters are that test times increase, due to switching the filters, and ATE cost rises due to the added RF components associated with the selectable filters.

6.3.4.3 Downconversion Using Variable LO with Fixed Filter
The power detectors discussed in Sections 6.3.4.1 and 6.3.4.2 differ from a power detector that downconverts using a variable LO with a fixed filter. The advantages of this type of detector lie in its excellent dynamic range and selectivity, due to the fact that the IF, along with the filter, is fixed, and the tester can manage the RF path loss so that the detector input signal level is optimized for measurement accuracy and repeatability. The tuned RF receiver, discussed in Section 6.2, incorporates this type of power detection scheme. In addition to improving dynamic range and selectivity, this detector type often exhibits low phase noise, distortion, and harmonic products, owing to the low phase noise of the variable LO, which enhances measurement accuracy. The quality of the variable system LO source directly determines whether measurement accuracy and repeatability are improved or degraded, so a high-quality LO gives the test engineer increased confidence in the measurements. The disadvantages of this detection scheme lie in the fact that the ATE cost increases due to the variable LO source. Also, broadband measurement specifications may be difficult to meet if the downconversion circuitry, along with the maximum frequency of the variable system LO, is not sufficient for the application frequency requirements. Finally, the RF calibration complexity is increased due to the variable LO and the need to accurately model the RF paths in order to compensate for losses.
6.4 Production Testing Using Digital Channels and PMU

Modern production testing involving RF SoC devices typically encompasses voltage, current, and frequency measurements. The trend toward increased device integration constantly requires test engineers to rethink how to test devices most effectively, yet efficiently, to ensure a high level of quality. The use of digital channels and parametric measurement units (PMUs) is a necessity in performing stable production SoC tests. Through the use of a per-pin parametric measurement unit (PPMU), typical ATE platforms can test highly
integrated RF SoC devices in an efficient manner. Since its inception, PPMU has lowered production COT by allowing multiple source/measure operations simultaneously.
6.4.1
Digital Channel and PMU Components
Most, if not all, ATE vendors integrate pin electronics and PMU components into their platforms. The function of digital channels is to provide a programmable driver, a programmable comparator, various relays, dynamic current load circuits, and other circuits necessary to drive and receive signals to and from the DUT [12]. Figure 6.5 [12] is a generic example of a digital channel and its associated pin electronics. Because modern RF SoC devices have many pins, typically 64 or more, a typical ATE platform must be able to supply digital channels to all pins, if required. A typical ATE platform's driver circuitry consists of a fixed-impedance driver, usually 50Ω, with two programmable logic levels, VIH and VIL. The VIH and VIL logic levels are set by a pair of driver-level digital-to-analog converters (DACs) whose voltages are controlled by the test program. At times in the digital pattern, data coming from the DUT needs to see a high-impedance state (HIZ), which is fed into the comparator. For completeness, note that the driver circuits may also include programmable rise and fall times, although fixed rise and fall times are typical in modern production testing. The speed of the rise and fall times is vendor dependent and is typically between 1 and 3 ns in modern ATE platforms [12]. The function of the comparator is to determine the logic level coming from the DUT. This is accomplished through the use of two programmable DACs that set the VOH and VOL logic levels. If the DUT signal is below VOL, the signal is considered logic low. Conversely, if the DUT signal is above VOH, the signal is considered logic high. If the DUT output lies between these thresholds, the output state is considered a midpoint voltage; if it lies outside of them, it is considered a valid logic level. A MASK logic state is used for situations where the comparator results can be ignored. In summary, production testing involves the use of three drive states (LO, HI, HIZ) and five compare states (LO, HI, MID, VALID, and MASK) [12]. The function of the programmable dynamic load is to force current to (source) and from (sink) the DUT output. The dynamic load comprises a pair of current sources connected to the DUT output in addition to a diode bridge. The diode bridge forces a programmable current into the DUT output whenever its voltage is below a programmable threshold, VTH. Conversely, the diode bridge forces a programmable current out of the DUT output whenever the voltage is above VTH [12].
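As a simple illustration of the compare-state logic described above, the following Matlab sketch maps a measured DUT output voltage to a LO, HI, or MID result given the programmed VOL and VOH thresholds (a real pin electronics implementation also handles the VALID and MASK states and performs this comparison in hardware):

compareState.m
function state = compareState(vDut, VOL, VOH)
% Map a DUT output voltage to a compare result (simplified sketch)
if vDut < VOL
    state = 'LO';     % below VOL: logic low
elseif vDut > VOH
    state = 'HI';     % above VOH: logic high
else
    state = 'MID';    % between thresholds: midpoint voltage
end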
Figure 6.5 Digital channel pin electronics typical of those used in ATE. (Courtesy of Mark Burns/Texas Instruments.)
The PPMU capabilities of modern testers offer low-resolution, low-current dc voltage and current source for each digital pin. Also, the per-pin circuits offer low-resolution voltage and current meters. Fundamental production tests such as continuity, leakage, input impedance, and output impedance testing take advantage of the low-resolution and low-current features [12]. Modern RF SoC testers often incorporate overshoot suppression circuits that dampen the overshoot and undershoot phenomenon in circuits with fast rise/fall times. The overshoot and undershoot phenomenon is the result of low-impedance DUT output driving into the load board traces and coaxial cables leading to the digital pin card electronics. This problem is alleviated by shunting the overshoot to a dc level through a diode [12]. The digital channel card typically has relays connected to other tester resources such as calibration and system dc meters and sources. These connections are used differently among various vendor platforms but may have application in tester system calibrations involving pin card electronics [12]. We have looked at the basics of a digital channel and PMU components, but now we will conclude this section with a discussion on making RF SoC frequency measurements utilizing a digital pin as a crystal reference since it offers excellent accuracy and repeatability.
6.4.2
Using a Digital Pin as a Crystal Reference Frequency
In modern RF SoC devices, measuring the digital frequency response from a PLL or fractional-N synthesizer is often mandated in the final test. The former way to test such devices would be to use an RF signal generator to supply the desired power and frequency to the crystal reference input, XTAL_IN for example. Today, this approach has been modified to yield a lower COT while maintaining test accuracy and repeatability for frequency measurements. This procedure utilizes a square wave with a 50% duty cycle with VIL and VIH set to predetermined levels, for example, 0 and 3V, respectively. Because the crystal reference frequency is typically on the order of tens of megahertz, this approach is well within the limits of modern ATE platforms. The test engineer needs to set up the driver pins associated with this digital channel, connect it to the appropriate device input, and perform the output measurements as before. In summary, we have discussed the basics of digital channels and PMU components. We looked at a generic example that is sure to vary among various ATE vendors, but the core functionality and operation will remain consistent. Also, the use of a digital channel as the crystal reference frequency has been illustrated. Now, we will turn our attention to another piece of equipment integrated in modern ATE, the digitizer.
6.5 Digitizers (ADCs)

In modern ATE production test equipment, a continuous-time analog waveform is digitized from the DUT analog output and processed using sophisticated DSP algorithms to measure a wide variety of test specifications. A digitizer is typically used to capture analog signals, much like an oscilloscope, from a SoC DUT, make digital samples, and further process the digitized samples using DSP algorithms. After the analog waveform is digitized, test engineers often perform a fast Fourier transform (FFT) to observe the signal in the frequency domain, where further production testing occurs. Note that test engineers also work in the time domain with measurements such as EVM, phase imbalance, or eye diagrams. Figure 6.6(a) shows the structure of a typical waveform digitizer. It is important to note that the analog lowpass filter plays an important role in improving the dynamic range of the digitizer. If there were no filter, we would acquire an enormous amount of broadband noise.
Figure 6.6 Block diagrams of ATE analog waveform instrumentation: (a) digitizer and (b) arbitrary waveform generator. (Courtesy of Mark Burns/Texas Instruments.)
6.5.1
Digitizer Components
A typical digitizer often includes a differential to single-ended amplifier. This is needed because RF and mixed-signal SoC DUTs have differential outputs from the DUT, and it is more efficient, from a production standpoint, to have the conversion performed within the digitizer versus a load board implementation. Also this allows calibration to be in the tester, not on the load board. The programmable gain amplifier (PGA) stage at the digitizer’s input is used to adjust the signal level entering the ADC. In addition, the PGA maintains a high SNR by maximizing the signal level at the ADC input along with minimizing the noise effects of quantization error. The antialiasing filter limits the bandwidth of the incoming signal while reducing the noise and preventing signal aliasing [12]. The waveform capture memory collects digitized samples of the continuous waveform. A digitizer is often mainly characterized by its number of bits or sample rate. However, there are many other important defining parameters to consider. Table 6.3 [12–14] highlights various digitizer parameters with their associated performance notes.
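As a rough sketch of the digitize-then-DSP flow described above, the following Matlab fragment windows a captured record and transforms it to the frequency domain; the signal here is synthesized for illustration, whereas in practice the samples would come from the digitizer's capture memory (the sample rate, record length, and scaling are arbitrary example values):

% Window a captured record and view its spectrum (illustrative only)
fs = 100e6;                                 % sample rate (Hz)
N = 4096;                                   % record length
t = (0:N-1)'/fs;
x = 0.5*sin(2*pi*10e6*t) + 1e-4*randn(N,1); % stand-in for captured samples
w = 0.5*(1 - cos(2*pi*(0:N-1)'/(N-1)));     % Hann window, computed inline
X = fft(x.*w);
f = (0:N/2-1)'*fs/N;                        % frequency axis for positive bins
mag_dB = 20*log10(abs(X(1:N/2))/(N/4));     % approximate amplitude scaling
plot(f/1e6, mag_dB);
xlabel('Frequency (MHz)');
ylabel('Magnitude (dB)');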
6.6 Arbitrary Waveform Generators

Section 6.5 introduced the digitizer, which creates digital samples from a continuous-time analog waveform. We now turn our attention to the arbitrary waveform generator (AWG). This instrument is capable of creating complex, periodic, and transient high-frequency waveforms. In the testing of SoC DUTs, virtually all ATE vendors use this instrument.
6.6.1
Overview of Arbitrary Waveform Generator
AWG instruments allow the test engineer to create and modify, in a short period of time, the complex waveforms used in the testing of SoC DUTs. Figure 6.6(b) illustrates a typical AWG module. An AWG consists of a bank of waveforms stored as digital data. The stored waveforms are generated using real downloaded waveforms, a computer, or on-board DSP. A DAC converts the digital waveform data into stepped analog voltages. At the output of the DAC is a programmable, lowpass antialiasing filter that smooths the stepped signal into a continuous waveform. Most AWGs incorporate an output scaling circuit, or programmable attenuator, that is used to adjust the signal level. Many AWGs may incorporate dc offset circuitry that can be controlled by the test engineer [10]. Like the digitizer, an AWG is typically characterized by its number of bits, sample rate, bandwidth, waveform memory depth, dynamic range, cost, and flexibility. In today's fast-paced production testing, time-to-market (which is significantly impacted by the reduced
Table 6.3 Digitizer Performance Parameters and Their Significance

Parameter or Feature | Significance
Number of bits | Direct correlation to SNR and dynamic range.
Bandwidth (Hz) | Essential for testing complex SoC DUTs with an analog digitizer.
Total harmonic distortion, THD (%) | Specifies the amount of correlated noise for a digitizer.
Maximum sample rate (Ms/s) | Needed for antialiasing, for example, oversampling.
Memory size (Mbyte) | Deep memory is required for testing today's SoC DUTs because it allows for a longer sample time without interruption.
Dual-ported memory | Useful for digitizers acquiring data for analysis in either the frequency or time domain.
Dual-rate time memory | Useful in time-domain analysis since less memory is used than if the digitizer ran constantly at the postevent rate.
Multiple-segment memory | Segmenting the memory allows the ADC to digitize and store events in successive memory locations. Then it transfers data to the host after acquisition is complete.
Number of inputs | Modern SoC DUTs have differential outputs that require a digitizer with differential inputs.
2-dB input range headroom (dB) | Needed to protect against input level overshoots that cause signal clipping and thus distortion in the digitizer.
ADC architecture | The heart of any digitizer that must be evaluated for speed versus resolution. Several architectures exist and a test engineer must understand them to make intelligent decisions.
  Flash | Fast throughput because conversion occurs in a single ADC cycle.
  Sigma-delta | Low-bandwidth, high-resolution ADC.
  Pipelined | Good mix of resolution and throughput.
  Successive approximations | Good mix of resolution, at lower speeds, ability to convert multiple signals per ADC, and capability to convert nonperiodic multiplexed signals.
Number of channels | Tester space, cost per module, and so forth.

Source: [12–14].
amount of time required to generate test programs) is an important concept that all test engineers need to understand. Reducing test development time is necessary and thus demands instruments that are flexible and easy to use. In addition to the traditional AWG, many RF sources incorporate an internal AWG module, thus increasing the flexibility for the test engineer. In testing RF SoC DUTs, the test engineer may need to create the complex waveforms required to test the receiver portion of the transceiver.
Figure 6.7 highlights an AWG waveform being modulated onto the RF carrier. Looking at this figure, the test engineer needs to supply an I/Q data file, and the RF source will perform an upconversion and modulate the RF carrier accordingly. In addition, this technique can be used for test time reduction by allowing IP3, blocking, and other receiver tests that need multiple carriers and/or interferers to be performed without system tone combiners. If the AWG is well designed and is linear over a wide range, the test engineer does not have to worry about mechanical switches being thrown in order to perform these tests and only needs one RF source with an AWG capability. The I/Q text files can be generated using a variety of software packages such as Matlab and MathCad, to name a few. Obviously, the test engineer still needs to adhere to the Nyquist sampling criterion when generating the I/Q waveform file, but the point here is that many commercial RF sources have this capability.
6.6.2
Creating AWG Waveform Files
One method to provide waveforms for an AWG is to create text files containing the in-phase (I) and quadrature-phase (Q) data. To generate the waveform files, the test engineer needs to enter a sampling rate, power level, number of points, and frequency of desired tones. This process will be illustrated using Matlab files [15]. The end result of this exercise will be to write the generated I and Q waveforms to two separate files. As with most AWGs, a simple list of numbers in each file is all that is required. The writeToFile.m script, shown here, writes the data to a designated file:

writeToFile.m
function [] = writeToFile(myFileName, myData)
fid = fopen(myFileName, 'wt');
[numPoints, dummy] = size(myData);
fprintf(fid, '%g\n', myData);
fclose(fid);

Figure 6.7 AWG signal modulation and upconversion to RF frequencies.
We will next generate a simple sine wave, a single CW tone, using the following writeSine.m script. One may realize that generating a sine wave is very straightforward, and indeed it is. The writeSine.m script is the fundamental script called within other scripts.

writeSine.m
function sineArray_v = writeSine(dBm, freq, sampleRate, numPoints)
power_mW = 10^(dBm/10);
power_W = power_mW/1000;
impedance_ohms = 50;
peak_v = sqrt(power_W*2*impedance_ohms)/sqrt(2);
t = (0:numPoints-1)'/sampleRate;
sineArray_v = peak_v*exp(j*2*pi*freq*t);
Because we are interested in RF AWG modulation, a few variables have been utilized and the choice was made not to use the "sin()" function built into Matlab. The arguments of writeSine are dBm, freq, sampleRate, and numPoints: dBm is the power level in dBm, to align with the types of power levels that are typically used in RF signals; freq is the frequency of the desired tone, in hertz; and sampleRate is the sample rate at which the waveform will be played back. For a simple sine wave, the value of dBm is irrelevant because it will be determined by the ATE RF source power level setting. Finally, numPoints is the number of points making up the waveform. Each i.txt and q.txt file will have numPoints points. The size of numPoints will have no impact on test time. The sine.m script shown here will generate a 1-MHz sine wave with a 0-dBm power level utilizing a sample rate of 100 Ms/sec and 10,000 samples.

sine.m
function [] = sine()
f1 = 1e6;
sine1 = writeSine(0, f1, 100e6, 10e3);
iq = sine1;
% Normalize
maxVoltage = max(max(abs(real(iq))), max(abs(imag(iq))));
iq = iq/maxVoltage;
idata = real(iq);
qdata = imag(iq);
writeToFile('i.txt', idata);
writeToFile('q.txt', qdata);
Notice the values of the arguments that are passed into writeSine.m. The I and Q text files are created using the writeToFile.m script. Note that the preceding script files could be modified to reflect the RF source used in any ATE. Because all ATE are different, the test engineer should know what format the RF AWG requires. Thus, the writeToFile.m script should be modified accordingly.
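Building on the same scripts, a two-tone stimulus, such as might be needed for the IP3 and blocking tests mentioned in Section 6.6.1, could be generated by summing two calls to writeSine.m. The following sketch follows the same pattern as sine.m; the tone spacing and levels are arbitrary example values, and as before the file format may need to be adapted to the particular RF AWG:

twoTone.m
function [] = twoTone()
% Generate a two-tone I/Q file by summing two CW tones
f1 = 1.0e6;
f2 = 1.1e6;
tone1 = writeSine(0, f1, 100e6, 10e3);
tone2 = writeSine(0, f2, 100e6, 10e3);
iq = tone1 + tone2;
% Normalize
maxVoltage = max(max(abs(real(iq))), max(abs(imag(iq))));
iq = iq/maxVoltage;
writeToFile('i.txt', real(iq));
writeToFile('q.txt', imag(iq));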
6.7 Use of DSP in Production Test Equipment

Digital signal processing is a concept that every test engineer needs to have knowledge of, since he or she will encounter it quite often during a typical career. The use of DSP in production test equipment has changed the way ATE vendors implement their solutions. Many, if not all, ATE vendors incorporate predefined DSP algorithms into their ATE solutions. DSP is a powerful computation engine that has reduced the COT by reducing test development time, thereby enabling fast time to market (TTM). DSP algorithms were traditionally executed on a host computer; it is only recently that this processing can be done on-board. Test development simplification is achieved through the use of DSP because of its ability to perform complex mathematical operations, using hardware or software, in a fast, yet efficient manner. The limitations of using DSP typically involve having the dedicated code and algorithm expertise needed to effectively program and integrate it into an ATE environment. Commonly used DSP algorithms are the FFT, inverse fast Fourier transform (IFFT), FIR filter, and window filter. Figure 6.8 [16] shows an example of a lowpass FIR filter. This particular filter has a stopband attenuation of –35 dB along with a sharp cutoff response, all performed within the DSP. To implement this filter using actual test equipment would add extra hardware to the test setup and would not allow the flexibility of changing filter parameters by simply changing a line of code. Software such as Matlab is used to generate code to
Figure 6.8 Response of a FIR filter.
perform these types of functions. Using DSP algorithms, the test engineer can easily modify a filter response based on the SoC DUT requirements. Use of DSP within ATE has increased flexibility, speed, and computation efficiency when developing test programs. Incorporating DSP on a piece of hardware is an additional way to reduce test time. This reduction is accomplished by utilizing any number of DSP vendor solutions that avoid uploading the data to the workstation controller and performing the DSP operations there. In addition to improving measurement speed, multisite efficiency is greatly increased as well. This can be accomplished if each test site has its own DSP to perform measurements and calculations, versus having only one DSP for all test sites. Because modern SoC testing is very competitive, increasing the multisite efficiency of high-volume devices can save a tremendous amount of money. The test engineer should also have knowledge of hardware configurations and how to specify various components. By utilizing innovative system controls, the test engineer is able to minimize tester overhead and maximize the ability to control the hardware using software.
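For example, a lowpass FIR filter similar in spirit to the one in Figure 6.8 could be generated and inspected with a few lines of Matlab; this sketch assumes the Signal Processing Toolbox (for fir1 and freqz), and the sample rate, cutoff, and order are illustrative values rather than the parameters of the filter actually shown in the figure:

% Generate and inspect a lowpass FIR filter entirely in software
fs = 16000;                 % sample rate in Hz (illustrative)
fc = 2000;                  % cutoff frequency in Hz (illustrative)
b = fir1(64, fc/(fs/2));    % 64th-order lowpass FIR, Hamming window by default
freqz(b, 1, 1024, fs);      % plot the magnitude and phase response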
6.8 Communicating with ATE Hardware

This section describes how to connect various ATE hardware modules while allowing communication between them by means of software.
6.8.1
General-Purpose Interface Bus
Virtually all ATE requires continual development of methods to control the various hardware components while improving system and measurement speed. It is imperative that system communication overhead be kept to a minimum. The general-purpose interface bus (GPIB) was the dominant standard for test instrumentation in the 1980s [17]. Although the GPIB was widely used, it did not address the test industry need for portable test stations that were faster and more cost efficient. The GPIB was more than a sufficient improvement when testing was performed with a large amount of human interaction and test times were of lesser concern. It was not uncommon to find a test floor proliferated with lockers full of multimeters, oscilloscopes, custom load boxes, power supplies, and test leads used during verification. One can imagine the headaches and troubles encountered that lead to increased chances of false failures or passing faulty products. GPIB allowed test engineers to implement the automation of product verification while increasing confidence in testing and greatly improving production throughput [18]. Although GPIB data rates of up to 1 Mbps can be achieved,
other major drawbacks led the ATE industry in search of alternative solutions. The display requirement, lack of modularity, need for faster throughput, and high cost led to the introduction of VXI to enhance and build on the GPIB standard. Still, today, GPIB is used for low-cost commodity RFIC DUTs where COT is not critical, although it is quite slow by today's standards. Plug-and-play interface cards allow for direct plug-ins to computers with enhanced interfaces.
6.8.2
VMEbus eXtensions for Instrumentation
VMEbus eXtensions for Instrumentation (VXI) addresses the shortcomings of GPIB-based instruments by allowing for an open architecture that can use many manufacturers’ products in a common mainframe. It was developed from the original Motorola VME standard (ANSI/IEEE Std. 1014-1987). Within the realm of ATE, VXI technology enhanced production floor testing efficiency by eliminating the need for excessive space, spare parts, and maintenance headaches since the units are modular and PC based (i.e., no displays) [19]. Its modular design inherently increases mean time between failure, and decreases mean time to repair. The small form factor leads to smaller size and reduced footprints of ATE, which has an impact on COT (see Chapter 7). VXI also has very precise timing and synchronization, which leads to improved measurement efficiency and reduced test times. In 1987, the VXIbus Consortium was organized to further extend the capabilities of VXI in production testing. Using VXI, throughput was greatly increased and allowed for reduced production test times. This, combined with VXI’s open standards maximizes flexibility and minimizes early obsolescence. Figure 6.9 [20] shows three configurations that use a VXI controller. The first configuration, Figure 6.9(a), consists of a VXI mainframe linked to an external controller via a GPIB. The controller talks across the GPIB to a GPIB–VXI interface module installed in the VXI mainframe, which translates the GPIB protocol to and from the VXI protocol. The primary advantages of this configuration are increased throughput over the GPIB and enhanced timing and synchronization for decreased test times. Figure 6.9(b) exhibits a stand-alone VXI configuration. It involves a custom VXI-based embedded CPU. The controller is a VXI module physically located in the VXI chassis, thus leading to a small form factor for a VXI system. The embedded controller connects directly to the VXI backplane. As with the first configuration, this setup offers good throughput and synchronization among test equipment. The configuration in Figure 6.9(c) uses a high-speed MXI (Multisystem Extension Interface) bus link from an external computer to control the VXI backplane. This configuration is similar to the embedded case discussed earlier, except that it has the flexibility to be used with a variety of computers and
Figure 6.9 VXI-based controller configurations provide tremendous flexibility: (a) GPIB-VXI, (b) CPU embedded in VXI hardware (standalone), and (c) use of high-speed MXI bus [20].
workstations. Due to the high-speed link, the external computer can operate as though it is embedded directly in the mainframe [20]. This also offers high throughput, but additionally is relatively low cost.
6.8.3
Peripheral Component Interconnect eXtensions for Instrumentation
We now turn our attention to the Peripheral Component Interconnect eXtensions for Instrumentation (PXI). It is likely that VXI communication will continue to be used for quite a while because this technology can also be mixed and matched using hybrid VXI/PXI control. Hybrid VXI/PXI allows for virtually limitless ATE configurations using products leveraged from PCI, CompactPCI,
VXI, and PXI-based architectures. This ensures that the ATE industry will constantly utilize the latest communication configurations needed to address production COT pressures. Although VXI is the dominant communication control in modern ATE, PXI is quickly becoming a widely accepted alternative. The major reasons are further reduced cost, smaller form factor, high performance, and highly integrated timing and triggering.
6.8.4
Summary of ATE Communication Interface Standards
Table 6.4 [21, 22] summarizes the differences among the common communications standards used in ATE. Reference [22] discusses these three interfaces among many other modern protocols, weighing the benefits and shortcomings of each in detail.
6.8.5
LAN eXtensions for Instruments
Thus far, three major instrument communication standards (GPIB, VXI, and PXI) have prevailed in the test and measurement industry for a very long time. LAN eXtensions for Instruments (LXI) is a new technology being developed by the LXI Consortium, which includes many test and measurement companies. LXI has come about due to the fact that modern test programs are demanding more cost-efficient solutions while maximizing test throughput. LXI is based on an industry standard communication bus that is available on most PCs. Unlike its predecessors, the choice is not about form factor such as rack-and-stack GPIB instruments or modular card-cage products like VXI or PXI. LXI transitions from GPIB, VXI, and PXI by accommodating standard instruments with front-panel displays and keyboards as well as instrument modules without a front panel in one test system architecture. Thus, Ethernet becomes the communications backbone of the system [23]. With a prevalence
Table 6.4 ATE Communication Standards and Their Associated Parameters

Parameter | GPIB | VXI | PXI
Transfer width (bits) | 8 | Up to 32 | Up to 64
Throughput (Mbps) | 1 | 80 | 132
Timing and synchronization | No | Yes | Yes
Modular | No | Yes | Yes
Cost | High | Medium | Low-medium
Form factor | Large | Medium | Small-medium
of well-established instrument buses such as GPIB, VXI, and PXI, the test engineer should be knowledgeable about ways to expand test system functionality. Using LXI technology, this often complex task is lessened due to the fact that the LXI consortium had the foresight to plan for expanding LXI functionality for a wide variety of applications and scenarios. LXI devices use the same existing dimensions for GPIB-based instruments making them compatible in physical size [23]. However, there is a provision for half-rack-width GPIB instruments, supported by several current test and measurement vendors, to allow for reduced GPIB form factor, which can lower test system costs. Each LXI supplier is responsible for the mounting hardware that attaches the module to the rack. This method ensures that half-rack LXI modules from multiple vendors will fit together properly [23]. An advantage of LXI systems built with faceless modules over those used for VXI and PXI is that the LXI modules do not need an expensive card cage with a multilayer backplane, high-speed fans, a high-performance power supply, a slot-0 controller, or a proprietary communications link between the card cage and PC [23]. Ultimately, the test engineer can size the LXI modules to match performance, unlike card-cage products that may compromise performance to match size restrictions while using existing GPIB instruments alongside LXI instruments [23]. Because of this a lower COT will result from using LXI-based instruments. In addition to a small form factor, LXI modules are further characterized by the parameters listed in Table 6.5. The emergence of the LXI standard has offered a glimpse into the future of test instrument interfaces, and provides the test engineer with a flexible approach for integrating current and next generation instruments in a production environment [24]. Despite the enormous potential of LXI, it would be dangerous to assume that LXI will immediately displace every existing solution. For example, military and commercial test systems have adopted VXI primarily
Table 6.5 Key Parameters of LXI Modules

Parameter | LXI Implementation Summary
Switches and indicators | Standardizes type and location of switches
Ethernet | IEEE 802.3 and TCP/IP protocol
Drivers | All modules have an interchangeable virtual instruments (IVI) driver
Documentation | All modules must have HTML documentation
Triggering of instruments | Divided into three classes, based on type of triggering
Cooling | Air intake on side of module, exhaust on rear

Note: See [23, 24] for a detailed analysis of LXI parameters.
because it offers a highly modular, open standard based on computer and operating system–independent solutions [24]. Expanding an LXI system often involves addressing applications that require a mix-and-match of test instruments. The utility provided by a bridge device immediately expands the functionality of LXI beyond that of just another new standard by permitting a wide range of current and future test systems to leverage the advantages of LXI [24]. However, there are still fundamental issues associated with LXI expansion that must be addressed, such as providing a transparent link between the target environment and the instrument and designing an optimal trigger interface.
6.9 Summary

This chapter has introduced several important components that are prevalent in modern production test equipment. Today's test engineer must be well versed in understanding the trade-offs when selecting production test equipment. In the dynamic world of SoC testing, increased cost and competitive pressures are forcing test engineers to factor in complex parameters when choosing integrated ATE hardware and software, to ensure adequate test coverage for a variety of SoC DUTs. Important aspects of selecting hardware were presented in this chapter. An overview of VXI was presented to illustrate the power and flexibility of using VXI in a stand-alone configuration or combined with other technologies for custom configuration of test platforms. A new test platform configuration based on LXI was introduced in detail to give the reader a good synopsis of the next-generation test system platform. Even with rising ATE costs, utilizing a hybrid configuration can offset system costs while maintaining high throughput levels. Finally, we cannot overemphasize the power of integrating DSP into software, thereby increasing its flexibility, ease of use, and application coverage in SoC testing.
References

[1] McLaughlin, J., J. Kelly, and A. Salagianis, "Paradigm Shifts in Production Testing as a Result of Increasing Integration in SoC/SIP Devices with RF Front Ends," VLSI Test Symposium, Palm Springs, CA, 2005.
[2] Schaub, K., and J. Kelly, Production Testing of RF and System-on-a-Chip Devices for Wireless Communications, Norwood, MA: Artech House, 2004, Ch. 4.
[3] Agilent Technologies, "Exploring the Architectures of Network Analyzers," Application Note 1287-2, 2000.
[4] Myers, S., and J. Webber, "Evaluating the Performance of DSP-based RF/microwave Measurement Systems," Proc. AUTOTESTCON '96, September 1996, pp. 261–265.
[5] Witte, R., Spectrum and Network Measurements, Upper Saddle River, NJ: Prentice Hall, 1993, pp. 48–51.
[6] Pilotte, M., "60dB Log Amp Provides Measurement and Control up to 8GHz," RF Globalnet, July 4, 2004.
[7] Agilent Technologies, "Square Law and Linear Detection," Application Note 986, 1999.
[8] Cowley, M., and H. Sorensen, "Quantitative Comparison of Solid-State Microwave Detector," IEEE Trans. on Microwave Theory and Techniques, Vol. MTT-14, No. 12, 1966.
[9] Agilent Technologies, "The Zero Bias Schottky Detector Diode," Application Note 969, 1999.
[10] Zhang, T., W. Eisenstadt, and R. Fox, "A Novel 5GHz RF Power Detector," Proc. 2004 Int. Symp. on Circuits and Systems, Vol. 1, May 23–26, 2004.
[11] "A Fast Responding 60 dB Log Amplifier," Product feature, Microwave J., Vol. 47, No. 4, April 2004.
[12] Burns, M., and G. Roberts, An Introduction to Mixed-Signal IC Test and Measurement, Oxford, U.K.: Oxford University Press, 2001.
[13] Agilent Technologies, "How to Choose VXI-based Scanning A/D Converters, Waveform Digitizers, and Oscilloscopes," Application Note, 2004.
[14] Black, B., "Analog-to-Digital Converter Architecture and Choices for System Design," Analog Dialogue, Vol. 33, 1999.
[15] Kelly, J., and L. Roberts, "Using Matlab to Generate RF AWG Waveforms for Production Testing," Agilent Technologies Application Note, 2004.
[16] Gomes III, W., and R. Chassaing, "Real-Time FIR and IIR Filter Design Using MATLAB Interfaced with the TMS320C31 DSK," Proc. of 1999 ASEE Annual Conf., 1999.
[17] VXI Technology Inc., "VXIbus Overview," Technical Note, 2005, http://www.vxitech.com.
[18] Kipfer, D., "Hybrid VXI/PXI System Architecture," National Instruments Leadership Seminar Series, 2001.
[19] VXI Technology, Inc., "Configuring Functional ATE Systems," Technical Note, 2005.
[20] Kimery, J., "MXI-2: A New Generation of VXI Control," National Instruments Seminar, July 1995.
[21] Wolfe, R., "Short Tutorial on VXI/MXI," National Instruments Application Note 30, 1996.
[22] Blonnigen, F., "What Is the Ideal Bus?" Evaluation Engineering, March 2001.
[23] Drenkow, G., "LXI Unveiled as the Future of Test," LXI Connexion, October 2005, pp. 14–17.
[24] Semancik, J., "Expanding Test System Functionality with LXI," LXI Connexion, October 2005, pp. 8–13.
7 Cost of Test

7.1 Introduction

Cost of test (COT) is becoming an increasingly critical factor in the business models of semiconductor companies as well as fabless semiconductor companies. The reason for this is advances in technology. In the early days of the semiconductor industry, the goal was to simply get a good part—even if there was only one good part on the whole wafer. Later on, the cost to package the parts began to become a critical factor. However, the yield (i.e., the number of good dies per wafer) increased significantly over the years and so did the size of the wafer that is used to produce those chips. In the 1970s the typical wafer size was 3 inches (and even then there were very often only a handful of good die in the center of the wafer). In 2000 the first wafer fabs started to use 12-inch wafers to produce chips. In the meantime the yields were going up significantly and are often today in the mid-90% range or even higher. The methods of packaging have been getting more sophisticated as well over the years, and the cost of packaging has come down. Finally, the level of integration has increased to what was unthinkable just 10 years ago. Higher levels of integration, however, mean that the production testing of those dies is getting more complex and the equipment has to perform more functions and tests than in the past. This requirement, however, leads to a higher cost of the test equipment and/or longer test times and therefore higher COT. For instance, in the past a typical semiconductor manufacturer used to produce discrete components such as LNAs. Discrete LNAs were often in a dual-inline package (DIP) with eight pins and the requirements of the tester were very simple: one single power supply, an RF port to source a test
signal, an RF port to measure the signal at the output of the DUT, and a noise source for the noise figure measurement. Today, the LNA is integrated into an RF radio that performs functions such as downconverting, demodulating, filtering, and digitizing. Adding to the complexity of testing is the fact that often more than one device is tested at the same time. Also, the number of pins per device has increased dramatically and often can be in the hundreds of pins per device for a highly integrated SOC chip. This, on the other hand, requires test equipment with multiple dc sources, RF sources and receivers, AWGs, digitizers, high-speed digital pins, and so forth. It can easily be seen that, although the cost to integrate this extra functionality has come down, the cost of the test equipment has increased due to the extra resource requirements in production testing. The fact that all other costs in the process of manufacturing semiconductors (wafer fab facility costs, packaging costs) have come down explains why management focuses more and more on COT; COT is now a significant portion of the overall cost of the process of manufacturing semiconductors. To highlight this, Table 7.1 compares the increase in COT relative to the manufacturing costs between 1980 and 2005. In Table 7.1, which shows COT associated with single-site testing of the devices, we can see that due to the extreme increase in number of dies per wafer due to larger wafer sizes and smaller dies as well as the decrease in cost to produce one wafer, the cost of test relative to the production cost has increased from 0.3% to almost 38%! This example explains why semiconductor companies focus increasingly on COT and methods to reduce it. Each semiconductor company has developed more or less sophisticated models to calculate the COT. The basics of those COT models are the same since they focus on available test time,
Table 7.1
Comparison of the Costs to Produce a Semiconductor Device in 1980 Versus 2005

Parameter                                 Value/Cost in 1980   Value/Cost in 2005
Wafer size                                3 inches             8 inches
Cost to produce wafer                     $10,000              $3,000
Number of die per wafer                   300                  9,000
Production cost per die                   $33.33               $0.33
Test time (seconds)                       2                    4
Test cost per second                      $0.05                $0.05
Test cost per die                         $0.10                $0.20
Total production and test cost per die    $33.43               $0.53
Test cost as percentage of total          0.30%                37.50%
The basics of those COT models are the same: they focus on available test time, yields, equipment cost, and so forth. The next section describes the basic COT model.
7.2 Parameters Contributing to the COT

Before we can develop the first basic model of COT, we have to list the factors that go into the cost model [1, 2].

7.2.1 Shifts and Hours Per Shift
The more hours per day the equipment is used, the more parts it produces and, therefore, the lower the test cost per part. The potential for further increasing the number of hours a test floor operates is limited because most companies already run their production floors in two or three shifts, 24 hours per day.

7.2.2 Utilization
The main goal of the tester is to be used to test parts. However, other functions also have to be performed, such as setting up the tester, calibrating the tester, QA testing, preventive maintenance, and repair. When a test house or semiconductor company is about to select test equipment from a vendor, two parameters are closely analyzed: mean time between failures (MTBF) and mean time to repair (MTTR). MTBF states the number of hours that the equipment, on average, operates before it has to be repaired. For production equipment this number is typically on the order of a couple of thousand hours for mature products. MTTR states how many hours it takes on average to repair the equipment. The utilization is the percentage of time that the tester is actually used for testing parts.

7.2.3 Yield
Testing separates the bad parts from the good ones. At the end of the day, the key result is the number of good parts tested. The bad ones are simply thrown away (or sent to a failure analysis lab if the yield is much lower than expected) and have consumed valuable tester time. That means, however, that the COT is always the cost to test a good part, and therefore the production yield has to be part of the cost model as well.

7.2.4 Depreciation of the Test System
Test equipment is typically written off within 5 years. For handlers and other peripherals of the test cell, the time frame to write off equipment is typically between 5 and 8 years. For our cost model, we assume that both the tester and the handler are fully depreciated within 5 years using the straight-line method.

7.2.5 Test Time
The test time is the average time that it takes to test one part. Bad parts are typically not tested all the way to the end; in most cases, a bad part is put into a fail bin after the first failure occurs. However, there are cases when bad parts are tested to the end. This is typically done in order to collect correlation data that shows patterns between different failing tests (for instance, to see whether parts that fail one parameter also fail another parameter).

7.2.6 Handler or Prober Index Time
This is the time that it takes the handler to change from one part to the next. After a device is tested, the handler places the tested part in a bin and then puts an untested part into the socket of the tester. If the testing is not performed on packaged parts but on wafers, a wafer prober has to be used. The index time of a wafer prober is the average time that it takes to step from one die to the next.

7.2.7 Additional Cost Parameters
The parameters just listed provide a very complete COT calculation that fulfills the needs of the majority of cases. Keep in mind, however, that numerous additional parameters could be added into the COT calculation. Different companies have different models for calculating the COT; some include those parameters and others do not. Therefore, those parameters are not included in the COT calculations here. The formulas presented in this chapter can easily be adjusted to include other parameters as well. Some of the additional parameters are discussed next.

7.2.7.1 Floor Space
Each square foot on a test floor (or, in the case of wafer probing, in a cleanroom) costs money that can be added into the COT model in the same way as the cost of capital.

7.2.7.2 Operators and Test Floor Personnel
Even though the tester runs basically by itself after the setup is finished, operators are needed to load the handlers with untested parts and then take the tested parts and move them to storage or packaging. Also, operators often are trained to fix minor issues such as handler jams. As a rule of thumb, in a properly operating test facility, there is about one operator for four testers.

7.2.7.3 Support
This is the cost to take care of the equipment and includes the cost for calibration, repair, and preventive maintenance.

7.2.7.4 Hardware
This is the amount of money that has to be spent on the load board, connectors, cables, and other load board components.

7.2.7.5 Development of the Test Program
This cost consists mainly of labor and time. Typically, a test engineer is in charge of developing the test plan that runs on a tester. Also, a lengthy correlation analysis is typically performed by a product or test engineer to guarantee that the measurement results are correct.

7.2.7.6 Overhead
The cost of additional personnel not directly involved in the test process, such as material handlers and stockroom personnel, can be termed overhead.

7.2.7.7 Shipping and Customs Duty Taxes
When the wafer fab is at a different location than the test site, the wafers have to be shipped to that site and customs duty has to be paid.
7.3 Basic COT Model

Table 7.2 provides the parameters used to develop the basic COT model. First, we calculate the available time to test parts in one year, TA:

TA = 364 days/year × 24 hours/day × 3,600 seconds/hour × 0.6 = 18,869,760 seconds   (7.1)

The factor of 0.6 in (7.1) is the utilization of the tester as assumed in Table 7.2. Next, we calculate the number of parts tested during one year, NT:

NT = TA / (tT + tI)   (7.2)

where tT is the test time per device in seconds and tI is the handler index time in seconds.
Table 7.2
Parameters Used in the Basic COT Model

Parameter             Typical Value
Shifts per day        3
Hours per shift       8
Yield                 80%
Utilization           60%
Depreciation          5 years
Cost of the tester    $1,000,000
Cost of the handler   $250,000
Test time             1.5 seconds
Handler index time    1 second
Using the values in Table 7.2 yields 7,547,904 tested parts in 1 year. Because COT is always the cost to produce a good part, we have to calculate how many good parts we can produce during that time frame. Therefore, we apply the following equation:

N = NT × Y   (7.3)

where N is the number of good parts and Y is the yield of the wafer. In our example we will test 6,038,323 good parts.1

Next we calculate the cost per year, C:

C = (CT + CH) / TD   (7.4)

where CT is the cost of the tester, CH is the cost of the handler, and TD is the depreciation time in years. In our example, the cost per year is $250,000. Now we can calculate the cost per device as the ratio of the cost per year to the number of good parts that can be produced during a year:

CP = C / N   (7.5)
1. For this calculation it is assumed that the test time for a good part is the same as the test time for a bad part. This is not always the case. Many test plans are written such that they stop testing as soon as a failure is detected, which results in lower test times for bad parts. As yield increases to 95% or beyond, this has less and less impact on the average test time per device.
where CP is the test cost per good part. In our example we will have a test cost per good part of 4.14 cents.
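For readers who want to experiment with the parameters, the basic model of (7.1) through (7.5) can be written as a short script. The following Python sketch simply re-implements those equations with the values of Table 7.2; the function and variable names are our own and not part of any tester software.

```python
# Basic COT model from (7.1)-(7.5), using the assumptions of Table 7.2.
# Illustrative sketch only; all names are our own.

def cost_per_good_part(tester_cost, handler_cost, depreciation_years,
                       test_time, index_time, yield_fraction,
                       utilization, days=364, hours_per_day=24):
    """Return the test cost per good part in dollars."""
    # (7.1) Available test time per year in seconds
    t_available = days * hours_per_day * 3600 * utilization
    # (7.2) Number of parts tested per year
    n_tested = t_available / (test_time + index_time)
    # (7.3) Number of good parts per year
    n_good = n_tested * yield_fraction
    # (7.4) Test cell cost per year (straight-line depreciation)
    cost_per_year = (tester_cost + handler_cost) / depreciation_years
    # (7.5) Test cost per good part
    return cost_per_year / n_good

if __name__ == "__main__":
    cp = cost_per_good_part(tester_cost=1_000_000, handler_cost=250_000,
                            depreciation_years=5, test_time=1.5,
                            index_time=1.0, yield_fraction=0.8,
                            utilization=0.6)
    print(f"Test cost per good part: {100 * cp:.2f} cents")  # about 4.14 cents
```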
7.4 Multisite and Ping-Pong COT Models

In the case of multisite testing, the tester resources are shared between two or more devices. Whenever possible, the test engineer will try to design the tests to run in parallel. However, parallel capability is often limited by the number of tester resources. For instance, all digital tests are typically performed in true parallel mode because one digital pin is connected to each digital I/O port of the device. Analog resources, on the other hand, often have to be shared due to system configuration constraints. An example would be a system with a single digitizer that has to be shared to perform DAC tests on two devices. In this case the DAC tests have to be performed for the device at site 1 first and then for the device at site 2, which means that those tests are executed in serial mode.

What exactly does multisite testing mean for the test time per device? Assuming that all tests can be done in parallel (which is the exception for SoC devices) and that there is no extra overhead, Table 7.3 shows the test time per device relative to the single-site test time.

Table 7.3
Test Time per Device (for Multisite Testing) Relative to Single-Site Test Time
Number of Sites   Test Time per Device, Normalized to the Single-Site Test Time
Single site       1
Dual site         0.5
Quad site         0.25
In the case of ping-pong testing, the goal is to use the handler index time to test another part. Typically, a second handler is attached to the tester to achieve this. All tests are performed in serial mode in this case. The maximum throughput increase is reached when the test time per device is less than or equal to the handler index time; in that case, the throughput increase is 100%. For test times greater than the handler index time, the throughput advantage decreases until (for extremely long test times) it approaches that of the single-site case. Figure 7.1 shows the relationship between throughput increase and test time (normalized to the handler index time).

Figure 7.1 Throughput increase of ping-pong testing with test time normalized to handler index time.
For test times that are less than or equal to the handler index time, it is possible to double the throughput; this is where the normalized test time is less than or equal to 1 in Figure 7.1. For a test time that is twice the handler index time, we can still achieve a 50% increase in throughput, and for a test time that is four times the handler index time, we can achieve a 25% increase in throughput.

Now, let's do some COT calculations for both ping-pong testing and multisite testing. To make the comparison with single-site testing easier, we will use the same numbers for hours per day (24 hours), utilization (60%), yield (80%), and depreciation time (5 years). With the numbers from earlier we have TA = 18,869,760, where TA is the available test time in seconds per year.

7.4.1 Ping-Pong Testing
In the case of ping-pong testing, we need an extra handler; the handler cost therefore doubles compared to the single-site case. Because tests are not performed in parallel in that mode, the tester resources can be shared (if necessary, with switches and relays), so the tester cost stays constant. The following assumptions are made:

CT = $1,000,000
CH = $500,000
tI = 1 second
where CT is the cost for the tester, CH is the cost of the handler, and tI is the handler index time. Table 7.4 compares a ping-pong application to single-site testing under the assumption that the test time is less than the handler index time (i.e., the case where the test time relative to the handler index time is less than 1). The handler index time is assumed to be 1 second. Because the assumed test time is less than the handler index time, the handler index time becomes the dominating factor in the calculation of the relative test time per part. Therefore, the relative test time per part is 1 second in our example. As can be seen in Table 7.4, a ping-pong application can produce 15,095,808 devices per year versus 7,945,162 parts per year in a single-site application. This is an increase of 90%! Table 7.5 compares the test cost between single-site testing and ping-pong testing under the test time and yield assumptions from Table 7.4.
Table 7.4
Parameters of Single-Site Application Versus Ping-Pong Application

Parameter                                                             Single-Site Application   Ping-Pong Application
Available test time per year in seconds: TA                           18,869,760                18,869,760
Test time per device in seconds: tT                                   0.9                       0.9
Relative test time per part in seconds: TR                            1.9                       1
Devices that can be tested within 1 year: NT = TA/TR                  9,931,453                 18,869,760
Yield in %: Y                                                         80                        80
Total number of good parts that can be tested in 1 year: N = NT × Y   7,945,162                 15,095,808
Table 7.5
Test Cost per Device of Single-Site Application Versus Ping-Pong Application

Parameter                                    Single-Site Application   Ping-Pong Application
Cost of the tester: CT                       $1 million                $1 million
Cost of the handler(s): CH                   $250,000                  $500,000
Depreciation time in years: TD               5                         5
Test cell cost per year: C = (CT + CH)/TD    $250,000                  $300,000
Number of good parts per year: N             7,945,162                 15,095,808
Test cost per good part: CP = C/N            3.15 cents                1.98 cents
As can be seen in Table 7.5, the test cost per device is 3.15 cents for single-site testing and 1.98 cents for ping-pong testing. Ping-pong testing therefore saves 37% compared to single-site testing. These test cost savings are remarkable even though the test cell cost is higher due to the requirement for a second handler.

Now, let's investigate the case where the test time is longer than the handler index time. As mentioned earlier, the advantages of ping-pong testing decrease when the test times are long relative to the handler index time. The calculations in Table 7.6 show why this is the case. As in the preceding example, the handler index time is assumed to be 1 second.

Table 7.6
Parameters of Single-Site Application Versus Ping-Pong Application

Parameter                                                             Single-Site Application   Ping-Pong Application
Available test time per year in seconds: TA                           18,869,760                18,869,760
Test time per device in seconds: tT                                   2                         2
Relative test time per part in seconds: TR                            3                         2
Devices that can be tested within 1 year: NT = TA/TR                  6,289,920                 9,434,880
Yield in %: Y                                                         80                        80
Total number of good parts that can be tested in 1 year: N = NT × Y   5,031,936                 7,547,904

As can be seen in Table 7.6, 5,031,936 good parts can be tested per year in single-site mode versus 7,547,904 in ping-pong mode. This is an improvement of 50%. Table 7.7 compares the cost to produce those devices for single-site testing and ping-pong testing.

Table 7.7
Test Cost per Device of Single-Site Application Versus Ping-Pong Application

Parameter                                    Single-Site Application   Ping-Pong Application
Cost of the tester: CT                       $1 million                $1 million
Cost of the handler(s): CH                   $250,000                  $500,000
Depreciation time in years: TD               5                         5
Test cell cost per year: C = (CT + CH)/TD    $250,000                  $300,000
Number of good parts per year: N             5,031,936                 7,547,904
Test cost per good part: CP = C/N            4.97 cents                3.97 cents

Under the preceding assumptions we can still realize test cost savings of 20% per device by implementing ping-pong testing.
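To make the single-site versus ping-pong comparisons in Tables 7.4 through 7.7 easy to reproduce, the following Python sketch encodes the per-part time assumptions used there (single site: test time plus index time; ping-pong: whichever of the two is longer). It is an illustrative sketch only; the function and variable names are our own and not part of any tester software.

```python
# Sketch of the single-site versus ping-pong COT comparison (Tables 7.4-7.7).
# Assumed per-part times: single site = t_test + t_index,
# ping-pong = max(t_test, t_index). Names are our own.

T_AVAILABLE = 364 * 24 * 3600 * 0.6   # 60% utilization -> 18,869,760 seconds
YIELD = 0.8
DEPRECIATION_YEARS = 5

def cents_per_good_part(tester_cost, handler_cost, time_per_part):
    good_parts = T_AVAILABLE / time_per_part * YIELD
    cost_per_year = (tester_cost + handler_cost) / DEPRECIATION_YEARS
    return 100 * cost_per_year / good_parts

for t_test in (0.9, 2.0):
    t_index = 1.0
    single = cents_per_good_part(1_000_000, 250_000, t_test + t_index)
    pingpong = cents_per_good_part(1_000_000, 500_000, max(t_test, t_index))
    saving = 100 * (1 - pingpong / single)
    print(f"t_test={t_test}s: single {single:.2f} cents, "
          f"ping-pong {pingpong:.2f} cents, saving {saving:.0f}%")
```

Running the sketch reproduces the roughly 37% and 20% savings discussed above for the short and long test time cases, respectively.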
7.4.2 Multisite Testing
For multisite testing, two or more devices are put into the test fixture at the same time and then tested. The goal of the test engineer is to perform as many functions as possible in parallel. Doing tests in parallel is not always possible because of limitations of the equipment [3]. In such cases the tester switches into serial mode and performs the measurement one site at a time. Also, in some cases the test engineer wants to perform measurements in serial mode [4]. A good example would be the spectral mask measurement of a WLAN
transmitter where the concern could be crosstalk. To execute as many measurements as possible in parallel, the tester often has to be upgraded and that of course makes the tester more expensive. Also, the handler has to be multisite capable, which means that a higher cost is associated with the handler. For the multisite COT calculations and the comparison to the single-site case we will use the assumptions that are listed in Table 7.8 to calculate the number of devices that can be tested in 1 year. In the single-site case, the handler index time contributes 1 second to the relative test time per device. Therefore we calculate TR = 5 seconds. In the dual-site case, on the other hand, we have a test time per device of 2.2 seconds and the contribution of the handler is 1 second for two devices, which explains the relative test time per device of TR = 2.7 seconds.
Table 7.8
Parameters of Single-Site Application and Dual-Site Application

Parameter                                                             Single-Site Case   Dual-Site Case
Utilization in %                                                      70                 70
Available test time per year in seconds: TA                           22,014,720         22,014,720
Test time per device in seconds: tT                                   4                  2.2
Relative test time per part in seconds: TR                            5                  2.7
Devices that can be tested within 1 year: NT = TA/TR                  4,402,944          8,153,600
Yield in %: Y                                                         90                 90
Total number of good parts that can be tested in 1 year: N = NT × Y   3,962,650          7,338,240
Table 7.9
Test Cost per Device of Single-Site Application and Dual-Site Application

Parameter                                    Single-Site Application   Dual-Site Application
Cost of the tester: CT                       $1 million                $1 million
Cost of the handler(s): CH                   $250,000                  $350,000
Depreciation time in years: TD               5                         5
Test cell cost per year: C = (CT + CH)/TD    $250,000                  $270,000
Number of good parts per year: N             3,962,650                 7,338,240
Test cost per good part: CP = C/N            6.3 cents                 3.68 cents
With the number of produced devices calculated in Table 7.8, we can calculate the COT for the single-site and dual-site cases as shown in Table 7.9. The COT savings are obviously tremendous in the case shown in this table: compared to single-site testing, it is 41.7% cheaper to test devices in dual-site mode! The case with the assumptions in Tables 7.8 and 7.9 is, of course, a desirable outcome but an exception in the real world. Why? Because the assumptions were that almost all tests can be performed in parallel test mode and that the tester resources are sufficient to accommodate that. In practice, the following two cases are more likely to occur: (1) a high degree of parallel test capability at high extra cost or (2) a low degree of parallel test capability at no extra cost. Both situations are discussed next.

7.4.2.1 Case 1: High Degree of Parallel Test Capability at High Extra Cost
In this case the tester has to be upgraded to accommodate multisite testing in true parallel test mode. This is not uncommon and typically means that some cards, such as extra digitizers, waveform generators, or digital pin cards, are added to the tester. Table 7.10 compares the cost of single-site testing with dual-site testing under the assumption that the tester has to be upgraded. In terms of test time, the parameters from Table 7.8 are assumed.

Table 7.10
Test Cost per Device of Single-Site Application and Dual-Site Application, Case 1

Parameter                                    Single-Site Application   Dual-Site Application
Cost of the tester: CT                       $1 million                $1.3 million
Cost of the handler(s): CH                   $250,000                  $350,000
Depreciation time in years: TD               5                         5
Test cell cost per year: C = (CT + CH)/TD    $250,000                  $330,000
Number of good parts per year: N             3,962,650                 7,338,240
Test cost per good part: CP = C/N            6.3 cents                 4.54 cents

As Table 7.10 shows, this still represents a test cost savings of 28% per device compared to the single-site case, even though the overall cost of the test cell is $80,000 higher per year in the dual-site case.

7.4.2.2 Case 2: Low Degree of Parallel Test Capability at No Extra Cost
Case 2 is the case in which the tester does not have to be upgraded, but the number of measurements that can be taken in parallel is limited. In that case, for instance, all digital measurements are taken in parallel while the analog and RF measurements are taken in serial mode. To calculate the number of parts that can be tested under those circumstances, we use the assumptions given in Table 7.11.
Table 7.11
Parameters of Single-Site Application and Dual-Site Application, Case 2

Parameter                                                             Single-Site Application   Dual-Site Application
Utilization in %                                                      70                        70
Available test time per year in seconds: TA                           22,014,720                22,014,720
Test time per device in seconds: tT                                   4                         3.5
Relative test time per part in seconds: TR                            5                         4
Devices that can be tested within 1 year: NT = TA/TR                  4,402,944                 5,503,680
Yield in %: Y                                                         90                        90
Total number of good parts that can be tested in 1 year: N = NT × Y   3,962,650                 4,953,312
To calculate the COT we use the assumptions listed in Table 7.12, where it can easily be seen that significant COT savings are possible even if the degree of parallel test capability is low, as assumed in Table 7.11. Compared to the single-site case, dual-site testing represents a savings of 13.5%.

Table 7.12
Test Cost per Device of Single-Site Application and Dual-Site Application, Case 2

Parameter                                    Single-Site Application   Dual-Site Application
Cost of the tester: CT                       $1 million                $1 million
Cost of the handler(s): CH                   $250,000                  $350,000
Depreciation time in years: TD               5                         5
Test cell cost per year: C = (CT + CH)/TD    $250,000                  $270,000
Number of good parts per year: N             3,962,650                 4,953,312
Test cost per good part: CP = C/N            6.3 cents                 5.45 cents
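The dual-site comparisons in Tables 7.8 through 7.12 follow the same pattern and can be sketched in a few lines. In this sketch the handler index time is assumed to be shared equally among the sites, which reproduces the relative test times used above; the function and all names are our own and not a standard COT tool.

```python
# Sketch of the single-site versus dual-site COT comparison (Tables 7.8-7.12).
# Utilization 70%, yield 90%; names and structure are our own assumptions.

T_AVAILABLE = 364 * 24 * 3600 * 0.7   # 22,014,720 seconds per year
YIELD = 0.9
DEPRECIATION_YEARS = 5
INDEX_TIME = 1.0                      # handler index time, shared by all sites

def cents_per_good_part(tester_cost, handler_cost, test_time_per_device, sites):
    # The handler index time is divided among the devices tested per insertion.
    time_per_device = test_time_per_device + INDEX_TIME / sites
    good_parts = T_AVAILABLE / time_per_device * YIELD
    cost_per_year = (tester_cost + handler_cost) / DEPRECIATION_YEARS
    return 100 * cost_per_year / good_parts

single     = cents_per_good_part(1_000_000, 250_000, 4.0, sites=1)  # Table 7.9
ideal_dual = cents_per_good_part(1_000_000, 350_000, 2.2, sites=2)  # Table 7.9
case_1     = cents_per_good_part(1_300_000, 350_000, 2.2, sites=2)  # upgraded tester
case_2     = cents_per_good_part(1_000_000, 350_000, 3.5, sites=2)  # mostly serial tests
print(single, ideal_dual, case_1, case_2)   # roughly 6.3, 3.7, 4.5, and 5.5 cents
```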
7.4.3 Additional Variables for Multisite and Ping-Pong Testing
The models for multisite testing and ping-pong testing made one important assumption: that the time to develop the application and to prepare the test cell is the same for multisite/ping-pong testing and single-site testing.
In most cases, however, it takes extra time to develop both the test program and the hardware for multisite or ping-pong applications. The hardware costs might be higher due to the requirement for additional and expensive sockets. With RF devices in particular, the process of tuning each site is time consuming. Also, the test cell preparation can be time consuming, depending on the procedure that is applied to qualify the test cell. One common method is to run a certain number of characterized, known good devices (KGDs) on one site (where they should pass 100%) and then run the same number of samples at the other site(s). Assuming that there are no failing devices, this method works fairly quickly. However, if only one device fails at either site, the root cause has to be investigated, with the potential that the start of production is delayed by many hours. Because every company has a different method of calculating those factors into its COT model, it is up to the reader to adjust the preceding models so that they include individual requirements and assumptions.
7.5 COT Considerations When Using Test Houses

Often, smaller companies try to minimize their spending on capital equipment such as test equipment by instead using the services of test houses. Also, large companies that typically have their own test floors and production equipment occasionally use test houses in order to handle extra volume that cannot be tested on their own test floor due to capacity limitations. In most cases, a test house will quote a price per hour on one particular test system or, in some other cases, a price per tested part. To calculate its own internal costs, the test house uses models similar to those described in the preceding sections, includes other fixed costs such as floor space, labor, and administration costs, and adds some markup to that number, which is the profit that the test house intends to make from that customer. However, the price that the test house will quote to a customer depends on two major factors, as discussed next.

7.5.1 Guaranteed Volume or Usage
If one customer guarantees the test house a certain volume, the test house might be willing to grant a better price compared to the case where the customer does not make a commitment at all. In such a case, the advantage to the test house is the fact that it can better plan and forecast for the needed volume. The customer has the advantage of better prices per hour but, in the case of a drop in volume, has to pay an hourly rate to the test house even though the test equipment is sitting idle on the test floor.

7.5.2 Availability of Testers
Let's assume the following case: A semiconductor manufacturer approaches a test house because it needs about 2 testers for 1 year to produce its latest part. The test house has 20 testers from company A that are all utilized 100% with parts from other customers. The test house also has 20 testers from company B that are utilized at only 70% (which means that the equivalent of 6 testers are idle). Assuming that both testers have a similar purchase price, the test house will most likely quote a better price per hour if this customer decides to use the testers from company B rather than company A. This is because the test house is interested in loading all the testers on its floor and running them at maximum utilization before it invests in new equipment.
7.6 Accuracy and Guardbands

Each analog test can only be performed to a certain accuracy. The manufacturer of the test equipment typically creates a list that contains all of the different measurements and states the specified accuracy of each measurement. For instance, the accuracy of an RF power measurement might be specified as ±0.5 dB. This means that the manufacturer guarantees that the measurement result will be within ±0.5 dB of the "real" result. The main reasons for that uncertainty lie in the calibration method of the tester as well as in the contribution of noise. Each calibration requires a calibration device. For instance, the calibration device used for a power measurement is a power sensor. This power sensor itself has to be calibrated as well, to an even stricter standard. The highest possible calibration standards are the NIST standards. Each lab that performs calibrations has, at best, calibration standards that are one level below the NIST standard. The closer to the NIST standard a calibration is performed, the stricter the requirements for the equipment, the environment (i.e., temperature and humidity control), and the expiration of the calibration. Even though it would be possible for the manufacturer of the tester to calibrate the test equipment with standards that guarantee a better accuracy, this is often not done because of the high cost that comes with such a calibration.

What does this mean for the measurement result and the criteria for whether a part is failing or passing? Let's take the following example of an RF power measurement. Assume that the device has to emit at least +2.4 dBm of RF power and not more than +4.2 dBm. Figure 7.2 shows the distribution of the power measurement as well as the limits. The measurement accuracy of that test is assumed to be ±0.3 dB. A measurement result of +3 dBm certainly guarantees that the device passes the criterion of at least +2.4 dBm output power. Including the measurement accuracy, we know that the actual RF power is not less than (3 dBm – 0.3 dB) = 2.7 dBm. (At the same time, it can be said that the RF power is not more than (3 dBm + 0.3 dB) = 3.3 dBm.) Figure 7.3 shows the same distribution of RF power measurements, this time after adding the measurement uncertainty into the limits. Now assume that the measurement result is 2.6 dBm. Even though this result is clearly better than the low limit, the device has to be put into the fail bin because of the accuracy of the measurement: we cannot guarantee that the actual RF power is more than 2.4 dBm. Assuming the worst case, the RF power could be (2.6 dBm – 0.3 dB) = 2.3 dBm. That is where guardbands come into play: a guardband is added at the lower and upper limits to prevent a test from passing when only the measurement result, and not the test equipment accuracy, is taken into account.
Figure 7.2 Distribution of RF power measurements with limits.
Figure 7.3 Distribution of RF power measurements with new limits (after applying guardbands).
Let's take the previous example of the RF power measurement. Assuming that the lower and upper limits are 2.4 dBm and 4.2 dBm, respectively, and the accuracy of the test is ±0.3 dB, the new limits including the guardbands will be 2.7 dBm and 3.9 dBm, respectively.

It should be clear that the accuracy of the measurement has a direct impact on the number of good parts that are tested. Poor accuracy for a measurement means that the guardbands have to be wider. This, however, means that the range within which a part is judged to be passing gets smaller. Particularly for parameters that are barely within the passing bands, or for a distribution that is skewed toward either the upper or the lower limit, the accuracy of the measurement is of utmost importance. Figure 7.4 takes the example distribution just discussed and shows the yield of that particular RF power measurement as a function of the measurement uncertainty. On the other hand, a device might be designed so well that the standard deviation of that measurement is very small and the mean of that measurement lies exactly between the upper limit and the lower limit. This could mean that even with wide guardbands the device is still easily within the passing range.
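The guardbanding rule described above is easy to capture in code. The following Python sketch applies the ±0.3-dB accuracy of the example to the +2.4/+4.2-dBm limits; it is illustrative only, and the function and variable names are our own.

```python
# Guardbanded pass/fail decision for the RF power example above:
# limits +2.4 to +4.2 dBm, measurement accuracy ±0.3 dB.

def guardbanded_limits(low, high, accuracy):
    """Tighten the test limits by the measurement uncertainty."""
    return low + accuracy, high - accuracy

def passes(measured, low, high, accuracy):
    g_low, g_high = guardbanded_limits(low, high, accuracy)
    return g_low <= measured <= g_high

LOW, HIGH, ACC = 2.4, 4.2, 0.3
g_low, g_high = guardbanded_limits(LOW, HIGH, ACC)
print(f"Guardbanded limits: {g_low:.1f} to {g_high:.1f} dBm")  # 2.7 to 3.9 dBm
print(passes(3.0, LOW, HIGH, ACC))   # True: worst case is still at least 2.7 dBm
print(passes(2.6, LOW, HIGH, ACC))   # False: worst case could be 2.3 dBm
```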
Figure 7.4 Yield as a function of measurement uncertainty.
The main purpose of guardbands is to avoid what is called a false positive. A false positive is a device that is binned as "pass" even though at least one of its parameters fails and would require the part to be binned as "fail." What are the implications of a false positive? In the case of a false positive, a bad part is shipped to the customer, and the customer integrates the bad part into its product. This can have implications ranging from failure of the finished product (for instance, a mobile phone) during final testing, through customer returns and extra warranty costs, to the extreme case of a production stop to investigate the source of the problem. In any case, the costs associated with a false positive can be extremely high, typically significantly higher than the cost of the device itself.

The other case is called a false negative. This is the case when a part is put into the fail bin even though all parameters are actually within the limits that would justify a pass. The cost of a false negative is typically on the order of the production and test costs of that device and therefore much lower than the cost that comes with a false positive. However, the goal of each manufacturer is to achieve a high production yield, and avoiding false negatives is one factor that allows manufacturers to reach that goal. One method that is widely used to reduce the number of false negatives is to rerun the rejected parts after all samples have been tested. Depending on the repeatability of the measurements, this procedure can result in many additional passing parts.
7.7 Summary

An example was given to show the increasing importance of COT in the overall cost of producing semiconductors. Next, a basic model was developed to calculate the COT with basic parameters such as system and handler costs, depreciation, utilization, and yield. This model was refined to calculate the COT for ping-pong testing and multisite testing. Additional factors were also considered, such as using the services of a test house and being subject to guaranteed volume or tester availability. Finally, the importance of measurement accuracy and the need for guardbands, along with their impact on yield and therefore on COT, were discussed.
References

[1] Horgan, J., "Test and ATE—Cost of Test," EDA Weekly, March 8, 2004, http://www.EDACafe.com.
[2] Garcia, R., "Redefining Cost of Test in an SOC World," EE Evaluation Engineering, June 2003.
[3] Cramer, R., and D. Proskauer, "ATE Implementations for Multisite Device Test," EE Evaluation Engineering, July 2005.
[4] Engelhardt, M., "Challenges and Cost of Test Considerations of Multisite WLAN Test," Semicon Europe, Technical Symposium, Test Seminar, April 2004.
8 Calibration

8.1 Overview

All measurements that are performed on any kind of equipment have errors due to inaccuracies in the measurement technique as well as in the equipment itself. The purpose of calibration is to reduce (or, in theory, eliminate) the measurement error that is related to the measurement equipment. Two types of errors contribute to measurement errors: random errors and systematic errors.

Random errors can only be characterized with probabilities and the means of statistics. A good example of random error is the contribution of thermal noise. Unless the measurement is executed at absolute zero (0K), there is always a contribution of thermal noise to the measurement. Repeating the same measurement over and over will yield very similar results, plus or minus the random contribution of the noise. Obviously, random errors cannot be calibrated out, because they are random. The test engineer, however, has to consider the effects of random error when he or she evaluates the results of a measurement.

The other contributors to errors in a measurement are systematic errors. Systematic errors can be corrected for because it is possible to characterize the exact amount of their contribution to a measurement. For instance, when an RF measurement is performed, the loss between the test head and the digitizer is always the same for one specific frequency and can therefore be calculated out of the measurement result. Assuming that the measured RF power (without calibration) is –8.5 dBm at 1.8 GHz and the loss between the test head and the digitizer is known to be 3.5 dB at 1.8 GHz, then we know that the actual power at the test head is (–8.5 + 3.5) dBm = –5 dBm. Figure 8.1 shows the relationship between the power measured at the digitizer and the power actually sourced at the test head.
Figure 8.1 Block diagram of an RF power measurement on a tester, demonstrating that there is some power loss in the tester hardware.
Because the test engineer wants to characterize the device only, and not the device together with the tester, the manufacturer of the tester includes algorithms to correct for the contribution of the test system. This is not visible to the user of the test system after the calibration has been performed, because the correction is applied right after the measurement is taken but before the result is displayed. To determine the correct offset values, the test engineer or test floor personnel have to calibrate on a regular basis. It is also important to know that the correction is applied not only to the measurements but also to the equipment that applies stimuli to the device. For instance, an AWG has to be calibrated to make sure that a waveform with the correct amplitude is sourced to the device.

As mentioned earlier, calibration is the process of comparing the performance of a measurement system to a known standard and, if possible, compensating for the differences. But what is the gold standard that is used to evaluate a measurement system? In the United States, the government provides those standards through one of its agencies. The National Institute of Standards and Technology (NIST) [1] is part of the U.S. Commerce Department and maintains the strictest possible standards. A calibration lab will calibrate its equipment either directly against the NIST standards or with equipment that was calibrated against those standards. The fewer steps there are between the NIST standard and the equipment used by the calibration lab, the lower the variances that are part of every measurement. On the other hand, the closer the equipment is to the NIST standard, the stricter the conditions in terms of temperature fluctuations, humidity range, and so forth that the equipment can be exposed to before the calibration is invalidated. Also, the fewer steps there are between the NIST standard and the calibration equipment, the shorter the time period for which the calibration is valid. Whenever calibration equipment is used that was calibrated with standards that can be traced back to NIST standards, we speak of a traceable calibration, or trace cal.
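As a rough illustration of how such a systematic correction might be stored and applied, the following Python sketch keeps a small table of path losses versus frequency and adds the (interpolated) loss back onto the raw reading. The table values, the interpolation choice, and all names are invented for this example and are not taken from any particular tester.

```python
# Illustrative sketch of a frequency-dependent loss correction as it might be
# stored after calibration; table values and names are invented.

LOSS_TABLE = {0.9: 2.8, 1.8: 3.5, 2.4: 4.1}   # loss in dB, keyed by frequency in GHz

def corrected_power_dbm(measured_dbm, freq_ghz):
    """Add the calibrated path loss back onto the raw digitizer reading.

    Linear interpolation is used between calibrated frequencies.
    """
    freqs = sorted(LOSS_TABLE)
    if freq_ghz <= freqs[0]:
        loss = LOSS_TABLE[freqs[0]]
    elif freq_ghz >= freqs[-1]:
        loss = LOSS_TABLE[freqs[-1]]
    else:
        for f_lo, f_hi in zip(freqs, freqs[1:]):
            if f_lo <= freq_ghz <= f_hi:
                frac = (freq_ghz - f_lo) / (f_hi - f_lo)
                loss = LOSS_TABLE[f_lo] + frac * (LOSS_TABLE[f_hi] - LOSS_TABLE[f_lo])
                break
    return measured_dbm + loss

print(f"{corrected_power_dbm(-8.5, 1.8):.1f} dBm")   # -5.0 dBm, as in the example above
```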
8.1.1 Calibration Methods
The test engineer for SoC devices needs a tester that has calibrated dc sources, digital and analog pins, PMUs, and RF resources. The following subsections briefly describe the critical parameters that are characterized during calibration.

8.1.1.1 Device Power Supply Calibration
Calibration typically includes measurements in all four quadrants of operation of the dc supply (positive voltage/positive current; positive voltage/negative current; negative voltage/positive current; negative voltage/negative current).

8.1.1.2 Digital Calibration
Calibration on digital channels includes, at a minimum, the level accuracy. Some testers include edge placement accuracy as well as rise and fall times.

8.1.1.3 Baseband Digitizer Calibration
The critical parameter during calibration of a digitizer is to accurately measure different voltage levels over time. Because digitizers are typically specified at a 50-Ω load as well as at high impedance (most digitizers use 1 MΩ as high impedance), the calibration is often performed at those two different load conditions.

8.1.1.4 Baseband AWG Calibration
The critical parameter is the accuracy of the voltage levels that are sourced over time as a function of the load that is applied to the AWG.

8.1.1.5 RF Calibration
The RF calibration process typically consists of three steps:

1. Power calibration: This guarantees that the power that is sourced into a device is at the expected level.
2. Vector calibration between different ports: This calibration is performed to establish the calibration planes with clearly defined magnitude and phase information.
3. Noise figure calibration: Some devices, such as LNAs, require noise measurements. If this kind of measurement is required, the test equipment has to be calibrated for noise figure measurements.

There are basically two different philosophies behind the calibration of RF measurement equipment. Depending on the type of equipment as well as the strategy of the manufacturer, we can distinguish between (1) focused calibration and (2) system calibration.
The goal of focused calibration is to calibrate the test system at the frequencies, power ranges, and levels that are required by one or a number of different test plans. The advantage of doing so is obvious: the test system gives maximum accuracy because no interpolation is required between two calibrated points. Also, it is very likely that a focused calibration will fail if a strong interfering tone is present at exactly one of the frequencies used in the test plan, because the measured power will not fall into the expected range of power levels that lets the calibration pass. In such a case, a failing calibration is a positive outcome because it prevents the device measurement from being influenced by radiation that is unrelated to the device itself but is a consequence of external factors. For instance, when measuring WLAN 802.11b devices, an often used frequency is 2.437 GHz. That is the carrier frequency of channel 6 for WLAN applications, and unfortunately the frequency at which many WLAN routers operate, even on some test floors. Receiver measurements that use very low power levels can be negatively impacted by the strong transmitted signal of a device that is not part of the test cell but operates at the same frequency (in this example, a WLAN router). Fortunately, this kind of interference can be detected with a focused calibration, and the test engineer can change the frequency at which the device is tested to another channel before the device goes into production. The disadvantage of a focused calibration is certainly the fact that all of those frequencies and power levels have to be known to the calibration algorithm. That means that the calibration becomes test plan dependent and that a change in the test plan might automatically require the system to be recalibrated.

When a system calibration is performed, the test system is calibrated over the whole frequency range in equal steps, whether those frequencies are actually used or not. For instance, a test system might be calibrated in 10-MHz steps from the lowest frequency of 10 MHz to the highest frequency of 6 GHz. The advantage of this method is that the calibration is independent of the test plans that are used. If a frequency is used in a test plan that was not calibrated, the tester will interpolate between two calibrated points. The disadvantage is the uncertainty that comes with any kind of interpolation, as well as the fact that interference at frequencies that are not specifically calibrated is difficult to detect.

Some manufacturers have taken the approach of combining the advantages of focused calibration and system calibration. Typically, many changes are made to the frequencies and power levels used in a test program during the development process. Because recalibration would be too time consuming every time a change in power or frequency is made, the system operates during this stage in the "system calibration" mode. This allows changes to be made quickly while still getting reasonable accuracy. After the test program is frozen, or when the best possible accuracy is required, the test engineer calibrates the tester with a focused calibration, and the system then works in the "focused calibration" mode with all of the advantages described earlier.

Every calibration requires calibration standards, and there are two different approaches as to what kind of standards should be used: (1) standards that are inside the tester or (2) external standards. When standards inside the tester (typically in the test head, as close as possible to the port or pin that is used for a signal) are used, the calibration is performed by switching to those standards instead of the port or pin that is routed to the test head under normal operation. This has the advantage that the system can be calibrated on a regular and preprogrammed basis, and no user interaction is needed to perform the calibration. The disadvantage is that a switching matrix has to be used that will degrade over time, with the consequence that the calibration accuracy will deteriorate over time as well. Also, the actual calibration plane in such a case is not the test head but is inside the tester where the calibration standards are switched in and out. Because the desired reference plane is typically the test head RF port, the calibration plane is then moved mathematically from within the tester to the test head RF port by taking the cable length and cable loss into account in order to have accurate phase and magnitude information at the test head. Figure 8.2 shows how internal standards are used to perform the calibration.
Figure 8.2 Internal calibration standards.
The second way to apply standards to calibrate a test system is to use external standards. In this case, the user has to invoke the calibration manually and then follow prompts that ask for different standards to be connected to the test head during the different stages of the calibration. This kind of calibration offers better accuracy, but it also has the potential for error if the wrong standards are applied, if the standards are not connected correctly, or if the recommended calibration intervals are skipped. Also, it normally takes more time to perform a calibration that uses external standards because of the user interaction that is necessary. Some manufacturers of ATE have decided to build calibration robots to avoid mistakes by the user and to allow calibrations to be performed with minimal operator interaction at a convenient time, for instance, during night shifts when system demand is low. Figure 8.3 shows the case where external standards are used for calibration.

Figure 8.3 External calibration standards.
8.2 Calibration Procedures

A modern SoC tester has four subsystems, and each of these subsystems requires a separate and independent calibration. To calibrate each of those subsystems it is necessary to perform:

• DPS calibration;
• Digital calibration;
• Analog calibration;
• RF calibration.

8.2.1 DPS Calibration
Because the purpose of the DPS is to provide constant voltage levels or currents to a device, the calibration is done by calibrating for level accuracy only. The main challenge is to cover the whole operating range of a DPS. The typical application of a DPS is to provide the operating voltage to a device, to measure currents in different operating modes of the device, and to adjust quickly to changes in the load when the device under test is programmed from one state to another, for instance, from power-down mode to operating mode. Because a DPS has to be able to measure very low currents as well as high currents, it is built with two or more ranges. The manufacturer of the tester documents those ranges, and during regular operation the user can either select the correct range manually or have the DPS perform autoranging. The calibration process, of course, has to cover all of these ranges. The calibration of the DPS itself is done by using a high-precision parametric measurement unit that is built into the tester, together with several well-characterized resistors and other components, such as transistors for switching purposes. DPS calibration typically includes high-current and low-current calibration, voltage DAC calibration, and current limiter calibration for each DPS pin that is available in a specific tester.

8.2.2 Digital Calibration
Digital calibration consists of up to four steps. Whether all of those steps are performed during the calibration of the digital subsystem depends on the tester itself as well as the requirements of the application. The four steps are as follows:

• DC level calibration;
• AC calibration;
• Drive-to-receive calibration;
• Fixture delay calibration.
8.2.2.1 DC Level Calibration
The purpose of the dc level calibration is to ensure the accuracy of the driver and comparator. These levels drift over time, and the calibration interval depends, among other factors, on the desired accuracy. For instance, the manufacturer of the tester might guarantee the following accuracy of the dc levels: ±10 mV within 2 weeks; ±15 mV within 4 weeks; ±20 mV within 3 months. If the application requires drive and compare level accuracy within ±10 mV, then the tester has to be calibrated every other week.
The calibration itself is typically performed by applying internal, NIST-calibrated standards to the digital pins. Those standards can be found, for instance, on the clockboards in the form of precision resistors and voltage or current sources. The user does not have direct access to those internal NIST-traceable standards. Those standards were calibrated in the factory while the tester was built. To guarantee traceability to the NIST standard, the calibration of those internal standards has to be repeated every 6 months or whenever a calibrated module in the tester is exchanged or added. This calibration is normally performed by the maintenance personnel of the manufacturer of the tester. The calibration of the digital subsystem is done more frequently, as can be seen in the earlier example.

8.2.2.2 AC Calibration
The ac calibration is performed to guarantee the accurate placement of the drive and receive edges within the specified timing accuracy. For instance, the manufacturer of the tester might guarantee an edge placement accuracy (EPA) of ±10 ps. Because the parameters of a digital card are specified to cover the range from very slow tester periods all the way up to maximum speed, the ac calibration is performed at the highest speed (i.e., the shortest tester period) of that digital card as well as at numerous longer tester periods. In particular, the steps that are performed during ac calibration of a digital card are:

• Adjustment of the drive edges for each channel: For this purpose the delay between the card in the test head and the pogo pin is measured for each channel directly at the pogo pin, since that is the calibration plane. It is also important to note that the drive edge adjustment always requires a pair of digital channels: one channel to drive and one channel to provide a termination and to calibrate the receive edges. Both channels are connected by switching a relay.
• Positive and negative edge adjustment: Because the positive and negative edges have slightly different delay times, the drive edge adjustment has to be performed for both edges.

8.2.2.3 Drive-to-Receive Calibration
The drive-to-receive calibration is performed to compensate for the propagation delay between the driver and the comparator by applying a pulse with the driver and measuring the time that it takes for that pulse to propagate from the driver to the comparator. Because this delay is strictly a function of the physical length between the driver and receiver, it should be a very stable value that does not change over time or temperature. Figure 8.4 shows the details of the drive-to-receive calibration.
Figure 8.4 Drive-to-receive calibration.
8.2.2.4 Fixture Delay Calibration
The digital calibration ensures accurate dc levels as well as edge placements at a predefined calibration plane. All digital pins have their corresponding pogo pins as the reference plane; skew between digital pins is always measured and guaranteed by the manufacturer up to that point. During a real application, however, the test engineer connects a load board to the test head in order to route all dc, digital, and analog signals from the test head to the device. Because it is impossible to guarantee exactly the same trace length from all digital pins to the input of the device, it is obvious that even though two edges are calibrated at the pogo pins, they will arrive at the input of the device at slightly different times due to the differences in trace lengths.

Different approaches are used to minimize this effect. One way is to ensure that the trace lengths are the same for two signals (for instance, a differential clock signal) by accurately measuring the trace lengths during the layout process. Another possibility is to use round load boards, which ensure that a set of separate pin cards have the same physical distance to the center of the load board. However, not all signals have to be routed to the center of the load board, and the more signals there are, the more complicated it becomes to accommodate all signals such that they have the same distance to the input pin of the device.

One simple method is frequently used to compensate for fixture delay effects: apply a time-domain reflectometry (TDR) measurement. In this case, the load board without a device is connected to the test head and the driver of the digital pin applies a pulse. As can be seen in Figure 8.5, the signal then propagates from the driver through the tester and through the load board. Because there is no device in the socket of the load board, the pulse is reflected and travels back from the load board through the tester until it is detected by the comparator. The time from sending the signal to receiving it is twice the fixture delay. This procedure is applied to all digital pins, and the values of the delays are then saved in a file.
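As a small illustration, the following Python sketch computes the fixture delay both from a TDR round-trip time (half the round trip) and, as a fallback, from a trace length and an assumed effective dielectric constant, as described for pins where TDR fails. The numbers and names are invented for this sketch.

```python
# Illustrative fixture-delay calculations for the TDR method described above.
# Values and names are invented for this sketch.

C0 = 299_792_458.0  # speed of light in vacuum, m/s

def fixture_delay_from_tdr(round_trip_s):
    """The reflected pulse travels the fixture twice, so the delay is half the round trip."""
    return round_trip_s / 2.0

def fixture_delay_from_trace(length_m, eff_dielectric_const):
    """Fallback when TDR fails: delay from trace length and effective dielectric constant."""
    return length_m * (eff_dielectric_const ** 0.5) / C0

print(fixture_delay_from_tdr(300e-12))        # 1.5e-10 s, i.e. 150 ps for a 300-ps round trip
print(fixture_delay_from_trace(0.05, 3.0))    # roughly 2.9e-10 s for a 5-cm trace, eff. Er = 3.0
```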
Figure 8.5 Fixture delay calibration can be measured using a TDR measurement.
Obviously, there is one TDR file per load board or per device. That is why it is important for the test engineer to ensure that the TDR file corresponding to a certain device is applied before testing begins. If the wrong file is applied, a TDR measurement can have exactly the opposite effect of what was originally intended. It is also important to note that a TDR calibration can fail for some pins. A TDR calibration relies on signals that are reflected by the "open" and then travel back to the tester, where they can be detected by the receiver. If the amplitude of the reflected signal is below a certain level, the signal cannot be detected by the receiver. The reason for a low signal amplitude can be components on the load board, for instance matching components, that provide a load to the pulse, with the consequence that the power is absorbed by those components instead of being reflected by the "open." Therefore, if TDR fails for a pin where additional components sit between the test head and the device input, and the application requires an accurate delay specification, the test engineer has to calculate the delay from the test head to the input of the device by measuring the physical length of the traces and calculating the delay from that length and the dielectric constant of the load board material.

8.2.3 Analog Calibration
Whenever we talk about analog calibration, we mean calibration of digitizers and AWGs. In most modern ATE, the analog calibration performs the steps discussed next.

8.2.3.1 AWG Calibration
To perform calibration of the AWG, a highly accurate power meter (or voltmeter) is required. AWG calibration can be done either with a built-in meter or by connecting an external power meter (or voltmeter) to the test head during calibration. During the calibration process, the AWG steps through preset frequency intervals while the meter measures the power/voltage that is sourced from the AWG. For each frequency, the difference between the expected power/voltage and the actual power/voltage is recorded and added to the requested level during the regular use of the tester. For instance, if the voltmeter measures 0.46V at a source frequency of 28 MHz where 0.5V is expected, the difference of (0.5V – 0.46V) = 0.04V is recorded and then added to the requested voltage of the AWG at that range and frequency. In this example, the tester is "told" during normal operation to supply 0.54V at the test head even though the user only asked for 0.5V.
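The level-offset bookkeeping just described can be pictured with a small sketch. The following Python class records the difference between the requested and measured AWG levels per frequency and adds it to later requests, reproducing the 0.46-V/0.54-V example above; the class and its names are invented for illustration and do not represent any tester's software. The same idea applies to the digitizer correction described next.

```python
# Sketch of the level-offset bookkeeping described above for AWG calibration.
# Values and names are invented for this illustration.

class AwgCalTable:
    def __init__(self):
        self.offset_v = {}          # frequency (Hz) -> correction (V)

    def record(self, freq_hz, requested_v, measured_v):
        # e.g. requested 0.5 V, meter reads 0.46 V -> store +0.04 V
        self.offset_v[freq_hz] = requested_v - measured_v

    def corrected_request(self, freq_hz, requested_v):
        # During normal operation the tester asks the AWG for the corrected level.
        return requested_v + self.offset_v.get(freq_hz, 0.0)

cal = AwgCalTable()
cal.record(28e6, requested_v=0.5, measured_v=0.46)
print(f"{cal.corrected_request(28e6, 0.5):.2f} V")   # 0.54 V, as in the example above
```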
8.2.3.2 Digitizer Calibration

After the AWG calibration is performed, the digitizer is calibrated. To do this, the AWG output is connected directly to the digitizer input. Because the AWG is calibrated at this point, it can be used as a calibration standard for the digitizer. The calibration of the digitizer works in such a way that the AWG sources predefined frequencies at specific voltage levels into the digitizer and the digitizer captures those signals. Then the difference between the expected voltage level and the actual voltage level is recorded and applied to the measurement during normal operation of the digitizer. It is important to note that both AWGs and digitizers work in different ranges and at different impedances. Therefore, the calibration has to cover not only the whole frequency band but also the different ranges for which the equipment is built. As an example, a digitizer might be specified for input frequencies from 10 to 100 MHz at four different ranges (for example, ±0.1V, ±0.5V, ±1V, and ±2V) as well as for impedances of 50Ω and 1 MΩ, with the calibration performed every 100 kHz. The calibration process then requires [(100 – 10)/0.1 + 1] × 4 ranges × 2 impedances = 7,208 steps.
8.2.4 RF Calibration
RF calibration on SoC testers can be subdivided into three categories:
• RF source and measurement calibration;
• RF de-embedding calibration;
• Noise figure calibration.
8.2.4.1 RF Source and Measurement Calibration
To apply an RF stimulus to a device and to perform accurate RF measurements, it is important to apply a full vector calibration to the test head. Vector calibration means a calibration of both the magnitude and the phase of the RF signals. Although a stimulus requires only magnitude information, not phase, most measurements require both magnitude and phase information.
The calibration method most frequently used on ATE is called SOLT. This acronym stands for “short–open–load–through,” which are the standards used to perform the calibration. The characteristic of the “short” standard is that it reflects 100% of the power while the phase of the reflected wave is offset exactly 180° from the incoming wave. The “open” standard also reflects all of the power, but the phase offset between the incoming wave and the reflected wave is 0°. A “load” standard should absorb all RF power applied to it. Because RF equipment is typically built with an impedance of 50Ω, the RF load is, at least in theory, exactly a 50-Ω resistor over the whole frequency range of interest. The “through” standard is typically realized by simply connecting both measurement ports with a cable before performing this calibration step. Because it is impossible to build perfect standards, the manufacturer of the equipment includes a table (normally in the form of a CD with files that are uploaded to the host computer) with the parasitic values of each standard as a function of frequency. Because each standard is individually characterized and its parasitic effects are known to the equipment, it is not possible to use the calibration standards of one tester on a second tester without updating the second tester’s calibration file. The underlying error model used with SOLT calibration is the 12-term error correction model (six error terms per port); the reader should consult the references [2, 3], which explain this calibration method in more detail.
8.2.4.2 RF De-Embedding Calibration
In most cases it is sufficient to perform an RF calibration as explained earlier, since the loss from the test head to the device under test is small. However, in some instances the loss (or in some cases the gain) between the test head and the device has to be calibrated out. This calibration is also called a de-embedding calibration. The de-embedding step during RF calibration is equivalent to the fixture calibration step of a digital calibration. In general, no de-embedding calibration is needed when the RF input/output port of the device is connected to the test head simply through a short cable and/or a short microstrip line on the load board. However, if any active or passive components are placed between the RF input/output ports of the device and the test head, it is important to use a de-embedding technique. Those components might be attenuators to improve the match of the device ports, an amplifier to boost the signal power, a power splitter to apply the signal to more than one port of the device, and so forth. Three methods are commonly used to perform a de-embedding calibration [3]: (1) scalar offsets, (2) de-embedding with a network analyzer, and (3) de-embedding with in-socket calibration standards.
Scalar Offsets
When scalar offsets are applied, the test engineer characterizes the magnitude of the gain or loss of the circuitry between the test head and the input of the device. For instance, if a 3-dB attenuator is at the input port of the device under test and a second 3-dB attenuator is at the output port, the signal is attenuated twice by 3 dB, for a total loss of 6 dB. The gain measurement can therefore be corrected by a scalar factor of 6 dB in the test program. Applying scalar offsets is the most widely used de-embedding technique, but also the least accurate one. This is because the phase information is lost and, more importantly, because it is difficult to accurately characterize the circuitry between the test head and the inputs/outputs of the device under test. For instance, depending on the package type, it can be difficult or even impossible to put an RF probe on the contactor to measure the actual gain/loss of the RF signal. Also, the gain/loss of the fixture is typically a function of frequency and, if active components are on the fixture, also a function of power, and it varies slightly from board to board. This dependence on frequency, power, and the individual load board makes it difficult to add separate scalar correction factors to the test program.
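As a simple illustration of how scalar offsets might be applied in a test program (the constants and function below are illustrative only and not part of any particular ATE software):

```python
# Illustrative scalar de-embedding for the 3-dB/3-dB attenuator example above.
INPUT_PATH_LOSS_DB = 3.0    # attenuator between test head source and DUT input
OUTPUT_PATH_LOSS_DB = 3.0   # attenuator between DUT output and test head receiver

def de_embedded_gain(measured_gain_db: float) -> float:
    """Correct a raw tester gain reading for the fixed fixture losses."""
    return measured_gain_db + INPUT_PATH_LOSS_DB + OUTPUT_PATH_LOSS_DB

# A DUT with 15 dB of true gain would read roughly 9 dB at the tester:
print(de_embedded_gain(9.0))   # -> 15.0
```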
A more accurate method is the one discussed next.
De-Embedding with a Network Analyzer
In this case, the fixture is moved from the tester to the bench and each port is carefully characterized with a network analyzer. The network analyzer has to be calibrated over at least the frequency range to which the device will be exposed. In this way, a file (for example, in Touchstone format) can be saved that contains both the magnitude and the phase information of each RF port of the fixture. A SoC tester with strong RF capabilities will have the option of reading the file generated at the bench and applying the correction to the stimulus or measurement. This method still has the challenge of making good RF contact between the probe of the network analyzer and the contactor pad that will later be connected to the device, but assuming that this problem can be solved, it clearly has the advantage of preserving the phase information as well as accounting for the variation of the fixture characteristics over frequency. The most accurate method, and also the most expensive one, is discussed next.
De-Embedding with In-Socket Calibration Standards
To perform this kind of de-embedding calibration, the test engineer has to build or order customized calibration standards that come in a package with the same physical dimensions as the device under test, so that it fits into the socket. The standards built into that package are the same as the standards that
are used for a regular RF test head calibration (SOLT standards). The algorithms applied to perform the error correction are the same as those used for a regular RF test head calibration. As previously mentioned, cost is the main issue when this option is chosen. In most cases the standards have to be ordered from a third-party vendor that specializes in calibration standards for in-socket RF calibration. The most accurate load standards have to be laser trimmed, which adds to the overall cost. Moreover, those standards cannot be reused because they are made for one package and one pinout. The chance that another device will be designed for the same package and pinout is very small, which means that new standards have to be ordered for each project that requires in-socket calibration. Table 8.1 compares the three de-embedding methods in terms of accuracy, cost of implementation, and engineering time required for implementation [4].
8.2.4.3 Noise Figure Calibration
With the high levels of integration in modern SoC devices, the whole front end of the receiver is typically designed into one chip. This consists of LNAs as well as the downconversion chain. One key parameter is the NF of either the LNA alone or the whole chain consisting of LNA, mixer, baseband amplifiers, and so forth. To perform this measurement, the ATE has to have an NF measurement option, which in turn means that the NF measurement module has to be calibrated. Two methods of NF measurement are used in modern automated test equipment.
Table 8.1 Comparison of De-Embedding Methods

Property | Scalar Offsets | De-Embedding with Network Analyzer | De-Embedding with In-Socket Standards
Magnitude and phase accuracy | Moderate | Good | Excellent
Cost of implementation | Low | Low | High
Engineering time required for implementation | Low | Moderate | High
Comment | Provides no phase information, but is very simple to implement. | Requires a network analyzer and a good understanding of measurement techniques. | Requires custom-built standards (high cost and time to manufacture).
The Y-factor method [5–7] requires a noise source that is switched on and off during the measurement in order to extract the two key parameters, noise power and available gain/loss. This method works well in well-defined environments such as labs where a single device has to be characterized. However, it can produce significant errors when the device under test is mismatched. More detailed information about noise figure can be found in Chapter 4. The second method of performing NF measurements is the cold noise technique [8]. This method does not require a noise source for the measurement, since the noise is generated by a resistor at the ambient temperature. Because this method measures only the noise power, the gain/loss of the device has to be measured with S-parameters. Therefore, the NF calibration of an ATE that uses the cold noise technique consists of two major steps: performing the noise calibration as described for the Y-factor method and performing an S-parameter calibration under different matching conditions in order to extract the available gain of the device. This method is faster to perform and more accurate than the first. However, it clearly depends on a clean calibration of all the steps involved. To verify the accuracy of the calibration, a reference device should be measured before recording data on unknown devices, in order to make sure that the calibration was accurate.
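For reference, a minimal sketch of the generic textbook Y-factor computation (this is the standard relation between ENR and the hot/cold noise power ratio, not the calibration routine of any specific ATE):

```python
import math

def y_factor_noise_figure(enr_db: float, n_hot_w: float, n_cold_w: float) -> float:
    """Noise figure (dB) from the Y-factor method.

    enr_db   -- excess noise ratio of the noise source, in dB
    n_hot_w  -- noise power measured with the noise source on, in watts
    n_cold_w -- noise power measured with the noise source off, in watts
    """
    y = n_hot_w / n_cold_w
    return enr_db - 10.0 * math.log10(y - 1.0)

# Example: ENR = 15 dB and a hot/cold power ratio of 4 gives an NF of about 10.2 dB.
print(round(y_factor_noise_figure(15.0, 4.0e-9, 1.0e-9), 1))
```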
References
[1] National Institute of Standards and Technology, http://www.nist.gov.
[2] Agilent Technologies, “Applying Error Vector Correction to Network Analyzers,” Application Note AN1287-3, March 2002.
[3] Agilent Technologies, “In-Fixture Measurements Using Vector Network Analyzers,” Application Note AN1287-9, August 2000.
[4] Engelhardt, M., “Investigation and Verification of RF De-embedding Methods to Perform Accurate On-Wafer RF Measurements,” Semicon Singapore 2002, SEMI Technical Symposium, Test Seminar, 2002.
[5] Agilent Technologies, “Noise Sources 10 MHz to 26.5 GHz,” Application Note 5953-6452, December 2000.
[6] Cascade Microtech, “On-Wafer Measurements with the HP8510 Network Analyzer and Cascade Microtech Wafer Probes,” RF and Microwave Measurement Symposium, 1987.
[7] Hewlett Packard, “Fundamentals of RF and Microwave Noise Figure Measurements,” Application Note 57-1, 1983.
[8] Noren, B., “Production Test Places New Requirements on Noise Figure Measurement Techniques,” Agilent Technologies, 1999.
9 Contactors
9.1 Introduction
Contactors, or test sockets,1 as shown in Figure 9.1, are the interface between the DUT and the load board and are often the most critical element of the production test solution. They come in many shapes and sizes. The contactor is relatively small in size compared to the rest of the hardware, but infinitely large in value. There have been numerous cases where more than a million dollars’ worth of production ATE and handler equipment has been interfaced to an expensive load board only to have a poorly designed contactor weaken the entire setup. Compounding this issue is that the redesign of a contactor can require weeks, which can reduce any possibility of meeting time-to-market goals [1]. Contactors are the link between the DUT and the rest of the test system, as shown in Figure 9.2. Physically, they sit atop the load board. The load board routes signals to and from the test system, so the contactor can be considered an extension of the load board, routing signals between the load board and the DUT. Contactors perform the important task of providing a test site for the device so that the critical performance characteristics of the device can be transferred to the test system. This information ultimately determines whether the device passes or fails.
1. For clarification, test sockets are used primarily in burn-in and benchtop testing. In burn-in applications, the primary function is to test for infant mortality. In benchtop testing, the primary function is to prove performance of the device prior to releasing the product for production testing. Not all test sockets can perform well in production environments, and not all of them can meet the demanding electrical requirements of benchtop and characterization testing.
Figure 9.1 Contactors and test sockets come in many shapes and sizes. (Courtesy of Johnstech International, Inc.)
Figure 9.2 The contactor as part of the test system: handler docking plate and attaching screws, alignment plate and alignment pin, contactor body and contactor pins, and load board. (Courtesy of Johnstech International, Inc.)
In addition, depending on the capabilities of the contactor, it may help determine “how good” the device is. Because of its nature as an interconnect, the test contactor as it relates to the system interface is frequently one of the key areas to consider for improvement. Contactors are typically used in characterization and production testing. They are designed to test the electrical requirements of the product under the rigors of production test environments. Critical characteristics of contactors include the following:
• The ability to test to, and beyond, the electrical requirements of the DUT. Testing beyond the capabilities may show that the device performs better than specified, allowing the company to move the device into a higher performance application and increase its selling price.
• The ability to test in production environments with little to no downtime, using field-maintainable components. Mean time between assists (MTBA) can be specified upward of 100,000 insertions of a DUT into the contactor.
• The ability to test over temperatures ranging from –55°C to 165°C.
9.2 Types of Contactors
Various types of contactor technologies are in use, corresponding to the style of package to be tested. Contactors are mechanical components that are exercised with each DUT placed onto the load board (Chapter 11), and they have a limited lifetime. Contactors usually consist of a removable assembly that is mounted on the load board. When selecting a contactor, it is important to make sure that the contactor is easy and fast to replace because of its finite life span. Contactors are identified by package type or technology. This is often one of the first delineators when selecting a contactor. Design engineers must know what kind of package they want their device to go into by the time contactor selection comes into play. Some of the package properties that affect contactor performance and design are lead pitch, lead thickness, lead height, package body size (including mold flashing), and overall device thickness. Lead pitch is the distance between two adjacent leads or pads on a package [or balls on a ball-grid array (BGA) type of package]. Lead thickness and height of the package leads can affect the electrical and mechanical behavior of the interface, and package body dimensions affect the mechanical design of the contactor.
From a contactor design standpoint, smaller packages are easier to design for. They have inherently better electrical performance and longer production lives. Larger packages introduce problems such as poor signal integrity, excessive mechanical debris, a higher likelihood of electrical opens and shorts, and increased maintenance. Contactors may also be categorized by technology. Each technology has its own pros and cons, but the key is to select a test contactor that meets the electrical requirements while performing well enough in the production test environment to contribute to test cell efficiency and reduced COT. This section briefly covers the different types of test contacting technology. The two largest problem areas of contactor design and operation are electrical signal losses and solder buildup. The following discussion of contactor technologies explains how these problems impact contactor design.
9.2.1 Spring Pin Contactor
Spring pin, or pogo pin, technology, shown in Figure 9.3(a), uses a spring probe to make contact with the DUT. This technology has very good current-handling and thermal characteristics. Traditionally, it has been the most widely used technology for SoC package types. Areas of concern for this technology in production environments are solder buildup on the tip, long electrical length, pin replacements, and the fact that the spring will continue to plunge until it hits something, regardless of whether it hits a ball, pad, or lead. Overall, however, it is considered quite cost effective and reliable.
9.2.2 Elastomer/Interposer Contactor
Elastomers are implemented in contactors in many ways. Elastomer-based contactors, like that shown in Figure 9.3(b), are sometimes referred to as the interposer type. Particles of gold or platinum can be suspended within the elastomer, or stacked balls of conductive material or continuous vertical wires can be embedded in it. A further enhancement is to embed spring pins within the elastomer. The contact resistance is very good (low), providing an excellent electrical interface. However, routine maintenance can be expected due to wear and solder buildup. Typical lifetimes are on the order of only 10,000 insertions.
9.2.3 Cantilever Contactor
Although effective for burn-in applications, cantilever technology, shown in Figure 9.3(c), is not the best solution for high-performance electrical applications, primarily due to the long electrical length of the contact. The shorter the electrical path, the better the performance. In addition, they suffer from solder
Figure 9.3 (a–d) Types of contactors: (a) spring pin, (b) elastomer/interposer, (c) cantilever, and (d) short rigid. (Courtesy of Johnstech International, Inc.)
buildup because there is no wiping action, and the cantilever can fatigue and break over time.
9.2.4 Short Rigid Contactor
The short rigid technology, shown in Figure 9.3(d), takes advantage of a short electrical path for excellent electrical performance and has a self-wiping action that removes solder oxides. The contact force, pin height, and coplanarity are maintained with high-compression silicone rubber, which adds to the insertion lifetime of the contactor. Field-maintainable components ensure a long life.
9.2.5 Summary of Contactor Types and Their Properties
Table 9.1 summarizes some of the key properties [2–4] of the various types of contactors. The following sections discuss these properties.
Table 9.1 Contact Types and Their Properties

Property | Spring Pin | Elastomer/Interposer | Cantilever | Short Rigid
Reliability | Excellent | Excellent (short lifetime) | Poor | Excellent
Electrical length | Long | Long | Long | Short
Current handling | Excellent | Good | Good | Excellent
Thermal characteristics | Excellent | Poor | Good | Excellent
Solder buildup | Yes | Yes | Yes | No
Contact self-inductance (nH) | 0.5 | 0.1 | — | 0.5
Contact resistance (mΩ) | <40 | <20 | — | <20
Lifetime (insertions) | — | 10,000 | — | 500,000
9.3 Contactor Properties
When choosing a contactor, keep in mind that the contactor must meet certain electrical, thermal, and mechanical performance requirements. From an electrical perspective, the contactor must be able to withstand high power and introduce minimal distortion to high-frequency signals; this means that it must introduce low inductive and capacitive impedances and provide a low contact resistance. When testing RF power amplifiers, where high currents may flow, special contactor materials and large heat sinks may be required. Contactors must also be mechanically reliable enough to withstand many insertions. Consider that a test executed in 1 second could contribute to more than 80,000 insertions per day; even with a contactor having a lifetime of 1 million insertions, that would be less than 1 month of use. Additionally, if the DUT is to be tested at various temperatures, contactors must provide thermal insulation to maintain the DUT at a constant temperature and be able to change temperature without developing condensation that could affect the measured values of a test.
9.3.1 Electrical Properties
When a DUT operates in the RF frequency range (this includes high-speed digital applications operating at upward of 1 GHz), many electrical characteristics must be considered when it is placed in a test cell. Aside from the parasitic behavior and grounding schemes, contactors introduce many other parameters that need to be addressed. However, the items discussed in this section are not only RF specific. They apply to contactor use across the frequency spectrum.
9.3.1.1 Contact Resistance
Contact resistance, sometimes termed CRES, is the dc resistance of a contact point and is the composite of three different resistances: the constriction resistance of the contactor, the film resistance, and the bulk resistance. The values of contact resistance typically worsen over the working lifetime of the contactor, mainly due to contact wear and solder buildup on the contacts from the DUT. This variation in contact resistance is the largest contributor to unstable test yields; the instabilities are caused by debris accumulation and oxidation at the contact surfaces. Values for contact resistance should not exceed a few milliohms. The equation used to describe contact resistance originates from the standard physical contact resistance [5] and is applied to contactors as follows [6]:

R_contact = ρ·√(πH/(4F)) + σ_film·(H/F) + R_bulk    (9.1)
The first term, the constriction resistance, depends on the resistivity, ρ, of the contactor material, its hardness, H, and the normal force being applied, F. Because of the difficulty of accurately assessing all of these parameters, contact resistance is often only approximated and determined experimentally. Film resistance is the portion of the overall contact resistance attributed to oxides and other foreign surface matter on the contact. It depends on the conductivity of the film, σfilm, the hardness, H, and the normal force at the contact, F. High-current, very high frequency, and/or high-speed digital devices demand very low contact resistance to prevent degraded performance. Film resistance is to be minimized because it results in variable and unstable yields. Bulk resistance is the electrical resistance of the material comprising the contact. It is directly proportional to the resistivity of the material and the length of the contact, and inversely proportional to the cross-sectional area of the contact.
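As a worked illustration of (9.1) as reconstructed above (all numerical values below are purely illustrative and are not taken from any vendor data):

```python
import math

def contact_resistance(rho, hardness, force, sigma_film, r_bulk):
    """Evaluate the contact-resistance model of (9.1): constriction + film + bulk terms."""
    constriction = rho * math.sqrt(math.pi * hardness / (4.0 * force))
    film = sigma_film * hardness / force
    return constriction + film + r_bulk

# Illustrative values (SI units) for a plated contact under about 0.3 N of normal
# force, yielding a total resistance in the milliohm range.
print(contact_resistance(rho=2.2e-8, hardness=1.0e9, force=0.3,
                         sigma_film=1.0e-12, r_bulk=5e-3))
```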
9.3.1.2 Inductance
Any contactor pin, or for that matter a piece of ordinary wire, has an inductance associated with it. Inductance is the property of an electric circuit that opposes a change in current. Ideally, the inductance of the contact would be zero, but that is not physically possible. When an alternating current passes through an inductance (contact pin), it sets up a reactance that causes a loss of energy to the propagating signal. It also affects the frequency response of the test setup, which generally shows up as a decrease in the bandwidth of the system. In the case of pulse or square waves, it can cause deterioration to the leading and
trailing edges of the pulses. If the contactor pins are part of the ground circuitry, the inductance causes other undesirable effects. If two or more contacts are in proximity to each other, then each pin will also have a mutual inductance associated with it. This is the additional inductance either added to or subtracted from a contact pin because of an ac flow in an adjacent pin, coupled via electromagnetic fields. The immediate effects of ground inductance in RF circuits and devices are manifested in the form of reradiation. A signal impinging on ground inductive elements can be coupled into the element(s) and reradiated, in which case it can cause interference to other parts of the circuit. Generally speaking, this type of electromagnetic interference is undesirable. Another way ground inductance can deteriorate the performance of an RF device is by reducing its Q (inductive figure of merit). When this happens, the impedance matching between a tuned circuit and its load results in a reduction of bandwidth or output power, plus a decrease in the SNR of the device because of the additional noise injected into the circuitry by the effects of ground inductance. Clearly, ground inductance is undesirable in RF circuits. It also introduces other deleterious effects in high-speed logic circuitry, such as ground bounce. Typical contactor pin inductance should be less than 0.5 nH. Multiple contactor pins can be ganged in parallel to reduce the overall inductance, as long as they are spaced far enough apart to overcome mutual inductance between them. This is sometimes done to minimize the ground inductance in contactors for RF testing.
Example 9.1
Because parallel inductance is treated in the same manner as parallel resistance, if the inductance of a single contactor pin is 0.25 nH, five of these pins in parallel would have a combined inductance of 0.05 nH, assuming they are far enough apart to negate the effects of mutual inductance among them.
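A one-line check of Example 9.1, treating the ganged pins as identical inductors in parallel with negligible mutual inductance:

```python
def parallel_inductance(pin_inductance_nh: float, n_pins: int) -> float:
    """Combined inductance of identical pins in parallel, ignoring mutual inductance."""
    return pin_inductance_nh / n_pins

print(parallel_inductance(0.25, 5))   # -> 0.05 nH, as in Example 9.1
```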
9.3.1.3 Capacitance
Although a contactor pin by itself does not have capacitance, whenever it is put in proximity to another conducting body, a capacitive element is introduced into the system. For instance, where the contact touches the pad or lead of a DUT, there will be a capacitance associated with the contact, or pin. Likewise, where the contact, or pin, touches a pad of a load board, there will be a capacitance associated with the pin, as there will be between the pin and an adjacent pin or between the pin and ground. Capacitance acts as a storage element and pulls energy out of a contactor system when an alternating current passes through its contacts. As with inductance, the immediate overall effect is a reduction in the bandwidth of the system. A more severe impact of capacitance is the removal or reduction of the signal
being transmitted through the system. The reason for this is impedance mismatch, which in turn causes part of the transmitted signal to be reflected back to its source. Because the capacitance between two contactor pins is directly proportional to surface area and inversely proportional to the distance between the contacts, it is evident that the wider the spacing (pitch) between two contacts, the less the mutual capacitance between them. Also, shorter contactor pins will have less capacitance than longer ones; hence, smaller pins that are properly matched to the test cell will have a wider bandwidth. Unfortunately, some of these desirable dimensional properties are disappearing with decreasing device size. Values of self-capacitance typically range from 1 to 50 pF.
9.3.1.4 Electrical Crosstalk
An ac signal travels on a transmission line (or other media) by its electromagnetic field propagating in the dielectric of the transmission line; that is, electromagnetic lines of force are created. When these field lines overlap or interfere with the field lines of another transmission line in the vicinity, part of the signal will be induced into the second line. Likewise, the first line will pick up signals from the second line. This undesired signal interference is known as crosstalk and is related to the mutual inductance and capacitance between the conductors. Several factors affect crosstalk: the pitch of the pads (the distance between any pair of adjacent pads) or leads of the DUT, the grounding scheme between high-frequency contact pins, the contactor body material and thickness, and the geometry of the load board pads at the contactor/load board interface. As a general rule, crosstalk levels should be at least 30 dB below the desired signal at the frequency of interest. DC current leakage between two contacts is a form of electrical crosstalk that can impair signals. It is dependent on the quality of the dielectric between them and the type and degree of surface pollutants on the dielectric material. Normally, a contactor body surface resistance of several megaohms should provide minimal current leakage; however, what constitutes excessive leakage depends on the operating characteristics of the DUT. As an example, 10V applied across a surface resistance of 10 MΩ will cause a current of 1 µA to flow. This could be sufficient to deteriorate the performance of such devices as ADCs, instrumentation ICs, and other input-sensitive DUTs. A leakage current of 1 pA or less between contactor pins is considered good.
9.3.1.5 Grounding
Ground plays many roles in contactors and the circuits comprising them; therefore, care must be exercised in developing the grounding structure. It acts as a reference for the circuit, provides shielding, controls crosstalk between signals, and provides the return path for current flow.
In modern, high-density electronic device packaging, as many as 25% of the leads are grounded to control noise. Proper grounding can make the difference between a circuit that performs marginally and one that performs exceptionally well. A proper grounding scheme for any electronic circuit must take into consideration and implement several salient features. Many items must be considered when choosing a contactor or when designing a load board. Particular attention should be paid to the section of the load board in the vicinity of the contactor. The geometry of the ground path (length and area) is very important. Ground inductance and resistance to ground should be low in order to minimize power supply voltage drop. Ground returns should be carefully designed and routed, and full use should be made of ground to minimize crosstalk and maximize shielding where necessary. High-power RF contactor applications often require extra attention to grounding techniques. In its end-use application, a high-power RF device is firmly soldered to a PC board; during production testing, however, the quality of the ground is a product of how well the contactor grounding connections and paths are designed. The shortcoming of contactor grounds is added inductance, similar to that added by bond wires within the device. This impact is especially noticeable in power amplifier DUTs, in which the input and output ground planes are shared: an out-of-phase output signal can couple back to the amplifier input, causing oscillations [7]. When testing high-speed digital and switching devices, ground bounce is an adverse effect that may be encountered. It is caused by shifts in the internal ground reference voltage due to output switching. It affects a digital circuit by causing noise pulses, which in turn can cause errors called double clocking (receiving extra digital data). It also reduces the dynamic operating swing between high and low digital signals, thereby causing errors in the decoded data. The effects of severe ground bounce can be alleviated somewhat by parity checks and error-correcting coding. The result of ground bounce is a voltage generated across ground pins whose value is the product of the inductance of the pin and the rate of change of current flowing through it. This voltage level lies somewhere between the ground plane of the system and the ground internal to the device package. Fortunately, ground bounce voltages are usually small compared to the full-swing output voltage, and ground bounce rarely impairs transmitted (high-power) signals, but it often interferes with received (low-power) signals.
9.3.1.6 Electrical Length
The electrical length of a typical contactor pin is defined as the shortest electrical distance between the top of the pin where it mates with the pad or lead of the DUT and the bottom of the pin where it touches the pad of the load board.
Because the dimension changes when the DUT is plunged (in the case of spring pin contactors, for example), the electrical length is always specified at the compressed height of the contact, or pin, because that is the length at the time of signal transfer. Electrical length varies with the type of contactor, which in turn is dependent on the upper operating frequency of the DUT. Electrical lengths vary from contactor to contactor. As an example, an ideal contactor used for testing DUTs operating at 1 GHz would have contact pins with an electrical length of 2 mm.
9.3.1.7 Bandwidth
The bandwidth of a test cell or system is determined by the highest frequency the system is able to successfully transmit or receive. Bandwidth is defined somewhat differently depending on whether the system is testing RF, mixed-signal, or logic devices. In the case of RF systems, the bandwidth is often defined as the point where the highest frequency is at half-power, or 3 dB below the maximum level of the midfrequency of the system. For a pulse or digital logic system, the bandwidth is determined with respect to the preservation of the shape of a pulse passing through it. Generally speaking, the bandwidth of a system must support the third harmonic of the highest pulse operating frequency. The RF bandwidth of a test cell or system directly affects the highest frequency that the system will successfully pass. As a general rule, the smaller the physical size of the contactor, the higher its frequency response will be. Contactors typically are lowpass devices. This means they will pass all frequencies from their cutoff (high) frequency down to dc with a uniformly flat amplitude level. The upper bandpass (cutoff) frequency is typically the point 3 dB below the midpoint of the passband. However, many test contactors have the upper cutoff frequency specified at a level 1 dB below the midpoint of the passband. It is wise to match the contactor to the DUT. This means that a DUT with an upper operating range of, say, 1 GHz should be paired with a test contactor whose upper 1-dB limit is perhaps 1.5 GHz. The idea here is to obtain good test performance and good economics.
9.3.1.8 Insertion Loss
Insertion loss (IL) is characterized by the nomenclature S21 and is a measure of the signal delivered from the source to the load through a transmission line. It is defined in logarithmic form as follows:

IL_dB = −20 log|S21|    (9.2)
In the ideal case, the insertion loss is 0 dB, which is 100% transmission of the signal. To complete the definition of IL, the bandwidth of the system must
also be specified. Normally the 1-dB bandwidth is used. It stands to reason that a contactor with a 1-dB insertion loss at a very high frequency will be required for RF or high-speed digital devices.
Example 9.2
Question: Using the aforementioned bandwidth and IL requirements, over what frequency range must a contactor exhibit no more than 1-dB insertion loss when used in a WLAN application operating between 2.4 and 2.5 GHz?
Answer: Up to 7.5 GHz, because this would encompass the third harmonic of 2.5 GHz. Keep in mind that it is essential that the test cell and the test contactor be optimally matched to a specific impedance, usually 50Ω.
9.3.1.9 Return Loss
Return loss (RL) is characterized by the nomenclature S11 and is a measure of the signal returned to the source due to impedance mismatch in the transmission line. It is defined in logarithmic form as follows:

RL_dB = −20 log|S11|    (9.3)
In the ideal case, the return loss is infinite decibels, meaning no reflection. To complete the definition of RL, the bandwidth of the system must also be specified. Usable bandwidth in this case is based on a return loss better than 20 dB (i.e., 1/100 of the power being reflected back to the source). Again, a contactor with a return loss of 20 dB at 7.5 GHz would be adequate for testing WLAN devices, providing the test cell and the test contactor were optimally matched to a specific impedance, usually 50Ω.
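To make the two definitions concrete, the following sketch (illustrative numbers only) evaluates (9.2) and (9.3) for a hypothetical contactor measurement at 7.5 GHz:

```python
import math

def insertion_loss_db(s21_mag: float) -> float:
    """IL per (9.2): 0 dB means all of the signal reaches the load."""
    return -20.0 * math.log10(s21_mag)

def return_loss_db(s11_mag: float) -> float:
    """RL per (9.3): larger values mean less power reflected back to the source."""
    return -20.0 * math.log10(s11_mag)

# Hypothetical contactor data at 7.5 GHz (linear magnitudes, not dB):
s21, s11 = 0.93, 0.08
print(round(insertion_loss_db(s21), 2))   # ~0.63 dB -> meets the 1-dB IL target
print(round(return_loss_db(s11), 1))      # ~21.9 dB -> better than the 20-dB RL target
```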
9.3.1.10 Characteristic Impedance
It is very important to optimize the impedance matching of any test system carrying RF or high-speed digital signals. This means the contactor must be matched to the DUT/contactor interface as well as to the contactor/load board interface. Because of its structural architecture, a contactor will have a characteristic impedance determined by the dielectric constant of the insulating material comprising the housing and the pitch of the pins. The physical size and geometry of the pins also play a role in this. The pads of the load board should be carefully designed to optimize the impedance match so as not to cause undue reflections to an RF signal propagating through the test system. RF systems almost always have a 50-Ω characteristic impedance, hence the test system must be matched or tuned to 50Ω. It does little good to have a super-high-performance contactor if the rest of the test system is poorly matched to it. Any
mismatches will quickly degrade the bandwidth, insertion loss, and return loss of the test system, with the result that devices will start failing during testing, which translates into poor yield. Particularly for testing of RF and SoC devices, it is important that the physical size of the contactor be as small as possible. Small contactors allow the placement of impedance matching inductors and capacitors close to the DUT. In a few cases, manufacturers produce large contactors, but they have material removed from the underside so that matching components may be placed close to the DUT, as shown in Figure 9.4.
9.3.1.11 Equivalent Circuit Model
A contactor pin may look like a simple piece of metal, but in reality it is defined in terms of its resistance, inductance, and capacitance (at a particular frequency). At low frequencies the contactor pins look like simple resistors, but as frequency increases, it is necessary to consider an electrical equivalent circuit, which is more useful in defining the electrical performance of the contactor and for matching the contactor to the DUT and load board. A typical equivalent circuit of a contactor is shown in Figure 9.5. In the equivalent circuit, two adjacent contactor pins are represented; L1 and L2 represent the self-inductances of the contactor pins. M21 is the mutual inductance between the pins. These inductances are a direct contributor to high-frequency instability, ground bounce, ringing, and a reduced electrical operating window in device testing. They also limit the upper frequency response of the contactor.
Figure 9.4 Some contactor bodies have machined pockets to allow placement of decoupling and impedance matching components. The figure labels the DUT, contactor, thin mount capacitor, load board, and decoupling pocket. (Courtesy of Johnstech International, Inc.)
Figure 9.5 Equivalent circuit of a contactor, representing two adjacent contactor pins.
C21a and C21b are the mutual capacitances between adjacent pins at the DUT and load board surfaces of the contactor, respectively. These capacitances act like filters, limiting the bandwidth of the contactor. R1 and R2 are the shunt resistances of each of the contactor pins. Although they are most significant at low frequency, at RF frequencies they represent high-frequency loss due to skin effect and dielectric loss. They also limit the high-frequency response as signal loss occurs through the pin.
9.3.1.12 ESD
Electrostatic discharge (ESD) can be a problem for DUTs, and the contactor plays a major role in this. Namely, charge builds up on the contactor body due to induction and mechanical friction between the handler, DUT, and contactor. Charge can be transferred from the handler to the contactor as the DUT is plunged. It can be transferred physically, referred to as triboelectric charging, or it can be transferred by induction as the two bodies come into proximity. To combat ESD, some handlers have built-in ionizers; however, this is not often necessary because many other precautions in the design of handlers and contactors address ESD. ESD-oriented (low-resistivity) plastics and ESD coatings are used in the parts of the handler that come into contact with the contactor, and in the contactor bodies as well [8].
9.3.2 Thermal Properties
Any active electronic device (e.g., integrated circuits, transistors) will generate heat because of inherent inefficiencies in processing and/or converting electric
signals that pass through them. The internal heat created must be dissipated to the environment outside the package of the device to prevent device damage or shortened reliability/life of the device. In test environments, heat is dissipated away from the device mostly via conduction, but convection can account for some heat transfer. The maximum operating temperature of a solid-state device is directly dependent on the substrate material of which it is formed. CMOS semiconductors can operate up to temperatures of +175°C, but not for extended periods of time. Gallium-arsenide substrate materials can operate at a maximum temperature of +150°C, but again, not for long. To be safe and ensure good reliability, one should not allow the temperature of a die to exceed 80% of its maximum specified temperature. This means that a DUT must have a good, low thermal impedance path to the room environment during testing and final operation. The advent of the use of semiconductor devices in automobiles has created a need for high-current, short-duty-cycle devices that must operate in an environment that experiences extremes of temperature at both ends of the thermal spectrum. Because heat rises within the DUT leads, managing this heat is a paramount role of the contactor. Contactors need to be checked to ensure they will handle the high temperatures and currents generated during testing. The housings of many contactors are made of a plastic such as Torlon. At 282°C, this material begins to transition from a stable solid state to a presoftening phase, which means it loses some structural stiffness. Regarding the contact pins in the contactor, the surface area where they make contact with the DUT and the load board is quite small, on the order of 10⁻⁷ m² or less. Because of the diminished area of contact, the current density per unit surface area can be very large. This creates heat buildup which, if high enough, will microweld the contact to the DUT. These facts alone should be cause enough to take proper precautions to ensure that the contactor is robust enough for the job. When sizing a device to a contactor, especially devices that carry high current or generate a lot of heat, several things must be considered. These include the current-carrying capacity of the contacts in the contactor and the thermal impedance.
9.3.2.1 Current-Carrying Capacity
Current-carrying capacity is highly dependent on the contact resistance of a contactor pin and its geometry. Recall that we mentioned earlier that contact resistance is mostly comprised of the bulk resistance of the material of which the pin is made and the interface resistance where it contacts another surface. The geometry affects this parameter because pin resistance is a function of the conductivity, length, and cross-sectional area. Many contactor pins are made of nickel/gold-plated beryllium copper or other materials having a high conductivity.
It is important to note that the fusing current will limit the current-carrying capacity of a contact pin. The current-carrying capacity of a pin can be calculated and experimentally verified by lab tests. A pin with a small cross-sectional area may safely carry only 1 to 2A, whereas a pin with a large cross-sectional area may carry up to 10A or more. Some contactor designs use several pins in parallel to transfer a known current that would exceed the safe current-carrying capacity of a single contact. An electric current passing through a contact pin will cause internal heating of the pin due to its electrical resistance. This joule heating will cause a temperature rise across the pin, with more current causing higher temperatures and vice versa. If the pin gets too hot, it can cause the plastic surrounding the pin to soften and eventually fail. Certainly the performance of the test contactor would be degraded. Manufacturers of contactors typically specify their products as having the ability to carry a certain current at a contact temperature rise of 20°C above ambient. As a rule of thumb, large contactors can safely carry more current per unit thermal rise than small ones. The values can range from 1A for a beryllium-copper pin 1 mm in length (cross-sectional area approximately 0.5 × 10⁻⁸ m²) to 5A for a pin of the same material with a length of 4 mm and a cross-sectional area of 1 × 10⁻⁶ m².
9.3.2.2 Thermal Impedance
The thermal impedance θ of a contactor is a measure of the thermal rise (in degrees Celsius) within a material body that is transferring a known quantity of heat (in watts) through it. It is stated in units of °C/W:

θ = (T2 − T1) / (P2 − P1)    (9.4)
where T1 and T2 define the thermal rise and P1 and P2 are the quantity of heat (power) in watts. Each part in a test cell has a thermal impedance associated with it and the overall performance of the cell is dependent on how effectively the combined thermal impedance will transfer heat from the die of the DUT to the environment of the area in which the test is occurring. When the contactor is bolted to the load board of a test cell, then the thermal impedances of the DUT, the load board, and the load-board-to-air environment must be summed into the overall thermal impedance for the test cell. A value for thermal impedance is specified for the contactor, and consists of the individual thermal impedances of the pins, thermal block, housing, and the interfaces (as a function of force and contact area) properly combined. High power or high current-carrying DUTs that generate heat in the neighborhood of
several watts are the largest culprits of thermal problems. These DUTs often have a ground/thermal pad on the bottom of the device. This pad serves as a transfer point to remove heat from the device and transfer it to a suitable heat-dissipating medium. Contactors suitable for test with these devices generally have a thermal block of copper in the floor of the housing. This block may or may not be equipped with contacts whose purpose is to provide concentrated points of contact for heat transfer.
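As a rough numerical illustration of how the series thermal impedances in a test cell combine (the element names and values below are invented for the example and are not taken from any datasheet):

```python
# Series thermal path from the DUT die to ambient: each element adds its theta (deg C/W).
THETA_C_PER_W = {
    "die_to_package_pad": 3.0,      # illustrative values only
    "contactor_thermal_block": 1.5,
    "load_board": 4.0,
    "board_to_ambient": 8.0,
}

def die_temperature_rise(power_w: float) -> float:
    """Temperature rise of the die above ambient for a given dissipated power."""
    return power_w * sum(THETA_C_PER_W.values())

# A 3-W power amplifier DUT would run roughly 50 deg C above ambient through this path:
print(die_temperature_rise(3.0))   # -> 49.5
```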
9.3.3 Mechanical Properties
Because tens of thousands of DUTs are placed into and removed from the contactor daily at very high speed, it is imperative to understand the mechanical properties that make it successful. First and foremost, there must be a means of repeatably positioning the DUTs within the contactor. Failure to have this will result in premature wearing of the contactor and, worse, reduced yields due to electrical shorting or jams that require test floor operator intervention and downtime. Decreasing DUT package sizes constantly make alignment more difficult. This section discusses the mechanical aspects of contactors. Many of the concepts discussed in this section can also be found in Chapter 11, where load boards are discussed in detail.
9.3.3.1 Alignment
Alignment of the plunging action of the DUT into the contactor is likely the most critical mechanical concern for the contactor. Alignment is achieved in various ways. Initially, manual alignment of the plunger and contactor is performed during setup of the test cell. This can be done as often as each time the handler is removed (undocked) from and remated (docked) to the tester, although most modern docking solutions enable redocking without extensive alignments after an initial coarse alignment has been performed. To add further assurance of DUT alignment with the contactor, alignment mechanisms are designed into the contactor. The alignment mechanism is a pattern in the contactor providing a path for correcting and calibrating the interaction of the device leads with the contactor leads or pins. The design of these alignment paths within the contactor body is becoming more important as DUTs and lead spacing both get smaller and as the number of leads per device continues to increase. The two common ailments caused by misalignment occur when the DUT leads do not make contact with the contactor pins, causing false continuity failures, and, worse, when DUT leads become lodged in the contactor, requiring operator intervention. (See Section 9.3.3.3 on jams.)
9.3.3.2 Wobble
DUT packages may vary in dimension simply due to statistical process variations. When parts manufactured to the low end of the dimensional specification are placed in a contactor designed to nominal values, they fit loosely in the socket. This can lead to the effect known as wobble, the uncontrolled positioning of the DUT in the contactor. The impact of wobble is uncertainty in aligning the leads of the device to the mating contacts in the test contactor. Another result of wobble is that the leads of the DUT may become lodged between contacts or connected across the tops of multiple contacts. This is a short-circuit condition. A short circuit can cause damage to the DUT when supply power is applied. It can cause electrical arcing and sparking in the contactor, which will leave carbon deposits that contaminate the contactor. These carbon deposits build up and create current leakage between the contacts in the socket. In contrast to the short-circuit condition, wobble can also lead to a situation in which the DUT leads do not mate with the contacts in the test contactor. This creates open circuits in which continuity tests will cause false failures, reducing yield. To correct for potential mechanical ailments such as wobble, alignment plates and nests are designed to guide the leads of the part into the contactor and onto the contacts in the contactor. With proper alignment, the device leads mate with the contacts in the test socket every time.
9.3.3.3 Jams
When the leads of the component are lodged between contacts, this is referred to as a system jam. Most automated handling systems are not equipped to remove components from a contactor; therefore, jam conditions require operator intervention. Jam conditions mean lost throughput and an additional cost to testing. Parts inserted on top of the jammed part will obviously test incorrectly, which leads to additional productivity loss. When parts are manufactured to the high end of the dimensional specification (i.e., when parts are larger than the nominal specification), a tight fit results. The tight fit is most often referred to as a constrained system. The impact of constraint is lead bending and contact mashing, with automated material handlers jamming. The end result can include damaged devices, reduced test socket life, and a significant reduction in the overall effectiveness of the equipment.
9.3.3.4 Insertion Force and Overtravel
Insertion force and overtravel pertain to how the DUT is controlled as it is inserted into the contactor. Insertion force and overtravel of the device lead on the contact must be controlled and held to certain limits. If there is not enough
overtravel, the amount of wiping motion between the contactor pin and the DUT lead will not provide a good electromechanical connection. Electrically, the lack of overtravel results in a high-resistance connection. The high-resistance connection starves parts of the necessary current, and parts may falsely be deemed marginally passing or, worse, falsely failed. If there is not enough overtravel, the tip of the contactor pin is also not able to clean itself, resulting in solder buildup. Solder buildup can cause good parts to fail in test and requires taking the test system off-line more frequently for contact cleaning. Again, there is lost throughput with increased costs of testing.
9.3.3.5 Hard Stops
Hard stops are used to control the amount of device travel or motion into the contactor. Specifically, it is the amount of contact deflection and contact wipe on the lead of the DUT that is regulated. This allows a repeatable insertion, to within the same limits, each time. Both socket and contactor components are damaged when devices are inserted beyond the manufacturer’s limits. The results of hard stops not being maintained include bending of DUT leads, bending of cantilever contacts, or tearing of the contactor elastomer, resulting in a gradual reduction in yield. DUT manufacturer specifications usually include overtravel distance and deceleration rates.
9.3.3.6 Contact Spring
There is always some degree of spring built into the contactor pins. The spring determines the contact pushback force on the lead of the device. This creates consistent contact pressure on the leads of the DUT for a good electromechanical connection. In some types of contactors, such as the cantilever, the spring action is built into the cantilever contact. Flexing of the cantilever contact causes the metal of the contact to fatigue over time, causing the pushback force to degrade. This can lead to noncoplanarity, as shown in Figure 9.6(c). With the loss of coplanarity, the vertical heights of the contacts change with each device insertion. It is almost impossible to reproduce the contact point on the device lead when the contact tip hit point changes. The spring action for rigid-type contacts is often produced with a high-compression silicone rubber elastomer. The elastomeric element repeatedly returns the contact tip to its original height while maintaining its spring force, even after hundreds of thousands of device insertions. Over time, the elastomer can develop damage; this damage is addressed in routine maintenance and inspection.
9.3.3.7 Mechanical Crosstalk
Movement of a contact pin solely due to a response from the movement of an adjacent pin is referred to as mechanical crosstalk. While it is most prevalent in
Figure 9.6 (a–c) Degraded contact spring action can lead to a condition known as noncoplanarity: (a, b) nominal deflection; (c) overdeflection of the contactor pins. (Courtesy of Johnstech International, Inc.)
elastomer and interposer-based contactors, it can exist in solid body contactors as the DUT lead/ball pitch becomes smaller.
9.3.3.8 Interface Analyzers
An interface analyzer [9] is a device used as a debugging, troubleshooting, or interface development tool. It is simply a transparent version of the contactor. The contact pattern is simulated with rectangular markings etched on the device and load board sides of the analyzer. The number of slots, their location, and their spacing are identical to those of the contactor design. Elastomer positions are represented by etched markings on the contact centerline for both ends of the contacts. The interface analyzer is very useful because it allows device and load board contact points to be determined without damaging devices or the load board. From a mechanical standpoint, it also can aid in determining device plunge placement alignment and repeatability. It aids in load board design by determining trace routing, keep-out areas, and contact pad alignment. It also aids in planning the placement of decoupling and impedance matching components.
9.4 Load Board Considerations
This section discusses the role of the load board in relation to contactors. A detailed description of the load board is provided in Chapter 11. Optimizing performance essentially means matching the test contactor to the load board. Because a load board is basically a printed circuit board (PCB) of robust structural nature, the rules for its design follow precisely those for a PCB. Some things to consider in the design are listed here and constitute proper layout practice:
• Use matched impedance traces up to the device or test contactor.
• Place decoupling components close to the device. Recall that some contactors have provisions to allow components to be placed very close to the DUT.
• Optimize the pad aspect ratio where the pads mate with the contactor.
• Separate high-frequency traces from ground as traces are routed away from the contactor.
A well-designed load board, matched to the contactor, will pay excellent dividends in the performance of the test cell. Another key part of the interface is shown in Figure 9.7: the use of a load board stiffener. A stiffener is critical to prevent flexing of the load board as the handler plunges the DUT into place, as shown in the figure.
9.5 Handler Considerations

Significant variations, such as the orientation of the test plane and the number of sites, exist among handlers. There are also many factors that affect handling accuracy, such as package mold parting lines on the device body, which can cause interference, and misalignment and flash (excess loosely attached material on the DUT package), which can create debris. These also affect contactor life and performance. Additionally, as the industry moves toward smaller, finer pitched devices, engineers working on handler interfaces are faced with a multitude of issues. Smaller devices are more fragile and more difficult to locate, move, and contact. Smaller devices provide less surface area with which to maintain horizontal and vertical control, and the overall area to contact on the device is reduced. To ensure that the contactor will work on the first pass of the design and provide higher first-pass yields, customers must provide the contactor manufacturer with some information about the handler and its operation. Some of these items are the handler make and model number, alignment plate design, handler
sequence of events while handling or aligning, and theory of handler operation. These are discussed in Chapter 10 on handlers.
9.6 Overall Equipment Effectiveness

Overall equipment effectiveness (OEE) is a performance measure taken from the total productive maintenance (TPM) program [10]. It is used to understand, maintain, and improve the efficiency of equipment and is a measure that companies can use to track and validate business unit performance. The elements of OEE for a given process include the availability rate, performance rate, and quality rate, defined as follows:

Availability Rate = (Operating Time − Down Time) / Operating Time  (9.5)

Performance Rate = Total Output / Potential Output at Rated Speed  (9.6)

Quality Rate = Passed Output / Total Output  (9.7)
Within the test cell, overall performance varies depending on a plethora of variables. Ideally, everything would operate with no yield losses and minimal maintenance. In real life, however, this is rare. With blind builds, package variations, handler variations, docking variations, maintenance and sustaining programs, and load board conformance, performance results often vary from interface to interface. Detailed descriptions of the various aspects of OEE can be found in [11].
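To make the three rates in (9.5) through (9.7) concrete, the following minimal Python sketch computes them for a hypothetical test cell; the operating hours and device counts are invented for illustration and are not data from this chapter.

    def oee_rates(operating_time, down_time, total_output, potential_output, passed_output):
        """Compute the three OEE component rates defined in (9.5) through (9.7)."""
        availability = (operating_time - down_time) / operating_time   # (9.5)
        performance = total_output / potential_output                  # (9.6)
        quality = passed_output / total_output                         # (9.7)
        return availability, performance, quality

    # Hypothetical shift: 20 h of scheduled operating time with 1.5 h of downtime;
    # 180,000 devices processed out of a rated potential of 200,000; 171,000 pass.
    a, p, q = oee_rates(20.0, 1.5, 180_000, 200_000, 171_000)
    print(f"Availability rate: {a:.1%}")   # 92.5%
    print(f"Performance rate:  {p:.1%}")   # 90.0%
    print(f"Quality rate:      {q:.1%}")   # 95.0%
    # In TPM practice the product of the three rates is commonly reported as the overall OEE figure.
    print(f"Overall OEE:       {a * p * q:.1%}")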
9.6.1 The Contactor and OEE
The contactor is the interconnect between the DUT and the other components of the test system. Because of this, the contactor is often looked at as either the source of problems or the source of improvements with regard to OEE. Contactors fit into the formulas because that is where one often first looks to understand the impact of problems, and it is also where one looks to sustain, maintain, or improve the efficiency of the interface. The investigator may start with the contactor because of its relationship to the DUT and the other test components. Investigators may also start here for feasibility and cost reasons. As one of the least costly items of the test system, the test contactor is more quickly and less expensively serviced and/or replaced. Test contactors can impact performance in any of the following ways:

• Equipment failures: line down, broken contactors, torn elastomers, worn-out alignment plates, bad spring pins, and so forth;
• Setup and adjustments: difficult setups taking up to 3 hours, often due to lack of docking repeatability, worn-out bushings, or load board violations;
• Idling and minor stoppages: chronic intermittent performance such as yield fluctuations due to package variables or poor interface and docking issues;
• Reduced speed operation: package trim-and-form protection against lead bending, or Pb (lead)-free or elastomeric response characteristics;
• Scrap and rework: all retest, which is time-consuming and costly;
• Startup losses: during implementation, startup losses can be significant.
The goal of understanding OEE is to improve the effectiveness of the test system to increase performance results. Unfortunately, the investigator will find
that most problems in production are not contactor design issues. More often they are related to fit, form, and function. One way to minimize negative impact to OEE is to ensure that contactor and handler companies work together to develop an interface that optimizes the test system to improve production performance results. Often the test engineer acts as a liaison between these two vendors. Be aware that contactor companies and handler companies measure their contribution to the customer's success and end results differently: handler companies may look at jam rates, whereas contactor companies may look at yields. Additionally, not all contactor and handler companies have the expertise for this task in-house and are often forced to outsource this function.
9.7 Maintenance and Inspection of Contactors

Proper contactor maintenance and inspection can lead to higher yields. Routinely check for and replace worn contacts. Maintenance schedules depend on individual test system setups and practices. Items that may affect maintenance and inspection schedules include handler design and setup, correct plunge depth and device placement, package variations, lead plating variations, and general test floor maintenance activities. Key items to look for, depending on the contacting technology used, are worn contacts and damaged elastomers. These impairments are depicted in Figure 9.8.

9.7.1 Contactor Cleaning
Three primary methods are used to clean contactors: replacement, physical contact cleaning, and nonphysical contact cleaning. The first approach is to simply replace the contactor. Although this is the most expensive choice, it is also the safest and most thorough. Alternatively, physical contact methods such as brushing or using abrasive material can be used to clear debris or remove oxide buildup. Note that with these methods it is easy to damage the contacts and remove protective plating. Finally, nonphysical contact cleaning is the safer of the two cleaning approaches. This could be as simple as blowing with compressed air or, better yet, an inert gas such as argon or nitrogen. Still, caution must be taken so that debris does not get scattered into undesirable areas or lodged into the contactor, preventing proper operation. Removing the contactor from the load board and cleaning it entirely, or in parts, in an ultrasonic bath is another relatively safe alternative. The main caution with this method is to make sure that the solvent used in the ultrasonic bath is nonreactive with the materials used in the contactor (contactor body, elastomer, protective plating).
Figure 9.8 Damaged and worn contactors: (a) worn contacts and (b) torn elastomer. (Courtesy of Johnstech International, Inc.)
9.8 Manual Hold-Downs

For engineering and characterization purposes it is often desirable to have a contactor with a clamp, or hold-down, on it so that a test engineer can manually place a DUT onto the load board. This is critical during load board debugging because it allows impedance matching to be performed on the load board without having to work around the handler.
9.9 Cost Considerations

Contactors are subject to cost-accuracy trade-offs. If utmost accuracy of measurements is needed, it may be necessary to select an expensive contactor with a low lifetime (low number of insertions). On the other hand, if accuracy is not the most important parameter and maximum throughput is, then a low-cost contactor with a long lifetime may satisfy the requirements. Regardless of the combination, all of the costs of the contactor, downtime to replace, and frequency of replacement must be considered.
Acknowledgments

The authors would like to thank Johnstech International Corporation personnel for their personal communication, contributions, and support for the technical details of contactor design and performance.
References

[1] Johnson, D., "Test Sockets and Contactors, A Critical Component of Semiconductor Production Test," Johnstech International Corporation Application Note, 2004.
[2] Johnson, D., Test Contactor Performance Handbook, Johnstech International Corporation, 2004.
[3] Johnson, D., "Reducing the Cost of Semiconductor Testing with High-Performance Contacting Technology," Johnstech International Corporation Application Note, 2004.
[4] Knudsen, R., "Good Contact Design Improves Test Performance in BGA/CSP Applications," Johnstech International Corporation Application Note, 2004.
[5] Holm, R., Electric Contacts: Theory and Application, 4th ed., New York: Springer, 1967.
[6] Broz, J., and G. Humphrey, "Controlling Test Cell Contact Resistance with Non-Destructive Conditioning Practices," 2004 Burn-In and Test Socket Workshop, Mesa, AZ, March 7–10, 2004.
[7] Wartenberg, S., "Contactor Design for High-Volume RF Testing," Microwave Product Digest, February 2003, pp. 1–8.
[8] Tan, J., and T. Fong, "Design Characteristics of Test Contactor & ESD Concerns," 2000 Burn-In and Test Socket Workshop, February 27–29, 2000.
[9] Johnstech International Corporation, Part number 12033/148/101, patent applied for April 10, 2001.
[10] Nakajima, S., TPM Development Program: Implementing Total Productivity Maintenance, Portland, OR: Productivity Press, 1989.
[11] Sherry, J., and D. Haupt, "Optimizing the Whole Test System to Achieve Optimal Yields with Lowest Test Costs," Proc. 29th IEEE Intl. Electronics Manufacturing Technology Symp., 2004.
10 Handlers

10.1 Introduction

When production testing of any packaged semiconductor device is performed, one of the major capital investments is the handler. The handler is a robotic tool for placing the DUT into position to be tested [personal communication with Kevin Brennan, Delta Design, 2006]. It communicates with the tester and provides the temperature stimulus and the means to handle the DUT while it is being tested. To demonstrate the significance of the handler, consider that today a test system can cost up to a few million dollars. Although the test handler may cost less than 10% of this amount, it is the handler that determines production test cell utilization. Expressed differently, if a handler could offer twice the productivity, then only half the number of multimillion-dollar testers would be needed [1]. In communicating with the tester, the handler provides signals that inform the tester when the DUT is ready for test, and it receives binning information from the tester after the DUT is tested. The communication between handler and tester is controlled by specific software, sometimes known as a driver. After the test is performed, the handler places the DUT into an appropriately selected “pass” bin or “fail” bin. Modern test systems offer enhanced and plentiful resources to enable multisite, parallel, and concurrent testing. Modern handlers have to keep pace to be able to handle these architectures; otherwise, the multisite-capable tester is useless. Until recently, capital investments for semiconductor manufacturers primarily focused on improving yields, or the percentage of good devices in a batch
or lot, rather than on maximizing productivity, or how many good devices the test cell can process in a given unit of time [1]. In some cases, this may not be the best methodology. Consider that modern test equipment capabilities have increased and device fabrication costs have decreased. In some cases it may make sense to focus on enhanced throughput to net higher numbers of good parts rather than to focus on improving lot yield by a few percent. Handlers are found in many varieties and have many different features. This chapter provides an overview of handlers, including information critical for the handler selection process. It covers the topic of handlers in some depth, even beyond the bounds of handlers used for RF and mixed-signal testing. In researching this chapter, the authors found little published overview documentation on handlers for production testing, but the references at the end of this chapter provide more detailed information on the specific types of handlers.
10.2 Handler Types

First and foremost, handlers come in many varieties so that they can move the many different package types that need testing. The four major handler types are gravity-feed, pick-and-place, turret, and strip test handlers.

10.2.1 Gravity-Feed Handlers
Gravity-feed handlers (Figure 10.1) work best for packages that are mechanically quite solid and can withstand friction on a sliding surface. Such package types include the dual inline package (DIP), small outline integrated circuit (SOIC), miniature small outline package (MSOP), thin small outline package (TSOP), and leadless chip carrier (LCC). A gravity-feed handler usually feeds the devices into a slider via transportation tubes. When the device gets to the slider, or lead rail guide, it slides down to the load board by means of gravitational force. Because smaller, lighter packages pose a problem with friction, some handlers integrate air blowers into the channel along the gravity slider to assist in accelerating the DUT to the load board. As far as RF applications go, gravity-feed handlers are typically used for lower pin count, discrete RF devices such as power amplifiers or low-cost RF devices.

Figure 10.1 Gravity-feed handler. (Courtesy of Multitest.)

10.2.2 Pick-and-Place Handlers
Advances in packaging technology and the introduction of more recent package types such as ball grid arrays and chip scale packages (CSPs) prompted the need for pick-and-place handlers (Figure 10.2). This was also necessitated by the
popularity of quad flat pack (QFP) devices that could not be handled in gravity-fed machines, the most widespread type of machine in use up to the early 1990s. Pick-and-place handlers can work with almost all types of packages. Typically using suction, this handler moves the DUT from a transportation tray to the load board contactor socket. The precision movement in these handlers is controlled through stepper motors. Pick-and-place handlers often employ numerous vacuum solenoids, rather than electrically controlled switches. This minimizes the introduction of electrical noise into the production testing environment. Pick-and-place handlers are the most common type used with modern, highly integrated, higher pin count RF/SoC devices.

Figure 10.2 Pick-and-place handler. (Courtesy of Delta.)

10.2.3 Turret Handlers
Turret handlers (Figure 10.3) are used when the devices arrive at the test area in bulk form (e.g., bags or other large-quantity containers). The devices are poured into a bowl feeder and vibratory motion in the bowl forces the devices to be oriented and fed to the load board in the proper test position. These handlers
provide throughput equivalent to, or greater than, that of gravity-feed handlers, but the types of devices that they can test are limited. Turret handlers are often used with very simple devices such as passive components or RF switches. Some turret handlers capture the part on a centrally located chuck, usually out of a bowl, and move it around to one or more perimeter stations. The stations can be test sites, inspection, marking, binning, tape-and-reel, and so forth.
Figure 10.3 Turret handler. (Courtesy of Aetrium.)

10.2.4 Strip Test Handlers
Strip test, or test-on-strip, handlers (Figure 10.4) are a more recently introduced generation of handlers. They are designed for testing unpackaged parts or devices that are in their lead frame assemblies, prior to assembly into the final packages. Strip testers were implemented in the late 1990s to cost-effectively test devices, such as memories, which are tested in a highly parallel configuration (e.g., 16 or more sites) [2]. More recently, RF and mixed-signal devices have been proven testable using strip testers [3, 4]. Figure 10.5 shows a strip test setup, whereby 48 devices are tested on each touchdown of the handler [5]. The key advantage of strip testing is the short index time required to linearly move the strip to the next group of devices. The index time per DUT is that same short index time, but divided by the number
of DUTs tested on a touchdown (in this case, 48), which becomes remarkably low. The primary requirements for strip testing are that the devices be securely mounted in the lead frame and that they have sufficient electrical isolation from each other to eliminate signal crosstalk. Table 10.1 shows the four main types of handlers and the package types they can test. Although strip test handlers do not test packaged devices, for a given package they test the devices in strip or panel form on their lead frames.

Figure 10.5 Strip testing 48 devices in parallel. (Courtesy of Amkor.)
10.3 Choosing a Handler Type

In addition to package type, another major determinant of the handler type is how the devices are delivered to the final production testing stage (i.e., trays or tubes). If the devices are delivered to the test area in tubes, then a gravity-feed handler will likely be used. If the devices are delivered in trays, then a pick-and-place handler will be used. If the devices are delivered in bulk (typical of small package sizes), they will likely be tested in turret-based handlers. Finally, if the devices are still in the form of a substrate, then a strip test handler will be used.
Figure 10.4 Strip test handler. (Courtesy of Delta.)
Table 10.1 Handler Technologies and the Package Types They Can Test

Gravity-Feed Handlers: DIP, SOIC, MSOP, TSOP, LCC, MCR, PLCC, SIP, SOJ, SO, SSOP, TSSOP
Pick-and-Place Handlers: MSOP, QFN, CSP, BGA, SOT, PLCC, CLCC, TO, µBGA, QFP, TSOP, TSSOP, SOIC, PGA, LGA
Turret Handlers: TO, SOT, CSP, QFN, MLF
Strip Test Handlers: CSP, BGA, QFN, MLF/leadless, QFP
Typically, manufacturers are set up for a particular delivery mechanism, and changing this is often undesirable. Some packages can be tested in more than one type of handler. In the case of such packages, the choice of handler type is based on either an industry-accepted standard or, often, on the installed base at the customer or at the test house. If a company that uses the pick-and-place method is transitioning its devices to QFN, it would likely prefer to stay with pick-and-place handlers. Likewise, a company that is dominated by gravity machines would likely opt for
gravity-feed handlers. The one ideal package type for handlers is the QFN package. The QFN can run well on gravity-feed (in tubes), pick-and-place (in trays), strip test (substrate), or turret-based (in bulk) handlers [personal communication with Kevin Brennan, Delta Design, 2006]. From an alternative point of view, when choosing a package for the devices, the cost of packaging needs to be weighed against the cost of converting handlers.
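The delivery-medium rule of thumb described above can be summarized in a few lines of Python; the mapping below simply restates the text of this section, and the names are illustrative, not part of any handler vendor's software.

    # Rule of thumb from Section 10.3: the delivery medium largely dictates the handler type.
    HANDLER_BY_DELIVERY = {
        "tubes": "gravity-feed",
        "trays": "pick-and-place",
        "bulk": "turret",
        "strip or lead frame": "strip test",
    }

    def suggest_handler(delivery_medium):
        """Return the handler type typically used for a given delivery medium."""
        return HANDLER_BY_DELIVERY.get(delivery_medium, "unknown: review package type and installed base")

    print(suggest_handler("trays"))   # pick-and-place
    print(suggest_handler("bulk"))    # turret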
10.4 Throughput

Throughput is a measure of the number of devices that can pass through a handler in a given amount of time. The standard rating of handler throughput is maximum throughput. This assumes 100% operational efficiency of the test cell (tester, handler, and all other peripherals) and, most importantly, it assumes zero test time, effectively singling out the time limited only by the handler. Additionally, the index time of the handler (i.e., the time it takes to place a tested DUT into the appropriate bin after testing is complete and obtain and place a new DUT into the contactor socket) can be a critical factor, especially when the test execution times are less than a second. A detailed discussion of index time is presented in Section 10.4.2. Apart from this handler-only rating, the standard measure of throughput for the semiconductor industry is the number of devices that are tested in a 1-hour period of time, or units per hour (UPH). In the case where the test time is longer than the index time of the handler, throughput is calculated as [6]

UPH = (3,600 × N_DUT) / (t_index + t_test)  (10.1)
where 3,600 is the number of seconds in 1 hour, t_index is the index time (in seconds) of the handler, t_test is the DUT test time in seconds, and N_DUT is the number of DUTs being tested. For a multisite setup, N_DUT is equal to the number of active sites. The word active is used because it is important to consider that all sites may not always be populated, for example, at the end of a tray (in some older handlers). When the number of available sites (active or not) is used in (10.1), the result is the theoretical handler throughput for multisite operation at a given test time. A point worth mentioning is that for long test times (much longer than the handler index time), all handlers operate at essentially the same speed; in essence, index time becomes irrelevant. Considering this, if test times were always high, it would be best to choose the lowest-priced handler. Equation (10.2) is used to judge the throughput of the handler alone. The common term that a handler manufacturer will use is handler throughput at zero test time. This throughput calculation is independent of test time and depends only on the handler index time:

UPH = (3,600 × N_DUT) / t_index  (10.2)
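As a quick check of (10.1) and (10.2), the short sketch below computes UPH for a hypothetical quad-site setup; the 0.5-second index time and 0.8-second test time are illustrative values, not vendor specifications.

    def uph(n_dut, t_index, t_test=0.0):
        """Units per hour per (10.1); with t_test = 0 this reduces to the
        zero-test-time handler rating of (10.2). Times are in seconds."""
        return 3600.0 * n_dut / (t_index + t_test)

    # Hypothetical quad-site pick-and-place handler.
    print(uph(n_dut=4, t_index=0.5, t_test=0.8))   # about 11,077 UPH with testing
    print(uph(n_dut=4, t_index=0.5))               # 28,800 UPH at zero test time, per (10.2)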
Table 10.2 shows a number of commercially available handlers and gives some of their throughput characteristics. While the major deciding factor for handler choice is the package type to be tested, handler manufacturers sometimes design a handler for a target technology, such as testing memory devices or testing high-power devices. For example, it can be seen that memory-test-oriented handlers tend to have a large number of sites to enable massively parallel testing. Also, the throughput values in Table 10.2 correspond to zero test time, so that the throughput is limited by the handler only. The Index Time column reflects the real index time for single-site operation. To get the effective index time for multisite operation, divide this number by the number of active sites.

10.4.1 Number of Sites
The number of sites that a handler is capable of providing is also important. The number of sites available on a handler can be anywhere from one to more than 256 sites. However, for RF/SoC testing, quad site is considered state of the art. Handlers with greater than four sites are designed to accommodate devices with a high degree of digital testing or BIST, such as memory devices.
Table 10.2 Various Handlers and Their Throughput Criteria

Handler | Target Devices | Type of Handler | Maximum Number of Sites | Throughput (UPH) | Index Time (ms)
Advantest M4541A/AD | Various | Pick-and-place | 8 | 6,000 | N/A
Advantest M4551A | Image sensors | Pick-and-place | 8 | 2,200 | N/A
Advantest M6541A | Memory | Pick-and-place | 64 | 6,000 | N/A
Advantest M6542A/AD | Memory | Pick-and-place | 128 | 6,200 | N/A
Advantest M7321A | Memory | Pick-and-place | 16 | 1,700 | N/A
Advantest M7341A | Memory | Pick-and-place | 16 | 1,700 | N/A
Aetrium 55V6 | Any QFN, SOIC | Gravity-feed | 2 | 12,500 | 250
Aetrium V8 | Any QFN, SOIC | Gravity-feed | 4 | 24,000 | 250
Aetrium 3000 | Any PQFP, BGA | Pick-and-place | 2 | 3,000 | 750
Aetrium 5800 | Any QFN, SOT | Turret | 4 | 16,000 | 225
Aetrium 8832 | Any QFN, SOT | Turret | 8 | 24,000 | 150
Delta 1688-ES | RF/SoC | Pick-and-place | 2 | 6,000 | 500
Delta Castle | RF/SoC | Pick-and-place | 9 | 5,200 | 550
Delta Edge | RF/SoC | Pick-and-place | 16 | 6,000 | 440
Delta Summit | High-power | Pick-and-place | 2 | 1,750 | <1,000
Delta Orion | CSP/BGA | Strip | Entire strip | Strip/test time dependent | 3,000/strip; 300 within strip
MCT Tapestry SC | CSP/BGA | Strip | 128 | 150,000 | 2,000
MCT Tapestry SC | CSP/BGA | Strip | 64 | 140,000 | 2,000
MCT Tapestry SC | CSP/BGA | Strip | 8 | 75,000 | 2,000

Note: Data obtained from product specifications, brochures, and communication with vendors.
10.4.2 Index Time
As mentioned previously, index time, or the time that it takes to place a tested DUT into the appropriate bin after testing is complete and obtain and place a
new DUT into the contactor socket, can be a critical factor, especially when the test execution times are less than a second. Typical handler index times range from 300 to 750 ms. For example, if the time to execute an entire test plan takes 0.5 second and the index time of the handler is 0.5 second, it is clear that only half of the processing time is actual testing. This demonstrates the benefit of multisite testing, which, in addition to being dependent on the tester software, is also highly dependent on the handler configuration and capabilities.

10.4.2.1 Multisite Index Time
In the case of multisite handlers, the index time is often stated as an effective index time on a per-device basis. This means that if it takes 500 ms for a quad-site pick-and-place handler to process four devices after testing until the beginning of the next test, the effective index time is 125 ms per device. This is generalized by the following equation:

t_index,effective = t_index / N_sites  (10.3)
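A one-line helper makes (10.3) explicit; the two calls below reproduce the 125-ms quad-site example above and one of the values listed in Table 10.3.

    def effective_index_time(t_index_ms, n_sites):
        """Effective per-device index time, per (10.3)."""
        return t_index_ms / n_sites

    print(effective_index_time(500, 4))   # 125.0 ms: quad-site pick-and-place
    print(effective_index_time(300, 2))   # 150.0 ms: dual-site gravity-feed (Table 10.3)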
10.4.2.2 Index Time by Handler Technology
Based on single-site testing, gravity-feed and turret handlers offer the fastest index times, both typically being less than 300 ms. Pick-and-place handlers typically have index times of around 500 ms, independent of the number of sites. For this reason, increasing the number of sites on a pick-and-place handler improves the effective index time (index time per DUT, in multisite testing) and overall COT, assuming that all sites are being utilized, of course. Table 10.3 provides a summary of typical index times, based on handler technology. Note how the effective index time is dependent on multisite testing.

10.4.2.3 Testing in Ping-Pong Mode
One methodology that can take advantage of index time is to use a multisite testing methodology known as ping-pong. Ping-pong hides the test time within the index time of the handler. It is important to note that this is only possible if the test time is shorter than the handler index time. Traditionally, this was accomplished by attaching two handlers to a single test system. More recently, handlers are available with multiple chucks, or device manipulators, to allow one or more devices to be tested while another one or more devices are being placed into bins and new devices readied for testing. Figure 10.6 shows the relationship between throughput increase and handler index time (normalized to test time). For test times that are less than or equal to the handler index time, it is possible to double the throughput. For a test time that is twice the handler index time, a 50% increase in throughput can be achieved, and for a test time that is four times longer than index time, a 25% throughput increase can be had.
Table 10.3 Index Times and Effective Index Times Based on Handler Technology

Handler Type | Number of Sites | Index Time (ms) | Effective Index Time (ms)
Gravity-feed | 1 | 300 | 300
Gravity-feed | 2 | 300 | 150
Pick-and-place | 1 | 500 | 500
Pick-and-place | 2 | 500 | 250
Pick-and-place | 4 | 500 | 125
Strip test (strip-to-strip) | Any | 2,500 | 2,500
Strip test (within a strip) | Any | 300 | 300
Turret | 1 | 200 | 200
Figure 10.6 Throughput increase provided by ping-pong testing versus handler index time (normalized to test time).
As an example, Table 10.4 shows the math that makes it apparent how testing in a ping-pong fashion can improve throughput. With ping-pong operation, when the test time is less than the index time, the effective test time per device is the individual test time added to the index time, divided by two (for dual site). When the test time is greater than the index time, the index time is masked (horizontal line in Figure 10.6), so the effective test time per device is the same as the actual test time.
Table 10.4 Throughput Improvement from Using Ping-Pong Operation with a Handler

Parameter | Test Time < Index Time | Test Time > Index Time

Single Site
Test time (second) | 0.9 | 2.0
Index time (second) | 1.0 | 1.0
Available time per year (second) | 18,869,760 | 18,869,760
Throughput/year (100% yield) | 9,534,194 | 6,289,920

Ping-Pong Dual Site (two devices)
Actual test time per device | 0.9 | 2.0
Index time | 1.0 | 1.0
Effective test time per device | 0.95 [(0.9 + 1.0)/2] | 2.0 (index time is masked)
Available time per year (second) | 18,869,760 | 18,869,760
Throughput/year (100% yield) | 19,068,389 | 9,434,880
Throughput increase of ping-pong (%) | 100 | 50
In either case, the use of ping-pong operation can significantly improve throughput. Chapter 7 extends this discussion to include the monetary benefits of ping-pong operation in the discussion of cost of test.
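The throughput gains quoted above (100%, 50%, and 25%) can be reproduced with the short sketch below. It assumes dual-site ping-pong operation with a 1.0-second index time and follows the effective-test-time rules of Table 10.4; it is an illustration of the arithmetic, not a model of any particular handler.

    def pingpong_gain(t_test, t_index):
        """Throughput gain of dual-site ping-pong operation over single-site operation."""
        single_site_time = t_test + t_index            # time per device, single site
        if t_test < t_index:
            effective_time = (t_test + t_index) / 2.0  # test time hidden inside index time
        else:
            effective_time = t_test                    # index time is masked
        return single_site_time / effective_time - 1.0

    for t_test in (0.9, 2.0, 4.0):                     # seconds, with a 1.0-second index time
        print(f"test time {t_test:.1f} s -> +{pingpong_gain(t_test, 1.0):.0%} throughput")
    # Prints +100%, +50%, and +25%, matching Figure 10.6 and Table 10.4.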
10.5 Testing at Various Temperatures

Handlers are often used for environmental testing such as testing the DUT across various temperature ranges. They provide environmental stimuli that mimic the worst-case conditions that a device is likely to endure in the field [7]. When operating a handler under thermal conditions, a handler may need to provide cooling capability as well as heating capability. Typical ranges are from –55°C to +160°C for RF/SoC devices. Another feature that may be necessary is thermal soaking, or maintaining the DUT at a set temperature prior to testing or during testing. The focus of this discussion will be limited to thermal testing via gravity-feed or pick-and-place handlers because that covers the vast majority of use cases for thermal testing.

10.5.1 Tri-Temp and Slew Time
A commonly used term with handlers is tri-temp. This refers to a handler’s ability to provide three temperature conditions: high temperature, low temperature,
and ambient or room temperature. Most handlers support at least two of the three temperature conditions (high and ambient), because cooling requires extra equipment and a more complicated design. An important parameter when performing temperature testing is slew time. This parameter is key when a device is being tested at more than one temperature in the same plunge of the handler. Slew time is the time it takes to change from one testing temperature to another testing temperature.

10.5.2 Methods of Heating and Cooling
Conventional means to provide an environmental temperature are through the use of liquid nitrogen or chilled water. Other technologies for cooling and heating are forced-air cooling or coolant mixing [personal communication with Kevin Brennan, Delta Design, 2006]. With any of these methods, typical heating and cooling accuracies are ±3°C. It is important to be aware that as the handler attempts to cool the environment of the device, the device is dissipating heat due to the power it consumes. Ideally, by design, the temperature chamber has the ability to control this. For example, a commercial thermal chamber may have the capability to control devices dissipating 100 W/cm² and beyond without risk to the test result or thermal damage to priceless characterization devices. One way to accurately maintain the temperature set point is to efficiently dissipate the heat generated during device testing. Some handler manufacturers utilize thermal convection and can condition devices and materials to a wide range of temperatures.

10.5.3 Thermal Soaking of Devices
Sometimes a device has to soak at a set temperature either prior to or after testing. Often separate areas of the handler are designated for this. Figure 10.7 demonstrates this for the case of gravity-feed and pick-and-place handlers [8]. Proper design of these soak chambers results in lower soak times and enables more efficient and accurate testing of high-power devices.

10.5.4 Handler Design Considerations for Thermal Testing
The package material, typically either plastic or ceramic, plays an important role in the design of the thermal chamber. The chamber must take into account the different thermal conductivities for the different types of packages. It is also important that the thermal chamber on the handler maintain a safe temperature to the touch on the outside during high-temperature testing and also maintain a noncondensation-forming condition on the outside during cold temperature testing [9]. Materials that can withstand the extreme high and low temperatures without oxidizing or corroding are essential as well.
Figure 10.7 Flow diagram demonstrating thermal soaking as implemented in (a) pickand-place handlers and (b) gravity-feed handlers. (After: [8].)
10.6 Contacting the Device to the Load Board

Chapter 9 discusses test contactors in great depth. It is worth mentioning here, though, that the contactor is a critical link between the handler and the tester (or load board). Also a mechanical peripheral device of the test cell, the contactor must be designed with the specifics of the handler in mind. Like anything that is placed in the path of the signals being passed into and out of the DUT, the contactor adds distortion due to its parasitic capacitance and inductance. In many modern high-speed devices, a technique known as plunge-to-board is being implemented to minimize the parasitic effects. This technique requires somewhat more cooperation from the handler than testing with a conventional contactor does. As of this writing, plunge-to-board is not used with typical RF/SoC devices for wireless communications; however, as the technology matures, it likely will be. The fundamental concept behind plunge-to-board is to keep the distance from the DUT to the tester's electronics as short and simple as possible. There are a few variations on the methods by which plunge-to-board is implemented. For example, the handler can have a vacuum that physically presses the DUT to the load board during the entire test. This is termed true plunge-to-board or high-frequency plunge-to-board. In lower frequency devices, it may be possible for the device to be placed onto the board by the handler, then released during the time the testing occurs.
10.7 Handler Footprint

The size of the handler, or its footprint, may or may not be a significant factor in the decision of which handler to use for testing. The handler's footprint comes into play because, with capital equipment, floor space is money. To reduce the amount of floor space required for production testing and to eliminate excess time, additional functionalities can be integrated into some handlers, such as DUT lead inspection and placement into tape and reel for shipping. All of these additional functions add to the amount of floor space needed and the potential for downtime. Of the main types of handlers, the gravity-feed handler offers the smallest footprint [9], measuring approximately 6 ft². Pick-and-place handlers exhibit footprints on the order of 20 ft² or more. As always, footprint advantages and disadvantages must be weighed against other parameters, such as throughput, to determine the overall impact on the cost of test.
10.8 Tester Interface Plane

The tester interface plane is defined as the plane in which the test head of the tester mounts to the handler. The two common variations of this are horizontal and vertical, as shown in Figure 10.8. The test head manipulators that are provided with almost all test systems can readily accommodate both of these positions. Sometimes, though, a test head may be too large and can be obstructed by either a portion of the handler or the floor (ground) of the test cell. If no potential obstruction is caused by the handler, this is referred to as an infinite plane. The choice of orientation of the tester interface plane is based on customer preference. However, there are some considerations when choosing the orientation for the first time. The vertical orientation tends to provide a larger interface plane and better visibility when docking. It requires a complex manipulator (a mechanical structure to provide maneuverability in three dimensions and rotation), which adds to its cost. The horizontal interface does not usually require a manipulator.
10.9 Device Input and Output

Semiconductor devices to be tested are delivered to the test area either in trays, tubes, or bulk form. Modern RF/SoC devices with large pin counts are typically delivered in trays. The bulk form may be bags, containers, or metal magazines. More recently, delivery to the test area in the form of strips of lead frames is starting to become more prevalent.
Figure 10.8 Tester interface plane: (a) horizontal and (b) vertical.
Output from the handler also has many different formats. Many handlers have the ability to perform postprocessing of the DUT prior to output. These postprocessing steps may include things such as lead inspection and tape and reel output. Table 10.5 shows the typical forms of input and output media used with the various handler types.

10.9.1 Binning
In the simplest (and traditional) case, the output devices from a handler are placed into bins after test. Bins could be a tray or some repository on the handler or, more likely, various trays (on pick-and-place handlers) or tubes (on gravity handlers) where the devices are placed based on predetermined criteria such as these:

• Passing the entire test;
• Failing any of the tests;
• Failing because of a dc test;
• Failing because of an RF test;
• Failing a maximum frequency of operation test (common in processors).
Table 10.5 Handler Types and Their Input and Output Characteristics

Parameter | Gravity-Feed | Pick-and-Place | Strip Test | Turret

Input Media
Trays | N | Y | N | N
Tubes | Y | N | N | N
Bulk | Y | N | N | Y
Substrate or lead frame | N | N | Y | N

Output Media
Trays | N | Y | N | N
Tubes | Y | N | N | N
Bulk | Y | N | N | Y
Substrate or lead frame | N | N | Y | N
Tape and reel | Occasionally | Rarely | N | Typically
The criteria that determine into which bin a device is placed are set by the tester software. Sometimes this can become a little confusing because the tester keeps track of bins and the tester software often lets you configure multiple levels of binning (as is done for grading parts based on degree of failure). To further complicate binning, modern handlers often have complete microprocessor-controlled interfaces that can keep track of binning. Keeping these two synchronized can often become cumbersome. Sometimes, it might be simpler to put one of the two into a dummy mode by disabling the bin-tracking software of the handler. To ensure that the index time of the handler is as low as possible, it is recommended to place the most highly accessed bins closest to the contactor socket so that the mechanical motion of the handler is minimized, thereby reducing index time. For example, if the yield of a given lot is greater than 90%, then it would be beneficial to place the "good," or "pass," bins nearest to the contactor socket. This would enable the shortest range of motion for the most common task.
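As a sketch of that recommendation, the snippet below orders bins by expected hit rate so that the most frequently used bin ends up nearest the contactor socket; the bin names and yield fractions are invented for illustration.

    # Expected fraction of devices landing in each bin (illustrative numbers only).
    expected_hit_rate = {
        "pass": 0.92,
        "fail RF test": 0.04,
        "fail dc test": 0.03,
        "fail other": 0.01,
    }

    # Assign the positions nearest the contactor socket to the most frequently hit bins,
    # minimizing the average travel of the handler and hence the index time.
    for position, bin_name in enumerate(sorted(expected_hit_rate, key=expected_hit_rate.get, reverse=True), start=1):
        print(f"position {position} (closest to socket first): {bin_name}")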
10.9.2 Loading and Unloading of Devices
Auto-loaders and auto-unloaders for trays, tubes, or any other means by which the DUTs are delivered to the production testing stage make the testing process much easier. Requiring a handler operator to load and unload DUTs into a handler can lead to a significant decrease in throughput. This comment comes from firsthand experience: the authors have witnessed many social conversations between handler operators on the test floor taking precedence over the empty device feed in the handler.

10.9.2.1 Auto-Unloaders
Trays and tubes traditionally have been unloaded by a person (a test floor operator). However, because recent yield rates are often near 95% or more, the time to fill tubes or trays has become very short and requires constant attention from a person to unload them. The benefit of auto-unloaders is that less dependence is placed on the operator. The disadvantages are that the unloading equipment requires additional capital investment and takes up floor space. In some cases, unloaders may be used on only some of the output bins. For example, on a high-volume product that has typical yields of 99.8%, auto-unloaders would be installed on the good bins but not on the bad bins. This would save on capital and floor space as well as significantly reduce the reliance on a person to keep the test cell operating efficiently. The only required operator intervention on the output side (regarding unloading) would be the much more infrequent unloading of the bad-bin parts.
10.10 Conversion and Changeover Kits

Because there are only about four basic handler types and a myriad of packages, each handler type must be able to work with multiple types of packages. To make this possible, conversion and changeover kits need to be considered. The variations that exist among the many packages make it necessary to modify the mechanical interface. These mechanical interfaces have become a necessity and an important consideration, especially for subcontractors who run devices for numerous different vendors throughout a day. An important consideration is the amount of time that is necessary to change between one type of package and another. Handler manufacturers usually reflect this in their specifications as device changeover time. Typically this time is on the order of 15 to 30 minutes. For many package types that have been around for a while, the conversion kits are available as off-the-shelf kits. Alternatively, handler manufacturers can design custom kits for newer packages. Regardless of the situation, to specify a device conversion kit it is necessary to know which contactor will be used and
the orientation of the device on the load board and to be able to provide device drawings and samples.
References

[1] Gray, K., "Current Trends in Test-Handler Technology," Evaluation Engineering, Vol. 36, No. 5, 1997.
[2] Sakamoto, P., A. Berar, and M. Prim, "Leveraging Strip Technology for Massive Parallel Testing," Evaluation Engineering, Vol. 39, No. 7, 2000.
[3] STATS ChipPAC, "STATS ChipPAC Offers Strip Testing for Mixed Signal Devices," Press Release, March 8, 2005.
[4] Brennan, K., "Test-on-Strip: What It Takes, What It Offers Users," Chip Scale Review, November 2002.
[5] Amkor Technology, "Amkor Strip Test Overview," Service Note, 2005.
[6] Hayashi, K., "The Ins and Outs of Parallel IC Handlers," Evaluation Engineering, Vol. 35, No. 5, 1996, pp. 37–39.
[7] Salonga, M., "How Final Test Impacts the Assembly Line," Chip Scale Review, July 2003.
[8] Pfahnl, A., et al., "Temperature Control of a Handler Test Interface," Proc. 1998 Int. Test Conf., 1998, pp. 114–118.
[9] Pfahnl, A., et al., "Thermal Management and Control in Testing Packaged Integrated Circuit (IC) Devices," SAE Technical Paper 1999-01-2723, Warrendale, PA: SAE International, 1999.
11 Load Boards

11.1 Introduction

A load board is a PCB assembly that is used to route all of the tester resources to a central point, allowing the DUT to perform as required during its test time. This assembly may also be referred to as a DUT interface board (DIB). The load board is independent of the tester and is almost always unique to each DUT that is tested. One of the most time-consuming elements of developing a full production test solution is the design and fabrication of the load board. Engineers must take into account the fact that all of the dc power supply, digital control, mixed-signal, and RF signal lines must coexist and be routed among each other on a common board. This inevitably requires a multilayered load board to be fabricated. Creating a load board is a process that includes design, layout, fabrication, assembly and test, and possibly multiple redesigns. Fabrication of the load board is very similar to that of the actual DUT, although not as complicated, and ample time for this effort should be included in a project schedule. Another often overlooked difficulty is the final impedance matching and tuning that is necessary after the board is fabricated. Time should be allowed for this effort, especially if it is being done for the first time. Having an RF circuit tuning expert on the team can save significant time in this area. Alternatively, close communication with the DUT designer can provide time-saving tips, because the designer is aware of areas of the device that are sensitive to impedance matching.
Many third-party companies provide load board services ranging from consulting to full start-to-finish delivery. Depending on your budget, it is often a wise investment to engage these companies. This chapter discusses the various topics that pertain to load boards, from design to operation. It focuses primarily on the electrical aspects as they pertain to common RF/SoC applications, which entail RF, mixed-signal, digital (for device control), and dc signal routing. A load board consists of a multilayer PCB with a contactor (Chapter 9) mounted on it. It mounts to the test head of the tester. Figure 11.1 shows a load board assembly consisting of the board and stiffener. The load board is designed to provide a negligible electrical and mechanical interface between the test system and the DUT. An improperly designed load board may limit the types of tests that can be run, diminish the quality of testing, or in the worst case, prevent the DUT from being tested at all [1]. The complexity of the design will be based on the performance requirements of the DUT. RF and high-speed mixed-signal devices require a much more complex load board than do low-speed digital devices. The cost for a relatively simple load board starts at a few thousand dollars and ranges up to $10,000 for a complex, multilayer, multisite RF/mixed-signal assembly (before contactors are added). Because the DUT board is at the center of the test system, designing and producing one that delivers accurate and reliable test results requires complete knowledge of the IC package to be tested, the contactors, the various test heads to which the load board will connect, and the handler or probing equipment used with the ATE system [2]. The remainder of this chapter is broken into three portions discussing materials, electrical, and thermal considerations when designing and fabricating load boards. It is quite difficult to separate one from the other, because they are quite interrelated. For example, one must design to transfer heat when designing for high-powered applications, and Joule (current) heating mandates thermal design characteristics. For this reason, the thermal design considerations section will be constrained to externally applied temperature control, such as forced-air heating or liquid nitrogen cooling.
Figure 11.1 Load board assembly.
11.2 Materials

The first thing to consider when designing load boards is the choice of material to be used for the board. Years ago, the design was simple and only a handful of materials was available, or at least accepted, for load boards. Now, the multitude of materials and concepts available, such as multilayer boards and hybrids, has led to an exhaustive list of options. Choosing a material for use in an RF/analog application is a far more complex task than choosing a material for a low-speed digital application. For example, the various traditional Rogers products were the choice for pure microwave circuit applications. However, they are suitable only for single-layer or two-layer boards due to their properties. This is not acceptable for modern highly integrated devices that require dedicated layers for each of the technologies (digital, mixed-signal, power supplies, RF, and multiple grounding layers).
11.2.1 Material Properties
Numerous materials are available that are used to produce PCBs in consumer products such as computers and mobile devices. The materials commonly used for load boards, however, are often designed taking the material properties listed in Table 11.1 into account. This table lists properties for the various commonly used load board materials [3]. Keep in mind that some of these materials were developed specifically to optimize one or more of these properties. Load board materials can be divided into two major classes based on the type of reinforcement used: woven glass and nonwoven glass reinforcements. Woven glass-reinforced laminates, such as FR-4 (the name stands for Flame Retardant 4, stemming from its origins in other industrial uses), are lower in cost than nonwoven laminates and are cheaper to produce and process. Because of the amount of glass in the woven glass cloth, the dielectric constants of boards based on it are higher than those based on other reinforcements. Nonwoven materials require more expensive raw materials and are more expensive to process than the epoxy resin-based, glass-reinforced materials. The potential gains, mainly due to lower dielectric constant, that would result from using these materials in testing of consumer devices operating at less than 3 GHz are rarely, if ever, worth the added cost.
Table 11.1 Properties Used in Selecting Load Board Materials

Material | Type | Tg (°C) | Dielectric Constant | Loss Tangent
FR-4 epoxy glass | Woven | 125 | 4.1 | 0.02
Multifunctional FR-4 | Woven | 145 | 4.1 | 0.022
Nelco N4000-6 | Woven | 170 | 4.0 | 0.012
GETEK | Woven | 180 | 3.9 | 0.008
BT epoxy glass | Woven | 185 | 4.1 | 0.023
Cyanate ester | Woven | 245 | 3.8 | 0.005
Polyimide glass | Woven | 285 | 4.1 | 0.015
Teflon | Woven | N/A | 2.2 | 0.0002
Rogers Ultralam C | Nonwoven | 280 | 2.5 | 0.0019
Rogers 5000 | Nonwoven | 280 | 2.7–2.3 | 0.001
Rogers 6002 | Nonwoven | 350 | 3.0 | 0.0012
Rogers 6006 | Nonwoven | 325 | 6.0–10.0 | 0.002
Rogers RO3003 | Nonwoven | 350 | 3.0 | 0.0013
Rogers RO3006 | Nonwoven | 325 | 6.0–10.0 | 0.003

Source: [3].
11.2.1.1 Relative Dielectric Constant
Relative permittivity εr or, more commonly, dielectric constant (sometimes referred to as K) is a property of an insulating material that enumerates the effect that it has on the capacitance of a conductor imbedded in or surrounded by it. It measures the degree to which an electromagnetic wave is slowed down as it travels through the insulating material. The higher the relative dielectric constant, the slower a signal travels on a load board trace, the lower the impedance of a given trace geometry, and the larger the stray capacitance along a load board trace. Given a choice, for load boards, a lower dielectric constant is nearly always better [3]. A lower dielectric constant allows the board to be made thinner. However, most of the commonly used and available materials for load boards have the same range of dielectric constant values, so it is rarely the gating factor in choosing a load board material. An example of where it may be a gating factor is when high-layer-count boards are used and the thickness of each layer needs to be minimized to maximize dimensional stability. In this case, nonwoven reinforcement types of boards would be more suitable than woven, because these typically have lower dielectric constants. Some of the nonwoven materials, through omission of glass, reduce the dielectric constant.
The dielectric constant of all materials changes with frequency and usually goes down as frequency goes up. This has an impact on the traces that carry signals on the load board. The velocity of signals increases as the frequency goes up, resulting in phase distortion in broadband devices. The impedance of a trace also goes down as frequency goes up, resulting in faster edges reflecting more than slower ones (which is prevalent in high-speed digital applications). The main effect this has is to cause errors in impedance calculations and measurements. As an example, if the relative dielectric constant measured at 1 MHz is used to calculate impedance, and a TDR analysis with a 125-ps rise time is used to measure the impedance, there will be disagreement due to the fact that two very different frequencies have been used. Figure 11.2 illustrates how the relative dielectric constant varies with frequency for some typical load board materials [3]. Broadband RF and microwave signal routing requires load board materials that have a dielectric constant that is as flat with frequency as possible over the range of use to minimize this problem.

Figure 11.2 Dependence of the relative dielectric constant on frequency for some typical load board materials. (Courtesy of Speeding Edge.)

Often, when looking at data sheets among manufacturers for load board materials, one will see a difference in the stated dielectric constant even though the material name is the same. The source of this variation between load board materials and manufacturers is the ratio of reinforcement or glass to resin used to make the board. Figure 11.3 shows how the dielectric constant of a standard FR-4 laminate changes with the ratio of glass to resin.

Figure 11.3 Dependence of the relative dielectric constant on the ratio of glass to resin content for standard FR-4 load board material. (Courtesy of Speeding Edge.)

Note that the relationship is linear. It follows the mixing rule that is generically used with many composite material properties:
εr = (vol% resin × εr,resin) + (vol% reinforcement × εr,reinforcement)  (11.1)
where the volume percentages (vol%) are numbers less than 1.0. Figure 11.3 is based on measuring the relative dielectric constant at 1 MHz in FR-4. Many of the disconnects between predicted impedance and measured impedance stem from the fact that the relative dielectric constant for one glass-to-resin ratio is used to calculate impedance, whereas the actual glass-to-resin ratio of the material used to fabricate the load board is different. As an example, the relative dielectric constant of 4.7 is for FR-4 with 42% resin measured at 1 MHz. Most multilayer materials contain about 55% resin. Typically, the impedance of the finished load board is measured with a TDR having an edge rate of about 150 ps, which corresponds to about 2 GHz. The relative dielectric constant for this pair of conditions is approximately 4.1. These two sets of conditions, when used on the same load board (one to calculate, the other to measure), can result in an impedance error of as much as 5Ω in a 50-Ω system [3].
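The mixing rule (11.1) and the impedance sensitivity described above can be illustrated numerically. The sketch below uses assumed constituent permittivities (chosen so that a 42%-resin board lands near the 4.7 value quoted above) and the classic approximate microstrip formula Z0 ≈ 87/√(εr + 1.41) × ln(5.98h/(0.8w + t)); the trace geometry is invented, so the resulting shift only indicates the trend, not the exact 5-Ω error cited from [3].

    import math

    def mixed_dielectric_constant(vol_resin, er_resin, er_reinforcement):
        """Rule-of-mixtures estimate of the laminate dielectric constant, per (11.1)."""
        return vol_resin * er_resin + (1.0 - vol_resin) * er_reinforcement

    def microstrip_z0(er, h_mil, w_mil, t_mil):
        """Classic approximate surface-microstrip impedance (illustrative geometry only)."""
        return 87.0 / math.sqrt(er + 1.41) * math.log(5.98 * h_mil / (0.8 * w_mil + t_mil))

    # Assumed constituent permittivities: resin ~3.5, glass reinforcement ~5.6.
    print(mixed_dielectric_constant(0.42, 3.5, 5.6))   # ~4.7, the 42%-resin FR-4 value at 1 MHz

    # Impedance of the same trace (10-mil dielectric, 18-mil width, 1.4-mil copper)
    # calculated with the 1-MHz value (4.7) versus the value seen by a fast TDR edge (4.1).
    for er in (4.7, 4.1):
        print(f"er = {er}: Z0 is roughly {microstrip_z0(er, 10.0, 18.0, 1.4):.1f} ohms")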
11.2.1.2 Glass Transition Temperature
The glass transition temperature, Tg, is the temperature at which the resin (the liquid bonding component) undergoes a phase change. On a plot of the coefficient of thermal expansion versus temperature (Figure 11.4), it can be identified by an abrupt change in the coefficient of thermal expansion. When the temperature of the load board rises above the glass transition temperature (which can happen during fabrication of the load board, or when a load board device or DUT overheats), the resin portion of the composite structure of the load board begins to expand at a higher rate than the other components (typically copper and glass). Because the resin cannot expand in either the X or Y directions, virtually all of the expansion takes place in the Z axis. The vias and other plated through-holes are oriented in the Z axis and are placed under stress as soldering takes place during manufacturing of the load board [3]. For this reason, care must be exercised in choosing the proper resin material for each load board application. A resin with a higher glass transition temperature can be expected to be required for high-power device testing or when high-power, active RF components are to be used on the load board.
Figure 11.4 Plot of thermal expansion (change in thickness, %) versus temperature for typical load board materials (FR-4, GETEK, polyimide, and cyanate ester), indicating the glass transition temperature, Tg, by the abrupt change in slope; the melting point of solder (185°C) is also marked. (Courtesy of Speeding Edge.)
As can be seen from Table 11.1, the major difference between load board materials is the glass transition temperature, Tg. In fact, all of these materials, except Teflon, were developed in an effort to arrive at a material that is easy to process and low in cost yet has a high Tg. The goal is a Tg as close to the melting point of solder, 185°C, as possible. It can be seen that GETEK, BT epoxy glass, cyanate ester, and polyimide glass all achieve the desired Tg. Unfortunately, all of these have processing problems that make them more expensive—sometimes much more expensive—to use than the epoxy-based materials [3]. The original low-cost load board material was FR-4, with a Tg of around 125°C. This temperature was too low to provide reliable plated vias in boards thicker than 0.062 inch. Multifunctional FR-4, tetrafunctional FR-4, and the
“high Tg” FR-4s, such as Nelco N4000-6, have been developed in an effort to preserve the ease of processing that these epoxy resin systems provide while raising the Tg. The “high Tg” FR-4 systems achieve Tg values in the range of 170°C to 180°C, high enough to build reliable thick boards (i.e., boards more than 0.062 inch thick). It should be possible to fabricate reliable boards as thick as 0.250 inch using these materials, so the result is a reliable, thick board at the lowest possible cost [3]. The “high Tg” FR-4 laminates have a Tg sufficiently high that all but the most demanding applications can be handled with them. There is rarely a need to handicap a design with one of the other, more exotic materials.
11.2.1.3 Loss Tangent
Loss tangent is a frequency-dependent material property that describes how much of the electromagnetic field (signal) traveling along a trace is absorbed into the dielectric surrounding the trace. It is commonly referred to, more simply, as loss. In the case of load boards, the dielectric is the composite structure of the load board. Load board material manufacturers provide the loss tangent of their materials as one of the standard material parameters so that a proper material for the application can be chosen. Many engineers do not fully understand the concept of loss tangent and instead assume that they should select the material with the lowest available loss tangent. By doing so, however, they place undue constraints and expense on their load board material selection. For example, ultra-low-loss materials are often used in digital applications where they are not needed, which increases load board cost without a corresponding benefit [3]. RF and high-frequency digital and mixed-signal signals have short wavelengths and require precise design of traces and circuits; the accuracy with which the circuits handle low-level signals depends on keeping board losses low. Losses occur as reflections where the impedance changes at an interface (i.e., connectors, variations in trace width), and as absorption of some of the signal in the dielectric materials. The latter is strongly influenced by the choice of load board material. A common mistake is to automatically choose an exotic material over the proven, more commonly used materials. For example, a designer may choose GETEK over FR-4 for RF applications; although GETEK has lower loss, its significantly higher cost over FR-4 is usually not worth it for testing of consumer applications below 3 GHz.
11.2.1.4 Moisture Absorption
Moisture absorption is a common occurrence in the materials (resins) that make up load boards. It can alter the load board dielectric constant. It usually is not a catastrophic problem for most of the materials, but there are a few that are more
susceptible than others. Keep in mind that many production test environments introduce humidity to reduce electrostatic discharge. If a load board absorbs a significant amount of water, the resulting relative dielectric constant of the combination will be higher than that used to calculate impedance in the original design of the load board and can cause impedance mismatches [3]. A more serious effect of moisture absorption is increased leakage current. Materials with high moisture absorption may exhibit leakage currents larger than the circuits mounted on them can tolerate, causing false failures of DUTs. The two primary materials that exhibit problems due to moisture absorption are polyimide and cyanate ester. The moisture absorption levels of FR-4–based load boards are satisfactory for most consumer device digital and RF testing.
11.2.2 The Test Engineer’s Role in Material Selection
In the preceding sections, the key properties used in selecting a load board material were described. Depending on the particular application, only one, or perhaps all, may be needed to make a selection. It is important to understand that many of these material properties matter mainly for ensuring that the load board survives its own manufacturing process and any rework in case of failure during operation. The reflow solder process and the bonding of laminates (individual boards) during fabrication of the load board involve high temperatures. Likewise, load board repair, for example, replacement of multipin surface-mount components, can require intense localized heating, especially in the vicinity of large ground planes that absorb a lot of heat. (Have you ever noticed that it is a lot more difficult to solder to a ground connection than to a signal connection?) Heat during load board repair can be applied by direct contact or by forced hot air. Both of these methods can be detrimental to the board by placing thermal stress on through-board vias and on the bonding between boards. A reputable load board manufacturer's experience will spare the test engineer most of these decisions, but at least now you have an explanation of the choices that are made.
11.2.3 Layers
Not too long ago, boards used in production testing were simple, one- or two-layer designs. Although the primary focus of this chapter is RF and mixed-signal testing, we cannot avoid addressing digital design. The digital processor technology that drives today's electronics has also been advancing the complexity of modern communications devices. Load boards used for testing the
high level of integration of today's devices, many with embedded processors, follow the dependence on processor technology shown in Table 11.2 [4].

Table 11.2 Chronological Layer Count and Its Dependence on Processor Technology

Year    DUT Processor Speed (MHz)    Number of Board Layers
1970    2                            2
1980    10                           8
1990    100                          12
2006    3000+                        Up to 24

The layers are bonded together during final assembly. Each layer typically carries only one type of signal, or ground. The order in which the layers are placed is termed the stackup. Multiple stackup configurations will work, and sometimes the stackup is arranged in a particular order to minimize or maximize certain performance parameters. A common stackup for an eight-layer load board for testing DUTs containing RF, mixed-signal, digital, and power supplies is shown in Table 11.3.

Table 11.3 Typical Layer Stackup for a Modern Multilayer Load Board

Layer Number    Signal Type
1               Signal, RF
2               Ground, RF
3               Ground, low-frequency analog (mixed-signal)
4               Signal, low-frequency analog (mixed-signal)
5               Power, low-frequency analog (mixed-signal)
6               Ground, digital
7               Signal, digital
8               Power, digital

Many multilayer load boards have two or more layers dedicated to ground and power circuits. These planes are referred to as ground planes and power planes. They are rectangular sheets of conductor that provide better electrical conductivity and better heat dissipation than narrow traces, and they often take up an entire layer. The ground planes and power planes are strategically placed to prevent accidental antenna behavior due to radiating energy, and they also provide efficient distribution of power [5]. Each of the DUT power supplies should have a dedicated power plane—ideally on a separate layer. Even if the various power supplies require the same nominal voltage, the use of separate planes provides better noise immunity between the supplies and allows power supply levels to be changed independently later if necessary. To connect signals between the layers, or from internal layers to the test system, vias (precision drilled holes) are used.

11.2.4 Hybrid Load Boards
The concept of a hybrid load board is an extension of the multilayer board; it allows a lower overall load board cost by using the expensive load board materials only for the layers that require them. By placing the high-frequency traces and signals on a common layer (typically the top), lower cost materials can be used to route the lower frequency signals and handle the grounds on all of the other layers. For example, the outermost layers of a load board aimed at testing a modern integrated SoC could be made of the Rogers 4000 series, while the inner layers could be lower cost FR-4.
11.3 Electrical
Modern production testing mandates that digital, mixed-signal, and RF signals coexist on a common load board. Proper routing of the signals is therefore imperative. Problems such as RF leakage, crosstalk, ground loop currents, and interference from external sources are discussed and addressed in the following subsections. Because debugging the load board is often quite a task itself, design tips to ease debugging are also presented.
11.3.1 Signal Routing and Traces
Paths on the load board, known as traces, carry the signals between tester and DUT and vice versa. As with any electrical system, there are also ground planes within the load board that serve to carry the return currents (also referred to simply as ground currents). For low-frequency signals, such as those below 100 MHz, the designer of the load board can model these traces as lossless transmission lines. For most high-speed digital and all RF applications, it is necessary to use electromagnetic simulation software to aid the design process. Managing the design requirements for signal return paths can be surprisingly more complicated than handling those of the signal traces. It is imperative
to prevent the interaction of digital and analog return currents because the high-frequency noise on the digital ground may be coupled to the analog signals [6]. Because the concepts of grounding and return current design are so critical, they are covered in detail in Section 11.3.2. Signals in modern SoC devices can be categorized many ways. A common breakdown is dc, digital, high-speed digital, analog, mixed signal, and RF. All of these require at least some care in trace design to ensure that there is no interaction between the various types of signals on the load board. Of this breakdown, the two key signal types that absolutely require special handling are RF and high-speed digital signals. Fortunately, many of the concepts are similar and stem from transmission line theory. Numerous electromagnetic simulation software packages are available to aid with this type of design.
11.3.1.1 DC, Digital, Analog, and Mixed-Signal Trace Design
The design concepts for dc, digital, analog, and mixed-signal load board traces are quite similar; the key point is that, as a best practice, these signals and their corresponding ground planes should be kept separated on the load board. The trace lengths for the low-speed digital signals should be matched, but they do not need the tight tolerance of the high-speed signal matching [1]. Although not always strictly necessary, this practice minimizes the effort needed when debugging the load board and shortens the time to the start of DUT testing. For example, in a mixed-signal load board application, clock pulses from the digital signals can interfere with the analog signals if the two are not properly separated.
11.3.1.2 RF and High-Speed Digital Trace Design
RF signals have very short wavelengths and thus require precise attention to the design details and material selection of the load board. High-speed signals also require special DUT board design techniques. Any imperfections in the trace will increasingly distort the signal, and signal quality will deteriorate [7]. Because high-frequency signals are more likely to radiate energy and incur signal loss over lengthy traces, it is best to keep the trace lengths for RF and high-speed signals as short as possible. Avoid long parallel runs of mixed types of signals to reduce noise coupling. If the signals are differential pairs, then it is best to route them in pairs with equal lengths [1]. As with lower frequency design, it is even more important to have separate grounds for these types of signals and any other signals on the load board.
11.3.1.3 Characteristic Impedance and RF Trace Design
If the load board is moving high-frequency (RF or high-speed digital) signals, it is important to ensure that the entire environment is impedance matched to the
same characteristic impedance to minimize distortion and avoid loss and crosstalk effects due to impedance mismatch (reflections). Fortunately, most high-frequency test equipment is standardized to a 50-Ω characteristic impedance. If the output of a DUT has an impedance that differs from the tester characteristic impedance, either an active buffer circuit or a simple resistor (having the same value as the characteristic impedance) can be added between the DUT and the tester. When a DUT fails excessively, there is a scramble to find bugs in test programs or faults in device design, yet often the problem lies in something as simple as a load board impedance of 43Ω rather than the expected 50Ω [4]. When high-frequency signals are transmitted on a trace with a characteristic impedance that does not match the termination impedance (e.g., connector, contactor, or DUT), a portion of the signal is reflected. Traces must be designed to match the termination impedance. This can be accomplished on the DUT board by using microstrip lines and striplines, because their characteristic impedance can be controlled by changing their physical characteristics [7]. Figure 11.5 shows the key parameters that are necessary to calculate the characteristic impedance, Z0, of a microstrip trace, which is one of the most common types of RF traces on load boards. From the dimensions in Figure 11.5 and a few material properties, Z0 of the trace is calculated as

Z0 = [87 / √(εr + 1.41)] × ln[5.98h / (0.8w + t)]    (11.2)
where t is the conductor thickness, h is the substrate thickness, εr is the relative permittivity (dielectric constant) of the board material, and w is the trace width. Note, however, that even if the width of a trace is exactly designed to match the termination impedance value, impedance mismatching can result from variations in the shape of the trace [7].
Figure 11.5 Key parameters (trace width w, conductor thickness t, and dielectric thickness h over the ground plane) used in calculating the characteristic impedance of a microstrip trace on a load board.
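The microstrip relation in (11.2) is easy to evaluate; the sketch below (not from the text) uses an assumed, roughly 50-Ω geometry to show how much Z0 moves when the relative dielectric constant changes from 4.1 to 4.7, the two values contrasted in Section 11.2.1.1.

    import math

    def microstrip_z0(er, w, h, t):
        """Characteristic impedance of a microstrip trace per (11.2).
        er: relative dielectric constant; w, h, t: trace width, dielectric
        thickness, and conductor thickness in any consistent length unit."""
        return 87.0 / math.sqrt(er + 1.41) * math.log(5.98 * h / (0.8 * w + t))

    # Assumed geometry (inches): 10-mil trace, 5.9-mil dielectric, 1.4-mil copper.
    w, h, t = 0.010, 0.0059, 0.0014
    for er in (4.1, 4.7):
        print(f"er = {er}: Z0 = {microstrip_z0(er, w, h, t):.1f} ohm")

For this particular assumed geometry the shift is on the order of 2 to 3Ω; depending on the trace dimensions, it can approach the 5-Ω worst case quoted in Section 11.2.1.1.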
11.3.1.4 Proper Trace Design on the Load Board
There are many techniques for designing a load board and minimizing the chance of errors; however, most of this comes from experience, and even the most experienced load board designers can overlook items. It is important that someone who understands the operation of the DUT reviews the load board design. Reference [7] provides numerous tips on proper trace design for RF and high-frequency load boards. Some of these are presented in Table 11.4, organized by the types of problems that can arise.

Table 11.4 Common Load Board Trace-Related Problems and Ways to Avoid Them

Crosstalk and poor isolation:
• Do not place RF/high-frequency traces in parallel.
• Use coaxial cable to conduct sensitive signals.
• Maximize spacing between traces.
• Distribute signal lines across multiple board layers.
• Terminate striplines with a common characteristic impedance.
• Use guard lines on DPS to shield voltage force and sense lines.

Complex trace routing:
• Run traces under components.
• If a signal is low frequency, use a jumper wire.
• Route traces on different layers and use vias to jump between layers.
• Use coaxial cable if traces become too complex.

Minimize reflections and distortion in RF signals:
• Minimize the use of vias for RF signals.
• Minimize the use of stubs on the DUT board.
• Avoid sharp corners on traces; use a radius or a chamfered corner.
• Avoid size changes and notches on traces.
• Use separate power supplies for digital and analog circuits.
• If coaxial cables are used, ground the shielding of the cables.
• Physically separate input and output signal lines.

Short circuits or high-frequency coupling:
• Do not route signal lines on the bottom of the board.
• Route traces on the inner layers of the board when possible.
• Do not use solder mask as an electrical insulator.

Introduction of noise to signal:
• Ensure that all ground connections are as short as possible.
• Maximize the size of the digital and analog ground planes.
• Route ground traces between sensitive traces for shielding.
• Connect common ground planes at multiple points.

Source: [7].

11.3.2 Grounding
Grounding of the load board is one of the most important items to consider; it can often make or break the overall performance and significantly affect yield, because ground impacts crosstalk, isolation, and feedthrough. When multisite test implementations are done, the situation becomes even more critical. Ground is critical because it is necessary to control what is referred to as ground loop current. Ground loop currents are created as a result of having two or more paths to a common ground. Current flows through these paths, creating noise that can be imparted to the signal of interest. The solution to ground loops is simple to state: all signal grounds must go to one common point. When two grounding points are unavoidable, one side of the ground must be isolated from the other side. Proper design of grounds is as important as the proper design of signal routing. Because a trace on a load board is a guided-wave structure (transmission line), the integrity of the current return path (ground) becomes very important. Breaks (discontinuities) in the copper used for the return, ground planes used for such a return that surround an unrelated via, or ground planes that extend beyond the distance necessary to support the signal traces can all degrade signal integrity [3]. One other often overlooked item is to insulate all signal and ground lines and pads on the board from the load board stiffener to minimize capacitive coupling to the metal area of the stiffener [7].
11.3.2.1 The Use of Separate Grounds
Consider that for RF/SoC load boards, the RF, mixed-signal, and digital signals all share the same board. Although there are numerous ways to design the grounding scheme, one way would be to assign separate grounds for each signal type. The different grounds should be tied together in one—and only one—place, as close to the DUT as possible. This common point where all of the grounds are joined together is often referred to as the star ground. Figure 11.6 illustrates the star ground. Using a ferrite bead to connect to the star ground will help to reduce high-frequency noise from coupling across ground domains [Don Faller, Agilent Technologies, personal communication, 2005]. It is acceptable, and common, to have ground planes of the same type connected at multiple points at various places on the load board.
Figure 11.6 Circuit representation of a star ground used to tie together all the various load board grounds (RF, analog, digital, and dc).
11.3.2.2 Multisite Grounding
In the previous section, we explained how careful grounding design involves the use of separate grounds for each signal type. For multisite testing applications, a strongly recommended extension of that design is to have separate grounds for each site. Each site is tested independently, and there is no need for interconnection between them; because there is no current flow between the two sites, there is no reason to connect their grounds. This also prevents high-frequency noise currents from coupling from one site to the other. If the decision is made to share ground planes between sites, or if a single set of grounds for both sites cannot be avoided, separate the traces from the different sites so that the return currents on the shared ground are also separated [Don Faller, Agilent Technologies, personal communication, 2005]. As an example, Table 11.5 illustrates a grounding scheme for a dual-site SoC load board application that has RF, analog, digital, and power connections.

Table 11.5 Independent Grounds in a Dual-Site SoC Load Board

Site 1 Ground Name       Site 2 Ground Name       Other Grounds
RF Ground 1              RF Ground 2              Trigger Ground
Analog Ground 1          Analog Ground 2          General-Purpose Ground
Digital Ground 1         Digital Ground 2
Power Supply Ground 1    Power Supply Ground 2
Utility Ground 1         Utility Ground 2

Clearly, this is a very cautious grounding scheme and need not always be implemented this way, because doing so could lead to many additional layers on the load board, adding to its cost and complexity. One way to reduce the number of added layers yet still have this separation scheme is to share a layer for grounds of the same type across the different sites [Don Faller, Agilent Technologies, personal communication, 2005]. For instance, in Table 11.5, both Digital Ground 1 and Digital Ground 2 would be on the same layer. The column in Table 11.5 labeled “Other Grounds” encompasses grounds for signals that may be common to both sites, grounds used for signals to trigger instruments in the tester, or utility lines (note that these grounds may never contact the DUT on the load board). Regardless of the multisite grounding scheme chosen, the star ground for each site’s grounds should be close to the DUT. If the decision is made to share grounds between sites, then the single star ground should be placed as symmetrically as possible between the two sites.
11.3.2.3 Crosstalk and Coupling
The most common results of improper grounding and ground design in a load board are crosstalk and coupling, and the two are closely related. For example, a filter circuit on a load board may not reject the frequencies it was designed for, or an open relay may allow portions of a signal to leak through. These effects are caused by nearby signals being poorly isolated from one another. If two traces are laid out in parallel on the load board and one carries a small signal while the other carries a large signal, the large signal can couple onto the small one. It is imperative to plan for this when designing the load board.
11.3.3 Device Power Supplies
Device power supplies (DPS) are the power supplies within the test equipment that provide power to the DUT. Care must be taken when routing the DPS signals within the load board and to the DUT; the key design criteria are to ensure that the required voltage, current, ripple, and noise reduction specifications are met. The regulated power is provided by the test system, which connects to the DUT via supply lines. Because of the current drawn by the DUT at run time and the nonnegligible resistance of the interconnect between the source in the test head and the DUT socket terminal (including the load board traces), the typical interconnect scheme uses Kelvin connections to minimize the voltage drop along the traces and to provide the required regulated voltage level at the DUT test point. When the device is an SoC comprising RF, analog baseband, and digital signals, these signals must be routed along with the power supplies. DPS routing design is not as complex as that required for RF or high-speed digital signals, but there are many trade-offs. The power supply lines have side effects that can be detrimental to acquiring the desired signal. For
example, in an effort to reduce ripple added by the load board, load capacitance can be increased. However, when load capacitance is increased, it introduces current noise and can impact the tester’s ability to make low-current measurements.
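As a rough illustration of why the Kelvin (force/sense) scheme described above matters, the sketch below (not from the text; all values are assumed) estimates the voltage lost along the force path when the supply cannot sense at the DUT.

    # Voltage drop along the DPS force path; all values are assumed examples.
    R_PATH = 0.25    # assumed round-trip force-path resistance, ohms
    I_LOAD = 0.40    # assumed DUT supply current, amps
    V_SET = 2.80     # programmed supply voltage, volts

    v_drop = I_LOAD * R_PATH
    print(f"Without sensing at the DUT, the supply pin sees about "
          f"{V_SET - v_drop:.2f} V instead of {V_SET:.2f} V (a {v_drop * 1e3:.0f}-mV drop).")
    # With a Kelvin connection, the sense line measures the voltage at the DUT
    # socket, and the supply raises its output until the DUT pin sits at V_SET.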
11.3.3.1 Device Power Supply Filtering
The output impedance of an ideal power supply is zero. In reality, a power supply presents a high impedance at high frequencies. Therefore, a sudden current change in the DUT will cause a voltage change (noise) at the DUT supply pins. In addition, changes in the supply voltage will influence the generated or received signal and thus increase the noise level. Bypass capacitors are used to lower the impedance of the power supply, thereby reducing noise [6]. Capacitors can be placed between the power supply pin and ground pin of the DUT; this connection should be as short as possible. To filter high-frequency noise, select several small capacitors with resonant frequencies spanning the frequency band of interest. Use a large bulk capacitance to filter low-frequency noise, which often manifests itself as power supply droop. For optimum noise reduction, use ceramic and electrolytic capacitors in parallel as bypass capacitors. The electrolytic capacitors filter low-frequency surge currents, and the ceramic capacitors filter high-frequency surge currents. When doing this, ensure that the ceramic capacitors are placed close to the DUT. Electrolytic capacitors (10 to 100 µF) cover the low-frequency range and can be positioned anywhere on the DUT board [7]. At a minimum, the following tester device power supplies should be filtered on the load board [7]:
• Power supply for test circuit components;
• Power supply for pull-up;
• Power supply for relays;
• DUT reference voltage;
• Termination bias voltages.
11.3.4 Components
Many different types of components are found on load boards. Resistors, capacitors, inductors, relays, operational amplifiers, filters, splitters, combiners, and various logic devices are often needed to enable signal conditioning and routing for production testing. At various times, engineers build the majority of their test system on the load board. Although not a wise choice, this is sometimes unavoidable. The general rule is to keep load boards as simple as possible, using mainly passive components. This reduces the probability of failure of active components and makes debugging simpler. This section describes the various
components used on load boards and provides some useful information when designing load boards.
11.3.4.1 Capacitors
Capacitors are used in many places on load boards and are likely the most common component found there. For each use, some types of capacitors fit better than others, and selecting the correct capacitor type from all of those available is key to proper circuit and load board operation [8]. The two most common uses for capacitors on load boards are as power supply filter (bypass) capacitors (Section 11.3.3.1) and as RF decoupling capacitors (ac/dc separation). For RF decoupling, connect a small high-frequency decoupling capacitor as close as possible to each power pin of the DUT. The optimum value for this capacitor depends on the specific application; however, a ceramic capacitor of 1 to 22 nF is often a good choice. Surface-mount capacitors provide the best results because of their reduced lead inductance [7].
Dielectric Constant
Dielectric constant, or relative permittivity, εr, is the material property that determines the value of a capacitor. Capacitance is determined by the following equation:

C = εr ε0 A / t    (11.3)
where ε0 is the permittivity of free space (vacuum), and A and t are the area and thickness, respectively, of the dielectric material. Capacitors are produced as parallel plates, rolled dielectrics, or, most commonly, multilayers. Regardless of geometry, the higher the dielectric constant, the higher the value of capacitance. There are many types of capacitors and dielectrics; however, only a few of them are common on load boards. These are presented in Table 11.6 along with their frequency of operation and most common uses on load boards [9].

Table 11.6 Capacitors and Dielectric Materials Commonly Used in Load Boards

Capacitor Type   Dielectric         Dielectric Constant   Frequency of Operation   Load Board Use
Ceramic          Barium titanate    10–100,000            1 kHz–1 MHz              dc and general filtering; wide frequency range; large capacitance to size; most common on load boards
Electrolytic     Aluminum oxide     3.0–9.0               dc–1 kHz                 High-power filtering; high capacitance to size
Electrolytic     Tantalum oxide     20.0–28.0             dc–1 kHz                 High-power filtering; high capacitance to size

Ceramic Capacitors
Most load boards are abundant with ceramic multilayer capacitors. These are low-cost, highly temperature stable, and reliable capacitors, commonly made from barium titanate or doped barium titanate, with dielectric constants that can range up to 100,000. Load board applications for these capacitors are general-purpose filtering, bypass, and coupling for general-purpose signals, and high-frequency operation where variations caused by temperature are acceptable. They are primarily designed for use in applications where small size and large values of capacitance are required [8].
Electrolytic Capacitors
Aluminum oxide and tantalum oxide are two dielectrics that are formed as a thin film on foil, rolled with an electrolyte into a cylinder, and then sealed to form an electrolytic capacitor. The electrolyte is continuously lost throughout the life of the capacitor through the end seals of the capacitor body. This has little effect on capacitor reliability, but when enough electrolyte is lost (usually in the range of 40%), the capacitor is considered worn out. Applications of this type of capacitor are filtering of the low-frequency components of pulsating dc power; accuracy is not a large requirement. These capacitors are available in both polarized and nonpolarized types. The nonpolarized type is used when phase reversal is common, such as when filtering and coupling. If the sum of the ac and dc voltage components exceeds the rated value of the capacitor, or if the peak ac value exceeds the dc value, overheating and damage will occur. Because of this, they should be used at no more than 80% of their rated voltage so that surges can be safely handled [8]. They are used only when high-power handling is required, or where a lot of load board space is available. In addition, most electrolytic capacitors are leaded, rather than surface mounted.
Equivalent Series Resistance
Aside from the general concept of capacitance, the next most important parameter to consider is equivalent series resistance (ESR). No matter how high in quality a capacitor is, it always has additional components of inductance and resistance. The inductance is most pronounced in leaded capacitors, but the resistance is always present and needs to be considered. Figure 11.7(a) shows this basic circuit model of a capacitor. Note that the capacitive and inductive reactance values depend on frequency while the resistance does not. In Figure 11.7(b), impedance (Z) is plotted versus frequency. At low frequencies, the impedance of the capacitor [see the circuit in Figure 11.7(a)] is set by the capacitance; at high frequencies, it is set by the inductance. The dip in the curve reaches a finite minimum at the resonant frequency, where the impedance equals the resistance of the resistor. The measured value of resistance at this point is termed the equivalent series resistance. This value is very important in load board design because it limits the power-handling capability of the capacitor: the resistance causes internal heating of the capacitor, limiting its effectiveness on the load board. The parasitics just described cause a capacitor to have maximum decoupling (minimum impedance) at a single frequency known as its resonant frequency; the resonant frequency depends on the size of the capacitor and can be found in capacitor data sheets. To get decoupling over a wide frequency range, it is necessary to use at least three capacitors of different sizes in parallel to overcome the parasitic effects of the resonant tank circuit that the parallel capacitors form [1].
Figure 11.7 (a) Circuit model of a capacitor (C in series with L and R) and (b) dependence of impedance on frequency, based on the circuit in part (a), showing capacitive behavior below resonance, inductive behavior above resonance, and a minimum equal to R at the resonant frequency.
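The series R-L-C model of Figure 11.7(a) can be evaluated directly; the sketch below (not from the text) uses assumed, ballpark values for a small ceramic capacitor to show the capacitive region, the inductive region, and the resistive floor at self-resonance.

    import math

    R = 0.05      # equivalent series resistance, ohms (assumed)
    L = 1e-9      # equivalent series inductance, henries (assumed)
    C = 100e-9    # nominal capacitance, farads (assumed 100 nF)

    def z_magnitude(f):
        """Impedance magnitude of the series R-L-C model at frequency f (Hz)."""
        w = 2.0 * math.pi * f
        return math.sqrt(R ** 2 + (w * L - 1.0 / (w * C)) ** 2)

    f_res = 1.0 / (2.0 * math.pi * math.sqrt(L * C))   # self-resonant frequency
    print(f"self-resonance near {f_res / 1e6:.1f} MHz, |Z| there = {z_magnitude(f_res):.3f} ohm")
    for f in (1e6, 1e7, 1e8, 1e9):
        print(f"{f / 1e6:>7.1f} MHz: |Z| = {z_magnitude(f):.3f} ohm")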
Additionally, when considering ESR, the values differ even among surface-mount ceramic capacitors. Often, a 1-µF capacitor is placed in parallel with a 100-nF capacitor. One would expect the 100-nF capacitor to have a negligible effect; however, its ESR is much lower, so it performs best at high frequencies, whereas the 1-µF capacitor provides the necessary filtering at low frequencies. Note, however, that as higher capacitance values are needed, it becomes difficult to obtain low ESR values.
11.3.4.2 Resistors
Common uses for resistors on load boards are for voltage divider circuits, pull-up circuits, and RF resistive dividers. The most important resistor specifications to consider are the tolerance (expressed as a percentage) and the power-handling rating. A resistive divider can be used to split power in RF signals, and it is quite broadband in frequency response. A two-way resistive divider is shown in Figure 11.8. The trade-off for the simple design of the resistive splitter (compared to a hybrid power splitter) is that 6.04 dB of signal loss occurs between the input port and either of the two output ports. In general, an N-way splitter can be made with the resistors having values calculated by

R = Z0 (N − 1) / (N + 1)    (11.4)
where Z0 is the characteristic impedance of the circuit. It is interesting to note that a resistive divider’s frequency response is broadband.
Figure 11.8 A two-way resistive divider (a Z0/3 resistor in each of the three legs) for splitting RF signal power.
Some devices cannot drive a signal into the characteristic impedance (typically 50Ω) provided by the test system [7]. Another use for resistors is therefore to place them in series with the output of the DUT in high-speed CMOS applications. The series placement of the resistor, Rs, however, causes the voltage of the signal output from the DUT, as seen by the tester, to be reduced to

V0,tester = V0,DUT × Z0 / (Rs + Z0)    (11.5)
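A short sketch (not from the text) evaluating (11.4) and (11.5); the 33-Ω series resistor in the last line is an assumed example value.

    # N-way resistive splitter legs per (11.4) and the series-resistor
    # voltage reduction per (11.5), in a 50-ohm environment.
    Z0 = 50.0

    def splitter_resistor(n):
        """Series resistor in each leg of an N-way resistive splitter, (11.4)."""
        return Z0 * (n - 1) / (n + 1)

    def voltage_at_tester(v_dut, r_s):
        """Voltage seen by the tester when Rs is placed in series, (11.5)."""
        return v_dut * Z0 / (r_s + Z0)

    print(f"2-way splitter legs: {splitter_resistor(2):.2f} ohm (Z0/3, as in Figure 11.8)")
    print(f"3-way splitter legs: {splitter_resistor(3):.1f} ohm")
    print(f"Rs = 33 ohm drops a 1.0-V DUT output to {voltage_at_tester(1.0, 33.0):.2f} V at the tester")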
11.3.4.3 Inductors
Inductors are primarily used in load board power supply filtering to aid in noise reduction. They are also used to connect ground planes when more than one ground plane is used. In this case, they are in the form of a ferrite bead. The ferrite bead is used to connect the ground planes rather than a simple conductor because the ferrite bead acts as a resistor at high frequencies and dissipates high-frequency signal noise in the form of heat. This keeps noise from coupling across ground planes.
11.3.4.4 Passive Component Sizes
The most common package style for passive components such as capacitors, resistors, and inductors is the surface-mount package. The common size designation for these is based on the dimensions; 0805 and 0603 are just two of the common designations, where 0805 refers to a case whose length and width are 0.08 inch by 0.05 inch. Likewise, 0603 has a case with a 0.06-inch length and a 0.03-inch width. Package sizes of 0402 and 0201 are also available, but a word of warning is that manual on-board replacement of these small sizes can be quite difficult.
11.3.4.5 Relays
Relays are often used on load boards to allow multiple tester resources to connect to a DUT or to allow multiple DUTs to connect to a common tester resource, as in the case of multisite testing. One of the simplest things that can be done to benefit the load board design is to use sockets to ease replacement of relays. Because relays are mechanical and many devices are tested each hour, it is inevitable that relays need replacing on a regular basis. When relays are used, it is critical to connect to a power supply in the tester that can provide enough current to operate the relay. If more than one relay may be active at the same time, it is necessary to consider that when planning. In fact, as a worst-case scenario, load board designers usually plan to provide enough current for all relays to be active at the same time. The required current draw of a relay can be found from the relay manufacturer’s specifications and the following equation [1]:

Irelay = Vpower / Rcoil    (11.6)
where Vpower is the operating voltage of the relay and Rcoil is the coil resistance of the relay. To plan adequate current supply for a number of relays that may be simultaneously active (N relays, for example), the following equation must be used:

Imax = N × Irelay    (11.7)
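The relay budget of (11.6) and (11.7) is a one-line calculation; the sketch below (not from the text) uses assumed coil values, since the real numbers come from the relay manufacturer's data sheet.

    # Worst-case relay supply budget per (11.6) and (11.7); values are assumed.
    V_POWER = 5.0     # relay operating voltage, volts (assumed)
    R_COIL = 178.0    # relay coil resistance, ohms (assumed)
    N_RELAYS = 24     # relays that could be energized simultaneously (assumed)

    i_relay = V_POWER / R_COIL     # (11.6) current per energized relay
    i_max = N_RELAYS * i_relay     # (11.7) worst case: all relays active at once
    print(f"per-relay current: {i_relay * 1e3:.1f} mA, worst-case total: {i_max * 1e3:.0f} mA")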
Multiple types of relays are available. For low-speed digital and analog signals, the most common choice is to use mechanical reed relays. For RF signals, a better choice for added noise immunity may be to use magnetic latching relays. Magnetic latching relays operate in a slightly different manner than traditional relays. An initial magnetic pulse is required to set the relay into a state, and it will remain there (even after power is removed) until an opposite magnetic pulse is applied to place it in the opposite state. The use of the fixed magnetic state makes this relay very resistant to false switching due to noise.
11.3.4.6 General Comments on Component Placement
When placing components on the load board, it is important to be aware of the contactor, contactor housing, load board stiffener, cable routing, and any keep-out areas. In some cases, it may be beneficial to mount components on the rear (tester side) of the load board. This is commonly done when designing for a wafer prober interface. If using leaded components, it is also advantageous to mount them on the rear to allow access to the leads from the front for easy debugging. In Chapter 9, Figure 9.4 shows how load board components can be placed under the contactor housing. This is sometimes necessary in RF applications so that impedance matching components can be placed as close to the DUT as possible.
11.3.5 Connectors
SMA and, more recently, SMP connectors are the types most commonly seen on production load boards, both to connect the tester to the load board and to make connections between areas of the load board with complex layouts. SMP types are used because of their small size and are very useful in high-density applications, such as a full quad-site transceiver load board with three or more RF connections on each DUT. It is well worth the extra money spent on quality connectors for production testing. Through repeated connecting and disconnecting of, for example, a load board, connectors exhibit mechanical wear; gold plating is worn off and electrical properties change. To minimize potential problems, it is imperative to keep the connectors in a test system clean (using a lint-free swab and rubbing alcohol, for example) and to ensure that any nuts or
collars on the connectors are tightened to the proper torque specification provided by the manufacturer [10] (see Appendix D). Through-board (in which the connector’s center pin goes through the load board), edge-mount (edge launched) and surface-mount (vertical- or surface-launched) connectors are the most common connector styles used on load boards. Through-hole styles, with legs, provide the added strength that is required to withstand multiple torquing and loosening of many RF connectors. Because the RF signal is usually supplied from the tester side of the load board, through-board RF connectors are preferred. When laying out the load board, connectors for the RF signal should be as close to the DUT (contactor) as possible. This decreases the amount of insertion loss and ripple caused by mismatch. In many cases the connector location is constrained by the tester configuration, and the contactor location is constrained by the location of the handler plunger. This does not give the load board designer a great deal of flexibility in terms of placement [7].
11.3.6 Cables
Four primary types of cables are used for RF signals: flexible, conformable, semirigid, and semiflexible. (See Appendix C.) They are typically not used on production testing load boards, except in the case when a daughter board is used in conjunction with the load board or in the case of special constraints or complex signal routing. However, they are often used to connect the load board to the test head. Typical load board connections to cables are SMA or SMP. Cables should be routed so that they do not interfere with load board components, the handler docking plate, or the handler. The cables should be clamped down if they are long or may interfere with other parts. The most common electrical performance parameters to consider when choosing cables are insertion loss, VSWR, crosstalk (leakage), and maximum frequency of use.
11.3.7 Vias
Vias are holes that are made either all the way through one or more layers of the load board or only partially through the load board. They are used to route signals between layers or from the top to the bottom of the load board. As such, they must be plated with metal after they are drilled. To preserve the integrity of the electromagnetic wave traveling through the PC board structure, every signal via must be accompanied by one or more ground vias [7].
There are three main types of vias. All of them are machined using technologies such as mechanical drilling and laser drilling. Figure 11.9 shows these types of vias as applied to a multilayer load board.
11.3.7.1 Through-Hole Vias
Through-hole vias are the most common type. They pass through the entire load board and all of its layers. The main purpose is to provide a signal path from one side of the load board to the other. These are almost always created by means of mechanical drilling.
11.3.7.2 Blind Vias
Blind vias extend only partially through the load board. They route signals between the top or bottom and the inner layers of the load board. Many different methods are used to create blind vias, such as controlled-depth drilling, photo imaging, and laser drilling [3]. All of these methods require more time and precision than simple mechanical drilling and are therefore more costly.
11.3.7.3 Buried Vias
Buried vias route signals between the inner layers of a multilayer load board. Production of buried vias is accomplished in the same manner as blind vias (when the load board is still in the individual layer stage). It therefore is also more costly than the simple mechanical drilling of through-hole vias. Because buried vias are not visible from either surface, they are virtually impossible to troubleshoot after the load board is assembled.
Figure 11.9 Types of vias (through-hole, blind, and buried) used in a multilayer load board.

11.4 Mechanical Design Considerations for Load Boards
Producing a load board is a synergistic effort entailing material, electrical, mechanical, and thermal criteria. These criteria all overlap and are somewhat dependent on each other. However, there are a few topics that are primarily mechanical, as discussed here.
The physical load board is a printed circuit board. But, because it will have a mechanical handler pressing devices into place for testing, it must have some strength to withstand the repeated contact, often millions of contacts in its lifetime. To accomplish this, the load board is attached to a metal frame, which increases its structural integrity. This is often referred to as a stiffener. Without a stiffener, solder connections, multilayer board bonding, and vias can break due to the repetitive contacting motion of the handler coming down onto the load board.
11.4.1 Keep-Out Areas
One of the reasons that it is critical to know which handler or wafer prober will be used in the final testing is to design the load board with enough clearance to accept the interface to this equipment. Each handler or prober manufacturer will provide detailed specifications about their product including mechanical drawings that show the necessary clearances that are needed to achieve proper docking without mechanical interference. Sometimes, for example when testing at a test house, multiple handlers may be available. If there is any possibility that multiple handlers may be used, then plan for that by designing the load board with all possible keep-out areas provided. Keep-out areas are also required because of cable connections, especially RF connector cables. These keep-out templates are provided by the specific connector manufacturer. If connectors are being placed near the socket in an area that will be covered by a thermal air stream chamber for temperature characterization, then use right-angled connectors to allow the cables to lie flat along the load board and not interfere with the thermal chamber being placed over the contactor. If right-angled connectors are used, do not place any components in line that would prevent the cable from attaching horizontally. Semirigid RF cables may need several centimeters of room before they can attain sufficient clearance over adjacent components [1].
11.4.2 Other Mechanical Design Considerations
Because the load board stiffener is usually conductive, and it is in intimate contact with the load board, make sure that if leaded components are used, they do not contact any part of the stiffener, causing short circuits. Concerning the contactor, if possible, use a socketed design or some sort of receptacle to allow fast and easy changeover of the contactor. Also, provide some sort of marking on the load board to indicate the orientation for insertion of the DUT, for example, provide a mark to show pin 1.
11.5 Thermal Design Considerations for Load Boards
The following discussion of thermal considerations is limited in scope to that of externally applied high- or low-temperature conditions, such as those used in characterization testing. It is important to be aware that under any environmental conditions, DUT or load board component heating due to high-current or high-RF power conditions can adversely affect expected measurement results and cause permanent load board damage. Those concepts are touched on in the section of this chapter that discusses power. In the discussion of load board materials, FR-4 was mentioned as a widely used, low-cost load board material. If the load board will be used for high-temperature testing, then a high-temperature (i.e., high-Tg) variation of FR-4 is recommended. When low-temperature testing is planned, it is necessary to be aware that moisture and humidity can also be problematic. To combat this, load boards are designed to accept nitrogen gas purge assemblies. Neglecting to design this into the load board can result in failure of load board components through moisture absorption, causing their performance characteristics to change, possibly impacting the perceived DUT performance. Often, high-temperature testing is performed by applying a hot air stream in the vicinity of the DUT. If this is to be done, it is good practice to place temperature-sensitive components on the underside of the load board.
11.6 Load Board Verification
After the load board has been fabricated, it is not yet quite ready for use in testing. A few items need to be checked out so that no damage is done to the load board, the tester, or the DUT. Before power is applied for the first time, inspect the load board. Look for visible solder flaws such as cold solder joints. Examine all electrolytic capacitors for correct polarity. Measure the resistance between the DPS force and ground connections to confirm that a short does not exist. After these rudimentary tests are done, more extensive testing of the load board should be performed. This can be done manually in the case of low pin count devices, but a better method is to make use of the test system itself. A simple test program can be written to perform basic current and voltage tests, as in the case of continuity. After all of this is done, apply power to the load board and measure various points on the board to ensure that the voltages are as expected.
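The bring-up checks just described might be scripted along the following lines. This is only a hypothetical sketch: the tester object, its method names, and the pin and probe-point names are placeholders, since every ATE platform exposes its own programming interface.

    # Hypothetical load board bring-up routine; "tester" and its methods are
    # placeholder names, not a real ATE API.
    def verify_load_board(tester, dps_pins, probe_points):
        # Confirm no dead short between each DPS force line and ground.
        for pin in dps_pins:
            resistance = tester.measure_resistance(pin, "GND")
            assert resistance > 10.0, f"possible short on {pin}: {resistance:.2f} ohm"
        # Apply power, then spot-check expected voltages at labeled probe points.
        tester.apply_power()
        for point, expected in probe_points.items():
            volts = tester.measure_voltage(point)
            assert abs(volts - expected) < 0.05 * expected, f"{point}: {volts:.3f} V"

    # Example usage with assumed pin names and expected voltages:
    # verify_load_board(tester, ["VDD_RF", "VDD_DIG"], {"TP1": 2.8, "TP2": 1.8})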
11.6.1 Time Domain Reflectometry
In data-carrying traces of the load board, especially higher speed ones, knowledge of the total amount of time for the signal to travel between the tester and
DUT becomes significant. At high speeds, the signal direction changes so frequently that the signals being sent by the tester may collide with those being sent by the DUT. As a rule of thumb, it is only necessary to consider round-trip time as a potential problem if the switching period of the signal is less than double the propagation delay of the signal path between the tester and DUT [7]. To aid in determining round-trip time, time domain reflectometry (TDR) can be used. TDR is a measurement technique that can be performed using benchtop equipment or the test system (most test systems have TDR functionality built into them). TDR measurement traditionally has been used to determine impedance in high-speed digital boards and is a natural choice for characterization of the ATE load boards [11–14]. TDR is also used to create offsets that the tester uses to compensate for differences in differential pair traces on the load board. If TDR is available, a clear measurement of the trace impedance can be performed if careful forethought is designed into the board. Test traces can be designed into each signal layer, having connectors attached onto the top surface. If the exact length is known, then the exact impedance can be measured.
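The round-trip-time rule of thumb quoted above is easy to automate; the sketch below (not from the text) uses an assumed trace length and effective dielectric constant to flag the data rates at which round-trip timing becomes a concern.

    # Rule-of-thumb check: flag a problem when the switching period is less
    # than twice the one-way propagation delay. Length and er_eff are assumed.
    C0 = 3.0e8          # speed of light in vacuum, m/s
    ER_EFF = 3.3        # assumed effective dielectric constant of the trace
    LENGTH_M = 0.30     # assumed tester-to-DUT path length, meters

    t_prop = LENGTH_M * ER_EFF ** 0.5 / C0     # one-way propagation delay, seconds
    for rate_mbps in (50, 200, 800):
        period = 1.0 / (rate_mbps * 1e6)
        verdict = "consider round-trip timing" if period < 2 * t_prop else "negligible"
        print(f"{rate_mbps:>4} Mbps: period {period * 1e9:5.1f} ns vs "
              f"2*Tpd {2 * t_prop * 1e9:.2f} ns -> {verdict}")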
11.7 General Debugging and Design Considerations
There are many valuable tips and techniques for debugging that can only be gained through experience. Working with a load board vendor on one's load board design will also yield many good-practice techniques. Some of these are discussed next.
11.7.1 Probe Points
Design labeled contact pins, pads, or vias at accessible locations on the surface of the load board so that multimeter or spectrum analyzer and oscilloscope probes can quickly determine the signal level and shape. This can save numerous hours of falsely chasing the source of failing DUTs.
11.7.2 Reference Designators
Reference designators are the labels given to components on the schematic. These can be silkscreened or, using less expensive modern technology, printed directly onto the load board surface. Needless to say, this can aid in quickly locating a portion of a circuit and verifying it based on the schematic. On a multisite load board, it is a good practice to have the components numbered relative to each site. This allows for easy debugging on both the schematic and the load board. For example, a power supply filter capacitor that is identically replicated on each site of a quad-site load board would have reference designators of C101, C201, C301, and C401 for sites 1 through 4, respectively.
The benefits of this become immediately apparent during the schematic and layout reviews.
11.7.3 Component Layout
A schematic does not readily convey component sizes. For very densely populated load boards, it is necessary to consider these sizes, along with the keep-out areas determined at the beginning of the design stage. The capacitors used in the circuits for filtering power supply noise are usually placed close to the DUT, while other general-purpose capacitors are distributed relatively evenly throughout the load board. To avoid mistakes when initially populating the load board (if done manually) or when repairing it, polarized components (such as electrolytic capacitors) should all be oriented in the same manner.
11.7.4 Schematic and Layout Reviews
Usually, a load board vendor does the bulk of the design and layout. However, the test engineer generally knows the most about what signals will be needed and what instruments will be used. For this reason, it is highly recommended that the test engineer be actively involved in the review of the schematic and layout of the load board.
11.7.5 Start with an Evaluation Board
Typically, before the load board is even finished, the test engineer is able to use a soldered-down device evaluation board to perform many of the tests. This board is usually optimized for performance of the device. For this very reason, it is worthwhile to use this evaluation board as the starting point and foundation of the load board design. This takes a lot of the device-dependent unknowns out of the design process, because the device manufacturer has already overcome these performance hurdles. This becomes more important as the signals increase in frequency, such as with RF and high-speed digital testing.
References
[1] Seat, S., "Guidelines Facilitate Load Board Design," Test and Measurement World, October 2002.
[2] Barabi, I., "Designing DUT Boards for IC Test," Evaluation Engineering, August 1999, pp. 64–71.
[3] Ritchey, L. W., Right the First Time: A Practical Handbook on High Speed PCB and System Design, Volume One, Glen Ellen, CA: Speeding Edge, 2003.
[4] Ayouth, Z., "Integrated Design of Custom Sockets and Load Boards," Evaluation Engineering, August 2002.
[5] Cirexx International, "Design for Printed Circuit Boards," Application Note, 2005.
[6] Agilent Technologies, "Beginner's Guide for Test Circuit Development," Application Solution Note ASN-2, 1994.
[7] Agilent Technologies, "DUT Board Design Guide," Part Number E7050-91037, Edition 5.1.0, 2005.
[8] Smith, R., "Capacitor Selection for Loadboard Designs," Agilent Technologies Technical Paper, 2003.
[9] Ott, H., Noise Reduction Techniques in Electronic Systems, 2nd ed., New York: John Wiley & Sons, 1988, p. 138.
[10] Hewlett Packard, "Coaxial Systems, Principles of Microwave Connector Care, for Higher Reliability and Better Measurements," Application Note 326, 1988.
[11] Smolyansky, D., and S. Corey, "Printed Circuit Board Interconnect Characterization From TDR Measurements," Printed Circuit Design Magazine, May 1999, pp. 18–26.
[12] Smolyansky, D., "TDR Characterization of ATE Fixture Boards," Evaluation Engineering, November 2000.
[13] Hewlett Packard, "TDR Theory," Application Note 1304-2, 1988.
[14] Hayden, L., and V. Tripathi, "Characterization and Modeling of Multiple Line Interconnections from TDR Measurements," IEEE Trans. Microwave Theory and Techniques, Vol. 42, 1994, pp. 1737–1743.
12
Wafer Probing

More and more semiconductor manufacturers are trying to implement full test coverage of their SoC devices at the wafer level. The latest trends in the industry have forced many companies to implement this technology even though it is relatively expensive and demanding from an engineering point of view. Previously, most companies did only some dc probing to check the basic functionality of a die. In many cases the yield of a wafer was high enough to justify skipping any probing; it was more cost effective to package a few bad parts that were later filtered out during final testing. However, some manufacturers ship bare die to their customers, and those customers are typically not willing to accept failures of their own products due to untested components. This requirement led to the known-good die (KGD) approach. Also, complete modules, in which several die and passive components such as filters are built into a single package, are increasingly becoming the standard. This technology is also known as a multichip module (MCM)1 [1, 2]. In this case, it becomes ever more critical to ensure that all components are functional before the module is built.
1. The technological development beyond multichip modules is called system in a package or SiP. Even though the idea is similar (i.e., combining various active or passive components into one package), SiPs push the limit further by implementing a complete system into one package. For a Bluetooth SiP, for instance, this includes the RF radio, power amplifier, baseband, filters, and so forth. A multichip module, on the other hand, combines components such as RF radio, filter, and power amplifier but by itself would not be sufficient to work in its final application without additional circuitry or chips.
12.1 RF Wafer Probing

Simple dc probing is widely used on RF and analog devices to check basic functionality such as sleep current, current consumption in different modes, or leakage and continuity. However, only full specification testing can provide important information about the critical RF and analog parameters such as RF output power, adjacent channel power ratio (ACPR), receiver noise figure, IP3, or DAC and ADC linearity. Modern SoC transceivers with ZIF architecture have additional test requirements such as receiver blocking tests or dc receive/transmit offset calibration. The relatively low frequencies of baseband devices (input frequencies on the order of less than 20 MHz; including harmonics, the measurements are on the order of 60 MHz or less) make wafer probing of those devices a standard task. RF probing, however, can be challenging from an engineering point of view as well as from a cost point of view. The high cost is due to the need for an RF tester, RF probes, and proper shielding of the whole setup, for instance by performing the tests in a screen room.

A wafer prober is the robotically controlled equipment that handles the wafers. In the area of RF testing in particular, wafer probing has traditionally been avoided if at all possible. Early designs of wafer probes and wafer probe interfaces were unable to handle the parasitic capacitances and inductances seen at RF frequencies, and noise pickup was an additional problem. However, with the increasing cost of more complex packages, the advent of SiP and MCM, and the sale of KGD, it has become clear that probing is increasingly necessary. Furthermore, because various functioning die are incorporated into the final package, in a worst-case scenario a low-yield, inexpensive die could jeopardize the entire package, making the more expensive die in the package (plus the package itself) useless. This need has driven the advancement of RF wafer probing technology.
12.2 Yield of MCM Justifies Wafer Probing

In testing, the term yield, Y, usually expressed as a percentage, describes the ratio of passed (good) units to the total number of units tested:

Y = \frac{N_{passed}}{N_{tested}} \times 100\%    (12.1)

In the case of an MCM or SiP, when there are multiple die (units) in one package, (12.1) is extended to

Y_{total} = Y_1 Y_2 Y_3 \cdots Y_N    (12.2)

where Ytotal is the overall yield of the MCM, which is comprised of the individual yields of the N components making up the MCM. As an example, assume that a manufacturer builds an MCM with three active and two passive components. For instance, a power amplifier (PA), an RF transceiver (RFT), a baseband chip (BB), and two filters (SW1 and SW2) are assembled into one package. Next, assume that each single component has a yield of 95%. Using (12.2), the MCM yield becomes

Y_{MCM} = Y_{PA} Y_{RFT} Y_{BB} Y_{SW1} Y_{SW2}    (12.3)

Y_{MCM} = (0.95)(0.95)(0.95)(0.95)(0.95) = 0.77    (12.4)
That is a 77% yield for the module, despite the fact that each die had a respectable yield of 95% on its own. This realization can be sobering, and it points out that the cost saved by not packaging the 23% of modules that would contain a bad die makes it economically attractive to implement on-wafer probing that covers the full specification of each individual device.
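Because (12.2) is just a running product, it is easy to script when weighing the cost of full wafer-level test against the cost of building modules that contain at least one bad die. The short sketch below is illustrative only; the function and variable names are arbitrary, and the yields are the hypothetical 95% values from the example above.

```python
# Composite yield of a multichip module (MCM) per (12.2):
# the module yield is the product of the individual component yields.

def mcm_yield(component_yields):
    """Return the overall module yield, given per-component yields (0..1)."""
    total = 1.0
    for y in component_yields:
        total *= y
    return total

# Hypothetical example from the text: PA, RF transceiver, baseband, two filters,
# each with a standalone yield of 95%.
yields = {"PA": 0.95, "RFT": 0.95, "BB": 0.95, "SW1": 0.95, "SW2": 0.95}

y_mcm = mcm_yield(yields.values())
print(f"Module yield: {y_mcm:.2%}")                # ~77%
print(f"Modules lost to bad die: {1 - y_mcm:.2%}")  # ~23%
```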
12.3 Probe Cards

The device that is used to make contact with the wafer is called the probe card. It is a complex PCB that contains a customized arrangement of probe needles or probe tips, allowing all of the necessary tester resources to contact all of the bond pads of one or more devices simultaneously [3]. The main engineering challenges are calibration methods and accuracy, control of the impedance from the tester to the probe tip, measurement repeatability, performance degradation due to wear of the probe card or other factors such as dirt, cleaning methods, cleaning intervals, or planarity, as well as the mechanical interface between the tester and the probe card. Different methods are used to overcome these challenges, and it is up to the user and the requirements to determine which method is best.

To help identify the best probe card technology for a specific application, Table 12.1 provides key parameters for the three major types of probe technology: cantilever needle probes, coplanar probes, and membrane probes. Probe technologies are also available that combine, or try to combine, the benefits of these three types into one, and some vendors use different, trademark-protected names for these technologies.
12.4 Types of Probe Cards

Test floor personnel are always interested in the number of contacts that can be made with one probe card before it has to be replaced. Depending on the technology of the probe card, this number is typically between 500,000 and 1 million. In practice, a semiconductor company does not replace the probe card as soon as the guaranteed number of insertions is reached. Typically, the performance of the probe card is monitored, and the card is used until its performance degrades to a point that can no longer be tolerated. The exact amount of acceptable overuse is difficult to quantify, since it depends, among other factors, on the type of probe card, the cleaning procedure, oxidation of the bond pads, and the contact pressure.

Table 12.1 gives a brief overview of some of the key parameters for the different types of probe cards. Note that it is only a rough overview; there are many manufacturers, each trying to optimize some of these parameters, so an individual parameter can differ substantially from what a particular vendor specifies. The following sections focus on the key areas in which each of these probe types is used.

Table 12.1
Probe Card Types and Key Parameters

Parameter                  Cantilever Needle Probe            Coplanar Probe        Membrane Probe
Serial inductance (nH)     4                                  Varies                <0.2
Number of pins per card    <15 RF, 100s of IO pins            <10                   <100 RF, up to 300 IO pins
Minimum pitch (µm)         100                                100                   250
Maximum frequency (GHz)    <6                                 Up to 110             12, in some applications up to 20
Impedance control          No                                 Yes, 50Ω              Yes, typically 50Ω, but can be custom built with impedances from 10 to 100Ω
Number of insertions       >1 million with proper cleaning    100,000 to 500,000    >1 million

12.4.1 Cantilever Needle Probes

Cantilever needle probes are by far the most common type of probe card in use today, even though it was speculated for years that cantilever needle probes were
at the end of their life and would be replaced by other technologies. So far no technology has been developed that is cheaper than needle probe cards, an argument that has made test engineers willing to compromise on other parameters. Depending on the vendor, custom-made SoC probe cards can be purchased for as little as $2,000 with about 100 needles per card. The possible needle count per probe card is high enough to make this a good choice for parametric testing of devices that have a high pin count, or for applications that run many devices in parallel, for instance memory devices, where it is common to test 32 devices in parallel. Some manufacturers have been successful in increasing the bandwidth to 6 (sometimes 8) GHz, with the result that cantilever cards are used in some applications that require RF or fast digital signals to be sourced or measured. The remaining challenge is that most vendors do not guarantee the impedance of the needle all the way to the tip of the probe.

Most needle probes are made of tungsten, which is why needle probes are sometimes called tungsten probes. The key benefits of tungsten are its stiffness, the hardness of the tips, and good wear characteristics. Other materials used for the needles are beryllium-copper (BeCu) and tungsten-rhenium.

The major reason for a loss in probe yield is dirt on the probe tips. To prevent the buildup of dirt, the probes are cleaned during production after a certain number of devices have been probed. This is done automatically by the wafer prober after, for instance, 100 die have been probed; the probes are cleaned with an abrasive gel before normal die testing is resumed.

12.4.2 Coplanar Probes
Coplanar probes are frequently used in labs to characterize RF devices with smaller pin counts. The RF performance is excellent, and so is the impedance control all the way to the tip of the probe. Most coplanar probes for RF come in either a GS (ground–signal) or GSG (ground–signal–ground) configuration [1]. This configuration is intended for devices that have a ground pad to the right and/or left of the signal pad. It is an important feature for RF testing and helps to avoid possible oscillations of the device as well as ground loops.

If coplanar probes are used on probe cards for production test of SoC devices, the probe card is seldom built with coplanar probes only. Instead, the coplanar probes are used for the high-speed digital or RF pads, while regular needle probes are used for the remaining digital, analog, and dc pins. Such a configuration makes sense from an electrical point of view but can be challenging from a mechanical point of view. The needle probes flex differently than the coplanar probes, which means that over time the planarity between the coplanar and needle probes can be lost. The result is continuity problems, and the probes have to be realigned or, in the worst case, the expected
number of touchdowns cannot be reached because realignment does not fix the planarity issues. Due to the mechanical challenges of a probe card with coplanar technology, the cost can be quite high, typically on the order of $10,000 for one probe card.

12.4.3 Membrane Probes
Membrane probes were developed to probe devices with stringent RF or high-speed digital needs. In most cases they are built with a limited number of probes that are optimized for RF performance and strict impedance control. Depending on the manufacturer, impedances other than 50Ω can be ordered to match the device requirements. The remaining probes on the probe card are built to make contact with the analog, digital, and power supply pins. The number of these low-speed, general-purpose or power supply probes is typically much higher than the number of RF probes.

Typical applications for membrane probes are WLAN, Bluetooth, or WiMAX devices, where accurate and repeatable RF measurements are the key objective. Because of these strict RF requirements and the high cost of nonrecurring engineering (NRE), the price of membrane probe cards is typically very high. For a WLAN RF radio with 64 probes (4 of them for RF), the cost can run up to $20,000 for one probe card. Table 12.1 shows that more than 1 million insertions are possible for this kind of probe card. However, membrane probe cards wear out more quickly if the bond pads of the device are oxidized and the probe card has to be cleaned more often than planned, because the NiAu (nickel-gold) tips of membrane probe cards are sensitive to the cleaning procedure.
12.5 Selecting a Probe Card

A few key items need to be considered when choosing a probe card technology. The following sections discuss these selection considerations.

12.5.1 Frequency Range
Deciding on the frequency range is one of the easier tasks. The user merely needs to go through the test document that lists the minimum test requirements; whatever is specified as the highest frequency determines the frequency range of the probes. Even though frequencies are increasing for some newer applications such as UWB, most consumer devices, such as WLAN 802.11b/g or Bluetooth, still operate in the frequency range up to 2.4 GHz. It is important to note that many devices require harmonic testing for their
transmitters, especially third-order harmonics, which means that in the case of a device that operates at 2.4 GHz, the third-order harmonic frequency will be at 7.2 GHz.

12.5.2 Number of Pins
The number of pins is also easy to determine. The device specification lists all the signal and ground pins. This number can range from as low as 12 for a power amplifier to several hundred for a highly integrated SoC device. The total number of pins alone can be misleading; what matters more is how many RF pins versus general-purpose, low-frequency, and power supply pins have to be accommodated on one probe card. Taking the example of the power amplifier, the typical requirement is two RF pins, two or three logic pins, and two power supply pins, with the rest being ground pins. For a highly integrated WLAN transceiver, on the other hand, the number of RF pins can be as high as eight, with about 70 digital pins, eight power supply pins, and eight low-frequency analog pins. In some cases the test engineer implements the testing in dual- or quad-site mode, which of course means that the numbers just mentioned have to be doubled or quadrupled.
12.5.3 Impedance Control

Impedance control is an important point in the decision-making process. Many devices are not too sensitive to RF impedance mismatches. If that is the case, the test engineer has to take into account that the power delivered to the device is less than anticipated, because a portion of the sourced power is reflected back from the device input. Similarly, the RF power actually produced by the device is higher than the measured value, because part of it is reflected at the output. For measurements close to the sensitivity level of the device in particular, this can be a challenge, and greater care should be taken to ensure proper impedance matching.

Devices with high output power, such as power amplifiers or the transmitter outputs of more highly integrated devices, tend to oscillate due to mismatch effects. Oscillations not only invalidate the measurement but can even damage the device. Therefore, maintaining impedance control for these devices is of the highest importance. The exact requirement for impedance control depends on the application; however, a match on the order of –10 dB is in most cases acceptable for RF input and output ports.
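To get a feel for what a given match means in terms of power, the reflected and delivered fractions can be computed directly from the return loss. The sketch below is illustrative only; the –10 dB figure is simply the rule-of-thumb value quoted above.

```python
import math

def mismatch_from_return_loss(rl_db):
    """Return (|Gamma|, reflected power fraction, mismatch loss in dB) for a given return loss."""
    gamma = 10 ** (-rl_db / 20.0)                         # |Gamma| = 10^(-RL/20)
    reflected = gamma ** 2                                # fraction of incident power reflected
    mismatch_loss_db = -10 * math.log10(1 - reflected)    # reduction in delivered power
    return gamma, reflected, mismatch_loss_db

gamma, refl, loss = mismatch_from_return_loss(10.0)
print(f"|Gamma| = {gamma:.3f}, reflected = {refl:.1%}, mismatch loss = {loss:.2f} dB")
# For a -10 dB match: |Gamma| ~ 0.316, ~10% of the power is reflected,
# and the power delivered to the DUT is ~0.46 dB below the source setting.
```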
12.5.4 Decoupling and Current Limitations

Decoupling of the dc lines is another important factor to consider. When working with high-power devices in particular, it is important to place decoupling
capacitors as close as possible to the device to avoid oscillations or noise problems. This is especially important for devices that operate in pulsed modes, such as a GSM power amplifier, and have a high peak current draw. In the case of a GSM power amplifier, a current of 2.5A is not unusual.

The current limitations of the probe should be taken into account in any application but are a key consideration when the device operates in a pulsed mode. Cantilever probes typically can handle the most current, say, up to 1A or more, depending on the design of the needles (e.g., needle diameter). Coplanar probes are normally not used to supply current but to make contact with RF or analog ports (see Section 12.4.2); therefore, current limitations are normally not a limiting factor when coplanar probes are used, because a different probe type contacts the dc pads. Membrane probes have dc current ratings on the order of 300 mA. If the current draw is a challenge, the approach often chosen is to use more than one probe to make contact with the dc pad of the device (assuming that the pad size is sufficient to accommodate more than one probe). Also, most devices that require high current have more than one pad for supplying dc current, in which case several probes can be used to contact the dc pads of the device.

12.5.5 Inductance
The probes are used not only to apply RF, analog, and digital signals as well as dc power to the device, but also to provide the grounding of the device. Without proper grounding, an LNA degrades tremendously: the gain drops and the noise figure increases. Power amplifiers with ground problems tend to oscillate. These two examples show the importance of good ground connections, which can be achieved with low probe inductance. The best performance in terms of low inductance is found in membrane probes, with inductances as low as 0.2 nH.
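To see why fractions of a nanohenry matter, the ground-return impedance can be estimated from |Z| = 2πfL. The sketch below is illustrative only, using the inductance values from Table 12.1 and a typical 2.4-GHz operating frequency.

```python
import math

def inductive_impedance_ohms(freq_hz, inductance_h):
    """Magnitude of the impedance of an ideal inductor: |Z| = 2*pi*f*L."""
    return 2 * math.pi * freq_hz * inductance_h

for l_nh in (0.2, 4.0):   # membrane probe vs. cantilever needle probe (Table 12.1)
    z = inductive_impedance_ohms(2.4e9, l_nh * 1e-9)
    print(f"{l_nh} nH at 2.4 GHz -> {z:.1f} ohms in the ground return")
# 0.2 nH -> ~3 ohms; 4 nH -> ~60 ohms, enough to degrade an LNA's gain and
# noise figure or to destabilize a power amplifier.
```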
12.6 Tester to Wafer Prober Interface

After the decision about which probe card to use is made, the next step is to decide how to interface between the tester and wafer prober. This interfacing action is termed docking, and three methods are used: soft docking, hard docking, and direct docking, each of which is discussed next.

12.6.1 Soft Docking
Soft docking makes the connection between the tester and the probe card with cables. Even though this sounds simple, many modern SoC devices require hundreds of pins, some of them for RF and some of them for fairly high-speed
digital applications, for instance, PCI Express. Maintaining signal integrity between the test head and the probe card (which can be 5 or more feet away) is a challenging task, not to mention the added uncertainties of calibration errors and losses, troubleshooting errors, and bad connections. Overall, soft docking of the probe card is seldom applied because of its complexity and lack of reliability. Unlike hard docking or direct docking, however, it does not require expensive hardware, which makes it an option if a low COT is the highest priority.

12.6.2 Hard Docking
Hard docking uses an interface board, termed a prober interface board (PIB), to connect the tester and wafer prober. The prober interface board is a mechanical fixture that routes tester resources (digital, analog, dc, and RF) from the test head to the probe card so that those signals can be applied to the die. This is the most frequently applied method for wafer probing. Figure 12.1 shows an example of a prober interface board. With most wafer probers, the interface is horizontal with the wafer prober sitting below the test head of the tester. This means that the test head has to be lifted and then rotated so that it is upside down. As can be seen in Figure 12.1, the load board (1) connects on the top side with the test head and on the bottom side to the wafer prober interface board (2), which is sometimes also referred to as the pogo tower. The prober interface board then connects to the probe card (4), which is mounted into the wafer prober (3).
1. Load board   2. Prober interface board   3. Wafer prober   4. Probe card
Figure 12.1 Prober interface board. (Courtesy of Verigy.)
12.6.3 Direct Docking
Direct docking is when the test head of the tester connects directly to the probe card. It has the advantage of direct, short connections between the test head and the probe card. However, it also has a couple of disadvantages. First, there is no mechanical isolation between the test head and the probe station. Second, the space for components such as matching components or decoupling capacitors is limited to the space on the probe card. These two disadvantages typically outweigh the advantage of direct and short connections, so direct docking is not frequently used in a mass-production environment.
12.7 Calibration Methods for Measurements with Wafer Probing

The required accuracy of the measurement that is performed on-wafer determines how to calibrate the setup. The calibration of (low-speed) digital signals and power supplies is simple because the force and sense lines are simply extended from the test head through the prober interface board to the probe card. Arbitrary waveform generators and digitizers are typically not calibrated to the tip of the probe because the loss from the calibration plane to the probe tip is small. With RF signals on the order of 1 GHz or higher, this changes dramatically, and the test engineer has to decide how to compensate for the losses between the calibration plane and the tip of the probe. Three commonly applied calibration methods are discussed next: scalar calibration, S-parameter-based calibration, and calibration with calibration substrates. For more details on the overall concepts of some of these calibration methods (applied universally to packaged parts as well as die), see Chapter 8.

12.7.1 Scalar Loss Calibration
By far the most frequently applied method is the scalar correction, in which the loss between the calibration plane and the probe tip is determined at each test frequency and applied as a simple power offset to the sourced and measured levels. If phase information is not required, the accuracy is sufficient for most applications. It is also very cheap to implement and does not require expert engineering skills.
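As a rough illustration of what the scalar correction amounts to, the sketch below applies a per-frequency path-loss offset to the programmed source power and to the measured power. It is a simplified, hypothetical example; real testers store these offsets per port and frequency as part of the focused calibration discussed in Chapter 8, and the loss values used here are invented.

```python
# Hypothetical per-frequency scalar loss (dB) from the calibration plane
# to the probe tip, e.g., characterized once with a network analyzer or power meter.
path_loss_db = {2.4e9: 1.8, 4.8e9: 2.6, 7.2e9: 3.3}

def source_setting(desired_dbm_at_dut, freq_hz):
    """Raise the programmed source level to compensate for the path loss."""
    return desired_dbm_at_dut + path_loss_db[freq_hz]

def corrected_measurement(raw_dbm_at_instrument, freq_hz):
    """Refer a measured power back to the probe tip (DUT plane)."""
    return raw_dbm_at_instrument + path_loss_db[freq_hz]

print(source_setting(-30.0, 2.4e9))        # program -28.2 dBm to deliver -30 dBm at the tip
print(corrected_measurement(-12.5, 2.4e9))  # a -12.5 dBm reading corresponds to -10.7 dBm at the DUT
```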
12.7.2 S-Parameter-Based Calibration

The method of performing S-parameter measurements with a network analyzer to de-embed the calibration plane lies somewhere between the scalar correction and calibration with on-wafer standards in terms of required engineering skill, cost, and accuracy. The method is accurate when it is performed the right way, and it provides magnitude and phase information. It is
cheap to perform because all it requires is a network analyzer, some cables, and connectors. The uncertainty in this method is due to the fact that the whole probe card has to be disconnected from the prober interface board and reconnected after the measurement is performed. Maintaining the same quality of connection between the prober interface board and the probe card can be a problem due to dirt on the connectors, a different torque being applied to the connectors, and so forth. These problems make it a less favorable choice among test engineers, but when it is performed properly, it gives good results.

12.7.3 Calibration with Calibration Substrates
Using a calibration substrate is the wafer-probing equivalent of using socket calibration standards for testing packaged parts. In this case, a calibration substrate is either custom made to fit the footprint of the probes, or it is part of the wafer that holds the die to be tested. It has the standards "through," "short," and "match" built onto the substrate. To take advantage of these on-wafer standards, the tester uses the LRM (line–reflect–match) calibration method, instead of the more commonly used SOLT method, to calibrate all the way to the tip of the probe. This is by far the most accurate method. Unfortunately, it is also the most expensive and is rarely applied, because few devices require such accuracy. An advantage that is frequently mentioned for this method is that phase information is available due to the vector calibration to the tip of the probe. However, most devices do not require phase information for RF stimulus or measurement, which makes this less a necessity than a nice feature to have.
References

[1] Lau, W., "Measurement Challenges for On-Wafer RF-SOC Test," Proc. Int. Electronics Manufacturing Technology Symp., New York: IEEE, 2002, pp. 353–359.

[2] Amkor Technology, "System in a Package (SIP)," Technology Solution 101L, 2005.

[3] Schaub, K., and J. Kelly, Production Testing of RF and System-on-a-Chip Devices for Wireless Communications, Norwood, MA: Artech House, 2004.
Appendix A
Power and Voltage Conversions

The dBm unit is decibels relative to 1 milliwatt. To obtain a dBm value from a value of power in milliwatts, use the following equation:

P_{dBm} = 10 \log_{10}\left(\frac{P_{mW}}{1\ \text{mW}}\right)    (A.1)

To obtain power in watts from a power level specified in dBm, use the following equation:

P_{W} = \frac{10^{P_{dBm}/10}}{1000}    (A.2)

The dBW unit is decibels relative to 1 watt. To obtain a dBW value from a value of power in watts, use the following equation:

P_{dBW} = 10 \log_{10}\left(\frac{P_{W}}{1\ \text{W}}\right)    (A.3)

Note that (A.1), (A.2), and (A.3) are independent of characteristic impedance (Z0); hence, they will work for any impedance. If PW is broken down into its constituents, then

P_{W} = \frac{V^{2}}{Z_{0}}    (A.4)

Placing (A.4) into (A.2), we arrive at the relationship between voltage and dBm. Note that it is dependent on Z0:

V = \sqrt{Z_{0}\,(0.001)\,10^{P_{dBm}/10}}    (A.5)

Often, for cable TV applications, an impedance-dependent unit called VdBmV is used. It is defined as

V_{dBmV} = 20 \log_{10}\left(\frac{V_{mV}}{1\ \text{mV}}\right)    (A.6)

The argument of the log10( ) function is actually a voltage ratio, but the decibel concept was originally defined for power. Because power is defined as V^2/R, the V^2 term gives rise to the logarithmic multiplier of 20: 10 log(X^2) = 20 log(X). Also,

V_{dB\mu V} = 20 \log_{10}\left(\frac{V_{\mu V}}{1\ \mu\text{V}}\right)    (A.7)

Substituting (A.5) into (A.6), the following relationship is found:

V_{dBmV} = 10 \log_{10}\left(\frac{Z_{0}}{0.001}\right) + P_{dBm}    (A.8)

For a 50-Ω device or circuit,

V_{dBmV} = 46.99 + P_{dBm}    (A.9)

For a 75-Ω device or circuit,

V_{dBmV} = 48.75 + P_{dBm}    (A.10)
Tables A.1 and A.2 demonstrate the relationships among the various power and voltage values. Note that the relationship between power in dBm and in watts is the same regardless of impedance. For example, in Table A.1, where Z0 = 50Ω, –10 dBm corresponds to 0.1 mW. Referring to Table A.2, where Z0 = 75Ω, –10 dBm also corresponds to 0.1 mW. The differences between values in these two tables become apparent when Z0 is considered, as in voltage or power in units of dBmV. In a 50-Ω environment (Table A.1), –10 dBm corresponds to 71 mV, whereas in a 75-Ω environment (Table A.2), –10 dBm corresponds to 87 mV. A common reference point for every engineer is that 0 dBm is equivalent to 1 mW. Keeping this in mind will be handy for back-of-the-envelope calculations.

Table A.1
Relationship Between Power and Voltages in Linear and Logarithmic Scales at 50-Ω Characteristic Impedance

P (dBm)    P (W)           V (dBmV)    V (V)
–100       1.0 × 10^-13    –53.01      2.0 × 10^-6
–50        1.0 × 10^-8     –3.01       0.0007
–40        1.0 × 10^-7     6.99        0.002
–30        1.0 × 10^-6     16.99       0.007
–20        1.0 × 10^-5     26.99       0.022
–10        0.0001          36.99       0.071
–5         0.00032         41.99       0.126
–4         0.00040         42.99       0.141
–3         0.00050         43.99       0.158
–2         0.00063         44.99       0.178
–1         0.00079         45.99       0.199
0          0.001           46.99       0.224
+1         0.0013          47.99       0.251
+2         0.0016          48.99       0.282
+3         0.0020          49.99       0.316
+4         0.0025          50.99       0.354
+5         0.0032          51.99       0.398
+10        0.01            56.99       0.707
+20        0.1             66.99       2.236
+30        1               76.99       7.071
+40        10              86.99       22.36
+50        100             96.99       70.71
+100       1.0 × 10^7      146.99      22361
Table A.2
Relationship Between Power and Voltages in Linear and Logarithmic Scales at 75-Ω Characteristic Impedance

P (dBm)    P (W)           V (dBmV)    V (V)
–100       1.0 × 10^-13    –51.25      3.0 × 10^-6
–50        1.0 × 10^-8     –1.25       0.0009
–40        1.0 × 10^-7     8.75        0.003
–30        1.0 × 10^-6     18.75       0.009
–20        1.0 × 10^-5     28.75       0.027
–10        0.0001          38.75       0.087
–5         0.00032         43.75       0.154
–4         0.00040         44.75       0.173
–3         0.00050         45.75       0.194
–2         0.00063         46.75       0.218
–1         0.00079         47.75       0.244
0          0.001           48.75       0.274
+1         0.0013          49.75       0.307
+2         0.0016          50.75       0.345
+3         0.0020          51.75       0.387
+4         0.0025          52.75       0.434
+5         0.0032          53.75       0.487
+10        0.01            58.75       0.866
+20        0.1             68.75       2.739
+30        1               78.75       8.660
+40        10              88.75       27.39
+50        100             98.75       86.60
+100       1.0 × 10^7      148.75      27386
Table A.3 provides conversion formulas for the most often used units of voltage and power.
Table A.3
An Aid for Converting Between Volts, Watts, and dBm

From \ To      Voltage (V)                             Power (W)                      Power (dBm)
Voltage (V)    1                                       V^2 / Z                        10 log10[ V^2 / (Z × 1×10^-3) ]
Power (W)      sqrt(W × Z)                             1                              10 log10[ W / (1×10^-3) ]
Power (dBm)    sqrt(Z × (1×10^-3) × 10^(dBm/10))       (1×10^-3) × 10^(dBm/10)        1

Note: Z is impedance in ohms.
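The conversions in (A.1) through (A.10) and Table A.3 are simple enough to wrap in a few helper functions. The sketch below is illustrative only; the function names are arbitrary, and a 50-Ω system is assumed unless another impedance is passed in.

```python
import math

def dbm_to_watts(p_dbm):
    return 10 ** (p_dbm / 10.0) / 1000.0                      # (A.2)

def watts_to_dbm(p_w):
    return 10 * math.log10(p_w / 1e-3)                        # inverse of (A.2)

def dbm_to_volts(p_dbm, z0=50.0):
    return math.sqrt(z0 * 1e-3 * 10 ** (p_dbm / 10.0))        # (A.5)

def dbm_to_dbmv(p_dbm, z0=50.0):
    return 10 * math.log10(z0 / 1e-3) + p_dbm                 # (A.8)

print(dbm_to_watts(0))         # 0.001 W
print(dbm_to_volts(0))         # ~0.224 V in a 50-ohm system (Table A.1)
print(dbm_to_dbmv(0))          # ~46.99 dBmV (A.9)
print(dbm_to_dbmv(0, 75.0))    # ~48.75 dBmV (A.10)
```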
Appendix B
VSWR, Return Loss, and Reflection Coefficient

Reflection coefficient (Γ), return loss (RL), and voltage standing wave ratio (VSWR) are related. The most commonly used parameter in device specifications, however, is VSWR. Using an S-parameter-based definition, the input and output reflection coefficients are defined [1] as follows:

\Gamma_{in} = S_{11} + \frac{S_{12} S_{21} \Gamma_{load}}{1 - S_{22} \Gamma_{load}}    (B.1)

or

\Gamma_{out} = S_{22} + \frac{S_{12} S_{21} \Gamma_{source}}{1 - S_{11} \Gamma_{source}}    (B.2)

If the DUT is impedance matched to the test equipment, then the reflection coefficients of the source and load become zero, and the input and output reflection coefficients of the DUT are simply the magnitude of S11 or S22 (referring to either the input reflection coefficient or output reflection coefficient, respectively):

\Gamma_{in} = S_{11}    (B.3)

or

\Gamma_{out} = S_{22}    (B.4)

The relationship between the reflection coefficient and return loss is

RL_{dB} = -20 \log_{10}\left|\Gamma\right|    (B.5)

The relationship between VSWR and the reflection coefficient is

\Gamma = \frac{VSWR - 1}{VSWR + 1}    (B.6)
and it therefore follows that VSWR is related to the reflection coefficient by

VSWR = \frac{1 + \left|\Gamma\right|}{1 - \left|\Gamma\right|}    (B.7)

The acronym VSWR is often pronounced as a word, "vis-war," rather than as the acronym that it is. Additionally, it is most often written as a ratio, relative to 1, as shown in Table B.1, which gives the relationship between VSWR, return loss, and reflection coefficient.

Table B.1
Converting Between VSWR, Return Loss, and Reflection Coefficient

VSWR       RL (dB)    Γ          VSWR       RL (dB)    Γ
1.001:1    66.025     0.0005     1.1:1      26.444     0.0476
1.002:1    60.009     0.0010     1.2:1      20.828     0.0909
1.003:1    56.491     0.0015     1.3:1      17.692     0.1304
1.004:1    53.997     0.0020     1.4:1      15.563     0.1667
1.005:1    52.063     0.0025     1.5:1      13.979     0.2000
1.006:1    50.484     0.0030     1.6:1      12.736     0.2308
1.007:1    49.149     0.0035     1.7:1      11.725     0.2593
1.008:1    47.993     0.0040     1.8:1      10.881     0.2857
1.009:1    46.975     0.0045     1.9:1      10.163     0.3103
1.01:1     46.064     0.0050     2.0:1      9.542      0.3333
1.02:1     40.086     0.0099     3.0:1      6.021      0.5000
1.03:1     36.607     0.0148     4.0:1      4.437      0.6000
1.04:1     34.151     0.0196     5.0:1      3.522      0.6667
1.05:1     32.256     0.0244     10.0:1     1.743      0.8182
1.06:1     30.714     0.0291     20.0:1     0.869      0.9048
1.07:1     29.417     0.0338     50.0:1     0.347      0.9608
1.08:1     28.299     0.0385     100.0:1    0.174      0.9802
1.09:1     27.318     0.0431
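The conversions of (B.5) through (B.7) are easy to verify numerically. The sketch below is a minimal illustration; the function names are arbitrary, and the example value is taken from Table B.1.

```python
import math

def gamma_from_vswr(vswr):
    return (vswr - 1.0) / (vswr + 1.0)       # (B.6)

def return_loss_from_gamma(gamma):
    return -20 * math.log10(gamma)            # (B.5)

def vswr_from_gamma(gamma):
    return (1 + gamma) / (1 - gamma)          # (B.7)

g = gamma_from_vswr(1.5)
print(f"VSWR 1.5:1 -> |Gamma| = {g:.4f}, RL = {return_loss_from_gamma(g):.3f} dB")
# Matches Table B.1: |Gamma| = 0.2000, RL = 13.979 dB
```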
Reference

[1] Agilent Technologies, "S-Parameter Design," Application Note 154, 2000.
Appendix C
RF Coaxial Cables
Four primary types of cables are used for RF signals: flexible, conformable, semirigid, and semiflexible [1, 2]. The most common electrical performance parameters to consider when choosing cables are insertion loss, VSWR, crosstalk (leakage), and maximum frequency of use. Mechanically, the primary consideration is to ensure that the cable provides the ability to conform to any tight bends that may be needed in routing of the cables. If sharp bends are encountered, the user should attempt to remedy this by using large, gentle looping of the cable because sharp bends cause signal degradation. Strain relief, such as cable sheathing and fastening, should be used.
C.1 Flexible Cable

Flexible cables are the most commonly used cable type for connecting a load board–mounted connector to the tester. Because they are flexible, they are especially good for applications that require many cables to be connected. It is important that the cable have a minimum bend radius of less than 0.5 inch; typically such a cable will have a diameter of less than 0.115 inch. When used in proximity to a handler or moving parts on the test head (i.e., docking hardware), excess exposed cable should be tied down to the load board.
C.2 Conformable Cable

Conformable-type cables are hand formable, but stiffer than flexible cable, and can be preformed to retain a shape. They typically have better electrical performance and lower insertion loss than does flexible cable. Because of their stiffness, it is more difficult to force multiple cables to fit into a small space.
C.3 Semirigid Cable

Semirigid coaxial cables are formed cables that can be used in tight spaces. They are precisely bent by the manufacturer and retain their shape. Although these are the best choice for maintaining signal integrity, the fact that they are custom fabricated translates to a higher cost cable and the possibility of extended lead times when ordering. The cable can be designed such that the cable connector ends are near their final positions without needing additional forming. (The cables can often be slightly manipulated to precisely meet the mating connector.) The solid conductor of these cables minimizes attenuation and VSWR, and the outer conductor almost eliminates signal leakage. The outer jacket is typically conductive. These cables are typically used with only SMA cable connectors and are not meant for many flexures.
C.4 Semiflexible Cable

Semiflexible cable is an alternative to semirigid cable. With its braided outer conductor, it allows repeated, easy positioning by hand. Its inner and outer construction provides electrical performance close to that of semirigid cable. The use of semiflexible cables can eliminate the long product delay times associated with custom-fabricated semirigid cables.
References

[1] Agilent Technologies, "DUT Board Design Guide," Part Number E7050-91037, Edition 5.1.0, 2005.

[2] Tensolite, "Standard RF/Microwave Cable Assemblies," Product Catalog, 2001.
Appendix D
RF Connectors
This appendix provides a descriptive overview of commonly used connectors in RF testing (production, rack-and-stack, and bench). Only a few of these, such as SMA and SMP, are practical for production test load boards. RF coaxial connectors are the most important element in the cable system. High-quality coaxial cables have the potential to deliver all of the performance a system requires, but they are often limited by the performance of the connectors. Impairments such as power loss, electrical noise, and intermodulation distortion—a major concern in today’s communications systems—are minimized by the design and manufacturing techniques of these connectors. Connectors generally come in both “male” and “female” sections. Higher quality RF connectors are even sometimes designed and manufactured in male–female pairs to obtain optimal performance. To minimize potential problems, it is necessary to keep the connectors in a test system clean (using a lint-free swab and rubbing alcohol, for example) and to ensure that any nuts on the connectors are tightened to the proper torque specification provided by the manufacturer [1].
D.1 Types of Connectors

D.1.1 BNC Connector
The Bayonet Neill Concelman (BNC) connector is a relatively low-frequency (dc to 4 GHz), general-purpose RF connector designed for use in 50-Ω and 75-Ω systems. Developed in the late 1940s as a miniature version of the Type C connector, the BNC connector is named after Amphenol engineer Carl Concelman. The BNC is a miniature quick connect/disconnect RF connector. It features two bayonet lugs on the female connector; mating is achieved with only a quarter turn of the coupling nut. BNC connectors usually have nickel-plated brass bodies, Teflon insulators, and either gold- or silver-plated center contacts. These low-cost connectors are typically available with die cast and molded components. The 50-Ω and 75-Ω units are different. The 75-Ω connector has less plastic surrounding the male pin. If placed side by side, the difference is obvious. The 75-Ω BNC is likely the most common connector in the video industry, whereas the 50-Ω BNC is probably the most common coaxial connector in test and measurement, whenever performance allows [personal communication with Miklos Kara, Agilent Technologies, 2006].
D.1.2 Type C Connector

The Type C (Concelman) connector is medium sized and weatherproof and designed to work up to 11 GHz in 50-Ω systems. The coupling is a two-stud bayonet lock. Type C connectors provide constant 50-Ω impedance, but may be used with 75-Ω cable at lower frequencies (below 300 MHz) where no serious mismatch is introduced.
D.1.3 Type N Connector

The Type N (Neill) connector is for use up to 11 GHz in a 50-Ω environment. It was named after Paul Neill of Bell Labs after being developed in the 1940s. The Type N connector offered the first true microwave performance.
D.1.4 SMA Connector

The subminiature version A (SMA) connector is one of the most commonly used connectors in RF and SoC test equipment. The SMA connector was developed in the 1960s and uses a threaded interface. These connectors are for use in a 50-Ω environment and provide excellent electrical performance up to 26.5 GHz. SMA connectors are available in both standard and reverse polarity. Reverse polarity is a keying system accomplished with a reverse interface, and it
ensures that reverse polarity interface connectors do not mate with standard interface connectors. Of the various types of RF connectors, SMA connectors are used when you have sufficient board space and connectors do not have to be very close together. They are also used when a reliable and stable mechanical connection is required. A vertical through-hole SMA should not be used when launching a trace to a microstrip on the same side of the board (as the launch). This will create a stub (microwave transmission line effect) from the via, degrading performance. Because SMA connectors are commonly used in production test equipment, it is worthwhile to point out a very useful tip. When mating a male SMA connector (the one with the pin in the center) to a female connector, make sure to spin only the collar on the male connector. Engineers often (and incorrectly) spin the female connector. This has the effect of prematurely mechanically wearing the contact point where the male pin enters the female receiver, resulting in reduced performance due to debris from plating and reduced conductivity due to oxidation.
D.1.5 SMB Connector

The subminiature B (SMB) connector is so named because it was the second subminiature connector to be designed. Developed in the 1960s, the SMB is a smaller version of the SMA with snap-on coupling. It is designed for both 50-Ω and 75-Ω impedances, and operation up to 10 GHz.
D.1.6 SMC Connector

The subminiature C (SMC) connector is so named because it was the third subminiature connector to be designed. It has a threaded coupling with a 10-32 thread. It is designed for both 50-Ω and 75-Ω impedances, and operation up to 10 GHz.
D.1.7 SMP Connector

The subminiature P (SMP) connector is a recent connector type with extremely small dimensions that can operate at frequencies up to 40 GHz. It is often used for PC board–to–PC board interconnections. This lends itself to use in RF testing applications on load boards and has recently become more prevalent for exactly that. The design of the body of the connector allows it to be routed through PC and load boards, eliminating the need to design complex vias on multilayer boards. The SMP is not as mechanically robust as the SMA, but its size provides an advantage in dense signal routing applications. The connector is often attached to a microstrip trace on the load board.
SMP connectors are not as reliable and robust as SMA connectors because the center pin can break, but they have good performance and take up much less load board space. Damage often results from inserting or extracting cables at an angle and not straight in or out. That is why it is recommended that an SMP extraction tool always be used to ensure proper insertion/extraction. SMP male connectors have three available detent (insertion) levels (depths). Limited detent is recommended for applications with occasional connecting operations (<500). Smooth bore is used with frequent connecting operations (500 to 1,000), but sometimes the cable may come out unintentionally. Full detent is not recommended because the insertion and extraction forces are so great that the solder joint can often be broken or in the worst case, the connector can be torn from the load board [personal communication with Robert Lee, Agilent Technologies, 2005].
D.1.8 TNC Connector
The Threaded Neill Concelman (TNC) connector is designed to operate to 11 GHz in a 50-Ω system. It was developed in the late 1950s and named after Amphenol engineer Carl Concelman. Designed as a threaded version of the BNC, the TNC series features screw threads for mating. Its geometry makes it compatible for mating with Type N connectors, but this is not recommended because the slight differences create impedance differences at the interface, causing signal reflections. TNC connectors are available in both standard and reverse polarity. Reverse polarity is a keying system accomplished with a reverse interface, and it ensures that reverse polarity interface connectors do not mate with standard interface connectors.
D.1.9 UHF Connector
The UHF connector is designed to operate only to 300 MHz, but at any impedance. This is one of the oldest RF connectors, developed in the 1930s by an Amphenol engineer named E. Clark Quackenbush. Invented for use in the radio industry, UHF is an acronym for ultra-high-frequency because at the time 300 MHz was considered high frequency. UHF connectors have a threaded coupling. In the 1970s, a miniature version of the UHF connector that operates at up to 2.5 GHz in a 50-Ω environment was introduced. Today, they are often used in mobile phones and in automotive systems, or other places where size, weight, and cost factors are critical. As an interesting piece of trivia, the center conductor socket of the UHF connector can accept a standard “banana” jack.
D.1.10 7/16 DIN Connector
This is a newer style RF connector designed by Deutsche Industries Norm (DIN) that operates to 5.2 GHz. The name is derived from its coaxial geometry. The “7” refers to the inner conductor’s outer diameter, and the “16” refers to the inner diameter of the outer conductor. The design of this connector provides very low (good) intermodulation distortion performance.
D.2 Tightening RF Connectors Most RF (threaded) connectors require tightening to a specified torque value to achieve guaranteed electrical performance. This is done using a torque wrench. Many manufacturers of cables and connectors offer these wrenches. Table D.1 provides torque values for common RF connectors. Some of the styles have a range of values provided. This is because the torque to be applied depends on the type of material (i.e., brass, stainless, beryllium-copper) used on the connector. Table D.1 RF Connector Performance and Torque Values
Connector
Maximum Torque Frequency (GHz) (in.-lb)
Torque (N-m)
APC 2.4 (2.4 mm)
50
8
0.9
APC 3.5 (3.5 mm)
34
8
0.9
APC 7 (7 mm)
18
12
1.4
4
N/A (bayonet)
N/A (bayonet)
C
11
N/A (bayonet)
N/A (bayonet)
N
18
12–15
1.4–1.7
SMA
26.5
7–10
0.8–1.13
SMB
10
N/A (snap on)
N/A (snap on)
SMC
10
3–5
0.34–0.57
SMP
40
N/A (snap on)
N/A (snap on)
BNC
SSMA
26.5
7–10
0.8–1.13
TNC
18
12–15
1.4–1.7
UHF
0.3
15
1.7
7/16 DIN
5.2
220–250
25–30
Source: [2–4].
References

[1] Hewlett Packard, "Coaxial Systems, Principles of Microwave Connector Care (for Higher Reliability and Better Measurements)," Application Note 326, 1988.

[2] Amphenol Corporation, 2003.

[3] Johnson Components, "RF Connector Application Guide," Part Number JCI 161, 1999.

[4] Tensolite, "Standard RF/Microwave Cable Assemblies," Product Catalog, 2001.
Appendix E
Decimal to Hexadecimal and ASCII Conversions

ASCII is an acronym for American Standard Code for Information Interchange. This is a character encoding system that uses a seven-bit code to represent characters through numbers. Because there are seven bits, there are 2^7, or 128, characters (0 through 127). Although this system was created many years ago, it is still used today in numerous applications. In the area of production testing, the first eight ASCII characters are commands that can be found in some driver interface communications for handlers and wafer probers. Table E.1 provides the mapping of characters per the ASCII standard along with the decimal and hexadecimal equivalents.
Table E.1
Decimal, Hexadecimal, and ASCII Equivalents

Dec  Hex   ASCII   Dec  Hex   ASCII   Dec  Hex   ASCII   Dec  Hex   ASCII
0    0x00  nul     32   0x20  sp      64   0x40  @       96   0x60  `
1    0x01  soh     33   0x21  !       65   0x41  A       97   0x61  a
2    0x02  stx     34   0x22  "       66   0x42  B       98   0x62  b
3    0x03  etx     35   0x23  #       67   0x43  C       99   0x63  c
4    0x04  eot     36   0x24  $       68   0x44  D       100  0x64  d
5    0x05  enq     37   0x25  %       69   0x45  E       101  0x65  e
6    0x06  ack     38   0x26  &       70   0x46  F       102  0x66  f
7    0x07  bel     39   0x27  '       71   0x47  G       103  0x67  g
8    0x08  bs      40   0x28  (       72   0x48  H       104  0x68  h
9    0x09  ht      41   0x29  )       73   0x49  I       105  0x69  i
10   0x0A  nl      42   0x2A  *       74   0x4A  J       106  0x6A  j
11   0x0B  vt      43   0x2B  +       75   0x4B  K       107  0x6B  k
12   0x0C  np      44   0x2C  ,       76   0x4C  L       108  0x6C  l
13   0x0D  cr      45   0x2D  -       77   0x4D  M       109  0x6D  m
14   0x0E  so      46   0x2E  .       78   0x4E  N       110  0x6E  n
15   0x0F  si      47   0x2F  /       79   0x4F  O       111  0x6F  o
16   0x10  dle     48   0x30  0       80   0x50  P       112  0x70  p
17   0x11  dc1     49   0x31  1       81   0x51  Q       113  0x71  q
18   0x12  dc2     50   0x32  2       82   0x52  R       114  0x72  r
19   0x13  dc3     51   0x33  3       83   0x53  S       115  0x73  s
20   0x14  dc4     52   0x34  4       84   0x54  T       116  0x74  t
21   0x15  nak     53   0x35  5       85   0x55  U       117  0x75  u
22   0x16  syn     54   0x36  6       86   0x56  V       118  0x76  v
23   0x17  etb     55   0x37  7       87   0x57  W       119  0x77  w
24   0x18  can     56   0x38  8       88   0x58  X       120  0x78  x
25   0x19  em      57   0x39  9       89   0x59  Y       121  0x79  y
26   0x1A  sub     58   0x3A  :       90   0x5A  Z       122  0x7A  z
27   0x1B  esc     59   0x3B  ;       91   0x5B  [       123  0x7B  {
28   0x1C  fs      60   0x3C  <       92   0x5C  \       124  0x7C  |
29   0x1D  gs      61   0x3D  =       93   0x5D  ]       125  0x7D  }
30   0x1E  rs      62   0x3E  >       94   0x5E  ^       126  0x7E  ~
31   0x1F  us      63   0x3F  ?       95   0x5F  _       127  0x7F  del
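When writing handler or prober driver code, these conversions rarely need to be looked up by hand. The minimal sketch below simply uses Python's built-in ord, chr, and hex functions; the specific codes shown are examples picked from Table E.1.

```python
# Decimal <-> hexadecimal <-> character conversions as listed in Table E.1.
for code in (0x06, 0x41, 0x7A):
    print(code, hex(code), repr(chr(code)))
# 6   0x6   '\x06'  (ack, one of the control characters used by handler drivers)
# 65  0x41  'A'
# 122 0x7a  'z'

print(ord("A"), hex(ord("A")))   # 65 0x41
```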
Appendix F
Numerical Prefixes

Prefix    Symbol    Factor
exa       E         × 10^18 (quintillion)
peta      P         × 10^15 (quadrillion)
tera      T         × 10^12 (trillion)
giga      G         × 10^9 (billion)
mega      M         × 10^6 (million)
kilo      k         × 10^3 (thousand)
hecto     h         × 10^2 (hundred)
deka      da        × 10^1 (ten)
deci      d         × 10^-1 (tenth)
centi     c         × 10^-2 (hundredth)
milli     m         × 10^-3 (thousandth)
micro     µ         × 10^-6 (millionth)
nano      n         × 10^-9 (billionth)
pico      p         × 10^-12 (trillionth)
femto     f         × 10^-15 (quadrillionth)
atto      a         × 10^-18 (quintillionth)
About the Authors Joe Kelly received a B.S. in electrical engineering, an M.S. in ceramic engineering, and a Ph.D. in ceramic and materials and ceramic engineering from Rutgers University. His graduate work focused on engineering and modeling of electromechanical properties and loss mechanisms in piezoelectric ceramic resonators. His graduate work also consisted of an internship at the Army Research Laboratory, Fort Monmouth, New Jersey, and the design and implementation of numerous electrical and physical characterization tests of electroacoustic and piezoelectric materials. He has worked for Siemens, now Epcos, as a SAW filter design engineer. Dr. Kelly has worked in the area of production testing of RF and mixed-signal devices for seven years with Hewlett Packard, Agilent, and now Verigy. He is currently a senior engineer in Verigy’s Wireless Center of Expertise. He is the coauthor of Production Testing of RF and System-on-a-Chip Devices for Wireless Communications (Artech House, 2004). He can be reached at
[email protected]. Michael Engelhardt studied electrical engineering in Ulm, Germany, and Lille, France, and graduated with a “Diplom,” corresponding to the MSEE degree. His thesis focused on design and realization of 10-GHz low-noise amplifiers. He started his engineering career in R&D as a design engineer for microwave systems at Daimler-Benz Aerospace, now EADS, before joining Texas Instruments in Munich, Germany, and Dallas, Texas, as an RF product and test engineer. While working for Texas Instruments, he received an M.B.A. from the University of Dallas. He joined Agilent Technologies, now Verigy, in 2000 where he worked with various customers on RF and mixed-signal related projects as an applications engineer. In 2006, Mr. Engelhardt was presented with the Agilent 289
Presidents Club award for excellence in applications engineering. He has presented his work at various international conferences, including “Investigation and Verification of RF De-Embedding Methods to Perform Accurate On-Wafer Measurements” (Semicon Singapore, 2002), “A Method to Match ATE Board to Evaluation Board Characteristics for RF Devices” (Semicon Europe, 2004), and “Challenges and Cost of Test Considerations in Multi-Site WLAN Testing” (IEEE Wireless Symposium, 2004). He currently lives in San Jose, California, and can be reached at
[email protected].
Index

1/f noise, 64–65
  defined, 64
  PSD, 62
  See also Noise
12-term error correction model, 170

A
Ac calibration, 166
Adjacent channel power ratio (ACPR), 254
Alignment, 191
Amplitude shift keying (ASK), 120
Analog calibration, 168–69
Arbitrary waveform generators (AWGs), 126–29
  baseband calibration, 161, 168–69
  capability, 128
  dc offset circuitry, 126
  defined, 126
  files, creating, 128–29
  overview, 126–28
  signal modulation, 128
  See also Production test equipment
ASCII conversion, 283–85
Automated test equipment (ATE), 3, 4
  analog waveform instrumentation, 125
  communication interface standards summary, 134
  DSP with, 131
  hardware, communicating with, 131–36
  SoC, 109
Automatic gain control (AGC), 114
Auto-unloaders, 218

B
Bandpass filter (BPF), 114
Bandwidth
  considerations, 81–82
  RF, 285
  of test cell/system, 185
Baseband AWG calibration, 161
Baseband digitizer calibration, 161
Bayonet Neill Concelman (BNC) connector, 278
Binning, 216–17
Bipolar-junction-transistor (BJT), 118, 119
Blind vias, 246
Built-in self-tests (BISTs), 10, 102–3
  algorithm, 103
  focus, 102
  implementation, 102
  implementation in SoC ZIF transceiver, 103
  transmitter RF frequency measurement, 104
Buried vias, 246
C Cables, 245 conformable, 276
291
292
Advanced Production Testing of RF, SOC, and SiP Devices
Cables (continued) flexible, 275 RF coaxial, 275–76 semiflexible, 276 semirigid, 276 Calibration, 8–9, 159–73 ac, 166 analog, 168–69 AWG, 161, 168–69 with calibration substrates, 263 dc level, 165–66 de-embedding, 170–72 device power supply, 161 digital, 161, 165–68 digitizer, 161, 169 DPS, 164–65 drive-to-receive, 166–67 external standards, 164 fixed delay, 167–68 focused, goal, 162 internal standards, 163 LRM, 263 methods, 161–64 noise figure, 161, 172–73 overview, 159–64 power, 161 procedures, 164–73 RF, 161, 169–73 scalar loss, 262 SOLT, 170 S-parameter-based, 262–63 standards, 163 vector, 161 for wafer probing measurements, 262–63 wide bandwidth devices, 106 Cantilever contactor, 178–79 Cantilever needle probes, 256–57 Capacitance, 182–83 Capacitors, 239–42 ceramic, 239–40 common, 240 defined, 239 dielectric constant, 239 electrolytic, 240 equivalent series resistance, 240–42 Cascaded noise figure, 68–69 Ceramic capacitors, 239–40 Characteristic impedance, 186–87 Characterization testing, 3
Chip scale packages (CSPs), 202 Code division multiple access (CDMA), 18 Cold noise method, 72–73 measuring with, 77–78 steps, 77–78 Y-factor method versus, 77 See also Noise figure measurement Complementary code keying (CCK), 17–18 Conformable cable, 276 Connectors, 244–45 BNC, 278 DIN, 281 edge-mount, 245 RF, 277–81 for RF signal, 245 SMA, 278–79 SMB, 279 SMC, 279 SMP, 244, 279–80 surface-mount, 245 through-board, 245 tightening, 281 TNC, 280 Type C, 278 Type N, 278 UHF, 280 Contactors, 6–7, 175–99 bodies, 187 cantilever, 178–79 cleaning, 198–99 cost considerations, 199 critical characteristics, 177 defined, 6, 175 elastomer/interposer, 178 electrical properties, 180–88 identification, 177 illustrated, 176, 179 introduction to, 175–77 maintenance and inspection, 198–99 manual hold-downs, 199 mechanical properties, 191–94 as part of test system, 176 performance design, 177 properties, 180–95 short rigid, 179 spring pin, 178 technologies, 6–7 thermal properties, 188–91 type summary, 179–80
Index use, 177 Contact resistance, 181 Contact spring, 193 Continuous-wave (CW) signals, 16–17 Conversion/changeover kits, 218–19 Conversion compression, 56 Coplanar probes, 257–58 Cost of testing (COT), 97, 139–57 accuracy and guardbands, 153–56 as critical factor, 139 depreciation of test system, 141–42 development test program, 143 floor space, 142 handler/probe index time, 142 hardware, 143 introduction to, 139–41 operators/test floor personnel, 142–43 overhead, 143 parameters, 141–43 percentage, 140 shifts/hours per shift, 141 shipping/customs duty taxes, 143 with simple-site testing, 140 summary, 156–57 support, 143 with test houses, 152–53 utilization, 141 yield, 141 See also COT models COT models, 143–45 multisite, 148–51 parameters, 144 ping-pong, 146–48 variables, 151–52 See also Cost of testing (COT) Cross modulation (XMOD), 53–54 defined, 53 measurement, 54 Crosstalk coupling and, 237 electrical, 183 mechanical, 193–94 Current-carrying capacity, 189–90 D dc level calibration, 165–66 Decibels, 265 Decimal to hexadecimal conversion, 283–85 De-embedding calibration, 170–72 defined, 170
293
with in-socket standards, 171–72 methods, 170 methods comparison, 172 with network analyzer, 171 scalar offsets, 171 See also Calibration Delay line discriminator method, 93–94 Design-for-testability (DFT) structures, 101 Deutsche Industries Norm (DIN) connector, 281 Device input/output, 215–18 binning, 216–17 loading/unloading, 218 Device power supplies (DPS), 237–38 calibration, 161 defined, 237 filtering, 238 routing design, 237 Device under test (DUT), 2 contactors, 6 to DUT impedance variations, 68 impedance matched, 271 interface board (DIB), 6 noise factor, solving for, 76 SoC, 109 Dielectric constant, 239 Digital calibration, 161, 165–68 ac, 166 dc level, 165–66 drive-to-receive, 166–67 fixture delay, 167–68 See also Calibration Digital channels components, 122–24 pin electronics, 123 Digital signal processing (DSP) with ATE, 131 as powerful computation engine, 130 in production test equipment, 130–31 Digital-to-analog-converters (DACs), 122 Digitizers, 125–26 calibration, 161, 169 components, 126 dynamic range, 125 performance parameters, 127 Direct docking, 262 Distortion cross modulation (XMOD), 53–54 gain compression, 36, 54–56
294
Advanced Production Testing of RF, SOC, and SiP Devices
Distortion (continued) harmonic, 36, 38–42 intermodulation, 36, 42–48 linearity and, 36 measurements, minimizing number of averages, 56 SIMD, 52–53 in SoC devices, 36 Divider measurements, 25–26 Docking defined, 260 direct, 262 hard, 261 soft, 260–61 Downconversion, using variable LO with fixed filter, 121 Drive-to-receive calibration, 166–67 defined, 166 illustrated, 167 See also Calibration Dual-inline package (DIP), 139, 202 DUT interface board (DIB), 6, 221 E Edge placement accuracy (EPA), 166 Effective noise temperature, 70 Elastomer/interposer contactor, 178 Electrical, load boards, 231–46 Electrical length, 184–85 Electrical properties bandwidth, 185 capacitance, 182–83 characteristic impedance, 186–87 contact resistance, 181 electrical crosstalk, 183 electrical length, 184–85 electrostatic discharge (ESD), 188 equivalent circuit model, 187–88 grounding, 183–84 inductance, 181–82 insertion loss (IL), 185–86 return loss (RL), 186 See also Contactors Electrolytic capacitors, 240 Electrostatic discharge (ESD), 188 Engineering design and analysis (EDA) tools, 101 Equipment error, 79–80 Equivalent circuit model, 187–88 Equivalent series resistance (ESR), 240–42
properties, 242 resonant frequency, 241 Error vector magnitude (EVM), 18, 98 Evaluation boards, 250 Excess noise ratio (ENR), 70–71 defined, 70 noise sources, 71 External interfering signals, 81 F False negatives, 156 Fast Fourier transform (FFT), 125 Field-effect transistor (FET), 118 Fixture delay calibration, 167–68 defined, 167 illustrated, 168 TDR measurement, 167, 168 See also Calibration Flexible cable, 275 Focused calibration, 162 Forward transmission coefficient, 22 Fractional-N synthesizer, 124 Frequency shift, 85 G Gain compression, 36, 54–56 compression points, 55 conversion, 56 defined, 54 Gain measurement, 24 General-purpose interface bus (GPIB), 131–32 data rates, 131–32 defined, 131 Glass transition temperature, 226–28 Gravity-feed handlers, 5, 202 defined, 202 illustrated, 203 See also Handlers Grounding, 183–84 load board, 234–37 multisite, 236–37 separate grounds, 234–35 star, 236 H Handlers, 5–6, 201–19 considerations, 195–96 defined, 5 footprint, 215
gravity-feed, 5, 202 input/output characteristics, 217 introduction to, 201–2 pick-and-place, 5–6, 202–3 strip test, 204–5 tester communication, 201 throughput, 207–12 turret, 203–4 types, 202–5 types, choosing, 205–7 Hard docking, 261 Hard stops, 193 Harmonic distortion, 36, 38–42 measurements, 40–42 occurrence, 39 product examples, 46–47 products, 39 SINAD, 41–42 total, 40–41 See also Distortion Harmonic measurements, 30–31 Harmonics, 39 High-frequency plunge-to-board, 214 I IC power detectors, 114–21 BJT, 119 broadband, without selectable filter, 120 broadband, with selectable filter, 120–21 circuit operation, 115–20 downconversion, 121 logarithmic amplifier, 119–20 overview, 114–15 quantitative comparison, 120 single-balanced diode, 115–18 transistor, 118–19 types, 120–21 Impedance characteristic, 186–87 control, 259 thermal, 190–91 trace, 225 Index time, 209–12 by handler technology, 210 multisite, 210 testing in ping-pong mode, 210–12 Inductance, 181–82 defined, 181 probes, 260 Inductors, 243
Input power, output power versus, 54 Input-referred compression point, 55 Input reflection coefficient, 22 Insertion force, 192–93 Insertion loss (IL), 185–86 Intercept point general calculation, 49–50 output-referencing, 50–51 second-order (IP2), 48 third-order (IP3), 48 Interface analyzer, 194 Intermodulation distortion, 36, 42–48 defined, 42 higher-order products, 46 measurements, 48–52 product examples, 46–47 products of ZIF receiver, 48 second-order, 42–44 source (SIMD), 52–53 third-order, 44–46 See also Distortion Inverse fast Fourier transform (IFFT), 130 J Jams, 192 Jitter, 82 K Keep-out areas, 247 Known-good die (KGD), 7, 99, 253 L LAN eXtensions for Instruments (LXI), 134–36 advantage, 135 defined, 134 devices, 135 key parameters, 135 system expansion, 136 Leadless chip carrier (LCC), 202 Linearity, 36 Load boards, 221–50 assembly, 222 with board stiffener, 196 cables, 245 capacitors, 239–42 component layout, 250 components, 238–44 connectors, 244–45 considerations, 195
Load boards (continued) contacting device to, 214 defined, 6 device power supplies, 237–38 electrical, 231–46 evaluation board, 250 grounding, 234–37 hybrid, 230–31 inductors, 243 introduction to, 221–23 keep-out areas, 247 layers, 229–30 material properties, 223–29 materials, 223–31 material selection, 229 mechanical design considerations, 246–47 multilayer PCB with contactor, 222 passive component sizes, 243 probe points, 249 proper trace design, 234 reference designators, 249–50 relays, 243–44 resistors, 242–43 schematic/layout reviews, 250 signal routing/traces, 231–34 thermal design considerations, 248 time domain reflectometry (TDR), 248–49 verification, 248–49 vias, 245–46 Loading devices, 218 Logarithmic amplifier detector, 119–20 Loss tangent, 228 Low-noise amplifiers (LNAs), 11 LRM (line-reflect-match) calibration, 263 M Manual hold-downs, 199 Materials, load boards, 223–31 glass transition temperature, 226–28 layers, 229–30 loss tangent, 228 moisture absorption, 228–29 nonwoven, 223 properties, 223–29 reinforcement, 223 relative dielectric constant, 224–26 stackup, 230 test engineer role, 229 woven, 223
Mean time between failure (MTBF), 141 Mean time to repair (MTTR), 141 Mechanical crosstalk, 193–94 Mechanical properties, 191–94 alignment, 191 contact spring, 193 hard stops, 193 insertion force/overtravel, 192–93 interface analyzers, 194 jams, 192 mechanical crosstalk, 193–94 wobble, 192 See also Contactors Membrane probes, 258 Miniature small outline package (MSOP), 202 Mismatch error, 80 Moisture absorption, 228–29 Multichip module (MCM), 253 Multisite grounding, 236–37 Multisite index time, 210 Multisite testing, 9, 148–51 goal, 148 parameters comparison, 149, 151 test cost per device comparison, 151, 152 variables, 151–52 N Network analyzer, de-embedding with, 171 NIST standards, 153, 160 Noise, 59–94 1/f, 64–65 factor, 66 introduction to, 59–66 phase, 82–94 plasma, 65 power density, 69 quantization, 65 quantum, 65 random, 66 shot, 64 on sinusoidal waveform, 84 sources, 69 thermal, 61–64 types, 61–65 Noise figure measurement, 72–73 cold noise method, 72–73, 77–78 direct method, 73 equipment error, 79–80 error calculation, 79
on frequency-translated devices, 78–79 mismatch error, 80 Y-factor method, 73–77 Noise figure (NF), 11, 66–82 calibration, 161, 172–73 cascaded, 68–69 defined, 66–68 mathematically calculating, 72 Noise floor, 65–66 defined, 65 DUT signal in relation to, 66 effects, 89 Noise temperature, 69–70 defined, 69–70 effective, 70 Nonrecurring engineering (NRE), 258 Numerical prefixes, 287 O Operators, 142–43 Orthogonal frequency division multiplexing (OFDM), 18, 53, 104 Output power, input power versus, 54 Output-referred compression point, 55 Overall equipment effectiveness (OEE), 196–98 contactor and, 197–98 defined, 196 Overhead, 143 P Parametric measuring units (PMUs), 121–24 components, 122–24 use, 121 Passive component sizes, 243 Peripheral Component Interconnect eXtensions for Instrumentation (PXI), 133–34 Peripherals, 4–7 contactors, 6–7 handlers, 5–6 load boards, 6 test floor/test cell, 4–5 wafer probers, 7 Per-pin parametric measurement unit (PPMU), 121, 124 Phase jitter, 86–87 Phase-locked loops (PLLs) circuits, 25 defined, 15
settling time, 26–27 testing, 15–16 See also PLL measurements Phase noise, 82–94 defined, 84–86 of fast-switching RF signal sources, 93 in frequency domain, 85 introduction to, 82–84 phase jitter, 86–87 spectral density-based definitions, 86 of tester, 90 thermal effects, 87 See also Noise Phase noise measurement critical items, 88 with delay line discriminator method, 93–94 with different hardware settings, 92 example, 91–93 high-power, 87 incorrect, 91 low-power, 87 making, 88–90 with spectrum analyzer, 90–91 trade-offs, 88 Pick-and-place handlers, 5–6, 202–3 defined, 202–3 illustrated, 204 package types, 203 Ping-pong testing, 146–48, 210–12 parameters comparison, 148 single-site testing comparison, 147 test cost per device comparison, 147–48 variables, 151–52 Plasma noise, 65 PLL measurements, 24–27 divider, 25–26 settling time, 26–27 VCO gain, 26 Power-added efficiency (PAE), 31–32 Power amplifiers (PAs) defined, 11 testing, 11–12 Power measurements, 27–31 harmonic, 30–31 RF output, 28–29 spectral mask, 31 spur, 29–30
Power spectral density (PSD), 59–61 1/f noise, 62 for Gaussian distributed signal, 60 shot noise, 62 thermal noise, 62 Power/voltage conversions, 265–69 Probe cards, 255–60 cantilever needle, 256–57 coplanar, 257–58 decoupling/current limitations, 259–60 defined, 255 frequency range, 258–59 impedance control, 259 inductance, 260 membrane, 258 number of pins, 259 parameters, 256 selecting, 258–60 types, 256–58 See also Wafer probing Probe points, 249 Prober index time, 142 Production test equipment, 109–36 arbitrary waveform generators (AWGs), 126–29 with digital channels and PMU, 121–24 digitizers, 125–26 DSP in, 130–31 IC power detectors, 114–21 introduction to, 109–10 RF receivers, 110–14 Production test fixturing, 80–81 Production testing, 2–3 defined, 2 outsourcing, 9–10 peripherals, 4–7 Production test systems, 3–4 ATE, 4 rack-and-stack, 3–4 Programmable gain amplifier (PGA), 126
Q Quad flat pack (QFP), 203 Quantization noise, 65 Quantum noise, 65
R Rack-and-stack systems, 3–4 Radio-frequency. See RF devices Random errors, 8, 159
Rayleigh-Jeans approximation, 63 Receivers testing, 14–15 tuned RF, 110–14 Reference designators, 249–50 Reflection coefficient, 271 return loss relationship, 272 VSWR and, 272 Relative dielectric constant, 224–26 Relays, 243–44 types, 244 uses, 243 See also Load boards Resistors, 242–43 Return loss (RL), 186, 271 characterization, 186 reflection coefficient relationship, 272 Reverse transmission coefficient, 22 RF calibration, 161, 169–73 de-embedding, 170–72 SOLT, 170 source and measurement, 169–70 RF connectors, 277–81 tightening, 281 types, 278–81 RF devices, 1 low-noise amplifiers, 11 power amplifiers, 11–12 testing, 10–18 testing advances, 97–107 RF measurements, 21–32 accuracy, 154 distribution, 154, 155 output power, 28–29 PAE, 31–32 PLL, 24–27 power, 27–31 S-parameters, 21–24 yield as function of uncertainty, 156 RF wafer probing, 99, 254
S Scalar loss calibration, 262 Scalar offsets, 171 Second-order intercept point (IP2), 48 Second-order intermodulation distortion, 42–44 Semiflexible cable, 276 Semirigid cable, 276 Shipping/customs duty taxes, 143
Short rigid contactor, 179 Shot noise, 64 defined, 64 PSD, 62 See also Noise Signal, noise, and distortion (SINAD), 41–42 Signal routing, 231–34 Signal-to-noise degradation, 67 Signal-to-noise ratio (SNR) calculation, 114 Single-balanced diode detector, 115–18 characteristic curves, 117 construction, 116 frequency information and, 116 quantitative comparison, 118 See also IC power detectors SiP devices, 1 architecture comparison, 99–100 testing, 10–18 Slew time, 212–13 SMA connector, 278–79 Small outline integrated circuit (SOIC), 202 SoC devices, 1 architecture comparison, 99–100 ATE, 109 distortion in, 36 DUT, 109 testing, 10–18, 23–24 testing advances, 97–107 Soft docking, 260–61 SOLT calibration, 170 Source intermodulation distortion (SIMD), 52–53 S-parameters, 21–24 application in SoC testing, 23–24 calibration, 262–63 defined, 21–22 gain measurement, 24 input match, 23–24 magnitude-phase notation, 23 output match, 24 real-imaginary notation, 23 for two-port device, 22 Spectral mask measurements, 31 Spectrum analyzer, phase noise measurement with, 90–91 Spring pin contactor, 178 Spurs creation, 29 measurements, 29–30
in transmitting spectral output, 30 Stackup, 230 Strip test handlers, 204–5 defined, 204 illustrated, 206 requirements, 205 See also Handlers Subminiature C (SMC) connector, 279 Subminiature P (SMP) connector, 279–80 Superheterodyne architecture, 12, 13 Systematic errors, 8, 159 correction, 8, 159 defined, 8 System-in-a-package. See SiP devices System-level testing, 98–99 System-on-a-chip. See SoC devices T Temperatures glass transition, 226–28 handler design considerations, 213–14 heating/cooling methods, 213 slew time and, 212–13 soaking of devices and, 213 testing and, 212–14 Test cells, 4–5 Testers interface plane, 215 wafer probe interface, 260–62 Test floor defined, 4 personnel, 142–43 shifts, 141 space, 142 Test houses COT considerations, 152–53 defined, 9 guaranteed volume/usage, 153 tester availability, 153 Testing characterization, 3 modern standards, 16–18 multisite, 9, 148–51 new methodologies, 105 ping-pong, 146–48, 210–12 PLLs, 15–16 production, 2–3, 9–10 receivers, 14–15 RF low-noise amplifiers, 11 system-level, 98–99
Testing (continued) temperatures, 212–14 transmitters, 15 VCOs, 15–16 wide bandwidth device, 104–6 Test programs, 7–8 defined, 7 development, 143 Test sequence, 8 Test sockets, 176 Test systems architecture, 103–4 capabilities, 103 challenge, 104 depreciation, 141–42 Test time, 142 Thermal noise, 61–64 defined, 61 PSD, 62 See also Noise Thermal properties, 188–91 current-carrying capacity, 189–90 impedance, 190–91 See also Contactors Thermal soaking defined, 213 flow diagram, 214 Third-order intercept point (IP3), 11, 48, 52 Third-order intermodulation distortion, 44–46 Threaded Neill Concelman (TNC) connector, 280 Through-hole vias, 246 Throughput, 207–12 criteria, 209 defined, 207 index time, 209–12 number of sites, 208 Time-division multiple access (TDMA), 120 Time-domain reflectometry (TDR) load boards, 248–49 measurement, 167, 168 Time to market (TTM), 130 Total harmonic distortion (THD), 40–41 defined, 40 measurements, 41 Total productive maintenance (TPM) program, 196
Traces defined, 231 high-speed digital design, 232 mixed-signal design, 232 proper design, 234 RF design, 232–33 Transceivers, 12–14 architecture illustration, 13–14 defined, 12 superheterodyne architecture, 12, 13 zero-IF (ZIF) architecture, 12–14 Transfer function, semiconductor devices, 37–38 Transistor power detectors, 118–19 Transmitters, testing, 15 True plunge-to-board, 214 Tuned RF receivers, 110–14 benchtop comparison, 111–12 dynamic range, 112 hardware requirements, 113 parameters, 112–14 sensitivity, 112 signal separation devices, 111–12 utilizing digitizer, 110–11 Turret handlers, 203–4 defined, 203–4 illustrated, 205 Type C connector, 278 Type N connector, 278 U UHF connector, 280 Ultra-wideband (UWB), 104 Unloading devices, 218 Utilization, 141 V Variable gain amplifiers (VGAs), 114 Verification, load boards, 248–49 Vias, 245–46 blind, 246 buried, 246 defined, 245 through-hole, 246 See also Load boards VMEbus eXtensions for Instrumentation (VXI), 132–33 chassis, 132 controller configurations, 133 defined, 132
throughput and, 132 Voltage controlled oscillators (VCOs) measurements, 26 testing, 15–16 Voltage standing wave ratio (VSWR), 271 acronym pronunciation, 273 reflection coefficient relationship, 272–73 W Wafer probing, 7, 253–63 calibration methods, 262–63 defined, 7 direct docking, 262 hard docking, 261 probe cards, 255–60 RF, 254 soft docking, 260–61 tester interface, 260–62 yield of MCM justification, 254–55 Wide bandwidth devices, 104–6
calibration, 106 new test methodologies, 105 testing, 104–6 Wobble, 192 Worldwide Interoperability for Microwave Access (WiMAX), 18 Y Y-factor, 71–72 measurement, 73–77 measurement procedure, 75 measurement setup, 74, 75 Yield, 141 as function of measurement uncertainty, 156 of MCM, 254–55 Z Zero-IF (ZIF) architecture, 12–14
Recent Titles in the Artech House Microwave Library Active Filters for Integrated-Circuit Applications, Fred H. Irons Advanced Production Testing of RF, SoC, and SiP Devices, Joe Kelly and Michael Engelhardt Advanced Techniques in RF Power Amplifier Design, Steve C. Cripps Automated Smith Chart, Version 4.0: Software and User's Manual, Leonard M. Schwab Behavioral Modeling of Nonlinear RF and Microwave Devices, Thomas R. Turlington Broadband Microwave Amplifiers, Bal S. Virdee, Avtar S. Virdee, and Ben Y. Banyamin Classic Works in RF Engineering: Combiners, Couplers, Transformers, and Magnetic Materials, John L. B. Walker, Daniel P. Myer, Frederick H. Raab, and Chris Trask, editors Computer-Aided Analysis of Nonlinear Microwave Circuits, Paulo J. C. Rodrigues Design of FET Frequency Multipliers and Harmonic Oscillators, Edmar Camargo Design of Linear RF Outphasing Power Amplifiers, Xuejun Zhang, Lawrence E. Larson, and Peter M. Asbeck Design of RF and Microwave Amplifiers and Oscillators, Pieter L. D. Abrie Distortion in RF Power Amplifiers, Joel Vuolevi and Timo Rahkonen EMPLAN: Electromagnetic Analysis of Printed Structures in Planarly Layered Media, Software and User’s Manual, Noyan Kinayman and M. I. Aksun FAST: Fast Amplifier Synthesis Tool—Software and User’s Guide, Dale D. Henkes Feedforward Linear Power Amplifiers, Nick Pothecary Generalized Filter Design by Computer Optimization, Djuradj Budimir High-Linearity RF Amplifier Design, Peter B. Kenington
High-Speed Circuit Board Signal Integrity, Stephen C. Thierauf Integrated Circuit Design for High-Speed Frequency Synthesis, John Rogers, Calvin Plett, and Foster Dai Intermodulation Distortion in Microwave and Wireless Circuits, José Carlos Pedro and Nuno Borges Carvalho Lumped Elements for RF and Microwave Circuits, Inder Bahl Microwave Circuit Modeling Using Electromagnetic Field Simulation, Daniel G. Swanson, Jr. and Wolfgang J. R. Hoefer Microwave Component Mechanics, Harri Eskelinen and Pekka Eskelinen Microwave Engineers’ Handbook, Two Volumes, Theodore Saad, editor Microwave Filters, Impedance-Matching Networks, and Coupling Structures, George L. Matthaei, Leo Young, and E.M.T. Jones Microwave Materials and Fabrication Techniques, Second Edition, Thomas S. Laverghetta Microwave Mixers, Second Edition, Stephen A. Maas Microwave Radio Transmission Design Guide, Trevor Manning Microwaves and Wireless Simplified, Thomas S. Laverghetta Modern Microwave Circuits, Noyan Kinayman and M. I. Aksun Neural Networks for RF and Microwave Design, Q. J. Zhang and K. C. Gupta Nonlinear Microwave and RF Circuits, Second Edition, Stephen A. Maas QMATCH: Lumped-Element Impedance Matching, Software and User’s Guide, Pieter L. D. Abrie Practical Analog and Digital Filter Design, Les Thede Practical MMIC Design, Steve Marsh Practical RF Circuit Design for Modern Wireless Systems, Volume I: Passive Circuits and Systems, Les Besser and Rowan Gilmore Practical RF Circuit Design for Modern Wireless Systems, Volume II: Active Circuits and Systems, Rowan Gilmore and Les Besser
Production Testing of RF and System-on-a-Chip Devices for Wireless Communications, Keith B. Schaub and Joe Kelly Radio Frequency Integrated Circuit Design, John Rogers and Calvin Plett RF Design Guide: Systems, Circuits, and Equations, Peter Vizmuller RF Measurements of Die and Packages, Scott A. Wartenberg The RF and Microwave Circuit Design Handbook, Stephen A. Maas RF and Microwave Coupled-Line Circuits, Rajesh Mongia, Inder Bahl, and Prakash Bhartia RF and Microwave Oscillator Design, Michal Odyniec, editor RF Power Amplifiers for Wireless Communications, Steve C. Cripps RF Systems, Components, and Circuits Handbook, Second Edition, Ferril A. Losee Stability Analysis of Nonlinear Microwave Circuits, Almudena Suárez and Raymond Quéré System-in-Package RF Design and Applications, Michael P. Gaynor TRAVIS 2.0: Transmission Line Visualization Software and User's Guide, Version 2.0, Robert G. Kaires and Barton T. Hickman Understanding Microwave Heating Cavities, Tse V. Chow Ting Chan and Howard C. Reader
For further information on these and other Artech House titles, including previously considered out-of-print books now available through our In-Print-Forever® (IPF®) program, contact:

Artech House
685 Canton Street
Norwood, MA 02062
Phone: 781-769-9750
Fax: 781-769-6334
e-mail: [email protected]

Artech House
46 Gillingham Street
London SW1V 1AH UK
Phone: +44 (0)20 7596-8750
Fax: +44 (0)20 7630 0166
e-mail: [email protected]

Find us on the World Wide Web at: www.artechhouse.com