2006 IEEE Nuclear and Space Radiation Effects Conference Short Course Notebook
July 17, 2006 Ponte Vedra Beach, Florida
Modeling the Space Radiation Environment and Effects on Microelectronic Devices and Circuits
Sponsored by: IEEE/NPSS Radiation Effects Committee
Supported by: Defense Threat Reduction Agency Air Force Research Laboratory Sandia National Laboratories NASA Living With a Star Program Jet Propulsion Laboratory NASA Electronic Parts and Packaging Program
Approved for public release; distribution is unlimited.
2006 IEEE Nuclear and Space Radiation Effects Conference
Short Course
Modeling the Space Radiation Environment and Effects on Microelectronic Devices and Circuits
July 17, 2006 Ponte Vedra Beach, Florida
Copyright © 2006 by the Institute of Electrical and Electronics Engineers, Inc. All rights reserved. Instructors are permitted to photocopy isolated articles for noncommercial classroom use without fee. For all other copying, reprint, or republication permission, write to Copyrights and Permissions Department, IEEE Publishing Services, 445 Hoes Lane, Piscataway, NJ 08855-1331.
Table of Contents

Section I: Introduction
Dr. Robert Reed, Vanderbilt University Institute for Space and Defense Electronics

Section II: Modeling the Space Radiation Environment
Dr. Mike Xapsos, NASA Goddard Space Flight Center

Section III: Space Radiation Transport Models
Dr. Giovanni Santin, ESA/ESTEC and Rhea System SA

Section IV: Device Modeling of Single Event Effects
Prof. Mark Law, University of Florida

Section V: Circuit Modeling of Single Event Effects
Jeff Black and Dr. Tim Holman, Vanderbilt University Institute for Space and Defense Electronics
2006 IEEE NSREC Short Course
Section I: Introduction
Dr. Robert Reed Vanderbilt University Institute for Space and Defense Electronics
Approved for public release; distribution is unlimited
Introduction

This Short Course Notebook contains manuscripts prepared in conjunction with the 2006 Nuclear and Space Radiation Effects Conference (NSREC) Short Course, which was held July 17, 2006 in Ponte Vedra Beach, Florida. This was the 27th Short Course offered at the NSREC. This Notebook is intended to be a useful reference for radiation effects experts and beginners alike. The topics chosen each year are timely and informative, and are covered in a greater level of detail in this forum than is possible in individual contributed papers.

The title of this year's short course is "Modeling the Space Radiation Environment and Effects on Microelectronic Devices and Circuits." This one-day Short Course will provide a detailed discussion of the methods used by radiation effects engineers to model the space radiation environment and some of its effects on modern devices and circuits. The remarkable advances in modern device technology offer specific challenges for high-fidelity radiation effects modeling. These include the need for improved modeling of the variability of space radiation, the transport of the environment through spacecraft structures and chip packaging, and detailed single event effects modeling at the device and circuit level.

This Notebook has four technical sections on different aspects of the problem. The first section focuses on methods used to predict the space radiation environment. The next section provides a detailed discussion of the basic interactions of radiation with matter and describes existing radiation transport computer codes and methods. The final two sections are focused on Single Event Effects (SEE) modeling: the first on the proper use of Technology Computer Aided Design (TCAD) tools to model charge transport after a single ionizing event, and the second on modeling SEEs in integrated circuits.

Each attendee received a complimentary CD-ROM that contains an archive of IEEE NSREC Short Course Notebooks (1980-2006). This collection covers 27 years of the one-day tutorial courses presented yearly at the NSREC. It serves as a valuable reference for students, engineers and scientists.

The Short Course Notebook is divided into five sections as follows. Section I (this section!) provides the motivation for the selection of the Short Course topics and the biographies of the instructors.

Section II, by Dr. Mike Xapsos, will discuss recent developments in modeling the trapped particle, galactic cosmic ray and solar particle radiation environments. The metrics for describing the effect these radiations have on electronic devices and circuits will be introduced. These include ionizing dose, displacement damage dose and linear energy transfer (LET). A substantial portion of the course will be devoted to the recent application of models for characterization of radiation environments. The origins of the methods will be described, leading up to the environment applications. Comparisons with traditional models will be shown. Example results for different phases of the solar cycle and for missions ranging from low earth orbit out to interplanetary space will be presented.

Section III, by Dr. Giovanni Santin, will provide a review of the physical interactions of the space radiation environment with matter and the models used to compute the environment local to the microelectronic circuit.
The first portion will be devoted to defining the important physical processes that must be included when modeling the transport of the space radiation environment through spacecraft materials. Then Dr. Santin will provide an overview of the current techniques and tools that are available for transport modeling. The last portion will focus on the application and validation of GEANT4 for use in transport modeling of the space environment, with emphasis on the effects on microelectronic devices.
Section IV, by Prof. Mark E. Law, will discuss using device and process simulation tools effectively to model single event upset behaviors. Modeling single event upset provides many challenges to TCAD tools. In this course, practical pitfalls will be described and techniques to avoid these problems will be discussed. Several issues can create problems. First, numerical approximations must be understood and controlled by the user. Second, the device geometry, doping, and materials need to be set up correctly. Third, physical transport models have specific limitations for application to single event simulations. Most TCAD models have been tuned to MOS device transport, and may not be appropriate for bulk charge removal in a single event case. A complex simulation example will be presented to illustrate these points and demonstrate good practice.

Section V, by Jeff Black and Dr. Tim Holman, will discuss the various tools currently available for simulating single event effects at the circuit level. Circuit level simulation can be performed more efficiently than TCAD simulation, at a cost of reduced simulation fidelity. This course will provide an understanding of circuit simulation fidelity and how to make use of the results in design and analysis tasks using modern technology. They will provide an overview of single event effect mechanisms, with emphasis on the circuit structures responsible for charge generation. They will also provide a classification of circuit simulation tools and describe the simulation challenges and potential pitfalls. The bulk of the course will cover the tools available for circuit simulation, defining the circuits and stimulus inputs, setting up the simulations, and analyzing the results. An example of single event effects simulation will be shown for each class of circuit simulator.

I would like to personally thank each of the Short Course Instructors, Mike Xapsos, Giovanni Santin, Mark Law, Jeff Black, and Tim Holman, for their substantial efforts to guarantee the success of the 2006 NSREC Short Course. The preparation of these manuscripts and the presentations given at the conference involve a great deal of personal time and sacrifice. I think I can speak for all of those that attended the Short Course and read these notes when I say THANK YOU. I would also like to thank Lew Cohn for his efforts in reviewing the manuscripts and ensuring that the Short Course Notebooks were printed in a timely manner, and the DTRA print office for printing the Notebooks. In addition, I would like to thank Dale Platteter for his efforts in publishing the CD-ROM. Finally, I would like to thank the team of people that served as reviewers for these notes and the presentations.
Dr. Robert A. Reed NSREC Short Course Chairman ISDE Vanderbilt University
Biographies

Dr. Robert Reed, Short Course Chairman, Vanderbilt University Institute for Space and Defense Electronics

Robert A. Reed received his M.S. and Ph.D. degrees in Physics from Clemson University in 1993 and 1994. After completion of his Ph.D. he worked as a post-doctoral fellow at the Naval Research Laboratory and later worked for Hughes Space and Communication. From 1997 to 2004, Robert was a research physicist at NASA Goddard Space Flight Center, where he supported NASA space flight and research programs. He is currently a Research Associate Professor at Vanderbilt University. His radiation effects research activities include topics such as single event effects and displacement damage basic mechanisms and on-orbit performance analysis and prediction techniques. He has authored over 70 papers on various topics in the radiation effects area. He was awarded the 2004 Early Achievement Award from IEEE/NPSS and the 2000 Outstanding Young Alumni Award from Clemson University. Robert has been involved in the NSREC community since 1992, serving as 2004 Finance Chairman, 2002 Poster Session Chairman, and 2000 Short Course Instructor.
Dr. Mike Xapsos NASA Goddard Space Flight Center Mike Xapsos is a research physicist in the Radiation Effects and Analysis Group at NASA Goddard Space Flight Center where he oversees its work on the space radiation environment. This involves developing models of the environment and using models and tools to determine radiation requirements for NASA missions. He is the Project Scientist for the Space Environment Testbeds (SET) Project and the Radiation Lead for the Solar Dynamics Observatory (SDO) Mission. Prior to joining NASA in 2001 he worked in the Radiation Effects Branch at the Naval Research Laboratory, where he also researched problems in device radiation physics. He holds a Bachelor’s degree in physics and chemistry from Canisius College and a PhD degree in physics from the University of Notre Dame. He has held the position of Guest Editor for the IEEE Transactions on Nuclear Science, Technical Program Chairman for the IEEE Nuclear and Space Radiation Effects Conference, and is currently General Chairman for the Single Event Effects Symposium. He has published over 75 technical papers and holds one US patent.
Dr. Giovanni Santin, ESA/ESTEC and Rhea System SA

Giovanni Santin is an analyst in the Space Environments and Effects Analysis section at the European Space Agency (ESA/ESTEC), on loan from RHEA Tech Ltd in support of ESA programs. He is a specialist in Monte Carlo radiation transport codes. His current research interests are in the development and use of radiation environment models, radiation effects modeling for manned and unmanned missions, radiation analysis engineering tools and radiation monitors. He holds a Bachelor's degree in physics and a PhD degree in physics from the University of Trieste, Italy. Prior to joining ESA with RHEA in 2002, he worked on experimental particle physics at CERN for the University of Geneva and on medical physics at the University of Lausanne. In addition to his research in the space environment, he is involved in medical physics research, mainly in developments for PET and SPECT and in dosimetry for radiation therapy.
Professor Mark E. Law, University of Florida

Mark Law is a professor and chair of Electrical and Computer Engineering at the University of Florida. He received the B.S. Cpr.E. degree from Iowa State University in 1982 and the Ph.D. degree from Stanford University in 1988. His current research interests are in integrated circuit process modeling, characterization, and device modeling. Dr. Law was named a National Science Foundation Presidential Faculty Fellow in 1992, College of Engineering Teacher of the Year in 1996-97, and a UF Research Fellow in 1998. He was editor-in-chief of the IEEE Journal on Technology Computer Aided Design. He is currently the vice president for technical activities of the IEEE Electron Device Society. He chaired the 1997 Simulation of Semiconductor Process and Devices Meeting, the 1999 and 2002 silicon front-end processing symposia of the Materials Research Society, the 2005 Ultra-Shallow Junctions workshop, and the 2000 International Electron Devices Meeting. He was named an IEEE Fellow in 1998 for his contributions to integrated circuit process modeling and simulation.
Jeffrey D. Black and Dr. W. Timothy Holman, Vanderbilt University Institute for Space and Defense Electronics

Jeffrey D. Black is a Senior Research Engineer in the Institute for Space and Defense Electronics (ISDE) at Vanderbilt University. He received his BSEE at the United States Air Force Academy in 1988 and his MSEE at the University of New Mexico in 1991. He is currently pursuing his PhD at Vanderbilt University. Jeff's areas of specialty and interest are single event effects and mitigation approaches. Prior to joining ISDE in 2004, Jeff worked for Mission Research Corporation, now ATK Mission Research, in Albuquerque, NM. Jeff is just completing his three-year term as Secretary of the Radiation Effects Steering Group. He has enjoyed serving the NSREC community in various positions.

Dr. W. Timothy Holman is a member of the Institute for Space and Defense Electronics and a research associate professor in the Department of Electrical and Computer Engineering at Vanderbilt University. His current research is focused on radiation effects in analog and mixed-signal circuits, and the design of radiation-hardened mixed-signal circuits in CMOS and BiCMOS technologies. In addition to his research, Dr. Holman has developed new methods for video-based delivery of educational material that are used to produce archival CD-ROM copies of the NSREC short course for attendees each year.
2006 IEEE NSREC Short Course
Section II: Modeling the Space Radiation Environment
Michael Xapsos NASA Goddard Space Flight Center Greenbelt, MD 20771
Approved for public release; distribution is unlimited
Modeling the Space Radiation Environment
Michael Xapsos, NASA Goddard Space Flight Center
NSREC 2006 Short Course

Outline

I. Introduction
II. The Solar Activity Cycle
III. The Earth's Trapped Radiation Environment
    A. The Magnetosphere and Trapped Particle Motion
    B. Characteristics of Trapped Protons
    C. The AP-8 Model
    D. Recent Developments in Trapped Proton Models
    E. Characteristics of Trapped Electrons
    F. The AE-8 Model
    G. Recent Developments in Trapped Electron Models
IV. Galactic Cosmic Rays
    A. General Characteristics
    B. Galactic Cosmic Ray Models
V. Solar Particle Events
    A. General Characteristics
    B. Solar Proton Models
        1. The Maximum Entropy Principle and the Distribution of Solar Proton Event Magnitudes
        2. Cumulative Fluence During Solar Maximum
        3. Cumulative Fluence During Solar Minimum
        4. Extreme Value Theory and Worst Case Events
        5. Self-Organized Criticality and the Nature of the Energy Release Process
            a) Rescaled Range Analysis
            b) Fractal Behavior
            c) Power Function Distribution
    C. Solar Heavy Ion Models
VI. Future Challenges
VII. References
I. Introduction
There are a number of environmental hazards that spacecraft must be designed for, including low energy plasma, particle radiation, neutral gas particles, ultraviolet and x-ray radiation, micrometeoroids and orbital debris. This manuscript is focused on the hazards the space environment presents for devices and integrated circuits. Hence it is mainly concerned with three categories of high-energy particle radiation in space. The first is particles trapped by planetary magnetic fields such as the earth's Van Allen Belts. The second is the comparatively low-level flux of ions that originate outside of our solar system, called galactic cosmic rays. The third is bursts of radiation emitted by the sun, characterized by high fluxes of protons and heavy ions, referred to as solar particle events.

In order to have reliable, cost-effective designs and implement new space technologies, the radiation environment must be understood and accurately modeled. Underestimating the radiation levels leads to excessive risk and can result in degraded system performance and loss of mission lifetime. Overestimating the radiation levels can lead to excessive shielding, reduced payloads, over-design and increased cost.

The last ten years or so have been a renaissance period in space radiation environment modeling, for a number of reasons. There has been a growing need for some time now to replace the long-time standard AP-8 and AE-8 radiation belt models, which are based on data that badly need to be updated. A growing number of interplanetary exploration initiatives, particularly manned initiatives to the moon and Mars, are driving the development of improved models of the galactic cosmic ray and solar particle event environments. Improved radiation detectors and other technologies, such as those operating on the Advanced Composition Explorer (ACE) and the Solar, Anomalous and Magnetospheric Particle EXplorer (SAMPEX) satellites, have led to unprecedented measurement accuracy and resolution of space radiation properties. Finally, the pervasive use of commercial-off-the-shelf (COTS) microelectronics in spacecraft to achieve increased system performance must be balanced by the need to accurately predict their complex responses in space.

The main objective of this section of the short course is to present recent developments in modeling the trapped particle, galactic cosmic ray and solar particle event radiation environments for radiation effects applications. It starts with background information and reviews of the traditional models before proceeding to the newer models. In the case of solar particle event models, a number of probabilistic methods not commonly found in the literature have recently been applied. An overview of the origins and backgrounds of these methods will be given leading up to the environment applications. Comparisons between various models will be shown for different phases of the solar cycle and for missions ranging from low earth orbit out to interplanetary space.

As galactic cosmic rays and solar particles enter and interact with the earth's upper atmosphere, showers of secondary particles are produced. Secondary neutrons are the most important contributor to single event effects at altitudes below about 60,000 feet. Discussions of the atmospheric and terrestrial neutron environments can be found elsewhere [Ba97], [Ba05].
II. The Solar Activity Cycle
The sun is both a source and a modulator of space radiations. Understanding its cyclical activity is an important aspect of modeling the space radiation environment. The solar activity cycle is approximately 11 years long. During this period there are typically 7 years during solar maximum, when activity levels are high, and 4 years during solar minimum, when activity levels are low. In reality the transition between solar maximum and solar minimum is a continuous one, but it is often considered to be abrupt for convenience. At the end of each 11-year cycle the magnetic polarity of the sun reverses and another 11-year cycle follows. Thus, strictly speaking, the total activity cycle is approximately 22 years long. Of the space radiations considered here, the magnetic polarity apparently affects only the galactic cosmic ray fluxes [Ba96a], and not the trapped particle or solar particle event fluxes. Thus, things are often viewed on an approximately 11-year cyclical basis.

Two common indicators of this approximately 11-year periodic solar activity are sunspot numbers and solar 10.7 cm radio flux (F10.7). The most extensive record is that of observed sunspot numbers, which dates back to the 1600s. This record is shown in Figure 1. The numbering of sunspot cycles began in 1749, and we are currently near the end of solar cycle 23. The record of F10.7 began part way through solar cycle 18 in the year 1947 and is shown in Figure 2.
Figure 1. The observed record of yearly averaged sunspot numbers.
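The roughly 11-year periodicity is straightforward to recover from a record like the one in Figure 1. The sketch below estimates the dominant cycle period with a simple periodogram; the sunspot series used here is a synthetic placeholder, so a real analysis would substitute the observed yearly averaged record (available, for example, from the SIDC archive).

```python
import numpy as np

# Estimate the dominant solar-cycle period from a yearly sunspot series
# via a periodogram. The array below is a synthetic stand-in for the
# observed record.
years = np.arange(1900, 2006)
ssn = 80.0 + 70.0 * np.sin(2 * np.pi * (years - 1902) / 11.0)  # placeholder data

ssn_detrended = ssn - ssn.mean()          # remove the mean so the DC bin is empty
spectrum = np.abs(np.fft.rfft(ssn_detrended)) ** 2
freqs = np.fft.rfftfreq(len(ssn), d=1.0)  # cycles per year (yearly sampling)

peak = np.argmax(spectrum[1:]) + 1        # skip the zero-frequency bin
print(f"Dominant period: {1.0 / freqs[peak]:.1f} years")  # ~11 yr expected
```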
Figure 2. Measured values of solar 10.7 cm radio flux.

Although sunspot numbers and F10.7 are commonly accepted indicators of solar activity, quantitative relations to measured radiation events and fluxes are not necessarily straightforward. Solar particle events are known to occur with greater frequency and intensity during the declining phase of solar maximum [Sh95]. Trapped electron fluxes also tend to be higher during the declining phase [Bo03]. Trapped proton fluxes in low earth orbit (LEO) reach their maximum during solar minimum, but exactly when this peak is reached depends on the particular location [Hu98]. Galactic cosmic ray fluxes are also at a maximum during solar minimum but in addition depend on the magnetic polarity of the sun [Ba96a].

There has been considerable effort put into forecasting long-term solar cycle activity. A review of a number of the methods is presented by Hathaway [Ha99]. These include regression methods, which involve fitting a function to the data as the cycle develops. Also discussed are precursor methods, which estimate the amplitude of the next cycle based on some type of correlation with prior information. These methods can also be combined. In addition, physically based methods are being developed based on the structure of the magnetic field within the sun and heliosphere [Sc96], [Di06]. However, accurate methods for predicting future solar cycle activity levels prior to the start of the cycle have thus far been elusive. A potential breakthrough has recently been reported that uses a combination of computer simulation and observations of the solar interior from instrumentation onboard the Solar and Heliospheric Observatory (SOHO) [Di06].

Given the current state of this modeling, probabilistic models of solar activity can be useful. Such a model of F10.7 is shown in Figure 3 [Xa02]. This also illustrates the general behavior of the observed cyclical properties, at least over recent cycles. The greater the peak activity of a cycle, the faster the rise-time to the peak level. Furthermore, the cyclical activity is asymmetric such that the descending phase of the cycle is longer than the ascending phase.
Figure 3. Probabilistic model of F10.7. The various curves are labeled as a function of confidence level that the activity shown will not be exceeded [Xa02].
III. The Earth's Trapped Radiation Environment
This section leads up to recent modeling developments for trapped protons and trapped electrons geared toward radiation effects applications. Initially a review of background information and related physical processes will be given. Further background information can be found in [Ba97], [Ma02] and [Wa94].

A. The Magnetosphere and Trapped Particle Motion
The earth's magnetosphere consists of both an external and an internal magnetic field. The external field is the result of plasma, or ionized gas, that is continually emitted by the sun, called the solar wind. The internal or geomagnetic field originates primarily from within the earth and is approximately a dipole field. As shown in Figure 4, the solar wind and its embedded magnetic field tend to compress the geomagnetic field. During moderate solar wind conditions, the magnetosphere terminates at roughly 10 earth radii on the sunward side. During turbulent magnetic storm conditions it can be compressed to about 6 earth radii. The solar wind generally flows around the geomagnetic field, and consequently the magnetosphere stretches out to a distance of possibly 1000 earth radii in the direction away from the sun.
Figure 4. The earth's magnetosphere.

Figure 5 shows the geomagnetic field, which is approximately dipolar for altitudes of up to about 4 or 5 earth radii. It turns out that the trapped particle populations are conveniently mapped in terms of the dipole coordinates approximating the geomagnetic field. This dipole coordinate system is not aligned with the earth's geographic coordinate system: the axis of the magnetic dipole field is tilted about 11 degrees with respect to the geographic North-South axis, and its origin is displaced by a distance of more than 500 km from the earth's geocenter. The standard method is to use McIlwain's (B,L) coordinates [Mc61]. Within this dipole coordinate system, L represents the distance from the origin to the field line in the direction of the magnetic equator, expressed in earth radii (one earth radius is 6371 km). B is simply the magnetic field strength; it describes how far away from the magnetic equator a point is along a magnetic field line. B-values are at a minimum at the magnetic equator and increase as the magnetic poles are approached.
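For a centered dipole, the relations behind the (B,L) coordinates take a simple closed form: a field line crossing the magnetic equator at L earth radii satisfies r = L cos^2(lambda), and the field strength is B = (B0/r^3) sqrt(1 + 3 sin^2(lambda)), with B0 the equatorial surface field. The sketch below evaluates these standard relations; it is only the dipole approximation, ignoring the tilt and offset just described, whereas real (B,L) calculations use a full geomagnetic field model and McIlwain's algorithm.

```python
import numpy as np

B0 = 0.311  # gauss; approximate equatorial surface field of the centered dipole

def dipole_b_and_l(r_re, mag_lat_deg):
    """(B, L) in the centered-dipole approximation.

    r_re        : radial distance in earth radii (1 R_E = 6371 km)
    mag_lat_deg : magnetic latitude in degrees
    """
    lam = np.radians(mag_lat_deg)
    B = (B0 / r_re**3) * np.sqrt(1.0 + 3.0 * np.sin(lam) ** 2)  # gauss
    L = r_re / np.cos(lam) ** 2   # the field line crosses the equator at r = L
    return B, L

# A point 500 km up, at 30 degrees magnetic latitude:
B, L = dipole_b_and_l((6371.0 + 500.0) / 6371.0, 30.0)
print(f"B = {B:.3f} G, L = {L:.2f}")
```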
Figure 5. The internal magnetic field of the earth is approximately a dipole field.

Next the basic motion of a trapped charged particle in this approximately dipolar field will be discussed. Charged particles become trapped because the magnetic field can constrain their motion. As shown in Figure 6, the motion of a charged particle in this field is to spiral around and move along the magnetic field line. As the particle approaches the polar regions the magnetic field strength increases and causes the spiral to tighten. Eventually the field strength is sufficient to force the particle to reverse direction. Thus, the particle is reflected between so-called "mirror points" and "conjugate mirror points". Additionally, there is a slower longitudinal drift of the path around the earth that is westward for protons and eastward for electrons. Once a complete azimuthal rotation is made around the earth, the resulting toroidal surface that has been traced out is called a drift shell. A schematic of such a drift shell is shown in Figure 7.
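The mirror points can be made quantitative in the dipole approximation: a particle with equatorial pitch angle alpha_eq mirrors where the field has grown to B_eq / sin^2(alpha_eq). A minimal sketch, solving the standard dipole field-line relation numerically (the solver choice here is arbitrary):

```python
import numpy as np
from scipy.optimize import brentq

def mirror_latitude_deg(eq_pitch_angle_deg):
    """Magnetic latitude of the mirror point on a dipole field line.

    A particle mirrors where B rises to B_eq / sin^2(alpha_eq); along a
    dipole line B/B_eq = sqrt(1 + 3 sin^2(lam)) / cos^6(lam), so we solve
    cos^6(lam) / sqrt(1 + 3 sin^2(lam)) = sin^2(alpha_eq) for lam.
    """
    target = np.sin(np.radians(eq_pitch_angle_deg)) ** 2

    def f(lam):
        return np.cos(lam) ** 6 / np.sqrt(1.0 + 3.0 * np.sin(lam) ** 2) - target

    # f(0) >= 0 and f -> -target < 0 toward the pole, so a root is bracketed.
    return np.degrees(brentq(f, 0.0, np.pi / 2 - 1e-9))

for alpha in (30.0, 60.0, 89.0):
    print(f"alpha_eq = {alpha:4.1f} deg -> mirrors at {mirror_latitude_deg(alpha):.1f} deg")
```

Smaller equatorial pitch angles mirror at higher latitudes, which is why weakly field-aligned particles penetrate closer to the atmosphere before reflecting.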
Figure 6. Motion of a charged trapped particle in the earth’s magnetic field.
Figure 7. Illustration of the geometry of drift shells.

B. Characteristics of Trapped Protons
Some of the characteristics of trapped protons and their radiation effects are summarized in Table 1. The L-shell range is from L = 1.15 at the inner edge of the trapped environment out beyond geosynchronous orbits to an L-value of about 10. Trapped proton energies extend up to a few hundred MeV, at which point the fluxes begin to fall off rapidly. The energetic trapped proton population with energies > 10 MeV is confined to altitudes below 20,000 km, while protons with energies of about 1 MeV or less are observed at geosynchronous altitudes and beyond. The maximum flux of energetic protons occurs at an L-value of around 1.8 and exceeds 10^5 p/(cm^2-s). Close to the inner edge, proton fluxes are modulated by the atmospheric density. They can decrease by as much as a factor of 2 to 3 during solar maximum due to atmospheric expansion and resulting losses caused by scattering processes.

Trapped protons can cause Total Ionizing Dose (TID) effects, Displacement Damage (DD) effects and Single Event Effects (SEE). The metric used for TID studies is ionizing dose, defined as the energy deposited per unit mass of material that comprises the sensitive volume. The unit commonly employed is the "rad", where 1 rad = 100 erg/g. One metric for proton-induced displacement damage is the equivalent fluence of a given proton energy, often taken as 10 MeV [An96]. A quantity analogous to the ionizing dose, called the displacement damage dose (DDD), is also used to study displacement effects in materials [Ma99], [Wa04]. It is defined as the energy that goes into displaced atoms per unit mass of material that comprises the sensitive volume. The units are analogous to those of ionizing dose, except that only the nonionizing component is counted. Finally, it is noted that studies of proton-induced SEE commonly use the proton energy incident on the sensitive device volume as the relevant parameter. Most proton-induced SEE occur as a result of target recoil products from interactions with the incident proton. The incident proton energy has a significant influence on these products, which is why results are commonly presented in terms of it.

Table 1. Trapped Proton Characteristics.
  L-Shell Values:      1.15 – 10
  Energies:            Up to 100's of MeV
  Fluxes* (> 10 MeV):  Up to ~10^5 cm^-2 s^-1
  Radiation Effects:   Total Ionizing Dose (TID); Displacement Damage (DD); Single Event Effects
  Metrics:             Dose for TID; 10 MeV equivalent fluence and Displacement Damage Dose for DD
  * long-term average
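A minimal sketch of the displacement damage bookkeeping defined above: DDD is the NIEL-weighted fluence, and a 10 MeV equivalent fluence is the fluence of 10 MeV protons giving the same DDD. The NIEL values below are illustrative placeholders, not tabulated data; a real analysis uses published NIEL tables for the device material.

```python
import numpy as np

# Displacement damage dose as the fluence-weighted sum of the nonionizing
# energy loss (NIEL), reduced to an equivalent 10 MeV proton fluence.
energies_mev = np.array([5.0, 10.0, 50.0, 100.0])   # spectrum bins
fluence = np.array([2e10, 1e10, 4e9, 1e9])          # protons/cm^2 per bin
niel = np.array([8e-3, 5e-3, 2e-3, 1.5e-3])         # MeV*cm^2/g (placeholders)
niel_10mev = 5e-3                                   # placeholder, same units

ddd = np.sum(fluence * niel)                        # MeV/g
phi_eq = ddd / niel_10mev                           # equivalent 10 MeV fluence, cm^-2
print(f"DDD = {ddd:.2e} MeV/g -> {phi_eq:.2e} equivalent 10 MeV protons/cm^2")
```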
C. The AP-8 Model
The well-known AP-8 trapped proton model is the eighth version of a model development effort led by James Vette. Over the years these empirical models have been indispensable for spacecraft designers and for the radiation effects community in general. The trapped particle models are static maps of the particle population during solar maximum and solar minimum. They are mapped in a dipole coordinate system such as the (B,L) coordinates described in section IIIA. A spacecraft orbit is calculated with an orbit generator. The orbit coordinates are then transformed to (B,L) coordinates and the trapped particle radiation environment determined. Models such as this are implemented in the SPace ENVironment Information System (SPENVIS) suite of programs [http]. Details of the AP-8 model and its predecessors can be found in [Sa76], [Ve91] and [Ba97].
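The sketch below illustrates that pipeline end to end. Every ingredient is a toy stand-in: a real analysis would use a proper orbit propagator, an IGRF-based (B,L) transform, and the actual AP-8 flux map (for example, through SPENVIS) rather than the simplified functions here.

```python
import numpy as np

def propagate_circular(alt_km, inc_deg, t_s):
    """Toy circular-orbit generator: returns (latitude_deg, r_in_earth_radii)."""
    a = 6371.0 + alt_km                                  # orbit radius, km
    period = 2.0 * np.pi * np.sqrt(a**3 / 398600.0)      # Kepler, mu in km^3/s^2
    phase = 2.0 * np.pi * t_s / period
    lat = np.degrees(np.arcsin(np.sin(np.radians(inc_deg)) * np.sin(phase)))
    return lat, a / 6371.0

def bl_coords(lat_deg, r_re):
    """Centered-dipole (B,L), treating latitude as magnetic latitude (toy)."""
    lam = np.radians(lat_deg)
    B = (0.311 / r_re**3) * np.sqrt(1.0 + 3.0 * np.sin(lam) ** 2)  # gauss
    return B, r_re / np.cos(lam) ** 2

def toy_flux_map(B, L):
    """Stand-in for the gridded map: peak near L = 1.8, zero below L = 1.15."""
    return 0.0 if L < 1.15 else 1.0e5 * np.exp(-(((L - 1.8) / 0.5) ** 2))

# Orbit-average a >10 MeV proton flux over one day, sampled once per minute.
samples = []
for t in np.arange(0.0, 86400.0, 60.0):
    lat, r = propagate_circular(alt_km=500.0, inc_deg=51.6, t_s=t)
    samples.append(toy_flux_map(*bl_coords(lat, r)))
print(f"orbit-averaged flux ~ {np.mean(samples):.2e} cm^-2 s^-1 (toy numbers)")
```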
Figure 8 is a contour plot of the trapped proton population with energies > 10 MeV, shown in a dipole coordinate system. The x-axis is the radial distance along the geomagnetic equator in units of earth radii, while the y-axis is the distance along the geodipole axis, also in units of earth radii. Thus, a y-value of zero represents the geomagnetic equator [Da96]. A semicircle with a radius of one centered at the point (0,0) represents the earth's surface on this plot. This is a particularly convenient way to reduce a large quantity of information and get an overview of the particle population on a single plot.
Figure 8. The trapped proton population with energies > 10 MeV as predicted by the AP-8 model for solar maximum conditions. From SPENVIS, [http].

For spacecraft that have an orbit lower than about 1000 km, the so-called "South Atlantic Anomaly" (SAA) dominates the radiation environment. This anomaly is due to the fact that the earth's geomagnetic and rotational axes are tilted and shifted relative to each other, as discussed in section IIIA. Thus, part of the inner edge of the proton belt is at lower altitudes, as shown in Figure 9. This occurs in the geographic region south and east of Brazil. It is shown in Figure 10 as a contour plot on geographic coordinates for > 10 MeV proton fluxes at a 500 km altitude.
Figure 9. The South Atlantic Anomaly [Da96].
Figure 10. Contour plot of proton fluxes > 10 MeV in the SAA at a 500 km altitude during solar maximum. From SPENVIS, [http].

The main difference between the solar maximum and solar minimum maps is seen at low altitudes, where the fluxes are lower during solar maximum. The reason is that the atmosphere expands as a result of heating during solar maximum, so that trapped protons are lost to scattering processes at a higher rate.

D. Recent Developments in Trapped Proton Models
This section discusses some of the measurements and modeling efforts that have been performed in an attempt to provide a more updated and dynamic description of the trapped proton population. The advantages of the AP-8 model are its long heritage of use and rather complete description of trapped protons in terms of energies and geographic location. However, it is based on data that were taken mainly in the 1960's and early 1970's. Thus a serious concern is whether it still accurately represents the trapped proton environment today.

The PSB97 model, developed at the Belgian Institute for Aeronomy (BIRA) and the Aerospace Corporation, is a LEO model for the solar minimum time period [He99]. It is based on data from the Proton/Electron Telescope (PET) onboard SAMPEX. A notable feature of this model is its broad proton energy range, which extends from 18.5 to 500 MeV. One of the significant extensions of this model beyond AP-8 is that it accounts for secular variation of the geomagnetic field. This variation results because the center of the geomagnetic dipole field drifts away from the geocenter of the Earth at about 2.5 km per year and the magnetic moment decreases with time [http]. The overall effect is to draw the SAA slowly inward toward the earth. A comparison of measurements of the SAA made for > 18.5 MeV protons at an altitude of 500 km is shown in Figure 11 [He99]. It is seen that compared to the AP-8 model of magnetic field epoch 1960, the PSB97 model of magnetic field epoch 1995
shows that the SAA has a higher peak flux value that has drifted westward. It also indicates the SAA covers a broader geographic region.
Figure 11. Comparison of the SAA during solar minimum for > 18.5 MeV protons at a 500 km altitude for the time period of the AP-8 model (left) and for the modern SAMPEX/PET measurements (right) [He99].

The Low Altitude Trapped Radiation Model (LATRM), formerly called NOAAPRO, is a LEO proton model developed at the Boeing Company [Hu98]. It is based on 17 years of data taken by the TIROS/NOAA satellites. It accounts for the secular variation of the geomagnetic field using an internal field model. One of the important new features of this model was to account for a continuous solar cycle variation of the trapped proton flux, as opposed to AP-8, which transitions discontinuously between the solar maximum and solar minimum periods. This was done using F10.7 as a proxy for the atmospheric density, which controls the proton flux at low altitudes. Figure 12 shows the proton flux for different L-values superimposed upon F10.7 for the period of time the model is based on. It is seen that the flux is anti-correlated with F10.7 due to the greater losses of protons to the atmosphere during solar maximum. The proton flux also shows a phase lag that is dependent on L. Using these empirical relations, the LATRM is able to describe the trapped proton variations over the complete solar cycle as well as make projections into the future, as sketched below.
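A minimal sketch of that idea, estimating the phase lag by scanning for the strongest anti-correlation between flux and a shifted F10.7 series. The monthly series here are synthetic placeholders standing in for the TIROS/NOAA flux and F10.7 records; LATRM's actual fitting procedure is more elaborate.

```python
import numpy as np

rng = np.random.default_rng(0)
months = np.arange(360)                                      # 30 years, monthly
f107 = 140.0 + 70.0 * np.sin(2 * np.pi * months / 132.0)     # ~11-yr F10.7 proxy
true_lag = 14                                                # months; unknown in practice
flux = 1.0e4 - 30.0 * np.roll(f107, true_lag) + rng.normal(0.0, 50.0, months.size)

def best_lag(flux, f107, max_lag=36):
    """Lag (months) at which flux is most strongly anti-correlated with F10.7."""
    corrs = [np.corrcoef(flux, np.roll(f107, lag))[0, 1] for lag in range(max_lag)]
    return int(np.argmin(corrs))             # most negative correlation

print(f"estimated phase lag: {best_lag(flux, f107)} months (true: {true_lag})")
```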
Figure 12. Comparison of the trapped proton flux (points) for low L-values to F10.7 (dotted curve) [Hu98].

The Combined Release and Radiation Effects Satellite PROton (CRRESPRO) trapped proton model is based on data collected over a 14-month period during solar maximum of solar cycle 22 [Gu96]. Although the population of trapped protons in the region of the inner belt is fairly stable, measurements from this satellite demonstrated significant temporal and spatial variability of trapped particles. In particular, they showed that the greatest time variations of trapped protons occur in Medium Earth Orbit (MEO). CRRESPRO consists of both a quiet and an active model of trapped protons during solar maximum, ranging over L-values from 1.15 to 5.5. The quiet model is for the mission period prior to a large geomagnetic storm that occurred in March 1991, and the active model is for the mission period afterward. Figure 13 shows the CRRESPRO quiet and active models along with AP-8, demonstrating the formation of a second, stable proton belt for L-values between 2 and 3. The belt was particularly apparent in the 20 to 70 MeV energy range [Gu96]. Although the flux levels began to decay immediately, they were still measurable on the Russian METEOSAT after about 2 years [Di97].
Figure 13. CRRESPRO quiet and active models compared to AP-8 for 55 MeV differential proton fluxes [Gu96].

Recently, the Trapped Proton Model-1 (TPM-1) was developed by Huston [Hu02]. This model combines many of the features of LATRM and CRRESPRO. It covers the geographic region from about 300 km out to nearly geosynchronous orbit for protons in the 1.5 to 81.5 MeV energy range. It models the continuous variation of fluxes over the solar cycle, and also contains a model of both quiet and active conditions as observed onboard CRRES. TPM-1 has a time resolution of 1 month, which is a significant improvement over AP-8; the AP-8 model should be used only for long-term average fluxes.

As discussed above, the TPM-1 and PSB97 models have a number of advantages over the AP-8 model. In addition, these models are based on relatively modern instrumentation compared to AP-8. Thus, it is interesting to examine how representative the AP-8 model is of the current trapped proton environment. Figure 14 shows a comparison of the fluxes calculated for an orbit similar to the International Space Station for TPM-1 (quiet conditions), PSB97 and AP-8. All results are for the solar minimum time period. Comparing TPM-1 to AP-8, it is clear there is a significant difference in the hardness of the energy spectra: TPM-1 calculates lower fluxes than AP-8 for low energies and higher fluxes for high energies. Examining the results calculated with PSB97, it is seen that the overlap with the TPM-1 model in their common energy range of about 20 to 80 MeV is excellent. Thus, it would appear that significant discrepancies now exist with the AP-8 model for LEO. A combination of TPM-1 and PSB97, including an update of data
taken with the SAMPEX/PET instrument, would result in a fairly complete trapped proton model.
Figure 14. Comparison of fluxes predicted by three trapped proton models for an orbit similar to the International Space Station during solar minimum [La05].

Figures 15 and 16 are comparisons of the TPM-1 and CRRESPRO models, both quiet and active, and the AP-8 model for solar maximum conditions. This comparison is for a 2000 km x 26,750 km elliptical orbit with a 63.4 degree inclination. In this case it is seen that AP-8 predicts significantly higher fluxes over nearly the full energy spectrum, although the difference is less for the TPM-1 active calculation. It can be seen from the examples discussed in this section that these types of comparisons are highly orbit dependent and must be considered on a case-by-case basis. An excellent summary of a number of model comparisons for common orbits is given in Lauenstein and Barth [La05].
Figure 15. Comparison of trapped proton models for an elliptical orbit during quiet conditions and the solar maximum time period [La05].
Figure 16. Comparison of trapped proton models for an elliptical orbit during active conditions and the solar maximum time period [La05].

E. Characteristics of Trapped Electrons
Some of the characteristics of trapped electrons are summarized in Table 2. There is both an inner and an outer zone, or belt, of trapped electrons. These two zones are very different, so their characteristics are listed separately. The inner zone ranges out to an L-value of about 2.8. The electron energies range up to approximately 4.5 MeV. The flux reaches a peak near L = 1.5, where the value is about 10^6 cm^-2 s^-1 for > 1 MeV electrons. These fluxes gradually increase during solar maximum by a factor of 2 to 3. This electron population, though, tends to remain relatively stable. The outer zone has L-values ranging between about 2.8 and 10. The electron energies are generally less than approximately 10 MeV. Here the region of peak flux is between L-values of 4.0 and 4.5, and the long-term average value for > 1 MeV electrons is roughly 3 x 10^6 cm^-2 s^-1. This zone is very dynamic, and the fluxes can vary by orders of magnitude from day to day.

The distribution of trapped particles is continuous throughout the inner and outer zones. However, between the two high intensity zones is a region where the fluxes are at a local minimum during quiet periods. This is known as the slot region. The exact location and extent of the slot region depends on electron energy, but it is between L-values of 2 and 3. The slot region is an attractive one for certain types of missions due to the increased spatial coverage compared to missions in LEO. However, the radiation environment of this region is very dynamic.
Trapped electrons contribute to TID effects, displacement damage effects and charging/discharging effects. As discussed previously, the metric for describing TID effects is dose. In a fashion analogous to protons, the metric for electron-induced displacement damage is either 1 MeV equivalent electron fluence or displacement damage dose. It should be noted, though, that the application of the displacement damage dose concept is not as straightforward for electrons as it is for protons [Wa04]. Finally, charging/discharging effects can be either spacecraft surface charging, caused primarily by low energy electrons, or deep dielectric charging, caused by high energy electrons. A key parameter for these analyses is the potential difference induced by charging between a dielectric and a conductive surface.

Table 2. Trapped Electron Characteristics.
              L-Shell Values   Energies         Fluxes* (> 1 MeV)
  Inner Zone  1 to 2.8         Up to 4.5 MeV    10^6 cm^-2 s^-1
  Outer Zone  2.8 to 10        Up to 10 MeV     3 x 10^6 cm^-2 s^-1
  * long-term average

F. The AE-8 Model
The long-time standard model for trapped electrons has been the AE-8 model [Ve91], [Ve91a], [Ba97]. It consists of two static flux maps of trapped electrons – one for solar maximum and one for solar minimum conditions. Due to the variability of the outer zone electron population, the AE-8 model is valid only for long periods of time. Fig. 17 is a contour plot of the trapped electron population with energies > 1 MeV shown in dipole coordinates. The structure of the inner and outer zones is clearly seen. Since AE-8 is based on an internal magnetic field model, results are shown only out to geosynchronous altitudes but the trapped electron population exists well beyond this. An interesting feature of the outer belt is that it extends down to low altitudes at high latitudes.
Figure 17. The electron population with energies > 1 MeV as predicted by the AE-8 model for solar maximum conditions. From SPENVIS, [http].
G. Recent Developments in Trapped Electron Models
If only the trapped particle populations are considered, the inner zone is often dominated by radiation effects due to trapped protons, while the outer zone is often dominated by radiation effects due to trapped electrons. Thus, recent trapped electron models have focused on the outer zone. A feature of the outer zone is its high degree of variability and dynamic behavior. This results from geomagnetic storms and substorms, which cause major perturbations of the geomagnetic field. For example, processes such as coronal mass ejections and solar flares cause disturbances in the solar wind, which subsequently interacts with the earth's magnetosphere. Energy is extracted from the solar wind, stored and dissipated, resulting in the injection and redistribution of electrons into the magnetosphere. Although the physical details of the injection mechanisms are not completely understood, recent measurements from the Upper Atmosphere Research Satellite (UARS) illustrate the high degree of variability of electron flux levels prior to and after such storms. Figure 18 shows the electron energy spectra for 3.25 < L ≤ 3.5 after long-term decay from a prior storm (day 235) and two days after a large storm (day 244), compared to the average flux level over a 1000 day period [Pe01]. It is seen, for example, that at 1 MeV the difference in the one-day averaged differential fluxes over this 9-day period is about 3 orders of magnitude.
Figure 18. Total electron flux before and after a geomagnetic storm compared to a long-term average as measured onboard the UARS [Pe01].

Due to the volatile nature of the outer zone, it seems reasonable to resort to probabilistic methods in order to improve on the AE-8 model. The average flux measured during a period of time will approach the long-term average as the measurement period increases. This is illustrated in Figure 19, which is a statistical model of the median, 10th and 90th percentile fluxes measured in geostationary orbit by instrumentation onboard METEOSAT-3 [Da96]. The abscissa is the time period of the measurement and ranges from about one day to a little over one year. This figure indicates that about a month of data in the 200 to 300 keV energy range must be accumulated in this orbit in order to approximate the median flux. It turns out that even longer periods are needed for higher energy electrons and for orbits with lower L-values. These types of calculations can also be used to put a constraint on the period of time over which a long-term model such as AE-8 should be used. A conservative rule of thumb is that AE-8 should not be applied to a period any shorter than 6 months. A model such as that shown in Figure 19 is also useful for estimating worst-case fluxes averaged over different time scales.
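The statistic behind a plot like Figure 19 is easy to reproduce: slide an averaging window of each length over a daily flux series and take percentiles of the window means. The sketch below uses a synthetic lognormal series in place of the METEOSAT-3 data; the convergence of the percentiles toward the long-term average with increasing window length is the behavior described above.

```python
import numpy as np

rng = np.random.default_rng(1)
daily_flux = rng.lognormal(mean=10.0, sigma=1.5, size=1000)  # synthetic, cm^-2 s^-1

for window in (1, 7, 30, 180, 365):
    # means of all contiguous windows of the given length (days)
    kernel = np.ones(window) / window
    means = np.convolve(daily_flux, kernel, mode="valid")
    p10, p50, p90 = np.percentile(means, [10, 50, 90])
    print(f"{window:4d} d: 10% {p10:.3e}  median {p50:.3e}  90% {p90:.3e}")
```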
Figure 19. Statistical model of the median, 10th and 90th percentile fluxes in geostationary orbit for approximately 200 to 300 keV electrons [Da96].

Instrumentation onboard UARS was used to construct a probabilistic model during the declining phase of solar cycle 22 [Pe01]. Figure 20 shows the probability of encountering a daily-averaged, > 1 MeV trapped electron flux for a given L-value. Note that such a probability plot indicates both the most frequently occurring flux value and its variation for a given L. The values of L covered in this work range from about 2 to 7. Note that for L-shells between 2 and 3, corresponding to the slot region, the highest probabilities correspond to the lowest observed fluxes. However, the overall range of possible flux values is several orders of magnitude, indicating the volatility of the region. Figure 20 shows the highest fluxes are between L-values of 3.5 and 4.5. For L > 4.5 the fluxes decrease steadily with increasing L.
Figure 20. Probability plot of encountering a given > 1 MeV electron flux at a given L-value during the declining phase of solar cycle 22 [Pe01].

The observations made of the slot region with instrumentation onboard the UARS satellite are consistent with recent results obtained from the TSX5 mission over an approximately 4 year period [Br04]. Figure 21 shows a cumulative probability plot of daily averaged > 1.2 MeV electron fluxes in this region. The distribution shows the probability that a daily averaged flux exceeds the threshold flux shown on the x-axis. The well-known "Halloween-2003" storm occurred during this mission and is shown for reference along with results for the AE-8 model. Interestingly, these measurements show that the AE-8 model results were exceeded every day during the 4-year mission.
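The exceedance statistic plotted in Figure 21 is simply an empirical survival function: the fraction of mission days whose daily-averaged flux exceeded each threshold. A minimal sketch with synthetic data standing in for the TSX5 measurements:

```python
import numpy as np

rng = np.random.default_rng(2)
daily_flux = rng.lognormal(mean=6.0, sigma=2.0, size=1460)   # ~4 years of days

def exceedance_probability(fluxes, threshold):
    """P(daily-averaged flux > threshold), estimated from the sample."""
    return np.mean(fluxes > threshold)

for thr in (1e1, 1e2, 1e3, 1e4):
    print(f"P(flux > {thr:.0e}) = {exceedance_probability(daily_flux, thr):.3f}")
```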
Figure 21. Cumulative probability plot of > 1.2 MeV electron fluxes observed in the slot region during the 4-year TSX5 mission [Br04].

The statistical models discussed above give results for both an average flux and some indicator of the dispersion that can be used for determining a worst-case flux. Probabilistic approaches exist that focus only on worst-case scenarios. One such method is that of extreme value statistics; extreme value methods are discussed in section VB4. These methods have been used to study daily-averaged fluxes of > 2 MeV electrons measured by the GOES satellites. It has been estimated from about one solar cycle of data that the largest observed flux, on March 28, 1991 (8 x 10^4 cm^-2 s^-1 sr^-1), would be exceeded once every 20 years [Ko01]. Although this result in itself is of minimal use for radiation effects applications, the overall utility of such an approach for analyzing trapped electron flux variations is relatively unexplored.

The FLUx Model for Internal Charging (FLUMIC) [Wr00] software tool was developed as a worst-case daily flux model of the outer belt to be used with the deep dielectric charging model DICTAT. The model is based on data from several satellites in the > 0.2 to > 5.9 MeV range taken between 1987 and 1998. It uses fits to the most intense electron enhancements over this time period to account for properties such as energy spectra and solar cycle and seasonal dependence. The result is a model of the highest fluxes of penetrating electrons expected during a mission.

Another general approach to describing the trapped electron fluxes in the outer belt is to relate them to the level of disturbance of the geomagnetic field. There are several geomagnetic indices that could possibly be used as a basis for this. Brautigam, Gussenhoven and Mullen developed a quasi-static model of outer zone electrons ordered by a 15-day running average of the geomagnetic activity index, Ap [Br92]. The Ap index is an indicator of the general level of
global geomagnetic disturbance. The daily outer zone electron energy spectra during the CRRES mission were separated according to the Ap index and averaged, thus producing flux profiles based on geomagnetic activity. The result is the basis for the CRRESELE model. An example is shown in Figure 22 for 0.95 MeV differential electron fluxes for 6 levels of geomagnetic activity [Gu96]. It is seen that the flux changes are much larger for the smaller L-shell values shown. The current CRRESELE model, which is valid for solar maximum, features flux profiles for 6 levels of geomagnetic activity, an average profile, and a worst-case profile encountered during the mission. The binning procedure is sketched below.
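A minimal sketch of that binning procedure, using synthetic stand-ins for the daily Ap record and the CRRES spectra (the bin edges here are illustrative, not the model's actual Ap15 boundaries):

```python
import numpy as np

rng = np.random.default_rng(3)
n_days, n_energies = 420, 10                     # ~14-month mission, 10 energy bins
ap = rng.gamma(shape=2.0, scale=8.0, size=n_days)             # synthetic daily Ap
spectra = rng.lognormal(5.0, 1.0, size=(n_days, n_energies))  # daily flux spectra

ap15 = np.convolve(ap, np.ones(15) / 15.0, mode="same")       # 15-day running average

bin_edges = [0, 7, 15, 25, 40, 60, np.inf]                    # illustrative Ap15 bins
for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
    mask = (ap15 >= lo) & (ap15 < hi)
    if mask.any():
        profile = spectra[mask].mean(axis=0)                  # one flux profile per bin
        print(f"Ap15 in [{lo}, {hi}): {mask.sum():3d} days, "
              f"mean flux in lowest energy bin = {profile[0]:.2e}")
```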
Figure 22. Differential electron energy spectra centered at 0.95 MeV during the CRRES mission for 6 different 15-day running average values of the Ap geomagnetic index. As conditions become more disturbed, the fluxes increase [Gu96].

Spurred on by the CRRESELE model, the European Space Agency funded an effort to further develop models of outer zone electrons based on geomagnetic activity indices [Va96]. The CRRES data were used to train neural networks using the geomagnetic index Kp as input. This is another general indicator of the global geomagnetic disturbance, similar to the Ap index. Thirty networks were trained to estimate flux intensities at 5 energies and 6 L-values during the CRRES time period. A simulated database of electron flux intensities was subsequently generated dating back to 1932, when the Kp index was first tracked. The validity of using 14 months of data to generate a simulated catalog of 60+ years of fluxes in this manner is unknown. The goal of this effort was to use the simulated fluxes to develop improved models. Currently
there exists the ESA-SEE1 model that was developed from this effort. It represents an average flux map of trapped electrons during solar minimum and was intended as a replacement for AE-8 for this time period.

The initial version of the Particle ONERA-LANL Environment (POLE) model for the geostationary electron environment was developed in 2003 [Bo03]. It is based on 25 years (1976-2001) of Los Alamos satellite data and is the most detailed model available of trapped electron fluxes over the course of a solar cycle. It provides mean, worst-case and best-case fluxes with a time resolution of one year. The initial model covered the energy range of 30 keV to 2.5 MeV. A recent update has extended the upper energy range to 5.2 MeV and added three more years of data [Si06]. Figure 23 shows the evolution of the mean electron flux over about 2.5 solar cycles for the complete electron energy range of the satellite data. It is seen that the lower energies show relatively little variation with time, while the higher energies tend to reach their maximum flux during the declining phase of the solar cycle.
Figure 23. Time and energy dependence of the mean electron flux at geostationary altitudes over about 2.5 solar cycles [Si06].

It is interesting to see how these more recent models compare with the traditional AE-8 model for common orbits. Keep in mind that AE-8 is supposed to represent the average flux for the period and orbit of interest. Figure 24 is a comparison of the average electron fluxes as a function of energy for POLE, CRRESELE and AE-8 predictions for a geostationary orbit. Figure 25 is a similar comparison except that worst-case predictions are presented. Results for the FLUMIC model, a worst-case model, are also shown in Figure 25. It is seen that generally,
the predicted fluxes for AE-8 are rather high compared to the average fluxes predicted by the other models except at the very lowest and very highest energies. For the worst case predictions in Figure 25, there is a rather large spread in the results at low energies, but the predictions converge for energies beyond about 1 MeV.
Figure 24. Model comparisons for average electron fluxes of POLE and CRRESELE at geostationary altitudes to AE-8 [La05].
Figure 25. Model comparisons for worst case electron fluxes of POLE, CRRESELE and FLUMIC at geostationary altitudes to AE-8 [La05].

Finally, Figure 26 compares the POLE model at solar maximum and solar minimum with AE-8 for a geostationary orbit. There is no distinction between these two periods in the AE-8 model at geostationary altitudes. The POLE model shows little difference between solar maximum and solar minimum at low energies but shows higher fluxes during solar minimum at higher energies.
Figure 26. Comparison of the POLE model to AE-8 at geostationary altitudes for solar maximum and solar minimum conditions [La05].
It is seen that in geostationary orbit, the predictions of AE-8 are generally higher than the average flux predictions of more recent models. In fact, AE-8 is more similar to some of the worst-case flux models than to the average flux models. Other comparisons, for elliptical medium earth orbits (MEO), are shown in Lauenstein and Barth [La05].
IV. Galactic Cosmic Rays

A. General Characteristics
Galactic cosmic rays (GCR) are high-energy charged particles that originate outside of our solar system and are believed to be remnants from supernova explosions. Some general characteristics are listed in Table 3. They are composed mainly of hadrons, the abundances of which are listed in the table. A more detailed look at the relative abundances is shown in Figure 27. All naturally occurring elements in the Periodic Table (up through uranium) are present in GCR, although there is a steep drop-off for atomic numbers higher than iron (Z = 26). Energies can be as high as 10¹¹ GeV, although the acceleration mechanisms required to reach such high energies are not understood. Fluxes are generally a few cm⁻² s⁻¹ and vary with the solar cycle. Typical
GCR energy spectra for a few of the major elements during solar maximum and solar minimum are shown in Figure 28. It is seen that the spectra tend to peak around 1 GeV per nucleon. The flux of ions with energies less than about 10 GeV per nucleon is modulated by the magnetic field of the sun and solar wind. During the high-activity solar maximum period there is significantly more attenuation of the flux, resulting in the spectral shapes shown in Figure 28.

Table 3. Characteristics of Galactic Cosmic Rays.
  Hadron composition:  87% protons, 12% alphas, 1% heavier ions
  Energies:            up to 10¹¹ GeV
  Flux:                1 to 10 cm⁻² s⁻¹
  Radiation effects:   SEE
  Metric:              LET
Figure 27. Abundances of GCR up through Z = 28.
Figure 28. GCR energy spectra for protons, helium, oxygen and iron during solar maximum and solar minimum conditions [Ba96a].
SEE are the main radiation effects caused by GCR in microelectronics and photonics. The metric traditionally used to describe heavy ion induced SEE is linear energy transfer (LET). LET is the energy lost by the ionizing particle per unit path length in the sensitive volume. For SEE studies the LET is often divided by the material density, which is equivalent to expressing the path length as an areal density. The units of LET that are commonly used are then MeV·cm²/mg. For SEE analyses, energy spectra such as those shown in Figure 28 are often converted to LET spectra. Such integral LET spectra for solar maximum and solar minimum conditions are shown in Figure 29. These spectra include all elements from protons up through uranium. The ordinate gives the flux of particles that have an LET greater than the corresponding value shown on the abscissa. Given the dimensions of the sensitive volume, this allows the flux of particles that deposit a given amount of charge or greater to be calculated in a simple approximation.
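To make the last point concrete, the following is a minimal sketch (in Python) of that simple approximation. The silicon density and the ~22.5 MeV-per-pC conversion (3.6 eV per electron-hole pair in silicon) are standard values; the critical charge and collection depth in the example are illustrative assumptions, not values from this course.

```python
# Minimal sketch: relate a deposited-charge threshold to a critical LET
# for a silicon sensitive volume, per the simple approximation in the text.
# Parameter values in the example are illustrative assumptions.

RHO_SI = 2.32e3       # silicon density, mg/cm^3
MEV_PER_PC = 22.5     # ~3.6 eV per e-h pair in Si => 22.5 MeV deposits 1 pC

def critical_let(q_crit_pc, depth_um):
    """LET (MeV-cm^2/mg) needed to deposit q_crit_pc over depth_um of Si."""
    areal_density_mg_cm2 = RHO_SI * depth_um * 1e-4   # depth converted to cm
    return q_crit_pc * MEV_PER_PC / areal_density_mg_cm2

# Example: a (hypothetical) 50 fC critical charge collected over a 2 um depth
print(critical_let(0.05, 2.0))   # ~2.4 MeV-cm^2/mg
# A rate estimate then uses the integral flux above this LET from a
# spectrum like Figure 29, times the sensitive cross-sectional area.
```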
Figure 29. Integral LET spectra for GCR during solar maximum and solar minimum.
The LET spectra shown in Figure 29 are applicable to geosynchronous and interplanetary missions where there is no geomagnetic attenuation. The earth's magnetic field, however, provides significant protection. Due to the basic interaction of charged particles with a magnetic field, the charged particles tend to follow the geomagnetic field lines. Near the equator the field lines tend to be parallel to the earth's surface, so all but the most energetic ions are deflected away. In the polar regions the field lines tend to point toward the earth's surface, which allows much deeper penetration of the incident ions. The effect of the geomagnetic field on the incident GCR LET spectrum during solar minimum is discussed for various orbits in [Ba97].

B. Galactic Cosmic Ray Models
The original Cosmic Ray Effects in MicroElectronics (CREME) suite of programs of Adams [Ad87] was developed specifically for microelectronics applications. It turned out to be a very useful and popular tool and has been updated since then. CREME96 is the most recent version [Ty97] and uses the GCR model of Moscow State University (MSU) [Ny96a]. In principle the MSU model is similar in approach to a GCR model that was developed independently at NASA by Badhwar and O'Neill [Ba96a]. Both models are based on the diffusion-convection theory of solar modulation [Pa85]. This is used to describe the penetration of cosmic rays into the heliosphere from outside and their transport to near earth at 1 Astronomical Unit (AU). The solar modulation is used as a basis to describe the variation of GCR energy spectra over the solar cycle, as shown in Figure 28. However, the implementation of the solar modulation theory for the two models is different. The Badhwar and O'Neill model
estimates the modulation level from GCR measurements at 1 AU. Correlations to ground-based neutron monitor counting rates are then made to establish long-term predictive capability. The MSU model is not as direct but uses multi-parameter fits to ultimately relate solar cycle variations in GCR intensity to observed sunspot numbers. Comparisons of the GCR proton and alpha particle spectra of the two models above plus that used in the QinetiQ Atmospheric Radiation Model (QARM) show discrepancies among all three models for narrow time ranges [Le06]. Examples of this are shown in Figure 30 for protons. This is not surprising considering the details of the solar modulation implementation are different. However, similar predictions are seen for the total fluence over the course of a solar cycle.
Figure 30. GCR proton energy spectra predicted by the MSU, Badhwar and O’Neill, and QARM models for two different dates [Le06]. The recent high-quality measurements of GCR heavy ion energy spectra taken on the ACE satellite make possible a stringent test of the GCR models. Comparisons of model results and the ACE data for the 1997 solar minimum period are shown in Figure 31 for 4 of the major elements in the energy range of about 50 to a few hundred MeV per nucleon. It is seen that both models yield good results for heavy ions. Over the range of data shown, the NASA model of
Badhwar and O’Neill tends to have a more accurate spectral shape while the MSU model tends to show a smaller root-mean-square deviation from the data.
Figure 31. Comparison of the NASA model of Badhwar and O'Neill and the MSU model to measurements made with instrumentation onboard the ACE satellite during 1997 [Da01].
A recent development led by the California Institute of Technology is to use a transport model of GCR through the galaxy preceding the penetration and subsequent transport in the heliosphere [Da01]. During the initial propagation of GCR through the galaxy, use is made of knowledge of the astrophysical processes that determine the composition and energy spectra of GCR. Comparisons of the fitted model spectra to the ACE satellite measurements are shown in Figure 32. The elements C and Fe are GCR primaries while B, Sc, Ti and V are GCR secondaries produced by fragmentation of primaries on interstellar H and He. The goal of this new approach is to provide an improved description of GCR composition and energy spectra throughout the solar cycle.
Figure 32. The new approach of the California Institute of Technology to describe GCR energy spectra compared to the ACE data during 1997 [Da01].
V. Solar Particle Events

A. General Characteristics
It is believed that there are 2 categories of solar particle events and that each one accelerates particles in a distinct manner. Solar flares result when the localized energy storage in the coronal magnetic field becomes too great and causes a burst of energy to be released. They tend to be electron rich, last for hours, and have an unusually high ³He content relative to ⁴He. A Coronal Mass Ejection (CME), on the other hand, is a large eruption of plasma (a gas of free ions and electrons) that drives a shock wave outward and accelerates particles. CMEs tend to be proton rich, last for days, and have a small ³He content relative to ⁴He. A review article by Reames gives a detailed account of the many observed differences between solar flares and CMEs [Re99].
CMEs are the type of solar particle events that are responsible for the major disturbances in interplanetary space and for the major geomagnetic disturbances at earth when they impact the magnetosphere. The total mass of ejected plasma in a CME is generally around 10¹⁵ to 10¹⁷ grams. Its speed can vary from about 50 to 1200 km/s with an average of around 400
km/s. It can take anywhere from about 12 hours to a few days to reach the earth. Table 4 lists some further general characteristics of CMEs.

Table 4. Characteristics of CMEs.
  Hadron composition:                  96.4% protons, 3.5% alphas, ~0.1% heavier ions
  Energies:                            up to ~GeV/nucleon
  Integral fluence (>10 MeV/nucleon):  >10⁹ cm⁻²
  Peak flux (>10 MeV/nucleon):         >10⁵ cm⁻² s⁻¹
  Radiation effects:                   TID, DD, SEE
All naturally occurring chemical elements ranging from protons to uranium are present in solar particle events. They can cause permanent damage such as TID and DD that is due mainly to the proton and possibly the alpha particle component. Although the heavy ion content is a small percentage, it cannot be ignored. Heavy ions, as well as protons and alpha particles in solar particle events, can cause both transient and permanent SEE. Figures 33 and 34 illustrate the periodic yet statistical nature of solar particle events. They are plots of the daily solar proton fluences measured by the IMP-8 and GOES series of spacecraft over an approximately 28-year period. Figure 33 shows > 0.88 MeV fluences while Figure 34 shows > 92.5 MeV fluences. The solar maximum and solar minimum time periods are shown in the figures to illustrate the dependence on solar cycle for both low-energy and high-energy protons.
Figure 33. Daily fluences of > 0.88 MeV protons due to solar particle events between approximately 1974 and 2002.
Figure 34. Daily fluences of > 92.5 MeV protons due to solar particle events between approximately 1974 and 2002.
The available solar particle data that cover the largest period of time are for protons. Since the available solar heavy ion data are not nearly as extensive, solar proton models and solar heavy ion models will be discussed separately.

B. Solar Proton Models
Sections B1 – B5 describe the application of probabilistic models to solar proton event data, including the origin of the models. This is done in a sequence that emphasizes the construction, starting from the basics, of a set of tools that are useful to the design engineer. Section B1 describes the distribution of event magnitudes. B2 and B3 describe modeling cumulative fluences over the course of a mission. B4 discusses worst-case events during a mission. Finally, B5 describes a model that has implications for the energy release and predictability of events. It indicates a potential new direction toward a physically based model for solar proton events.

1. The Maximum Entropy Principle and the Distribution of Solar Proton Event Magnitudes

Given that the occurrence of solar particle events is a stochastic phenomenon, it is important to accurately model the distribution of event magnitudes. However, in general it can be rather difficult to select a probability distribution when the data are limited. There have been a number of empirical assumptions that the event magnitudes can be represented by certain distributions. For example, lognormal distributions [Ki74], [Fe91] and
power function distributions [Ga96], [Ny99] have been used. The lognormal distribution describes the large events well but underestimates the probability of smaller events. On the other hand, power functions describe the smaller events well but overestimate the probability of larger events. This section describes a method for making arguably the best selection of a probability distribution for a limited set of data that is compatible with known information about the distribution.
The Maximum Entropy Principle was developed by E.T. Jaynes [Ja57] using the concept of entropy originated by Shannon [Sh49]. Jaynes showed in his studies of statistical mechanics that the usual statistical distributions of the theory could be derived by what became known as the Maximum Entropy Principle. This led Jaynes to re-interpret statistical mechanics as a form of statistical inference rather than a physical theory. It established the principle as a procedure for making an optimal selection of a probability distribution when the data are incomplete. Entropy is defined mathematically the same way as in statistical mechanics, but for this purpose it is a measure of the probability distribution's uncertainty. The principle states that the distribution that should be selected is the one that maximizes the entropy subject to the constraints imposed by available information. This choice results in the least biased distribution in the face of missing information. Choosing the distribution with the greatest entropy avoids the arbitrary introduction or assumption of information that is not available. It can therefore be argued that this is the best choice that can be made using the available data. The probability distribution's entropy, S, is defined as [Ja57], [Ka89]

$$ S = -\int p(M)\,\ln[p(M)]\,dM \tag{1} $$
where p(M) is the probability density of the random variable M. For the case of solar particle event fluences, M is conveniently taken as the base 10 logarithm of the event fluence. A series of mathematical constraints are imposed upon the distribution, drawing from known information. In this case the constraints are [Xa99]:

a) The distribution can be normalized.
b) The distribution has a well-defined mean.
c) The distribution has a known lower limit in the event fluence. This may correspond to a detection threshold, for example.
d) The distribution is bounded and consequently infinitely large events are not possible.

The resulting system of equations is used along with equation (1) to find the solution p(M) that maximizes S. This has been worked out for many situations [Ka89] and can also be solved using the Lagrange multiplier technique [Tr61]. Using this procedure the following result for solar proton event fluences has been obtained for the solar maximum time period [Xa99]:
$$ N = N_{tot}\,\frac{\phi^{-b} - \phi_{max}^{-b}}{\phi_{min}^{-b} - \phi_{max}^{-b}} \tag{2} $$
where N is the number of events per solar maximum year having a fluence greater than or equal to φ, N_tot is the total number of events per solar maximum year having a fluence greater
than or equal to φ_min, −b is the index of the power function, and φ_max is the maximum event fluence. Equation (2) is a truncated power function in the event fluence. It behaves like a power function with an index of −b for φ << φ_max and goes smoothly to zero at the upper limit φ_max. Figure 35 shows > 30 MeV solar proton event data compared to the best fit to equation (2). The data are from the 21 solar maximum years during solar cycles 20 – 22. It is seen that the probability distribution derived from the maximum entropy principle describes the data quite well over its entire range. This strong agreement indicates that this probability distribution captures the essential features of a solar proton event magnitude distribution. It is a power function for small event sizes and falls off rapidly for very large events. The interpretation of the maximum fluence parameter φ_max is interesting in itself and will be discussed further in section B4.
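As a quick numerical illustration, the sketch below evaluates equation (2). The value φ_max = 1.3 × 10¹⁰ cm⁻² is the >30 MeV figure quoted in section B4; N_tot, b and φ_min are placeholder values, not the fitted parameters of [Xa99].

```python
import numpy as np

def n_events(phi, n_tot, b, phi_min, phi_max):
    """Events per solar maximum year with fluence >= phi, equation (2)."""
    phi = np.asarray(phi, dtype=float)
    return n_tot * (phi**-b - phi_max**-b) / (phi_min**-b - phi_max**-b)

# Placeholder parameters; phi_max is the >30 MeV value given in section B4.
phi = np.logspace(7, 10, 4)   # >30 MeV event fluences, cm^-2
print(n_events(phi, n_tot=10.0, b=0.4, phi_min=1e7, phi_max=1.3e10))
# Behaves as a power function for phi << phi_max and goes to zero at phi_max.
```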
Figure 35. Comparison of the maximum entropy theory result for the distribution to 3 solar cycles of data during solar maximum [Xa99].
2. Cumulative Fluence During Solar Maximum

During a space mission the solar particle event fluence that accumulates during the solar maximum time period is often the dominant contribution to the total fluence. Thus, much prior work focuses on this period of the solar cycle. A solar cycle typically lasts about 11 years. A commonly used definition of the solar maximum period is the 7-year period that spans a starting
point 2.5 years before and an ending point 4.5 years after a time defined by the maximum sunspot number in the cycle [Fe93]. The remainder of the cycle is considered solar minimum. Once the initial or underlying distribution of event sizes during solar maximum, such as that shown in Figure 35, is known, it can be used to determine the accumulated fluence for a period of time during solar maximum. Due to the stochastic nature of the events, confidence level approaches are often used so that risk-cost-performance tradeoffs can be evaluated by the designer. The first such model was based on King's analysis of >10 to >100 MeV protons during solar cycle 20 [Ki74], [St74]. One "anomalously large" event, the well-known August 1972 event, dominated the fluence of this cycle, so the model predicts the number of such events expected for a given mission length at a specified confidence level. Using additional data, a model from JPL emerged in which Feynman et al. showed that the magnitude distribution of solar proton events during solar maximum is actually a continuous distribution between small events and the extremely large August 1972 event [Fe90]. Under the assumptions that this underlying distribution can be approximated by a lognormal distribution and that the occurrence of events is a Poisson process, the JPL Model uses Monte Carlo simulations to calculate the cumulative fluence during a mission at a given confidence level [Fe90], [Fe93]. An example of this is shown in Figure 36 for > 30 MeV protons. Thus, according to this model, there is approximately a 10% probability of exceeding a proton fluence of 10¹⁰ cm⁻² for a 3-year period during solar maximum. This corresponds to a 90% confidence level that this fluence will not be exceeded.
Figure 36. JPL91 solar proton fluence model for > 30 MeV protons. The misprint of x-axis units has been corrected from the original reference [Fe93].
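The following is a minimal sketch of the Monte Carlo logic behind such curves: Poisson-distributed event counts combined with lognormally distributed event fluences, summed per simulated mission. The event rate and lognormal parameters here are placeholders, not the fitted JPL91 values.

```python
import numpy as np

rng = np.random.default_rng(2006)

def mission_fluences(t_years, events_per_year, mu_log10, sigma_log10,
                     n_trials=100_000):
    """Simulated cumulative mission fluences: Poisson event counts,
    lognormal (base-10) event magnitudes, as in the JPL approach."""
    counts = rng.poisson(events_per_year * t_years, size=n_trials)
    totals = np.zeros(n_trials)
    for i, n in enumerate(counts):
        if n:
            totals[i] = np.sum(10.0 ** rng.normal(mu_log10, sigma_log10, n))
    return totals

# Placeholder parameters for a 3-year solar maximum mission:
f = mission_fluences(3, events_per_year=6.0, mu_log10=8.0, sigma_log10=0.9)
print(np.percentile(f, 90))   # fluence not exceeded at 90% confidence
```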
More recently, several different techniques have been used to demonstrate that the cumulative fluence distribution during solar maximum is consistent with a lognormal distribution for periods of time up to at least 7 years [Xa00]. This was shown using the Maximum Entropy Principle, bootstrap-like methods [Ef93] and Monte Carlo simulations using the initial distribution shown in Figure 35. Thus the cumulative fluence distribution is known once the parameters of the lognormal distribution are determined. These parameters depend on the proton energy range and the mission duration. They have been determined from the available satellite data and well-known relations for Poisson processes. Figure 37 shows examples of the annual proton fluences for >1, >10 and >100 MeV protons plotted on lognormal probability paper. This paper is constructed so that if a distribution is lognormal, it will appear as a straight line. It further illustrates that the cumulative fluences are well described by lognormal distributions. The fitted data can also be used to determine the lognormal parameters for different periods of time and are used in the ESP Model [Xa99a].
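A sketch of how such a fitted lognormal is then used: once the base-10 mean and sigma are known for the energy threshold and mission duration of interest, the fluence at a given confidence level follows directly from the normal quantile. The parameter values below are placeholders, not the ESP fits.

```python
from statistics import NormalDist

def fluence_at_confidence(mu_log10, sigma_log10, confidence):
    """Cumulative fluence (cm^-2) not exceeded at the given confidence,
    assuming log10(fluence) is normal with the given mean and sigma."""
    z = NormalDist().inv_cdf(confidence)
    return 10.0 ** (mu_log10 + z * sigma_log10)

# Placeholder parameters for one energy threshold and mission length:
print(fluence_at_confidence(mu_log10=9.3, sigma_log10=0.6, confidence=0.90))
```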
Figure 37. Cumulative annual solar proton event fluences during solar maximum periods for 3 solar cycles plotted on lognormal probability paper. The straight lines are results for the ESP model [Xa00]. Figure 38 shows a representative comparison of the models discussed above. In addition it shows an update of the ESP Model, called PSYCHIC [Xa04], in which the data were extended to cover the time period from 1966 to 2001 and the proton energy range extended to over 300
MeV. Results shown are for the 90% confidence level and for a mission length of two solar maximum years. In all cases the energy range shown corresponds to the data range on which the statistical models are based, i.e. no extrapolations are used. Thus, the model differences seen are an indicator of model uncertainties. The spectral shape for the King Model is based on the August 1972 event and is therefore somewhat different than the other model results. The JPL91, ESP, and PSYCHIC models all agree reasonably well for their common 1 to 60 MeV energy range. Note that extrapolation of the JPL91 Model beyond 60 MeV results in an overestimate of the mission fluence. A significant advantage of the PSYCHIC model is its broad energy range and incorporation of several sources of satellite data.
Figure 38. Comparison of different models of cumulative solar proton event fluence during solar maximum for a 2 year period and the 90% confidence level [Xa04].
3. Cumulative Fluence During Solar Minimum

It has often been assumed that the solar particle event fluence during the solar minimum time period can be neglected. However, for missions that are planned mostly or entirely during solar minimum it is useful to have guidelines for solar particle event exposures, especially considering the current frequent use of COTS microelectronics, which can exhibit rather low total dose failure levels.
Due to the relative lack of events during solar minimum, confidence level based models are difficult to construct for this period. However, recent solar minimum time periods have been analyzed to obtain 3 average solar proton flux levels that allow varying degrees of conservatism to be used [Xa04]. These flux levels are included in the PSYCHIC model and are shown in Figure 39. First there is the average flux vs. energy spectrum over all 3 solar minimum periods that occurred between 1966 and 2001. A more conservative choice is the highest flux level of the 3 periods, or "worst solar minimum period". Finally, the most conservative choice is the "worst solar minimum year". This corresponds to the highest flux level over a one-year solar minimum time period. It is the one-year interval beginning April 23, 1985 and ending April 22, 1986. Once the choice of a flux-energy spectrum is made, the cumulative fluence-energy spectrum is calculated using the mission time period during solar minimum.
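That final step is simple arithmetic; a sketch, with a hypothetical flux value (the actual PSYCHIC spectra are those of Figure 39):

```python
SECONDS_PER_YEAR = 3.156e7

def solar_min_fluence(avg_flux_cm2_s, years_at_solar_min):
    """Cumulative fluence from a chosen solar minimum average flux level."""
    return avg_flux_cm2_s * years_at_solar_min * SECONDS_PER_YEAR

# Hypothetical example: an average >30 MeV flux of 0.05 cm^-2 s^-1
# held for 2 mission years during solar minimum:
print(solar_min_fluence(0.05, 2.0))   # ~3.2e6 cm^-2
```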
Figure 39. Solar proton flux vs. energy spectra for the 3 solar minimum model spectra in the PSYCHIC model. Also shown for comparison purposes is the average proton flux during solar maximum [Xa04]. For comparison purposes, Figure 39 also shows the average solar proton flux during solar maximum for the time period 1966 to 2001. It can be concluded that during the solar minimum time period the event frequencies are generally lower, event magnitudes are generally smaller and the energy spectra are generally softer. Physically this is consistent with the fact that the sun is in a less disturbed state during solar minimum.
4. Extreme Value Theory and Worst Case Events

An important consideration for spacecraft designers is the worst-case solar particle event that occurs during a mission. One approach is to design to a well-known large event such as the one that occurred in October 1989 [Ty97], or a hypothetical one such as a composite of the February 1956 and August 1972 events [An94]. Energy spectra of some of the most severe solar proton events during solar cycles 19-22 are shown in Figure 40. In addition, there are event classification schemes in which the magnitudes range from "small" to "extremely large" that can be helpful for design purposes [St96], [Ny96].
Figure 40. Some of the most severe solar proton event energy spectra in solar cycles 19-22 [Wi99]. However, more useful information can be provided to the designer if a confidence level associated with the worst case event is known for a given mission length. The designer can then more systematically balance risk-cost-performance tradeoffs for the mission in a manner similar to what is done for cumulative fluences. Once the initial probability distribution such as that shown in Figure 35 is determined it becomes possible to construct such a statistical model using extreme value theory. In the usual central value statistics, the distribution for a random variable is characterized by its mean value and a dispersion indicator such as the standard deviation. Extreme value statistics, pioneered by Gumbel [Gu58], focuses on the largest or smallest values taken on by the distribution. Thus, the “tails” of the distribution are the most significant. For the present
applications the concern is with the largest values. An abbreviated description of a few useful relations from extreme value theory is given here. Further detail can be found elsewhere [Gu58], [An85], [Ca88]. Suppose that a random variable, x, is described by a probability density p(x) and corresponding cumulative distribution P(x). These are referred to as the “initial” distributions. If a number of observations, n, are made of this random variable, there will be a largest value within the n observations. The largest value is also a random variable and therefore has its own probability distribution. This is called the extreme value distribution of largest or maximum values. These probability distributions can be calculated exactly. The probability density is
$$ f_{max}(x;n) = n\,[P(x)]^{n-1}\,p(x) \tag{3} $$

and the cumulative distribution is

$$ F_{max}(x;n) = [P(x)]^{n} \tag{4} $$
An example of the characteristics of such a distribution is shown in Fig. 41 for n-values of 10 and 100 compared to the initial distribution (n = 1), taken to be Gaussian. Note that as the number of observations increases, the distributions become more highly peaked and skewed to the right.
Figure 41. Extreme value distributions for n-values of 10 and 100 compared to the initial Gaussian distribution [Bu88].
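Equations (3) and (4) are easy to evaluate directly. The short sketch below reproduces the qualitative behavior of Figure 41 for a standard Gaussian initial distribution, an assumption made only for illustration:

```python
from statistics import NormalDist

nd = NormalDist()   # standard Gaussian initial distribution, as in Figure 41

def extreme_pdf(x, n):
    """Density of the largest of n observations, equation (3)."""
    return n * nd.cdf(x) ** (n - 1) * nd.pdf(x)

def extreme_cdf(x, n):
    """Cumulative distribution of the largest of n observations, eq. (4)."""
    return nd.cdf(x) ** n

# The peak of the density moves right and sharpens as n grows:
grid = [0.05 * k - 3.0 for k in range(161)]   # x from -3 to 5
for n in (1, 10, 100):
    mode = max(grid, key=lambda x: extreme_pdf(x, n))
    print(n, round(mode, 2), round(extreme_pdf(mode, n), 3))
```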
As n becomes large, the exact distribution of extremes may approach a limiting form called the asymptotic extreme value distribution. If the form of the initial distribution is not known but sufficient experimental data are available, the data can be used to derive the asymptotic extreme value distribution by graphical or other methods. For practical applications there are 3 asymptotic extreme value distributions of maximum values – the type I (or Gumbel), type II and type III distributions. Examples of extreme value modeling of environmental phenomena such as floods, wave heights, earthquakes and wind speeds can be found in a number of places [Gu58], [An85], [Ca88]. This modeling was first applied to radiation effects problems by Vail, Burke and Raymond in a study of high density memories [Va83]. It has turned out to be a very useful tool for studying the response of large device arrays to radiation. One reason is that the array of devices will fail over a range of radiation exposures and it is important to determine at what point the first failure is likely to occur. Other radiation effects applications have been found for arrays of gate oxides [Va84], [Xa96], sensor arrays [Bu88], [Ma89] and EPROMs [Mc00].
For the application to solar particle events the interest is in the worst-case event that will occur over a period of T solar maximum years. Since the number of events that can occur over this period is variable, the expression for the extreme value distribution must take this into account. Assuming that event occurrence is a Poisson process [Fe93], it can be shown that the cumulative, worst case distribution for T solar maximum years is [Xa98a]

$$ F_{max}(M;T) = \exp\{-N_{tot}\,T\,[1 - P(M)]\} \tag{5} $$
where P(M) is the initial cumulative distribution, which is closely related to equation (2) [Xa99]. Figure 42 shows results for worst-case event fluences for mission lengths of 1, 3, 5 and 10 solar maximum years. The ordinate represents the probability that the worst-case event encountered during a mission will exceed the > 30 MeV proton fluence shown on the abscissa. Also shown in the figure by the vertical line denoted "Design Limit" is the maximum event fluence parameter, φ_max. As will be discussed next, this parameter can be used as an upper limit guideline. Results analogous to these have also been obtained for peak solar proton fluxes during events [Xa98], which are very relevant for SEE. The event fluence magnitudes are discussed here because of the interesting comparison that can be made with historical data to help validate the model.
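A sketch of how equation (5) is applied: combining it with the initial distribution of equation (2) gives the probability that the worst event in T solar maximum years exceeds a given fluence. φ_max is the >30 MeV value discussed below; the other parameters are placeholders, not the published fits.

```python
from math import exp

def worst_case_exceedance(phi, t_years, n_tot, b, phi_min, phi_max):
    """Probability that the worst event in t_years of solar maximum exceeds
    fluence phi: 1 - F_max from equation (5), with 1 - P(M) taken from the
    truncated power function of equation (2)."""
    tail = (phi**-b - phi_max**-b) / (phi_min**-b - phi_max**-b)  # = N/N_tot
    return 1.0 - exp(-n_tot * t_years * tail)

# Placeholder n_tot, b, phi_min; phi_max from the text (>30 MeV protons):
for t in (1, 3, 5, 10):
    print(t, worst_case_exceedance(1e10, t, n_tot=10.0, b=0.4,
                                   phi_min=1e7, phi_max=1.3e10))
```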
Figure 42. Probability model for worst-case event fluences expected during the indicated time periods during solar maximum [Xa99].
A unique feature of this model is the upper limit parameter for a solar proton event fluence, φ_max. For the case of > 30 MeV protons this turns out to be 1.3 × 10¹⁰ cm⁻². However, this is a fitted parameter that was determined from limited data, so some amount of uncertainty must be associated with it. Thus, it should not be interpreted as an absolute upper limit. One method of estimating its uncertainty is the parametric "bootstrap" technique [Ef93]. This method attempts to assess the uncertainty of the parameter due to the limited nature of the data. The idea is to randomly select event fluences according to the distribution given by equation (2) until the number of events in the observed data set has been simulated. Equation (2) is then fitted to the simulated data, and the parameters extracted. The procedure is repeated, and each time the parameters have different values. After a number of simulations, the standard deviation of the parameter of interest can be determined. This technique showed the upper limit parameter plus one standard deviation equaled 3.0 × 10¹⁰ cm⁻² [Xa99].
A reasonable interpretation of the upper limit fluence parameter is that it is the best value that can be determined for the largest possible event fluence, given limited data. It is not an absolute upper limit but is a practical and objectively determined guideline for use in limiting design costs. Constraints on the upper limit of solar proton event sizes can be put on models as a result of studies of historical-type evidence. Relatively small fluctuations of ¹⁴C observed in tree rings over a long period of time [Li80] and measured radioactivity in lunar rocks brought back during the Apollo missions [Re97] are consistent with the upper limit parameter but are not especially restrictive. The strictest constraint to date comes from analysis of approximately 400 years of
the nitrate record in polar ice cores [Mc01]. The largest event reported was estimated to be 1.9 × 10¹⁰ cm⁻² for > 30 MeV protons. This was the Carrington event that occurred in September 1859. Fig. 43 shows a bar graph of the upper limit parameter, φ_max, for > 30 MeV protons including the one standard deviation uncertainty that was estimated from the parametric bootstrap method. This is compared with the reported value for the Carrington event. It is seen that the two values agree well within the estimated uncertainty. Also shown for reference is the value for the October 1989 solar particle event that is commonly used as a worst-case event.
Figure 43. Comparison of the > 30 MeV solar proton event fluences of the October 1989 event, the 1859 Carrington event as determined from ice core analysis [Mc01], and the model upper limit parameter plus one standard deviation shown by the error bar [Xa99].
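A sketch of the parametric bootstrap idea applied to φ_max (all parameters and the fitting recipe here are illustrative; the published fit and uncertainty are those of [Xa99]): draw synthetic event sets from equation (2), refit equation (2) to each, and examine the spread of the refitted upper limit.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)
PHI_MIN = 1e7   # assumed detection threshold, cm^-2

def sample_events(n, b, phi_max):
    """Draw n event fluences from equation (2) by inverting its CDF."""
    u = rng.random(n)
    return (PHI_MIN**-b - u * (PHI_MIN**-b - phi_max**-b)) ** (-1.0 / b)

def log10_n(phi, n_tot, b, phi_max):
    """log10 of equation (2), clipped for numerical safety during fitting."""
    b, phi_max = abs(b), abs(phi_max)
    val = n_tot * (phi**-b - phi_max**-b) / (PHI_MIN**-b - phi_max**-b)
    return np.log10(np.clip(val, 1e-12, None))

estimates = []
for _ in range(100):                       # bootstrap replicates
    data = np.sort(sample_events(150, b=0.4, phi_max=1.3e10))
    n_ge = np.arange(len(data), 0, -1.0)   # number of events >= each fluence
    popt, _ = curve_fit(log10_n, data[:-1], np.log10(n_ge[:-1]),
                        p0=[150.0, 0.4, 2e10], maxfev=20000)
    estimates.append(abs(popt[2]))
print(np.std(estimates))   # one-standard-deviation spread of fitted phi_max
```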
5. Self-Organized Criticality and the Nature of the Energy Release Process

Organizations such as NASA, ESA and others have put substantial resources into studies of the sun's properties as related to the occurrence of solar particle events. One of the main goals is to find a reliable predictor of events. Despite this significant international effort, solar particle events can occur suddenly and without obvious warning. In addition to potential problems with electronic systems and instrumentation, this is an especially serious concern for new space initiatives that plan to send manned spacecraft to the moon, Mars or interplanetary space. Thus, there is strong motivation to develop predictive methods for solar particle events. It is hoped that the apparent stochastic character can be overcome and predictability achieved if precursor phenomena such as x-ray flares or magnetic topology signatures can be properly interpreted or if the underlying mechanisms are identified. This section discusses the very basic
question of whether the nature of the energy release process for solar particle events is deterministic or stochastic. In other words, is it possible to predict the time of occurrence and magnitude of solar particle events, or are probabilistic methods necessary?
The self-organized criticality (SOC) model is a phenomenological model originated by Bak, Tang and Wiesenfeld [Ba87] that can give insight into the basic nature of a system. It postulates that a slow continuous build-up of energy in a large interactive system causes the system to evolve to a critical state. A minor, localized disturbance can then start an energy-releasing chain reaction. Chain reactions, and therefore energy-releasing events of all sizes, are an integral part of the dynamics, leading to a "scale invariant" property for event sizes. This scale invariance results in power function distributions for the density functions of event magnitudes and waiting times between events. As a result of this basic nature it is generally assumed in the literature that accurate predictions of the magnitude and time of occurrence of such events are not possible. A system in a SOC state is therefore generally assumed to be probabilistic in nature. Applications for the theory of SOC have been found in natural phenomena such as earthquakes, avalanches and rainfall.
A useful conceptual aid is the sandpile. If sand is dropped one grain at a time to form a pile, the pile soon becomes large enough that grains may slide down it, thus releasing energy. Eventually the slope of the pile is steep enough that the amount of sand added is balanced, on average, by the amount that slides down the pile. The system is then in the critical state. As single grains of sand are subsequently added, a broad range of consequences is possible. Nothing may happen, or an avalanche of any size up to a "catastrophic" one may occur. The dynamics of this interactive system do not allow accurate predictions of when an avalanche will occur or how large it will be.
It has recently been shown that the energy release due to solar particle events is consistent with the dynamics of a SOC system [Xa06]. This was based on three analyses of 28 years of solar proton data taken by the IMP-8 and GOES series of satellites. The first is rescaled range (R/S) analysis, a method used to determine if events show long-term correlation. The second is a demonstration of fractal properties of event sizes, which suggests "scale invariant" behavior. The third is an analysis of the integral distribution of fluence magnitudes, which is shown to be a power function. These are hallmark features of systems that exhibit self-organized criticality.
a) Rescaled Range Analysis

Rescaled range (R/S) analysis, originated by Hurst [Hu65], is a method that indicates whether or not events show long-term correlation. The original goal of Hurst was to provide a basis for estimating the optimum size of water storage reservoirs. An optimum size was taken as a reservoir that never ran dry or overflowed. The analysis was based on a history of floods and droughts in the region of interest over a period of many years. For a period of years beginning at time t the cumulative input to the reservoir is

$$ Y_{t+\tau} = \sum_{i=t}^{t+\tau} X_i \tag{6} $$

where the X_i are the observed inputs for a given time interval, i.e. the daily or monthly input. The cumulative deviation for the total observation period of τ years is then

$$ \Delta Y_{t+\tau} = \sum_{i=t}^{t+\tau} \left( X_i - \overline{Y}_{t+\tau} \right) \tag{7} $$

where $\overline{Y}_{t+\tau}$ is the mean value of the stochastic quantity X_i. Thus, the cumulative deviation represents the difference between the actual cumulative input to the reservoir at a given time and a cumulative calculation based on the average inflow over the total time period of interest. This analysis permits identification of the maximum cumulative input and the value of the minimum cumulative store, thereby enabling identification of the optimum size of the reservoir. The difference between the maximum and minimum values is customarily identified as the range. In order to compare results for different rivers, Hurst rescaled the range by dividing it by the standard deviation of the inputs over the period of the record, τ. It turns out that this rescaled range is given by

$$ R/S = a\,\tau^{H} \tag{8} $$
where a and H are constants [Pe02]. The latter constant is called the Hurst coefficient. It is known that if the inputs are completely random and uncorrelated, the rescaled range should vary as the square root of the elapsed time, i.e. H would equal 0.5. Contrary to this expectation, Hurst found that the rescaled range varied as the 0.7 to 0.8 power of the elapsed time, indicating that the events showed long-term correlation. He found that many other natural phenomena such as rainfall, temperatures, pressures and sunspot numbers had power indices in the same range.
In Figure 44 a plot analogous to that used by Hurst to describe flood and drought periods is shown for solar proton daily fluences for the year 1989. The quantity shown on the ordinate is the cumulative deviation expressed in equation (7) and can also be termed the net proton fluence. It is the analog of the reservoir level in Hurst's analysis. A negative slope on this plot indicates a lack of solar proton events (a "solar proton drought"). When an event occurs there is a rapid increase in the net proton fluence, producing the jagged appearance of the plot. This is indicative that there is a build-up of energy with time that is released in bursts.
Figure 44. Cumulative deviation plot of daily solar proton fluences in 1989 [Xa06].
The difference between the maximum and minimum values in Figure 44 is conventionally referred to as the range. When divided by the standard deviation it is the rescaled range. To carry out a complete R/S analysis, a number of samples covering different time periods in the total record are used to determine a series of rescaled range values. When R/S values are amenable to this analysis, they yield a straight line when plotted as a function of the period on a log-log scale. As seen in Figure 45, the solar proton data are well described by rescaled range analysis. The power index, H, has been determined using equation (8) to obtain a result of 0.70 [Xa06]. This value is typical of other natural phenomena and indicates long-term correlation between solar particle events. This can be interpreted as a consequence of the fact that the amount of energy stored in the system, i.e. the sun's corona, is dependent on the system's past history.
Figure 45. Rescaled range analysis of > 0.88 MeV protons for 1989 [Xa06].
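A compact sketch of the R/S procedure of equations (6)-(8): compute the range of cumulative deviations over windows of increasing length, rescale by the standard deviation, and take the log-log slope as H. The window sizes and synthetic input are illustrative assumptions; applied to uncorrelated noise the estimate should come out near 0.5, in contrast to the 0.70 found for the solar proton record.

```python
import numpy as np

def rescaled_range(x):
    """R/S for one record: range of the cumulative deviations from the mean
    (equation (7)), divided by the standard deviation of the inputs."""
    y = np.cumsum(x - x.mean())
    return (y.max() - y.min()) / x.std()

def hurst(x, windows=(16, 32, 64, 128, 256, 512)):
    """Hurst coefficient H: slope of log(R/S) vs. log(window), equation (8)."""
    rs = [np.mean([rescaled_range(x[i:i + w])
                   for i in range(0, len(x) - w + 1, w)]) for w in windows]
    slope, _ = np.polyfit(np.log(windows), np.log(rs), 1)
    return slope

rng = np.random.default_rng(1)
print(hurst(rng.random(4096)))   # uncorrelated input: H near 0.5
```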
b) Fractal Behavior

A significant feature of a system in a SOC state is that when its features are viewed on a different scale the character of the appearance does not change. This is closely related to Mandelbrot's concept of fractal geometry [Ma83], a formulation of the complexity of patterns observed in nature, which tend to have similar features regardless of the scale on which they are viewed. Well-known examples are coastlines, snowflakes and galaxy clusters. Figure 46 shows the net proton fluence computed from monthly fluences, for comparison with Figure 44, which is based on daily fluences. If the axis units were not visible it would not be possible to distinguish the 2 figures. For this reason processes of this type have been described in the literature by terms such as "scale invariant", "self-similar" and "fractal" [Ba96], [Je98], [Sc91]. This scale invariance is further evidence of a SOC system, and suggests the possibility of power function behavior in the fluence magnitudes. In fact, it has been suggested that a fractal can be thought of as a snapshot of a SOC process [Ba91].
Figure 46. Cumulative deviation plot for > 0.88 MeV protons for the time period 1973 to 2001 [Xa06].
c) Power Function Distribution

A necessary characteristic of SOC phenomena is that the number density distribution of event magnitudes is a power function [Ba96], [Je98], [Pe02]. An integral distribution of monthly solar proton fluences for a 28-year period is shown in Figure 47. The ordinate represents the number of occurrences when the monthly fluence exceeds that shown on the abscissa. It is seen that this distribution is a straight line on a semi-logarithmic plot that spans about 4 orders of magnitude. The number density function is [Xa06]

$$ \frac{dN}{d\Phi} = \frac{-29.4}{\Phi} \tag{9} $$

In this case the density function turns out to be exactly proportional to the reciprocal of the fluence. Integrating equation (9) from Φ up to the largest observed fluence gives an integral distribution that varies as the logarithm of Φ, which is why the data in Figure 47 fall on a straight line on a semi-logarithmic plot. Thus, the solar event data can be represented by a power function of a type commonly referred to as 1/f [Ba87]. It can therefore be viewed as 1/f noise, also known as flicker noise. It is well known that this type of noise results when the dynamics of a system are strongly influenced by past events. Additionally, it reinforces the results in section B5a. Thus, an especially compelling argument can be made that solar particle events are a SOC phenomenon [Xa06].
Figure 47. Integral distribution of monthly solar proton fluences > 1.15 MeV, from 1973 to 2001 [Xa06]. The general behavior of a SOC system is that of a non-equilibrium system driven by a slow continuous energy input that is released in sudden bursts with no typical size as indicated by the power function distribution shown in equation (9). Although research involving SOC is still a developing field and there is much yet to be learned about the sun’s dynamics [Lu93], [Bo99], [Ga03], these results strongly suggest that it is not possible to predict that a solar particle event of a given magnitude will occur at a given time. This also suggests a direction toward a more physically based model involving a description of the energy storage and release processes in the solar structure. It is possible that such a model could explain useful probabilistic trends such as why larger and more frequent solar proton events are observed to occur during the declining phase of the solar cycle compared to the rising phase [Sh95].
C. Solar Heavy Ion Models
Solar heavy ion models are generally not as advanced as solar proton models due to the large number of heavy ion species, which complicates measurements of individual species. For microelectronics applications, solar heavy ion models are needed primarily to assess SEE. In an attempt to model worst-case events, the original CREME model [Ad87] and subsequently the CHIME model [Ch94] scaled heavy ion abundances to protons for individual events. However, this assumption that the events with the highest proton fluxes should also be heavy ion rich turned out to be inconsistent with subsequent data [Re99] and led to worst-case event models that were too conservative [Mc94]. Modifications of the original CREME code were made in the MACREE model [Ma95] to define a less conservative worst-case solar particle event. MACREE
gives the option of using a model based on the measured proton and alpha particle spectra for the well-known October 1989 event and an abundance model that is 0.25 times the CREME abundances for atomic numbers Z > 2. A model that originated at JPL [Cr92] characterizes the distribution of 1 to 30 MeV per nucleon alpha particle event fluences using a lognormal distribution in order to assign confidence levels to the event magnitudes. The alpha particle data are based on measurements from the IMP-8 satellite for solar maximum years between 1973 and 1991. For ions heavier than Z = 2 an abundance model is used and the fluxes are scaled to the alpha particle flux for a given confidence level [Mc94].
The current version of the widely used CREME code, CREME96, uses the October 1989 event as a worst-case scenario. It provides 3 levels of solar particle intensity [Ty97]. These are the "worst week", "worst day" and "peak flux" models, which are based on proton measurements from the GOES-6 and -7 satellites and heavy ion measurements from the University of Chicago Cosmic Ray Telescope (CRT) on the IMP-8 satellite. The most extensive heavy ion measurements in the model are for C, O and Fe ions [Ty96]. It is noteworthy that the energy spectra of these 3 elements extend out to roughly 1 GeV per nucleon. The remaining elemental fluxes are determined from a combination of measurements limited to 1 or 2 energy bins and abundance ratios. Comparisons to the CREME96 worst case models have been made with data taken by the Cosmic Radiation Environment DOsimetry (CREDO) experiment onboard the Microelectronics and Photonics Test Bed (MPTB) between 2000 and 2002 [Dy02]. The data show that 3 major events during this time period approximately equaled the "worst day" model. An example of this is shown in Figure 48 for an event that occurred in November 2001.
Figure 48. Comparison of a solar heavy ion event that occurred in November 2001 with the CREME96 “worst day” model. The progression of daily intensities is indicated with the peak intensity occurring on day 2929 of the mission.
The above models can be used to calculate worst-case SEE rates induced by heavy ions. Another quantity of interest is the average SEE rate during a mission, which means that models for cumulative solar heavy ion fluence must be developed. Tylka et al. used a Monte Carlo procedure similar to the JPL91 solar proton model [Fe93] to predict cumulative fluences for certain elements during a mission at a specified confidence level [Ty97a]. This was done for 2 broad energy bins each for alpha particles, for the CNO group, and for Fe. It is based on the University of Chicago CRT data taken between 1973 and 1996. The new PSYCHIC model [Xa06a] is based on measurements of approximately 1 to 200 MeV per nucleon alpha particle data taken onboard the IMP-8 and GOES series of satellites between 1973 and 2001. For Z > 2 heavy ions the energy spectra and abundances relative to alpha particles are determined from measurements by the Solar Isotope Spectrometer (SIS) instrument on the ACE spacecraft for the major elements C, N, O, Ne, Mg, Si, S and Fe. These measurements were taken between 1997 and 2005. The remaining less prevalent elements are scaled according to an abundance model using the measured energy spectra of the major elements.
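The scaling step for the less prevalent elements is itself straightforward; a sketch with hypothetical, energy-independent abundance ratios (PSYCHIC's actual ratios are energy-dependent and derived from the ACE/SIS measurements of the major elements):

```python
# Hypothetical abundance ratios relative to alpha particles, for
# illustration only; they are not the PSYCHIC model values.
ABUNDANCE_VS_ALPHA = {"C": 2.5e-3, "O": 2.2e-3, "Fe": 1.5e-4}

def minor_element_flux(alpha_flux_cm2_s, element):
    """Scale a less prevalent element's flux from the alpha particle flux."""
    return ABUNDANCE_VS_ALPHA[element] * alpha_flux_cm2_s

print(minor_element_flux(1.0e2, "Fe"))   # for a 100 cm^-2 s^-1 alpha flux
```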
VI. Future Challenges
There are many future challenges in attempting to model the space radiation environment. First, there should be a goal to produce more dynamical and more physical models of the environment. The resulting increased understanding should allow more accurate projections to be made for future missions. For trapped particle radiations, this would mean initially developing descriptions or particle maps for the various climatological conditions that occur throughout the solar cycle, for the full range of particle energies and geomagnetic coordinates covered by the AP-8 and AE-8 models. Ultimately, it would mean developing an accurate description of the source and loss mechanisms of trapped particles, including the influence that magnetic storms have on the particle populations.
Galactic cosmic ray models are closely tied to solar activity levels, which modulate the fluxes of the incoming ions. Challenges for these models are to incorporate an improved description of the solar modulation potential and to develop cosmic ray transport models that incorporate knowledge of astrophysical processes. Solar particle events demonstrate a strongly statistical character. A major challenge for these models is to develop a description of the energy storage and release processes in the solar structure. This would provide a more detailed probabilistic view of the cyclical dependence of event frequencies and magnitudes.
Developing and implementing a strategy to deal with the radiation environment for manned and robotic space missions is critical for new interplanetary exploration initiatives. Getting astronauts safely to Mars and back will be the greatest exploration challenge of our lifetimes. It will involve planning and implementing strategies for the interplanetary radiation environment to an unprecedented degree. The lack of predictability of solar particle events underscores the importance of establishing a measurement system in the inner heliosphere for the early detection and warning of events [Xa06]. Once an event is detected, accurate predictions must be made of the transport process to the Earth, Mars and possibly beyond, so that properties such as time of arrival, duration, intensity and energy spectrum can be transmitted well ahead of the arrival time.
The current GCR models depend on knowing the solar activity levels in order to predict GCR fluxes. Thus, the lack of an established method for predicting future solar cycle activity is a
serious concern that must be addressed for new exploration initiatives. Especially disconcerting are the occasional large drops in solar activity from one cycle to the next as seen in Figure 1. This translates to a substantial increase in GCR flux from one cycle to the next, which would be a serious problem for long-term manned missions should the mission happen to occur during an unfavorable cycle. Thus, in spite of the recent progress that has been made in modeling the space radiation environment over the last 10 or so years, much work remains to be done.
VII. References

[Ad87] J.H. Adams, Jr., Cosmic Ray Effects on Microelectronics, Part IV, NRL Memorandum Report 5901, Naval Research Laboratory, Washington DC, Dec. 1987.
[An85] A.H-S. Ang and W.H. Tang, Probability Concepts in Engineering Planning and Design, Vol. II, Wiley, NY, 1985.
[An94] B.J. Anderson and R.E. Smith, Natural Orbital Environment Guidelines for Use in Aerospace Vehicle Development, NASA Technical Memorandum 4527, Marshall Space Flight Center, Alabama, June 1994.
[An96] B.E. Anspaugh, GaAs Solar Cell Radiation Handbook, JPL Publication 96-9, 1996.
[Ba96a] G.D. Badhwar and P.M. O'Neill, "Galactic Cosmic Radiation Model and Its Applications", Adv. Space Res., Vol. 17, No. 2, (2)7-(2)17 (1996).
[Ba87] P. Bak, C. Tang and K. Wiesenfeld, "Self-Organized Criticality: An Explanation of 1/f Noise", Phys. Rev. Lett., Vol. 59, 381-384 (1987).
[Ba91] P. Bak and K. Chen, "Self-Organized Criticality", Scientific American, Vol. 264, 46-53 (Jan. 1991).
[Ba96] P. Bak, How Nature Works – The Science of Self-Organized Criticality, Springer-Verlag, NY, 1996.
[Ba97] J.L. Barth, "Modeling Space Radiation Environments" in 1997 IEEE NSREC Short Course, IEEE Publishing Services, Piscataway, NJ.
[Ba05] R. Baumann, "Single-Event Effects in Advanced CMOS Technology" in 2005 IEEE NSREC Short Course, IEEE Publishing Services, Piscataway, NJ.
[Bo99] G. Boffetta, V. Carbone, P. Giuliani, P. Veltri and A. Vulpiani, "Power Laws in Solar Flares: Self-Organized Criticality or Turbulence?", Phys. Rev. Lett., Vol. 83, 4662-4665 (1999).
[Bo03] D.M. Boscher, S.A. Bourdarie, R.H.W. Friedel and R.D. Belian, "A Model for the Geostationary Electron Environment: POLE", IEEE Trans. Nucl. Sci., Vol. 50, 2278-2283 (Dec. 2003).
[Br92] D.H. Brautigam, M.S. Gussenhoven and E.G. Mullen, "Quasi-Static Model of Outer Zone Electrons", IEEE Trans. Nucl. Sci., Vol. 39, 1797-1803 (Dec. 1992).
[Br04] D.H. Brautigam, K.P. Ray, G.P. Ginet and D. Madden, "Specification of the Radiation Belt Slot Region: Comparison of the NASA AE8 Model with TSX5/CEASE Data", IEEE Trans. Nucl. Sci., Vol. 51, 3375-3380 (Dec. 2004).
[Bu88] E.A. Burke, G.E. Bender, J.K. Pimbley, G.P. Summers, C.J. Dale, M.A. Xapsos and P.W. Marshall, "Gamma Induced Dose Fluctuations in a Charge Injection Device", IEEE Trans. Nucl. Sci., Vol. 35, 1302-1306 (1988).
[Ca88] E. Castillo, Extreme Value Theory in Engineering, Academic Press, Boston, 1988.
[Ch94] D.L. Chenette, J. Chen, E. Clayton, T.G. Guzik, J.P. Wefel, M. Garcia-Munoz, C. Lopate, K.R. Pyle, K.P. Ray, E.G. Mullen and D.A. Hardy, "The CRRES/SPACERAD Heavy Ion Model of the Environment (CHIME) for Cosmic Ray and Solar Particle Effects on Electronic and Biological Systems in Space", IEEE Trans. Nucl. Sci., Vol. 41, 2332-2339 (1994).
[Cr92] D.R. Croley and M. Cherng, "Procedure for Specifying the Heavy Ion Environment at 1 AU", JPL Interoffice Memorandum 5215-92-072, July 1992.
[Da96] E.J. Daly, J. Lemaire, D. Heynderickx and D.J. Rodgers, "Problems with Models of the Radiation Belts", IEEE Trans. Nucl. Sci., Vol. 43, No. 2, 403-414 (April 1996).
[Da01] A.J. Davis, et al., "Solar Minimum Spectra of Galactic Cosmic Rays and Their Implications for Models of the Near-Earth Radiation Environment", J. Geophys. Res., Vol. 106, No. A12, 29,979-29,987 (Dec. 2001).
[Di97] E.I. Diabog, Report on the International Workshop for Physical and Empirical Models at Dubna, http://wings.machaon.ru/inp50/english/dubna.html, April 1997.
[Di06] M. Dikpati et al., Geophys. Res. Lett. (online), March 3, 2006.
[Dy02] C.S. Dyer, K. Hunter, S. Clucas, D. Rodgers, A. Campbell and S. Buchner, "Observation of Solar Particle Events from CREDO and MPTB During the Current Solar Maximum", IEEE Trans. Nucl. Sci., Vol. 49, 2771-2775 (2002).
[Ef93] B. Efron and R.J. Tibshirani, An Introduction to the Bootstrap, Chapman & Hall, NY, 1993.
[Fe90] J. Feynman, T.P. Armstrong, L. Dao-Gibner and S.M. Silverman, "New Interplanetary Proton Fluence Model", J. Spacecraft, Vol. 27, 403-410 (1990).
[Fe93] J. Feynman, G. Spitale, J. Wang and S. Gabriel, "Interplanetary Fluence Model: JPL 1991", J. Geophys. Res., Vol. 98, 13281-13294 (1993).
[Ga96] S.B. Gabriel and J. Feynman, "Power-Law Distribution for Solar Energetic Proton Events", Solar Phys., Vol. 165, 337-346 (1996).
[Ga03] S.B. Gabriel and G.J. Patrick, "Solar Energetic Particle Events: Phenomenology and Prediction", Space Sci. Rev., Vol. 107, 55-62 (2003).
[Gu58] E. Gumbel, Statistics of Extremes, Columbia University Press, NY, 1958.
[Gu96] M.S. Gussenhoven, E.G. Mullen and D.H. Brautigam, "Improved Understanding of the Earth's Radiation Belts from the CRRES Satellite", IEEE Trans. Nucl. Sci., Vol. 43, No. 2, 353-368 (April 1996).
[Ha99] D.H. Hathaway, R.M. Wilson and E.J. Reichmann, "A Synthesis of Solar Cycle Prediction Techniques", J. Geophys. Res., Vol. 104, No. A10, 22375-22388 (1999).
[He99] D. Heynderickx, M. Kruglanski, V. Pierrard, J. Lemaire, M.D. Looper and J.B. Blake, "A Low Altitude Trapped Proton Model for Solar Minimum Conditions Based on SAMPEX/PET Data", IEEE Trans. Nucl. Sci., Vol. 46, 1475-1480 (Dec. 1999).
[http] http://www.spenvis.oma.be/
[Hu65] H.E. Hurst, Long Term Storage: An Experimental Study, Constable & Co., Ltd., London (1965).
[Hu98] S.L. Huston and K.A. Pfitzer, "A New Model for the Low Altitude Trapped Proton Environment", IEEE Trans. Nucl. Sci., Vol. 45, 2972-2978 (Dec. 1998).
[Hu02] S.L. Huston, Space Environments and Effects: Trapped Proton Model, Boeing Final Report NAS8-98218, Huntington Beach, CA, Jan. 2002.
[Ja57] E.T. Jaynes, "Information Theory and Statistical Mechanics", Phys. Rev., Vol. 106, 620-630 (1957).
[Je98] H.J. Jensen, Self-Organized Criticality, Cambridge University Press, Cambridge, UK, 1998.
[Ka89] J.N. Kapur, Maximum Entropy Models in Science and Engineering, John Wiley & Sons, Inc., NY, 1989.
[Ki74] J.H. King, "Solar Proton Fluences for 1977-1983 Space Missions", J. Spacecraft, Vol. 11, 401-408 (1974).
[Ko01] H.C. Koons, "Statistical Analysis of Extreme Values in Space Science", J. Geophys. Res., Vol. 106, No. A6, 10915-10921 (June 2001).
[La05] J-M. Lauenstein and J.L. Barth, "Radiation Belt Modeling for Spacecraft Design: Model Comparisons for Common Orbits", 2005 IEEE Radiation Effects Data Workshop Proceedings, pp. 102-109, IEEE Operations Center, Piscataway, NJ, 2005.
[Le06] F. Lei, A. Hands, S. Clucas, C. Dyer and P. Truscott, "Improvements to and Validations of the QinetiQ Atmospheric Radiation Model", accepted for publication in IEEE Trans. Nucl. Sci., June 2006 issue.
[Li80] R.E. Lingenfelter and H.S. Hudson, "Solar Particle Fluxes and the Ancient Sun" in Proc. Conf. Ancient Sun, edited by R.O. Pepin, J.A. Eddy and R.B. Merrill, Pergamon Press, London, pp. 69-79 (1980).
[Lu93] E.T. Lu, R.J. Hamilton, J.M. McTiernan and K.R. Bromund, "Solar Flares and Avalanches in Driven Dissipative Systems", Astrophys. J., Vol. 412, 841-852 (1993).
[Ma89] P.W. Marshall, C.J. Dale, E.A. Burke, G.P. Summers and G.E. Bender, "Displacement Damage Extremes in Silicon Depletion Regions", IEEE Trans. Nucl. Sci., Vol. 36, 1831-1839 (1989).
[Ma95] P.P. Majewski, E. Normand and D.L. Oberg, "A New Solar Flare Heavy Ion Model and its Implementation through MACREE, an Improved Modeling Tool to Calculate Single Event Effect Rates in Space", IEEE Trans. Nucl. Sci., Vol. 42, 2043-2050 (1995).
[Ma83] B.B. Mandelbrot, The Fractal Geometry of Nature, W.H. Freeman & Co., NY, 1983.
[Ma99] C.J. Marshall and P.J. Marshall, "Proton Effects and Test Issues for Satellite Designers – Part B: Displacement Effects" in 1999 IEEE NSREC Short Course, IEEE Publishing Services, Piscataway, NJ.
[Ma02] J. Mazur, "The Radiation Environment Outside and Inside a Spacecraft" in 2002 IEEE NSREC Short Course, IEEE Publishing Services, Piscataway, NJ.
[Mc61] C.E. McIlwain, "Coordinates for Mapping the Distribution of Magnetically Trapped Particles", J. Geophys. Res., Vol. 66, 3681-3691 (1961).
[Mc01] K.G. McCracken, G.A.M. Dreschoff, E.J. Zeller, D.F. Smart and M.A. Shea, "Solar Cosmic Ray Events for the Period 1561 – 1994 1. Identification in Polar Ice", J. Geophys. Res., Vol. 106, 21585-21598 (2001).
[Mc94] P.L. McKerracher, J.D. Kinnison and R.H. Maurer, "Applying New Solar Particle Event Models to Interplanetary Satellite Programs", IEEE Trans. Nucl. Sci., Vol. 41, 2368-2375 (1994).
[Mc00] P.J. McNulty, L.Z. Scheick, D.R. Roth, M.G. Davis and M.R.S. Tortora, "First Failure Predictions for EPROMs of the Type Flown on the MPTB Satellite", IEEE Trans. Nucl. Sci., Vol. 47, 2237-2243 (2000).
[Ny96] R.A. Nymmik, "Models Describing Solar Cosmic Ray Events", Radiat. Meas., Vol. 26, 417-420 (1996).
[Ny96a] R.A. Nymmik, M.I. Panasyuk and A.A. Suslov, "Galactic Cosmic Ray Flux Simulation and Prediction", Adv. Space Res., Vol. 17, No. 2, (2)19-(2)30 (1996).
[Ny99] R.A. Nymmik, "Probabilistic Model for Fluences and Peak Fluxes of Solar Energetic Particles", Radiat. Meas., Vol. 30, 287-296 (1999).
[Pa85] E.N. Parker, "The Passage of Energetic Charged Particles Through Interplanetary Space", Planet. Space Sci., Vol. 13, 9-49 (1985).
[Pe01] W.D. Pesnell, "Fluxes of Relativistic Electrons in Low Earth Orbit During the Decline of Solar Cycle 22", IEEE Trans. Nucl. Sci., Vol. 48, 2016-2021 (Dec. 2001).
[Pe02] O. Peters, C. Hertlein and K. Christensen, "A Complexity View of Rainfall", Phys. Rev. Lett., Vol. 88(1), 018701-1 (2002).
[Re97] R.C. Reedy, "Radiation Threats from Huge Solar Particle Events", in Proc. Conf. High Energy Radiat. Background in Space, edited by P.H. Solomon, NASA Conference Publication 3353, pp. 77-79 (1997).
[Re99] D.V. Reames, "Particle Acceleration at the Sun and in the Heliosphere", Space Sci. Rev., Vol. 90, 413-491 (1999).
[Sa76] D.M. Sawyer and J.I. Vette, AP-8 Trapped Proton Environment for Solar Maximum and Solar Minimum, NSSDC/WDC-A-R&S 76-06, NASA Goddard Space Flight Center, Greenbelt, MD, Dec. 1976.
[Sc91] M. Schroeder, Fractals, Chaos and Power Laws, W.H. Freeman & Co., NY, 1991.
[Sc96] K.H. Schatten, D.J. Myers and S. Sofia, "Solar Activity Forecast for Solar Cycle 23", Geophys. Res. Lett., Vol. 6, 605-608 (1996).
[Sh49] C.E. Shannon and W. Weaver, The Mathematical Theory of Communication, University of Illinois Press, 1949.
[Sh95] M.A. Shea and D.F. Smart, "A Comparison of Energetic Solar Proton Events During the Declining Phase of Four Solar Cycles (Cycles 19-22)", Adv. Space Res., Vol. 16, No. 9, (9)37-(9)46 (1995).
[Si06] A. Sicard-Piet, S. Bourdarie, D. Boscher and R.H.W. Friedel, "A Model of the Geostationary Electron Environment: POLE, from 30 keV to 5.2 MeV", accepted for publication in IEEE Trans. Nucl. Sci., June 2006 issue.
[St74] E.G. Stassinopoulos and J.H. King, "Empirical Solar Proton Models for Orbiting Spacecraft Applications", IEEE Trans. Aerospace and Elect. Sys., Vol. 10, 442-450 (1974).
[St96] E.G. Stassinopoulos, G.J. Brucker, D.W. Nakamura, C.A. Stauffer, G.B. Gee and J.L. Barth, "Solar Flare Proton Evaluation at Geostationary Orbits for Engineering Applications", IEEE Trans. Nucl. Sci., Vol. 43, 369-382 (April 1996).
[Tr61] M. Tribus, Thermostatics and Thermodynamics, D. Van Nostrand Co., Inc., NY, 1961.
[Ty96] A.J. Tylka, W.F. Dietrich, P.R. Boberg, E.C. Smith and J.H. Adams, Jr., "Single Event Upsets Caused by Solar Energetic Heavy Ions", IEEE Trans. Nucl. Sci., Vol. 43, 2758-2766 (1996).
[Ty97] A.J. Tylka et al., "CREME96: A Revision of the Cosmic Ray Effects on Microelectronics Code", IEEE Trans. Nucl. Sci., Vol. 44, 2150-2160 (1997).
[Ty97a] A.J. Tylka, W.F. Dietrich and P.R. Boberg, "Probability Distributions of High-Energy Solar-Heavy-Ion Fluxes from IMP-8: 1973-1996", IEEE Trans. Nucl. Sci., Vol. 44, 2140-2149 (1997).
[Va83] P.J. Vail, E.A. Burke and J.P. Raymond, "Scaling of Gamma Dose Rate Upset Threshold in High Density Memories", IEEE Trans. Nucl. Sci., Vol. 30, 4240-4245 (1983).
[Va84] P.J. Vail and E.A. Burke, "Fundamental Limits Imposed by Gamma Dose Fluctuations in Scaled MOS Gate Insulators", IEEE Trans. Nucl. Sci., Vol. 31, 1411-1416 (1984).
[Va96] A.L. Vampola, Outer Zone Energetic Electron Environment Update, European Space Agency Contract Report, Dec. 1996; http://spaceenv.esa.int/R_and_D/vampola/text.html.
[Ve91] J.I. Vette, The NASA/National Space Science Data Center Trapped Radiation Environment Program (1964-1991), NSSDC 91-29, NASA/Goddard Space Flight Center, National Space Science Data Center, Greenbelt, MD, Nov. 1991.
[Ve91a] J.I. Vette, The AE-8 Trapped Electron Environment, NSSDC/WDC-A-R&S 91-24, NASA Goddard Space Flight Center, Greenbelt, MD, Nov. 1991.
[Wa94] M. Walt, Introduction to Geomagnetically Trapped Radiation, Cambridge University Press, Cambridge, 1994.
[Wa04] R.J. Walters, S.R. Messenger, G.P. Summers and E.A. Burke, "Solar Cell Technologies, Modeling and Testing" in 2004 IEEE NSREC Short Course, IEEE Publishing Services, Piscataway, NJ.
[Wi99] J.W. Wilson, F.A. Cucinotta, J.L. Shinn, L.C. Simonsen, R.R. Dubey, W.R. Jordan, T.D. Jones, C.K. Chang and M.Y. Kim, "Shielding from Solar Particle Event Exposures in Deep Space", Radiat. Meas., Vol. 30, 361-382 (1999).
[Wr00] G.L. Wrenn, D.J. Rodgers and P. Buehler, "Modeling the Outer Belt Enhancements of Penetrating Electrons", J. Spacecraft and Rockets, Vol. 37, No. 3, 408-415 (May-June 2000).
[Xa96] M.A. Xapsos, "Hard Error Dose Distributions of Gate Oxide Arrays in the Laboratory and Space Environments", IEEE Trans. Nucl. Sci., Vol. 43, 3139-3144 (1996).
[Xa98] M.A. Xapsos, G.P. Summers and E.A. Burke, "Probability Model for Peak Fluxes of Solar Proton Events", IEEE Trans. Nucl. Sci., Vol. 45, 2948-2953 (1998).
[Xa98a] M.A. Xapsos, G.P. Summers and E.A. Burke, "Extreme Value Analysis of Solar Energetic Proton Peak Fluxes", Solar Phys., Vol. 183, 157-164 (1998).
[Xa99] M.A. Xapsos, G.P. Summers, J.L. Barth, E.G. Stassinopoulos and E.A. Burke, "Probability Model for Worst Case Solar Proton Event Fluences", IEEE Trans. Nucl. Sci., Vol. 46, 1481-1485 (1999).
[Xa99a] M.A. Xapsos, J.L. Barth, E.G. Stassinopoulos, E.A. Burke and G.B. Gee, Space Environment Effects: Model for Emission of Solar Protons (ESP) – Cumulative and Worst Case Event Fluences, NASA/TP-1999-209763, Marshall Space Flight Center, Alabama, Dec. 1999.
[Xa00] M.A. Xapsos, G.P. Summers, J.L. Barth, E.G. Stassinopoulos and E.A. Burke, "Probability Model for Cumulative Solar Proton Event Fluences", IEEE Trans. Nucl. Sci., Vol. 47, No. 3, 486-490 (June 2000).
[Xa02] M.A. Xapsos, S.L. Huston, J.L. Barth and E.G. Stassinopoulos, "Probabilistic Model for Low-Altitude Trapped-Proton Fluxes", IEEE Trans. Nucl. Sci., Vol. 49, 2776-2781 (Dec. 2002).
I-61 II-61
[Xa04] [Xa06] [Xa06a]
M.A. Xapsos, C. Stauffer, G.B. Gee, J.L. Barth, E.G. Stassinopoulos and R.E. McGuire, “Model for Solar Proton Risk Assessment”, IEEE Trans. Nucl. Sci., Vol. 51, 3394-3398 (2004). M.A. Xapsos, C. Stauffer, J.L. Barth and E.A. Burke, “Solar Particle Events and SelfOrganized Criticality: Are Deterministic Predictions of Events Possible?”, Accepted for publication in IEEE Trans. Nucl. Sci., June 2006 issue. M.A. Xapsos et al., submitted to the 2006 NSREC (Ponte Vedra Beach, FL).
I-62 II-62
2006 IEEE NSREC Short Course
Section III: Space Radiation Transport Models
Giovanni Santin*
European Space Agency
*on loan from RHEA System SA

Dennis Wright
Makoto Asai
Stanford Linear Accelerator Center
Approved for public release; distribution is unlimited
Space Radiation Transport Models
Giovanni Santin, European Space Agency and RHEA System SA
Dennis Wright, Makoto Asai, Stanford Linear Accelerator Center
NSREC 2006 Short Course

Outline

Introduction
I. Space Radiation Transport: Physics
   A. Radiation in Space: Types and Energy Ranges
   B. In the Spacecraft / In the Devices
   C. Particle Interactions: Fundamental Forces
   D. Particle Interactions: Cross Section, Probabilities, Mean Free Path
   E. Electromagnetic Interactions
      1. Photon processes
      2. Charged particles: Electrons and Positrons
      3. Charged particles: Protons and Ions
      4. Energy loss
      5. Straggling
   F. Nuclear Interactions
      1. Nucleon-nuclei processes
   G. Interplay of Processes
      1. Electromagnetic showers
      2. Electrons and positrons in matter
      3. Bremsstrahlung and ionization from low penetrating radiation
      4. Proton range and straggling
      5. Knock-on particles and fragments from nuclear interactions
      6. Shower of lattice-atom displacements
II. Radiation Transport Techniques
   A. Analytical / Monte Carlo
   B. Single Particle / Collective Effects
   C. Look-Up Tables / Sectoring Analysis
   D. Forward / Reverse Transport
III. In Depth: Monte Carlo Techniques
   A. General Concepts
      1. Monte Carlo for elementary particle transport
      2. Random generators
   B. Variance Reduction
   C. Interfaces
   D. Output: Tallies
IV. Radiation Transport Tools
   A. Historical Tools
      1. ETRAN / ITS
      2. SHIELDOSE
   B. Present
      1. GEANT4
      2. MCNPX
      3. FLUKA
      4. PHITS
      5. NOVICE
      6. PENELOPE
      7. EGS
      8. BRYNTRN / HZETRN
      9. SRIM / TRIM
   C. Future
V. GEANT4 Applications and Physics Validation for Space Environment Analyses
   A. Introduction and Kernel
   B. Geometry
   C. Physics Processes
      1. Electromagnetic
      2. Decay
      3. Hadronic processes and models
   D. Physics Validation
      1. Electromagnetic
      2. Hadronic
   E. Geant4-based Radiation Analysis and Tools
      1. Sector Shielding Analysis Tool (SSAT)
      2. PLANETO-COSMICS
      3. Multi-Layered Shielding Simulation Software (MULASSIS)
      4. GEANT4 Radiation Analysis for Space (GRAS)
      5. Monte Carlo Radiative Energy Deposition (MRED) – Vanderbilt
Conclusion
References
Introduction
Knowledge of the potential impact of the radiation environment on evolving space-borne devices relies on precise analysis tools for understanding and predicting the basic effects of the particle environment on new technologies. In addition to cumulative effects such as dose, single event effects (SEE) in modern microelectronics are often a major cause of spacecraft failures or anomalies [Da04][Ba04]. Simulations improve the understanding of the underlying phenomena of the interaction of particle radiation with spacecraft devices. They can thus play a major role both in understanding system performance in space and in improving the design of flight components. Engineering design margins are a crucial issue for missions flying commercial off-the-shelf (COTS) technology and sensitive detectors. A complete space qualification ground test procedure for all new components drives costs up, while still being unable to cover, in energy and species, the entire range of the particle radiation population in space. The availability of reliable simulation tools could lower costs by complementing a more limited set of experimental tests, while still giving enough confidence in the component's behavior in space. Several issues affect the effectiveness and the reliability of transport tools for the description of the radiation environment local to the sensitive devices. The main ones are linked to the fundamental physics modeling, but the transport algorithms utilizing these models present some delicate aspects too. The purpose of this lecture is to introduce some of the basic concepts in space particle radiation transport. Section I gives a short summary of the main fundamental physics interaction types encountered by particle radiation in matter (which translate both into shielding mechanisms and into radiation-induced effects in devices). Sections II and III provide an introduction to radiation transport and more detail on Monte Carlo techniques, their application to particle transport, and some of the issues related to Monte Carlo algorithms. Section IV gives an overview of historical and present radiation transport tools, and some projections of the trends for the years to come. Finally, Section V focuses on GEANT4, a modern and promising MC toolkit, which has found several space application cases in recent years. The main features of the toolkit are outlined before concentrating on the physics models and on some validation examples. The section ends with a number of GEANT4-based tools for space applications and related results.
I. Space Radiation Transport: Physics
A. Radiation in Space: Types and Energy Ranges
The first lecture introduces the characteristics of the radiation environment in great detail. To motivate some of the requirements for transport tools, we briefly recall here the main components of the radiation environment, which span a great variety of particle species and a wide energy spectrum [He68][Da88]. The trapped radiation environment of the radiation belts includes protons with energies up to several hundred MeV, and two electron belts, whose spectra extend to energies of a few MeV. The combination of their motions in the Earth's magnetic field (gyration about field lines, bouncing between the magnetic mirrors, and drift around the Earth) makes the particle field at the spacecraft effectively isotropic. There are a few exceptions, notably low-altitude protons, which can cause differences of a factor of three or more in fluxes arriving from different azimuths. During Solar Particle Events (SPE), large fluxes of energetic protons and other particles are produced. This component of the environment is event driven, with occasional high fluxes over short periods, and is unpredictable in time of occurrence, magnitude, and duration. The composition, mainly protons and alphas, but also heavier ions, electrons, neutrons and gammas, varies greatly between events. Cosmic radiation, which originates outside the solar system, includes heavy and energetic (HZE) ions and has a spectrum that varies approximately as $E^{-2.5}$. Particles with energies in excess of $10^{20}$ eV have been detected on Earth. Because of their energy spectrum and their charge, despite the low intensity (about 2 to 6 particles/cm²·s) they are difficult to stop and cause intense ionization along their tracks (and occasionally high-energy nuclear fragments). Other environment components (energetic and low-energy plasma, atomic oxygen, debris) are neglected here.
Figure 1 Simplified diagram of typical particle radiation spectra from the main space environment sources.
It is worth mentioning that ground-testing procedures usually employ mono-energetic beams of relatively low energy, whereas in space, spacecraft are exposed to continuous spectra, and
ground-testing facilities have limited or no access to the energies of the cosmic ray environment just introduced. Only by complementing testing with detailed simulations can complete coverage of the expected effects on devices in space be obtained.
B. In the Spacecraft / In the Devices
When considering the impact of the space environment on spacecraft devices, several sources of radiation have to be considered. The first component, briefly summarized in the previous section, is the external particle field, whose models are presented in great detail in the first lecture. The particle environment local to the device differs from the external one, and is given by the superposition of:
a. the external field, modified or "attenuated" by the presence of shielding,
b. secondary radiation (such as delta electrons, nuclear fragments and subsequent de-excitation products) produced by the interaction of the external field with the spacecraft structure or within the device itself,
c. natural radioactivity, due to unstable isotopes present in the materials used for the spacecraft or the payloads, or induced radioactivity, due to the formation of unstable isotopes as a result of the interaction of the radiation with the materials.
It is clear that all these particle fields need to be correctly modeled and transported in a realistic spacecraft model to assess their effects on sensitive devices, and that the description of the physics involved plays a fundamental role in this process. The interaction of the external radiation with matter is usually described in terms of shielding, slow-down, or protection. The net effect is indeed generally energy degradation and consequently decreased fluxes after shielding. When considered from the point of view of the interaction of the radiation within the sensitive devices, the same physics processes play an active role, as they are responsible for charge injection, activation, excitation, or device degradation. In other words, the fundamental processes in the devices are of the same type as in the spacecraft structure. In the next sections we briefly introduce the possible interaction types for the main radiation species in the space environment, with an emphasis on their impact on spacecraft and the sensitive devices therein.
Figure 2 Radiation sources in space: primary particles, secondary particles from interactions in the spacecraft structures, natural and induced radioactivity.
C. Particle interactions: fundamental forces
Four fundamental forces drive radiation interactions with matter: the electromagnetic force (which involves charged particles and photons), the weak force (responsible for example for the β decay in radioactivity: $n \rightarrow p + e^- + \bar{\nu}_e$), the strong force (which acts on hadrons, e.g. between protons and neutrons, and holds the nucleus together) and the gravitational force. It is possible to associate with each of them a dimensionless coupling constant $g_0$, related to the strength of each interaction.

Interaction type     Coupling constant g0                      Range [m]
Strong               0.1 – 0.15                                10^-15
Weak                 10^-5                                     10^-18
Electromagnetic      α = e²/(4πε₀ħc) ≈ 1/137                   Infinite
Gravitational        10^-39                                    Infinite

Table 1 The four fundamental forces can be compared in terms of coupling constants and range of interaction.

In the following sections we describe the main mechanisms that intervene in the interaction of the space radiation environment with matter. These mechanisms all consist of single interactions or ensembles of interactions belonging to two of the four fundamental forces just introduced: the electromagnetic and the nuclear ones.
D. Particle interactions: cross section, probabilities, mean free path
The interaction of particles with matter is typically described in terms of single collisions of the incident particles with individual particles in matter. These collisions are described in terms of a cross section, which gives a measure of the probability for a reaction to occur. The cross section for the single processes can be calculated when the basic mechanisms of the interaction are known. The cross section for the interaction of an incident particle with a target atom is defined as the ratio of the interaction probability P and the incident particle flux Φ:

$$\sigma = \frac{P}{\Phi} \qquad (1)$$

The distribution function P(x), known as the "survival probability", is the probability that a given particle will not interact after a distance x [Le94], and can be computed from the probability of having an interaction between x and x + dx:

$$\text{Probability of interaction in } [x, x+dx] = w\,dx\,; \qquad w = n\sigma \qquad (2)$$

where n is the number of target particles per unit volume, and σ is the interaction cross section. It is easily found that the probability of a particle surviving a distance x is exponential in distance,

$$P(x) = \exp(-wx) \qquad (3)$$

with $P(x=0) = 1$.
From this, the probability of a particle having an interaction in $[x, x+dx]$ can be derived:

$$F(x)\,dx = \exp(-wx)\,w\,dx\,, \qquad (4)$$

whereas the probability of having an interaction anywhere between 0 and x is

$$P_{int}(x) = 1 - \exp(-wx)\,. \qquad (5)$$

The "mean free path" is the mean distance traveled by a particle without collisions:

$$\lambda = \frac{\int x\,P(x)\,dx}{\int P(x)\,dx} = \frac{1}{w} = \frac{1}{n\sigma}\,. \qquad (6)$$
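Equations (3)-(6) are also the basis of step sampling in the Monte Carlo transport codes discussed in Section III: inverting the cumulative distribution (5) gives the distance to the next interaction as x = −ln(U)/w for a uniform random number U in (0,1]. A minimal Python sketch, with purely illustrative material constants:

```python
import math
import random

def sample_free_path(n_per_cm3, sigma_cm2):
    """Sample the distance to the next interaction from Eq. (4):
    with w = n*sigma, F(x) = w*exp(-w*x) inverts to x = -ln(U)/w."""
    w = n_per_cm3 * sigma_cm2        # inverse mean free path, Eq. (6)
    u = 1.0 - random.random()        # uniform in (0, 1]
    return -math.log(u) / w

# The mean of many samples converges to the mean free path 1/(n*sigma):
paths = [sample_free_path(5e22, 1e-24) for _ in range(100_000)]
print(sum(paths) / len(paths))       # ~20 cm for these numbers
```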
E. Electromagnetic Interactions
Charged particle and photon interactions with matter are mainly of electromagnetic type, leading to the degradation of the incoming particle energy and/or to its scattering, or to photon absorption. A brief overview of the many interaction types can serve as a useful guideline through the next sections. Heavy particles (such as protons) lose their energy mainly through electromagnetic collisions with atomic electrons in the material, whereas electrons lose energy both through collisions with the electrons in the material and via radiative emission (bremsstrahlung) caused by accelerations induced by the electric field of the nuclei. Photons at low energy interact via the photoelectric effect with bound electrons in matter, at medium energies via the Compton effect with quasi-free electrons in the outer atomic shells, and finally at high energies (above several MeV) via electron-positron pair production in the nuclear electric field.
1. Photon processes
a) Coherent, elastic or Rayleigh scattering
The coherent or Rayleigh process occurs between photons and bound electrons, without energy being transferred to the atom. The resulting scattered photons therefore have the same energy as before the interaction. The angular differential cross section can be expressed as a function of the scattering angle ϑ and the atomic form factor F(q, Z):

$$\frac{d\sigma_{Rayl}}{d\Omega} = r_e^2\,\frac{1+\cos^2\vartheta}{2}\,[F(q,Z)]^2 \qquad (7)$$

where q is the momentum transfer. At low energies (up to a few keV) the form factor is approximately independent of scattering angle, with a real part that represents the effective number of electrons that participate in the scattering, so that the total Rayleigh cross section reduces to:

$$\sigma_{Rayl} = \frac{8}{3}\,\pi r_e^2 Z^2\,. \qquad (8)$$

At higher energies, the scattering factor falls off rapidly with scattering angle and can be found tabulated [Hu79].
Figure 3 Diagram of photon coherent scattering. The photon is scattered but its energy is unchanged.
b) Photoelectric
The photoelectric effect dominates at low energies (< 100 keV) and consists of an interaction between the photon and the atom as a whole (not with individual electrons). A free electron cannot absorb a photon and also conserve momentum, so the interaction always involves bound electrons (the majority of the interactions involve the K-shells). As a consequence of the interaction, an electron ("photoelectron") is ejected with a kinetic energy $E_{pe} = E_\gamma - E_b$, where $E_b$ is the electron binding energy, and the vacancy is filled by electrons from the outer shells, with related emission of fluorescence (mostly X-rays) and/or Auger electrons. An approximation of the cross section for the non-relativistic photoelectric effect, valid for energies far from the K-shell edges, is given by [Ha36]:

$$\sigma_{Phot} = 4\sqrt{2}\; Z^5 \alpha^4 \left(\frac{m_e c^2}{h\nu}\right)^{7/2} \sigma_{Thom} \qquad (9)$$

At lower energies, the cross section contains discontinuities ("edges") related to the atomic shell structure.
Figure 4 Diagram of photoelectric effect. The incident photon is absorbed and induces the emission of an electron.
c) Compton
The Compton effect dominates the interactions at medium energies, from 100 keV to several MeV (the actual energy range is material dependent). It is produced by the scattering of the incident photon off quasi-free electrons in the outer atomic shells. The process is "incoherent" because each atomic electron acts as an independent scattering center. By conservation of energy and momentum, and considering the electron free and at rest, the energy of the scattered photon is given by the following expression, symmetric about the incoming photon direction:

$$E(\vartheta) = \frac{E_0}{1 + \dfrac{E_0}{m_e c^2}\,(1-\cos\vartheta)} \qquad (10)$$

An approximated angular differential cross section per target electron has been obtained by Klein and Nishina [Kn89]:

$$\frac{d\sigma_{Compt}}{d\Omega} = \frac{1}{2}\,r_e^2 \left(\frac{p}{p_0}\right)^2 \left(\frac{p_0}{p} + \frac{p}{p_0} - \sin^2\vartheta\right) \qquad (11)$$

where $r_e = e^2/(4\pi\varepsilon_0 m_e c^2)$ is the classical electron radius (~2.8×10⁻¹⁵ m). The approximation of free and at-rest target electrons fails at lower incident photon energies (below a few tens of keV), where important deviations can be observed [Ri75] and Doppler broadening and binding effects must be introduced.
Figure 5 Diagram of Compton effect. A photon is scattered transferring part of its energy to an electron, which is ejected from the atom.
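Equation (10) is simple to evaluate numerically; the following sketch (not part of the original notes) gives the scattered photon energy for a free electron at rest:

```python
import math

ME_C2 = 0.511  # electron rest energy [MeV]

def compton_scattered_energy(e0_mev, theta_rad):
    """Scattered photon energy E(theta) from Eq. (10)."""
    return e0_mev / (1.0 + (e0_mev / ME_C2) * (1.0 - math.cos(theta_rad)))

# A 1 MeV photon backscattered (theta = pi) retains only ~0.20 MeV,
# the rest going to the Compton electron.
print(compton_scattered_energy(1.0, math.pi))
```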
d) Pair production
At higher energies, photon interactions with matter are dominated by pair production, the absorption of a photon and the creation of an electron-positron pair ("external pair production"). The process generally occurs as an interaction with the electric field of an atomic nucleus. If it occurs in the field of an electron instead, an extra electron is added to the final state ("triplet production"). The process also occurs as "internal pair conversion", with electron-positron pairs emitted from nuclei, decays or collisions between charged particles. The energy of the photon is directly converted into the mass of the two particles and therefore must exceed approximately twice the electron rest mass: $E_{thr} = 2m_e c^2$. Above the threshold energy the pair production cross section rises slowly with energy, and in practice the probability of an interaction remains very low until a gamma energy of several MeV. The following formula gives the asymptotic cross section (for high gamma energies):

$$\sigma_{Pair} = 4 Z^2 \alpha\, r_e^2 \left(\frac{7}{9}\ln\frac{183}{Z^{1/3}} - \frac{1}{54}\right) \qquad (12)$$

whereas for intermediate energies the cross section depends on the incident photon energy:

$$\sigma_{Pair} = 4 Z^2 \alpha\, r_e^2 \left(\frac{7}{9}\ln\frac{2E}{m_e c^2} - \frac{109}{54}\right) \qquad (13)$$
Figure 6 Diagram of pair and triplet production effect. (Top) Pair production from a photon interacting with the nucleus field. (Bottom) Triplet production from a photon interacting with the electron field.
e) Photon interactions in different materials
To conclude the sections on photon interactions with matter, it is useful to compare the probability of the different effects in different materials. Figure 7 summarizes the cross sections for the photoelectric effect, Compton effect and pair production in Silicon (left) and Tungsten (right) as examples of low-Z and high-Z materials. One can notice the great difference in the shape of the photoelectric cross section, due to the atomic shell structures, and in the energy ranges at which each process dominates over the others.
Figure 7 Photon interaction cross-section data for Silicon (left) and Tungsten (right). Data obtained from the NIST XCOM database [Be90].
2. Charged particles: Electrons and Positrons
a) Elastic collisions
Elastic collisions are due to the Coulomb interactions of electrons with the field of the atomic nucleus, screened by the atomic electrons. They are responsible for changes in the electron and positron direction of motion, but not for energy loss.
Figure 8 Diagram of electron elastic collision. The electron is scattered by the atomic nucleus, keeping its energy approximately unchanged.
Small-angle electron scattering corresponds to distant collisions. It is dominated by the ionic Coulomb potential and can be described by the Mott relativistic extension [Mo29][Mo65] of the Rutherford cross section, which can be approximated as:

$$\frac{d\sigma_{Mott}}{d\Omega} = \frac{(Ze^2)^2}{(4\pi\varepsilon_0)^2\,(4E_{kin})^2\,\sin^4(\vartheta/2)} \left(1 - \beta^2 \sin^2\frac{\vartheta}{2}\right)\,. \qquad (14)$$

This reduces to the Rutherford formula as β → 0. The approximate McKinley-Feshbach [Mc48] formulae make the detailed results by Mott for relativistic electrons easier to use for computational purposes. Large-angle scattering corresponds to a deeper probing of the atomic structure near the distance of closest approach and is much more sensitive to correlation, exchange, bound-state resonances, and interference effects, especially at the largest scattering angles.
b) Inelastic collisions
Together with bremsstrahlung emission (introduced below), inelastic collisions are responsible for the energy loss of electrons (and positrons) in matter. They constitute the main energy loss mechanism at low and intermediate energies (up to several tens of MeV). Inelastic scattering is the result of Coulomb interactions between the incident electron and atomic electrons. Part of the energy and momentum of the incident electron is transferred to the target system, and the interaction final state may include not only single-electron excitation or atomic ionization (with electron-hole pair production), but can involve many atoms in the solid ("plasmon excitation"). Bethe [Be59] first obtained a quantum mechanical calculation of inelastic collisions, derived on the basis of the Born approximation, which is essentially an assumption of weak scattering. The original Bethe formulation was extended to treat electron interactions in condensed matter [Fa63][Ev55]. Calculations dedicated to the special case of incident electrons and positrons led to the specialized Møller [Mø32] and Bhabha [Bh36] formulations of the energy loss theory, for electron-electron and electron-positron inelastic scattering respectively. The energy loss for electrons/positrons can be expressed as

$$-\frac{dE}{dx} = K\,\frac{Z}{A}\,\frac{1}{\beta^2}\left[\ln\frac{\tau^2(\tau+2)}{2\,(I/m_e c^2)^2} + F(\tau) - \delta - 2\frac{C}{Z}\right] \qquad (15)$$

where τ is the kinetic energy of the particle in units of $m_e c^2$, δ is the density correction and C gives the shell correction. F(τ) is different for electrons and positrons. For a detailed treatment of the energy loss of electrons and positrons, see also [Fe86][Le94]. The energy loss by ionization can also be compared to the radiative component of the electron energy loss, treated in the following section, so that the relative importance of the two energy loss mechanisms can be assessed as a function of the electron energy.
Figure 9 Diagram of electron inelastic collisions leading to atomic excitation or ionization.
c) Bremsstrahlung
Electrons and positrons passing through matter undergo acceleration (deceleration) in the interaction with the electrostatic field of the atoms and emit radiation. The process is called bremsstrahlung, or "braking radiation", and is depicted in Figure 10. The screening of the atomic nucleus field by the electron cloud is important for high incident electron energies and can be neglected at low energies. As a consequence, at low energies the energy loss can be approximated as

$$-\left(\frac{dE}{dx}\right)_{Brem} = K Z^2 E \left[\ln\frac{2E}{m_e c^2} - \frac{1}{3} - f(Z)\right] \qquad (16)$$

At high incident energies the energy loss can be expressed as

$$-\left(\frac{dE}{dx}\right)_{Brem} = K Z^2 E \left[\ln\frac{183}{Z^{1/3}} - \frac{1}{18} - f(Z)\right] \qquad (17)$$

If the logarithmic term at low energies is neglected, the energy loss rate can be approximated as proportional to the energy of the incident electron:

$$-\left(\frac{dE}{dx}\right)_{Brem} = \frac{E}{X_0} \qquad (18)$$

where $X_0$ is called the "radiation length" and represents the distance over which radiative emission reduces the initial projectile energy by a factor $1/e$. A complete differential cross section is given by the Bethe-Heitler formula [Be34], corrected and extended to account for screening of the nuclear field by atomic electrons, bremsstrahlung from atomic electrons, Coulomb corrections to the Born approximation, dielectric suppression (matter polarization), and LPM suppression (multiple scattering of the electron while still in the formation zone).
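As an illustration of Eq. (18), in the high-energy (radiative) limit the mean electron energy decays exponentially with depth, E(x) = E0 exp(−x/X0). A short sketch, where the aluminum radiation length of roughly 8.9 cm is quoted only for illustration:

```python
import math

def mean_energy_after(e0_mev, depth_cm, x0_cm):
    """Mean electron energy after a depth x in the radiative-loss limit,
    integrating Eq. (18): E(x) = E0 * exp(-x / X0)."""
    return e0_mev * math.exp(-depth_cm / x0_cm)

# One radiation length of aluminum (X0 ~ 8.9 cm) leaves a fraction 1/e:
print(mean_energy_after(100.0, 8.9, 8.9))   # ~36.8 MeV
```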
Figure 10 Diagram of electron bremsstrahlung emission process. The electron emits a photon as a consequence of acceleration induced by the atomic electrostatic field.
d) Positron annihilation
A positron can annihilate in the interaction with an atomic electron, with the emission of two photons. The cross section is higher for lower positron energies, so the process typically happens from a system almost at rest, with the two photons emitted "back-to-back".
Figure 11 Diagram of positron annihilation process. A positron annihilates with an atomic electron with the emission of two photons.
e) Cherenkov effect
Charged particles passing through a dielectric at a speed greater than the speed of light in the material emit light, a phenomenon analogous to the emission of mechanical shock waves by objects moving faster than the speed of sound. The effect is named after P.A. Cherenkov, who first observed it in 1934. Light is emitted at a fixed angle ϑ to the direction of particle motion, such that

$$\cos\vartheta = \frac{c}{v\,n} \qquad (19)$$

and consists of a continuous spectrum, with a cut-off wavelength determined by the frequency-dependent refraction index in the equation above. Cherenkov light is often used in particle detectors, giving identification of particle species from direct information on the particle velocity.
Figure 12 Diagram of Cherenkov effect in a thin (transparent) target, with the formation of the characteristic ring of photons.
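Equation (19) can be rearranged as cos ϑ = 1/(βn), with emission possible only above the threshold β > 1/n. A small illustrative sketch:

```python
import math

def cherenkov_angle_deg(beta, n):
    """Cherenkov emission angle from Eq. (19), cos(theta) = 1/(beta*n);
    returns None below threshold (no emission)."""
    if beta * n <= 1.0:
        return None          # particle slower than light in the medium
    return math.degrees(math.acos(1.0 / (beta * n)))

# In water (n ~ 1.33) the threshold is beta ~ 0.75; a beta = 0.99
# particle radiates at ~41 degrees.
print(cherenkov_angle_deg(0.99, 1.33))
```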
3. Charged particles: Protons and Ions
The effects of the passage of protons and ions through matter can be summarized, at least for the electromagnetic component of the interactions, as the slowing down (energy loss) and deflection of the particles. The main interactions responsible for these effects are elastic collisions with nuclei and inelastic collisions with atomic electrons in the material.
a) Elastic collisions Elastic scattering of protons and ions in matter happens as a consequence of the interaction with the material nuclei, screened by the electron cloud. The process is similar to the electron elastic scattering previously described, the main difference being the non-negligible mass of the projectile. While the total kinetic energy is conserved, part of it is transferred from the projectile to the target nucleus.
Figure 13 Diagram of the elastic electromagnetic interaction of an incident ion with a target atom, screened by the electron cloud, with transfer of part of the energy to the target nucleus.
b) Inelastic collisions Ion electromagnetic inelastic collisions are interactions of the incident ions with the field of the atomic electrons in the material. The result of the interaction is the excitation or the ionization of the target atom.
Figure 14 Diagram of inelastic electromagnetic interaction of incident ions with electrons of the target atoms, inducing atomic excitation or ionization.
4. Energy loss
A major part of the energy loss of electrons and ions in a material is generally due to the inelastic interaction of the projectiles with the field of the electrons in the target material. The amount of energy transferred to the electrons in each collision is a very small fraction of the projectile energy, but the number of collisions in media of normal density is very high, giving as a result a significant loss of kinetic energy. Most of the collisions induce a limited transfer of energy, and are denoted as "soft". More seldom, "hard" collisions occur, which induce atomic excitation with the ejection of a fast electron (often referred to as a δ-ray). Bethe, Bloch and others first gave a correct quantum mechanical description of the energy loss phenomena. The resulting stopping power can be approximated by the following formula:

$$-\frac{dE}{dx} = K z^2\,\frac{Z}{A}\,\frac{1}{\beta^2}\left[\frac{1}{2}\ln\frac{2 m_e \gamma^2 v^2 W_{max}}{I^2} - \beta^2 - \frac{\delta}{2} - \frac{C}{Z} + z L_1 + z^2 L_2\right] \qquad (20)$$

where $W_{max}$ is the maximum energy transfer in a single collision and I (the main parameter in the formula) is the mean excitation potential of the target material. Two correction terms are usually considered: the density correction δ and the shell effect given by C/Z. The additional terms $zL_1 + z^2L_2$ represent the Barkas and the Bloch corrections. For a detailed description of the energy loss phenomena, the reader is referred to [Zi85].
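To make the behavior of Eq. (20) concrete, the following sketch evaluates a simplified Bethe stopping power, dropping the shell, density, Barkas and Bloch corrections and using the heavy-projectile limit W_max ≈ 2m_ec²β²γ²; it is an illustration, not a substitute for tabulated data such as NIST PSTAR:

```python
import math

ME_C2 = 0.511    # electron rest energy [MeV]
K = 0.307075     # 4*pi*N_A*r_e^2*m_e*c^2 [MeV cm^2/mol]

def bethe_dedx(t_mev, m_mev, z_proj, big_z, big_a, i_ev):
    """Simplified Bethe mass stopping power [MeV cm^2/g] for a heavy
    charged particle (no shell/density/Barkas/Bloch corrections)."""
    gamma = 1.0 + t_mev / m_mev
    beta2 = 1.0 - 1.0 / gamma**2
    # with W_max ~ 2*me*c^2*beta^2*gamma^2, the bracket in Eq. (20)
    # reduces to ln(2*me*c^2*beta^2*gamma^2 / I) - beta^2
    log_arg = 2.0 * ME_C2 * beta2 * gamma**2 / (i_ev * 1e-6)
    return K * z_proj**2 * (big_z / big_a) / beta2 * (math.log(log_arg) - beta2)

# 10 MeV proton in silicon (Z=14, A=28.09, I ~ 173 eV): ~35 MeV cm^2/g
print(bethe_dedx(10.0, 938.27, 1, 14, 28.09, 173.0))
```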
5. Straggling
The energy loss process is not continuous, but statistical in nature, as it derives from a series of collisions. As a consequence, the measurement of the range of a set of identical particles will result in a distribution of ranges, centered about some mean value, a phenomenon referred to as "range straggling". When the number of collisions is high and the total energy loss along the path is given by the sum of a large number of small contributions, the global fluctuations follow a Gaussian distribution. On the contrary, when the total number of collisions along the particle path is low (for example in thin absorbers, or in gases) the fluctuations are large and follow the asymmetric Landau distribution.
The continuously slowing down approximation (CSDA) range neglects scattering and straggling, and therefore differs from the real projected range, which measures the average depth of penetration along the initial particle direction. Figure 15 shows the measurement of the proton range from the intensity curve of a mono-energetic proton beam passing through an absorber material. The "mean range" $R_0$ of the beam is indicated for the case of Gaussian range fluctuations. The "extrapolated range" $R_{ext}$ is obtained by extrapolation of the line of maximum gradient in the curve. The "straggling parameter" S, a measure of the range fluctuations, can be obtained from the difference between these two ranges: $S = R_{ext} - R_0$.
Figure 15 Typical range curve for a mono-energetic proton beam in an absorber. The fluctuations (straggling) in the energy loss process can be quantified with the extrapolated range.
F. Nuclear Interactions
Particles coupled to the strong force (such as protons, neutrons, π mesons, and nuclei) are subject to nuclear interactions (also commonly referred to as "hadronic"). This force has a typical range much shorter than the electromagnetic one, so it becomes effective for charged particles only when the energy involved in the interaction is higher than the Coulomb barrier generated by the nuclear charge. An approximate expression for this barrier can be obtained from the charges of the incident and target hadrons:

$$U_{Coul} = \frac{1}{4\pi\varepsilon_0}\,\frac{Z_i Z_T\, e^2}{r_h} \qquad (21)$$

where $r_h \approx 10^{-15}$ m is the typical range of hadronic interactions (see Table 1).
1. Nucleon-nuclei processes a) Elastic Elastic nuclear collisions conserve the total kinetic energy of the nucleon-nucleus system, and thus do not modify the target nucleus species nor its excitation state. Due to the mass of the incident nucleon, the target nucleus recoil energy after the interaction is not negligible, and leads to local intense ionization along the recoil path.
Figure 16 Diagram of elastic nucleon-nucleus collision, with transfer of part of the projectile energy to the recoil nucleus.
b) Inelastic
In inelastic hadronic collisions, part of the total kinetic energy is transferred to the excitation or the break-up of the target nucleus. Excited states may later decay by gamma-ray or other forms of radiative emission, or by further break-ups. Due to the complexity of the interactions, no single theory of hadronic collisions exists that applies to all energy ranges. Instead, a collection of models is used, complementary in the particle species and energy ranges they describe. As in the elastic case, recoil nuclei and fragments are the cause of intense local indirect ionization.
Figure 17 Diagram of nucleon-nucleus inelastic interaction, with break-up of the target nucleus and emission of secondary particles.
G. Interplay of Processes
1. Electromagnetic showers
A typical case of interconnection between fundamental processes involving different particle species over a range of energies is given by the interaction of electrons, positrons and gammas in matter. As an example, an energetic electron can dissipate its energy by emitting a bremsstrahlung gamma and continue its path through the material. This gamma, if its energy is high enough, may undergo pair production, generating an electron-positron pair, which will in turn dissipate energy radiatively (with the emission of bremsstrahlung photons). This iterative particle production mechanism continues until the typical gamma energy falls below the critical energy for pair production. The process is represented in Figure 18. The same process can of course be initiated in a similar way by a positron or a gamma. The macroscopic phenomenon of production of this large number of electrons, positrons and photons is known as an "electromagnetic shower".
Figure 18 Diagram representing the development of an electromagnetic shower. (Right) Simplified cascade description, based on the similarity of pair production and bremsstrahlung cross sections, from which some quantitative estimates can be drawn on particle multiplicity and shower profile. A realistic description can only be obtained through detailed simulations.
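The simplified cascade of Figure 18 (right) is essentially the Heitler toy model: the particle count doubles every radiation length while the energy per particle halves, and multiplication stops near the critical energy. A minimal sketch of this toy model (illustrative only; a realistic profile requires a full simulation):

```python
def heitler_shower(e0_mev, ec_mev):
    """Toy shower: one generation per radiation length, each particle
    splitting in two (bremsstrahlung or pair production) until the
    energy per particle drops below the critical energy ec_mev."""
    depth, n = 0, 1
    profile = [(depth, n)]
    while e0_mev / n > ec_mev:
        depth += 1           # depth in units of the radiation length X0
        n *= 2
        profile.append((depth, n))
    return profile

# 1 GeV photon, Ec = 10 MeV: shower maximum near ln(E0/Ec)/ln(2) ~ 6.6 X0
print(heitler_shower(1000.0, 10.0)[-1])
```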
2. Electrons and positrons in matter
The final effect of the interaction of electrons with matter in space applications can vary significantly because it results from the interplay of electron spectrum, shielding materials and thickness, device sensitivity, and the dominant physics processes in the radiation effect analysis. Electron interactions with material produce mainly excitation and ionization, resulting in energy loss and scattering, which can lead to highly convoluted electron paths and bremsstrahlung emission. Figure 19 shows the continuous slowing down approximation (CSDA) range and stopping powers for electrons in several materials (hydrogen, aluminum, tungsten and polyethylene) available from the US National Institute of Standards and Technology (NIST) website [Ni98].

Figure 19: Electron CSDA range in different materials (from low-Z to high-Z, plus polyethylene as an example of a hydrogen-rich material) as a function of energy. Data from NIST ESTAR database [Ni98].
The CSDA range corresponds to the integral pathlength traveled by electrons assuming no stochastic variations between different electrons of the same energy. Because of multiple scattering, the effective range in matter is in general shorter than the CSDA range (depending upon the electron energy and material). Shielding effectiveness results from the combination of range along the path and scattering, which diverts the electron trajectories. In the case of thin shielding the latter dominates, making high-Z materials more effective. Radiative emission is more important for high electron energies and thick shielding, making low-Z materials more convenient. Multi-layered shielding (with low-Z materials first) usually provides a good solution, gradually slowing down and diffusing the projectile particles.
3. Bremsstrahlung and ionization from low penetrating radiation
The total energy loss of electrons and positrons is given by the sum of the losses from ionization and bremsstrahlung emission. The bremsstrahlung contribution dominates above a certain energy value, indicated as the "critical energy" $E_c$. This energy can be obtained from an approximation of the single contributions [Ba96], which for liquids and solids is:

$$\left(\frac{dE}{dx}\right)_{Ioni} \approx \left(\frac{dE}{dx}\right)_{Brem} \;\Rightarrow\; E_c \approx \frac{610\ \mathrm{MeV}}{Z + 1.24} \qquad (22)$$

High-energy photon energy loss then mainly comes from bremsstrahlung emission. Emitted photons interact (as described in the sections on photon interactions) through a number of processes (Rayleigh scattering, photoelectric effect, Compton effect and pair production), resulting in the loss or scattering of the incident photon and, for the three latter processes, the production of electrons or positrons that may induce further ionization or bremsstrahlung [Ec06]. Through this mechanism, bremsstrahlung production allows energetic electrons to deposit energy significantly beyond the electron range in materials, due to the longer average ranges of the photons.
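A quick numerical illustration of Eq. (22), using only the constants of the approximation above:

```python
def critical_energy_mev(z):
    """Approximate critical energy for solids/liquids, Eq. (22)."""
    return 610.0 / (z + 1.24)

print(critical_energy_mev(13))   # aluminum: ~43 MeV
print(critical_energy_mev(74))   # tungsten: ~8 MeV
```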
Figure 20 Simulations of 1 MeV electrons impinging (from the left) on a 0.1 mm Silicon detector, protected by a 1 mm Aluminum shield. While all electrons are stopped in the first layer or backscattered, a bremsstrahlung photon is emitted in the shielding and reabsorbed in the sensitive volume (simulations obtained with GRAS/Geant4 [Sa05]).
4. Proton range and straggling
Inelastic collisions with atomic electrons are the main mechanism responsible for the energy loss of protons and ions in matter. Due to their large mass, proton and ion projectiles experience much less significant scattering than incident electrons, although, as introduced in the description of the basic processes, some deflection of the proton track does occur. Moreover, as discussed in Section I.E.5, the energy loss process is statistical in nature, so the ranges of a set of identical particles are distributed about some mean value ("range straggling"), and the CSDA range differs from the real projected range, which measures the average depth of penetration along the initial particle direction. For the total energy loss of the incident proton, a higher Z/A in the Bethe-Bloch formula for low-Z materials, combined with limited scattering, gives a more efficient shielding per unit mass compared to high-Z ones, as shown in Figure 21.

Figure 21 Mean projected range of protons (neglecting hadronic interactions) in Hydrogen, Aluminum and Tungsten, scaled to an equal density of 1 g/cm³. The data in the plot were produced with SRIM [Sr03].

For applications such as analyses of devices sensitive to single event effects, and for the biological effects of radiation in human missions, in addition to the inelastic interactions with electrons one must consider the small but significant probability of interaction of protons with the nuclei in the target material, treated in the next section.
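Given a stopping power such as the simplified Bethe sketch of Section I.E.4, a CSDA range like the curves of Figure 21 follows by integrating dE/(dE/dx); a minimal sketch, with a hypothetical low-energy cutoff where the Bethe formula is no longer valid:

```python
def csda_range(e0_mev, dedx, de_mev=0.01, e_cut_mev=0.5):
    """CSDA range [g/cm^2] by numerical integration of dE / (dE/dx);
    `dedx` is a mass stopping power function in MeV cm^2/g."""
    e, r = e0_mev, 0.0
    while e > e_cut_mev:
        r += de_mev / dedx(e)
        e -= de_mev
    return r

# e.g. with the bethe_dedx sketch above, for protons in silicon:
# csda_range(10.0, lambda e: bethe_dedx(e, 938.27, 1, 14, 28.09, 173.0))
```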
5. Knock-on particles and fragments from nuclear interactions
The role of nuclear interactions in radiation effects on components depends on the type and energy spectrum of the radiation source, the shielding configuration and the device susceptibility. Effects of hadronic interactions include intense local indirect ionization by nuclear fragments, and increased background and secondary interactions from neutrons and gammas emitted by excited nuclei. As a result, secondary particles from nuclear interactions contribute to total dose, biological effects in human missions (the effect originating both from local interactions and from neutron production in heavy spacecraft structures or planetary shelters) and transient effects. While qualitative estimates of nuclear reactions can be produced with approximate models, quantitative assessment of the role of nuclear interactions in transient effects requires a precise description of the double differential cross sections for secondary particle production. As an example of the importance of direct versus indirect ionization by recoil nuclei from nuclear interactions, see [Tr04a][Ko05a][Wa05a].
Figure 22 After [P.J. McNulty, Notes from 1990 IEEE NSREC Short Course]
6. Shower of lattice-atom displacements
A particle collision in matter can transfer to atoms sufficient energy to displace them [Zi85]. Struck atoms can then move through the lattice, creating vacancies and stopping in interstitial positions. The concentration of the resulting effective recombination or trapping centers, responsible for performance degradation in semiconductor devices such as bipolar transistors, is proportional to the concentration of vacancy-interstitial pairs (also known as Frenkel defects). The basic physics processes involved in the displacement collisions are particle and energy dependent. Coulomb elastic scattering dominates for electrons and low energy protons, and elastic hadronic scattering for low energy neutrons, whereas at higher energies (above 10-20 MeV) inelastic processes are most important for both protons and neutrons. The displacement mechanism is often a cascade event, and numerous models have been developed to estimate the number of induced Frenkel pairs, such as that of Kinchin and Pease [Ki55].
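The Kinchin-Pease estimate itself is simple: no displacement below the threshold energy E_d, one Frenkel pair up to 2E_d, and T/(2E_d) pairs above. A sketch, where the ~21 eV silicon displacement threshold is a commonly quoted value used here only for illustration:

```python
def kinchin_pease_pairs(t_ev, e_d_ev=21.0):
    """Kinchin-Pease estimate of Frenkel pairs for a recoil that
    deposits damage energy t_ev [eV]."""
    if t_ev < e_d_ev:
        return 0.0
    if t_ev < 2.0 * e_d_ev:
        return 1.0
    return t_ev / (2.0 * e_d_ev)

print(kinchin_pease_pairs(10_000.0))   # 10 keV recoil -> ~238 pairs
```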
Figure 23 (left) Diagram of a Frenkel pair cascade. (After [Space radiation effects on microelectronics, NASA JPL].) (right) Calculations of proton-induced NIEL in GaAs [Ju03], compared to Summers et al. (After [Su93].)
II. Radiation Transport Techniques
The previous section introduced the interaction of particle radiation with matter from the point of view of the underlying physics. The level of detail and realism in the geometry description, and the precision required in the description of the basic physics processes, depend on the radiation effects under study; they are often very different for space missions in an early phase of conceptual design compared to advanced and precise verification of payloads and subsystems. Therefore, when modeling the transport of radiation from outer space to the spacecraft interior for the assessment of radiation effects on sensitive devices, the computation can rely on a range of techniques of varying complexity and precision, based on drastically diverse approaches. The following sections introduce some of the issues related to radiation transport models.
A. Analytical / Monte Carlo
Transport methods using the "analytical" or "deterministic" approach solve the integro-partial-differential Boltzmann transport equations that describe how radiation fields are transformed when passing through a given mass thickness. This is usually done in a one-dimensional straight-ahead approximation. As a consequence of the mathematical approach in the analytical solutions, deterministic methods are in general fast (as an example, the HZETRN tool allows field mapping within the International Space Station in tens of minutes using standard finite element method geometry) but approximate. Deterministic tools usually provide solutions to 1-D configurations only, but recent developments have extended the range of applicability to 3-D models [Wi04]. The Monte Carlo approach, which aims at simulating the particle transport process as it happens in nature, will be described in more detail later in the paper. While analytical methods provide solutions to the equations that describe the radiation transport, in MC methods the particle propagation is simulated directly, and there is no need to write down the transport equations [Co95]. Depending on the details required in the physics description and in the geometrical models, and on the characteristics of the radiation sources, MC calculations may in particular cases be very demanding in terms of computational resources. This limitation of MC tends to become less important with the advent of modern computers, together with variance reduction techniques and mixed forward/reverse simulations.
B. Single Particle / Collective Effects
In environments such as cosmic rays, solar protons or ions, and trapped radiation, charged particles at relatively high energy are characterized by low charge density. As a result, their motion can, to a first approximation, be modeled with a single-particle approach. This means that collective effects, which are very important for example in plasma behavior, can be neglected. This approximation, which greatly simplifies the simulation techniques, starts to show its limitations in certain extreme conditions, at lower energies and higher densities. Examples of deviation from single-particle modeling include scintillating detectors, whose photon yield often presents a non-linearity at high energy deposition rates, and charge collection in semiconductor devices, where the modeling of electron drift is greatly affected by the presence of high charge density. In all the areas where the assumption of independent particles starts to fail, a precise and complete description of the phenomena requires interfacing to dedicated tools providing algorithms such as Particle-In-Cell (PIC) plasma simulation or charge transport in finite element models.
Figure 24 Example areas where simulations need to account for non-negligible collective effects in radiation transport. (Left) Electric potential map in the vicinity of a satellite as a consequence of the activation of electric propulsion thrusters, obtained with SPIS [Fo05] (image after [Ro05].) (Right) High-density ionization charge deposition from fragments emitted in nuclear interactions, used as input to detailed TCAD simulations (after [Ba06].)
C. Look-Up Tables / Sectoring Analysis
Particle transport tools utilizing a look-up table approach are widespread in the space radiation domain, partly because of the serious limitations in available computing power in the early days of computers. Especially suited for shielding studies, these tools provide fast analysis, but on the other hand they are limited to simple geometries and to a given set of shielding and detector materials. Among look-up table tools, SHIELDOSE (described in more detail later) is probably the one with the widest usage in space shielding applications. Conversely, Monte Carlo simulations, which will be described in detail in the next sections, come much closer to describing an authentic space environment but require more time and computing power.
Figure 25 Ray-tracing techniques can complement 1-D shielding studies to produce approximate shielding assessments in complex geometries.
Figure 26 Dose analyses can be performed with accurate transport models for mono-energetic sources in 1-D shielding configurations, and later used to produce dose-depth curves for arbitrary spectra. (Plot after the SHIELDOSE-2 [Se94] implementation in SPENVIS [He00].)
To analyze doses in more complex geometries, it is common in the engineering process to perform "sectoring" of the actual shielding and establish the amount of shielding encountered by a large number of linear rays traced from the target point to space. The shielding along each ray is used to look up the dose in the "dose vs. depth" curve produced by external tools such as SHIELDOSE-2, and the contributions are summed with the appropriate weighting for the solid angle. This process, while giving an engineering approximation of the dose, has a number of shortcomings (which derive from the limitations of the look-up table tools), in particular the lack of treatment of shielding of different materials (which are generally converted to a thickness of equivalent aluminum) and the lack of treatment of radiation scattering and secondary production. Ray-tracing techniques have been implemented in many engineering tools dedicated to the analysis of the space environment, such as ESABASE [Es94] and SYSTEMA. The Sector Shielding Analysis Tool (SSAT) [Sa03], which will also be described later, implements the above technique using Geant4 for ray tracing of non-interacting "geantino" particles.
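Computationally, the sectoring procedure reduces to a solid-angle-weighted sum of table look-ups. The sketch below illustrates the idea in Python; the dose-depth table is a hypothetical placeholder standing in for the output of an external 1-D tool such as SHIELDOSE-2, not real data.

```python
import numpy as np

# Hypothetical aluminum-equivalent dose-depth curve (depth in g/cm^2, dose in rad),
# as would be produced by an external 1-D tool such as SHIELDOSE-2.
depth_g_cm2 = np.array([0.1, 0.5, 1.0, 2.0, 5.0, 10.0])
dose_rad    = np.array([1e5, 2e4, 8e3, 2e3, 3e2, 4e1])

def dose_from_depth(d):
    # Log-log interpolation is customary for steep dose-depth curves.
    return float(np.exp(np.interp(np.log(d), np.log(depth_g_cm2), np.log(dose_rad))))

def sector_dose(ray_depths):
    """Sum the dose looked up along each of N rays traced from the target point
    to space; equal rays each carry a solid-angle weight of 1/N."""
    return sum(dose_from_depth(d) for d in ray_depths) / len(ray_depths)

# Example: six rays through a hypothetical equipment bay (placeholder depths).
print(sector_dose([0.8, 1.2, 2.5, 3.0, 6.0, 9.0]))
```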
D. Forward / Reverse Transport
Within the classical "forward" approach to particle transport models, particles in a given state of position and momentum are followed as they interact with matter (as described in the first part of these notes), dissipating their energy and possibly generating secondaries.
While providing an intuitive and realistic description of the particle processes, this approach shows clear disadvantages in some applications. As an example, radiation effects assessment in heavily shielded systems presents practical computational problems in the accumulation of statistically significant estimates, due to the low transmission probabilities. Similarly, microdosimetry in large spacecraft structures is potentially affected by a low geometrical efficiency in isotropic radiation environments. Biasing techniques (described in more detail later) can provide a more efficient calculation while keeping the forward Monte Carlo approach. The ray-tracing technique previously introduced partially overcomes these problems, but at the cost of approximations in the physics modeling. The "adjoint" technique, used by the reactor physics community already in the 1960's [Ka68], proposes instead a different approach. We can introduce the classical transport equation in integral form

χ(P) = S(P) + ∫ K(P′ → P) χ(P′) dP′        (23)

where S(P) is the source density and K(P′ → P) the density of collisions at P, and the functional

F = ∫ f(P) χ(P) dP.        (24)

Kalos introduces a new transport equation, "adjoint" to the conventional "forward" one:

J(P) = f(P) + ∫ K(P → P′) J(P′) dP′        (25)

It follows that:

F = ∫ J(P) S(P) dP        (26)

This means that the solution of the new J equation permits the estimation of F in a way analogous to the method used in the forward problem, but the transport is such that successive points are higher in energy and earlier in time. In addition, the J equation is suitable for Monte Carlo calculations. Such computations, starting at the detector and scoring at the source, offer several advantages, including the possibility of computing doses at a point. In addition, the adjoint function can be used as an optimum importance function for biasing in forward Monte Carlo. Among others, the NOVICE tool [Jo76][No00] and AMC [Di96] include algorithms for adjoint calculations in 1-D and 3-D geometries.
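The equivalence between Eqs. (24) and (26) is easy to verify on a discretized toy version of the problem, where the integral kernel becomes a matrix. The sketch below, with an arbitrary sub-critical kernel chosen purely for illustration, solves the forward and adjoint equations directly and shows that they yield the same functional F.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5
# Arbitrary sub-critical collision kernel: K[p, q] ~ K(q -> p), spectral radius < 1.
K = 0.1 * rng.random((n, n))
S = rng.random(n)            # source density S(P)
f = rng.random(n)            # detector response f(P)
I = np.eye(n)

chi = np.linalg.solve(I - K, S)   # forward:  chi = S + K chi      (Eq. 23)
F_forward = f @ chi               # functional F = sum f(P) chi(P)  (Eq. 24)

J = np.linalg.solve(I - K.T, f)   # adjoint:  J = f + K^T J        (Eq. 25)
F_adjoint = J @ S                 # F = sum J(P) S(P)              (Eq. 26)

print(F_forward, F_adjoint)       # identical up to round-off
```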
Figure 27 Diagram of the reverse Monte Carlo technique: particles are tracked, starting at the detector, backward in time and with increasing energy, until they reach the external source, where the scoring takes place.
III. In Depth: Monte Carlo Techniques
A. General Concepts
Monte Carlo (MC), also known as stochastic simulation, is a generic computation method that makes use of random numbers in its algorithm for the solution of a mathematical problem. Statistical computation methods have been used since the 18th century: Buffon (1707-1788) obtained an estimate of the constant π by observing the random positions of a needle dropped on a grid.
For a needle of length ℓ equal to the line spacing d:

π ≈ 2 Nthrow / Ncross
Figure 28 Drawing showing the stochastic algorithm used by Buffon in his experiment for the determination of the constant π.
In contrast to computational discretization methods, which are typically applied to the ordinary or partial differential equations that describe the underlying physical or mathematical system, in MC methods the physical process is typically simulated directly, and there is no need to even write down the differential equations that describe the behavior of the system. The only requirement is that the physical (or mathematical) system be described by probability density functions (PDFs) [Co95]. Statistical sampling of this kind was used by Enrico Fermi in the 1930's to study the moderation of neutrons, and the modern MC method was further developed by Stanislaw Ulam at the Los Alamos Laboratory during the development of the hydrogen bomb after World War II. The name originates from Nick Metropolis, who suggested it for the similarity to the randomness of the results in the gambling casinos of Monaco. MC methods are often used to describe the behavior of stochastic systems, as in statistical physics, where they help to overcome the complexity given by the large number of degrees of freedom. However, MC methods are also used for the calculation of deterministic processes, when the complexity of the mechanisms involved makes analytical solutions impossible or computationally unrealistic.
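A few lines of code reproduce Buffon's experiment. The sketch below drops needles of length equal to the line spacing and recovers π from the throw/cross ratio; note that, purely for convenience, the needle angle is sampled using math.pi, so this is a pedagogical illustration rather than a genuine measurement of π.

```python
import math
import random

def buffon_pi(n_throws, seed=42):
    """Estimate pi by dropping needles of length l = d on lines spaced d apart.
    A needle crosses a line when y <= (l/2) sin(theta), with y the distance of
    the needle center to the nearest line. For l = d: pi ~ 2 * N_throw / N_cross."""
    rng = random.Random(seed)
    d = 1.0                                      # line spacing = needle length
    crosses = 0
    for _ in range(n_throws):
        y = rng.uniform(0.0, d / 2.0)            # center-to-line distance
        theta = rng.uniform(0.0, math.pi / 2.0)  # needle angle
        if y <= (d / 2.0) * math.sin(theta):
            crosses += 1
    return 2.0 * n_throws / crosses

print(buffon_pi(1_000_000))   # ~3.14, with per-mille-level statistical fluctuation
```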
1. Monte Carlo for elementary particle transport
In radiation transport, real particle processes can be described by distribution functions that represent the probability for an interaction to occur, and the features of the physical state of its outcome.
The approach of MC algorithms for solving the transport problem is to draw random samples from the distribution functions to describe single particle interactions with matter and to choose among the allowed states after each interaction. A key element in Monte Carlo particle simulations is the correct balancing of the occurrence of each interaction type by random sampling. What follows is a simplified introduction to the application of the Monte Carlo technique to particle transport in matter; the detailed implementation of the algorithms can differ significantly from one Monte Carlo code to another. Given the probability density function (PDF) F(x) giving, for a given particle, the probability of interaction in the interval (x, x+dx) [Le94] as previously described, one can introduce the cumulative distribution function (CDF) Pint(x):
Pint(x) = ∫ F(x′) dx′ = 1 − exp(−wx)        (27)
and generate an interaction using the inverse method:

η = 1 − exp(−wx) ;   x = − ln(1 − η) / w        (28)

where η is uniformly distributed in the interval [0,1]. In heterogeneous geometry models, the interaction cross sections, and thus the final interaction probability, depend on the material. The mean free path λ = 1/w can then be used to obtain a material-independent sampling:

x / λ = x w = − ln(1 − η)        (29)

The quantity x/λ is the number of interaction lengths in the given material. Random sampling is also used for the selection of the final state after an interaction, when this is not predetermined by physics constraints (e.g. conservation laws).
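In practice the inverse method of Eq. (28) is a one-liner. The sketch below samples interaction points for particles crossing a stack of layers, using the material-independent form of Eq. (29): a number of interaction lengths is drawn once and then consumed layer by layer against each layer's mean free path. All thicknesses and cross sections are placeholders.

```python
import math
import random

rng = random.Random(7)

def sample_free_path(mean_free_path):
    """Inverse-CDF sampling of the exponential path-length distribution, Eq. (28):
    x = -ln(1 - eta) / w, with w = 1 / lambda and eta uniform in [0, 1)."""
    eta = rng.random()
    return -mean_free_path * math.log(1.0 - eta)

def distance_to_interaction(layers):
    """Heterogeneous version, Eq. (29): draw the number of interaction lengths
    n_lambda = -ln(1 - eta) once, then spend it across layers of different
    mean free path. `layers` is a list of (thickness_cm, lambda_cm) tuples."""
    n_lambda = -math.log(1.0 - rng.random())
    travelled = 0.0
    for thickness, lam in layers:
        if n_lambda * lam <= thickness:
            return travelled + n_lambda * lam   # interaction in this layer
        n_lambda -= thickness / lam             # layer crossed without interacting
        travelled += thickness
    return None                                 # particle escapes the stack

# Placeholder stack: 1 cm with lambda = 2 cm, then 3 cm with lambda = 0.5 cm.
print(distance_to_interaction([(1.0, 2.0), (3.0, 0.5)]))
```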
2. Random generators
MC algorithms rely on the availability of random number sequences: in deterministic computations, randomness is essential to cover the parameter phase space evenly, whereas in the simulation of stochastic processes (such as particle transport or quantum physics calculations) random numbers mimic the stochastic behavior of the state functions. The generation of long random number series is a very active research field (see for example [Ja90] for a review of random engines). It is worth noting that in practice, for many MC algorithms, absolute randomness of the series is not a strict requirement; on the contrary, the use of predictable pseudo-random numbers (which can be reproduced knowing the generation algorithm and its input parameters or seed state) helps with the reproducibility and debugging of simulation results. The quality of a generator can be measured in a variety of ways, although its repeat interval (period) alone is a useful index of the applicability of the engine. Quoting Robert R. Coveyou (Oak Ridge National Laboratory): "The generation of random numbers is too important to be left to chance."
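A trivial but practically important consequence: with a pseudo-random engine, recording the seed of a problematic event is enough to replay exactly the same particle history. A minimal illustration, with simulate_event standing in for a full MC event loop:

```python
import random

def simulate_event(seed):
    """Stand-in for a full MC event; returns the first few sampled numbers."""
    rng = random.Random(seed)
    return [rng.random() for _ in range(3)]

# Re-running with the recorded seed reproduces the history exactly,
# which is what makes rare anomalies debuggable.
assert simulate_event(12345) == simulate_event(12345)
```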
B. Variance Reduction
Depending on the detail required in the physics description and in the geometrical models, and on the characteristics of the radiation sources, MC calculations may in particular cases be very demanding in terms of computational resources. Variance reduction techniques (VRT) aim to reduce the computing time while keeping the mean value of an estimator constant and reducing its variance. In "analog" simulations, the possible contributions to the estimator of an observable occur with the same frequencies as they do in nature. On the contrary, in "biased" simulations, the contributions that are important to the estimator are sampled more often than the less important ones, and weights are associated with tracks to compensate. Variance reduction techniques can be classified in four groups [Br00] (a weight-bookkeeping sketch for group b is given after Figure 29): a) Truncation methods (e.g. energy, time, geometry cutoff); b) Population control methods (such as geometry splitting and Russian roulette, energy splitting/roulette, weight cutoff, weight window): many samples of low weight are tracked in important regions, while few samples of high weight are tracked in unimportant regions; c) Modified sampling methods (exponential transform, implicit capture, forced collisions, source biasing): sampling from an arbitrary distribution rather than the physical probability, as long as the particle weights are then adjusted to compensate; d) Partially deterministic methods (next event estimators, controlling the random number sequence): controlling the normal random walk process through deterministic-like sequences.
Figure 29 Variance reduction techniques: diagram showing “splitting” and “Russian roulette” geometry biasing algorithms, applied to example particles traveling from a region m to a region n with greater importance, or from region n to region m with lesser importance.
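The weight bookkeeping behind the splitting and roulette algorithms of Figure 29 is compact. The sketch below is illustrative only, with the region "importances" as free parameters: a track entering a more important region is split, a track entering a less important one plays Russian roulette, and the estimator stays unbiased because the expected total weight is conserved at the boundary.

```python
import random

rng = random.Random(0)

def cross_boundary(weight, imp_from, imp_to):
    """Geometry splitting / Russian roulette at a boundary between regions of
    importance imp_from and imp_to. Returns the list of track weights to follow.
    The expected total weight equals the incoming weight, so any mean tally
    is unchanged while the variance in important regions is reduced."""
    r = imp_to / imp_from
    if r >= 1.0:
        # Splitting: on average r copies, each of weight w/r
        # (the fractional part of r is handled stochastically).
        n = int(r) + (1 if rng.random() < r - int(r) else 0)
        return [weight / r] * n
    # Russian roulette: survive with probability r, carrying weight w/r.
    return [weight / r] if rng.random() < r else []

# A track of weight 1 entering a region 4x more important is split into ~4 tracks:
print(cross_boundary(1.0, imp_from=1.0, imp_to=4.0))
```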
C. Interfaces
As briefly described in the first sections, radiation transport tools must be able to model a number of particle species over a wide range of energies, and realistic geometry modeling is required for a correct description of effects on spacecraft components, which are due to the local radiation environment. In addition, usability requirements often include visualization capabilities, friendly command interfaces and easy-to-use computation results. The challenge of developing a MC tool meeting all requirements is normally alleviated by modular designs allowing easy interfaces to internal and external packages for physics and geometry modeling, pre- and post-processing tasks, or visualization. Several examples of such interfaces can be found in the literature for the accomplishment of several tasks, including physics modeling [Ko03][Fa01], geometry [Ch01], and pre- and post-processing (such as TCAD interfaces [Ho05]).
Depending on the software design and on the software technology used for the implementation of the MC tool, interfacing to external packages can require very different amounts of resources.
D. Output: Tallies
Depending on the level of access to the source code, on the software design, and on the transport algorithms used in the tools, different methods and options are available for extracting from the simulations information about the radiation transport itself or about the effects in modeled sensitive devices. In general, the accuracy of the MC results depends on the accurate description of the entire simulation chain, including the geometry model, the radiation source, the transport models and the response modeling. In addition, as discussed in several sections of this paper, statistics can in specific cases limit the precision of the tallies. Some MC tools implement algorithms providing an estimate of the precision of the simulation results [Br00]. Common MC outputs for studies of performance degradation in scientific detectors, commercial payload components or service elements such as solar arrays include cumulative quantities such as total ionizing dose (TID) or non-ionizing energy loss (NIEL) and fluence spectra. NIEL analyses can make use of local microscopic energy deposition tallies or of fluence-based estimates relying on external conversion tables. Microscopic NIEL calculation is only possible in cases where a complete implementation of all relevant particle interactions with atomic nuclei is available (such as single screened Coulomb scattering, or elastic and inelastic neutron nuclear collisions). For the macroscopic approach, which produces NIEL based on the local radiation fluence, several NIEL coefficient tables are available in the literature. They have been obtained with different methods and for different semiconductor technologies, by the CERN RD48 (ROSE) collaboration for protons, neutrons, electrons and pions. New curves for damage estimates based on calculations in several semiconductor materials (including Si, GaAs, InP) [Su93] and a new collection of NIEL curves computed for typical device materials for protons and neutrons [Ju03] have also been made available.
Figure 30 Fluence-to-NIEL coefficients are available from the literature for different incident particles and different materials. (Left) Calculations for protons after [Ju03] (Right) On-line compilation of coefficients for neutrons [Va00] based on [Gr92][Ko92][Hu93].
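In the macroscopic approach, the displacement damage dose is obtained by folding the local differential fluence spectrum with a tabulated NIEL coefficient curve such as those of Figure 30. A schematic numerical version follows; all spectra and coefficients below are placeholders, not values from [Ju03].

```python
import numpy as np

# Placeholder proton energy grid (MeV), differential fluence (p/cm^2/MeV)
# and NIEL coefficients (MeV cm^2 / g) for the device material.
E       = np.array([1.0, 3.0, 10.0, 30.0, 100.0, 300.0])
dphi_dE = np.array([1e9, 4e8, 1e8, 3e7, 8e6, 1e6])
niel    = np.array([5e-2, 2e-2, 8e-3, 4e-3, 2.5e-3, 2e-3])

# Displacement damage dose: D_d = integral of NIEL(E) * dphi/dE over E (MeV/g).
D_d = np.trapz(niel * dphi_dE, E)
print(f"Displacement damage dose: {D_d:.3e} MeV/g")
```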
The same two options (microscopic and macroscopic) are available for the calculation of LET spectra from locally transported radiation fields, through the evaluation of the local energy deposit in the sensitive devices or via conversion from fluence, based on particle-, material- and energy-dependent LET tables. Requirements from exploration mission radiation analyses include the possibility of obtaining biology-related tallies such as dose equivalent, equivalent dose, effective dose or other quantities that can then be further elaborated in risk-assessment studies. Dose-equivalent [Ic91][Ic03] calculations take into account the Relative Biological Effectiveness (RBE) of radiation as a function of particle type and energy through the use of quality factors (QF) applied to the local ionizing energy deposition. The Q(L) relationship between the QF and the LET can be implemented based on the ICRP 60 recommendations [Ic91]. Equivalent-dose estimates require more complex tallying algorithms, as global radiation weighting factors (wR) are applied depending on the external incident field type and energy. The values adopted in [Ic91] are being re-appraised and new factors have been proposed in [Ic03]. Effective dose requires dose tallying in several sensitive organs in human phantoms, which is often impractical because of limitations in the MC tool geometry capabilities, or simply because of the computing resources required to achieve sufficient statistical significance of the results. To overcome this problem, local transported radiation fluence spectra can be convolved with pre-computed effective-dose conversion coefficients [Pe00], in a way similar to the macroscopic NIEL algorithm previously presented. Specific tallies may also be used in advanced, dedicated tools, for example for the direct evaluation of effects on components (e.g. SEE or damage coefficients), with interfaces to external models.
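For dose-equivalent tallies, each local energy deposition is weighted by the quality factor. The sketch below applies the ICRP 60 Q(L) relationship [Ic91] to a list of (LET, deposited energy) samples; geometry and unit handling are reduced to the bare minimum, and the numbers are placeholders.

```python
def q_icrp60(L):
    """ICRP 60 quality factor as a function of unrestricted LET L (keV/um)."""
    if L < 10.0:
        return 1.0
    if L <= 100.0:
        return 0.32 * L - 2.2
    return 300.0 / L ** 0.5

def dose_equivalent(hits, mass_kg):
    """Dose equivalent H = sum_i Q(L_i) * E_i / m, with E_i in joules.
    `hits` is a list of (LET_keV_per_um, energy_J) tuples from the tally."""
    return sum(q_icrp60(L) * E for L, E in hits) / mass_kg

# Two sample energy deposits in a 1 g sensitive volume (placeholder numbers):
print(dose_equivalent([(0.2, 1e-9), (40.0, 2e-10)], mass_kg=1e-3), "Sv")
```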
IV. Radiation Transport Tools
The following sections describe some of the many Monte Carlo tools that have been implemented since Fermi's pioneering developments. The selection was based on the advancement in the description of the physics processes and of the geometry models, and in addition on existing applications in the space domain.
A. Historical Tools
While later in this section we give a brief description of some of the MC tools in use nowadays, it is worth mentioning again E. Fermi, who in the 1930's used Monte Carlo in the calculation of neutron diffusion, and later designed the Fermiac, a Monte Carlo mechanical device used in the calculation of criticality in nuclear reactors. Also of great importance were the studies by von Neumann, who in the 1940's developed a formal foundation for the Monte Carlo method, establishing the mathematical basis for probability density functions (PDFs), inverse cumulative distribution functions (CDFs), and pseudo-random number generators. The work was done in collaboration with Stanislaw Ulam, who realized the importance of the digital computer in the implementation of the approach.
Figure 31 (Left) The Fermiac, a Monte Carlo mechanical device used in the calculation of criticality in nuclear reactors. (Right) Von Neumann standing in front of the Institute computer (courtesy of the Archives of the Institute for Advanced Study, Princeton).
1. ETRAN / ITS
The ETRAN code [Be73] implements the results of the earlier studies by Berger (see for example [Be68a][Be68b]) on the Monte Carlo simulation of the multiple Coulomb scattering of fast charged particles, to solve electron and photon transport problems. The "condensed-history" algorithm was developed as an alternative to the direct simulation of single physical scattering processes, which occur in very large numbers even over short path lengths. Electrons and protons interacting with matter undergo an enormous number of collisions resulting in small energy losses and deflections, and a relatively small number of "catastrophic" collisions in which they may lose a major fraction of their energy or may be turned through a large angle [Be63]. The ETRAN code provides the resulting algorithms for the description of this complex process of diffusion and energy degradation.
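The essence of the condensed-history idea can be conveyed in a few lines: instead of simulating each of the millions of individual Coulomb collisions, the net deflection over a macroscopic step is drawn from a multiple-scattering distribution and the mean energy loss is applied. The sketch below is a deliberately crude stand-in, using a Gaussian small-angle model with the (later) Highland width parameterization and continuous-slowing-down energy loss; it is not the Goudsmit-Saunderson machinery actually used by ETRAN, and all step parameters are placeholders.

```python
import math
import random

rng = random.Random(3)

def highland_theta0(p_MeV, beta, z, step_over_X0):
    """Highland estimate of the multiple-scattering angular width (radians)
    for a particle of charge z and momentum p (MeV/c) over step/X0 radiation lengths."""
    t = step_over_X0
    return (13.6 / (beta * p_MeV)) * abs(z) * math.sqrt(t) * (1.0 + 0.038 * math.log(t))

def condensed_step(theta, p_MeV, beta, dE_MeV_per_step, step_over_X0):
    """One condensed-history step: apply mean (CSDA) energy loss and fold a
    sampled net deflection into the running polar angle (2-D toy model)."""
    theta0 = highland_theta0(p_MeV, beta, z=1, step_over_X0=step_over_X0)
    theta += rng.gauss(0.0, theta0)
    p_MeV -= dE_MeV_per_step   # crude: momentum ~ energy in this toy model
    return theta, p_MeV

# Ten steps of a ~100 MeV/c particle through thin slabs (placeholder numbers):
theta, p = 0.0, 100.0
for _ in range(10):
    theta, p = condensed_step(theta, p, beta=0.9, dE_MeV_per_step=1.0,
                              step_over_X0=0.01)
print(theta, p)
```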
Details of the Monte Carlo model and of the cross sections used can be found in Berger and Seltzer [Be68a, Be68b, Be70, Be74, Be88], along with numerous comparisons to experimental results. The ETRAN bremsstrahlung results, based on the use of a set of empirically corrected Bethe-Heitler bremsstrahlung cross sections, were adjusted to reflect the exact calculations of the bremsstrahlung production cross section by Pratt et al. [Pr77]. The physics modeling expertise of ETRAN was used in the integration of the Integrated TIGER Series (ITS) Monte Carlo tool [Ha92]. ITS can be applied to the solution of linear, time-independent, coupled electron/photon radiation transport problems, also in the presence of non-uniform electric and magnetic fields, with variance reduction techniques, a user-friendly interface and several predefined output options. As the development of ITS is continuing, the tool could also deserve a place in the section on "Present" transport tools.
Figure 32 (Left) Typical particle trajectories in a foil [Be63]. (Right) Energy-pathlength plot of a hypothetical electron case history. The solid curve corresponds to a Monte Carlo model of Class II with catastrophic collisions, resulting in the occurrence of secondary knock-on electrons (delta rays). The dotted curve corresponds to the continuous-slowing-down approximation [Be63][Sc59].
2. SHIELDOSE
SHIELDOSE [Se80] and its successor SHIELDOSE-2 [Se94] are probably the most widely used tools for the estimation of radiation dose behind various shielding configurations in spacecraft. The tool utilizes a look-up table approach. The data for electrons were calculated with the Monte Carlo code ETRAN, described earlier in this section. The treatment of protons was limited to Coulomb interactions and neglected nuclear interactions; the error incurred by this simplification is generally no greater than 10-20% for shields up to about 30 g cm-2. The proton calculations were done in the straight-ahead, continuous-slowing-down approximation using the stopping power and range data of Barkas and Berger [Ba64]. Alsmiller et al. [Al69] have shown that the effect of neglecting angular deflections and range straggling is negligible in space-shielding calculations. SHIELDOSE-2 [Se94] differs from SHIELDOSE mainly in that it contains new cross sections, supports several new detector materials, and has a better treatment of proton nuclear interactions. The electron calculations obtained with the Monte Carlo code ETRAN include: 1. electron energy loss, including energy loss straggling (fluctuations) due both to multiple inelastic scattering by atomic electrons and to the emission of bremsstrahlung photons;
2. angular deflections of electrons due to multiple elastic scattering by atoms; 3. penetration and diffusion of the secondary bremsstrahlung photons; 4. penetration and diffusion of energetic secondary electrons produced in electron-electron knock-on collisions (delta rays) and in the interaction of bremsstrahlung photons with the medium (pair, Compton, and photoelectrons). The two versions of SHIELDOSE are implemented in the SPENVIS [He00] web-based framework. In summary, SHIELDOSE is fast and accurate for relatively thin shields, but has some major limitations: it handles essentially one-dimensional (spherical, planar) geometries; electron transport is based on planar one-dimensional simulations; proton-induced secondary particle effects introduced by the shield are not explicitly treated; and it is only applicable to aluminum shielding and certain types of detector material.
Figure 33 SHIELDOSE geometry configurations (finite-thickness slab, semi-infinite medium, solid sphere, hollow sphere)
B. Present
1. GEANT4
Geant4 [Ag03] is an open-source, object-oriented simulation toolkit that offers a wide set of electromagnetic and hadronic physics models, good performance of the particle transport in complex geometry models, and the possibility of interfacing to external packages such as simulation engines and visualization or analysis tools. While it was developed in the context of High Energy Physics (HEP) experiments, the same attention has been paid to nuclear physics, space applications, medical physics, astrophysics and radio-protection. Geant4 will be discussed in more detail in later sections.
2. MCNPX
MCNPX [Br00][Pe05], a general Monte Carlo N-Particle transport code, represents a major extension of the MCNP code, adding the ability to track all types of particles. The code enables a relatively easy specification of complex geometries and sources. The default cut-off energies are 1 keV for photons and electrons. For photons, coherent scattering is considered. Secondary electron transport can be treated with the "Thick-Target Bremsstrahlung" (TTB) model, which entails immediately stopping the secondary electrons and tracking the locally produced bremsstrahlung photons. For electron transport, MCNPX uses the "condensed history" Monte Carlo method from ITS 3.0.
3. FLUKA
FLUKA [Fa01] is a general-purpose tool for calculations of particle transport and interactions with matter, covering an extended range of applications spanning from proton and electron accelerator shielding to target design, calorimetry, activation, dosimetry, detector design, Accelerator Driven Systems, cosmic rays, neutrino physics, radiotherapy, etc. The physics description includes models for ions at high energy, with an interface to the DPMJET code (>5 GeV/n) and to the Relativistic Quantum Molecular Dynamics (RQMD) code at lower energies. Access to the source code is limited.
4. PHITS
The Particle and Heavy Ion Transport code System (PHITS) [Iw02] is a relatively recent development. It is based on NMTC/JAM [Ni01a] and can simulate hadron-nucleon collisions up to 200 GeV, nucleus-nucleus collisions up to several GeV/nucleon, and the transport of heavy ions, all hadrons including low-energy neutrons, and leptons. Cross sections of high-energy hadron-nucleus reactions are calculated by the hadronic cascade model JAM (Jet AA Microscopic transport model) [Na01], which explicitly treats all established hadronic states and resonances. The JQMD (JAERI Quantum Molecular Dynamics) model [Ni95] was integrated in the code to simulate nucleus-nucleus collisions. In the particle transport simulation, the SPAR code [Ar73] is adopted for calculating the stopping powers of charged particles and heavy ions. PHITS can also deal with low-energy neutron, photon and electron transport based on evaluated nuclear data libraries, in the same manner as the MCNP4C [Br00] code.
5. NOVICE
The NOVICE code system [No00][Jo98][Jo76] calculates radiation effects in three-dimensional models of space systems. NOVICE can also be used for other radiation transport and shielding analyses not related to space activities. The algorithms contained in NOVICE have been proven in more than three decades of applications; in fact, some algorithms were developed in their original form in the early 1960's. One of the main features of the NOVICE system is the possibility of running reverse ("adjoint") Monte Carlo transport of electrons, bremsstrahlung, protons, and other heavy ions. Outputs include dose, charging, current, and any user-supplied response functions. A major option provides for the calculation of pulse height spectra, with coincidence/anti-coincidence logic. These data can be used for upset/latchup predictions in arbitrary sensitive volume geometries. Geometry models from CAD tools can be imported and used for radiation transport.
The tool has a relatively wide user community in research institutes and the space industry, despite limited access to the source code.
6. PENELOPE
PENELOPE [Sa01][Se97][Se03] is a general-purpose Monte Carlo code system for the simulation of coupled electron-photon transport in arbitrary materials and in the energy range from a few hundred eV to about 1 GeV. Photon transport is simulated by means of the standard, detailed simulation scheme. Electron and positron histories are generated on the basis of a mixed procedure, which combines detailed simulation of hard events with condensed simulation of soft interactions. In addition, a geometry package called PENGEOM permits the generation of random electron-photon showers in material systems consisting of homogeneous bodies limited by quadric surfaces, i.e. planes, spheres, cylinders, etc.
7. EGS
The EGSnrc system [Ka00] is a package for the Monte Carlo (MC) simulation of coupled electron-photon transport. Its current energy range of applicability is considered to be 1 keV - 10 GeV. EGSnrc is an extended and improved version of the EGS4 package [Ne85] originally developed at SLAC. It incorporates many improvements in the implementation of the condensed history technique for the simulation of charged particle transport, and better low-energy cross sections.
8. BRYNTRN / HZETRN
Commonly used particle transport programs using the analytical or deterministic approach include the NASA BRYNTRN and HZETRN codes [Wi89][Wi95], which have been extensively used for manned space applications. The present version of the HZETRN code (which incorporates the galactic cosmic ray transport code GCRTRN and the nucleon transport code BRYNTRN) is capable of HZE ion simulations in either the laboratory or the space environment. The computational model consists of the lowest-order asymptotic approximation followed by a Neumann series expansion with non-perturbative corrections. The physical description includes energy loss with straggling, nuclear attenuation, and nuclear fragmentation with energy dispersion and downshift [Tw05]. Recent papers present the validation of the ion transport against measurements with iron ions [Wa05b]. The development of an extension to three dimensions has recently been presented [Wi04].
9. SRIM / TRIM
SRIM [Zi85][Sr03] is one of the reference tools for the calculation of atomic displacement induced by ions. It is in fact a group of programs that calculate the stopping and range of ions (up to 2 GeV/amu) in matter using a quantum mechanical treatment of ion-atom collisions. The physics models include screened Coulomb collisions, with exchange and correlation interactions between the overlapping electron shells. The ion also has long-range interactions creating electron excitations and plasmons within the target; these are described by including a description of the target's collective electronic structure and inter-atomic bond structure when the calculation is set up. The charge state of the ion within the target is described using the concept of effective charge, which includes a velocity-dependent charge state and long-range screening due to the collective electron sea of the target.
TRIM (the Transport of Ions in Matter), included in SRIM, can be applied to complex targets made of compound materials with up to eight layers, each of different materials, to calculate the final 3-D distribution of the ions and all kinetic phenomena associated with the ion's energy loss: target damage, sputtering, ionization, and phonon production. All target atom cascades in the target are followed in detail.
C. Future
Predicting the evolution of the field of radiation transport techniques is a difficult task, as it is linked to many factors. These include both scientific progress on transport processes, as there are several areas in which the present knowledge is certainly incomplete, and technological aspects, mainly related to computational resources and methods. The increased computing power available on single machines, combined with distributed computing (GRID), will make MC calculations affordable in locations, fields and subjects where previously only analytical, often approximate, approaches could give an answer in a reasonable time. The application of MC techniques to the medical field is opening the field to entirely new, large user communities. This will certainly inject new requirements and fresh resources into the development of next-generation MC-based transport tools. Among the needs of the medical community, accurate near-real-time dosimetry has a high priority for the optimization of radiotherapy techniques and protocols. Interfaces from radiation transport modules based on advanced MC tools to and from CAD / TCAD geometry and analysis models, and integration with pre- and post-processing modules within friendly (graphical) user interfaces, will have an impact on the usability in the engineering community of MC techniques that are based on highly advanced fundamental science. First interesting examples are offered by the user success of the MULASSIS integration in the SPENVIS framework [He00] and the recent development of the RADSAFE framework [Ho05][Wa05a]. From this point of view too, MC tools based on advanced software technologies have a clear advantage. Transport methods based on non-Monte Carlo techniques, such as the deterministic solution of the Boltzmann transport equations in finite element models [Bo05], will probably continue to coexist with the stochastic simulations, despite the continuous dramatic increase in available computing resources. Finally, the next years will definitely see the extension of the applicability of particle MC tools to areas not traditionally covered by single-particle transport applications, at the border of, or partially overlapping with, fields such as plasma simulations or bio-molecular dynamics. In this respect, the challenging developments in the physics models will need to find the right balance between the description of processes from basic principles and data-driven models, and will need to interface to external dedicated resources. Examples of this are extreme low-energy transport in condensed matter and its impact on DNA molecular dynamics [Ni01b], or intra-nuclear processes where the knowledge is incomplete and ad-hoc solutions are proposed on a case-by-case basis.
V. GEANT4 Applications and Physics Validation for Space Environment Analyses
A. Introduction and Kernel
Geant4 [Ag03][Al06] is a toolkit for the simulation of elementary particles passing through and interacting with matter. Geant4 is the successor of Geant3 [Br87], the world-standard toolkit for high energy physics (HEP) detector simulation, and it is one of the first successful attempts to redesign a major package of HEP software for the next generation of experiments using Object-Oriented technologies. It builds on a rich experience of detector simulation accumulated over past decades. Since the beginning of its development, a variety of requirements have also been taken into account from heavy ion physics, CP-violation physics, cosmic ray physics, astro- and astroparticle physics, space science and engineering, and medical applications. The Geant4 simulation toolkit provides comprehensive geometry and physics modeling capabilities embedded in a robust but flexible kernel structure. The Geant4 kernel offers: a) particle tracking; b) geometry description and navigation in any kind of field; c) abstract interfaces to physics models; d) event management with a low-cost stacking mechanism for track prioritization; e) a variety of scoring options and flexible detector sensitivity description; f) several event biasing (variance reduction) options; g) command definition tools with powerful range-checking capabilities; and h) interfaces to visualization and GUI systems. Geant4 offers physics models that cover a diverse set of interactions over a wide energy range, from optical photons and thermal neutrons to high-energy reactions at the Large Hadron Collider (LHC) and in cosmic ray experiments. In many cases, alternative models covering the same physics interaction in the same energy range are offered, from which the user can choose depending on the required physics accuracy and CPU performance. Thanks to the polymorphism mechanism of Object-Orientation, the user can easily add or substitute some of the physics models without affecting the other models. The Geant4 simulation toolkit is developed and maintained by the international Geant4 collaboration. All of the Geant4 source code, documents, examples for various levels of users from novice to most advanced, and associated data files may be freely downloaded from the collaboration's web page. The Geant4 collaboration offers an extensive user-support process, including users' workshops and tutorials, the "HyperNews" forum e-mail services, requirement tracking, problem reporting, and public users' meetings named Technical Forum for the formal collection of user requirements.
B. Geometry
The Geant4 kernel has a wide variety of built-in solid shapes, from the simplest Constructed Solid Geometry (CSG) shapes such as boxes and tube segments, through more complicated CSG shapes such as twisted trapezoids and tori. It also has Boundary-Represented (BREP) solids, so that the user can define shapes with arbitrary surfaces including planar, 2nd or higher order, Spline, B-Spline and NURBS (Non-Uniform Rational B-Spline) surfaces. Furthermore, Boolean operations are provided to combine CSGs into more complex shapes. To place solids, in addition to simple placement, Geant4 offers various options to reduce the memory required for the most complicated and realistic geometries. These options include the so-called parameterized volume, in which just one object of this class may represent many volumes of different positions, rotations, sizes, shapes and materials. The Geant4 geometry navigator automatically optimizes the user's geometry to ensure the best navigation performance [Co03]. In addition, through abstract interfaces, the user can easily customize the navigator to fit a particular geometry. The figures below show sample geometries implemented with Geant4 for space applications, ranging from planetary scale, to detailed satellite structures, down to micro-geometries of semiconductor devices.
Figure 34 Geant4 geometry model of interplanetary space: particle trajectories in the Earth's magnetosphere (IGRF & Tsyganenko89 models, January 1st 1982) simulated by Geant4 [De03] [Ma05]. Courtesy Laurent Desorgher, University of Bern
Figure 35 Geant4 geometry model of satellite structures and payloads: International Space Station and detailed ESA Columbus models [Er04].
Figure 36 Geant4 geometry model of micro-electronic components: 3-D schematic diagram illustrating the candidate SRAM memory technology used in simulations, complete with overlayers (Al interconnects, polysilicon, tungsten plugs, and bulk silicon) [Ba06].
Geant4 calculates the curved path of a particle trajectory in a field by integrating the equation of motion. Through abstract interfaces, Geant4 offers magnetic, electric and electromagnetic fields and equations of motion for these fields. Through the same abstract interfaces the user can also implement other types of field (such as gravitation). Saving the description of a geometrical setup is a typical requirement of many experiments, as it makes it possible to share the same geometry model across various software packages. The Geometry Description Markup Language (GDML) [Ch01] and its module for interfacing with Geant4 have been extended to facilitate a geometry description based on common tools and standards. A new module enables the user to save a Geant4 geometry description held in memory by writing it to a text file in extensible markup language (XML) [Po06].
C. Physics Processes
In the same way that users may build complex geometries from basic shapes, Geant4 allows detailed physics suites to be assembled from basic, independent physics processes. These processes are classified into three broad areas: electromagnetic, decay, and hadronic.
Figure 37 Simplified diagram of the coverage of the Geant4 physics models, compared to the energy range of the main radiation sources and species in the space environment.
1. Electromagnetic
The Geant4 electromagnetic processes cover a range of interaction energies from 100 eV up to about 1 PeV. Most important for space radiation transport are the processes involved in shower production and propagation. These include multiple scattering, ionization, bremsstrahlung, pair production, the photoelectric effect and Compton scattering. Other processes offered by Geant4 include Rayleigh scattering, Cherenkov radiation, scintillation, transition radiation, and synchrotron radiation. The multiple Coulomb scattering model used in Geant4 belongs to the class of condensed simulations. Rather than simulating in detail each individual scattering and displacement, the net effect of several scatterings is modeled for a given step length. The modeling is based on the Lewis charged particle transport theory [Le50], and the final displacement and angle are calculated for each tracking step. Multiple scattering may be applied to all charged particles. Ionization and bremsstrahlung processes have been developed especially for electrons and positrons. Both processes contribute to energy loss, which is divided into "discrete" and "continuous" regimes. Above a given energy (corresponding to the secondary production threshold discussed below), bremsstrahlung proceeds by the emission of hard gammas. Below this energy, continuous energy loss simulates the exchange of low-energy virtual photons. Similarly for ionization, energy loss for particles above a given energy proceeds by Møller [Mø32] or Bhabha [Bh36] scattering from atomic electrons. Below this energy, continuous energy loss is calculated.
Similar ionization and bremsstrahlung processes have been developed for muons. They have been specialized for the effects of the greater muon mass and use different parameterizations for energy loss. For space applications, the ionization of hadrons and ions is especially important. Geant4 has a specialized ionization process for hadrons such as protons and pions, and one for ions. Both processes use the Bethe-Bloch model for energy loss until the energy of the Bragg peak is approached. At that point they employ a specific Bragg model. The Bragg model is tuned differently for the hadron and ion ionization processes in order to take into account the large differences in projectile masses and charges.
a) Delta ray production and tracking thresholds
When simulating electromagnetic showers, or any process which produces large numbers of secondary particles, the user must always decide at which energy to stop tracking particles. In many Monte Carlo codes, this energy is used as a cutoff at which all particles, including the primary, are stopped; the remaining energy is then deposited at the stopping point. This can lead to unphysical peaks in energy deposition for highly segmented or small-dimension applications. The optimum cutoff energy will also vary from material to material, which can be inconvenient if there are many materials in the simulation. Geant4 addresses both of these problems by introducing a secondary production threshold instead of a cutoff. The threshold is a distance, or range, which is the same throughout the simulation geometry regardless of the material. In each material this distance is converted to an equivalent energy, which is then used to decide whether secondary particles are produced. If a primary particle can produce a secondary which would travel more than this distance in the medium, the secondary is produced and the primary continues with reduced energy. If the produced secondary would not be able to go this far, no particle is produced; in this case the primary is still tracked, but mean energy loss is used to determine how the particle loses energy. In this way the primary is tracked down to zero energy, when it stops.
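The mechanics of a range-based threshold can be sketched as follows: the single cut distance is converted, per material, to an energy by inverting a range(E) table, and a secondary is produced only if its range exceeds the cut. The range tables below are invented monotonic placeholders for illustration, not Geant4 data.

```python
import numpy as np

# Hypothetical electron range tables: energy (MeV) -> CSDA range (mm).
RANGE_TABLES = {
    "silicon":  (np.array([0.01, 0.1, 1.0, 10.0]), np.array([2e-3, 0.14, 2.2, 20.0])),
    "tungsten": (np.array([0.01, 0.1, 1.0, 10.0]), np.array([4e-4, 0.03, 0.5, 5.0])),
}

def production_threshold(material, cut_mm):
    """Convert the global range cut to a per-material energy threshold by
    inverting the (monotonic) range(E) table with log-log interpolation."""
    E, R = RANGE_TABLES[material]
    return float(np.exp(np.interp(np.log(cut_mm), np.log(R), np.log(E))))

# The same 0.1 mm cut maps to different energy thresholds in different materials:
for m in RANGE_TABLES:
    print(m, production_threshold(m, 0.1), "MeV")
```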
b) Low energy processes
In many cases, details like atomic shell structure, screening and ionization potentials are of critical importance. This is especially true for incident particles of 1 MeV and below. For this reason Geant4 provides two sets of electromagnetic processes. One is meant to cover energies from 1 keV up to about 1 PeV. The other, called the low energy processes, covers the range from 250 eV up to 100 GeV for electrons. Both sets take into account atomic shell structure and ionization potentials, but the low energy processes include more detail, which in large part is taken from the evaluated data libraries EPDL97 [Cu97], EEDL [Pe97a], and EADL [Pe97b]. Low energy versions of the following processes are available: electron ionization, electron bremsstrahlung, hadron ionization, Compton scattering, photoelectric effect, pair production, and Rayleigh scattering. Another option for electromagnetic physics between 100 eV and 1 GeV is the Penelope [Sa01] set of processes, valid for positrons, electrons and gammas. Ion energy loss, which is of relevance for single event effects in space applications, is described by specialized models in the low energy package, based on ICRU-49, Ziegler 1977 [Zi77], or Ziegler 1985 [Zi85] (scaled by the effective ion charge using the Brandt-Kitagawa model [Br82]).
c) Optical photons
Although technically part of the electromagnetic sector, optical photons are treated uniquely in Geant4 because of their long wavelengths: wave-like behavior must be simulated in terms of particles.
As a result, quantities like polarization can be treated, but not the overall phase of the wave. Optical photons can be used as incident particles or be generated as secondaries from the scintillation and Cherenkov processes listed above. Four processes may be assigned to an optical photon: reflection/refraction, absorption in bulk material, Rayleigh scattering and wavelength shifting. Each of these processes requires some information to be specified about the material in which the photons travel, such as the index of refraction, the absorption length, the spectrum and timing of scintillation light, and various surface properties.
2. Decay
Geant4 provides for the weak and electromagnetic decays of long-lived, unstable particles, either at rest or in flight. Decay modes are chosen according to the branching ratios in the decay table of each particle. The final states of the decay modes are calculated according to several models, including V-A theory, Dalitz decay, or simple phase space. Users may assign specialized decay channels and lifetimes to these particles. Of more importance to radiation transport and effects is the radioactive decay process. It handles β−, β+ and α decay of nuclei as well as electron capture and isomeric transitions. It may be assigned to ions and may occur in flight or at rest. This process is closely connected to the hadronic processes discussed below and employs some of the nuclear evaporation models.
3. Hadronic processes and models
Geant4 provides four basic processes for hadronic interactions: elastic and inelastic scattering, capture and fission. The user may implement each of these four types with several available models. In space applications, where the energy range of interest covers many orders of magnitude, more than one model will be required for inelastic scattering, and perhaps for elastic scattering as well. This is because there is no single theory of hadronic interactions that applies to all energy ranges. Also, in some energy ranges there is more than one model available, so that the user may choose the one that works best in a given application.
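In practice a physics configuration stitches several hadronic models together over contiguous, often overlapping, energy intervals. The sketch below mimics this dispatch for inelastic scattering, with interval boundaries loosely taken from the ranges quoted in this section; it is a schematic of the idea, not Geant4's actual configuration interface.

```python
# (model name, E_min_GeV, E_max_GeV) for inelastic scattering; overlap regions
# reflect that neighboring models are both considered applicable there.
INELASTIC_MODELS = [
    ("pre-compound",             0.0,   0.2),
    ("Bertini / binary cascade", 0.1,  10.0),
    ("LEP (parameterized)",      5.0,  25.0),
    ("QGS / Fritiof strings",   25.0,  1e5),
]

def applicable_models(E_GeV):
    """Return the candidate models at a given projectile energy; a real
    physics list would resolve overlaps by user-defined preference."""
    return [name for name, lo, hi in INELASTIC_MODELS if lo <= E_GeV < hi]

print(applicable_models(0.15))   # ['pre-compound', 'Bertini / binary cascade']
print(applicable_models(15.0))   # ['LEP (parameterized)']
```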
Figure 38 Summary of Geant4 models for hadronic collisions and nuclear de-excitation (after [Tr04b])
a) Models for elementary particle projectiles
These models can be classified roughly according to the energy range to which they apply. At the highest energies, from about 25 GeV up to a few TeV, three models are available: high energy parameterized (HEP), quark-gluon string (QGS) and Fritiof fragmentation. At high energies, projectiles are sensitive to small-scale details within a nucleus; the projectile hadron is therefore likely to interact with a single proton or neutron. During this interaction, quark-antiquark pairs (quark strings) may be excited and then decay to produce more hadrons. Some of these hadrons will be produced outside the nucleus. Others may initiate a cascade with other particles within the nucleus. The QGS and Fritiof models are both theory-based and differ in the way that the quark strings fragment to produce hadrons. The HEP model is based on the GHEISHA model of Geant3, and for the most part uses parameters taken from fits to data, rather than theory. There is a transitional region between 5-10 GeV and 25 GeV, which is too low in energy for the high energy and string fragmentation models to apply, yet too high for the cascade models described below. Geant4 fills the gap by using the low energy parameterized (LEP) model. Like the HEP model, this is derived from the GHEISHA model of Geant3 and gets most of its parameters from fits to data rather than from theory. The energy range from 100 MeV to 10 GeV is covered by the binary cascade [Ge06][Be00][We04b], Bertini cascade [Gu68][Be71] and low energy parameterized models [Ge06][Tr04b]. These models describe the propagation of a hadron through the nuclear medium, which produces secondary hadrons, which in turn produce tertiary hadrons, and so on, until the primary particle energy is dissipated. At the end of this cascade, the nucleus is left in a highly excited state, which must be de-excited by another model. For incident hadrons of energy 200 MeV and below, there is not enough energy to generate a cascade, but the nucleus can nevertheless reach a highly excited state through the formation of particle-hole states. The pre-compound model was designed for this purpose. It causes the particle-hole states to decay, leaving the nucleus in a cooler, equilibrium state. Several equilibrium models are then automatically invoked, such as gamma, neutron, proton and fragment emission, which take the nucleus to its ground state. All of the above models may be used for incident protons and neutrons, and most may be used for pions. The HEP, LEP, QGS, Fritiof and Bertini models are also valid for kaons. The LEP, HEP and Bertini models may be used for long-lived hyperons, and the LEP and HEP models may be used for anti-nucleons and anti-hyperons.
Figure 39 Schematic presentation of the Bertini model for intra-nuclear cascades: a hadron with 400 MeV energy forming an INC history. Crosses represent the Pauli exclusion principle in action.
b) Models for ion projectiles
Other models of special importance to radiation effects deal with the hadronic interactions of ions. The LEP model, discussed above, may be applied to deuterons, tritons and alphas at incident energies of 100 MeV and below. For higher incident energies (below 10 GeV/n), a version of the binary cascade model has been developed which may be applied to incident ions with A ≤ 12. It may also be used for incident ions with A > 12 if the target consists of nuclei with A ≤ 12. This model produces final state fragments based on a statistical model and does not currently include projectile-target correlations. Another model in the same energy range is Wilson abrasion (0-10 GeV), a simplified macroscopic model for nucleus-nucleus interactions based largely on geometric arguments rather than on a detailed consideration of nucleon-nucleon collisions. As such, the speed of the simulation is found to be faster than models such as the binary cascade. Geant4 also provides interfaces to the external models JAM and JQMD [Ko03], which simulate nucleus-nucleus interactions at higher energies. The EM dissociation model (0-100 TeV) can be used to simulate nucleus-nucleus collisions in which the exchanged virtual photons become hard enough that nucleons and nuclear fragments are ejected.
c) Models for low energy neutrons
Neutron-induced reactions, including capture, fission, inelastic and elastic scattering, can produce gammas, charged particles and heavy nuclear fragments, which may affect electronic devices. In some cases a detailed treatment of neutron interactions and cross sections down to thermal energies is required. For this, Geant4 provides the high precision neutron package. The cross sections, channels and final state distributions are taken almost entirely from tabulations of neutron data from several different databases [En91][Fe98][Je95]. As a result, the simulated reactions agree very well with data in the cases where the final states have been well measured.
D. Physics Validation
The above processes are compared to data whenever possible in order to validate the physical models and their implementations. A sampling of these comparisons is included below for both electromagnetic and hadronic processes.
1. Electromagnetic
Figure 40 shows the multiple scattering of 6.56 MeV protons from 92.6 µm of Si [Ag03]. Data points are shown in gray, and the solid-line histogram is the result of Geant4. Agreement with data is uniformly good at all measured angles. Figure 41 compares Geant4 bremsstrahlung with data from 50 MeV electrons passing through a thin Be target (top) as well as a thick Be/W target (bottom) [Iv04]. Black dots represent the data and the histogram is the result of Geant4. The top plot shows relative dose versus radius from the beam, and the bottom plot shows energy deposition versus radius from the beam. For the thin target the agreement is excellent, while for the thick target Geant4 produces excess energy deposition at larger radii.
Figure 40 Multiple scattering of 6.56 MeV protons on 92.6 µm of Si. (Plot after [Ag03].)
Figure 41 Dose in thin and thick targets due to Bremsstrahlung of 50 MeV electrons. (Plot after [Iv04].)
Figure 42 shows proton ionization loss in Fe, plotted versus the log of the proton energy. The dashed line represents the current, improved ionization model and agrees very well with the black data points [Bu05].
Figure 42 Proton ionization loss in iron. (Plot after [Bu05].)
2. Hadronic
Figure 43 compares the result of the Geant4 binary cascade model with the neutron yield from 256 MeV protons incident upon Be, Al, Fe and Pb (Be top, Pb bottom) [Iv03]. Agreement with data is generally good for neutron energies above 50 MeV. At lower energies Geant4 produces an excess of neutrons; this excess, however, is small for heavy nuclei like Pb. Figure 44 shows the neutron fluence resulting from 400 MeV/nucleon Fe incident on a thick Cu target. The data (black dots) are compared to the prediction of the binary cascade light ion model at 0, 7.5, 15, 30, 60 and 90 degrees [Ko05b][We04a]. The horizontal axis is the detected neutron energy and the vertical axis is the neutron fluence (n/MeV/sr). Near the endpoint of the spectra, the agreement with data is good in most cases. At lower neutron energies and angles Geant4 under-estimates the data. However, at 60 and 90 degrees, agreement is good at all energies. Validation of models and processes in Geant4 continues as new data appear and as new models are developed. The small sample shown above indicates that agreement with data is generally better for the electromagnetic processes than for the hadronic ones. This is at least in part due to the existence of a comprehensive and well-tested electromagnetic theory. The agreement of hadronic processes with data can be expected to improve as the models improve and as more data are taken in previously untested energy ranges.
Figure 43 Neutron yield from 256 MeV protons on Be, Al, Fe and Pb at various angles. (Plot after [Iv03].)
Figure 44 Neutron fluence at various angles as a result of 400 MeV/nucleon Fe bombarding a thick Cu target. (Plot after [Ko05b].)
E. Geant4-based radiation analysis and tools
Being a toolkit, Geant4 does not offer ready-to-use executables to simulate radiation transport in matter, nor does it provide tools for the analysis of radiation effects; it is the responsibility of the user to develop an appropriate tool on top of the Geant4 libraries. The majority of published work performed with Geant4 up to now is based on private, customised applications utilizing the Geant4 libraries for the particle transport. The European Space Agency (ESA) has recently been strongly focused on making Geant4 readily accessible to a variety of engineering applications and WWW-based radiation effects studies through the development of easy-to-use interfaces, advanced models, and auxiliary software. The tools presented in this section therefore represent only part of the analyses performed, and are given as examples of the capabilities of the toolkit, together with some significant results obtained in recent years. For several tools there is open access to the source code, and in some cases a public web interface is provided for simple applications.
1. Sector Shielding Analysis Tool (SSAT)
The Sector Shielding Analysis Tool (SSAT) is among the first ready-to-use tools based on Geant4 supported by ESA programs. SSAT [Sa03] performs ray tracing from a user-defined point within a Geant4 geometry to determine shielding levels (i.e. the fraction of solid angle for which the shielding is within a defined interval) and the shielding distribution (the mean shielding level as a function of look direction). To achieve this, the tool utilizes the fictitious geantino particle, which undergoes no physical interactions but flags boundary crossings along its straight trajectory. Knowledge of the positions of these boundary crossings, together with the density of the material through which the particle has passed, can be used to profile the shielding (in g/cm2) for a given point within the geometry. The tool produces distributions of shielding material and thickness as viewed from a given point within the configuration, as a function of direction from that location. This approach is highly useful for calculating the absorbed radiation dose, and for finding optimal shielding geometries. The tool has recently been upgraded with the direct calculation of doses based on externally provided dose-depth curves.
Figure 45 The SSAT ray-tracing tool [Sa03] provides shielding profiles by material and dose estimates based on external dose-depth curves. An application to the radiation analysis for the ESA ConeXpress mission is shown.
2. PLANETO-COSMICS

The PLANETO-COSMICS GEANT4-based tool, by Bern University, enhances and generalizes the earlier MAGNETO-COSMICS and ATMO-COSMICS tools [De03][Ma05][De05], which were developed for the transport of particles in the Earth's magnetosphere and atmosphere. The new tool has been extended [Pl05] with a description of the local magnetic field on Mars [Gu05a] and of the dipole field on Mercury [Gu05b]. Models for the planetary atmospheres are also included. Among other features, MAGNETO-COSMICS can produce cosmic-ray cutoff rigidities at single points, along trajectories or on maps.
Figure 46 (Left) ATMOCOSMICS: atmospheric ionization rate induced by galactic cosmic rays during the minimum and maximum of solar activity, respectively [De05]. The dotted lines represent the measurements obtained by Neher from 1959 to 1965 [Ne54] [Ne58] [Ne67]. (Right) PLANETOCOSMICS: magnetic field at the surface of Mars [Gu05a].
3. Multi-Layered Shielding Simulation Software (MULASSIS)

The Multi-Layered Shielding Simulation Software (MULASSIS) [Le02] is a successful example of a ready-to-use analysis tool with a user-friendly interface. It performs fluence, total ionizing dose (TID), pulse height spectrum (PHS), dose equivalent and NIEL dose analysis in 1-D (or, better, 1.5-D) geometries made of several layers of arbitrary materials. A web interface to MULASSIS has been developed in SPENVIS [He00].
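As a minimal illustration of what such a layered analysis computes, the sketch below (hypothetical, not MULASSIS code) converts a layer stack into the cumulative areal density seen by a detector and interpolates an assumed externally tabulated dose-depth curve at that depth; layer thicknesses, densities and dose values are illustrative only.

```python
import numpy as np

# Layered shield as (name, thickness in cm, density in g/cm^3); the detector
# sits behind the last layer, as in the Figure 47 example geometry.
layers = [("Ti", 0.05, 4.51), ("Al", 0.10, 2.70), ("C-fiber", 0.08, 1.60)]

depth = sum(t * rho for _, t, rho in layers)   # areal density at the detector

# Assumed tabulated dose-depth curve: depth (g/cm^2) vs dose (rad(Si)).
tab_depth = np.array([0.0, 0.1, 0.3, 0.6, 1.0, 2.0])
tab_dose = np.array([1.0e5, 3.0e4, 8.0e3, 2.0e3, 6.0e2, 1.0e2])

dose = np.interp(depth, tab_depth, tab_dose)   # log-log interpolation is common too
print("areal density %.3f g/cm2 -> dose %.0f rad" % (depth, dose))
```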
Figure 47 MULASSIS: The shield shown on the left is made of three layers (titanium, aluminum and carbon fiber, from left to right), and a thin silicon detector layer is placed behind the carbon fiber layer. Superimposed on the geometry are the interaction tracks of 1 GeV protons incident from the left. The image on the right is a screenshot of the MULASSIS web interface inside the SPENVIS framework.
Figure 48 (Left) A comparison of the secondary proton and neutron spectra derived from MULASSIS/Geant4 4.1 (2002) and MCNPX simulations, with trapped protons as the incident particles. Overall agreement is achieved, for the protons in particular. The differences in the neutron spectra reflect the different physics models used in the two codes: the precompound model in MULASSIS and the HETC intra-nuclear cascade model in MCNPX. (Right) Total ionising dose in the Si detector behind Al shields of various thicknesses. For comparison, doses predicted by SHIELDOSE-2 for the same trapped proton source are also plotted.
4. GEANT4 Radiation Analysis for Space (GRAS)

GEANT4 Radiation Analysis for Space (GRAS) [Sa05] is a tool for simulating the effects of the space radiation environment. To allow for flexibility, a modular approach has been followed for the geometry model, the physics description and the extraction of the radiation effect data. The main inputs to the GEANT4 simulation are the geometry model (for which the default format is GDML) and the physics models to be used. The GRAS tool provides a friendly interface for the input of the geometry model and for the choice of the list of physics models, which the user constructs for a given application via a scripting interface with a modular approach. In addition, GRAS offers ready-to-use modules for the analysis of the effects of radiation in user-defined sensitive devices. Numerous kinds of analysis are offered, such as cumulative ionizing and NIEL dose, equivalent dose and dose equivalent, LET, charge deposit and fluence in 3-D geometry models. The software design allows easy integration of new geometry interfaces, physics models and analysis capabilities. The scripting interface avoids the need for C++ programming and eases the integration of the tool into external frameworks.
Figure 49 (Left) The GRAS framework for the radiation effect analysis modules. (Right) Simulation of the HERSCHEL PACS photoconductor ground proton beam test. (After [Sa05])
5. Monte Carlo Radiative Energy Deposition (MRED) – Vanderbilt

Monte Carlo Radiative Energy Deposition (MRED) [Ho05][Wa05a] is a unique radiation effects research tool developed at Vanderbilt University, based on the Geant4 libraries [Ag03] and the Synopsys (formerly ISE) TCAD tools [Sy04], for on-orbit predictions and for technology evaluation. MRED adds to the Geant4 physics a model for screened Coulomb scattering of ions [Me05], and includes tetrahedral geometric objects [Ko05a], a cross-section biasing and track weighting technique for variance reduction, and a number of additional features relevant to semiconductor device applications. The Geant4 libraries frequently contain alternative models for the same physical processes, and these may differ in level of detail and accuracy. Generally, MRED is structured so that all physics relevant for radiation effects applications is available and selectable at run time. This includes electromagnetic and hadronic processes for all relevant particles, including elementary particles that live long enough to be tracked.
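The cross-section biasing and track weighting idea is worth a small illustration. The toy below is a generic sketch of the variance-reduction principle, not MRED code: rare nuclear reactions are sampled with an artificially inflated probability, and each reacting history carries a compensating statistical weight so that the estimated rate remains unbiased while its variance drops.

```python
import numpy as np

rng = np.random.default_rng(1)
p_react = 1.0e-5      # assumed true per-history reaction probability
bias = 1.0e3          # artificial cross-section enhancement factor
n = 100_000           # simulated histories

hits = rng.random(n) < p_react * bias          # biased sampling of reactions
weights = np.where(hits, 1.0 / bias, 0.0)      # compensating track weights
rate = weights.sum() / n                       # unbiased estimate of p_react
err = weights.std(ddof=1) / np.sqrt(n)
print(f"estimated rate {rate:.2e} +/- {err:.1e} (true {p_react:.1e})")
```

Without biasing, the same 100,000 histories would on average contain a single reaction, making the estimate statistically useless.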
Figure 50 The RADSAFE concept, based on the integration of the GEANT4 libraries into a wider radiation effect analysis framework for on-orbit predictions and for technology evaluation. The framework includes the Synopsys (formerly ISE) TCAD tools.
Figure 51 (Left) The effect of ion nuclear interactions on electronic components, as obtained with the Vanderbilt RADSAFE/MRED tool: the raw counts spectrum of deposited charge for 523 MeV Ne in silicon. The solid curve represents both direct and indirect ionization processes, whereas the diamond-dotted line represents direct ionization from the primary [Wa05a]. (Right) Comparison of the MRED-based cross-section predictions for a circuit Q of 1.21 pC; Q was derived by a visual best fit of the integral cross-section curves for all ions [Wa05a].
Conclusion

Particle transport plays an important role in the complex process of analyzing the effects of radiation on devices in space, as it addresses two crucial tasks: the propagation of the radiation from outer space to the environment local to the devices inside the spacecraft, and the detailed interaction of that local environment with the devices themselves. Because simulations tackle directly the fundamental science behind the interaction of particle radiation with spacecraft devices, they can play a major role both in understanding system performance in space and in guiding the design of flight components. This paper introduced the basic concepts behind radiation transport algorithms, focusing in particular on the features of the Monte Carlo method. Particular attention has been given to the physics modeling of GEANT4 and related validation results, with the hope of giving an overview of the capabilities of modern MC tools, and thus of increasing both the confidence in their results and the awareness of the critical areas in which the understanding of the underlying phenomena is still incomplete.
References

[Ag03] S. Agostinelli et al., "Geant4 – A Simulation Toolkit", Nucl. Instrum. Meth. A 506, 2003, p. 250. URL: http://cern.ch/geant4
[Al06] J. Allison et al., “Geant4 developments and applications”, IEEE Trans. Nucl. Sci. 53, 1 (2006) 270-8
[Al69] Alsmiller, R. G., J. Barish, and W. W. Scott, Nucl. Sci. and Enrg., 35, 1969.
[Ar73] Armstrong T.W. and Chandler K.C., “Stopping Powers and Ranges for Muons, Charged Pions, Protons, and Heavy Ions”, Nucl. Instrum. Methods 113, 1973, 313.
[Ba04] L.P. Barbieri and R.E. Mahmot, “October-November 2003’s space weather and operations lessons learned”, Space Weather, Vol. 2, 15-29, 2004.
[Ba06] D.R. Ball, K.M. Warren, R.A. Weller, R.A. Reed, A. Kobayashi, J.A. Pellish, M.H. Mendenhall, C.L. Howe, L.W. Massengill, R.D. Schrimpf, and N.F. Haddad, "Simulating Nuclear Events in a TCAD Model of a High-Density SEU Hardened SRAM Technology", RADECS 2005 and accepted for publication in IEEE Trans. Nucl. Sci., 2006.
[Ba64] Barkas, W. H., and M. J. Berger, NASA Publ. SP-3013, 1964.
[Ba96] R. M. Barnett et al., “Review of Particle Physics 1996”, Phys. Rev. D 54 (1996) 1708
[Be00] L. Bellagamba, A. Brunengo, E. Di Salvo, and M. G. Pia, “Object-oriented design and implementation of an intra-nuclear transport model,” INFN, Rep. INFN/AE00/13, Nov. 2000.
[Be34] H.A. Bethe and W. Heitler, Proc. Roy. Soc. (London), 1934
[Be59] H. A. Bethe and J. Ashkin, “Passage of radiation through matter,” in Experimental Nuclear Physics, Vol. 1, Editor E. Segrè, Wiley, New York, 1959.
[Be63] M. J. Berger, “Monte Carlo Calculation of the penetration and diffusion of fast charged particles”, in: B. Alder, S. Fernbach and M. Rotenberg (Eds.), Methods in Comput. Phys., Vol. 1 (Academic, New York, 1963) pp. 135-215.
[Be68a] Berger, M. J., and S. M. Seltzer, NASA Publ. SP-169, 1968a.
[Be68b] Berger, M. J., and S. M. Seltzer, Computer Code Collection 107, Oak Ridge Shielding Information Center, 1968b.
[Be70] Berger, M. J., and S. M. Seltzer, Phys. Rev. C2, 621, 1970.
[Be71] H. W. Bertini and P. Guthrie, “Results from Medium-Energy Intranuclear-Cascade Calculation”, Nucl. Phys. A169, (1971).
[Be73] M. J. Berger, “Improved point kernels for electron and beta ray dosimetry”, NBS Report NBSIR 73-107 (1973).
[Be74] Berger, M. J., and S. M. Seltzer, Nucl. Instr. and Meth., 119, 157, 1974; Seltzer, S. M., National Bureau of Standards Publ. NBS-IR 74457, 1974.
[Be88] S. M. Seltzer, “An overview of ETRAN Monte Carlo methods”, in Monte Carlo Transport of Electrons and Photons, edited by T. M. Jenkins, W. R. Nelson, A. Rindi, A. E. Nahum, and D. W. O. Rogers, pages 153-182, Plenum Press, New York, 1988.
[Be90] M.J. Berger, J.H. Hubbell, S.M. Seltzer, J. Chang, J.S. Coursey, R. Sukumar, and D.S. Zucker, “XCOM: Photon Cross Sections Database”. URL: http://physics.nist.gov/PhysRefData/Xcom/Text/XCOM.html
[Bh36] H.J. Bhabha, “The scattering of positrons by electrons with exchange on Dirac’s theory of the positron”, Proc. R. Soc. A 154, 1936, 195-206
[Bo01] J. Bogart, D. Favretto, R. Giannitrapani, “XML for Detector Description at GLAST”, CHEP01 Conf. Proceedings.
[Bo05] E. Boman, J. Tervo, M. Vauhkonen, “Modelling the transport of ionizing radiation using the finite element method”, Phys. Med. Biol. 50 (2005) 265-280
[Br00] J. F. Briesmeister, Ed., "MCNP - A General Monte Carlo N-Particle Transport Code, Version 4C," LA-13709-M, 2000.
[Br82] W. Brandt and M. Kitagawa, Phys. Rev. B25 (1982) 5631
[Br87] R. Brun, F. Bruyant, M. Maire, A. C. McPherson, and P. Zanarini, “GEANT3,” CERN DD/EE/84-1, Revised 1987 and subsequently.
[Bu05] H. Burkhardt et al., “GEANT4 Standard Electromagnetic physics package”, Proceedings of MC2005, Chattanooga, Tennessee, April 17-21, 2005, on CD-ROM, American Nuclear Society, LaGrange Park, IL (2005).
[Ch01] R. Chytracek, “The Geometry Description Markup Language”, CHEP01 Conf. Proceedings. URL: http://cern.ch/gdml
[Co03] G. Cosmo, “Modeling Detector Geometries in Geant4”, Proceedings of the 2003 IEEE NSS/MIC/RTSD Conference, Portland (Oregon, USA), October 2003
[Co95] Computational Science Education Project, “Introduction to Monte Carlo Methods”, 1995
[Cu97] D. Cullen, et al., EPDL97: the Evaluated Photon Data Library, 97 version, UCRL-50400, Vol. 6, Rev. 5, 1997.
[Da04] E.J. Daly, “Outlook on space weather effects on spacecraft”, in “Effects of space weather on technology infrastructure”, pages 91-108, Kluwer Academic Publishers, 2004.
[Da88] Daly E.J., "The Evaluation of Space Radiation Environments for ESA Projects", ESA Journal 12, 229 (1988).
[De03] L. Desorgher, E.O. Flueckiger, M.R. Moser, R. Buetikofer, “GEANT4 applications for simulating the propagation of Cosmic Rays through the Earth’s magnetosphere and atmosphere”, Geophysical Research Abstracts, Vol. 5, 11356, 2003
[De05] L. Desorgher, E.O. Flueckiger, M. Gurtner, M.R. Moser, R. Buetikofer, “ATMOCOSMICS: A GEANT4 code for computing the interaction of Cosmic Rays with the Earth’s atmosphere”, International Journal of Modern Physics A, Vol. 20, No. 29 (2005) 6802-6804
[De06] DESIRE web page: http://gluon.particle.kth.se/desire
[Di96] F.C. Difilippo, M. Goldstein, B.A. Worley and J.C. Ryman, “Adjoint Monte Carlo methods for radiotherapy treatment planning”, Trans. Am. Nucl. 74, 1996, 14-6
[Ec06] European Cooperation for Space Standardization, ECSS-E-10-12 Working Group, “Space engineering: Methods for calculation of radiation received and its effects, and a policy for design margins”, 2006
[En91] ENDF/B-VI, Cross Section Evaluation Working Group, ENDF/B-VI Summary Document, BNL-NCS-17541 (ENDF-201), National Nuclear Data Center, Brookhaven National Laboratory, Upton, NY, USA, 1991.
[Er04] T. Ersmark, P. Carlson, E. Daly, C. Fuglesang, I. Gudowska, B. Lund-Jensen, R. Nartallo, P. Nieminen, M. Pearce, G. Santin, N. Sobolevsky, “Status of the DESIRE project: GEANT4 physics validation studies and first results from Columbus/ISS radiation simulations,” IEEE Trans. Nucl. Sci., 51, Issue 4, 1378-1384 (2004).
[Es94] ESABASE Reference Manual, ESABASE/GEN-UM-061, Issue 2, March 1994
[Ev55] R.D. Evans, “The Atomic Nucleus”, McGraw-Hill, New York, 1955.
[Fa01] A. Fassò, A. Ferrari, J. Ranft, P.R. Sala, “FLUKA: Status and prospective for hadronic applications”, invited talk in the Proceedings of the MonteCarlo 2000 Conference, Lisbon, October 23-26 2000, A. Kling, F. Barao, M. Nakagawa, L. Tavora, P. Vaz eds., Springer-Verlag Berlin, p. 955-960 (2001).
[Fa63] U. Fano, Ann. Rev. Nucl. Sci. 13, 1, 1963.
[Fe48] E. Fermi and R.D. Richtmyer, “Note on census-taking in Monte Carlo calculations”, 1948. A declassified report by Enrico Fermi, from the Los Alamos Archive.
[Fe86] R.C. Fernow, “Introduction to experimental particle physics”, Cambridge University Press, 1986
[Fe98] FENDL/E2.0, The processed cross-section libraries for neutron-photon transport calculations, version 1 of February 1998. Summary documentation H. Wienke, M. Herman, Report IAEA-NDS-176 Rev. 0 (International Atomic Energy Agency, April 1998). Data received on tape (or retrieved on-line) from the IAEA Nuclear Data Section.
[Fo05] J. Forest, J.-F. Roussel, A. Hilgers, B. Thiebault, and S. Jourdain, “SPIS-UI, a new integrated modeling environment for space applications”, Proceedings of the 9th Spacecraft Charging Technology Conference, Tsukuba, Japan, 2-9 April 2005. JAXA.
[Ge06] Geant4 Physics Reference Manual (2006, June). [Online]. Available: http://pcitapiww.cern.ch/geant4/G4UsersDocuments/UsersGuides/PhysicsReferenceManual/print/PhysicsReferenceManual1.pdf
[Gi99] S. Giani, V. N. Ivanchenko, G. Mancinelli, P. Nieminen, M. G. Pia, and L. Urban, “GEANT4 simulation of energy losses of ions,” INFN, Rep. INFN/AE-99/21, Nov. 1999.
[Gr92] P.J. Griffin et al., SAND92-0094 (Sandia Natl. Lab. 93)
[Gu05a] M. Gurtner, L. Desorgher, E.O. Flueckiger, M.R. Moser, “Simulation of the interaction of space radiation with the Martian atmosphere and surface”, Advances in Space Research 36 (2005) 2176-2181.
[Gu05b] M. Gurtner, L. Desorgher, E.O. Flueckiger, M.R. Moser, “A Geant4 application to simulate the interaction of space radiation with the Mercurian environment”, Advances in Space Research, 2005.
[Gu68] M. P. Guthrie, R. G. Alsmiller and H. W. Bertini, Nucl. Instr. Meth. 66, 1968, 29.
[Ha36] H. Hall, Rev. Mod. Phys. 8, 358 (1936)
[Ha64] Hammersley, J.M., and D.C. Handscomb, 1964, Monte Carlo Methods, Methuen, London.
[Ha92] J. A. Halblieb, R. P. Kensek, T. A. Mehlhorn, G. D. Valdez, S. M. Seltzer, M. J. Berger, “ITS Version 3.0: The Integrated Tiger Series of coupled electron/photon Monte Carlo transport codes,” SAND91-1634, Sandia National Laboratories, 1992.
[Ha94] Hammond, B.L., W.A. Lester, Jr., and P.J. Reynolds, 1994, Monte Carlo Methods in Ab Initio Quantum Chemistry, World Scientific, Singapore.
[He00] D. Heynderickx, B. Quaghebeur, E. Speelman, E. Daly, “Space Environment Information System (SPENVIS): a WWW interface to models of the space environment and its effects”, AIAA-2000-0371, 2000. SPENVIS web-site: http://www.spenvis.oma.be/spenvis/
[He68] Hess W.N., "The Radiation Belt and the Magnetosphere", Blaisdell Publ. Co. (1968).
[He86] Heermann, D.C., 1986, Computer Simulation Methods, Springer-Verlag, Berlin.
[Hi87] W. Daniel Hillis, 1987, The connection machine. Scientific American, June, pp. 108-115.
[Ho05] C. L. Howe, R. A. Weller, R. A. Reed, R. D. Schrimpf, L.W. Massengill, K. M. Warren, D. R. Ball, M. H. Mendenhall, K. A. LaBel, and J. W. Howard, “Role of heavy-ion nuclear reactions in determining on-orbit single event rates,” IEEE Trans. Nucl. Sci., Vol. 52, No. 6, Dec. 2005.
[Hu79] Hubbell, J. H. and Øverbø, I. (1979) Relativistic atomic form factors and photon coherent scattering cross sections, J. Phys. Chem. Ref. Data 8, 69-105.
[Hu85] C. C. Hurd, 1985, A note on early Monte Carlo computations and scientific meetings. Annals of the History of Computing 7:141-155.
[Hu93] M. Huhtinen and P.A. Aarnio, NIM A 335 (1993) 580
[Ic03] “Relative biological effectiveness (RBE), quality factor (Q), and radiation weighting factor (wR)”, Annals of the ICRP, Vol. 33, No. 4, 2003.
[Ic91] “1990 Recommendations of the International Commission on Radiological Protection”, Annals of the ICRP, Vol. 21, No. 1-3, 1991.
[Ic93] “Stopping powers and ranges for protons and alpha particles,” Int. Commission on Radiation Units and Measurements (ICRU), Rep. 49, 1993.
[Iv03] V. Ivanchenko et al., “The Geant4 Hadronic Validation Suite for the Cascade Energy Range”, 2003 Conference for Computing in High Energy and Nuclear Physics, La Jolla, California, March 2003.
[Iv04] V. N. Ivanchenko, “Geant4: physics potential for instrumentation in space and medicine”, Nucl. Instr. Meth. A 525, pp. 402-405, 2004.
[Iw02] Iwase H., Niita K. and Nakamura T. (2002) J. Nucl. Sci. Technol. 39, 1142.
[Ja90] F. James, "A review of pseudorandom number generators", Computer Physics Communications 60 (1990), 329-344.
[Je95] T. Nakagawa et al., JENDL-3 Japanese Evaluated Nuclear Data Library, Version 3, Revision 2. J. Nucl. Sci. Technol. 32 (1995), p. 1259.
[Jo76] T. M. Jordan, “An adjoint charged particle transport method,” IEEE Trans. Nucl. Sci., 23, p. 1857, 1976.
[Jo98] Thomas M. Jordan, “NOVICE, Introduction and Summary”, Experimental and Mathematical Physics Consultants, 1998.
[Ju03] Insoo Jun, Michael A. Xapsos, Scott R. Messenger, Edward A. Burke, Robert J. Walters, Geoff P. Summers, and Thomas Jordan, “Proton nonionizing energy loss (NIEL) for device applications”, IEEE Trans. Nucl. Sci., Vol. 50, No. 6, Dec. 2003
[Ka00] I. Kawrakow, “Accurate condensed history Monte Carlo simulation of electron transport. I. EGSnrc, the new EGS4 version”, Med. Phys. 27, 485-498 (2000)
[Ka68] M. H. Kalos, “Monte Carlo integration of the adjoint gamma-ray transport equation”, Nuclear Science and Engineering, 33, 1968, 284-290
[Ka86] Kalos, M.H., and P.A. Whitlock, 1986, Monte Carlo Methods, Volume 1: Basics, John Wiley & Sons, New York.
[Ki55] G.H. Kinchin and R.S. Pease, The Displacement of Atoms in Solids by Radiation. Reports on Progress in Physics, 18:1-51, 1955.
[Kn89] G.F. Knoll, “Radiation detection and measurement (2nd edition)”, John Wiley and Sons, 1989
[Ko03] T. Koi, M. Asai, D. H. Wright, K. Niita, Y. Nara, K. Amako, T. Sasaki, “Interfacing the JQMD and JAM Nuclear Reaction Codes to Geant4”, CHEP03 Conf. Proceedings
[Ko05a] A. S. Kobayashi, D. R. Ball, K. M. Warren, M. H. Mendenhall, R. D. Schrimpf, and R. A. Weller, “The effect of metallization layers on single event susceptibility,” IEEE Trans. Nucl. Sci., Vol. 52, No. 6, Dec. 2005.
[Ko05b] T. Koi et al., “Ion Transport Simulation using Geant4 Hadronic Physics”, Proceedings of MC2005, Chattanooga, Tennessee, April 17-21, 2005, on CD-ROM, American Nuclear Society, LaGrange Park, IL (2005).
[Ko92] A. Konobeyev, J. Nucl. Mater. 186 (1992) 117
[Le02] F. Lei et al., “MULASSIS: a Geant4-based multilayered shielding simulation tool”, IEEE Trans. Nucl. Sci. 49 (2002) 2788-93
[Le50] H. W. Lewis, “Multiple scattering in an infinite medium,” Phys. Rev., vol. 78, no. 5, pp. 526-529, 1950.
[Le94] W.R. Leo, “Techniques for Nuclear and Particle Physics Experiments”, Springer-Verlag, 1994
[Lo66] Los Alamos Scientific Laboratory, 1966, Fermi invention rediscovered at LASL. The Atom, October, pp. 7-11.
[Ma05] Magnetocosmics web site: http://cosray.unibe.ch/~laurent/magnetocosmics
[Mc48] W.A. McKinley and H. Feshbach, Phys. Rev. 74, 1759 (1948).
[Me05] M. H. Mendenhall and R. A. Weller, “An algorithm for computing screened Coulomb scattering in Geant4,” Nucl. Instrum. Meth. B 227, pp. 420-430, 2005.
[Me49] N. Metropolis and S. Ulam, 1949, The Monte Carlo method. Journal of the American Statistical Association 44:335-341.
[Me87] Nick Metropolis, “The Beginning of the Monte Carlo Method”, Los Alamos Science, 15, 1987, p. 125.
[Mo29] N.F. Mott, Proc. Roy. Soc. A, v. 124, 1929, 425
[Mø32] C. Møller, “Zur Theorie des Durchgangs schneller Elektronen durch Materie”, Ann. Phys., Lpz. 14, 1932, 531-585.
[Mo65] N.F. Mott and H.S.W. Massey, “The Theory of Atomic Collisions”, Oxford University Press, London, 1965, 3rd ed., pp. 53-68.
[Na01] Nara Y., Otuka N., Ohnishi A., Niita K. and Chiba S. (2001) Phys. Rev. C61, 024901.
[Na99] Nara et al., Phys. Rev. C56 (1999) 4901.
[Ne54] Neher, H.V., and Anderson, H.R., 1954, J. Geophys. Res., 69, 807
[Ne58] Neher, H.V., Peterson, V.Z., and Stern, E.A., 1958, Phys. Rev., 90, 655
[Ne67] Neher, H.V., 1967, J. Geophys. Res., 72, 1527
[Ne85] W. R. Nelson, H. Hirayama and D. W. O. Rogers, “The EGS4 code system", SLAC-report-265, 1985.
[Ni01a] Niita K., Takada H., Meigo S. and Ikeda Y. (2001) Nucl. Instr. and Meth. B184, 406.
[Ni01b] P. Nieminen and M.G. Pia, Il progetto Geant4-DNA, AIRO Journal, March 2001
[Ni95] K. Niita et al., Phys. Rev. C52 (1995) 2620.
[Ni98] NIST stopping power and range tables for electrons, protons, and helium ions: http://physics.nist.gov/PhysRefData/Star/Text/contents.html
[No00] “NOVICE: A radiation transport/shielding code, user's guide”, Experimental and Mathematical Physics Consultants, 2000.
[Pe00] M. Pelliccioni, “Overview of Fluence-to-Effective Dose and Fluence-to-Ambient Dose Equivalent Conversion Coefficients for High Energy Radiation Calculated Using the FLUKA Code”, Radiat. Prot. Dosim. 88(4), 279-297 (2000)
[Pe05] D. B. Pelowitz, ed., “MCNPX user’s manual version 2.5.0,” Los Alamos National Laboratory report, in press (February 2005).
[Pe97a] S.T. Perkins, et al., Tables and Graphs of Electron Interaction Cross Sections from 10 eV to 100 GeV Derived from the LLNL Evaluated Electron Data Library (EEDL), UCRL-50400, Vol. 31, 1997.
[Pe97b] S.T. Perkins, et al., Tables and Graphs of Atomic Subshell and Relaxation Data Derived from the LLNL Evaluated Atomic Data Library (EADL), Z=1-100, UCRL-50400, Vol. 30, 1997.
[Pl05] Planetocosmics web site: http://cosray.unibe.ch/~laurent/planetocosmics
[Po06] W. Pokorski, R. Chytracek, J. McCormick, G. Santin, “Geometry Description Markup Language and its application-specific bindings”, CHEP06 Conf. Proceedings
[Pr77] Pratt, R. H., H. K. Tseng, C. M. Lee, and L. Kissel, At. Data and Nucl. Data Tables, 20, 175, 1977.
[Ra55] A Million Random Digits with 100,000 Normal Deviates, The RAND Corporation, Glencoe, IL: The Free Press, 1955. Now available, a 2001 edition of this RAND classic.
[Ri75] R. Ribberfors, Phys. Rev. B 12 (1975) 2067.
[Ro05] J.-F. Roussel, F. Rogier, D. Volpert, J. Forest, G. Rousseau, and A. Hilgers, “Spacecraft plasma interaction software (SPIS): Numerical solvers - methods and architecture”, Proceedings of the 9th Spacecraft Charging Technology Conference, Tsukuba, Japan, 2-9 April 2005. JAXA.
[Sa01] F. Salvat, J.M. Fernandez-Varea, E. Acosta and J. Sempau, “PENELOPE, A Code System for Monte Carlo Simulation of Electron and Photon Transport”, Proceedings of a Workshop/Training Course, OECD/NEA 5-7 November 2001, NEA/NSC/DOC(2001)19. ISBN: 92-64-18475-9
[Sa03] G. Santin et al., "New Geant4 based simulation tools for space radiation shielding and effects analysis", Nuclear Physics B (Proc. Suppl.) 125, pp. 69-74, 2003
[Sa05] G. Santin, V. Ivanchenko, H. Evans, P. Nieminen, E. Daly, “GRAS: A general-purpose 3-D modular simulation tool for space environment effects analysis”, IEEE Trans. Nucl. Sci. 52, Issue 6, 2005, pp. 2294-2299.
[Sc59] D.O. Schneider and D.V. Cormack, Radiation Res. 11, 418 (1959)
[Se03] J. Sempau, J.M. Fernandez-Varea, E. Acosta and F. Salvat, “Experimental benchmarks of the Monte Carlo code PENELOPE”, Nuclear Instruments and Methods B 207 (2003) 107-123.
[Se79] Seltzer, S. M., Electron, Electron-Bremsstrahlung, and Proton Depth-Dose Data for Space-Shielding Applications, IEEE Trans. Nucl. Sci., 26, 4896, 1979.
[Se80] Seltzer, S. M., SHIELDOSE, A Computer Code for Space-Shielding Radiation Dose Calculations, National Bureau of Standards, NBS Technical Note 1116, U.S. Government Printing Office, Washington, D.C., 1980.
[Se94] S.M. Seltzer, “Updated calculations for routine space-shielding radiation dose estimates: SHIELDOSE-2”, NIST Publication NISTIR 5477, Gaithersburg, MD, 1994.
[Se97] J. Sempau, E. Acosta, J. Baro, J.M. Fernandez-Varea and F. Salvat, “An algorithm for Monte Carlo simulation of the coupled electron-photon transport”, Nuclear Instruments and Methods B 132 (1997) 377-390.
[Sr03] SRIM computer code, available from: http://www.srim.org
[Su93] G.P. Summers, E.A. Burke, P. Shapiro, S.R. Messenger, R.J. Walters, "Damage correlations in semiconductors exposed to gamma, electron and proton radiations", IEEE Trans. Nucl. Sci., Vol. 40, No. 6, Dec. 1993
[Sy04] TCAD Tools, Synopsys, Fremont, CA, 2004.
[To93] Lawrence W. Townsend, John W. Wilson, Ram K. Tripathi, John W. Norbury, Francis F. Badavi, and Ferdou Khan, “HZEFRG1, An energy-dependent semiempirical nuclear fragmentation model,” NASA Technical Paper 3310, 1993.
[Tr04a] P. Truscott, F. Lei, C.S. Dyer, A. Frydland, S. Clucas, B. Trousse, K. Hunter, C. Comber, A. Chugg and M. Moutrie, “Assessment of Neutron- and Proton-Induced Nuclear Interaction and Ionization Models in Geant4 for Simulating Single Event Effects”, IEEE Trans. Nucl. Sci. 51 (2004), 3369-74
[Tr04b] P.R. Truscott, “Nuclear-nuclear interaction models in Geant4”, QINETIQ/KI/SPACE/SUM040821/1.1, 2004
[Tw05] J. Tweed, S.A. Walker, J.W. Wilson, F.A. Cucinotta, R.K. Tripathi, S. Blattnig, C.J. Mertens, “Computational methods for the HZETRN code”, Adv. Space Res. 2005;35(2):194-201
[Ul47] S. Ulam, R. D. Richtmyer, and J. von Neumann, 1947, Statistical methods in neutron diffusion. Los Alamos Scientific Laboratory report LAMS-551. This reference contains a letter from von Neumann.
[Ul50] S. Ulam, 1950, Random processes and transformations. Proceedings of the International Congress of Mathematicians 2:264-275.
[Va00] A. Vasilescu (INPE Bucharest) and G. Lindstroem (University of Hamburg), “Displacement damage in silicon, on-line compilation”. URL: http://sesam.desy.de/members/gunnar/Si-dfuncs.html
[Wa05a] K. M. Warren, R. A. Weller, M. H. Mendenhall, R. A. Reed, D. R. Ball, C. L. Howe, B. D. Olson, M. L. Alles, L.W. Massengill, R. D. Schrimpf, N. F. Haddad, S. E. Doyle, D. McMorrow, J. S. Melinger, and W. T. Lotshaw, “The contribution of nuclear reactions to single event upset cross-section measurements in a high-density SEU hardened SRAM technology,” IEEE Trans. Nucl. Sci., Vol. 52, No. 6, Dec. 2005.
[Wa05b] S.A. Walker, J. Tweed, J.W. Wilson, F.A. Cucinotta, R.K. Tripathi, S. Blattnig, C. Zeitlin, L. Heilbronn, J. Miller, “Validation of the HZETRN code for laboratory exposures with 1A GeV iron ions in several targets”, Adv. Space Res. 2005;35(2):202-7
[We04a] H.P. Wellisch et al., “Ion transport simulation using Geant4 hadronic physics”, Computing in High Energy and Nuclear Physics (CHEP04) Conference Proceedings, Interlaken, Switzerland, 2004.
[We04b] J.P. Wellisch et al., “The binary cascade,” Eur. Phys. J. A, vol. 21, p. 407, 2004.
[Wi04] J.W. Wilson, R.K. Tripathi, G.D. Qualls, F.A. Cucinotta, R.E. Prael, J.W. Norbury, J.H. Heinbockel, J. Tweed, “A space radiation transport method development”, Advances in Space Research 34 (2004) 1319-1327
[Wi89] J. W. Wilson, L. W. Townsend, J. E. Nealy, S. Y. Chun, B. S. Hong, and W. W. Buck et al., “BRYNTRN: A Baryon Transport Model,” NASA TP-2887, 1989.
[Wi95] J. W. Wilson, F. F. Badavi, F. A. Cucinotta, J. L. Shinn, G. D. Badhwar, and R. Silberberg et al., “HZETRN: Description of a free-space ion and nucleon transport and shielding computer program,” NASA TP-3495, 1995.
[Zi77] J. F. Ziegler, “The Stopping and Ranges of Ions in Matter”, New York: Pergamon, 1977, vol. 4.
[Zi85] J. F. Ziegler, J. P. Biersack, and U. Littmark, “The Stopping and Range of Ions in Solids”, New York: Pergamon, 1985, vol. 1.
2006 IEEE NSREC Short Course
Section IV: Device Modeling of Single Event Effects
Prof. Mark Law University of Florida
Approved for public release; distribution is unlimited
Single Event Upset in Technology Computer Aided Design
Prof. Mark E. Law, University of Florida
NSREC 2006 Short Course

I Introduction to TCAD
A Overall Issues Driving TCAD
II Numerical Approximations
A Time Discretization
B Spatial Discretization
III Physical Approximations – Process Modeling
A Process Flow
III Physical Approximations – Device Modeling
A Tool Types
B Basic Approximations
IV Models for Device Simulation
A Low Field Mobility
B Surface Scattering
C Velocity Saturation
D Quantum Corrections to Inversion Layers
V Simulation Studies
VI Conclusions
VII Acknowledgements
VIII References
I Introduction to TCAD

Technology Computer Aided Design (TCAD) is the simulation of manufacturing processes and device performance. TCAD has been in use for more than 25 years, and its tools are in widespread use to help design, verify, and debug processes and devices. This short course reviews the state of the art for TCAD and how it can be applied to simulating radiation events.

TCAD tools fall under several broad categories. Process simulators allow the user to input process recipes and predict the device structure and doping. They have many complicated components, since there are many process steps. Some tools can simulate the etch, deposition, and lithography steps [Ul88], [Ad95]. Others focus on simulation of the implant profile as a function of dose and energy (UT-Marlowe [Kl92]). Still others focus on simulation of the thermal steps – anneals and oxide growth (SUPREM-IV [La88]). The recent trend has been to combine all these capabilities in one tool, e.g. Sentaurus-Process.

Device simulators use the output of a process simulator as input. They are given the structure and doping profile. By solving the transport equations for electrons and holes and the electrostatics, they predict the operating conditions of the device. Classic examples of these tools are PISCES [Pi83] and MINIMOS [Se80]. Device simulators usually can solve for the DC operating point, AC small-signal behavior, RF harmonic balance for large signals, and switching transients. Modern simulators include special modules for power devices and for phenomena like single event upset (Sentaurus-Device). Mixed-mode simulators [Ro88] also allow the addition of small circuits around the main simulated device structure. In some cases, codes have been prepared to handle multiple device structures linked with circuit elements. These are useful for examining how a device performs in a larger environment (for example [Do95]). This can be a critical component for single-event simulation, since the device transient currents can be computed and used as part of the overall SRAM cell to see if an upset occurs.

Circuit simulators (SPICE [Mc71]) take abstracted device models and solve for the behavior of larger circuits. Although they sacrifice accuracy in the device modeling, they make up for it in vastly increased computational throughput. It is possible to run very large circuits in circuit simulators that are infeasible with today’s mixed-mode simulators.

Radiation events can be simulated across all of these tools. A radiation strike in the simplest case generates mobile charges that can flow in the device. The terminal currents can be analyzed with existing device simulators. Small circuits can be examined with a mixed-mode tool to see if the logic state changes with a radiation strike. A device simulator can be used to calibrate the response of a device so it can be inserted into a large circuit simulation.

This portion of the short course will focus on device simulation. Since device simulation requires inputs from process simulation, this topic will also be covered. No device simulation can be considered valid if the structure being simulated is not an accurate representation of the actual device structure. “Garbage in, garbage out” applies strongly
here. The author has a lot of experience with “bad device simulation” that was really just a poor approximation of the actual device structure.
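One common way to carry a device-level result into circuit simulation, as described above, is to fit a compact current-source model to the TCAD transient. The sketch below is a hypothetical example of that calibration step (not from the course text): it fits a double-exponential pulse, a widely used compact form for single-event current transients, to synthetic stand-in data for a simulated terminal current.

```python
import numpy as np
from scipy.optimize import curve_fit

def i_pulse(t, i0, tau_r, tau_f):
    # Double-exponential SEU current pulse: fast rise (tau_r), slow fall (tau_f).
    return i0 * (np.exp(-t / tau_f) - np.exp(-t / tau_r))

# Synthetic stand-in for a TCAD drain-current transient (1 ns window).
t = np.linspace(0.0, 1e-9, 200)
rng = np.random.default_rng(2)
i_tcad = i_pulse(t, 2e-3, 20e-12, 200e-12) + 1e-5 * rng.standard_normal(t.size)

popt, _ = curve_fit(i_pulse, t, i_tcad, p0=(1e-3, 10e-12, 100e-12))
print("I0 = %.2e A, tau_r = %.1e s, tau_f = %.1e s" % tuple(popt))
# The fitted pulse can then drive a current source in a SPICE-level netlist.
```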
A Overall Issues Driving TCAD

The widely quoted International Technology Roadmap for Semiconductors (ITRS) [IT06] enumerates the critical device dimensions (physical gate lengths, oxide thickness, junction depths, etc.) needed to meet performance goals of the future. The device of the future is almost certain to have an alternate gate dielectric beyond today’s nitrided oxides. It is possible the device could have a midband-gap workfunction metal. It is also likely that planar bulk CMOS will be replaced with novel dual-gate structures to better control short-channel behaviors. SiGe strained layers and capping layers will also be engineered to produce strain to boost mobility. The likely gate stack will contain vastly different materials than today’s poly / oxide / doped silicon device. It will have meta-stable dopant concentrations in the source and drain to attempt to control parasitics. Mechanical stress and how it influences processing and device performance will be critical factors. At the nanometer scale, all materials are in close proximity and strain sources will have little room to relax. Manufacturing variation could well become a limiting factor as the number of atoms in the device shrinks. Longer-term device architectures include quantum wires, carbon nanotubes and perhaps eventually molecular devices.

Each of these new architectures, along with extremely scaled Si MOSFETs, represents new challenges for both TCAD process and device modeling – challenges which no tool existing today comprehensively addresses. The challenge for modeling will be greatly compounded when both channel and junction dimensions are less than 10 nm. In this regime, the fundamental concepts of continuum modeling, like diffusion, average concentration and mobility, lose their meaning because of the lack of a statistically significant number of ions, carrier scattering events, and even electrons [Ve93]. At these dimensions, individual ions and extended defects can profoundly affect device structure, quantum transport effects are amplified, and contact regions dominate performance because the channel resistance is so small. We are seeing the effects of approaching these limits even today. Statistical fluctuation effects modulate threshold voltages due to the finite number of dopants in the channel [St98]. Carrier mobility modeling must be treated empirically for each technology, with ballistic, quantum, and other non-equilibrium effects all lumped in with bulk parameters [Wa04].

For the projected device, TCAD is likely to be more necessary than it was for predecessor planar bulk devices. For close to twenty years, we have been able to scale bulk CMOS in fairly straightforward ways: today’s device looks much like those from the mid-’80s. Since the future device will feature different dominant transport, new materials, and meta-stable processing, the need for TCAD is greater than ever before. Experimental structures cost more with each generation, and evaluating device options is more difficult and complicated than ever before. This is certainly true when attempting to understand radiation hardness.

Process and device TCAD in industry today has two primary functions. The first is to assist with direct process design, which requires the simulation tools to be calibrated to a
reasonably well-characterized process flow. Once calibrated, the tools can be used to target specific process parameters, explain data and debug process issues, and predict the performance of new process options. The objective is accurate quantitative prediction. The characterization effort required to calibrate process and device tools is lengthy, requiring substantial use of SIMS for implant and anneal conditions, TEM device cross-sections (to verify dimensions), and comprehensive I-V and C-V data. There needs to be a tight partnership between the process and the TCAD engineers, the latter of whom must have expertise with both process and device models. Of course, the tools themselves must be able to model, at least empirically, all important phenomena that determine performance.

The second important way tools are used today is to gain conceptual understanding. This understanding can take the form of examining idealized design trade-offs or investigating entirely new device or process concepts. In this role, detailed calibration of the tools is not needed, but the underlying physics must be correct in order to capture the right trends. In these new design spaces, empirical models cannot be relied upon. This is the primary mode in which TCAD has been used in the radiation effects community; it has not been particularly feasible to run enough events to be statistically significant.

Finally, as scaling challenges in traditional Si technology open the door for more serious consideration of new device concepts and materials, accurately assessing these new options with simulation will become of critical importance. Each of these new device architectures will require incredibly expensive changes in process tools, and simulation work that can reliably sift through the myriad of options can potentially save a huge amount of resources. The radiation tolerance of these process options will need to be evaluated. Clearly the TCAD community needs to prepare for this by working on an infrastructure flexible and physically detailed enough to evaluate these options and the inherent statistical variations involved with them.
II Numerical Approximations

Underlying all TCAD tools are numerical approximations. These numerical approximations control the convergence, the CPU time consumed, and the error in the simulation results. Some approximations aid overall convergence and some hinder it. Most users of device simulation have had problems with convergence. We all want simulations to run fast, and different numerical techniques offer different CPU trade-offs. Most importantly, numerical approximations control the calculation error. All simulations (process, device, or circuit) contain errors from their numerical approximations. Understanding and controlling the sources of error are critical to getting the desired results. In this section we’ll discuss error from time and spatial approximations.
A Time Discretization

Obviously, radiation events at the device level are by their nature transient. It is absolutely necessary to simulate accurately the evolution of the charge and terminal currents in time. The simplest way to approach transient simulation in a device simulator is to recast the differential equation into integral form:
$$\frac{\partial C}{\partial t} = F(C,t) \qquad\Longrightarrow\qquad C(t_1) - C(t_0) = \int_{t_0}^{t_1} F(C,t)\,dt$$
In these equations, C is a function of time and F is a function of C and t. No statement is being made about F as of yet – it could also be a differential operator. Most methods of solving the time-dependent equation amount to forming a polynomial approximation to F(C,t) and integrating it in time. In choosing an appropriate method, there are three main issues. First is the accuracy of the method. Second is the computation time required to solve the equation. Third is the stability of the method. Accuracy refers to the overall error in the approximations; generally, a user wishes to set a tolerance for the computation. Stability refers to the ability to damp errors: does the error accumulate or decrease as the timesteps go forward? If there is an error in C(t0), does it get larger or smaller in C(t1)? As an illustration, let’s use the simple explicit (forward) Euler method:
$$C(t_1) - C(t_0) = (t_1 - t_0)\, F(C(t_0), t_0)$$

The value of the integral is approximated as the value of F at the beginning of the interval times the width of the time interval (Figure 1). This is a closed-form, or explicit, expression, since the unknown concentration at the end of the interval appears only once. We can compute the value point by point throughout time, which can provide considerable benefit computationally. However, the error is not as well contained. If we use a Taylor series to approximate F(C,t), we can write the error in the integral as:

$$C(t_1) - C(t_0) = (t_1 - t_0)\, F(C(t_0), t_0) + \frac{1}{2}(t_1 - t_0)^2\, \frac{\partial F}{\partial t}$$

The additional term represents the largest neglected component of the Taylor series and is proportional to the time interval squared and the first time derivative of the function F. This makes good qualitative sense: if the function is changing rapidly in time, assuming F is constant at its value at the beginning of the interval is a poor approximation. Since the error is proportional to the square of the size of the time interval, this is known as a first-order accurate method (the first term of the Taylor series is in the calculation).
Figure 1 – Schematic of the explicit Euler and trapezoidal rule integrations of F(C,t) from t0 to t1. The trapezoidal rule is shown on the left, the Euler rule on the right.
Stability is usually analyzed with a test problem:

$$\frac{dy}{dt} = \lambda y$$

with λ << 0. This has the solution e^{λt}, so for any large value of t the solution should go to zero. Applying a time-step method to this equation produces formulas of the form

$$y_n = A(\lambda \Delta t)\, y_{n-1}$$

where A is a function that depends on the method, and Δt, the timestep, is shorthand for t1 − t0. For the case of the explicit Euler method:

$$A(\lambda \Delta t) = 1 + \lambda \Delta t$$

Any error in the previous solution is damped (or magnified) by this factor, so it is obviously desirable that its magnitude be less than one for this test problem. Time steps large enough to violate this condition cause the magnitude of the approximate solution to increase, which is clearly not in line with the continuous and correct solution. A method is said to be A-stable if the numerical approximation goes to zero as n goes to infinity for any λΔt with negative real part. The explicit Euler method fails this test: its approximation decays only while |1 + λΔt| < 1, so its stability depends on the timestep size. For some problems, even A-stability is not sufficient; this includes most interesting device simulations. A method is L-stable if it is A-stable and the magnitude of A goes to zero as λΔt goes to infinity. L-stable methods do not force any restriction on the time step size to maintain qualitatively correct answers. In practice, it is hard to tell where the stability limit lies for a particular problem, so L-stable methods are much more desirable than conditionally stable or merely A-stable methods.

The trapezoidal rule is another common time discretization technique. It uses both endpoints of the interval (Figure 1) to compute the integral:

$$C_1 - C_0 = \frac{\Delta t}{2}\left[F_0 + F_1\right] + \text{Error}, \qquad \text{Error} = \frac{\Delta t^3}{12}\, \frac{\partial^2 F}{\partial t^2}$$
Here the subscripts indicate the time point at which the evaluation is done. The error term is proportional to the third power of the time step – this method is second-order accurate. This means that for the same accuracy, much larger time steps are available with the trapezoidal rule than with explicit Euler. However, this method is implicit: the value of C1 cannot be computed in closed form because it is likely a component of F1. This means that in many practical cases an iterative technique must be employed to solve the equations; Newton’s method is the technique most frequently employed.

Figure 2 – Ringing behavior with the trapezoidal rule.
While more accurate than explicit Euler, the trapezoidal rule still falls short on stability. It can be shown for the test problem that

$$A(\lambda \Delta t) = \frac{1 + \tfrac{1}{2}\lambda \Delta t}{1 - \tfrac{1}{2}\lambda \Delta t}$$

This is A-stable: since λ is negative, each application of the trapezoidal rule decreases the magnitude of the solution. However, it is not L-stable: A goes to minus one as λΔt goes to infinity. The trapezoidal rule is therefore prone to “ringing” – an oscillation of the solution value from positive to negative and back to positive. Figure 2 shows this in operation. This presents obvious problems in the context of solving for the electron concentration, as an example, since a negative concentration has no physical meaning.

Most device simulators, including Sentaurus Device, use the TR-BDF method [Ba85]. This is a compound method made up of a trapezoidal (TR) step followed by a second-order backward difference formula (BDF2). The method is overall second-order accurate and L-stable. Some ringing problems can occur within the first TR step, but these can be trapped and smaller time steps chosen to avoid them. In general, stability is not a concern with this technique.
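The contrast between the trapezoidal rule and the compound TR-BDF step is easy to reproduce on the linear test problem. The sketch below is a minimal illustration of the two schemes for dy/dt = λy, not simulator source code; the split point γ = 2 − √2 is the standard choice from [Ba85].

```python
import numpy as np

lam, dt, nsteps = -1.0e3, 0.05, 8          # |lam*dt| = 50: a deliberately large step
gamma = 2.0 - np.sqrt(2.0)                 # standard TR-BDF split point [Ba85]

def tr(y, h):                              # trapezoidal: A = (1 + h*lam/2)/(1 - h*lam/2)
    return y * (1.0 + 0.5 * h * lam) / (1.0 - 0.5 * h * lam)

def tr_bdf2(y, h):
    yg = tr(y, gamma * h)                  # TR stage to t_n + gamma*h
    # BDF2 on the unequal nodes t_n, t_n + gamma*h, t_n + h:
    a0 = (1.0 - gamma) / (gamma * h)
    a1 = -1.0 / (gamma * (1.0 - gamma) * h)
    a2 = (2.0 - gamma) / ((1.0 - gamma) * h)
    return (a0 * y + a1 * yg) / (lam - a2) # solve a0*y + a1*yg + a2*y1 = lam*y1

y_tr, y_cmp = 1.0, 1.0
for n in range(nsteps):
    y_tr, y_cmp = tr(y_tr, dt), tr_bdf2(y_cmp, dt)
    print(f"step {n+1}: TR {y_tr:+.3e}   TR-BDF {y_cmp:+.3e}")
# TR rings with barely damped amplitude (A -> -1 as |lam*dt| grows);
# the L-stable TR-BDF compound step damps the solution strongly.
```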
However, accuracy with the TR-BDF method is important to control. The method is second-order accurate, so the error is proportional to the time step cubed and the second derivative in time of F. For device simulation, F is the continuity equation and is generally proportional to current. When the current is changing rapidly during a simulation, very small time steps must be employed. In a single-event-upset simulation this is almost always true as the deposited carriers flow from the radiation strike to the contacts or recombine. Fortunately, most simulators can self-estimate the time step, and this works well after the first time step. An estimator is used to guess the magnitude of the next time step after solving one step. Commonly employed is a Milne device – using another method to estimate the solution at the same point in time. This is easily done with TR-BDF: a single larger TR step over the whole compound method can be employed, and the magnitude of the difference can be used to estimate the second derivative of F, and therefore the magnitude of the next time step. This is typically conservative for most problems and provides good control over the error.
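A minimal sketch of that estimator logic, under the assumptions just described (local error of order Δt³, Milne-style estimate from the difference of two solutions at the same time point):

```python
def next_dt(dt, y_compound, y_single_tr, tol, order=2):
    # Milne-style local error estimate: difference between the compound
    # TR-BDF result and a single large TR step over the same interval.
    err = max(abs(y_single_tr - y_compound), 1e-30)   # guard divide-by-zero
    # Local error scales like dt**(order+1), so size the next step to hit tol.
    return dt * (tol / err) ** (1.0 / (order + 1))

# Example: an error estimate 8x the tolerance halves the step (8**(1/3) = 2).
print(next_dt(1e-12, 1.0, 1.0 + 8e-6, tol=1e-6))      # -> 5e-13
```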
B Spatial Discretization

Automatic estimation and control of error is unfortunately far more complicated in space than it is in time. Users are typically left on their own to determine appropriate grid spacing. As will be shown in this section, single event upset problems complicate the situation enormously.

To keep things simple, let’s consider Poisson’s equation for our analysis of spatial errors. Most of this analysis applies equally to the continuity equations, but the math is simpler since there are no drift terms to analyze. Poisson’s equation is:

$$\nabla \cdot \left(\varepsilon \nabla \psi\right) = -q\,(N_D - N_A - n + p)$$

where ε is the permittivity, ψ is the potential, q is the electronic charge, ND and NA are the donor and acceptor concentrations, and n and p are the electron and hole concentrations. The electron and hole concentrations are independent variables whose solutions are given by the continuity equations. For most finite-volume approaches (typical in device simulation), the equation is transformed with Gauss’ theorem to:

$$\oint \left(-\varepsilon \nabla \psi\right) \cdot \hat{n}\; dl = \int q\,(N_D - N_A - n + p)\, dV$$
The left-hand side is a contour integral around a control volume: the electric field is dotted with the outward normal and integrated over the surface of the volume. The right-hand side is a volume integral of the charge. Both of these can now be treated separately. The contour integral component is easier to discuss in two dimensions. Figure 3 shows the typical case for a simple grid. The control volume is defined to be the area closer to the grid node than to any other grid node, which can be shown to be bounded by the perpendicular bisectors of the triangle sides. The contour integral then runs through the
midpoints of the edges connected to the node. Evaluating the electric field is then easy to do as a straight-line approximation across the edge. Simply subtracting the values at the nodes and dividing by the length of the edge gives a good approximation to the gradient. By construction, this gradient is perpendicular to the contour line segment.
Figure 3 – The dot in the center is the grid node. The solid lines are the grid lines. The dashed lines are the control volume (or area in 2D). The volume integral is evaluated over this region; the line integral is evaluated around the border of the control volume.

The volume integral can also be easily approximated. The node lies near the center of the interval, so the simplest approximation is to use the nodal value and multiply by the area associated with the node. It is easier to construct the error expression for this approximation than for the contour integral expression. If the point is at the center of the interval, the error can be written as:
$$\text{Err} = \Delta x^2\, \frac{\partial^2 \rho}{\partial x^2} + \Delta y^2\, \frac{\partial^2 \rho}{\partial y^2} + \Delta x\, \Delta y\, \frac{\partial^2 \rho}{\partial x\, \partial y}$$
where ρ is the charge being integrated. This assumes the node is exactly in the middle of the interval in both the x and y directions. The error is proportional to the grid spacing squared and the second derivative of the integrated quantity. A similar expression is valid for the contour integral, although it is harder to derive. This gives some useful guidelines for picking the grid spacing. For Poisson’s equation, the second derivative of potential is equal to the charge: where charge is high, the grid should be small. Inversion layers and depletion layers should have small grid spacing. Changes in grid spacing should be made gradually in any direction, because moving the node away from the middle of the interval increases the error.

The carrier continuity equations are discretized in the same way as Poisson’s equation. Instead of the electric field, the current is substituted; the charge is replaced by the time rate of change of the carrier concentration and any recombination that may be occurring. This has a nice intuitive feel: in the absence of recombination, the net current into the box is
equal to the time rate of change of the carrier concentration in the box. By the same extension as with Poisson’s equation, places where the carrier density will be changing rapidly in time should have a fine grid. This leads to some difficulty for single-event upset simulation: the charge cloud deposited by the radiation strike needs a fairly fine grid, and as the charge is removed, the grid spacing in the vicinity of the strike can be coarsened. Radiation events are an ideal place for adaptive grid refinement.
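In one dimension the refinement criterion implied by the error expression above is simple to sketch. The fragment below is an illustrative toy, not a device-simulator mesher: it estimates the per-cell error as Δx² |∂²ρ/∂x²| and splits the cells that exceed a tolerance, which concentrates nodes around a sharp charge cloud such as an ion track.

```python
import numpy as np

def refine(x, rho, tol):
    d2 = np.gradient(np.gradient(rho, x), x)   # second-derivative estimate at nodes
    dx = np.diff(x)
    err = dx**2 * np.abs(d2[:-1])              # per-cell error estimate
    new_x = [x[0]]
    for i, e in enumerate(err):
        if e > tol:                            # split cells with too much error
            new_x.append(0.5 * (x[i] + x[i + 1]))
        new_x.append(x[i + 1])
    return np.array(new_x)

x = np.linspace(0.0, 1.0, 21)                  # coarse uniform grid (arbitrary units)
rho = np.exp(-((x - 0.5) / 0.05) ** 2)         # sharp charge cloud, e.g. an ion strike
print(len(refine(x, rho, tol=1e-3)), "nodes after one refinement pass")
```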
III Physical Approximations – Process Modeling

No radiation event simulation will be accurate if the structure is not accurate. Simple approximations to material shapes and doping profiles can introduce significant discrepancies between measured and simulated results. To avoid “garbage in, garbage out”, care must be taken to have an accurate representation of the device structure. This is usually done with process simulation, so a brief overview of these tools is in order.
A Process Flow

Most of industry today is using tools organized around three main techniques. High-temperature simulations are done primarily using descendants of SUPREM-IV [La88] or FLOOPS [La98] with finite element techniques. Implant simulations are done either using look-up tables based on moments or with Monte Carlo approaches [Kl92]. Pattern transfer steps (lithography, deposition, and etch) are usually solved with level-set based approaches [Ad94, Se97]. Commercially available tools like Sentaurus-Process offer capabilities in all three areas, so full-flow process integration examples can be done.

All of the commercial process simulator variants have implant models based on table look-up of moments of the ion distribution. A great deal of work has been done on characterizing and producing accurate moment tables. Since this is a table-based approach, it is not predictive [Pa90]: measurements must be done to determine the profile shape in advance of simulation. As implant energies have been reduced in processing, additional measurements and table entries have been required to support modeling. In addition to specific tuning for the implant species, the dose, energy, tilt angle, and rotation can be taken into account in the table. This allows dose effects on channeling, for example, to be handled cleanly. Obviously, new species are difficult to incorporate with this approach. New effects are also difficult to incorporate – for example, the amorphous layer depth depends on dose rate, which is not handled by moment-based approaches. Moment-based approaches are also generally poorly suited to predicting the damage profiles resulting from the implant.

Monte Carlo techniques have been employed for decades to perform predictive simulation of the dopant profile. TRIM and MARLOWE are well-known versions of these tools and exist in various variants today. The major advantage these tools have is that they provide a first-principles way of computing the damage profile. They can predict the onset and depth of amorphized layers, which is becoming increasingly important. A major challenge for these tools is to handle channeling – this is difficult to
do as the damage accumulates and the channels are closed off. Lucky-ion techniques are used to statistically enhance these rare events. The damage profiles produced provide starting conditions for the defect populations used in the dopant simulations.

SUPREM-IV and its commercial variants have been the industry workhorses for nearly two decades for thermal simulations. These codes feature hard-coded models discretized in space using finite volumes or finite elements. Several levels of models are usually available that allow users to trade CPU time for model complexity. More modern tools like FLOOPS and Sentaurus-Process offer scripting capabilities for model development and contain a range of built-in model scripts. Models for dopant annealing are based around point defects and typically feature a +1 approach to computing damage effects [Gi91]. The +1 approach assumes the implant process generates a number of interstitials equal to the doping profile. Although the actual damage is much larger, most of the damage is in the form of Frenkel pairs – interstitials and vacancies. These easily recombine in the substrate and produce a filled lattice site. The dopants are added to the crystal, and to accommodate them an interstitial is forced out by the dopant. This is the simple physical reasoning behind the +1 modeling approach. Empirically observed extended defects in silicon also tend to follow this rule – they contain a number of atoms after annealing approximately equal to the implant dose. In materials that have been amorphized, this approach needs to be adjusted to account for regrowth; in these cases, it is frequently better to use the full damage profile from Monte Carlo techniques.

The damage profile and extended defects are critical to modeling diffusion. There has been a lot of effort on modeling how the isolated point defects precipitate into extended defects. There is a hierarchy of defect structures from small clusters [St01] to {311} ribbon-like defects [Ea94] and then dislocation loops [Jo88]. During annealing, these defects form and then evolve. During the evolutionary process, the extended defects lose point defects to the environment at differing rates. As we will see, these processes are critical in determining the dopant diffusion and activation processes.
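As a concrete toy example combining the moment-table and +1 ideas described above, the sketch below reconstructs an implant profile from its first two tabulated moments (projected range Rp and straggle dRp) as a Gaussian, then takes the initial interstitial profile equal to the dopant profile per the +1 approximation. All numerical values are illustrative, not from a calibrated table.

```python
import numpy as np

def implant_profile(x, dose, Rp, dRp):
    # Gaussian reconstruction from tabulated moments (real tables also carry
    # skewness/kurtosis and tilt/rotation dependence, e.g. Pearson-IV fits).
    return dose / (np.sqrt(2.0 * np.pi) * dRp) * np.exp(-0.5 * ((x - Rp) / dRp) ** 2)

x = np.linspace(0.0, 0.2e-4, 200)                               # depth (cm)
boron = implant_profile(x, dose=1e14, Rp=0.05e-4, dRp=0.015e-4) # cm^-3
interstitials = boron.copy()             # "+1": one interstitial per implanted dopant
print("peak B %.2e cm^-3 at %.0f nm" % (boron.max(), x[np.argmax(boron)] * 1e7))
```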
[Figure 4 schematic: a dopant atom A diffusing by a vacancy mechanism (A + V → AV), an interstitial mechanism (A + I → AI), and an interstitialcy mechanism (A + I → AI); dopant and silicon atoms are distinguished in the figure.]
Figure 4 – Depiction of diffusion paths. The first is the vacancy exchange mechanism. The second is the interstitial kick-out mechanism. The third is the shared lattice mechanism, referred to as an interstitialcy.

Dopant diffusion is typically handled by using a diffusivity that depends on the local point defect concentration: D = D* (fI CI/CI* + (1 − fI) CV/CV*), where the subscripts I and V denote interstitials and vacancies, the superscript * denotes the thermal equilibrium value, D is the diffusivity, C is the concentration of the defect, and fI is the fraction of the diffusion that occurs through an interstitial mechanism. The dopant continuity differential equation is then coupled with ones for the interstitials and vacancies – this gives rise to the widely used three-stream (interstitial, vacancy, and dopant) diffusion models (Figure 4 shows an example of the interactions). These equations typically include electric field and high-concentration diffusion effects as well. An excellent review of these interactions is in the Plummer, Deal, and Griffin book [Pl00]. The +1 approach with appropriate tuning can predict the transient enhanced diffusion (TED) behavior over a wide range of implant and annealing conditions. Prediction of junction depth and active dopant concentration is the primary aim of these tools. As thermal cycles have shortened, diffusion has decreased. These anneals frequently have non-equilibrium processes controlling activation. Predicting activation has become more important than predicting diffusion, since the diffusion is being minimized by the anneal cycle. Process developers are still looking to maximize the activation during these minimum cycles.
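The point-defect coupling of the diffusivity can be illustrated numerically. The sketch below evaluates the three-stream enhancement factor for an assumed interstitial supersaturation; D*, fI, and the supersaturation values are illustrative placeholders, not calibrated values.

```python
# Sketch of the point-defect-coupled diffusivity from the text:
#   D = D* (fI * CI/CI* + (1 - fI) * CV/CV*)
def enhanced_diffusivity(D_star, f_I, CI_over_CIstar, CV_over_CVstar):
    return D_star * (f_I * CI_over_CIstar + (1.0 - f_I) * CV_over_CVstar)

D_star = 1e-15   # cm^2/s, equilibrium dopant diffusivity (illustrative)
f_I = 0.95       # boron diffuses almost entirely via interstitials

# A 1000x interstitial supersaturation (typical of TED) gives ~950x enhancement:
print(enhanced_diffusivity(D_star, f_I, 1000.0, 1.0) / D_star)  # -> ~950
```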
[Figure 5 schematic: boron–interstitial clustering reaction paths. Capture and release of I and Bi link B and Bi to progressively larger complexes (B2, B2I, B2I2, B2I3; B3 through B3I3; B4 through B4I4).]
Figure 5 – Boron deactivation paths. Boron and boron-interstitial pairs can be captured to form immobile, inactive complexes. Obviously, this can be complicated, and the interstitial concentration is critical to accurate modeling.
Activation modeling is quite complex. For example, boron has a number of complex clusters [Pe98, Li02] that can form during processing, and the actual activation can be much lower than the solubility (Figure 5). This clustering process is driven by interstitial release from damage. So the dynamics of activation can be complex and are generally not modeled by a "solid solubility" limit. Activation levels must be closely tuned and calibrated to achieve accurate device simulation. The device simulation tool will depend more on the active concentration than the chemical concentration, but process simulation tools are generally more accurate at predicting the chemical concentration. The point defect population controls most diffusion and activation processes. Modeling of the point defect populations is therefore a critical underlying capability. More sophisticated models beyond the +1 approach add differential equations to account for extended defects (dislocation loops, {311} defects) and how they interact with point defects [Av04]. The extended defects act as a reservoir for point defects, so their modeling is critical. Figure 6 shows these defect structures as observed in plan-view transmission electron microscopy. An important component in these models is the surface. Oxide/silicon interfaces have been the most characterized, and there is general agreement that this interface is a strong sink for point defects. However, changes in the capping material (nitrided oxides, for example) could have a large effect on how the interface interacts with point defects. Most novel gate materials have not been adequately characterized for how they interact with point defects. This can be especially important in predicting the doping profile laterally under the gate edge and can control the reverse short-channel behavior.
Figure 6 – Examples of TEM images of defect structures. The left plan-view TEM shows {311} defects. The right contains dislocation loops. By measuring sizes, a calculation of the number of interstitials contained in each defect is possible.

Oxide and silicide formation are typically based around an extension to multiple dimensions of the Deal-Grove model for oxidation. The reacting species is solved for to
compute the concentration of reactant at the growing interface. The solution to this partial differential equation provides growth rates, but not the flow of the material in response to growth. This is usually treated with a viscoelastic solver. More sophisticated approaches use the resulting mechanical forces to alter the flow, growth rate, and diffusion of the reactant species. Oxide growth is the most calibrated, and extensive work was done on LOCOS structures to parameterize growth and flow. Generally, these approaches are most accurate for situations in which the viscous flow of the material is dominant over the elastic behavior. This is true for almost all LOCOS approaches, but it is less true for thin oxide grown as a liner in STI processes. These corners and shapes can be important for single-event simulation. The local shape of the corner and the actual material boundaries could significantly alter the charge collection for an event near the edge. In addition, the shapes are critical in computing the correct mechanical strain and stress. Of increasing importance is the mechanical strain in the crystal. In many modern devices, strain is being intentionally introduced to modulate the mobility (see next section). The strain is not localized in the channel and in fact extends several microns. These intentional sources of strain interact with unintentional sources (like dopants, extended defects, and STI) to create the final strain profile of the structure. This can be critical for single-event simulation, since the strain will alter mobility throughout the device and charge path. The strain will make the mobility anisotropic, as different directions in the strain field will be preferred for current flow. Modern process simulation can capture strain from a number of sources and compute the distributed strain in the crystal. It is important that this be computed and integrated into the device simulator. To date, most approaches to do so are ad hoc and are not appropriate for all situations.
III Physical Approximations - Device Modeling

A Tool Types

There are two main types of tools in widespread use today. The first are single-particle methods. The second are continuum methods. Both can solve the same physics, although some things are computationally easier in one or the other approach. Particle simulators are often referred to as Monte Carlo solvers because of the random nature of the simulation. In Monte Carlo codes, single electrons are followed through accelerating fields, scattering events, and recombination. A statistically meaningful number of electrons must be followed to compute currents. Statistical enhancements are required because the rare events are actually the most interesting to follow. For example, since most electrons never leave the source, the code chooses some "lucky" electrons and scales their statistics up. Electric fields are computed through Poisson's equation. Self-consistent solution is a problem – the electric field is determined by the final position of the electrons, which is not known a priori. Iterative schemes are employed to make this work correctly.
Continuum solvers usually work with moments of the Boltzmann transport equation. The most familiar approach is the drift-diffusion model of current flow in a continuity equation. The differential equation is discretized in space, typically using finite-volume techniques. Generally speaking, continuum methods are faster than Monte Carlo techniques, for which solving a sufficient number of particles takes a long time. Monte Carlo techniques can more easily implement more realistic physics for velocity overshoot, complex changes in the band structure, and quantum mechanical corrections. For most devices, however, the physics contained in drift-diffusion is sufficient. Continuum tools are also the ones in most widespread use, so I'll limit the discussion to these. There is a hierarchy in today's device simulation tools from least to most sophisticated in terms of the physics modeled. At one end are models based on various approximations of the Boltzmann transport equation, where carriers are treated as classical particles. At the other end are methods for including the wave nature of carriers to varying degrees, the most fundamental of which are based upon solving the full Schrödinger equation. State-of-the-art device tools used for CMOS process development almost exclusively rely on drift-diffusion based approaches, supplemented with models to account for first-order quantum effects and mobility in inversion layers.
B Basic Approximations

The basic system of equations for device simulation consists of Poisson's equation:

$$\nabla \cdot (\varepsilon \nabla \psi) = -q\,(p - n + N_D - N_A)$$
where ε is the electrical permittivity, q is the electronic charge, n and p are the electron and hole concentrations, ND and NA are the donor and acceptor concentrations, and ψ is the electrostatic potential. In silicon devices, ψ is usually referenced to zero at the silicon intrinsic potential. For more complex heterostructure devices, the potential reference has to be established consistently across the material stack. The electron and hole concentrations are determined by the continuity equations:
$$\nabla \cdot \vec{J}_n = qR + q\,\frac{\partial n}{\partial t}, \qquad -\nabla \cdot \vec{J}_p = qR + q\,\frac{\partial p}{\partial t}$$
where R is the net electron-hole recombination/generation rate and J is the current density of the appropriate carrier. Recombination is critical for single-event upset simulations, and we will return to the model options that are available. The complexity, of course, comes from how the current density and recombination are defined. In the simplest case, the current density can be expressed as:

$$\vec{J} = qn\mu\nabla\phi = qn\mu\vec{E} + qD\nabla n$$
where φ is the quasi-Fermi level, E is the local electric field, D is the diffusion coefficient, and µ is the mobility. The first expression is more general and can be applied more easily to a wide variety of cases. The second expression for the current is the well-known and more familiar drift-diffusion model. This describes transport of mobile carriers due to electrostatic and statistical forces. The physical picture of transport is that carriers move as classical particles with lots of scattering (from ions and phonons). It then follows that carriers are assumed to be in equilibrium with their immediate environment, so that all transport properties can be described by macroscopic quantities such as mobility coefficients, which depend only on "local" quantities like dopant concentration or electric field. Most device simulators also offer a model called variously thermodynamic or lattice heating. These models compute the local heating from all terms in the device [Wa89]. These can come from Joule heating and recombination terms. The lattice heating equation is solved with sources of heat coming from the carrier transport equations. The current density is also modified to include the driving force of temperature. This self-heating behavior is usually more important for bipolar devices than for MOS. For some single-event upset cases, it may also be important, since a lot of heat will be generated through recombination events in silicon. This heat generation could also be quite a distance from the contact temperature sinks. This local heating could thereby influence and modify current flow locally in the charge distribution. Of critical importance in simulating the local heating are the contacts. Most of the device heat will flow out the contacts into the interconnect and package. This has to be approximated correctly to get the temperature in the device correct. The physical picture presented by drift-diffusion breaks down for sub-micron devices, where the average distance between scattering events becomes comparable to the device dimensions. In this situation, carriers are no longer in equilibrium with the local environment, and simplified transport models like field dependence are no longer applicable. Despite this fact, tools based upon the drift-diffusion model are still the most widely used in industry today. The reason for drift-diffusion's success follows from the fact that most industrial device design analysis relies primarily on getting the electrostatics of the problem correct, while the transport part can be calibrated within a design space by adjusting the mobility parameters to measurement. The first part, the correct electrostatics, is adequately handled by solving Poisson's equation together with drift-diffusion's sufficiently accurate treatment of mobile charge allocation (if the structure is correct). The second part, mobility calibration, affords the tools a mechanism for obtaining quantitative accuracy in calculating currents. Both attributes together allow technologists to adequately target process parameters, analyze data, and compare the performance of new process options within a given technology generation. Although these adjustments might be adequate for CMOS simulation, single-event upset is not CMOS simulation. The current paths are more complex and depend on the radiation strike. In CMOS simulation, the current is constrained to the inversion layer – something not true with single-event simulation. Some of the same corrections applied to
CMOS simulation might make single-event simulation worse, since the single-event current will not be flowing in the inversion layer. Careful consideration of models will be required to do accurate single-event simulation. This will be discussed in more detail when we describe model choices in detail. When drift-diffusion cannot be used, there are other simulation options. In these cases, the carrier transport cannot be treated entirely as a local phenomenon. The carriers get "hot" and have velocities above those predicted by local properties. In these cases, it can be useful to use the energy balance approach, in which the carrier energy is solved for explicitly [St62, Bl70]. The energy-balance model alters the current flow equation to be:

$$\vec{J}_n = q\mu_n\left(n\nabla E_C + k_B\nabla(n T_n) - 1.5\,n\,k_B T_n\,\nabla \ln m_e\right)$$

where EC is the conduction band energy, Tn is the electron temperature, kB is Boltzmann's constant, and me is the electron mass. The first term is the drift term generalized to include changes in electrostatic potential, electron affinity, and the band gap. The remaining terms handle the diffusion, the carrier temperature gradient, and the spatial variation in effective mass. The temperature-gradient and effective-mass contributions are typically smaller than the first two, but can be important in some cases. A similar equation would be written for the hole transport. The difficulty comes in computing the local carrier temperature. This is the typical default set of equations for Sentaurus-Device:

$$\frac{\partial W_n}{\partial t} + \nabla \cdot \vec{S}_n = \vec{J}_n \cdot \nabla E_C - H_n - \frac{W_n - W_{n0}}{\tau_n}$$

$$\vec{S}_n = -\frac{3}{2}\left(\frac{k_B T_n}{q}\,\vec{J}_n + \frac{k_B^2\, n\,\mu_n T_n}{q}\,\nabla T_n\right)$$
where Sn is the energy flux, Hn is the energy gain or loss from recombination, Wn is the energy density, and Wno is the equilibrium energy density. This corresponds to the formulation of Stratton. The energy density is expressed as:
$$W_n = \frac{3}{2}\,n k_B T_n$$

A similar set of equations can be constructed for the hole energy. To first order, this allows carriers to gain energy in a high-field region and carry that increased velocity across the channel. There have been several different formulations, and Sentaurus-Device offers several ways to configure the equations. It is very important for single-event simulation to include the energy from recombination events. Most single-event simulations do not include energy associated with the charge cloud: the carriers are entered into the simulation at thermal equilibrium. With the energy balance system, the carriers could be entered with higher temperatures – as is likely to be true physically.
IV - Models for Device Simulation

Modern drift-diffusion simulators feature additional modifications to make them more suitable for modeling transport in MOSFETs, where carriers move in high-field inversion layers near the insulating SiO2 surface. The first is a mobility model tailored to account for the effects of both vertical and lateral fields. The second is a simple correction for the quantum repulsion at the SiO2 interface. Even though the picture of electrons as classical particles is questionable, these corrections enable simulation for a range of deep submicron devices. There can be multiple terms that contribute to mobility. These are usually combined using Matthiessen's rule:

$$\frac{1}{\mu} = \frac{1}{\mu_b} + \frac{1}{\mu_s} + \cdots$$
Multiple terms from the bulk and surface can be combined in this fashion.

A Low Field Mobility
The Phillips unified mobility [Kl92] is so called because it unifies the description of majority and minority carrier mobility. It includes carrier-carrier scattering, ionized impurity scattering and screening by carriers, and impurity clustering. It was originally developed for bipolar device simulation but has also been used in CMOS device simulation. This model is preferred for single-event work for two reasons. First, the unified treatment of the majority and minority carriers means there will be no discontinuity in the mobility when the charge cloud is deposited across a junction. Second, since there will be a high concentration of created carriers, including carrier-carrier scattering in a unified way is critical: in the early phases of the evolution of the deposited charge, carrier-carrier scattering will dominate. The Phillips model contains components for the lattice phonon scattering and scattering from ionized centers. The two terms are combined using Matthiessen's rule. The lattice phonon component depends only on the lattice temperature. The scattering component is quite complicated:

$$\mu_n = \mu_1\, \frac{N_{sc}}{N_{sc,eff}}\left(\frac{N_{ref}}{N_{sc}}\right)^{\alpha} + \mu_2\,\frac{n + p}{N_{sc,eff}}$$

$$N_{sc} = N_D + N_A + p, \qquad N_{sc,eff} = N_D + G\,N_A + \frac{p}{F}$$
where ND and NA are the donor and acceptor concentrations, µ1 and µ2 are model parameters, G is a minority impurity screening function, F is the electron-hole screening function, α is a fitting parameter, and Nref is a parameter dependent on the type of majority doping (arsenic, phosphorus, or boron). Similar expressions with different parameters apply to holes. The
model is well tuned to a wide variety of conditions. Temperature dependence is included in the µ1, µ2, G and F terms. These are scaled as appropriate to the lattice temperature or carrier temperature if energy balance is being solved.
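To illustrate the doping dependence plotted in Figure 7, the sketch below uses the simpler Masetti-type empirical fit with commonly quoted literature parameters for electrons in silicon. It is not the Phillips unified model (it omits carrier-carrier scattering and screening), but it reproduces the qualitative roll-off with doping.

```python
# Doping-dependent low-field electron mobility: Masetti-type empirical fit.
# Parameters are commonly quoted literature values for electrons in Si and
# are illustrative here, not the Phillips model coefficients.
def mu_doping(N):
    """Low-field electron mobility (cm^2/Vs) vs. total doping N (cm^-3)."""
    mu_min, mu_max, mu_1 = 52.2, 1417.0, 43.4
    Cr, Cs, alpha, beta = 9.68e16, 3.43e20, 0.680, 2.0
    return (mu_min
            + (mu_max - mu_min) / (1.0 + (N / Cr) ** alpha)
            - mu_1 / (1.0 + (Cs / N) ** beta))

for N in [1e16, 1e17, 1e18, 1e19, 1e20]:
    print(f"N = {N:.0e} cm^-3 : mu = {mu_doping(N):6.0f} cm^2/Vs")
# -> ~1180 at 1e16, ~730 at 1e17, ~280 at 1e18, levels off near ~60 at 1e20
```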
Figure 7 shows the mobility of electrons as a function of doping for negligible acceptor and hole concentrations. The mobility is a strong function of doping over moderate doping levels and then levels off at high concentration. At typical doping concentrations in the source, drain, and extension, the mobility is essentially constant.
Figure 7 – Electron mobility as a function of donor doping concentration.

Figure 8 shows the electron mobility at a fixed donor concentration of 10¹⁶ cm⁻³ with varying hole concentration. At the high concentrations of electron-hole pairs found in the particle track of a radiation strike, Figure 8 demonstrates how important the carrier-carrier scattering terms are. These terms have been calibrated against high-injection conditions in bipolar devices.
Figure 8 – Electron mobility as a function of hole concentration at a fixed donor concentration of 10¹⁶ cm⁻³.
B Surface Scattering

In modern MOSFETs, surface scattering limits the inversion layer mobility. To get the correct device current, it is necessary to account for these terms. Two terms are important: acoustic surface phonon scattering and surface roughness scattering. There are several treatments available, and in this work I will describe the Lombardi model [Lo88]:

$$\mu_{ac} = \frac{B}{E_\perp} + \frac{C\,(N/N_0)}{E_\perp^{1/3}\,(T/T_0)^{k}}$$

$$\mu_{sr} = \left(\frac{(E_\perp/E_{ref})^{2}}{\delta} + \frac{E_\perp^{3}}{\eta}\right)^{-1}$$

where N is the sum of the donor and acceptor concentrations, E⊥ is the perpendicular electric field, and the remaining terms are model parameters. The perpendicular electric field controls the mobility reduction; Figure 9 illustrates it. To make sure that these terms are applied only at the surface, they are combined with a modified Matthiessen's rule:

$$\frac{1}{\mu} = \frac{1}{\mu_b} + \frac{D}{\mu_{ac}} + \frac{D}{\mu_{sr}} + \cdots, \qquad D = e^{-x/\lambda}$$

The scaling parameter D keeps the scattering confined to the surface region; x is the distance to the surface and λ is a parameter that scales away the surface terms. This approach needs to be used carefully for single-event upset simulation. A charge cloud may get significant contributions from surface terms but still be some distance away from
the surface. The default for λ in Sentaurus-Device, for example, is 10 nm. Significant contributions from surface terms could still be important some distance from the surface.
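A minimal sketch of this depth-damped combination shows how quickly the surface terms fade: with the assumed (illustrative) component mobilities below, the total mobility recovers most of its bulk value within a few decay lengths of the interface.

```python
import numpy as np

# Depth-damped Matthiessen combination described above:
#   1/mu = 1/mu_b + D/mu_ac + D/mu_sr,  D = exp(-x/lambda)
def total_mobility(mu_b, mu_ac, mu_sr, x, lam=10e-7):   # x, lam in cm
    D = np.exp(-x / lam)
    return 1.0 / (1.0 / mu_b + D / mu_ac + D / mu_sr)

mu_b, mu_ac, mu_sr = 400.0, 300.0, 500.0   # cm^2/Vs, illustrative placeholders
for x_nm in [0, 5, 10, 50]:
    mu = total_mobility(mu_b, mu_ac, mu_sr, x_nm * 1e-7)
    print(f"x = {x_nm:3d} nm : mu = {mu:5.1f} cm^2/Vs")
# -> ~127 at the interface, recovering toward the 400 cm^2/Vs bulk value by 50 nm
```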
Figure 9 – Surface mobility as a function of perpendicular field for a doping of 10¹⁸ cm⁻³.
One of the more difficult things for a device simulator is the computation of the perpendicular electric field. Generally, terms are defined on each edge of the mesh. Quantities such as the mobility, electric field, and diffusion gradient can be easily computed on the edge, and the current flux can be assembled. The perpendicular field, however, requires information from off the edge. There are two means for computing the appropriate perpendicular electric field. The first is to specify an interface and compute the field perpendicular to the interface. This requires each element in the mesh to store a vector in the perpendicular direction – an easy enough calculation. This method can converge well and is particularly useful when the current is flowing parallel to the interface. In a single-event upset case, the current vector may not align with the channel, and it is not clear how this could distort the results. The other technique is to compute the field perpendicular to the current flow. This is a more complicated dependence. It can also break down in areas of low current, since the current direction is not well defined there, and can negatively impact convergence. Since the point of these models is to approximate surface scattering, it may also be inappropriate to apply them where the current is not flowing along a surface.
C Velocity Saturation

In high electric fields in the direction of the current flow, the velocity saturates. This is easily taken into account by modifying the mobility. There are several different forms for including these effects. The most common is the Canali model [Ca75]:
$$\mu = \frac{\mu_{low}}{\left[1 + \left(\frac{\mu_{low}\,E}{v_{sat}}\right)^{\beta}\right]^{1/\beta}}$$
where vsat is the saturation velocity, E is the electric field parallel to the current flow, β is a fitting parameter, and µlow is the low-field mobility computed from all sources – doping, surface scattering, carrier-carrier scattering, etc. This limits the effective velocity to a maximum given by the saturation velocity. If energy balance is being used, then it is necessary to use a different formulation: the form is the same, but instead of being related to the field, the velocity saturation is related to the local carrier energy. These effects are important in modeling submicron CMOS, but probably do not influence the evolution of the charge in a single-event upset case. Most of the charge will be deposited in regions of relatively low field.
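A short sketch of the Canali expression shows the mobility collapsing at high field so that the carrier velocity approaches vsat. The β value used is a typical room-temperature electron value and is illustrative.

```python
# Canali high-field mobility reduction:
#   mu = mu_low / (1 + (mu_low*E/vsat)^beta)^(1/beta)
def canali(mu_low, E, vsat=1.07e7, beta=1.1):   # E in V/cm, vsat in cm/s
    return mu_low / (1.0 + (mu_low * E / vsat) ** beta) ** (1.0 / beta)

mu_low = 1400.0   # cm^2/Vs, illustrative low-field value
for E in [1e2, 1e3, 1e4, 1e5]:
    mu = canali(mu_low, E)
    print(f"E = {E:.0e} V/cm : mu = {mu:6.1f} cm^2/Vs, v = {mu*E:.2e} cm/s")
# -> at 1e5 V/cm the velocity has saturated near 1e7 cm/s
```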
D Quantum Corrections to Inversion Layers

In modern CMOS devices, the inversion layer is frequently quantized. This has the effect of shifting the charge distribution away from the interface. There is a reduction in the gate capacitance and a shift in the threshold voltage due to quantization. Most of these techniques add an extra potential term in the classical formula for the density:

$$n = N_C \exp\left(\frac{E_{Fn} - E_C - \Lambda}{k_B T}\right)$$
where Λ is the corrective potential, n is the electron density, T is the carrier temperature, and EFn is the electron Fermi level energy. This has to be propagated into the current expressions appropriately – usually by using the quasi-Fermi level description of the current flow. If Fermi statistics are used, the above expression would need to be replaced with the appropriate Fermi integral. The trick, of course, is to compute the correction potential Λ. One technique is to compute the 1-D Schrödinger equation solution. This is computationally challenging: it can significantly increase the CPU time and frequently has convergence difficulties. It will almost certainly be impractical for the 3-D simulations required for single-event upset cases. The density gradient model [An87, An89] is numerically robust but adds an additional differential equation to be solved. This increases the CPU time significantly. It works
well in multiple dimensions and can be applied in the charge density cloud of a radiation strike without significant difficulty. If the charge density is tightly confined, however, this model could produce quantized states in the charge cloud that are inappropriate. The most appropriate choice is the modified local-density approximation [Pa82]. This model is based on a triangular potential well near the inversion layer surface. It naturally rolls off away from the interface, since the adjusted charge distribution depends on the distance to the interface. In this way, the model is also consistent with the surface scattering techniques described earlier.
V Simulation Studies

Both ion and proton strikes can cause single-event effects. Both can create electron-hole plasmas along the strike path, and both can create secondary events – dislodging atoms already in the structure to create charge somewhere else. In general, ions produce higher charge densities than protons. Linear energy transfer (LET) is frequently used to characterize these events. LET is defined as the energy deposited per unit length along the strike path. It can be computed with codes like TRIM [Zi85], and simple metrics can be used to convert it to carrier densities. Once the charge is generated, it can be collected in several ways and show up as terminal current. First, any charge generated in a high-field region (like a p-n junction depletion region) will be swept out by the field and give a current pulse. The charge track itself will perturb the field and lead to field-assisted funneling. In this case, the electric field is extended into quasi-neutral regions, and the charge present is collected and swept through the device [Hs81]. Finally, diffusion will lead some carriers to diffuse into depletion layers. These carriers will then respond to the field and be swept to the terminals [Ki79]. These mechanisms are relatively easy to understand. More complicated mechanisms also exist that can lead to significant enhancement of the collected charge. An ion shunt can be created by a plasma track that connects two isolated regions. For example, in a bipolar device it can short the emitter to the collector through the base region [Ha85]. This can also happen to one of the many parasitic bipolar devices found in CMOS technologies. In some cases, this may lead to latchup. During this time, the transiently-on bipolar device can experience current gain. An ion shunt can also be created from source to drain across the channel (ALPEN) [Do99, Ta88]. There are two broad categories of events that can occur due to a single-particle radiation strike. (We are neglecting total dose effects in this discussion to focus on one-time events.) The first is permanent damage that requires the part to be replaced. The second is transient events that go away with time or after a reset. The bulk of this discussion will focus on the second type – single-event transients. Single-event transient (SET) behaviors are also classified into several categories. Analog (ASET) and digital (DSET) transients are usually separated. Analog glitches are harder to characterize and tend to be more transient. In memory circuits, single event upset (SEU)
is the phrase normally used and refers to a bit-flip event. Multiple-bit upsets (MBUs) are multiple bit failures caused by a single ion strike. Although rare, they are becoming more common due to the small size of circuit elements: collection from the charge cloud can occur across multiple devices. Very fast circuit failures can also be classified this way when the charge collection spans several clock cycles. All of the above can result in a single event error, or soft error. These are errors that become observable at the system level. Long-lasting problems (like an error in a state machine) are referred to as single event functional interrupts (SEFIs). These errors can have long persistence before recovery is complete because the error propagates in the logic operation. Simulating a radiation strike in advanced tools is not easy. The problem is inherently 3-D and therefore difficult to set up. Dodd [Do96] provides an excellent review of the issues. The example here is built around an NMOS device that is 0.13 µm long and 1 µm wide. The device design used the International Technology Roadmap for Semiconductors to estimate appropriate material parameters. Doping was further tuned to get appropriate I-V curves on both linear and log scales. This helped ensure the threshold voltage, subthreshold slope, and drive currents were appropriate to this technology node. Figure 10 shows the device.
Figure 10 – The device structure in three dimensions. The red bar is the gate area.

The device must be embedded in a large block of silicon. The radiation charge cloud is generated throughout the bulk, and this region needs to be in the simulation structure to get the charge collection correct. The structure must also be wide enough that the reflecting side boundaries allow the charge to dissipate laterally. Normal device side
boundaries are reflecting and will bounce the charge back to the center of the structure, resulting in an overestimation of the charge collection. Figure 11 shows the same device in cross-section, blown up to emphasize the active area. The deep source and drain regions can clearly be seen. You can also see the buried p-well doping and the extensions. The well doping is about 1 µm thick, extending beyond the bottom of the picture. It is also easy to see the increased channel doping present to control the threshold voltage.
Figure 11 – Test device in cross-section.

It is critical to make sure the device is appropriate. For this device, curve matching to DC results for IDS vs. VDS and VGS was performed on both linear and logarithmic scales. At this point, the radiation strike simulation can proceed. Figure 12 shows the result of these simulations at several gate voltages.
Figure 12 – Sample I-V curves for the tuned device.

Sentaurus-Device normalizes the charge distribution from the analytic distribution to make sure a poor grid does not result in a mismatch in the initial charge dose. However, a poor grid can lead to faster charge removal in the sample, and it does tend to increase the current flow in the device. Timestep changes can also influence the final results. To demonstrate some of these concepts, we'll simulate with the two different meshes illustrated in Figure 13. We'll also simulate with the good mesh and bad timestep control.
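The analytic track distribution being normalized here is commonly written as a separable product of a line-density term, a radial profile, and a temporal turn-on. The sketch below uses Gaussian radial and temporal shapes with illustrative track-radius and pulse-width parameters; the functional form follows common TCAD practice rather than any one tool's exact syntax.

```python
import numpy as np

# Analytic heavy-ion generation term: G(r, t) = n_line * R(r) * T(t),
# with R and T normalized Gaussians. w_t, t0, and s_hi are illustrative.
def track_generation(r, t, n_line, w_t=50e-7, t0=5e-12, s_hi=2e-12):
    """Pair-generation density (pairs cm^-3 s^-1) at radius r (cm), time t (s).

    n_line: deposited line density of pairs along the track (pairs/cm).
    """
    radial = np.exp(-(r / w_t) ** 2) / (np.pi * w_t ** 2)        # area-normalized
    temporal = np.exp(-((t - t0) / s_hi) ** 2) / (np.sqrt(np.pi) * s_hi)
    return n_line * radial * temporal

# Line density for LET = 10 MeV*cm2/mg in Si (~6.5e5 pairs/um = 6.5e9 pairs/cm):
print(f"{track_generation(0.0, 5e-12, 6.5e9):.2e} pairs/(cm^3 s) on axis at peak")
```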
Figure 13 – Good mesh on the left and a bad mesh on the right.
Figure 14 – Initial electron density from the radiation strike.

Figure 14 shows the initial electron density in the structure after the strike. You can see the large charge present across the channel region that will shortly be collected by the channel field. There is a large generation of charge throughout the bulk. This charge will slowly diffuse and be collected. Figure 15 shows the electron density 50 ps after the strike. The charge has spread. In the well regions, it has not spread as far, due in part to the decreased mobility in the more heavily doped region. This is close to the peak in the delivered current.
Figure 15 – Electron density at 50 ps after the strike.

Finally, the entire current transient is shown in Figure 16, which plots the current vs. time. There is a fast spike in the current from the drift regions, followed by a longer tail driven by diffusion-based collection. The comparison for three cases is also presented: a good mesh and good timestep, a bad mesh and good timestep, and a good mesh and poor timestep. You can see differences in the current peak. The maximum deviation from the good result is about 30% in the peak current and about the same in the total charge collected. This can obviously result in errors in estimating the effect on surrounding circuitry and demonstrates the need for good mesh and timestep control. In this case, the bad mesh results in worse answers than the bad timesteps. This will not always be true and will depend on how bad the timestep and mesh are.
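The total collected charge quoted above is just the time integral of the terminal current. As a sketch, the snippet below integrates a surrogate double-exponential transient (its time constants are illustrative, not fitted to Figure 16):

```python
import numpy as np

# Integrating a simulated drain-current transient to obtain collected charge.
t = np.linspace(0.0, 2e-9, 2001)                          # s
i = 1e-3 * (np.exp(-t / 200e-12) - np.exp(-t / 20e-12))   # A, surrogate transient

q_collected = np.trapz(i, t)                              # C
print(f"collected charge = {q_collected*1e15:.0f} fC")    # -> ~180 fC
```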
Figure 16 – Current transients and charge collection for the three cases.
VI Conclusions

Accurate simulation of single-event effects is quite possible with the tools available today, but care must be taken in configuring the simulation. First, single-event simulations present some interesting numerical challenges, particularly in grid generation. The charge cloud needs a fine grid to achieve the appropriate numerical accuracy, and this will not be the default grid for most CMOS simulations. Second, appropriate physical model choices need to be made, as the defaults may not include some things that are critical for single-event conditions. Of primary importance is carrier-carrier scattering. To get the CMOS characteristics correct, quantum inversion corrections and surface scattering must also be included.
VII Acknowledgements

I would like to acknowledge the expert advice and work of Oluwole Ayodele Amusan and Art Witulski on Section V. They ran the simulations at my request and helped extensively with the interpretation of the results.
VIII References

Adalsteinsson, D. and J. A. Sethian (1995). "A Level Set Approach to a Unified Model for Etching, Deposition, and Lithography. 1. Algorithms and Two-Dimensional Simulations." Journal of Computational Physics 120(1): 128-144.
Adalsteinsson, D. and J. A. Sethian (1995). "A Level Set Approach to a Unified Model for Etching, Deposition, and Lithography. 2. Three-Dimensional Simulations." Journal of Computational Physics 122(2): 348-366.

M. G. Ancona and H. F. Tiersten (1987). "Macroscopic physics of the silicon inversion layer," Physical Review B, vol. 35, no. 15, pp. 7959–7965.

M. G. Ancona and G. J. Iafrate (1989). "Quantum correction to the equation of state of an electron gas in a semiconductor," Physical Review B, vol. 39, no. 13, pp. 9536–9540.

Avci, I., M. E. Law, E. Kuryliw, A. F. Saavedra, and K. S. Jones (2004). "Modeling Extended Defect ({311} and Dislocation) Nucleation and Evolution in Silicon." Journal of Applied Physics 95(5): 2452-2460.

Bank, R. E. and A. Weiser (1985). "Some A Posteriori Error Estimators for Elliptic Partial Differential Equations." Mathematics of Computation 44(170): 283-301.

K. Bløtekjær (1970). "Transport Equations for Electrons in Two-Valley Semiconductors," IEEE Transactions on Electron Devices, vol. ED-17, no. 1, pp. 38–47.

C. Canali et al. (1975). "Electron and Hole Drift Velocity Measurements in Silicon and Their Empirical Relation to Electric Field and Temperature," IEEE Transactions on Electron Devices, vol. ED-22, no. 11, pp. 1045–1047.

Dodd, P. E. and F. W. Sexton (1995). "Critical charge concepts for CMOS SRAMs." IEEE Transactions on Nuclear Science 42(6): 1764-1771.

Dodd, P. E. (1996). "Device Simulation of Charge Collection and Single Event Upset." IEEE Transactions on Nuclear Science 43(2): 561-575.

Dodd, P. E. (1999). "Basic Mechanisms in Single-Event Effects," IEEE NSREC Short Course Notes, Norfolk, VA.

Eaglesham, D. J., P. A. Stolk, et al. (1994). "Implantation and transient B diffusion in Si: The source of the interstitials." Appl. Phys. Lett. 65(18): 2305-2307.

Giles, M. D. (1991). "Transient Phosphorus Diffusion Below the Amorphization Threshold." J. Electrochem. Soc. 138(4): 1160-1165.

International Technology Roadmap for Semiconductors, http://public.itrs.net/.

J. R. Hauser, S. E. Diehl-Nagle, A. R. Knudson, A. B. Campbell, W. J. Stapor, and P. Shapiro (1985). "Ion Track Shunt Effects in Multi-Junction Structures," IEEE Trans. on Nuclear Science, vol. 32, pp. 4115-4121.

C. M. Hsieh, P. C. Murley, and R. R. O'Brien (1981). "A Field-Funneling Effect on the Collection of Alpha-Particle-Generated Carriers in Silicon Devices," IEEE Electron Device Letters, vol. 2, pp. 103-105.
Jones, K. S., S. Prussin, et al. (1988). "A Systematic Analysis of Defects in Ion-Implanted Silicon." Appl. Phys. A 45: 1-34.

S. Kirkpatrick (1979). "Modeling Diffusion and Collection of Charge from Ionizing Radiation in Silicon Devices," IEEE Trans. on Electron Devices, vol. 26, pp. 1742-1753.

D. B. M. Klaassen (1992). "A Unified Mobility Model for Device Simulation—I. Model Equations and Concentration Dependence," Solid-State Electronics, vol. 35, no. 7, pp. 953–959.

Klein, K. M., C. Park, et al. (1992). "Monte-Carlo Simulation of Boron Implantation into Single-Crystal Silicon." IEEE Transactions on Electron Devices 39(7): 1614-1621.

Law, M. E. and R. W. Dutton (1988). "Verification of Analytic Point Defect Models Using SUPREM-IV." IEEE Transactions on Computer-Aided Design 7(2): 181-190.

Law, M. E. and S. M. Cea (1998). "Continuum based modeling of silicon integrated circuit processing: An object oriented approach." Computational Materials Science 12: 289-308.

Lilak, A. D., M. E. Law, et al. (2002). "Kinetics of boron reactivation in doped silicon from Hall effect and spreading resistance techniques." Applied Physics Letters 81(12): 2244-2246.

C. Lombardi et al. (1988). "A Physically Based Mobility Model for Numerical Simulation of Nonplanar Devices," IEEE Transactions on Computer-Aided Design, vol. 7, no. 11, pp. 1164–1171.

McCalla, W. J. and D. O. Pederson (1971). "Elements of Computer-Aided Circuit Analysis." IEEE Transactions on Circuit Theory CT-18(1): 14-26.

G. Paasch and H. Übensee (1982). "A Modified Local Density Approximation: Electron Density in Inversion Layers," Physica Status Solidi (b), vol. 113, no. 1, pp. 165–178.

Park, C., K. M. Klein, et al. (1990). "Efficient Modeling Parameter Extraction for Dual Pearson Approach to Simulation of Implanted Impurity Profiles in Silicon." Solid State Electronics 33(6): 645-650.

Pelaz, L., M. Jaraiz, et al. (1997). "B diffusion and clustering in ion implanted Si: The role of B cluster precursors." Appl. Phys. Lett. 70(17): 2285-2287.

Pinto, M. R. and R. W. Dutton (1983). "An Efficient Numerical-Model of CMOS Latch-Up." IEEE Electron Device Letters 4(11): 414-417.

Plummer, J. D., Deal, M. D., and Griffin, P. B. (2000). Silicon VLSI Technology: Fundamentals, Practice, and Modeling, Prentice Hall, New Jersey.

Rollins, J. G. and J. Choma (1988). "Mixed-Mode Pisces-Spice Coupled-Circuit and Device Solver." IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems 7(8): 862-867.

Selberherr, S., A. Schutz, et al. (1980). "MINIMOS – a Two-Dimensional MOS-Transistor Analyzer." IEEE Transactions on Electron Devices 27(8): 1540-1550.
Sethian, J. A. and D. Adalsteinsson (1997). "An Overview of Level Set Methods for Etching, Deposition, and Lithography Development." IEEE Transactions on Semiconductor Manufacturing 10(1): 167-184.

Stiebel, D., P. Pichler, et al. (2001). "A reduced approach for modeling the influence of nanoclusters and {113} defects on transient enhanced diffusion." Applied Physics Letters 79(16): 2654-2656.

Stolk, P. A., F. P. Widdershoven, et al. (1998). "Modeling statistical dopant fluctuations in MOS transistors." IEEE Transactions on Electron Devices 45(9): 1960-1971.

R. Stratton (1962). "Diffusion of Hot and Cold Electrons in Semiconductor Barriers," Physical Review, vol. 126, no. 6, pp. 2002–2014.

E. Takeda, D. Hisamoto, and T. Toyabe (1988). "A new soft-error phenomenon in VLSIs – the alpha-particle induced source/drain penetration (ALPEN) effect," Proc. IEEE Int. Reliability Phys. Symp., pp. 109-112.

Ulacia, J. I., C. J. Petti, et al. (1988). "Crystal-Orientation Dependent Etch Rates and a Trench Model for Dry Etching." Journal of the Electrochemical Society 135(6): 1521-1525.

Venugopal, R., Z. B. Ren, et al. (2003). "Simulating quantum transport in nanoscale MOSFETs: Ballistic hole transport, subband engineering and boundary conditions." IEEE Transactions on Nanotechnology 2(3): 135-143.

Wang, J., P. M. Solomon, et al. (2004). "A general approach for the performance assessment of nanoscale silicon FETs." IEEE Transactions on Electron Devices 51(9): 1366-1370.

G. Wachutka (1989). "An Extended Thermodynamic Model for the Simultaneous Simulation of the Thermal and Electrical Behaviour of Semiconductor Devices," in Proceedings of the Sixth International Conference on the Numerical Analysis of Semiconductor Devices and Integrated Circuits (NASECODE VI), vol. 5, Dublin, Ireland, pp. 409–414.

Ziegler, J. F., J. P. Biersack, and U. Littmark (1985). The Stopping and Range of Ions in Solids, New York: Pergamon Press.
2006 IEEE NSREC Short Course
Section V: Circuit Modeling of Single Event Effects
Jeff Black Dr. Tim Holman Vanderbilt University Institute for Space and Defense Electronics
Approved for public release; distribution is unlimited
TABLE OF CONTENTS
1.0 Introduction ................................................................................................... 2
2.0 Fundamental Single Event Effects Overview ................................................ 4
 2.1 Single Event Effects Mechanisms ................................................................ 5
  2.1.1 Basic Charge Deposition/Generation ...................................................... 5
  2.1.2 Charge Collection Enhancements ........................................................... 8
  2.1.3 Multiple Node Charge Generation ........................................................ 10
 2.2 Single Event Effects Structures ................................................................. 14
  2.2.1 Bulk MOSFETs ..................................................................................... 14
  2.2.2 BJTs ...................................................................................................... 22
 2.3 Circuit Simulation Responses .................................................................... 23
  2.3.1 Permanent Circuit Responses ............................................................... 24
  2.3.2 Transient Circuit Responses ................................................................. 24
 2.4 Payoffs/Pitfalls of Circuit Modeling and Simulation ................................. 25
  2.4.1 Payoffs .................................................................................................. 25
  2.4.2 Pitfalls ................................................................................................... 26
 2.5 Past Challenges with Respect to Circuit Simulation of SEEs .................... 27
3.0 Transistor-Level SEE Modeling and Simulation ......................................... 28
 3.1 Available Tools and Capabilities ............................................................... 28
 3.2 Circuit Model Requirements ...................................................................... 29
  3.2.1 P-N Junction Charge Collection ........................................................... 29
  3.2.2 Ion Shunt Charge Collection ................................................................ 29
  3.2.3 Parasitic Bipolar Enhancement Charge Collection ............................... 30
 3.3 Defining the Inputs .................................................................................... 31
 3.4 Simulation Approaches and Results .......................................................... 35
  3.4.1 Static SE Simulation ............................................................................. 36
  3.4.2 Dynamic SE Simulation ....................................................................... 42
4.0 Mixed-Level SEE Modeling and Simulation ............................................... 48
 4.1 Available Tools and Capabilities ............................................................... 48
 4.2 Breaking Up the Problem .......................................................................... 49
 4.3 Simulation Approaches and Results .......................................................... 51
5.0 Circuit-Level SEE Modeling and Simulation .............................................. 60
 5.1 Transistor-Level SEE Modeling ................................................................ 60
 5.2 Behavioral Modeling ................................................................................. 61
  5.2.1 Macromodels ........................................................................................ 61
  5.2.2 Macromodels for SEE Simulations ...................................................... 63
 5.3 Components and Languages for Behavioral Modeling ............................. 64
  5.3.1 Compact Models Using Behavioral Elements ...................................... 66
  5.3.2 Behavioral Modeling of Mixed-Signal Systems ................................... 66
6.0 System-Level SEE Modeling and Simulation ............................................. 68
 6.1 Available Tools and Capabilities ............................................................... 69
 6.2 Simulation Approaches and Results .......................................................... 73
  6.2.1 Transient Fault Assessment .................................................................. 73
  6.2.2 Fault Injection ....................................................................................... 77
7.0 Summary ...................................................................................................... 84
8.0 References ................................................................................................... 86
1.0 Introduction

This is the final part of the 2006 Nuclear and Space Radiation Effects Conference (NSREC) Short Course titled "Modeling the Space Radiation Environment and Effects on Microelectronic Devices and Circuits." The entire Short Course covers most of the process shown in Figure 1. Part 1, "Modeling the Space Radiation Environment;" Part 2, "Space Radiation Transport Models;" and Part 3, "Device Modeling of Single Event Effects (SEE)" cover the top blocks of this process. This part covers the circuit modeling blocks in the lower right part of the figure. The space error rate models have been the subject of previous short courses and are not covered in this one. The 1997 Short Course part titled "Single Event Analysis and Prediction," by Ed Peterson, is a strong reference for that process block [Pe97].
[Figure 1 flow-diagram blocks: Orbit; Space Radiation Environment Models; Space Radiation Transport Models; Process/Materials; Device Design; Device Models; Layout; Circuit Models (Transistor-Level); Space Error Rates Models; Subcircuit Design; Error Threshold; Circuit Models (Behavior-Level); System Design; Errors/Day; Circuit Response.]
Figure 1. Space Radiation Single Event Effects Modeling Flow Diagram
This discussion of the circuit modeling of SEE is split into five main sections. The first section discusses the relationship of circuit models to the preceding blocks, especially device models. Since the translation from device to circuit modeling usually loses layout (or geometrical) design information, the important details of this section are the understanding of the fundamental SEE mechanisms as they relate to circuit modeling, the application of the device model outputs, and the pitfalls of circuit modeling. The next four sections address specific classes of circuit modeling. Section 3 covers circuit modeling at the transistor level. Section 4 covers the hybrid class of circuit modeling known as mixed-mode, which combines device modeling with transistor-level circuit modeling. Section 5 covers behavioral-level circuit modeling, and Section 6 covers system-level circuit modeling, where the goal of the modeling is to ascertain the impact of the SEE upon the circuit function or output performance.
There have been Short Courses at the NSREC since 1980, and many of them have covered some form of SEE. Figure 2 shows a list of preceding Short Course parts by year and author, cross-referenced to the sections of each Short Course that are applied in this paper. There are many other potential Short Course references, but they are not included in this list. A couple of items of note come from this figure. First, the fundamental mechanisms of SEE generation and transistor-level circuit modeling have been covered by a number of short courses, while mixed-mode and behavioral-level circuit modeling have received less attention. These two areas will receive more attention in this paper. Second, two short courses have been specifically identified for their strong relation to this paper. Lloyd Massengill wrote and presented a Short Course in 1993 titled "SEU Modeling and Prediction Techniques," which is the last course dedicated to the details of SEE modeling. Steve Buchner and Mark Baze wrote and presented a Short Course in 2001 titled "Single-Event Transients in Fast Electronic Circuits," which provided significant coverage of SEE modeling at the circuit level.
[Figure 2 table: preceding Short Course parts by year, author, and reference – Baumann 2005 [Ba05]; Buchner/McMorrow 2005 [Bu05]; Cressler 2003 [Cr03]; Weatherford 2002 [We02]; Buchner/Baze 2001 [Bu01]; Hoffman/Dibari 2000 [Ho00]; Dodd 1999 [Do99]; Dressendorfer 1998 [Dr98]; Peterson 1997 [Pe97]; Alexander 1996 [Ae96]; Normand 1994 [No94]; Massengill 1993 [Ma93]; Peterson 1983 [Pe83]; Pickel 1983 [Pi83] – each cross-referenced to the sections covering fundamental mechanisms, transistor-level, mixed-level, and behavioral-/system-level modeling. The 1983 Peterson course did not include numbered sections.]
Figure 2. Preceding Short Course Cross Reference
2.0 Fundamental Single Event Effects Overview

Before tackling the details of single event modeling, we will introduce and discuss some of the terminology used throughout this text. When an energetic nuclear particle penetrates any semiconducting material, it loses energy through Rutherford scattering (Coulombic interactions) with the semiconductor lattice structure. Through these predominantly Coulombic interactions with the crystalline structure, the slowing particle transfers energy to the lattice and leaves an ionization trail of free electron-hole pairs – mobile charge carriers which were electrically nonexistent before the radiation event. Within an integrated circuit structure, these excess carriers can deposit charge in unexpected and unwanted places, often leading to voltage transients on the nodes of the circuit and current transients across device junctions. Unlike total dose radiation, which causes gradual global degradation of device parameters, and dose-rate radiation, which causes photocurrents in every junction of a circuit, a single event interaction is a very localized effect and can lead to a seemingly spontaneous transient within a region of the circuit. If this transient influences a node which is storing information, it may lead to an upset; that is, the corruption of the information to an unrecognizable, unreadable, or unstable state. This upset can, in turn, lead to a circuit error if the corrupted state alters legitimate information stored in or propagating through the circuit. That is, an upset becomes an error when it is either latched or is misinterpreted as valid data by other circuitry. The working definition of upset in this work is a corrupted electrical state; an error is the finalized effect of that state. Localized information errors due to single event upsets (SEUs) can be (1) transient, (2) permanent, or (3) static. Transient errors are spurious signals which can propagate through the circuit paths during one or more clock cycles. These asynchronous signals can either propagate to a latch and become static, or be overwhelmed by the legitimate synchronous signals of the circuit. Timing of the radiation-induced signals relative to the synchronous signals plays a key role in the possibility of errors. These types of errors are most important in combinational (non-sequential) circuitry and analog subsystems. Permanent errors are often called hard errors because of their destructive, non-correctable origins. In this case, the single event causes physical damage to the circuit, leading to a non-correctable fault. Single-event (SE) induced burnout (SEB) and gate rupture (SEGR) in power transistors are examples of hard errors. These errors are most often analyzed and modeled at the individual device level. Single-event soft errors (due to single-event upsets, SEUs) and multiple-bit soft errors (due to multiple-bit upsets, MBUs) belong to a class of errors which are static (latched by the circuitry) but can be corrected by outside control. These soft errors overwrite information stored by the circuit, but a rewrite or power cycle corrects or resets the part to proper operation with no permanent damage.
A special class of single-particle effects can lead to either permanent or soft errors, depending on the severity of the circuit response. SE-induced snapback (SES) in n-channel MOS output devices and SE-induced latchup (SEL) in CMOS structures are regenerative current conditions which, if the current levels are benign, can be reset. However, if the regenerative current energy exceeds the thermal dissipation capability of the affected region, these effects can cause melting and permanent physical damage to the circuit [Ma93].
2.1 Single Event Effects Mechanisms
2.1.1 Basic Charge Deposition/Generation

2.1.1.1 Single Event Charge Deposition

As a heavy, charged particle (ion) passes through a semiconductor material, it loses energy by Rutherford (Coulombic) scattering with the lattice. Energy is transferred from the particle to bound electrons, which are ionized into the conduction band, leaving a dense plasma track of electron-hole pairs (EHPs). The rate of this energy loss to EHP creation, often expressed as stopping power or linear energy transfer (LET), has the dimensions of energy per unit length along the path of the particle. The LET of any particular particle depends on the mass and energy of the particle, as well as the density of the target material. Thus, units of LET are usually MeV·cm²/mg (energy loss normalized by material density), often converted to MeV/µm in a specific target material. Stopping powers and ranges for various ion species and energies have been tabulated and are available in [Zi85], or can be calculated with Ziegler's TRIM computer code.

2.1.1.2 Depletion Region Drift Collection

Charge deposition in a bulk semiconductor region is typically of no consequence; it will eventually recombine. If, however, that charge is deposited in or near a p-n junction, the electrons and holes will be separated, leading to charge flow and photocurrent generation. This charge collection by the p-n junctions leads to a circuit response to the single event penetration. There are several mechanisms for this collection and conduction process, including depletion-region drift collection, field-assisted funneling collection, diffusion collection, and ion-shunt collection. It should be noted that charge collection and charge generation are distinct processes, and they must be modeled as such. Figure 3 shows an ion penetrating the depletion region of a p-n junction. The built-in electric field causes electrons to be swept to the n-region and holes to the p-region. This drift motion is limited by the saturation velocity of the carriers, which for electrons in silicon is approximately 1x10⁷ cm/sec, so the time period of this transient is very short. For example, the transit time of electrons drifting across a 0.5 µm depletion region which is heavily reverse biased can be approximated by 0.5x10⁻⁴ cm / 1x10⁷ cm/sec = 5 ps. The simplest model for this extremely fast transient is a current impulse across the junction with area equal to the total amount of charge deposited in the depletion region, i.e., ID = QDδ(t). Figure 4 shows the equivalent circuit for this model.
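A back-of-the-envelope sketch ties these numbers together: converting an LET to deposited charge per micron (using the standard 3.6 eV per electron-hole pair and 2.33 g/cm³ density for silicon) and then approximating the depletion-region charge as a short current pulse. The LET value and pulse shape are illustrative.

```python
# LET -> deposited charge -> drift-collection pulse (silicon).
Q_E = 1.602e-19       # C per pair
E_PAIR = 3.6          # eV per electron-hole pair in Si
RHO_SI_MG = 2.33e3    # mg/cm^3

def let_to_pC_per_um(let_mev_cm2_mg):
    ev_per_um = let_mev_cm2_mg * RHO_SI_MG * 1e6 / 1e4   # eV deposited per um
    return ev_per_um / E_PAIR * Q_E * 1e12               # pC/um

print(let_to_pC_per_um(1.0))   # ~0.0104 pC/um; LET ~ 97 gives the familiar ~1 pC/um

# Impulse approximation for the 0.5 um depletion region in the text:
v_sat, W_dep = 1e7, 0.5e-4                     # cm/s, cm
t_transit = W_dep / v_sat                      # ~5 ps
Q_dep = let_to_pC_per_um(1.0) * 0.5 * 1e-12    # C deposited over 0.5 um at LET = 1
print(f"transit ~ {t_transit*1e12:.0f} ps, "
      f"equivalent pulse ~ {Q_dep/t_transit*1e3:.1f} mA")   # ~1 mA over ~5 ps
```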
V-5
Figure 3. Depletion Region Charge Collection in a P-N Junction (after [Ma93])
Figure 4. Simple Circuit Model for the Drift Region Collection (after [Ma93])
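As a quick numerical check on the quantities above, the sketch below converts an ion's LET into deposited charge per micron of track in silicon and reproduces the 5 ps drift transit-time estimate. The 3.6 eV ionization energy per pair and the silicon density are standard values; the specific LET used is illustrative only.

```python
# Minimal sketch: LET-to-charge conversion and drift transit time in silicon.
# Standard material constants; the example LET value is illustrative.
Q_E = 1.602e-19        # electron charge [C]
E_PAIR = 3.6           # energy per electron-hole pair in Si [eV]
RHO_SI = 2.32e3        # silicon density [mg/cm^3]
V_SAT = 1.0e7          # electron saturation velocity in Si [cm/s]

def charge_per_micron(let_mev_cm2_mg):
    """Deposited charge [pC/um] for a given LET [MeV*cm^2/mg]."""
    mev_per_cm = let_mev_cm2_mg * RHO_SI          # energy loss per cm [MeV/cm]
    pairs_per_cm = mev_per_cm * 1e6 / E_PAIR      # EHPs per cm of track
    return pairs_per_cm * Q_E * 1e12 / 1e4        # convert to pC per um

def drift_transit_time_ps(depletion_um):
    """Transit time [ps] across a depleted region at saturation velocity."""
    return depletion_um * 1e-4 / V_SAT * 1e12

print(charge_per_micron(97.0))      # ~1.0 pC/um, the familiar rule of thumb
print(drift_transit_time_ps(0.5))   # ~5 ps, matching the estimate in the text
```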
2.1.1.3 Field-Assisted Funneling Collection

Early in the investigations of ionizing particle effects on junctions, it was recognized by Hsieh et al. [Hs81] that the creation of a highly concentrated free carrier track (a plasma track) within a junction depletion region perturbs the region itself, so that the simplified depletion current calculation presented above does not adequately describe the actual charge collection. Hsieh showed that the generated carrier track from an alpha-particle penetration of a junction severely distorts the potential gradients along the track length, creating a field funnel. Figure 5 shows a qualitative schematic of the mechanism leading to the creation of the field funnel. Upon the creation of the plasma track, a path of free carriers appears between the n and p regions (Figure 5a). The depletion region is effectively hidden from the carriers, which are free to move toward (electrons) or away from (holes) the positively-biased n region (Figure 5b). The spreading resistance along this plasma 'wire' leads to a voltage drop along the length of the track. Thus, the potential which initially appeared across the depletion region is distributed down the track (Figure 5c). Carriers outside the original depletion region will be accelerated by this field and be collected promptly at the junction by a drift mechanism, the funneling effect.
Figure 5. Qualitative View of the Funnel Effect: a) Creation of the Ion-Induced Plasma Track, b) movement of electrons toward the positive bias through the conduit, and c) potential drop along the track and redistribution of equipotential lines down the track (after [Ma93])
2.1.1.4 Diffusion Collection

Charge generated outside the funnel region, but within a diffusion length of a junction, diffuses to the junction and can be swept across the depletion region, leading to another current mechanism [Ki79]. This collecting junction may be the hit junction or an innocent neighboring region, as shown in Figure 6. Diffusion is a much slower process, so this current component is delayed with respect to the field-assisted collection current. Typical time domains are nanoseconds for diffusion collection.
Figure 6. Diffusion of Charge to Neighboring Circuit Nodes (after [Mc82])
Because of the complex, three-dimensional nature of the diffusion charge transport, charge collection by this method is very dependent on the geometry of the circuit layout and the distances from the hit location to nodes in the vicinity. Most modeling of this effect is performed using 2-D or 3-D finite element charge transport codes, as described in the previous section of this short course.
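For intuition about the nanosecond timescale quoted above, the sketch below evaluates the characteristic one-dimensional diffusion time t ≈ L²/D for carriers traveling a few microns to a junction; the collection distance is an illustrative value.

```python
# Minimal sketch: characteristic diffusion time t ~ L^2 / D.
# D for electrons in lightly doped Si is ~36 cm^2/s (Einstein relation);
# the 2 um collection distance below is an illustrative value.
D_N = 36.0                      # electron diffusivity in Si [cm^2/s]

def diffusion_time_ns(distance_um, diffusivity=D_N):
    """Characteristic time [ns] to diffuse a given distance [um]."""
    l_cm = distance_um * 1e-4
    return (l_cm ** 2) / diffusivity * 1e9

print(diffusion_time_ns(2.0))   # ~1.1 ns: much slower than the ps-scale drift
```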
2.1.2 Charge Collection Enhancements

2.1.2.1 Ion Shunt Effect

Figure 7 shows a unique and interesting phenomenon which has become increasingly important in modern, dense integrated circuits where geometries are scaled to small dimensions. In this figure, an ion track of free carriers has penetrated two proximal junctions. Since the plasma track, while it exists with a high carrier concentration, acts as a conductive path, the path between the two n regions in the figure can act as a current conduit, or resistive connection, between the regions. Charge which was not even generated by the ion hit can move through this conduit just as current through a wire.
Figure 7. Simplified View of the Ion Shunt Effect (after [Ha85])
Knudson et al. experimentally observed more current at collecting nodes in multilayer structures than could be accounted for by the total ionized charge [Kn84]. The reason for this extra current is conduction through the conduit driven by the different biases applied to the top and bottom n regions. This connection of two regions by the ion's plasma track is called an ion shunt. Of course, in modeling dense integrated circuits for SEE, the inclusion of any possible ion shunts is essential. This is especially important in dense CMOS on thin epitaxial layers and in bipolar circuit technologies. Bipolar transistors are fabricated with very thin base regions, making them highly susceptible to the shunt effects.

2.1.2.2 Parasitic Bipolar Enhancement Effect

An effect first observed in Silicon-on-Insulator (SOI) Metal-Oxide-Semiconductor (MOS) devices that now could be important in bulk MOS devices is parasitic bipolar enhancement. The most critical aspect of the proper modeling of an ion strike to an SOI MOS device (either n or p type) is the isolated body region. Whether this region is electrically floating or connected to the source region by a low-resistance strap (body tie), any charge deposited in this region by an SE must either recombine or exit the region via one of three paths: (1) across the body-source junction, (2) across the body-drain junction, or (3) out the body-to-source tie. Because both junctions are reverse biased, most of the ion-generated charge exits the region via the body-to-source tie. This current, shown in Figure 8, can lead to a potential gradient from the hit location to the body tie. If the potential near the hit is large enough to cause minority carrier injection across the body-source junction, then parasitic bipolar action can be created between the source and drain. The current created by the ion strike is in effect the base current for this parasitic bipolar transistor; this base
current can be amplified to create a large collector current at the sensitive node. Thus, the SE current is amplified with the gain of the parasitic bipolar transistor [Ke89].
Figure 8. Diagram Showing the Relevant Currents Induced in an SOI Structure Following an Ion Strike (after [Ma93])
Circuit simulation results for an SOI RAM cell are shown in Figure 9 [Al90]. Since the parasitic bipolar current acts in concert with the SE current affecting the hit node voltage, it enhances the effect of the ion hit, leading to an increased susceptibility to the ion charge (i.e., a reduced Q needed for upset).
Figure 9. SOI MOS Simulation Results Showing Parasitic Bipolar Effect (after [Ma93])
2.1.2.3 Alpha Particle Source-Drain Penetration Effect

Although somewhat similar to the ion shunt effect, the alpha particle source-drain penetration effect (ALPEN) charge collection mechanism results from a disturbance in the channel potential that Takeda et al. referred to as a funneling effect. The effect is illustrated in Figure 10 and arises from an ion strike that passes through both the source and the drain. Immediately following the ion strike, there is no longer a potential barrier between the source and channel.
This can lead to significant source-drain conduction current which mimics the MOSFET being turned on. This mechanism was revealed by 3-D alpha-particle simulations and has been experimentally verified. The experiments indicated that source charge injection due to the ALPEN mechanism increases rapidly for effective gate lengths below about 0.5 µm. Later work predicted that the same direct channel conduction mechanism can occur in 0.3-µm gate length MOSFETs even for normal incidence strikes, and can lead to charge multiplication [Do99, Ta88].
Figure 10. Illustration of the ALPEN effect (after [Ta88])
2.1.3 Multiple Node Charge Generation

In highly scaled geometries (<250 nm), the effect of a single ion track can be observed on multiple circuit nodes through a variety of the effects discussed in Sections 2.1.1 and 2.1.2. The most obvious way to affect multiple nodes is through diffusion, as shown in Figure 6. Not as obvious is the relationship of the size of the ion track to the size of the microelectronic devices. Figure 11 shows a simulation of an ion strike directly on the transistor channel. The enhanced current is seen by the shunting of the source and drain of the transistor, as predicted by the ALPEN effect. Figure 12 shows the same transistor, now with the ion strike indirectly affecting the channel. In both cases, the channel was affected by the ion strike either directly or indirectly. For multiple transistors in a common well, the collapse of the well potential by a single ion strike can affect some or all of the transistor channels. This has been observed in both device simulation and heavy ion testing. Figure 13 shows the results of a device simulation of two transistors. In the top cross section, the transistors have a well contact between the drains; in the lower cross section, there is no well contact. An ion strike was simulated on the drain on the right. Without the well contact, the well collapse extended to the channel of the device on the left. As a result, this device collected charge, even though it was not directly struck. Figure 14 shows the drain current of the left device for the two cases. Note the significant increase when the well contact did not exist [Bl05].
Figure 11. Direct Channel Ion Strike Demonstrating ALPEN-Like Effect
Figure 12. Indirect Channel Effect
Figure 13. Two Transistor Simulation Showing Well Collapse (after [Bl05])
Figure 14. Simulated Current on Device in Same Well as Ion Strike (after [Bl05])
Multiple node charge generation was shown by Olson et al. in a mixed-level simulation experiment to determine the cause of SEUs at energies lower than expected. The particulars of this experiment will be described in more detail in the section on mixed-level simulation. However, with respect to multiple node charge generation, the authors looked at a few transistors in the circuit design (Figure 15). Note in particular the p-channel transistors M0, M2, M7, and M8 in the figure. With an ion strike on M10, an n-channel transistor, the potentials of the p-channel transistors are all affected, as shown in Figure 16. In fact, the charge collection at one of them, M2, was even increased due to parasitic bipolar gain. This example demonstrates the possible complexity of charge collection, especially in deep submicron microelectronic technology.
Figure 15. Simulated 3-D Structure for Multiple Node Charge Generation (after [Ol05])
Figure 16. (a) P-Channel Transistors Cross Section, (b) Electrostatic Potential, and (c) Hole Density Response Due to an Ion Strike in the Source of M10 (after [Ol05])
Multiple node charge generation is nothing new. Quantum mechanics would assert that all microelectronic circuit nodes are affected in some manner by a single ion strike. It has simply been the case that the probability of charge generation at multiple nodes being observable, or creating an observable effect, has been very small. However, at current microelectronic technology generations, the charge generation at secondary nodes has become important. This creates an additional challenge to modeling, especially circuit modeling. While device modeling may be able to generate charge at multiple nodes, typical circuit modeling techniques do not incorporate this response. This has led to the recent increase in the application of mixed-level (device-level and transistor-level) modeling to SE issues.
2.2 Single Event Effects Structures
The main challenge of circuit modeling of SEE is the loss of the spatial or layout data of the circuit. As a result, the possible p-n junctions that could collect the single event charge need to be captured from the layout. This section will show the possible sources of single event charge collection in bulk MOS Field Effect Transistors (MOSFETs) and Bipolar Junction Transistors (BJTs). Other structures, like SOI MOSFETs, will not be shown but can similarly be converted.
2.2.1 Bulk MOSFETs

2.2.1.1 Basic SEE Structures

Figure 17 shows two MOSFET device representations side by side: a schematic representation and an example layout. The layout is just a 2-D cut plane down the device and still does not contain all of the layout information, but it will suffice for this example. The schematic shows the transistor body contacts being connected through a distributed resistance, as well as a p-n junction diode between the transistor wells. In typical circuit modeling, this resistance or diode may not be included, as it may not have a significant impact on the circuit performance. But both can be relevant for SEE modeling. For example, if the N+ well contact (highest node) is modeled with a direct current (DC) voltage source and no resistance, then the well potential will not change during simulation.
Figure 17. MOSFET Schematic and Layout Example
Figure 18 adds the possible current sources for SEE modeling. These current sources represent almost every p-n junction in the circuit layout as drawn. Both transistor drains and sources are p-n junctions, as is the diode carried over from Figure 17. The N well on top also forms a junction with the P- substrate, but this could be considered as part of the diode already included; the only difference is the doping in the P or P- material.
Figure 18. MOSFET Example Showing P-N Junction Current Sources
Figure 19 now shows the remainder of the possible sources for SEE modeling in the MOSFET example. In this figure, the sources for the ion shunt effect are shown as resistors from the P+ drain/source implant to the P- substrate. These are shown as time-dependent resistors, which will be discussed in more detail later. The possible sites for parasitic bipolar or the ALPEN effect are shown in the same place. Each MOSFET has a parasitic BJT, and it can be modeled as such [Ke89]. The ALPEN effect is similar to the ion shunt, so a time-dependent resistor is once again appropriate.
Figure 19. MOSFET Example Showing Shunt and Bipolar Sources
What is absent from the previous two figures is the actual design of the circuit; they are drawn as just unconnected MOSFETs. If the two transistors are used in an inverter, then some of the potential sources do not exist or are unimportant. The inverter circuit is shown in Figure 20. Note that the P+ drain and N+ body contact are now both connected to VDD. Since they are kept at the same potential, this p-n junction will not generate current, at least not initially. If we assume that the input to the inverter is high, VDD, then the output is low, VSS. As a result, the lower transistor has approximately the same potential in all areas, so there is no current collection in this area, as shown in Figure 21. The upper transistor has one p-n junction reverse biased, and the ALPEN and parasitic BJT sources also remain. The shunt sources are likely not relevant in this state, since the shunt would tend to short the P+ drain to the substrate; because the output is approximately at the substrate potential already, no effect should be observed. In the input-low case, Figure 22, the reverse-biased p-n junction, ALPEN, and parasitic BJT move to the lower transistor. Since the output is being driven by the upper transistor, an ion shunt would tend to short the output, so this effect needs to be considered in this case. Both these examples are just static simulation cases for a given input condition. This will not matter in the case of the resistance or parasitic bipolar device modeled in the circuit, since the voltage dependence is included. However, the current sources for the reverse-biased p-n junctions should be dependent on the junction voltage.
Figure 20. Example MOSFET Inverter Circuit
Figure 21. Example MOSFET Inverter, Input High, SEE Sources
Figure 22. Example MOSFET Inverter, Input Low, SEE Sources
2.2.1.2 Complex SEE Structures

The distinction between the subclasses of SEE is rather context driven, but here we will distinguish effects which can lead to permanent circuit failures from those events which simply upset the normal circuit operation. The following single particle effects are either inherently destructive or, given the proper conditions, may be destructive to a circuit. It is not possible to properly cover the complexities of these effects in this tutorial, but the basic structures will be presented and models introduced. For further details, the reader is directed to the references.

Latchup is a regenerative current-flow condition which can be induced in any semiconductor structure which possesses a parasitic n-p-n-p path, but it has been of most concern in bulk CMOS integrated circuits. The basic cross-sectional structure of a latchup path and its circuit equivalent are shown in Figure 23. The charge collection (photocurrent) from a single-event hit in either the base-emitter junction of the parasitic npn transistor or the emitter-base junction of the pnp transistor can trigger the regenerative circuit -- single event latchup (SEL). Current flow in RW (the spreading resistance of the well region to the VSS contact) or in RS (the spreading resistance of the substrate to the VDD contact) can forward bias the parasitic transistors, leading to more base current and greater forward bias on the transistors, in a regenerative action. This circuit provides a path for large amounts of current flow between the power supply rails. If the energy created by this current path exceeds the thermal dissipation capacity of the surrounding material, melting and electromigration can occur, leading to a destructive breakdown -- a hard error. Even if destructive breakdown does not occur, the latched path will persist until power is removed from the circuit, causing a catastrophic failure of the circuit [Ma93].
Figure 23. Parasitic SCR Structure Leading to Latchup in a CMOS Structure (after [Ma93])
SEL is most often modeled via a circuit model as in Figure 24, where the resistances are ohmic paths between the regions and device spreading resistances for the geometry shown in Figure 23 [Jo90]. However, analysis is complicated by the three-dimensional geometry involved in the parasitic spreading resistances and the 3-D characteristics of the parasitic bipolar transistors [Oc81]. 2-D or 3-D modeling is often used to characterize these lumped parameters for the circuit model.
Figure 24. Latchup Circuit Model (after [Jo90])
Similar to latchup, snapback is also a regenerative current mechanism, but it does not require the p-n-p-n structure [Su78]. In n-channel MOS transistors with large currents, such as output driver devices, a parasitic npn bipolar action can be triggered by SE-induced avalanche multiplication near the drain junction of the device [Oc83], as shown in Figure 25. Avalanching in the depletion region of the drain, due to the large current flow, causes holes to be injected into the p-substrate region under the gate. These holes act as base current to the parasitic npn transistor, causing electrons to be injected by the source and collected by the drain. This increased current leads to more avalanching, more base current, and the regenerative loop is closed. Figure 26 shows a circuit model that can be used to model the snapback effect in a CMOS output inverter [Be88].
Figure 25. Snapback Structure (after [Be88])
Figure 26. Snapback Circuit Model (after [Be88])
Power MOSFET devices, which have large applied biases and high internal electric fields, are susceptible to single event induced burnout (SEB). As shown in Figure 27, the penetration of the source-body-drain region by a single event can forward bias the thin body region under the source. If the terminal bias applied to the drain exceeds the local breakdown voltage of the parasitic bipolar, the single-event strike can initiate avalanching in the drain depletion region [Hh87]. Local power dissipation due to the large drain-to-source current leads to destruction of the device. Similar effects have also been seen in power bipolar devices [Ti91].
Figure 27. Single Event Burnout Structure and Distributed Circuit Model in a MOS Power Device (after [Hh89])
Models for this effect typically require 2-D or 3-D simulation of the complex field lines and carrier transport in and near the avalanching region; however, a few circuit models and at least one analytical model have been presented [Hh87]. Figure 27 shows the circuit equivalent for the
effect. Proper use of this model requires very accurate characterization of the parasitic bipolar, especially the current dependent avalanche multiplication and the junction breakdown voltage. If an ionizing particle passes through the gate oxide region of a power transistor, a transient plasma filament can connect the gate conductor and the semiconducting material under the gate oxide. In power transistors, where a large bias is applied to the gate, this filament will allow large currents to flow through the gate oxide, possibly leading to thermal breakdown and destruction of the oxide, a single event gate rupture (SEGR) [Fi87,Wr87].
2.2.2 BJTs

Figure 28 shows some example schematics and layouts for BJTs, pictured similarly to the MOSFET devices shown in the previous section. The schematic/layout on the left is a vertical npn BJT, as is typical of most modern devices, even SiGe Heterojunction Bipolar Transistors (HBTs). The schematic/layout on the right is a horizontal pnp BJT. There are other types of BJTs, but these are good representatives of the class of devices. Figure 29 shows the p-n junctions that could be locations of current collection, and Figure 30 shows the possible shunt locations. It is possible to consider the shunt location between the collector and emitter of the horizontal pnp to be similar to the ALPEN effect as well.
Figure 28. BJT Schematic and Layout Examples
Figure 29. BJT Example Showing P-N Junction Current Sources
Figure 30. BJT Example Showing Shunt Sources
2.3 Circuit Simulation Responses
So far, the discussion has covered how a single event deposits charge, the mechanisms for collecting that charge, and the translation of the spatial information of those mechanisms from the layout to the circuit schematic. At this point, the discussion turns to understanding and classifying the potential circuit responses to SEE. The discussion in 2.2.1.2 already introduced some circuit responses, specifically the destructive kind. This document does not discuss those responses in detail and instead focuses on the remaining, non-destructive responses. However, all known responses will be introduced in this section. The most basic term is Single Event (SE) or Single Event Phenomena (SEP). This is the interaction of a single ionizing particle with a semiconductor device. It is a localized interaction; the event occurrence does not depend on flux or total exposure. The SE or SEP is spatially and temporally random. All remaining terms define circuit responses, starting with Single Event Effects (SEE). SEE is the broad class of all circuit or system responses.
2.3.1 Permanent Circuit Responses

Single Event Latchup (SEL) is a condition where the parasitic pnpn structure in CMOS is latched into a high current state. This can be either destructive or non-destructive. In the non-destructive case, the affected device will have to have its power cycled to restore normal operation; long-term high current will eventually result in failure. Single Event Snapback (SES) is a transistor latchup condition in MOSFETs, most typically in input/output circuitry. This latchup condition does not require the parasitic pnpn structure. Single Event Burnout (SEB) is a destructive event in power MOSFETs. Single Event Gate Rupture (SEGR) is a destructive event in CMOS where the gate is struck by a heavy ion and the parasitic effects cause the electric field to exceed the maximum the oxide can sustain, rupturing the gate. This is a rare event and is typically observed when the part is operated in an overvoltage condition. It is also generally observed in power MOSFETs. Single Event Hard Error (SEHE) is another potential term that might be encountered; this is generally a broader class than SEGR. Another possible hard error is that dynamic logic elements can exhibit hard errors without a gate rupture.
2.3.2 Transient Circuit Responses

Besides all the different responses that can be permanent in the circuit, there are a number of terms used to describe transient circuit responses. All these terms can lead to a large amount of confusion. In the list of permanent circuit responses, the mechanisms and/or structures were all unique. With respect to transient circuit responses, the mechanism is the same for the whole class of terms; the difference is in how the circuit responds to the mechanism. The mechanism for the transient circuit response was covered in detail in Section 2.1. A SEP deposits excess charge at or near a p-n junction, and if there is a reverse bias, the nodes at that junction collect the excess charge. The response is generally short lived, so the nodes see a pulse of current. The specifics of this pulse will be discussed in the transistor-level modeling section. The terms in this section just describe how a circuit might respond to the current pulse. Single Event Transient (SET) is the physical signal glitch caused by an SEP. It can also be used to discuss the propagation of the current pulse in the circuit. There are two defined forms, ASET (analog single event transient) and DSET (digital single event transient), though there is no fundamental difference between them. ASET is typically used when working with analog types of circuits and the response is a current or voltage transient. DSET, on the other hand, is typically used when the SET is initiated in combinatorial logic outside of memory circuits and causes a memory circuit to respond erroneously. Single Event Upset (SEU) is a bit flip or other corruption of stored information due to an SEE (usually applied to memory circuits). It is also the potential end result of a DSET, if the memory circuit latches the incorrect value. The SEU can be corrected by the circuit.
Multiple Bit Upset (MBU) is a SEP that affects multiple circuits at once, not from multiple ion hits but from one event. This is seen in memory cells where many nodes that are physically adjacent are affected (spatial relationship). It is also seen in sequential circuits operating at a high rate of speed, much faster than the single event, so that multiple clocked bits are errant (temporal relationship). Single Event Functional Interrupt (SEFI) is an SEP that affects a critical portion of the circuit design, like a state machine. A SEFI is non-destructive, but may linger for a long time causing incorrect operation of the circuit. An example is a SEU in the power-on reset circuit that initiates a reset while the circuit is operational.
2.4 Payoffs/Pitfalls of Circuit Modeling and Simulation
Integrated circuit vendors have three avenues to address the single event susceptibility of a particular part. First, the part can be developed using the most advanced available design procedures and fabrication processes for mitigating single event effects, then placed in service in the hostile environment with the hope that the conservative design and rad-hard technology effectively resist the single particle bombardment and that failure will not occur. Secondly, the part can be built and subsequently extensively tested in ground-based particle accelerators which mimic the type and energy of particles expected in the service environment. Based on the survivability and upsets seen in these tests, that particular design and process technology can be verified as tolerant of the expected particles in its flight/service regime. Thirdly, the circuit manufacturer can use prior experience and physical theory to develop accurate models of the response of existing circuits to the single particle environment of interest. Then, if and only if the models are inclusive and robust enough to be predictive, the designer can apply these models during the design phase to optimize the circuit for its particular flight environment. Once the design is finished, a rough prediction of the effects of single events on the circuit can be derived. Of course, the most common approach is a combination of the three procedures above. A designer can use a radiation-hardened process technology and rad-hard circuit design techniques, use predictive modeling to optimize the design for single-event tolerance and predict its failure level, use ground-based testing to verify radiation tolerance, and confidently place the part into service. Accurate and reliable modeling is an integral component of this design procedure.
2.4.1 Payoffs

Properly executed modeling of single event radiation has several unique attributes. It can:
• Provide insight into physical mechanisms leading to experimentally observed effects, especially if these observed effects are intertwined with others, or if the underlying mechanisms are impossible to measure experimentally.
• Uncover the relationship between physical parameters of the material and/or circuit and the observed response to the radiation.
• Provide a designer with 'what-if' results on design or processing changes without the time and expense of fabrication and experimental verification; that is, a tight feedback loop between design changes and predicted radiation vulnerability.
• Provide for the prediction of a circuit's response to the conditions in the field of operation before actual deployment.
• Identify design flaws and 'bottlenecks' in the system leading to upset vulnerability.
• Provide insight into the effectiveness of hardening schemes.
The above payoffs relate to all parts of single event effects modeling, not just circuit modeling. Circuit modeling and simulation provide some unique payoffs:
• Circuit simulation is faster than device simulation. Many simulations can be run with varying temporal and ion strike locations in the same computation time as a single complex device simulation.
• Circuit simulation also lends itself to a greater number of nodes that can be simulated simultaneously. Device simulation is still limited to a fairly small number of devices.
• Circuit simulation is especially effective at identifying the flaws and 'bottlenecks' in a design, allowing more complex device simulations to focus on smaller pieces.
2.4.2 Pitfalls

Of course, the positive aspects of modeling outlined above do not come without serious constraints and unexpected pitfalls. One of the most important, yet elusive, qualities of effective modeling is the proactive recognition of the limits and deficiencies of the model. All too often, models are extrapolated to an application where they no longer apply. Erroneous results then reflect badly on the original model, while the real problem is incorrect application of the model outside of its valid constraints. A second pitfall is the completeness of the model. The model will, obviously, not account for effects which are not included in the model (the Modeler's Law). Even if the model developer believes an effect is negligible, ignoring it will only make it go away in the model, not in the actual response. The completeness of a model is a constant tradeoff with its usability and efficiency. A third pitfall of modeling, which is complementary to the second, is the 'forest and trees' problem. A model which places too much emphasis on the quantitative accuracy of unimportant effects can mask the qualitative results important to the modeler. Not only is this a philosophical statement, it also describes very real, practical limits. In many cases, a model which overemphasizes secondary details can be totally prohibitive in complexity when applied to a complete system. The appropriate level of complexity and accuracy of a model is entirely dependent on the application, and the choice requires cleverness from both the model developer and the model user. The development of new models, or even the application of existing models to a particular part or system, can be a time-consuming, expensive task. The rewards lie in the list of Section 2.4.1, not the least of which is the close link between design actions and predicted radiation consequences. Once again, the above list applies in general to all parts of single event modeling. Circuit modeling and simulation also have unique pitfalls. The most pressing pitfall is the lack of spatial information in the circuit model. While modelers can do their best to include all spatial effects, this is a very challenging and complex problem. The first issue with the lack of spatial information is the ability to really look at multiple node charge collection. When an ion strike hits a given part of the circuit, multiple nodes may collect charge, and not all at the same instant
in time. A device model would correctly capture this effect, but the circuit modeler will be challenged to duplicate it. Secondly, the distributed resistances shown in Figure 17 and Figure 28 are distributed all across the p-n junction. At best, this can be modeled as a lumped resistance along the junction. Finally, with circuit modeling, it is possible to significantly overestimate or underestimate the circuit response to the single event. Overestimating the response leads to the application of designs that are more radiation hardened than required. This is acceptable if one can apply the circuit design in the system. Underestimating the response is the much worse problem. This is typically the result of assuming away a mechanism that could not be assumed away. As long as the modeler is aware of these pitfalls, the results of the circuit modeling and simulation can be appropriately applied to addressing single event effects susceptibility.
2.5 Past Challenges with Respect to Circuit Simulation of SEEs
Back in 1993, Dr. Massengill gave an overview of the future of modeling with respect to SEEs. It is interesting to look at that now and observe how well or poorly the community has progressed on his challenges. In 1993, the challenges and areas for continued research in SE modeling were many. A few of these were:
• The improvement of circuit models for scaled, submicron devices and technologies, including advances in charge collection models for these structures
• The development of models for emerging devices and technologies as they appear
• A reassessment of the basic assumptions involved in SE modeling when applied to the high speed and dense circuits continuing to emerge
• The development of comprehensive analysis techniques for single events in combinational logic
• More work in the area of single-event modeling of analog subsystems
• The advancement of system-level analysis techniques
• True integration of SE modeling into the early design phases of microelectronic design (even commercial), as part of the integrated engineering CAD environment [Ma93]
Many of these concepts are presented in the next four sections of this short course. Following those sections, the summary will review the progress made on each of these challenges.
3.0 Transistor-Level SEE Modeling and Simulation

In the previous section, we discussed the charge generation by a single penetrating ion and the charge collection by the surrounding junctions. Stepping up in a hierarchical view, this understanding can relate the collection of charge in individual device junctions to changes in the circuit currents and voltages. Models can be incorporated into circuit simulation of the devices interconnected in a subcircuit. The goal is to model the relationship between the SE-induced perturbations and the resulting transients, upsets, and functional interrupts. These circuit models can then be used to: (1) assess the vulnerability of a circuit to particular LETs, (2) compare the SE hardness (tolerance) of different designs and/or technologies, (3) develop and simulate the effectiveness of hardening schemes, and (4) predict the error response of a circuit in a particular environment. In this section, we will introduce a transistor-level description of the SE photocurrent, based on models developed from the physical mechanisms. This transistor-level photocurrent model will be integrated into models of subcircuits and circuits to study its effect on sensitive circuit nodes [Ma93].
3.1 Available Tools and Capabilities
Transistor-level tools model the electrical responses of transistors in a circuit. The models of the transistors can vary from very simple to very complex depending on the source of the model. Transistor models from the process foundry will typically fall into the complex category and include many inherent parasitic resistances and capacitances within the modeled device. Subcircuits are created by wiring the transistors and other components (resistors, capacitors, diodes, etc.) together, either by writing a 'netlist' or by graphically developing a schematic. SPICE, SMARTSPICE, ACCUSIM, and SPECTRE are commercial tools that fall into this category [Bu01]. At this level of SEE modeling, it is difficult to model and simulate the current generation and collection as it would actually occur in the device and circuit. Basically, the transistor model lumps parameters together so that the current generation mechanism cannot be physically implemented. An example of this is shown in Figure 31. These views of the BJT schematics and layouts show the base resistance. The base resistance is a function of the distance of the base contact from the narrow base region. The current source generated at the base-emitter p-n junction is coincident with this narrow base region as well. So, the picture on the left shows how the current source should actually be implemented. But when a BJT transistor model is applied, the base resistor is included in the transistor model, so a modeler may have to place the current source on the other side of the base resistor, as shown on the right. Though it may be difficult to model and simulate the current generation and collection as it would actually occur, this type of modeling becomes much more useful as the initial current generation propagates through transistors and circuit elements. The transistor models are designed to simulate time- and frequency-dependent transfer functions. So, if one can accurately represent the initial current generation, then the propagated outputs or circuit responses will be fairly indicative of the actual response.
Figure 31. Actual BJT Current Source versus Circuit Simulated Current Source
3.2 Circuit Model Requirements
Section 2.2 showed many of the locations for potential current generation in transistors. This can be applied to other circuits as well. This section will expand upon that topic and cover the various models that have been developed to implement the charge collection from an ion strike.
3.2.1 P-N Junction Charge Collection

The charge collection at a p-n junction is the simplest model at the transistor level. In order to collect charge, the p-n junction needs to be reverse biased. The current flows from the higher potential in the n-type semiconductor to the lower potential in the p-type material. But that is where the simplicity ends. The amount of charge collection due to drift is dependent on the width of the depletion region, which is approximately dependent on the square root of the reverse bias voltage. The amount of charge collection due to funneling is also dependent upon the applied voltage, but that effect is more complex. Finally, the amount of charge collection due to diffusion is generally independent of the applied voltage.
Figure 32. Lumped-Parameter Model of the SE Current Pulse and the Hit Junction (after [Ma93])
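To make the square-root dependence concrete, the sketch below evaluates the textbook one-sided step-junction depletion width as a function of reverse bias; the doping level and built-in potential used here are illustrative values, not tied to any particular process.

```python
import math

# Minimal sketch: one-sided step-junction depletion width vs. reverse bias,
# W = sqrt(2*eps*(Vbi + VR)/(q*N)). Doping and Vbi below are illustrative.
Q_E = 1.602e-19            # electron charge [C]
EPS_SI = 11.7 * 8.854e-14  # silicon permittivity [F/cm]

def depletion_width_um(v_reverse, n_dope=1e17, v_bi=0.8):
    """Depletion width [um] for a one-sided junction, doping n_dope [cm^-3]."""
    w_cm = math.sqrt(2.0 * EPS_SI * (v_bi + v_reverse) / (Q_E * n_dope))
    return w_cm * 1e4

for vr in (0.0, 1.0, 3.0, 5.0):
    print(vr, "V ->", round(depletion_width_um(vr), 3), "um")
# The width (and hence the prompt drift-collection volume) grows roughly
# as the square root of (Vbi + VR), as stated in the text.
```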
3.2.2 Ion Shunt Charge Collection Hauser et al have developed an analytical model for the ion shunt based on physical first principles [Ha85]. The Hauser model describes the total charge delivered to a particular circuit node as the sum of the normal charge collected by funneling and charge conducted through the shunt conduit. This analytical model is based on the assumptions of uniform doping along the plasma track and a uniform charge generation (LET) along the track. However, the model is very enlightening as to the major contributors to the shunt effect in ICs. It shows that shunt charge
depends on (1) the carrier mobilities, (2) the mean time the track exists, (3) the length of the resistive track, and (4) the voltage difference between the two nodes connected by the track. The equation below expresses the model as a resistance between the shunt nodes which exists just for the time TR:

    R_shunt = LR / (q µ NO cos θ),    0 < t < TR

where:
    TR [sec] is the time the shunt exists,
    LR [cm] is the distance between like regions spanned by the shunt,
    θ is the angle of particle incidence from normal incidence,
    µ [cm2/V-s] is the average mobility of the charge carriers in the shunt, and
    NO [cm-1] is the linear charge density along the track.

Figure 33 shows the equivalent circuit model for these currents.
Figure 33. Ion Shunt Circuit Model (after [Ha85])
All of the parameters of this shunt model come from known material and device parameters except TR. This time that the shunt exists is a complicated function of the voltages on the shunt nodes, and it has not been analytically modeled from first principles. It can, however, be modeled in device tools. Knudson et al. later expanded the Hauser shunt model to include a spreading resistance at the ends of the shunt, in series with the actual shunt resistance [Kn86]. Experimental results of charge collection in various geometries were used to empirically fit the parameters of this model.
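A minimal numeric sketch of this shunt model follows, using the resistance form given above. All parameter values are illustrative, and TR in particular would normally come from device simulation, as the text notes.

```python
import math

# Minimal sketch of the ion shunt: a resistance that exists only while the
# plasma track persists (0 < t < TR). All values below are illustrative.
Q_E = 1.602e-19      # electron charge [C]

def shunt_resistance(l_r_cm, theta_deg, mobility, n0_per_cm):
    """Shunt resistance [ohm]: R = LR / (q * mu * N0 * cos(theta))."""
    return l_r_cm / (Q_E * mobility * n0_per_cm * math.cos(math.radians(theta_deg)))

def shunt_charge(v_diff, r_shunt, t_r):
    """Charge [C] conducted through the shunt while it exists (ohmic estimate)."""
    return (v_diff / r_shunt) * t_r

# Illustrative numbers: 1 um spacing, normal incidence, averaged mobility,
# and a track density corresponding to a moderate-LET ion.
R = shunt_resistance(l_r_cm=1e-4, theta_deg=0.0, mobility=700.0, n0_per_cm=6.4e9)
print(R)                             # on the order of a hundred ohms here
print(shunt_charge(3.0, R, 1e-10))   # ~2 pC for a 3 V difference, TR = 0.1 ns
```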
3.2.3 Parasitic Bipolar Enhancement Charge Collection

A lumped-parameter circuit model for an SOI MOSFET device response to the SE hit is shown in Figure 34 [Ke89]. The model includes the p-n junction current source as well as the parasitic BJT. Though this model was developed for SOI devices, it can be extended to modern bulk MOSFET devices, where this parasitic bipolar enhancement is being observed [Ol05].
Figure 34. Lumped-Parameter SOI Device Model for SE Charge Collection
The SOI charge enhancement mechanism described here involves a 3-D, distributed effect. The potential along the length of the body region varies with position, and the electrical currents then depend on this potential. Bipolar action is very localized: only a small portion of the body region is biased high enough to maintain bipolar conduction. In order to model the geometrical nature of the body region, a distributed circuit model has been used. The width of the body region is modeled by a distributed resistance/capacitance network and local bipolar devices. A single-event induced current pulse can be included at any point along the network to simulate hits at various positions along the width of the body. Because of the 3-D nature of the SOI single-event problem and the complicated nature of the bipolar charge movements in the body region, single-event modeling of these circuits is amenable to mixed-level simulations. Another approach is to apply microscopic simulations to a specific device in order to quantitatively understand the properties of the device response to the ion hit, and then, building hierarchically on this information, develop an accurate description of the device to include in the full circuit simulation.
3.3 Defining the Inputs
A straightforward, one-dimensional, analytical model was developed by Hu to describe the charge flow (current) by field funneling [Hu82]. The model arising from Hu's study of the funneling effect of the single event produces a time-dependent current across the struck junction with an instantaneous rise time and a fall time shape governed by a cosh² function.
Messenger developed a model for the SE current pulse as a double exponential [Me82]:

    I(t) = I0 (e^(-αt) - e^(-βt))

where α [sec-1] is the time constant of charge collection from the funnel (similar to Hu's model above) and β [sec-1] is the time constant for the initial formation of the funnel region. This type of SE current pulse is shown in Figure 35. The double exponential form of the SE charge collection is the most common form used in transistor-level simulations.
Figure 35. Typical Shape of the SE Charge Collection Current at a Junction
It is relatively easy to implement the double exponential current source in a simulation code such as SPICE, with the values of α and β empirically derived. Typical parameters of this pulse are a rise time on the order of tens of picoseconds and a fall time on the order of 200 to 300 picoseconds. The equation for this current pulse which matches the form in circuit simulation codes such as SPICE is:

    I(t) = I0 (e^(-αt) - e^(-βt))
where the parameters are defined in Figure 36 [Ma93]. The total charge delivered by the current pulse is the integral over time of I(t). Performing this integration gives:

    Q = I0 (1/α - 1/β)
Figure 36. Double-Exponential Current Profile for Transistor-Level Simulations (after [Ma93])
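The snippet below implements this double-exponential pulse numerically and checks that its time integral recovers the closed-form total charge; the amplitude and time constants are illustrative values chosen within the ranges quoted above.

```python
import numpy as np

# Minimal sketch: the double-exponential SE current pulse and its total charge.
# I(t) = I0*(exp(-alpha*t) - exp(-beta*t)); 1/beta sets the rise, 1/alpha the fall.
# Amplitude and time constants below are illustrative values only.
I0 = 2.0e-3              # pulse amplitude scale [A]
alpha = 1.0 / 250e-12    # fall rate [1/s] (~250 ps fall)
beta = 1.0 / 20e-12      # rise rate [1/s] (~20 ps rise)

t = np.linspace(0.0, 2e-9, 20001)              # 0 to 2 ns
i_se = I0 * (np.exp(-alpha * t) - np.exp(-beta * t))

q_numeric = np.trapz(i_se, t)                  # numerical integral of I(t)
q_closed = I0 * (1.0 / alpha - 1.0 / beta)     # closed form Q = I0*(1/a - 1/b)
print(q_numeric, q_closed)                     # both ~4.6e-13 C (0.46 pC)
```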
The magnitude and time profile of the current model depend on material parameters, the ion species, the ion energy, and the hit location relative to the junction. As shown in the experimental pulse-shape results of Figure 37, both the magnitude and time profile of the actual collection current pulse can be highly variable [Wa88]. If the time profile of this collection current is not important to the circuit response to the hit, then the simulated ion strike can almost
take any shape and the total charge is the important characteristic. If, however, the time profile is critical to the circuit response, more detailed development of the current shape is necessary, for example, from device modeling. In either case, at this level of the hierarchy, a circuit-level model of the collection current at each junction of interest is included in the lumped-element circuit description, as shown in Figure 32.
Figure 37. Example Experimentally Observed Charge Collection Waveforms (after [Wa88])
In an analysis of the frequency-domain content of the SE charge collection, Boulghassoul et al. compared the relative effect of the pulse shape on the propagation of the transient signal through an analog circuit. Figure 38 shows the circuit used for this evaluation, with boxes outlining the input, gain, and output stages of the amplifier. A SE current pulse was generated at one point in each of these stages and given two different shapes, shown in Figure 39, but the same total charge. Figure 40 presents the results of this analysis. All graphs show the pulse coming out of the output stage. The farther the SE current source is from the observation point, the less dependent the output waveform is on the input pulse shape [Bo02b].
Figure 38. Circuit for Analysis of Pulse Shape Dependence (after [Bo02b])
Figure 39. SE Current Pulse Shapes Examples, Same Total Current (Charge) (after [Bo02b])
Figure 40. Simulation Results of Effect of Different SE Pulse Shapes (after [Bo02b])
Historically, it has become common practice to use the total charge delivered by the current waveform as a single descriptor of the SE effect on the affected circuit node. This can be an extremely dangerous simplification, as will be presented in subsequent sections, since it assumes the time profile of the charge delivery to the sensitive node is unimportant to the response. From this simplification comes the concept of critical charge, QC. QC is a property of the particular circuit (not the ion or environment) and is defined as the minimum charge delivered by the SE current waveform of Figure 35 which causes that circuit to lose information and create an error condition. In rough terms, it is the amount of charge needed on a sensitive node to cause an upset. Of course, QC tells nothing about the time profile of the delivery of that charge. Critical charge is commonly used as a figure of merit in the comparison of circuit design types and technologies. The quantity describes the vulnerability of a circuit to single events without the complications of ion species, ion energies, LET, or type of charge collection.
3.4 Simulation Approaches and Results
There are basically two types of transistor-level simulations: static and dynamic. The static versus dynamic distinction refers to the state of the simulated circuit, not the modeled SE. Static SE simulation assesses the circuit's ability to maintain its state. For example, a static SE simulation may examine a memory circuit and the amount of charge it takes to change it from a '0' to a '1' or vice versa. Another example is an analog circuit (e.g., an operational amplifier) propagating the transient current pulse with constant inputs. On the other hand, dynamic SE simulation puts the circuit in an active condition where the input conditions are changing before, during, or after the SE strike.
3.4.1 Static SE Simulation

The most common subject of static SE simulation is memory circuits. This includes bulk memory arrays of random access memory (RAM), in both dynamic (DRAM) and static (SRAM) versions. The dynamic here refers to the fact that the memory cells lose information over time and need to be refreshed periodically. Other types of memory circuits are read-only memory (ROM), including programmable ROM (PROM), electrically erasable PROM (EEPROM), and others. ROMs may be challenging to model for SE strikes at the transistor level because many of these memory types store the information on a floating-gate capacitor structure. Other interesting memory circuits for static SE simulation are cell library static latches and flip-flops. The typical static SE simulation is the actual upset of the memory contents by the SE. One of the major topics in the 1993 Short Course by Dr. Massengill was DRAM upset modeling. While this may still be of interest to some, DRAM cells are generally considered to be very soft with respect to SE strikes at this point. Basically, the charge stored on a DRAM capacitor is so small that it does not take much to remove this charge. In addition, DRAM capacitor structures have become so complex and technology dependent that they are a challenge to model. However, the static SE simulation of the DRAM cell illustrates a couple of concepts important to the understanding of the subject material. Figure 41 below shows an example of a circuit simulation of a DRAM capacitor connected to a pass gate P-N junction. The initial charge of the capacitor is a certain noise margin above the level required by the sense amplifier to correctly read the cell state. When an ion strike occurs, the charge stored on the capacitor decreases. If the stored charge falls below the noise margin, the sense amplifier will likely interpret an incorrect result and the information in the DRAM cell is lost. The first important concept this example illustrates is that this SEU resulted from the total charge collected, not from the rate of charge collection. If the cell had had a higher initial state, it would simply have taken longer for the information to be lost. The second important concept this example illustrates is the problem of the variability of the SE interaction in the circuit. If we assume that the SE deposits a constant amount of charge in every DRAM cell at the same instant in time, not all cells will respond the same. A DRAM cell charge will be initialized after a write or refresh and then discharge over time, so the point in time of the SE strike with respect to the last write or refresh is relevant. Also, due to processing and layout differences, every DRAM cell will not be written or refreshed to the same charge level; there will be some statistical variation. One could simply take all these variables to their worst case (worst DRAM cell location, highest pass transistor leakage, and longest time after refresh) and do the simulation. This would conservatively bound the problem, but may significantly overestimate the susceptibility of the DRAM cell. There are simulation methods (Monte Carlo) able to account for these statistical variations, as sketched after Figure 41, but this example illustrates a pitfall of SE simulation versus the actual response.
Figure 41. DRAM Capacitor SE Discharge Example (after [Ma93])
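A toy Monte Carlo illustration of the variability argument follows: it samples the initial stored charge and the time since the last refresh, then asks whether a fixed deposited charge drives the cell below its sensing margin. Every cell parameter here is invented purely for illustration; a real analysis would draw these distributions from process data.

```python
import random

# Toy Monte Carlo sketch of DRAM-cell SE upset variability (invented values).
# A cell upsets if (initial charge - leakage loss - SE-removed charge)
# falls below the sense-amplifier margin.
random.seed(1)

Q_MARGIN = 20e-15      # minimum charge for a correct read [C]
Q_SE = 12e-15          # charge removed by the ion strike [C]
I_LEAK = 0.5e-12       # mean pass-transistor leakage current [A]
T_REFRESH = 32e-3      # refresh interval [s]

def trial():
    q0 = random.gauss(40e-15, 4e-15)             # written charge varies by cell
    t = random.uniform(0.0, T_REFRESH)           # strike arrives at a random time
    leak = random.gauss(I_LEAK, 0.05e-12) * t    # leakage loss since last refresh
    return (q0 - leak - Q_SE) < Q_MARGIN         # True -> upset

n = 100_000
upsets = sum(trial() for _ in range(n))
print(upsets / n)   # fraction of strikes that upset, given the spread of states
```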
The static random access memory (SRAM) cell shown in Figure 42 is the most common memory cell encountered in static SE simulation. The SRAM cell is basically two back-to-back CMOS inverters forming a simple latch. It has two and only two stable states: a LOW voltage at node N1 and a HIGH voltage at node N2, and vice versa. The regenerative, high-gain, positive feedback of each inverter driving the input to the other provides static (non-refreshed) operation and a high degree of noise immunity. In this discussion, we will assume that a binary '1' is stored by LOW-HIGH complementary voltages at nodes N1 and N2, and a binary '0' is stored by a HIGH-LOW complementary pair. The particular physical arrangement of these voltage levels in relation to binary logic levels is arbitrary.
Figure 42. Six Transistor (6-T) SRAM Cell Schematic (after [Ma93])
Since this circuit contains both n-channel and p-channel devices, it is susceptible to ion strikes in two places, as shown in Figure 43. In the figure, an ion strike to the HIGH-side n-channel drain tends to degrade this voltage, while an ion strike to the LOW-side p-channel drain tends to enhance this voltage. Of course, the LOW-side/HIGH-side distinction is dependent on the binary bit stored at the time of the ion strike. Single event modeling of this circuit involves the inclusion of current sources representative of the SE charge collection process at the sensitive junctions.
Figure 43. 6-T SRAM Cell with Ion Strike Current Sources (after [Ma93])
The CMOS SRAM cell will actively latch in one of its stable states following any disturbance of the critical nodes (N1 and N2). Thus, the definition of an upset is simply any perturbation which causes the cell to switch from one stable state to the other. Simulation results of two SE hits to node N2, one not causing upset and the other causing upset, are shown in Figure 44. As can be seen in the latter case, the SE photocurrent perturbs the HIGH node to a level which causes the cell to regenerate to the other stable state. This regeneration process is very quick, and an externally observable error occurs [Ma93].
Figure 44. Simulation Results Showing No Upset (Left) and Upset (Right) of the Cross-Coupled Information Nodes in a CMOS SRAM Cell (after [Ma93])
The distinction in the upset mechanism of the SRAM cell versus the DRAM cell is that the upset is dependent on the rate of charge collection rather than the total charge collected. Once the SE charge collection rate exceeds the feedback, regenerative process inherent to the SRAM cell, the cell will upset. The total charge collected is not the relevant metric, though it will scale fairly well with the charge collection rate. Nevertheless, modelers may report the critical charge of the circuit, QC, which is the minimum total collected charge needed to upset the circuit. Though somewhat misleading, this will be fine as long as the shape of the SE current pulse is maintained. In Figure 44, the minimum charge needed to cause this upset, QC, is 0.25 pC. Finding QC for such a circuit is usually accomplished via iterative circuit simulations to determine the minimum current pulse magnitude (and resultant charge) needed to cause the upset, as sketched below. The upset process in CMOS SRAMs is dominated by the regenerative current process. This is evident in Figure 44, as a very small change in the current pulse leads to very different results.
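A minimal sketch of that iterative search is shown below. The upset test is a stand-in for a real circuit simulation (in practice, one SPICE run per trial charge); the bisection logic is the point of the example, and the 0.25 pC threshold is an invented stand-in for the real cell response.

```python
# Minimal sketch: bisection search for critical charge Qc.
# upsets(q) stands in for a full transistor-level simulation run at deposited
# charge q; the fixed 0.25 pC threshold below is an invented placeholder.

def upsets(q_coulombs):
    """Placeholder for a circuit simulation: True if the cell flips."""
    return q_coulombs >= 0.25e-12

def find_qc(q_low=0.0, q_high=10e-12, tol=1e-15):
    """Bisect between a non-upsetting and an upsetting charge, to tol [C]."""
    assert not upsets(q_low) and upsets(q_high)
    while q_high - q_low > tol:
        q_mid = 0.5 * (q_low + q_high)
        if upsets(q_mid):
            q_high = q_mid   # upset: critical charge is at or below q_mid
        else:
            q_low = q_mid    # no upset: critical charge is above q_mid
    return q_high

print(find_qc() * 1e12, "pC")   # converges to ~0.25 pC for this stand-in
```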
Essential to accurate circuit modeling of upsets are: 1) accurate models for the dynamic (voltage-dependent) capacitances in the devices (this would include both junction and diffusion capacitances) and 2) accurate device current models for the transient regeneration currents [Mn83]. Besides the static SE simulation of memory elements, there is the static SE simulation of the propagation of transients in digital and analog circuits. Buchner and McMorrow presented a short course in 2005 titled 'Single Event Transients in Linear Integrated Circuits,' which covered this general area of static SE simulation [Bu05]. The groundbreaking work in this area was performed by Boulghassoul et al. They validated that it was possible to develop a transistor-level circuit simulation of the generation and propagation of SE currents in an LM124 operational amplifier. This work concentrated on evaluating the SE response of a circuit for which detailed transistor-level models were not available. Thus, a lot of effort was placed on matching the derived transistor-level model to actual electrical performance. The next step in the development of the transistor-level model was to tune the model with respect to laser SE laboratory testing. This demonstrated the sensitivity of some of the design parameters to the SE response of the circuit. For example, Figure 45 shows a sample plot of the SE current pulse from a constant deposited charge while varying a collector-base junction capacitance in a BJT. The amplitude and pulse width were highly sensitive to this transistor parasitic capacitance [Bo02a].
Figure 45. Example Sensitivity of SE Pulse Generation to Device Characteristics (after [Bo02a])
Once the transistor-level model of the LM124 in an inverting amplifier configuration was validated with laser SE testing, it was used to predict the SEE response of the LM124 in a non-inverting amplifier configuration. Since ion strikes are characterized by randomness in location and orientation, translating these conditions into an equivalent computer-generated environment required an intensive computational effort. Because all the transistors of the circuit and their multiple junctions are potential targets, current sources modeling ion-induced transient currents were swept junction by junction across every transistor terminal where a charge collection process could occur: the emitter–base junction, collector–base junction, emitter–collector shunt, or
collector–substrate junction. The effort did not apply multiple simultaneous current sources to the circuit nodes to account for multiple ion strikes or for a single event affecting multiple nodes. The integrated charge of the current sources was swept from 0.1 pC to 10 pC and applied directly to the junction capacitance to avoid the voltage drops across the spreading resistances at the terminals. These simulation results, shown in Figure 46, were then compared to the broad-beam test results, shown in Figure 47. Regions of the graphs that match are identified with numbers. Notice that the simulation did not completely predict all the results (lower right corner of Figure 47) and overpredicted the pulse width of curve (3) [Bo02a].
Figure 46. Full Circuit Simulation Results for LM124 Non-Inverting Amplifier (after [Bo02a])
Figure 47. Broad-Beam Test Results for LM124 Non-Inverting Amplifier (after [Bo02a])
Another area of static SE simulation is the propagation of the SE current pulse in digital logic, discussed in detail in the 2001 short course by Buchner and Baze [Bu01]. A recent simulation experiment by Clark et al. looked at the effect of SE charge collection on propagation through chains of inverters. They first modeled the injected current as dependent on the node voltage, so that as the reverse bias on the P-N junction decreased, the rate of charge collection decreased. This has the effect of spreading out the SE current pulse, as
shown in Figure 48. This voltage transient was then simulated propagating in a chain of inverters. Figure 49 shows the result of this study: small pulse transients were attenuated by the circuit, while large transients were free to propagate through the chain of inverters [Cl03].
Figure 48. Circuit Induced SE Pulse Shape Produced in Circuit Simulation (after [Cl03])
Figure 49. Propagation of SE Current Transient Showing (a) Significant Attenuation, (b) Slight Attenuation, and (c) No Attenuation, Infinite Propagation (after [Cl03])
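To make such an inverter-chain injection experiment concrete, a minimal HSPICE-style sketch is given below; the LEVEL=1 model cards, device sizes, and pulse parameters are assumptions for illustration and are not taken from [Cl03]:

* SET injection into a chain of inverters (illustrative models and sizes)
.MODEL NMOD NMOS (LEVEL=1 VTO=0.5 KP=100u)
.MODEL PMOD PMOS (LEVEL=1 VTO=-0.5 KP=40u)
.SUBCKT INV in out vdd vss
MP out in vdd vdd PMOD W=2u L=0.25u
MN out in vss vss NMOD W=1u L=0.25u
.ENDS
VDD vdd 0 DC 2.5
VIN a 0 DC 0 $ static input; node n1 sits HIGH
X1 a n1 vdd 0 INV
X2 n1 n2 vdd 0 INV
X3 n2 n3 vdd 0 INV
X4 n3 n4 vdd 0 INV
ISE n1 0 EXP(0 1m 1n 20p 1.05n 200p) $ strike pulls the HIGH node toward ground
.TRAN 1p 5n
.END

Comparing V(n2), V(n3), and V(n4) while varying the pulse magnitude reproduces the behavior described above: small pulses die out within a stage or two, while large ones propagate undiminished.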
3.4.2 Dynamic SE Simulation
Dynamic SE simulation is typically much more involved than static simulation, since the circuit inputs are now changing relative to the SE strike. This type of simulation is especially relevant when the width of the SE pulse is on the order of the circuit clocking signals or flip-flop setup and hold times. This means that the SE pulse can appear as a legitimate signal within the circuit, and if that signal propagates to a memory input at the right point in the clock cycle, data can be errantly stored. This is shown pictorially in Figure 50. This type of simulation has been used for the past few years in looking at high speed logic in SiGe HBTs.
Figure 50. Constraints on the Arrival of a Propagating SE Pulse in Relation to the Probability of Latching and Causing an Error (after [Ka92])
SiGe HBTs are a potential technology for applications needing very high speed logic. To get an idea of the scope of the simulation problem, Figure 51 shows the schematic for a D flip-flop. To fully simulate a single flip-flop circuit with potential SE sources in a static sense would be a significant undertaking. Adding the complexity of clocking the circuit at 2 GHz with an alternating D input makes the SE simulation all the more complex.
Figure 51. SiGe HBT D Flip-Flop Schematic (after [Ni02])
A device simulator was used by Niu et al. to determine the SE currents that could be expected in the HBTs. The resulting circuit model and the modeled currents are shown in
Figure 52 below. The collector current, icn, is the summation of the other three currents in the transistor model. As seen in the right side of this figure, the charge collection is characterized by a very fast prompt current followed by a long sustained current. Since one clock cycle is 0.5 ns, this chart shows the current level over two clock cycles; the charge collection, and hence the effect of the SE pulse, can last over multiple clock cycles. The simulation output, shown in Figure 53, illustrates the effect of the SE pulse over multiple clock cycles. A static SE simulation would not have shown this effect [Ni02].
Figure 52. SiGe HBT SE Transistor Model and Device Simulator Current Output (after [Ni02])
Figure 53. Simulated Output Waveforms for Sample HBT D Flip-Flop Showing Multiple Errors (after [Ni02])
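A hedged netlist sketch of the general approach in Figure 52 — independent SE current sources attached to the junctions of the struck HBT — might look as follows; the model card, node names, and pulse parameters are illustrative assumptions, not the calibrated sources of [Ni02]:

* SE current sources on a struck HBT (two junction components shown)
.MODEL NPNMOD NPN (BF=100 IS=1e-16)
Q1 c b e NPNMOD
ISECB c b EXP(0 2m 0 5p 20p 100p) $ fast prompt collector-base component
ISECE c e EXP(0 0.5m 0 50p 200p 2n) $ slow sustained component, ~ns decay

The long tail of the second source is what carries the disturbance across multiple 0.5 ns clock cycles.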
CMOS digital logic also sees the effect of transient propagation on the error rate in memory cells. Gadlage et al. examined this effect by correlating dynamic SE simulation with heavy ion
test results on dual interlocked cell (DICE) flip-flops. The DICE flip-flop was assumed to be immune to static SEUs for purposes of this study. The test chip was a shift register of DICE flip-flops with an even number of inverters between each. Any errors seen while clocking the test chip would therefore be due to latching of the transient SE current pulse. The test data, shown in Figure 54, support this assumption. The next step was to simulate the same circuit being clocked at 100 MHz and to inject a double-exponential SE current pulse into an inverter. The characteristics of the double exponential were varied and compared to the test data; a plot of some examples is shown in Figure 55. Each simulated curve is labeled in the legend with three numbers: the rise time constant, the fall time delay, and the fall time constant of the double exponential [Ga04].
Figure 54. DICE Flip-Flop Shift Register Heavy Ion Response (after [Ga04])
Figure 55. DICE Flip-Flop Shift Register Test and Simulated Data at 100 MHz (after [Ga04])
Another dynamic SE simulation analysis was performed on DICE flip-flops by Wang and Gong, who looked at the window of vulnerability in the timing of the SE strike relative to the clock edge. The critical charge needed to latch in the incorrect data varied with this relative timing. Figure 56 shows an example from the study, which also shows the effect of changing the hold time of the latch itself: increasing the hold time meant that the SE current pulse had to be longer in order to be incorrectly interpreted as a legitimate signal. This analysis showed the effect of changing the DICE latch hold time on the window of vulnerability [Wn04].
Figure 56. Window of Vulnerability Example of DICE Flip-Flop with Varying Hold Times (after [Wn04])
An interesting dynamic SE simulation was carried out by Zhao and Leuciuc on a latched comparator built on SOI CMOS. The simulated circuit is shown in Figure 57 and the results of the dynamic SE simulation are shown in Figure 58. The interesting observation from the data is that the output of the comparator was only affected at certain times during the clocking cycle, which is why there is such a large variety in the output waveforms from the simulations. From this simulation, they were able to develop the concept of a critical time window versus deposited charge that would lead to a single event effect in the comparator [Zh04].
Figure 57. SOI CMOS Latched Comparator Schematic (after [Zh04])
Figure 58. SOI CMOS Latched Comparator Simulation Results Showing Different Transients Depending on the Timing of the Ion Strike Relative to the Latching Clock (after [Zh04])
4.0 Mixed-Level SEE Modeling and Simulation
The term mixed-level or mixed-mode has two potential meanings in the radiation effects community, depending on background and application. Originally, “mixed-mode” referred to a simulator that combined a device-level model with standard circuit-level SPICE models to create a unified simulation environment in which the effects of single-event strikes on a particular device could be studied at the circuit level. However, the term “mixed-mode” is also used to describe simulations more typically referred to as mixed-signal simulations, in which both analog and digital circuits are simulated simultaneously. As more advanced circuits such as precision analog-to-digital and digital-to-analog converters are added to space and military systems, the simulation of single-event effects in a mixed-signal environment has rapidly grown in interest. Recent techniques for mixed-signal SEE simulations are covered in Section 5. Early mixed-level simulation tools were developed for circuits in which a device-level model existed for a critical component, but for which a SPICE-level circuit model was unavailable or difficult to create for a physical characteristic or interaction of interest [Rl88]. The SPICE and PISCES simulation engines were bridged with appropriate software to permit a PISCES device model to run in conjunction with simple SPICE-level models for the remainder of the circuit elements. The advantages of this technique quickly became apparent in the radiation effects community, and today many commercial TCAD simulators support mixed-level simulations for single-event effects. There are two primary difficulties with transistor-level SEE modeling and simulation. The first is the generation of an accurate SE current pulse; once that signal is generated within the circuit, these tools are much more reliable in propagating it. Device simulators (2-D and 3-D tools) can represent the SE current pulse generation much more accurately, but are limited in the size of the circuit that can be modeled. The second difficulty is charge sharing at multiple nodes. Typical transistor-level SEE modeling applies the ion strike to one node; multiple nodes have even been struck simultaneously just to observe the effects, but such experiments are not tied to the spatial relationship of the circuit nodes. These are two of the reasons why mixed-level SEE modeling and simulation has become more widely used.
4.1 Available Tools and Capabilities
Some commercial device simulators have also been developed which allow selected components in a circuit to be modeled at the device level while the rest of the circuit is modeled at the circuit level. Figure 59 shows a mixed-level modeling representation for a four transistor latch circuit in which one transistor is modeled at the device level while the remainder of the latch is modeled at the circuit level. Examples of these tools are Davinci, Atlas SEE, NanoTCAD, and Sentaurus Device. These tools are frequently used in modeling the charge collection, transient pulse generation and early circuit responses to single event transients. The size of the circuit that can be modeled is generally limited to just a few (<25) circuit elements [Bu01].
Figure 59. Mixed-Level Simulation Example (after [Bu01])
4.2 Breaking Up the Problem
The simultaneous solution of device and circuit equations has seen increasing use in recent years. This technique, known as mixed-mode or mixed-level simulation, was developed by Rollins at USC/Aerospace in the late 1980s [Rl88]. The term “mixed-level” is probably less confusing and more descriptive than “mixed-mode.” In a mixed-level simulation of SEU, the struck device is modeled in the “device domain” (i.e., using multi-dimensional device simulation), while the rest of the memory cell is represented by SPICE-like compact circuit models, as illustrated in Figure 59. The two regimes are tied together by the boundary conditions at contacts, and the solution to both sets of equations is rolled into one matrix solution [Rl88, My93]. The advantage is that only the struck device is modeled in multiple dimensions, while the rest of the circuit consists of computationally efficient SPICE models. This decreases simulation times over multiple-device techniques and greatly increases the complexity of the external circuitry that can be modeled. A potential drawback of the mixed-level method is that coupling effects between adjacent transistors have been shown to exist at the device level, first in 2-D simulations [Fu85] and later in 3-D simulations [Bl05, Ol05]. These effects cannot be taken into account when only one device, the struck device, is modeled at the device level. In order to consider multiple node charge generation in mixed-level simulation, more than one device needs to be simulated. One method is to simply model an entire cell in a 3-D device simulator. Roche et al. compared standard mixed-level modeling against a full 3-D device model of an SRAM cell and found that in cases where no coupling effects between transistors existed, mixed-level simulations were adequate to reproduce the full SRAM cell results. For some strike locations, however, coupling effects between adjacent transistors were observed; mixed-level simulations with a single transistor in the device simulator are incapable of predicting such effects [Ro98]. As inter-device spacing decreases with increasing integration levels, coupling effects can be expected to become more important, and other approaches become necessary. To date, there are two main applications of mixed-level simulation. The first is the device modeling of a single transistor with the rest of the circuit modeled at the transistor level. This provides the benefit of being able to simulate the single event in the device model and apply
the circuit effects to the single event. In many cases, the transistor modeled at the device level is considered to be isolated, meaning that the single event charge collection is only seen on that device and there is no multiple node charge collection. The potentials and currents at other parts of the circuit may be affected by this charge collection, but the response is a circuit-level response and not a direct result of the ion strike. This is similar to what is shown in Figure 59. The second application of mixed-level simulation is in the analysis of multiple node charge collection. A few recent heavy ion experiments have shown single event upsets at lower than expected deposited energy. In many cases, this has led to a mixed-level simulation to determine how the multiple node charge collection is occurring and to devise a new layout to improve the design. How one decides which transistors to model in the device domain is somewhat of an iterative process at this point in time. An example of the inclusion of mixed-level SEE modeling and simulation in the troubleshooting of unexpected data is provided in the next couple of figures. Figure 60 shows a flow diagram that a modeler might use to compare transistor-level model results with test results. From the circuit design, a modeler would identify a critical node or a set of critical nodes and then simulate strikes on these nodes at given deposited charges to see if the circuit responds as seen in the experiment. A complex circuit will have varying responses depending on the node struck and the configuration of the inputs. This can lead to an estimation of the circuit cross section given the observed response. If the model results compare reasonably well with the test results, then the effort is usually complete. Ideally, transistor-level modeling will lead the testing, so that the testing is conducted with some idea of the expected circuit response.
[Figure 60 flow diagram blocks: Circuit/Cell Design; Critical Node Identification; Transistor-Level Modeling and Simulation; Test Results; Compare; Observed Response; Spatial Relationship Determination; Cross Section Estimation; Cross Section]
Figure 60. Transistor-Level Modeling versus Test Results Flow Diagram
There are some potential problems with the above description for matching transistor-level simulation data and test data. One such problem occurs when there are unexpected circuit responses at a lower than expected deposited charge. In order to troubleshoot this unexpected
response that cannot be accounted for in traditional transistor-level modeling, mixed-level modeling can be applied. Figure 61 shows a potential methodology for performing this troubleshooting. Since the circuit is being divided into areas modeled at the device level with the remainder at the transistor level, the first part of the job is to identify circuit nodes that, if affected as a collection, can create the observed response. The simplest example is two circuit nodes in a memory cell designed to tolerate an ion strike on one node; there will be a number of nodal pairs that can cause the memory cell to upset. One might identify these by simple transistor-level simulation with multiple current sources. Once the critical node pairs (or however many nodes matter) are determined, the spatial relationship between these nodes needs to be established. This will likely indicate the worst case for charge sharing if a set of nodes is closely spaced. These nodes are then modeled at the device level, while the rest of the circuit remains at the transistor level. As in the previous flow diagram, this can lead to simulating the observed response and an estimation of the affected cross section.
[Figure 61 flow diagram blocks: Circuit/Cell Design; Critical Node Pairs/Trios/Etc. Identification; Spatial Relationship Determination; Test Results; Compare; Mixed-Level Modeling and Simulation; Cross Section Estimation; Observed Response; Cross Section]
Figure 61. Mixed-Level Modeling versus Test Results Flow Diagram
So, how one breaks up the mixed-level problem is really up to the user. There is no clear-cut methodology for deciding which transistors or nodes to model at the device level and which not to. In recent work, the decision has been made by necessity, as in the troubleshooting of unexplained results. This does lead to increased understanding of multiple node charge collection, but it does not help much in hardness assurance.
4.3 Simulation Approaches and Results
Besides the case of multiple node charge collection, there are other cases in which device-level modeling is superior to transistor-level modeling. Reviewing the transistor-level models in Section 3.2, the only one that is easily usable is the p-n junction charge collection model, which is just a dependent current source. The ion shunt and parasitic bipolar enhancement models are very complex to implement and require some investment in process analysis to determine the model parameters. As a result,
transistor-level SEE modeling and simulation typically considers just the basic charge collection effects. The other effects, ion shunting and parasitic bipolar enhancement, may be lumped into the dependent current source by increasing the current and/or changing the SE pulse shape; however, this reduces the accuracy of the SE simulation. The device-level model implements the ion shunt and parasitic bipolar enhancement by default, and as long as the device is constructed properly, the simulated response will be a good representation of the actual response. So, when simulating a technology where these other effects are important, mixed-level SEE modeling is a good choice. While mixed-level modeling has good potential, there are also some pitfalls. In trying to make the device-level model as small as possible for computational efficiency, the device-level tool may not accurately model the charge collection; this can occur when the boundary conditions in the mixed-level model are set up improperly. The diffusion charge collection mechanism can be a large part of the total collected charge, and if the device-level model is small, on the order of a diffusion length, the diffusion charge can be reflected at the model boundary back into the p-n junction of the device, resulting in an incorrectly increased charge. Another boundary condition pitfall is the circuit connection. Just as in the transistor-level case, the power and ground contacts should not be connected directly to fixed potentials, but rather through resistances, so that the nodes can change potential; this more accurately represents the circuit conditions during the SE (a short netlist sketch of this supply connection is given after Figure 63). Hirose et al. used mixed-level modeling to establish a baseline against SRAM test results and then to develop methods for improving the circuit response. The specific technology being modeled was SOI, and the reason for the mixed-level modeling was that an accurate upset cross section response was desired. They simulated one of the SRAM transistors at the device level and the rest at the transistor level, and performed a spatial experiment on the strike location. At specific points the deposited energy would cause upset and at others it would not; thus, they could fairly accurately determine the upset cross section versus deposited energy and compare it to the test results. An example of the spatial nature of the upset plots is shown in Figure 62 [Hi02]. This type of spatial experiment cannot be performed with transistor-level modeling alone. Figure 63 shows the results of the study. It demonstrates the ability to accurately predict the upset cross section with mixed-level modeling and simulation, as well as the improvement of the circuit response due to the hardening method.
Figure 62. SEU Sensitive Area as a Function of LET (after [Hi02])
Figure 63. Mixed-Level Simulation Results Compared with Test Results (after [Hi02])
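As promised above, a minimal sketch of the supply-connection point (element names and values are illustrative assumptions) is:

* Let the local supply nodes move during the SE rather than pinning them
VDD vdd_ideal 0 DC 3.3
RVDD vdd_ideal vdd 5 $ assumed effective supply network resistance
VSS vss_ideal 0 DC 0
RVSS vss_ideal vss 5 $ same for the ground connection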
An extension of the SOI SRAM work was performed by Hirose et al. in 2004. This particular study looked at the effectiveness of body ties in a new technology generation. The interesting aspect of this study, as it relates to the application of mixed-level modeling and simulation, is that the improvement in threshold LET was attributed to the reduction of parasitic bipolar gain and the increased capacitance of the body tie itself. The device-level part of the mixed-level simulator properly implemented the parasitic bipolar amplification; in a strictly transistor-level circuit simulation, this would have to be added artificially and might not accurately represent the device physics [Hi04]. Dodd et al. demonstrated the ability of a mixed-level tool to model the propagation of SETs in digital circuits. Figure 64 shows the experiment design, with one transistor modeled at the device level and the remainder of the circuit at the transistor level. The output of the device-level piece of the mixed-level simulation is given in Figure 65 for both bulk and SOI devices at the same technology node. Note that the plots are formatted identically to show that the bulk voltage change is larger while the initial transient widths are very similar. An example of the propagation of the SET is shown in Figure 66 for the bulk technology alone; this shows how the authors converged on the LET required to achieve unattenuated propagation. One of the study conclusions is shown in Figure 67, which gives the critical LET for unattenuated propagation of SETs in inverter chains as a function of feature size for each technology. It clearly demonstrates the superiority of SOI technology in this respect [Do04].
Figure 64. SET Propagation Mixed-Level Circuit Schematic (after [Do04])
Figure 65. Simulated Pulse Generation Comparing Bulk to SOI in Mixed-Level Simulation (after [Do04])
Figure 66. SET Propagation in Bulk Technology Showing Circuit Attenuation and Unattenuated Propagation (after [Do04])
Figure 67. Mixed-Level Propagation Study Result Showing Critical LET for Unattenuated Propagation versus Feature Size (after [Do04])
An example of applying mixed-level modeling and simulation to multiple node charge collection is a study conducted by Wang et al. into SRAM spacing. They used the mixed-level tool to determine design rules for the spacing of adjacent SRAM cells so that a single event would only affect one SRAM cell, mitigating the potential for MBUs in the SRAM array. Figure 68 shows
the depiction of the mixed-level experiment. The two darker areas are the critical nodes under study and are the drains of two transistors in separate SRAM cells. The rest of each SRAM circuit is modeled at the transistor level. They performed two types of simulations: one directly striking the SRAM 2 drain, and one striking between the two devices at a low angle of incidence (60 degrees is a standard ground test angle, so that simulation data could be matched to the test data). Certainly, very low angle of incidence ions have some probability of directly striking both sensitive regions, but that was not modeled. Some of the results of the study are provided in Figure 69. The direct ion strike to SRAM 2 is on the left and shows the sensitivity of the SRAM cell to upset versus deposited energy (LET ≈ 0.8 MeV·cm2/mg); note that the other SRAM cell is not significantly affected. In the plot on the right, the centered ion strike is shown at an LET of 60 MeV·cm2/mg. There is no significant effect on either cell (note the change in voltage scale).
Figure 68. SRAM Spacing Mixed-Level Simulation Example (after [Wg03])
Figure 69. Mixed-Level Multiple Node Charge Collection in SRAM Results (after [Wg03])
Another example of multiple node charge generation and the application of mixed-level modeling and simulation was provided by Olsen et al. This project examined unexplained SEU error plots, especially at lower LETs, as seen in Figure 70. A 3-D device-level simulation was conducted to determine the minimum LET to cause an upset, assuming one and only one node was affected; this is the vertical line in the figure. However, test data, shown with the
diamonds, show a change in the cross section at that LET, but also show upsets at lower LETs. A mixed-level modeling effort was undertaken by the authors to determine the cause of the unexpected cross section. Figure 71 shows the layout of the SRAM cell and the corresponding schematic. Figure 15 and Figure 16 had demonstrated the multiple node charge sharing seen in this layout, and Figure 71 shows the region in the layout where this multiple node charge sharing can cause the cell to upset at lower than expected LETs. The end result of this effort was insight into the charge sharing problem and the ability to re-layout the cell to obtain the expected SEU resistance [Ol05].
Figure 70. Device-Level Simulation Results versus Test Data for SRAM Device (after [Ol05])
Figure 71. Layout and Schematic for SRAM Device in Mixed-Level Simulation (after [Ol05])
5.0 Circuit-Level SEE Modeling and Simulation
5.1 Transistor-Level SEE Modeling
The optimal technique for SEE modeling at the circuit level depends on the complexity and function of the circuit in question. For circuits with a few dozen to a few hundred transistors, the standard double-exponential current source injection with SPICE-level simulations is commonly used. The advantage of this approach is that it works equally well with digital, analog, or mixed-signal circuits, and it has proven reasonably accurate in correlating experimental SET measurements with simulated SET pulse widths and durations. The approach requires the user to know which circuit nodes (or p-n junctions) are vulnerable to a strike, and to repetitively apply the appropriate current source at each vulnerable point in order to determine the worst-case SET response(s) for a given circuit. Hand-editing a netlist thousands of times, manually running a simulation thousands of times on a single workstation, and manually collating the results from those simulations becomes an intractable problem for larger analog and mixed-signal circuits. Twenty years ago, a comprehensive SET simulation of a mixed-signal circuit containing several thousand transistors would have been prohibitively difficult and expensive. However, the exponential increase in computer performance per unit cost, along with numerous innovations in computer software and operating systems, has changed that. Today even a small company or university can afford to construct a multi-node computer cluster at low cost, and most of the repetitive steps of simulation can be automated by using programming languages such as PERL to create scripts that interact with simulators such as Cadence Spectre or Synopsys HSPICE. Kauppila et al. demonstrated this evolution in computing capability by automating 17,784 SET simulations for a pipelined analog-to-digital converter stage (Figure 72) [Ku04]. These simulations had to cover 18 different strike times identified during one conversion cycle, using current source injection with an LET of 100 at every junction of the circuit; the input signals also had to cover all possible digital output states of the pipeline stage. Automated scripts were written to control netlist creation, simulation management, archiving of output data, and upset/error detection using predefined thresholds. These scripts interacted with and controlled Cadence Spectre, making the iteration of each SEE simulation fully automatic. All 17,784 simulations were successfully performed on the Vanderbilt University ACCRE computer cluster.
Figure 72. Pipelined A/D converter stage used for automated SET simulation
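A hedged sketch of the kind of netlist fragment such scripts stamp out — here stepping the strike time with .ALTER — is shown below; the node name, pulse parameters, and times are illustrative assumptions, not the actual [Ku04] setup:

* One SET run per .ALTER block; a script generates and collates these
.PARAM TSTRIKE=1n
ISE nstruck 0 EXP(0 1m 'TSTRIKE' 20p 'TSTRIKE+100p' 250p)
.TRAN 1p 20n
.ALTER
.PARAM TSTRIKE=2n
.END

Repeating the pattern across junctions, strike times, and input states is exactly the bookkeeping that the scripting layer automates.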
5.2 Behavioral Modeling
The standard current source injection technique becomes difficult or impractical as simulation times increase and thousands (or millions) of different p-n junctions become possible targets within a single circuit application. Beyond this point, standard SPICE models can be supplemented (or replaced) with behavioral elements or components that greatly reduce the computational complexity required to model the SET response of a circuit. Behavioral models can take the form of macromodels, in which multi-transistor circuits are replaced with simpler SPICE elements; compact models, in which standard SPICE models are enhanced with behavioral elements; and higher-level behavioral modeling at the subcircuit or system level that dispenses with simulation at the transistor level altogether.
5.2.1 Macromodels
Macromodeling is a basic form of behavioral modeling that has been used since the early days of SPICE simulators [Co92]. Macromodels are most useful in situations where the costs of computational time and resources are prohibitive for a particular simulation, and reduced accuracy can be tolerated in order to reduce those costs. In macromodels, standard SPICE elements are used either to replace transistor-level circuits with their idealized equivalents, or to create a mathematical approximation of an entire circuit without the use of transistor-level models.
Figure 73. A small-signal macromodel for a common source amplifier
An example of macromodeling is shown in Figure 73, in which the small-signal performance of a transistor amplifier must be simulated. Assuming a common-source amplifier with a DC gain of -A, an output resistance Rout, and an output capacitance Cout, the small-signal frequency response of the amplifier can be approximated as

Vout/Vin = -A / (1 + s·Rout·Cout).

The equivalent mathematical expression for this frequency response can be directly simulated using a dependent source, a resistor, and a capacitor. Accordingly, the common-source amplifier can be replaced with these three simple circuit elements, provided that accurate simulation of large-signal and DC parameters is not required by the designer. If accurate simulation of those parameters is required, additional elements can be added to the macromodel. For example, DC current sources can be added to model quiescent DC power dissipation, and idealized diode models can be used to create nonlinear limiting effects such as output saturation due to power supply limits. More complex macromodels, such as the op amp macromodel shown in Figure 74, can provide reasonably accurate simulations of AC, DC, and transient behavior without the need for a single transistor-level model in the simulation [Br94].
Figure 74. A comprehensive macromodel for an operational amplifier (after [Br94])
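For the simple macromodel of Figure 73, a minimal netlist sketch (element values are assumptions chosen only for illustration) is:

* Common-source amplifier macromodel: DC gain -A with a single output pole
EAMP mid 0 in 0 -100 $ voltage-controlled source, gain -A = -100
ROUT mid out 10k $ output resistance Rout
COUT out 0 2p $ output capacitance Cout

Three ideal elements thus reproduce the gain and dominant pole Vout/Vin = -100/(1 + s·Rout·Cout) of the transistor amplifier.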
In some cases a macromodel may still contain a small number of transistor-level models in addition to basic SPICE elements, as shown in Figure 75. Two transistors are used for the differential input pair, two are used for a current mirror load, and one is used for a tail current source. The current mirror and current source have relatively little effect on the small-signal behavior of the circuit, and can be replaced with basic SPICE elements. The differential input pair, on the other hand, determines the input resistance, gain-bandwidth product, and intrinsic noise of the amplifier. Therefore, a reasonable compromise for the macromodel is to retain the crucial input transistors but replace the remaining components with a diode and two current sources.
Figure 75. Transistor models and simple SPICE elements combined in a macromodel.
5.2.2 Macromodels for SEE Simulations
Boulghassoul et al. examined the application of macromodels to the generation and propagation of single event transients [Bo03]. An ion strike was internally applied to an LM124 macromodel supplied by the vendor. Figure 76 shows a resulting simulation in which the single event response was compared to the SEE response of the original micromodel (i.e., transistor-level SPICE model). In this particular instance the macromodel significantly underestimated the response, illustrating a significant pitfall of this type of single event modeling. The pitfall stems from the fact that the topologies and components of the micromodel and macromodel are generally completely different. Furthermore, the devices and p-n junctions responsible for charge collection in a micromodel typically do not even exist in a macromodel. A macromodel is created to approximate the external behavior of a circuit, not its internal behavior. Accordingly, adding a current source to inject charge at an internal node of the macromodel will have a completely different effect compared to charge injection in the micromodel. In most cases an equivalent circuit node will not even exist, forcing one to make a “best guess” for the appropriate node.
Figure 76. SEE response showing macromodel underestimation of resulting transient (after [Bo03])
Clearly, macromodels are not useful for accurate simulation of single event effects that originate internally in a transistor-level circuit. However, they can still be very useful for system-level simulations where the single-event transient is generated externally to the macromodeled circuit. To demonstrate this, an amplifier circuit using two OP27 op amps was simulated as shown in
Figure 77. Op amp OP27 #1 was modeled at the transistor level, and current source charge injection was used to create a single-event transient at an internal transistor. Op amp OP27 #2 was simulated using both a micromodel and a macromodel, and its output voltages compared between the two simulations. The results of this simulation (Figure 78) and many others showed very good correlation between the two models, proving that macromodels can be successfully applied to SEE simulations provided the single-event transient is not generated internally within the macromodel itself.
Figure 77. An amplifier circuit simulation combining a micromodel with a macromodel (after [Bo03])
Figure 78. Amplifier circuit comparison of macromodel versus micromodel outputs (after [Bo03])
5.3 Components and Languages for Behavioral Modeling
Macromodeling allows a designer to emulate any circuit response that can be defined using simple SPICE elements, but complex macromodels may require significant effort and ingenuity to create in this manner. Modern simulators have addressed this problem in two ways. The first method is the incorporation of behavioral components that can be precisely specified simply by writing out the desired equations. For example, Synopsys HSPICE includes behavioral voltage and current sources that can be used to simulate linear or nonlinear mathematical functions. A device such as a voltage controlled oscillator (VCO) can be easily modeled with a single voltage source as

Evco osc 0 VOL='voff+gain*SIN(6.28*freq*(1+V(control))*TIME)'
Because digital logic gates can generally be adequately modeled in terms of input/output logic levels and propagation delays alone, behavioral components are particularly useful for the simulation of digital circuits. For example, digital logic gates can be specified using HSPICE behavioral sources combined with standard voltage sources and resistors as shown in Figure 79. Behavioral models of digital gates are especially useful for simulating the propagation of SEE pulses through a digital circuit, although (like macromodels) they cannot be used to accurately simulate the internal generation of a single-event transient.
Figure 79. Behavioral models of AND and NAND gates in HSPICE (after [Sy06])
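In the same spirit as Figure 79, a hedged sketch of a behavioral two-input AND gate (the 0 V/1 V logic levels and delay values are illustrative assumptions) is:

* Behavioral AND: multiply the inputs, then RC-delay the output
EAND y0 0 VOL='V(a)*V(b)' $ equals 1 only when both inputs are at 1
RDEL y0 y 1k $ RC pair approximates the propagation delay
CDEL y 0 1p

Because the gate is just an equation, an SE pulse arriving at input a or b propagates through it with realistic delay; as noted above, however, the internal generation of the transient cannot be represented this way.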
The second method is to dispense with component-based behavioral simulation altogether and use a behavioral modeling language in which complex circuit functions are specified by describing the appropriate functionality in software. Two of the best known behavioral modeling languages (or hardware description languages) are VHDL and Verilog, both originally created to simulate digital circuits and systems. In the past twenty years both languages have been enhanced with the ability to simulate analog and mixed-signal circuits (as VHDL-AMS, Verilog-A, and Verilog-AMS). Both languages have also been adopted as IEEE standards [Ie04, Ie05]. Verilog-A is of particular interest because several commercial circuit simulators (Cadence Spectre, Mentor Graphics Eldo, Synopsys HSPICE, Silvaco Harmony-AMS) allow Verilog-A behavioral extensions to be combined with or included within standard SPICE netlists.
5.3.1 Compact Models Using Behavioral Elements
Standard transistor SPICE models (e.g., BSIM models) can be enhanced with behavioral elements to provide simpler and more accurate SET simulations while preserving the accuracy of a SPICE simulation. Figure 80 shows an example of a SPICE compact model in which an SOI BSIM transistor model is supplemented with a behavioral resistor and behavioral current sources written in Verilog-A. These behavioral components are only active during the simulation of an SEE, and then only if the engineer specifies an ion strike on that particular transistor. The behavioral components can be calibrated to TCAD simulations and SEE measurements to provide a degree of simulation accuracy that is superior to the standard SPICE current source injection technique, but computationally much faster than mixed-mode circuit simulations. In addition, Verilog-A facilitates the use of simulation flags that allow a designer to choose any transistor in the circuit for a simulated ion strike without changing the netlist itself.
Figure 80. A compact model combining Verilog-A behavioral elements with a BSIM4 model
5.3.2 Behavioral Modeling of Mixed-Signal Systems
Besides VHDL and Verilog-A, general-purpose mathematics programs such as Matlab and Mathematica can be used for SEE modeling. These programs are actually very powerful programming environments that can simulate relatively complex mixed-signal systems without a single conventional SPICE circuit model. The main challenge in using such software is that the designer must have a good understanding of the system to be modeled and be able to create an accurate mathematical description of its behavior, which can in turn be translated into the appropriate modeling syntax. On the other hand, once the behavioral model has been created, it can easily be modified or enhanced to allow the designer to determine the best system-level approach to mitigate single-event effects. An example of this technique was demonstrated by Leuciuc et al., who developed behavioral models for delta-sigma modulators using Matlab Simulink as the base tool [Le04]. Figure 81 shows an example of this modeling approach, where a circuit-level Gm-C integrating stage of a delta-sigma modulator was modeled from a mathematical expression of its behavior, which in turn was translated to Matlab format. Next, a single-event voltage pulse vse was modeled as a step function and summed into the data stream. The accuracy of this single-event behavioral model was validated by comparing it to an HSPICE micromodel simulation.
Figure 81. (a) A Gm-C integrator and (b) its behavioral model with an SEE voltage pulse (after [Le04])
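A plausible form of the behavioral relation in Figure 81(b) — written here as an assumption consistent with an ideal Gm-C integrator, not the exact expression of [Le04] — is

vout(t) = (gm/C) ∫ vin(τ) dτ + vse·u(t − t0),

where u(t − t0) is a unit step applied at the strike time t0; summing the step into the integrator output is the behavioral stand-in for the deposited charge.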
The versatility of the Matlab programming environment enabled the researchers to easily compare different delta-sigma modulator topologies, and to vary the orders and oversampling ratios of the modulators to determine how those parameters affected SEE hardness. Figure 82 shows an example of the analysis results comparing feedback (FB) versus feed-forward (FF) topologies. In this case, feed-forward delta-sigma modulators were shown to provide fewer errors when compared to feedback modulators. Furthermore, lower orders and higher oversampling ratios improved SEE hardness for both types of topologies.
Figure 82. Simulation results comparing different delta-sigma modulator topologies (after [Le04])
In conclusion, it should be noted that although programs like Mathematica and Matlab have interfaces and function libraries that speed the model development process, this type of behavioral modeling approach could be implemented with any general-purpose programming language. The best language choices will be those that provide the modeler with the most comprehensive mathematical subroutines and functions for system-level design.
6.0 System-Level SEE Modeling and Simulation
Beyond circuit-level SEE simulation lies the problem of system-level SEE simulation, in which circuits containing millions of transistors must be verified for functionality. At such scales, modeling a circuit at the level of individual transistors (e.g., in SPICE or its variants) becomes impractical even with modern computing resources. Accordingly, higher-level approaches must be taken, in which behavioral or rules-based techniques and languages are used to model complex systems at the level of logic gates or larger functional blocks. Such system-level techniques are most useful for modeling single-event effects in very large digital circuits where crucial performance parameters (e.g., transition time, propagation delay, maximum operating frequency) can be readily quantified and described mathematically. (In contrast, analog circuits cannot be so easily parameterized even with VHDL-AMS and Verilog-A, and may require SPICE-level simulations; however, the simulation of a few hundred transistors is sufficient in most cases.) Transient effects can usually be modeled at the behavioral level even in large systems, so some degree of time-dependent effects can still be simulated even without transistor models. There are many reasons why a designer might need to perform full SEE simulation of a large digital system: (1) the circuits are not repeated or arrayed, in contrast to memories where one cell can be analyzed and the results applied to the whole array, so regularity assumptions do not hold; (2) the many paths an error signal can follow in its propagation are influenced by the fact that only certain paths are active at any one time, depending on the input vector to the logic and the timing; and (3) multiple errors can be generated by a single hit, requiring simulation of many error paths [Ka91]. Figure 83 shows a simple combinational circuit with a SE hit, depicting some of the complexities listed above.
Figure 83. Complex Circuit Showing Multiple Possible Paths Following a SE Hit (after [Ka92])
In the example shown in Figure 83, each gate (NAND2, NOR2, XOR, etc.) could be described by, for example, a truth table and a set of timing constraints. The timing constraints capture the delays induced by the gate, such as the delay in the INV output going from high to low following an input
going from low to high. This behavioral description can become very involved, adding variability for input rise and fall times and for the output loading of the gate. With respect to behavioral-level SE simulation, a modeler must make some assumptions about the SE model, since it can only be described at the behavioral level.
6.1 Available Tools and Capabilities
The tools that support system-level modeling and simulation span a wide range of capabilities. Microelectronic design tools have been the largest contributors in this area. Tools that support logic development, timing analysis, and the like can also be used to aid in assessing the effects of a SE. Most of these tools support digital design and as a consequence are applicable only to that type of SE analysis. From a SE modeling and simulation perspective, the general goal of system-level simulation is to characterize the effect of the ion strike on the circuit itself. A fairly simple example is a SEU in a circuit flip-flop: if that flip-flop feeds an AND gate where the opposite input is low, then the logic level of the upset flip-flop does not matter to the response of the system. If the opposite input to the AND gate changes at some later time and the flip-flop has not been rewritten or corrected, then the error will propagate, yet it may still have no impact on the overall circuit. Some system-level tools use logic-level “schematics” along with logic response, or Boolean logic descriptions of the component subcircuits (i.e., gates), to model the response of a high-level logic circuit (i.e., controller, processor, etc.). QUICKSIM, MODELSIM, and SPEEDSIM are examples of logic-level tools. Many of these tools also perform timing analysis of logic circuits based on parameterized timing descriptions of the individual gates. These tools may be used to determine whether a logic path is “open” for transient transmission and further to evaluate the observability of bit errors [Bu01]. Outside of microelectronic design tools, the radiation effects community has developed some custom tools for system-level SE modeling and simulation. One of the first was a tool called SITA [Ka91]. This tool replaced transistor-level circuit simulations with simpler closed-form equations for the response of each gate to single-event induced transients. In this way the circuit simulation is tailored specifically to the problems of SET propagation, attenuation, and capture. This eliminates a great deal of extraneous calculation performed by commercial circuit simulators while retaining sufficient accuracy of waveforms for single event rate predictions [Bu01]. A second method for replacing circuit simulation with simpler equations was developed based on solutions to a generic MOS gate primitive, drawn in Figure 84 [Dh94]. In this method the transient fault is modeled by a piecewise quadratic current waveform and transient signals by piecewise linear waveforms. In circuits as large as 1700 transistors this method was shown to be nearly 100 times faster than SPICE simulations, and comparisons of waveforms between this technique and SPICE3 were quite favorable. Like SITA, use of this algorithm requires development of a library of parameters for each gate and fabrication process.
Figure 84. Generic MOS Gate Primitive Example (after [Dh94])
In addition to SITA, the long execution times associated with commercial transistor-level simulators motivated the development of several other approaches for modeling transient propagation, attenuation, and capture. A mixed behavioral-level simulator called DYNAMO was developed for the purpose of modeling both transient and bit errors. In this tool, all subcircuits are initially represented at the logic level and analyzed using logic simulation techniques up to the point that a transient error occurs. At that point, all affected subcircuits switch to their circuit-level representation and are analyzed using circuit simulation. This continues until the transients have settled (i.e., been captured as bit errors), at which point the subcircuits can be returned to their logic-level representation. In this manner, circuit-level simulation is used only when deemed necessary by the software. When applied to a 4000-gate processor, SEU simulations using this technique were reported to take about 60 seconds of CPU time for complete analysis of each simulated ion hit [Ya92, Bu01]. A second mixed-level transient fault simulator called FAST uses a logic-level timing simulator called TIFAS to track transients and a zero-delay logic simulator called TPROOFS to track bit errors. The TIFAS (transient simulation) portion of this tool demonstrated very fast simulation times for large circuits, simulating transient effects for 100,000 injected upsets in a 17,000-gate circuit in only 43 minutes [Ch96]. Figure 85 is a bubble chart of this tool's operational flow. In replacing transistor-level simulation with a logic-level timing simulator, this approach represents a higher level of abstraction and faster computation times than the parameterized gate equations used in SITA or proposed by Dharchoudhury et al. [Dh94], although some loss in accuracy was noted [Bu01].
Figure 85. FAST Fault Simulation Environment and Flow (after [Ch96])
An even higher level of abstraction is used in a probability matrix tool called SEUPER_FAST [Bz95, Bz98a, Bz98b]. In comparison to commercial tools, this software is not a simulator at all but a mathematical model. The motivation for its development was to reduce execution time for VLSI error rate calculations by avoiding simulations at both the transistor and logic levels. The method uses both a parameterized description of gate responses and a logic state distribution file to create a matrix equation for each gate in the circuit that represents its probabilities of transient upset and bit error generation, transient and bit transmission, and transient capture. The gate-level schematic of the entire circuit is then used to assemble these matrices into a tensor that represents the connectivity of the circuit. Solution of the tensor then yields a probabilistic estimate of the circuit error rate. Figure 86 is a simplified depiction of this process for a circuit in which gates are indexed by letters A, B, C, and D, and circuit nodes are indexed by numbers 1 through 7; ER represents a matrix of error rates, P a matrix of transmission and capture probabilities, and σ a matrix of generation cross-sections. In its treatment of transients this tool ignores many lower-order effects. For example, it does not consider transient pulse shapes, treating all transients as square pulses of a fixed width, and it ignores pulse attenuation, simply assigning zero propagation probability to all generated pulses with widths narrower than the minimum setup-and-hold time of the target register. This method has been demonstrated on combinational logic circuits as large as 1000 gates, with execution times of about 10 minutes. While this is extremely fast, the inaccuracy in predicting transient-induced errors for any given logic string can be as high as 30% [Bu01].
Figure 86. SEUPER_FAST Error Transmission Probability Matrix
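Read schematically — as an assumption about the tool's bookkeeping rather than the published formulation — the relation depicted in Figure 86 has the form

ER ≈ P · σ,

where each entry of σ is a gate's error-generation cross-section and each entry of P aggregates the probabilities of transmission along the circuit connectivity and capture as a bit error.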
A recently published tool called SEU_TOOL represents what might best be termed a multimode approach [Ma97, Ma00]. Figure 87 shows the steps involved. This tool uses parameterized closed-form circuit models for transient pulse generation, a structural VHDL logic-level simulation for pulse attenuation and propagation, a probabilistic model for transient capture, and a second high-level VHDL logic simulation for bit error observability. In addition to the circuit modeling calculations, this method also contains algorithms at various steps to identify the worst-case error rate contributors in order to reduce computation time. It has been demonstrated on circuits on the order of 400 gates. Given the detail of the modeling capability in this method, its accuracy is largely a function of the quality and completeness of the input parameters [Bu01].
Figure 87. SEU_TOOL Operations Flow Chart (after [Ma97])
Beyond software-based tools, system-level SE modeling and simulation can also be done in hardware. This is being done on complex processors and field programmable gate arrays (FPGAs) to determine potential system effects of a single event. While this approach may not provide full coverage of potential SE sources and effects, it is being applied so widely that it seemed appropriate to cover it in this text.
6.2 Simulation Approaches and Results
There are two recent areas of research interest in the system-level analysis of SEEs: transient fault assessment and fault injection. The first, transient fault assessment, must take into account the transient nature of the system signals and of the SE pulse. The second, fault injection, typically considers the fault or soft error to have already occurred and asks what impact it will have on the system response.
6.2.1 Transient Fault Assessment
Massengill et al. applied SEUTool to the AM2901 bit-slice processor to assess the impact of combinational logic on the overall error rate. Figure 88 shows a view of combinational and sequential logic. In the past, the radiation effects community was concerned with SE strikes to the sequential circuit, the D flip-flops. This would be considered a static upset and is
typically independent of the clock rate. But as SET widths approach the setup and hold times of the sequential circuits, it becomes possible for an erroneous signal to propagate to the sequential circuit from the combinational circuit, as shown in the figure. Of course, this signal has to arrive at the correct part of the clock cycle to have an effect, but the rate of such errors typically scales with the clock rate. This concept can be a little confusing, since a sequential circuit can itself be thought of as being made of combinational logic; additionally, any input to a sequential circuit is combinational with respect to that sequential circuit. For example, in a shift register (a chain of D flip-flops), the preceding D flip-flop is combinational to the current D flip-flop. To reduce this confusion, this course refers to static and dynamic errors: the first being insensitive to the clock rate (a direct SE hit) and the second scaling with the clock rate (a combinational SE hit).
Figure 88. Combinational and Direct SE Schematic Locations (after [Ma00])
SEUTool performs the soft error, or SEU, assessment in two parts. First, it looks at the probability of each node causing a soft fault in the circuit system; this probability is calculated as shown in Figure 89. However, the calculation does not stop there: there is always a chance that the system might not be affected by the change in state of that one bit. Figure 90 takes the calculation one step further and considers the observability of the soft error in the system output [Ma00].
Figure 89. Probability of Error Generation Calculation in SEUTool (after [Ma00])
Figure 90. Probability of Observable Error Calculation in SEUTool (after [Ma00])
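Schematically — a plausible reading of Figures 89 and 90, not the exact expressions of [Ma00] — the observable error contribution of a node i has the form

P_error(i) = P_generate(i) × P_observe(i),

the product of the probability that a strike on node i generates a soft fault and the probability that the fault is observable at the system output.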
The authors describe the process by which they calculate the above probabilities in the demonstration circuit, the AM2901, shown below in Figure 91. Once this work was complete, they were able to perform a few different types of fault analyses. The first type of analysis was the relative size of the error cross section in each of the various pieces of the system, shown in Figure 92. If one is trying to reduce the overall error cross section of the system, this provides a view of the subsystem that most impacts the cross section. This could simply be a map of the relative number of nodes in each section, which would provide little information; but if the cross section does not scale with the number of nodes, then this perspective provides valuable insight into system-level error generation.
Figure 91. AM2901 Block Diagram (after [Ma00])
Figure 92. Number of Vulnerable Circuit Nodes Versus Collected Charge for the AM2901 (after [Ma00])
The second type of system assessment is shown in Figure 93. This provides a view of the error cross section against different types of potential circuit operation or modes. Once again, it is possible that this type of assessment is driven purely by the number of nodes involved in performing the calculation. But one could use the information in the figure to assess different
V-76
types of algorithms to implement. If one algorithm favored AND functions over ADD and ACCU functions, then the resulting error cross section could be lower.
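The sketch below weights assumed per-mode cross sections (in the spirit of Figure 93) by the fraction of time an algorithm spends in each mode; every number in it is invented for illustration.

```python
# Hedged sketch: combine per-mode error cross sections with an algorithm's
# duty cycle in each mode to get an effective application cross section.

cross_section_cm2 = {"ADD": 3e-6, "AND": 1e-6, "ACCU": 4e-6}  # per mode (assumed)
duty_fraction     = {"ADD": 0.50, "AND": 0.40, "ACCU": 0.10}  # per algorithm (assumed)

effective_sigma = sum(cross_section_cm2[m] * duty_fraction[m]
                      for m in cross_section_cm2)
print(f"Effective cross section: {effective_sigma:.2e} cm^2")
# An AND-heavy algorithm lowers the effective cross section relative to an
# ADD/ACCU-heavy one, which is the trade suggested in the text.
```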
Figure 93. Error Cross Section Versus Collected Charge Versus Circuit Mode (after [Ma00])
The third type of analysis discussed by the authors is shown in Figure 94. This shows a breakout of the error cross section for the various circuit elements in the ACCU section of the AM2901. This type of analysis is very useful to hardware designers in that it provides valuable insight into the areas of the circuit that can be hardened. Also, once a harder circuit is designed, this type of analysis can be repeated to verify the improvement.
Figure 94. Identification of Responsible Circuit Elements in Low Collected Charge Regimes (after [Ma00])
6.2.2 Fault Injection
Fault injection techniques have been explored in order to predict the processor error rate for a given program. The use of such fault injection techniques to simulate in-space or in-beam SEUs could allow testing of full flight applications, not just representative benchmarks. The so-called code emulating an upset (CEU) injection method has been validated for several simple microprocessors; its effectiveness was proven by comparing radiation data with CEU-based predictions. The essence of this technique is to inject a bit flip randomly, that is, into a random memory cell of the DUT at a random instant, and to observe the consequence for the operation of the studied application. A CEU experiment consists of repeatedly running the target program and injecting a pseudorandom fault each time, in a Monte Carlo simulation of SEUs. When enough repetitions are done, a statistically valid result is obtained for the average number of injected faults needed to produce a given type of error in the program [Re02].
However, the CEU approach can only inject faults into those targets accessible to the processor's instruction set, that is, only bits that can be read and written. This intrinsic limitation has a potentially serious impact on the accuracy of the error-rate predictions. To investigate this issue for a very powerful and complex modern processor, Rezgui et al. performed CEU experiments to predict the SEU application cross sections for several benchmark programs compiled for the PowerPC7400 microprocessor. For comparison, they conducted broad-beam SEU experiments on the same benchmarks. The main goal was to determine the accuracy of CEU-based error-rate predictions for the most difficult target to date, a processor that includes such advanced features as multiple simultaneous execution units, a high degree of pipelining, two levels of cache control, and internal L1 instruction and data caches [Re02]. The CEU injection technique measures the ratio of actual errors to injected faults (or CEUs) for a specific processor running a given application. Multiplying this ratio by the underlying SEU cross section, obtained from ground testing, gives the application-specific cross section. The measured (or broad-beam) application cross section is obtained as the number of errors detected divided by the number of particles (integrated flux) per unit area incident on the device under test.
When a CEU experiment arrives at correct predictions, these two cross sections will be equal [Re02]. The authors started with the measured SEU cross section for the processor's registers, shown in Figure 95. They used this to predict the application error cross section, or the observed error rate, for three different applications with the cache turned on and off. Figure 96 shows one of these applications, with the actual error cross section plotted against the predicted error cross section; the curve on the left is without the cache being used and the one on the right is with the cache. Without the cache, this technique did a good job of predicting the observed error rate of the microprocessor.
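A minimal sketch of the CEU flow is given below. The workload (run_application), the memory model, and the per-bit cross section are all hypothetical stand-ins for the real DUT interface, and the random injection instant of a real CEU campaign is abstracted to an injection before the run.

```python
import random

def run_application(memory):
    # Stand-in workload: reads only the low byte of each word, so flips in
    # the high byte are logically masked, as unused state is in real programs.
    return sum(word & 0x00FF for word in memory) & 0xFFFF

def ceu_campaign(n_runs, golden_memory, bits_per_word=16):
    """Monte Carlo CEU loop: flip one random bit per run, observe the output."""
    golden = run_application(golden_memory)
    n_bits = bits_per_word * len(golden_memory)
    errors = 0
    for _ in range(n_runs):
        memory = list(golden_memory)
        bit = random.randrange(n_bits)                        # random target bit
        memory[bit // bits_per_word] ^= 1 << (bit % bits_per_word)
        if run_application(memory) != golden:                 # observable error?
            errors += 1
    return errors / n_runs                                    # errors per injection

random.seed(1)
dut_memory = [random.randrange(2**16) for _ in range(64)]
ratio = ceu_campaign(10_000, dut_memory)
sigma_bit_cm2 = 1e-8                    # assumed per-bit SEU cross section
sigma_app_cm2 = ratio * sigma_bit_cm2 * 16 * 64   # application cross section
print(f"error/injection ratio = {ratio:.2f}, sigma_app ~ {sigma_app_cm2:.2e} cm^2")
```

As in the real method, the statistical quality of the prediction depends on running enough injections that the error/injection ratio converges.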
Figure 95. Measured Cross Section of PowerPC7400 Microprocessor Registers (after [Re02])
Figure 96. Example of CEU Prediction versus Actual for a Single Application (after [Re02])
Faure et al. studied the impact of upsets in cache memory on the overall error rate. They compared three different fault injection approaches, including one that was completely software based. The targeted processor for this study was ESA's LEON implementation of the SPARC V8 instruction-set architecture. The choice was guided by the following two considerations:
• The gate-level (VHDL) description is freely available and consistent with already available chips.
• It embeds cache memories that are indirectly addressable by the instruction set, using the so-called address space identifier mechanism.
The three fault injection approaches were CEU, emulator-based fault injection, and cache simulation. The CEU methodology described in [Re02] was presented as targeting processors that do not include data cache memories; the reason for this limitation was that data cache memories could not be addressed by the instruction set. However, the LEON architecture provides the user with instructions for this task, so the same flow of CEU operation was kept to inject faults into the data cache. The emulator-based fault injection technique was an extended version of the emulation-based approach described in [Re02], where simulation-based fault injection efficiency is improved by means of emulation: the processor model was first instrumented to support fault injection and then implemented on an FPGA device. The cache simulation method was a C program that simulated the cache subsystem, including the possibility of performing fault injection [Fa03].
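The sketch below is a toy illustration in the spirit of that cache-simulation approach, not the authors' C program: a direct-mapped cache model with a hook for flipping one bit of a cached line. The geometry and the read-back check are invented for the example.

```python
# Toy direct-mapped data cache with a fault-injection hook, illustrating why
# a cache upset only becomes an error if the corrupted line is read before
# it is evicted or refilled.

class ToyCache:
    def __init__(self, n_lines=16):
        self.lines = [None] * n_lines        # each entry: (tag, data) or None

    def index(self, addr):
        return addr % len(self.lines)

    def fill(self, addr, data):
        self.lines[self.index(addr)] = (addr // len(self.lines), data)

    def read(self, addr, backing_store):
        tag, data = self.lines[self.index(addr)] or (None, None)
        if tag == addr // len(self.lines):
            return data                      # hit: possibly corrupted data
        return backing_store[addr]           # miss: refetch correct data

    def inject_bit_flip(self, addr, bit):
        """Fault-injection hook: flip one data bit of the cached line."""
        idx = self.index(addr)
        if self.lines[idx] is not None:
            tag, data = self.lines[idx]
            self.lines[idx] = (tag, data ^ (1 << bit))

mem = {a: a * 3 for a in range(64)}          # stand-in backing store
cache = ToyCache()
cache.fill(5, mem[5])
cache.inject_bit_flip(5, 0)                  # simulated SEU in the cache array
print("read after flip:", cache.read(5, mem), "expected:", mem[5])
```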
One result of this study is provided in Figure 97. It shows a fairly consistent error rate among the three techniques, though the authors noted a discrepancy with the emulation approach. Their study did show that the CEU method can be used to assess cache SEUs, provided the modeler has direct access to the cache through the instruction set.
Figure 97. Cache Fault Injection Comparison (after [Fa03])
FPGAs present an interesting case study for application in space. Due to their regular nature, if one can predict the SE response of the building-block circuit, then one can likely predict the SE response of the whole chip. And since FPGAs are configurable, some radiation hardening techniques, such as triple modular redundancy, can be employed to build in mitigation of SEEs. As a result, the SEEs of FPGAs have been the subject of many studies over the past few years.
Ceschia et al. set out to examine the effect of upsets in the configuration memory of SRAM-based FPGAs. They decoded the information stored inside the device configuration memory, thus becoming able to precisely associate each bit in it with the corresponding FPGA resource. These bits define how the FPGA resources are used to form a netlist implementing the circuit mapped on the FPGA; in other words, they determine how the configurable logic blocks (CLBs) are connected and which functions the lookup tables (LUTs) inside the CLBs implement. For a Xilinx Virtex device, they obtained a map where all 864 configuration bits for each CLB are organized as follows:
• North and South interconnection bridges: these control the routing of I/O signals between the considered CLB and the surrounding CLBs.
• Internal interconnections: these control the routing of signals within each of the two slices composing a CLB.
• Control resources: these define the behavior of the programmable resources within a CLB, e.g., the active-low/active-high state of the reset signals for the flip-flops.
• LUTs: these store the truth tables for the combinational functions implemented by the CLB [Ce03].
Following the device configuration-memory decoding, they identified all the possible configurations for a given resource by considering its configuration bits, modifying them one by one, and recording the introduced modification of the resource configuration. By repeating this process for all the FPGA resources, they were able to identify all the possible effects of a single SEU in the device configuration memory. Looking just at the effect on signal routing (the baseline routing configuration is shown in Figure 98), they found the following errors induced in the configured circuit:
• Open: the signal path corresponding to Net_1 is set to the open state, such that IN_0 and OUT0 are no longer connected. As a result, the CLBs (or output pads) fed with the signal previously traveling over Net_1 become dangling.
• Antenna: a new signal called Net_2 is enabled, whereas Net_1 is deleted as in the Open case. The new signal path (shown in Figure 99) may influence the behavior of the implemented circuit, because the CLBs or output pads originally driven by the deleted net are now driven by an unknown logic value; moreover, it may increase the power consumption.
• Short: a new signal path called Net_2 is enabled in parallel with Net_1. The new signal path conflicts with the existing one, Net_1, as shown in Figure 99. In this case, Net_1 and Net_2 are shorted, resulting in the propagation of unknown values to the CLBs (or output pads) fed with the output OUT1. As in the previous case, a Short may increase the power consumption.
• None: the signal path configuration is not affected, because the fault modified an unused portion of the device configuration memory.
From the fault injection simulation in the configuration memory, they were able to determine the probability of each effect on the FPGA resources; this is provided in Figure 100 [Ce03]. A sketch of this classification step is shown below.
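In the sketch, the decoded bit-to-resource map is a hypothetical stand-in, since building the real map is the substance of [Ce03]; only the four-way classification follows the text above.

```python
# Classify the routing effect of one configuration-memory upset as Open,
# Antenna, Short, or None, given a (hypothetical) decoded description of
# what flipping each bit does to the programmed routing.

def classify_routing_upset(bit, bit_to_resource):
    r = bit_to_resource.get(bit)
    if r is None:
        return "None"          # the bit controls an unused device resource
    if r["breaks_used_net"] and not r["enables_new_net"]:
        return "Open"          # an in-use path is broken; its loads dangle
    if r["enables_new_net"]:
        # A newly enabled path driving an already-used wire is a Short;
        # otherwise the spurious, undriven path is an Antenna.
        return "Short" if r["conflicts_with_used_net"] else "Antenna"
    return "None"

# Invented decoded map for three configuration bits.
bit_map = {
    10: {"breaks_used_net": True,  "enables_new_net": False, "conflicts_with_used_net": False},
    11: {"breaks_used_net": True,  "enables_new_net": True,  "conflicts_with_used_net": False},
    12: {"breaks_used_net": False, "enables_new_net": True,  "conflicts_with_used_net": True},
}
for b in (10, 11, 12, 99):
    print(f"config bit {b}: {classify_routing_upset(b, bit_map)}")
# -> Open, Antenna, Short, None
```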
Figure 98. FPGA Routing Example Baseline (after [Ce03])
Figure 99. Upset FPGA Routing, Antenna (Left) and Short (Right) (after [Ce03])
Figure 100. Probability of the Effects to the Routing Configuration in a CLB (after [Ce03])
Violante et al. developed an FPGA fault injection simulation approach, shown in Figure 101 below. The Fault List Generation Tool identifies the FPGA resources in the application layer (for logic implementation, signal routing, etc.) that are used, and it generates the list of faults (the Fault List) to be injected, according to the fault models. The fault models implemented included the signal path fault models described above and the following logic fault models:
• Combinational: the logic function implemented by the logic resource is modified.
• Routing: signal routing inside the logic resource is altered. For example, the paths connecting input ports to the LUTs implementing combinational functions are modified.
• Sequential: the content of a user memory bit is modified.
Each fault is described by the couple (fault-injection time, fault location), describing when the SEU appears and which resource it modifies. The Fault Simulation Tool simulates the faults in the Fault List serially. During simulation the outputs produced by the faulty application layer are compared with those of the fault-free one. As soon as a mismatch is found, the simulation is stopped and the effect provoked by the injected fault is classified as Wrong Answer. Conversely, if the simulation of the Input Stimuli set concludes and no mismatch is found, the fault is classified as Effectless [Vi03]. A minimal sketch of this loop follows.
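In the sketch below, the simulate() function is a hypothetical stand-in for the instrumented application layer; the (time, location) fault couple and the Wrong Answer / Effectless classification follow the scheme just described.

```python
# Serial fault-simulation loop: run the fault-free reference once, then
# simulate each fault from the Fault List and compare outputs step by step.

def simulate(stimuli, fault=None):
    """Stand-in application layer: a 3-bit combinational output. A fault
    (injection_time, bit) persistently corrupts the input from its
    injection time onward, like a configuration-memory upset."""
    for t, vec in enumerate(stimuli):
        if fault is not None and t >= fault[0]:
            vec ^= 1 << fault[1]
        yield vec & 0b111

def run_campaign(fault_list, stimuli):
    golden = list(simulate(stimuli))        # fault-free reference, run once
    results = {}
    for fault in fault_list:                # faults are simulated serially
        verdict = "Effectless"
        for good, bad in zip(golden, simulate(stimuli, fault)):
            if good != bad:
                verdict = "Wrong Answer"    # stop at the first mismatch
                break
        results[fault] = verdict
    return results

stimuli = [0b000, 0b010, 0b101, 0b111]
print(run_campaign([(2, 0), (2, 3)], stimuli))
# bit 0 is visible in the output -> Wrong Answer;
# bit 3 is masked by the 3-bit output -> Effectless.
```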
Figure 101. FPGA Fault Injection Simulation Approach (after [Vi03])
The approach and tools described by Violante were then put to the test by comparing the measured cross section with the predicted cross section. The results are provided in Figure 102 for a circuit composed of four 16 × 16-bit binary multipliers; the inputs of the four multipliers were connected in parallel, while the outputs were connected to an XOR gate array. Although the approach's accuracy needed further improvement, it appears to be a viable solution for anticipating the analysis of SEUs in the FPGA's configuration memory [Vi03].
Figure 102. FPGA Fault Injection versus Heavy Ion Testing Results (after [Vi03])
7.0 Summary
As stated earlier, Dr. Massengill gave an overview of the future of modeling with respect to SEEs in his 1993 Short Course. It is interesting to look at that now and then project the future from this point. In 1993, the challenges and areas for continued research in SE modeling were many. A few of these were:
• The improvement of circuit models for scaled, submicron devices and technologies, including advances in charge collection models for these structures
• The development of models for emerging devices and technologies as they appear
• A reassessment of the basic assumptions involved in SE modeling when applied to the high-speed and dense circuits continuing to emerge
• The development of comprehensive analysis techniques for single-event effects in combinational logic
• More work in the area of single-event modeling of analog subsystems
• The advancement of system-level analysis techniques
• True integration of SE modeling into the early design phases of microelectronic design (even commercial), as part of the integrated engineering CAD environment [Ma93]
With respect to these challenges, the community as a whole has done an admirable job of making progress in many of these areas. Achievements toward the first two challenges, the improvement of circuit models for scaled, submicron devices and technologies and the development of models for emerging devices and technologies, were given in Sections 3 and 4. The development and improvement of mixed-level modeling and simulation has enabled the analysis of deep-submicron device technologies and circuits. The community has also continued to examine transistor-level simulation results against test results to validate the approaches. Achievements in the development of comprehensive analysis techniques for SEEs in combinational logic were provided in all sections; this is an obvious concern given the potential increase in the cross section for such effects. While some good progress has been made in this area, more work is needed. All sections also presented work in the area of SEE modeling of analog subsystems. This work has evolved quite well with the analysis of circuit macromodels and the development of behavioral-level SEE modeling. The advancement of system-level analysis techniques was presented in Sections 5 and 6. This work has also progressed well with behavioral-level analysis and error injection techniques, but there is much room for further development and advancement. Finally, with respect to true integration of SE modeling into the early design phases, it is noted that this remains a continual challenge for the radiation effects community.
Since 1993, there have also been some changes in the responses and in what is important to model. This was predicted by Dr. Massengill, but these changes lead to new challenges. Some of these are:
• The development of a modeling approach for charge collection at multiple nodes/sites in the same circuit from a single SE
• Continued improvement in SE modeling applied to high-speed circuits and the assessment of circuits in a dynamic SE environment
• Advancement of behavioral-level models to include the ability to model more catastrophic error modes (errors in multiple memory locations, errors in control circuitry)
• Development of analog behavioral modeling that accurately represents the transistor-level models and actual response
• Incorporation of more than upset cross section data into error-rate calculations
This course has attempted to introduce the reader to single-event effects, the motivation for modeling these effects, and the basic terminology needed to dig further into the literature on SE analysis. A hierarchical approach to SE modeling has been presented which begins with the transistor response to a single-event particle, then builds on these results to move to the total microelectronic chip response. This short course also built on the device modeling presented by the previous speaker. Each level of the hierarchy uses the results from the previous level as input data, thus controlling the complexity of the problem by limiting the details needed from lower levels. Accuracy is maintained by experimental corroboration of the results at each level. The course has also attempted to outline the critical assumptions and limitations involved with the popular models seen in the literature, so that the user may gauge their applicability to his particular problem. It is hoped that this course has given the reader a flavor for the many levels of complexity involved in a complete simulation of SE effects in microelectronics. Much enlightened scientific work has been performed and compiled over the past years by many groups and individuals. The reader is referred to the literature, especially the IEEE Transactions on Nuclear Science, the IEEE Journal of Solid-State Circuits, and the IEEE Transactions on Electron Devices, for more information on this field of study.
8.0 References
[Ae96] D. R. Alexander, "Design Issues for Radiation Tolerant Microcircuits for Space," IEEE NSREC Short Course Notes, Indian Wells, CA, 1996.
[Al90] M. L. Alles, K. L. Jones, J. E. Clark, J. C. Lee, W. F. Kraus, S. E. Kerns, and L. W. Massengill, "SOI/SRAM Rad-Hard Design Using a Predictive SEU Device Model," GOMAC Conference Digest of Papers, Las Vegas, NV, Nov. 1990.
[Ba05] R. Baumann, "Single-Event Effects in Advanced CMOS Technology," IEEE NSREC Short Course Notes, Seattle, WA, 2005.
[Bz96] M. P. Baze and S. P. Buchner, "Characterization of Logic Cell SEE Responses," Proceedings of the Eighth Single Event Effects Symposium, Apr. 1996.
[Bz98a] M. P. Baze, "SEU_DO_FAST - A High Speed SEU Modeling Tool for VLSIC Designers," Proceedings of the Ninth Single Event Effects Symposium, Apr. 1998.
[Bz98b] M. P. Baze, "A High Speed SEU Modeling Tool for VLSIC Designers," Proceedings of the Topical Research Conference on Reliability, University of Texas, Session 3, Oct. 1998.
[Be88] B. A. Beitman, "N-Channel MOSFET Breakdown Characteristics and Modeling for P-Well Technologies," IEEE Trans. on Electron Devices, vol. 35, pp. 1935-1941, Nov. 1988.
[Bl05] J. D. Black, A. L. Sternberg, M. L. Alles, A. F. Witulski, B. L. Bhuva, L. W. Massengill, J. M. Benedetto, M. P. Baze, J. L. Wert, and M. G. Hubert, "HBD Layout Isolation Techniques for Multiple Node Charge Collection Mitigation," IEEE Trans. on Nuclear Science, vol. 52, pp. 2536-2541, Dec. 2005.
[Bo02a] Y. Boulghassoul, L. W. Massengill, A. L. Sternberg, R. L. Pease, S. Buchner, J. W. Howard, D. McMorrow, M. W. Savage, and C. Poivey, "Circuit Modeling of the LM124 Operational Amplifier for Analog Single-Event Transient Analysis," IEEE Trans. on Nuclear Science, vol. 49, pp. 3090-3096, Dec. 2002.
[Bo02b] Y. Boulghassoul, L. W. Massengill, T. L. Turflinger, and W. T. Holman, "Frequency Domain Analysis of Analog Single-Event Transients in Linear Circuits," IEEE Trans. on Nuclear Science, vol. 49, pp. 3142-3147, Dec. 2002.
[Bo03] Y. Boulghassoul, J. D. Rowe, and L. W. Massengill, "Applicability of Circuit Macromodeling to Analog Single-Event Transient Analysis," IEEE Trans. on Nuclear Science, vol. 50, pp. 2119-2125, Dec. 2003.
[Br94] M. E. Brinson and D. J. Faulkner, "Modular SPICE Macromodel for Operational Amplifiers," IEEE Proceedings on Circuits, Devices and Systems, vol. 141, pp. 417-420, Oct. 1994.
[Bu01] S. P. Buchner and M. P. Baze, "Single-Event Transients in Fast Electronic Circuits," IEEE NSREC Short Course Notes, Vancouver, BC, 2001.
[Bu05] S. Buchner and D. McMorrow, "Single-Event Transients in Linear Integrated Circuits," IEEE NSREC Short Course Notes, Seattle, WA, 2005.
[Ce03] M. Ceschia, M. Violante, M. Sonza Reorda, A. Paccagnella, P. Bernardi, M. Rebaudengo, D. Bortolato, M. Bellato, P. Zambolin, and A. Candelori, "Identification and Classification of Single-Event Upsets in the Configuration Memory of SRAM-Based FPGAs," IEEE Trans. on Nuclear Science, vol. 50, pp. 2088-2094, Dec. 2003.
[Ch96] H. Cha, E. M. Rudnick, J. H. Patel, R. K. Iyer, and G. W. Choi, "A Gate-Level Simulation Environment for Alpha-Particle-Induced Transient Faults," IEEE Trans. on Computers, vol. 45, pp. 1248-1256, Nov. 1996.
[Cl03] K. A. Clark, A. A. Ross, H. H. Loomis, T. R. Weatherford, D. J. Fouts, S. P. Buchner, and D. McMorrow, "Modeling Single-Event Effects in a Complex Digital Device," IEEE Trans. on Nuclear Science, vol. 50, pp. 2069-2080, Dec. 2003.
[Co92] J. A. Connelly and P. Choi, Macromodeling with SPICE, Prentice Hall, 1992.
[Cr03] J. Cressler, "Radiation Effects in SiGe HBT BiCMOS Technology," IEEE NSREC Short Course Notes, Monterey, CA, 2003.
[Dh94] A. Dharchoudhury, S. M. Kang, H. Cha, and J. H. Patel, "Fast Timing Simulation of Transient Faults in Digital Circuits," Proceedings of the 1994 IEEE/ACM International Conference on Computer-Aided Design, Nov. 1994.
[Do99] P. E. Dodd, "Basic Mechanisms in Single-Event Effects," IEEE NSREC Short Course Notes, Norfolk, VA, 1999.
[Do04] P. E. Dodd, M. R. Shaneyfelt, J. A. Felix, and J. R. Schwank, "Production and Propagation of Single-Event Transients in High-Speed Digital Logic ICs," IEEE Trans. on Nuclear Science, vol. 51, pp. 3278-3284, Dec. 2004.
[Dr98] P. V. Dressendorfer, "Basic Mechanisms for the New Millennium," IEEE NSREC Short Course Notes, Newport Beach, CA, 1998.
[Fa03] F. Faure, R. Velazco, M. Violante, M. Rebaudengo, and M. Sonza Reorda, "Impact of Data Cache Memory on the Single Event Upset-Induced Error Rate of Microprocessors," IEEE Trans. on Nuclear Science, vol. 50, pp. 2101-2106, Dec. 2003.
[Fi87] T. A. Fischer, "Heavy-Ion-Induced, Gate Rupture in Power MOSFETs," IEEE Trans. on Nuclear Science, vol. 34, pp. 1786-1791, Dec. 1987.
[Fu85] J. S. Fu, H. T. Weaver, R. Koga, and W. A. Kolasinski, "Comparison of 2D Memory SEU Transport Simulation with Experiments," IEEE Trans. on Nuclear Science, vol. 32, pp. 4145-4149, Dec. 1985.
[Ga04] M. J. Gadlage, R. D. Schrimpf, J. M. Benedetto, P. H. Eaton, D. G. Mavis, M. Sibley, K. Avery, and T. L. Turflinger, "Single Event Transient Pulsewidths in Digital Microcircuits," IEEE Trans. on Nuclear Science, vol. 51, pp. 3285-3290, Dec. 2004.
[Ha85] J. R. Hauser, S. E. Diehl-Nagle, A. R. Knudson, A. B. Campbell, W. J. Stapor, and P. Shapiro, "Ion Track Shunt Effects in Multi-Junction Structures," IEEE Trans. on Nuclear Science, vol. 32, pp. 4115-4121, Dec. 1985.
[Hi02] K. Hirose, H. Saito, Y. Kuroda, S. Ishii, Y. Fukuoka, and D. Takahashi, "SEU Resistance in Advanced SOI-SRAMs Fabricated by Commercial Technology Using a Rad-Hard Circuit Design," IEEE Trans. on Nuclear Science, vol. 49, pp. 2965-2968, Dec. 2002.
[Hi04] K. Hirose, H. Saito, S. Fukuda, Y. Kuroda, S. Ishii, D. Takahashi, and K. Yamamoto, "Analysis of Body-Tie Effects on SEU Resistance of Advanced FD-SOI SRAMs Through Mixed-Mode 3-D Simulations," IEEE Trans. on Nuclear Science, vol. 51, pp. 3349-3354, Dec. 2004.
[Ho00] L. Hoffmann and R. C. DiBari, "Radiation Testing and Characterization of Programmable Logic Devices (PLDs)," IEEE NSREC Short Course Notes, Reno, NV, 2000.
[Hh87] J. H. Hohl and K. F. Galloway, "Analytical Model for Single Event Burnout of Power MOSFETs," IEEE Trans. on Nuclear Science, vol. 34, pp. 1275-1280, Dec. 1987.
[Hh89] J. H. Hohl and G. H. Johnson, "Features of the Triggering Mechanism for Single Event Burnout of Power MOSFETs," IEEE Trans. on Nuclear Science, vol. 36, pp. 2260-2266, Dec. 1989.
[Hs81] C. M. Hsieh, P. C. Murley, and R. R. O'Brien, "A Field-Funneling Effect on the Collection of Alpha-Particle-Generated Carriers in Silicon Devices," IEEE Electron Device Letters, vol. 2, pp. 103-105, Apr. 1981.
[Ho82] C. Hu, "Alpha-Particle-Induced Field and Enhanced Collection of Carriers," IEEE Electron Device Letters, vol. 3, pp. 31-34, Feb. 1982.
[Ie04] IEEE Standard VHDL Analog and Mixed-Signal Extensions - Packages for Multiple Energy Domain Support, IEEE Std 1076.1.1-2004.
[Ie05] IEEE Standard for Verilog Hardware Description Language, IEEE Std 1364-2005 (Revision of IEEE Std 1364-2001).
[Jo90] A. H. Johnston and B. W. Hughlock, "Latchup in CMOS from Single Particles," IEEE Trans. on Nuclear Science, vol. 37, pp. 1886-1893, Dec. 1990.
[Ka91] N. Kaul, B. L. Bhuva, and S. E. Kerns, "Simulation of SEU Transients in CMOS ICs," IEEE Trans. on Nuclear Science, vol. 38, pp. 1514-1520, Dec. 1991.
[Ka92] N. Kaul, "Computer-Aided Estimation of Vulnerability of CMOS VLSI Circuits to Single-Event Upsets," PhD Dissertation, Dept. of Electrical Engineering, Vanderbilt University, 1992.
[Ke89] S. E. Kerns, L. W. Massengill, D. V. Kerns, M. L. Alles, T. W. Houston, H. Lu, and L. R. Hite, "Model for CMOS/SOI Single-Event Vulnerability," IEEE Trans. on Nuclear Science, vol. 36, pp. 2305-2310, Dec. 1989.
[Ki79] S. Kirkpatrick, "Modeling Diffusion and Collection of Charge from Ionizing Radiation in Silicon Devices," IEEE Trans. on Electron Devices, vol. 26, pp. 1742-1753, Nov. 1979.
[Kn84] A. R. Knudson, A. B. Campbell, P. Shapiro, W. J. Stapor, E. A. Wolicki, E. L. Peterson, S. E. Diehl-Nagle, J. Hauser, and P. V. Dressendorfer, "Charge Collection in Multilayer Structures," IEEE Trans. on Nuclear Science, vol. 31, pp. 1149-1154, Dec. 1984.
[Kn86] A. R. Knudson, A. B. Campbell, J. R. Hauser, M. Jessee, W. J. Stapor, and P. Shapiro, "Charge Transport by Ion Shunt Effect," IEEE Trans. on Nuclear Science, vol. 33, pp. 1560-1564, Dec. 1986.
[Ku04] J. S. Kauppila, L. W. Massengill, W. T. Holman, A. V. Kauppila, and S. Sanathanamurthy, "Single Event Simulation Methodology for Analog/Mixed Signal Design Hardening," IEEE Trans. on Nuclear Science, vol. 51, pp. 3603-3608, Dec. 2004.
[Le04] A. Leuciuc, B. Zhao, Y. Tian, and J. Sun, "Analysis of Single-Event Effects in Continuous-Time Delta-Sigma Modulators," IEEE Trans. on Nuclear Science, vol. 51, pp. 3519-3524, Dec. 2004.
[Ma93] L. W. Massengill, "SEU Modeling and Prediction Techniques," IEEE NSREC Short Course Notes, Snowbird, UT, 1993.
[Ma97] L. W. Massengill, M. S. Reza, B. L. Bhuva, and T. L. Turflinger, "Upset Cross-Section Modeling in Combinational CMOS Logic Circuits," GOMAC 1997 Digest of Papers, vol. XXII, p. 626, Mar. 1997.
[Ma00] L. W. Massengill, A. E. Baranski, D. O. Van Nort, J. Meng, and B. L. Bhuva, "Analysis of Single-Event Effects in Combinational Logic - Simulation of the AM2901 Bitslice Processor," IEEE Trans. on Nuclear Science, vol. 47, pp. 2609-2615, Dec. 2000.
[My93] K. Mayaram, J. H. Chern, and P. Yang, "Algorithms for Transient Three-Dimensional Mixed-Level Circuit and Device Simulation," IEEE Trans. on Computer-Aided Design, vol. 12, pp. 1726-1733, Nov. 1993.
[Mc82] F. B. McLean and T. R. Oldham, "Charge Funneling in n- and p-type Si Substrates," IEEE Trans. on Nuclear Science, vol. 29, pp. 2018-2023, Dec. 1982.
[Me82] G. C. Messenger, "Collection of Charge on Junction Nodes from Ion Tracks," IEEE Trans. on Nuclear Science, vol. 29, pp. 2024-2031, Dec. 1982.
[Mn83] T. M. Mnich, S. E. Diehl, B. D. Shafer, R. Koga, W. A. Kolasinski, and A. Ochoa, "Comparison of Analytical Models and Experimental Results for Single Event Upset in CMOS SRAMs," IEEE Trans. on Nuclear Science, vol. 30, pp. 4620-4623, Dec. 1983.
[Ni02] G. Niu, R. Krithivasan, J. D. Cressler, P. A. Riggs, B. A. Randall, P. W. Marshall, R. A. Reed, and B. Gilbert, "A Comparison of SEU Tolerance in High-Speed SiGe HBT Digital Logic Designed with Multiple Circuit Architectures," IEEE Trans. on Nuclear Science, vol. 49, pp. 3107-3114, Dec. 2002.
[No94] E. Normand, "Single Event Effects in Systems Using Commercial Electronics in Harsh Environments," IEEE NSREC Short Course Notes, Tucson, AZ, 1994.
[Oc81] A. Ochoa and P. V. Dressendorfer, "A Discussion of the Role of Distributed Effects on Latch-Up," IEEE Trans. on Nuclear Science, vol. 28, pp. 4292-4297, Dec. 1981.
[Oc83] A. Ochoa, F. W. Sexton, T. F. Wrobel, G. L. Hash, and R. J. Sokel, "Snapback: A Stable Regenerative Breakdown Mode of MOS Devices," IEEE Trans. on Nuclear Science, vol. 30, pp. 4127-4130, Dec. 1983.
[Ol05] B. D. Olson, D. R. Ball, K. M. Warren, L. W. Massengill, N. F. Haddad, S. E. Doyle, and D. McMorrow, "Simultaneous Single Event Charge Sharing and Parasitic Bipolar Conduction in a Highly-Scaled SRAM Design," IEEE Trans. on Nuclear Science, vol. 52, pp. 2132-2136, Dec. 2005.
[Pe83] E. L. Peterson, "Single Event Upsets in Space: Basic Concepts," IEEE NSREC Short Course Notes, Gatlinburg, TN, 1983.
[Pe97] E. L. Peterson, "Single-Event Analysis and Prediction," IEEE NSREC Short Course Notes, Snowmass, CO, 1997.
[Pi83] J. C. Pickel, "Single Event Upset Mechanisms and Predictions," IEEE NSREC Short Course Notes, Gatlinburg, TN, 1983.
[Re02] S. Rezgui, G. M. Swift, R. Velazco, and F. F. Farmanesh, "Validation of an SEU Simulation Technique for a Complex Processor: PowerPC7400," IEEE Trans. on Nuclear Science, vol. 49, pp. 3156-3162, Dec. 2002.
[Ro98] Ph. Roche, J. M. Palau, K. Belhaddad, G. Bruguier, R. Ecoffet, and J. Gasiot, "SEU Response of an Entire SRAM Cell Simulated as One Contiguous Three Dimensional Device Domain," IEEE Trans. on Nuclear Science, vol. 45, pp. 2534-2543, Dec. 1998.
[Rl88] J. G. Rollins and J. Choma, Jr., "Mixed-Mode PISCES-SPICE Coupled Circuit and Device Solver," IEEE Trans. on Computer-Aided Design, vol. 7, pp. 862-867, Aug. 1988.
[Sy06] Synopsys HSPICE Simulation and Analysis User Guide.
[Su78] E. Sun, J. Moll, J. Berger, and B. Adlers, "Breakdown Mechanism in Short Channel MOS Transistors," IEDM Technical Digest, pp. 478-482, 1978.
[Ta88] E. Takeda, D. Hisamoto, and T. Toyabe, "A New Soft-Error Phenomenon in VLSIs - The Alpha-Particle-Induced Source/Drain Penetration (ALPEN) Effect," Proc. IEEE Int. Reliability Phys. Symp., pp. 109-112, 1988.
[Ti91] J. L. Titus, G. H. Johnson, R. D. Schrimpf, and K. F. Galloway, "Single-Event Burnout of Power Bipolar Junction Transistors," IEEE Trans. on Nuclear Science, vol. 38, pp. 1315-1322, Dec. 1991.
[Vi03] M. Violante, "Accurate Single-Event-Transient Analysis via Zero-Delay Logic Simulation," IEEE Trans. on Nuclear Science, vol. 50, pp. 2113-2118, Dec. 2003.
[Wa88] R. S. Wagner, N. Bordes, J. M. Bradley, C. J. Maggiore, A. R. Knudson, and A. B. Campbell, "Alpha, Boron, Silicon, and Iron Ion-Induced Current Transients in Low-Capacitance Silicon and GaAs Diodes," IEEE Trans. on Nuclear Science, vol. 35, pp. 1578-1584, Dec. 1988.
[Wg03] J. J. Wang, W. Wong, S. Wolday, B. Cronquist, J. McCollum, R. Katz, and I. Kleyner, "Single Event Upset and Hardening in 0.15 µm Antifuse-Based Field Programmable Gate Array," IEEE Trans. on Nuclear Science, vol. 50, pp. 2158-2166, Dec. 2003.
[Wn04] W. Wang and H. Gong, "Edge Triggered Pulse Latch Design with Delayed Latching Edge for Radiation Hardened Application," IEEE Trans. on Nuclear Science, vol. 51, pp. 3626-3630, Dec. 2004.
[We02] T. Weatherford, "From Carriers to Contacts, A Review of SEE Charge Collection Processes in Devices," IEEE NSREC Short Course Notes, Phoenix, AZ, 2002.
[Wr87] T. F. Wrobel, "On Heavy Ion Induced Hard-Errors in Dielectric Structures," IEEE Trans. on Nuclear Science, vol. 34, pp. 1262-1268, Dec. 1987.
[Ya92] F. L. Yang and R. A. Saleh, "Simulation and Analysis of Transient Faults in Digital Circuits," IEEE Journal of Solid-State Circuits, vol. 27, pp. 258-264, Mar. 1992.
[Zh04] B. Zhao and A. Leuciuc, "Single Event Transients Characterization in SOI CMOS Comparators," IEEE Trans. on Nuclear Science, vol. 51, pp. 3360-3364, Dec. 2004.
[Zi85] J. F. Ziegler, J. P. Biersack, and U. Littmark, The Stopping and Range of Ions in Solids, Pergamon Press, New York, 1985.