Hard IP, an Introduction to Increasing ROI for VLSI Chip Designs
Peter Rohr
DEDICATION
TO MY LOVELY WIFE, MICHELLE, WHO HAS MADE A HAPPILY MARRIED MAN OUT OF A ONCE CONFIRMED BACHELOR.
ACKNOWLEDGMENTS For most people, writing a book such as this one means a lot of difficult, sometimes confused and sometimes elated moments. Whenever I needed clarity, I could always talk to Chris Strolenberg and have a competent, enlightening discussion. Many thanks to Chris. Whenever I needed encouragement, I could always count on Hein van der Wildt and his wife Elisa. Both of them gave me a lot of support. For valuable technical input and/or help reviewing some of the material, I wish to thank Simon Klaver, Stephan Filarsky, Giora Karni, Johan Peeters and Ivo van Zandvoort. Thanks also to William Ruby, Reynaldo Hernandez and Ravi Tembhekar for information on the Synopsys products described in Chapter 7. And whenever I got stuck in any way on my computer, I could always count on John Choban to get me going again. Peter Rohr
PREFACE
IP REUSE: SOFT AND HARD
A clear indication of the pervasiveness of electronics in today's world was the concern over impending worldwide disasters caused by breakdowns in interlinked electronic systems, due to a one digit change in the calendar from 1999 to 2000. Today's complex VLSI chips are at the heart of this extreme level of dependence. In terms of the requirements for electronic systems, whose uses range from communications to air traffic control, from security to consumer goods, there are constant demands for more speed, more functionality, more sophistication. Almost all of these demands are linked to faster, more complex VLSI chips. Of course, this tremendous need for more complex chips can not be easily met. In fact, there is a great deal of talk about the necessity for a significant increase in productivity to design chips faster and inexpensively enough to meet the needs of hi-tech industries. Considering current consumers' love affair with any kind of hi-tech gadgets, there is only one way for these demands to go - up! In
this book, we explore IP (Intellectual Property) reuse in terms of VLSI chips. IP reuse is one of the answers to the many concerns about necessary gains in productivity, the time-to-market and timing closure. IP reuse simply means that, instead of developing a design from scratch every time we need a new chip, we take advantage of chips that already exist and have proven themselves. However, for these existing chips to be truly useful and provide competitive performances, they need to be rejuvenated somehow to meet higher performance requirements. So we will need to show how we can reuse IP, how to improve the performance of the reused chips, and even how we can improve manufacturability of the reused IP.
Today, synthesis is the most common path to complex VLSI designs. Design teams and management, in other words the infrastructure dealing with the creation of complex VLSI designs, are set up to facilitate the process of synthesizing new chips. Consequently, it is not surprising that resynthesis is also the primary focus of IP reuse today. A high-level description of a design previously fabricated in some now outdated technology is resynthesized to a different library of a more advanced technology. This approach is generally referred to as Soft IP reuse because the IP database is in the form of some high-level language in software. Hence, Soft IP.

There is, however, another way to reuse existing designs. An increasingly accepted and more direct path to reusing existing designs is to start with the data that describes the physical layout of a chip. This physical layout data that was used to make the masks to fabricate the chip can be reused, rejuvenated and postlayout optimized. Rejuvenation of a VLSI chip starting with Hard IP means that an existing and proven layout has to be retargeted or migrated to a more advanced technology, one that allows smaller minimum critical layout dimensions. This approach is generally referred to as Hard IP reuse because the IP database is the physical layout. We use terms such as retargeting and migration interchangeably, meaning that the physical layout of a chip is laid out anew, according to the different design and layout rules of a higher performance semiconductor process. When we discuss more involved processes such as IP optimization, IP creation or process modifications, such as an increase in manufacturing yield, we often use more generic terms like Hard IP engineering.
THIS BOOK'S FOCUS This book primarily focuses on Hard IP reuse. However, we will see that Soft and Hard IP methodologies complement each other in many situations. In fact, in those situations, both are virtually necessary. We need to keep in mind that all Soft IP will eventually become hard and may require some postlayout work, which means Hard IP-based performance optimization or solving some unexpected problems, especially those related to timing. In Chapter 7, we also examine the two approaches to compare their respective strengths in relationship to the application and sometimes the form in which the IP is available.
Synthesis-based Soft IP reuse is assumed to be well known and it is described here in minimum detail. There is a lot of literature on synthesis as a methodology, and the recently published RMM thoroughly covers the subject of Soft IP reuse [1].
On the other hand, Hard IP reuse is newer in its presently emerging form. It is not well known or understood and we think it is a critically important methodology for Very Deep Sub-Micron (VDSM) technologies (we generally refer to it as simply Deep Sub-Micron (DSM) in this book). In addition, the dramatic increase in impact of physical layout on the performance of DSM VLSI chips leads to totally new approaches for performance optimization, IP creation and manufacturing yield manipulations. These approaches are closely linked to the techniques used for Hard IP engineering. It is hoped that this book will explain one way to take advantage of the newly gained power of semiconductor physics that is reflected in the substantial effects of physical layout and its manipulation on chip performance.
OVERVIEW OF THE CHAPTERS In Chapter 1, we examine some of the reasons why IP reuse has become such a hotly debated issue. Incredible time-to-market pressures, shorter market windows, a rapid increase in chip complexity with a corresponding increase in design time coupled with very expensive processing lines that need to be "fed," all suggest a careful analysis of the design process. Although processing lines are currently full, fluctuations in the economy and the relentless march towards smaller layout dimensions and increasingly complex chips suggest potential vulnerability. Should we really design every new chip from scratch or in some cases take this new and promising IP reuse route? We discuss why an IP reuse methodology may make a lot of sense today and in the future. In Chapter 2, we provide a description of how to retarget existing Hard IP. We show how libraries, memories, data paths and entire chips can be retargeted from process to process. Designs with proven track records will get a "facelift" and once again be competitive. We compare past methodologies such as linear shrinks with today's polygon-by-polygon re-layout to the exact process layout rules. This approach is generally referred to as compaction, although it is really a readjustment of layout dimensions, allowing an enlargement or reduction of any layout geometry. With the rapid advancement of VLSI design into DSM technologies, linear shrinks are no longer adequate and do not take full advantage of very extensive and expensive progress made in processing technology. It will become clear how a sophisticated migration approach, based on polygon compaction, can provide superior results.
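To make the idea of compaction a little more concrete, here is a minimal, illustrative sketch (not the methodology of any particular tool; the element names and rule values are invented) of one-dimensional compaction expressed as a longest-path computation over minimum-width and minimum-spacing constraints:

```python
# Toy 1-D compaction: push layout edges as far left as possible while
# honoring hypothetical minimum-width and minimum-spacing rules.
from collections import defaultdict

def compact_1d(edges, constraints):
    """edges: list of edge names.
    constraints: (a, b, d) tuples meaning x[b] >= x[a] + d.
    Returns the leftmost legal coordinate of every edge (assumes no constraint cycles)."""
    graph = defaultdict(list)
    indegree = {e: 0 for e in edges}
    for a, b, d in constraints:
        graph[a].append((b, d))
        indegree[b] += 1

    x = {e: 0.0 for e in edges}
    ready = [e for e in edges if indegree[e] == 0]
    while ready:                      # longest path over the constraint DAG
        a = ready.pop()
        for b, d in graph[a]:
            x[b] = max(x[b], x[a] + d)
            indegree[b] -= 1
            if indegree[b] == 0:
                ready.append(b)
    return x

# Hypothetical rules for two parallel metal wires (values in nanometers, invented).
edges = ["m1_left", "m1_right", "m2_left", "m2_right"]
constraints = [
    ("m1_left", "m1_right", 90),   # minimum metal width
    ("m1_right", "m2_left", 100),  # minimum metal-to-metal spacing
    ("m2_left", "m2_right", 90),   # minimum metal width
]
print(compact_1d(edges, constraints))
# -> {'m1_left': 0.0, 'm1_right': 90.0, 'm2_left': 190.0, 'm2_right': 280.0}
```

In this simplified picture, retargeting to a different rule set amounts to rerunning the same solve with different constraint values, which is why a compaction-based migration can be repeated quickly whenever the layout rules change.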
In Chapter 3, we examine another aspect of compaction: Performance optimization through layout manipulation for DSM technology chips. We will demonstrate that this back-end layout optimization is complementary to any front-end design methodology, yielding substantial performance improvements.
In the past, the active part of a circuit was optimized, but it is now necessary to optimize the interconnects together with the active parts as inseparable pairs. Issues such as signal integrity, electromigration, interconnect-to-interconnect capacitive loading and excessive power dissipation are becoming so important that layout optimization may be a must, not just a luxury. In Chapter 4, we examine the application of compaction - not as an afterthought once a design has already been laid out - but as a productivity enhancing step during the physical layout design. When carefully designing the layout of a building block, such as a library element, a memory cell or a macro, the layout has to comply with the design rules imposed by the process and the desired electrical
behavior of the building block. Instead of observing these burdening rules during the design process, we can leave it to the incremental compaction steps during the design process or at the end of a design cycle to enforce all the rules and user inputs. We will examine the benefits of this "carefree" layout design. In Chapter 5, we discuss some of the special challenges faced and solved with Hard IP engineering. Digital circuits have been the focus up to this point. What about the migration of analog or mixed signal designs? Another interesting challenge is hierarchy maintenance through Hard IP migration. For much of the Hard IP migration done up to now, the hierarchy in the source layout gets lost during the
retargeting process. The newest migration approaches allow a complete maintenance of the hierarchy from layout to re-layout and we will discuss this capability. Design guidelines for simplifying Soft IP reuse have been given in the recently published RMM [1]. What about guidelines for minimizing difficulties for Hard IP reuse? Hard and Soft IP reuse enable an efficient S-o-C methodology by integrating various existing designs in one chip. What are the major challenges, and what solutions can hopefully overcome the difficulties encountered in such an S-o-C methodology? Finally, Design for Manufacturing (DfM) is currently a hot issue. The same methodologies used for sophisticated retargeting can be used to improve DfM. In Chapter 6, we discuss some of the tools now available in the industry and tools that would be helpful for Hard IP retargeting to yield maximum benefits. And, as is the case for many "point solutions" in the EDA industry, compaction should become an integral part of a modern design or reuse
flow. As minimum layout dimensions continue to shrink, this seamless integration will undoubtedly take place. In Chapter 7, we compare "design from scratch" with Soft IP and Hard IP reuse. We review today's VLSI design flows for the various methodologies. This will lead to an appreciation of where the
main work is performed and to an understanding of the bottlenecks that will affect the time-to-market and the cost of chips. The risks of design from scratch versus IP reuse, the costs of the tools needed to successfully accomplish all the tasks to be done as well as the skill levels required to do the jobs are critical. Return On Investment (ROI) is another angle and measure in comparing the three approaches discussed here. Why not benefit from existing and proven designs through reuse? Reuse is considered by many people to be the only long-term strategy for solving the present productivity dilemma in the chip design industry. The benefits of Soft and Hard IP reuse should become clear in these discussions.
CHAPTER 1
HARD IP AND SOFT IP REUSE
1. IP REUSE IN ADDRESSING THE NEED FOR INCREASED VLSI DESIGN PRODUCTIVITY
In Chapter 1, we discuss three topics: 1. We start by discussing some of the major challenges in DSM VLSI chip design and how pre-DSM design practices need to adjust to the new reality of DSM technologies. 2. We then present some ideas on how Hard IP will help to address issues such as how to achieve a more predictable time-to-market, how to prolong benefits from previous investments in the design of complex chips by reusing them, and how to lower the risks of not filling processing lines while being able to absorb last minute processing tweaking in a design for larger yield and better performance. 3. Finally, we preview the potential solutions proposed in this book that are discussed in more detail in the following chapters. Throughout the discussions in Chapter 1, we make statements based on the assumption that IP reuse will actually help address some of the problems faced by the chip design and manufacturing industries. Although this is consistent with what is generally expected when talking about IP reuse and it helps to initiate a discussion of the issues, we will need to justify these statements as we move through the chapters.
1.1
SOME GENERAL OBSERVATIONS ABOUT VLSI CHIPS Keeping in mind the need for improvements in design productivity, we will look at the steps involved in designing today's complex DSM VLSI chips versus the much discussed IP reuse methodologies. It seems appropriate to examine and compare the effort, time, skill level, tools and risks involved in designing chips from scratch versus Soft or Hard IP reuse. The generally accepted standard for growth in complexity of VLSI chips has been set by Gordon Moore's Law. It seems that, at least for now, the capabilities of manufacturing chips have no problem following this law. However, design productivity has difficulty keeping up with manufacturing progress. All efforts to specify design intent at a higher level of abstraction and the increasing use of behavioral-level synthesis are still not sufficient to prevent mounting difficulties in providing new VLSI chip designs fast enough to keep processing lines filled with fresh, innovative designs, ones that take full advantage of manufacturing capabilities. Meeting the challenges of designing multimillion transistor chips, the level of complexity that can currently be fabricated, requires radically different approaches to design. And once again we hear the well worn battle cry, "We need a paradigm shift!" This shift could be referred to as Intellectual Property reuse (IP reuse). This need for a substantial increase in design productivity triggered the inconceivable in today's "newly improved"-driven world, i.e. to reuse existing designs and retarget them to the latest, most advanced processes. Of course, the only part that would literally be reused is the design content, which is in fact a reuse of IP content. While retargeting a chip, additional performance improvements could at the same time be achieved with layout optimization. For now, we will only discuss simple retargeting by taking advantage of the
tighter, higher performance process layout rules. The other step alluded to here, performance optimization through small adjustments brought about by pushing polygons around, is discussed in Chapter 3. As already indicated, we discuss in this chapter both Soft and Hard IP, but the Soft IP discussion will be largely limited to a comparison of the two methodologies and pointing out complementary values of Soft and Hard IP. After all, Soft IP reuse is well established and requires no explanation. On the other hand, Hard IP reuse using compaction is a less common approach that has yet to be fully accepted by the engineering community. This should not be surprising. Polygon-based Hard IP retargeting of large blocks, with limited hierarchy maintenance, has only recently become technologically feasible, and full hierarchy maintenance is just around the corner. In fact, many experts in the hi-tech industry still do not believe in its feasibility, in spite of what is now a proven track record established by successful chip-based projects. But different approaches are not always quickly accepted. Even in Silicon Valley, the "Paradise for new ideas," many new methodologies are embraced more slowly than hoped for, because solutions that work are already in place. Proven technologies are not easily displaced. Nevertheless, in general, new ideas, combined with a little perseverance, will eventually be accepted. For reuse of existing IP, there are at least two paths that have been followed. As is usually the case, each is being proclaimed as "the only way to go." One approach is Soft IP reuse, the other Hard IP reuse. We will show later that in some cases they are actually complementary. For now, we will discuss Soft IP and Hard IP reuse as standalone processes. For Soft IP reuse, existing high-level software chip descriptions can be retargeted to newer processes. Much, if not most, of the high-level programming could be reused: test scripts, simulation scripts, models. For Hard IP reuse, existing layout databases will be transformed according to new physical layout design rules. As we already know, the methodology is based on compaction. We will show that for Hard IP reuse, existing simulation and test vector suites can be reused and any software, such as control programs developed for the part in question, can also be reused. As mentioned, such retargeted Hard IP will then yield higher performance, and be denser and consequently smaller than the original layout. In addition, several of these retargeted designs can now be combined in one chip in an S-o-C approach, providing much higher levels of integration than was previously possible. This retargeting of proven designs to newer technologies has become one of the more promising approaches to achieving the needed gains in productivity. It seems obvious that a reuse of existing, proven designs by retargeting to more advanced technologies, as opposed to starting from scratch, should lead to substantial savings in engineering and a reduction of risks, while taking full advantage of advancements in processing technologies. By shortening the cycle time required to produce a "new" chip, it would also address the hottest issue at present in the market: A shortening of the time-to-market. The main goal is to benefit from the rapid advances in processing technology, combined with reuse of knowledge previously invested in these chips.
1.1.1
SOME MAJOR CHALLENGES IN VLSI CHIP DESIGN Many of the books on subjects related to VLSI design still describe classic design flows. However, the push towards smaller and smaller minimum layout geometries, as in DSM technologies, is increasingly
changing design flows, often rendering them more iterative. Needless to say, one of the key goals is still to design a chip that will perform as expected the first time around. Accordingly, the steps taken in the design flow should lead to predictable results. There are many questions concerning DSM design flows, and there seems to be only one recent book that focuses on some of them [2]. To meet the time-to-market schedule is probably the single most critical requirement. To accomplish this, the number of steps in a design flow, the number of iterations through certain sequences of steps to get things right, needs to be reduced or at least predictable. The complexity and time it takes to go through these steps have to be as controllable and predictable as possible. The steps required to design a VLSI chip are generally known. The number of iterations needed to get things right is generally not. Because of DSM effects, design flows are in a dramatic state of flux.
1.1.2
DESIGN ISSUES FOR PRE-DSM TECHNOLOGIES Design methodologies have or should have dramatically changed from pre-DSM technologies to DSM technologies. The following statements can be made about pre-DSM design methodologies: 1. Before DSM technologies, the resistance and capacitance of interconnect lines could be ignored, except for interconnects in poly. So to avoid timing problems on timing critical interconnects, metal was simply used instead of poly, except for some specific interconnects such as clocks. For such critical nets, even metal lines had to be carefully designed. 2. Before DSM technologies, the timing models for an entire chip could often be taken from a library of characterized blocks. So for Gate Arrays, Sea of Gates, Standard Cell designs and programmable arrays, a netlist was all that had to be provided to the foundry. Exceptions were, of course, fully custom designs for which no precharacterization was possible. Only a functional simulation was required with timing analysis for setup and hold times to verify that everything was connected together correctly. The timing of a chip could be based on library elements alone because, before DSM technologies, the on-chip timing of digital ICs was dominated by the active parts of the circuit, the transistors and their associated parasitics. Timing was localized by the active parts! Accordingly, careful characterization and often precharacterizations for various technologies of library blocks to be used on chips provided all the necessary timing information. For intercommunication between active blocks, a simple netlist sufficed, which is merely a logical assignment between communicating contact points. This supplies none of the information needed for DSM designs on physical characteristics or timing of the paths between the active blocks, the interconnects. So for pre-DSM technologies, the active parts of a VLSI chip, the transistors, or blocks such as gates, standard cells, macros, determined and dominated the timing of the entire chip. This localization in timing allowed the timing analysis of an entire chip, no matter how large, to be done on relatively small pieces in isolation. Parasitics could be modeled as lumped elements. Parasitic capacitances were directly determined by the size of the active devices. Their values were known. Because of these relatively small, uncoupled building blocks, the required accuracy could be determined relatively easily with switch-level or transistor-level models. This brought about a high level of confidence in the predicted performance of the chip.
One word of caution: To really be sure that a chip works, a worst case, state-dependent and consequently vector-dependent simulation is needed. However, a full, functional, worst-case simulation is very time-consuming and often avoided. In conclusion, small building blocks of a chip could be carefully characterized for pre-DSM technologies and digital designs, as if they were standalone. Then, once the active parts were characterized, the timing of the entire chip was under control. During the physical layout of the chip, such as floorplanning and routing, the timing of the chip remained unchanged. Constraints for the interconnect routing had to be specified only for a very limited set of nets, such as clock lines for very high-speed digital circuits. For the sake of completeness and in contrast to digital circuits, the layout issues were always part of the design challenge for analog circuits, even for pre-DSM designs. This was not so much because of interconnects, but because of symmetry or tracking requirements between pairs of transistors, resistors or capacitors. Thermal considerations, voltage gradients and noise in the chip were other critical issues. For Hard IP migration, there are layout challenges of analog circuits for pre-DSM and DSM technologies. Integrated analog circuits have always been special from many viewpoints. In Chapter 6, we devote some time to discussing analog problems in conjunction with Hard IP migration. Analog migration can be performed successfully and some companies actually do so routinely. But analog migration needs to be carried out with caution.
1.1.3
DESIGN ISSUES FOR DSM TECHNOLOGIES For DSM technology chips, timing is no longer limited to the active parts. Timing is determined by the active and passive parts together, with interconnects dominating much of the passive parts. In fact, the following general statements can be made for DSM technology chips: 1. The timing performance of a chip will be determined by both the active parts and interconnects. The active parts of the circuit still need to be characterized carefully, although the interconnects, and not they, dominate the timing. 2. Accurate timing performance can not be determined until the chip is completely laid out, the physical parameters have been extracted and the simulation models back-annotated. All the data available from front-end design practices is adequate only for estimates. 3. For DSM technologies, a back-end, postlayout optimization can significantly improve chip performance. In Chapter 3, we discuss just how much one can affect the performance of a chip with postlayout optimization through small adjustments of the location of polygons. Interconnects constitute an additional difficulty for DSM technologies. Often, interconnects can no longer be modeled as "lumped" R and C values. They now need to be modeled as distributed R/C loads. As technology advances, interconnects may even have to be modeled as distributed L/R/C loads and finally as transmission lines. The larger the vertical distance between interconnects and the back plane (the silicon), the stronger the inductive effects will become. Accordingly, with more and more metal layers and top layers being farther and farther away from the "ground plane," the inductive effects will get stronger for the top metal layers. We address the interconnect modeling question in Chapter 3. There is also some good news. We will see that in most cases, there are good approximate and relatively simple models that yield an accurate time delay analysis for many situations. With much of the recent focus on improving design productivity through IP reuse, many challenging questions remain. While migrating individual blocks is straightforward, mixing and matching designs
based on the various different design methodologies and processes is not. But this is not just a migration problem, it is also a question of how to make all these designs work together on one chip, how to interface them. Presently, the problems need to be solved on a case-by-case basis. Considerable progress has already been made, particularly due to the efforts of the Virtual Socket Interface (VSI) alliance, which played a major role in clarifying some of the issues. As far as Hard IP reuse is concerned, many of those in the engineering community are still skeptical about fully embracing this type of IP reuse methodology. Much of the skepticism seems based on past experiences with compaction. A lot of progress has been made with this methodology. However, it takes time for this progress to become part of generally accepted engineering practices, as is the case with many new "design" methodologies. Scheduling pressures in particular often force the engineering field to do what is familiar and known to work within a predictable time schedule.
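As a brief aside on the earlier point that DSM interconnects have to be treated as distributed rather than lumped R/C loads, the following first-order Elmore-style comparison (an illustrative textbook calculation, not taken from this book) shows how far apart the two models are for a uniform wire of total resistance R and total capacitance C:

```latex
% Lumped model: the whole wire as a single RC section
\tau_{\mathrm{lumped}} = RC
% Distributed model (Elmore): N segments of r = R/N and c = C/N
\tau_{\mathrm{Elmore}} = \sum_{k=1}^{N} \Bigl(k\,\tfrac{R}{N}\Bigr)\tfrac{C}{N}
                       = RC\,\frac{N+1}{2N} \longrightarrow \frac{RC}{2} \quad (N \to \infty)
```

A lumped estimate thus overstates the Elmore delay of a uniform line by roughly a factor of two, which is one reason the "good approximate and relatively simple models" mentioned above can still deliver useful delay numbers when they are applied with the distributed nature of the wire in mind.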
1.1.4
GOING FROM PRE-DSM TO DSM REQUIRES CHANGES As processing technology evolves, as layout dimensions get smaller and chips larger and as interconnects start to dominate performance, design practices need to change. Some of the cherished and established measures of good design quality may no longer be valid. For this discussion, we examine some of the considerations which are to be kept in mind, some changes in design philosophy that might be beneficial, as processing technology moves from pre-DSM to DSM technologies. The discussion will not focus on ideas on how to make Hard IP reuse easy. These design guidelines for making retargeting easier will be discussed in Chapter 5. We have already discussed the primary focus for obtaining a well designed chip and making it as fast as possible for pre-DSM technologies. Speed is determined by good design principles, such as optimally dimensioning the transistors, and a small chip size is achieved with an optimal layout. The rest is a question of the minimally allowed layout dimensions for certain transistor geometries as determined by the processing technology used. It was perfectly acceptable to ignore metal interconnects as a timing factor except perhaps for clock lines. The interconnect dimensions and their layout were only critical for layout density and current carrying capability. With a bias toward examining only issues that affect the physical layout, a key issue has always been to develop as small a chip as possible for a certain function. With this in mind, some of the well established techniques for achieving good, small pre-DSM designs, and some of their benefits are: 1. Smaller chips meant more chips per wafer, better yield because of the area-related defect density and smaller, less expensive packaging. 2. Smaller chips could be achieved through logic minimization. Logic minimization decreased the number of gates and the number of active devices for a given function. Smaller numbers of active devices achieved through logic minimization meant smaller parasitics and more speed. 3. Smaller chips could be achieved through floorplanning that made the blocks fit nicely together, as closely as possible. If this resulted in longer interconnects, it was not a problem. Problems that had to be considered were an inability to route due to a bad floorplan, too many vias introducing parasitics and affecting the yield and other issues. Again, we know that we need to review these guidelines. In
summary: Minimizing chip size and device count were key issues.
Priorities have changed for today's DSM technologies. Placing more active circuitry on a chip is inexpensive and easy to do. If the result is shorter interconnects, it is well worth it. Consequently, everything possible should be done to shorten on-chip interconnects for today's DSM technologies and, where necessary, balance them even for nonclock lines.
If interconnects can be shortened at the cost of a few hundred additional "unnecessary" active devices on a multimillion transistor chip, it is a rather small price to pay, provided they do not significantly increase power consumption. Some of these old, proven methods of design optimization such as logic minimization may actually lead to an unwanted and unnecessary loss in circuit performance. Some more sensible design guidelines that take DSM effects into account should be followed, such as: 1. The very small active devices take up much less space than the routing. We already suggested that logic minimization might not lead to the best floorplan and the best design. For instance, parallelism of certain functions may lead to shorter interconnecting wires. 2. Placing blocks too closely together may result in routing difficulties, requiring a lot of ripping up and rerouting and prolonging the design process or increasing wire length. This is particularly critical since routing is already one of the most time-consuming steps. Also, it could force the router to switch metal layers too often to complete the routing, resulting in too many vias with high parasitics affecting speed and lowering chip yield. 3. Squeezing together metal lines to save routing space may negatively affect signal integrity as a result of on-chip cross-talk, increase capacitive loading which slows down the chip, increase dynamic power consumption and create yield problems due to bridging. Besides, narrowing metal lines to save space will increase parasitic resistance, lowering the available voltage at the active devices, and may add electromigration problems. In summary: Everything possible within reason should be done to shorten interconnects and keep some of them from being too close to each other.
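A first-order parallel-plate estimate (an illustrative approximation, not a formula from this book) puts numbers behind guideline 3: for two parallel lines of length l and thickness t separated by a spacing s in a dielectric of permittivity ε, and for a line of width w and resistivity ρ,

```latex
C_c \approx \frac{\varepsilon\, t\, l}{s} \qquad\text{(line-to-line coupling)}, \qquad
R = \frac{\rho\, l}{w\, t} \qquad\text{(line resistance)}
```

so halving the spacing roughly doubles the coupling capacitance, with the cross-talk and dynamic power that come with it, while halving the width doubles the line resistance and worsens the voltage drop and electromigration concerns noted above.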
1.1.5
THE CONCEPT OF HARD IP REUSE Figure 1.1 shows a high-level conceptual image of Hard IP retargeting. We discuss the required inputs and the details of what is required to migrate various layout structures and what we can expect as outputs in Chapter 2.
Fig. 1.1 Hard IP Retargeting: a source layout is processed by the retargeting software to produce the migrated layout.
1.1.6
WITH HARD IP MIGRATION, ONLY SOME CIRCUIT PROPERTIES CHANGE
We mentioned that we may use a linear shrink or compaction for migrating a chip to a different process. Since one of the major goals of migration is to reuse existing designs with a minimum amount of rework, we would like certain characteristics of the Hard IP to remain unchanged or to change undramatically and predictably. Of course, we do want the IP performance to improve. If we can show that functionality is not affected by retargeting, that the IP speed improves and that minor changes in relative timing are such that the IP still works after migration, we will have achieved the main goals of IP reuse.
For the functionality of a chip to remain unchanged through migration, the netlist must remain unchanged. The netlist remains the same if the topology of a layout remains unchanged through the migration step. Clearly, the remapping accomplished with the migration process does not change the topology of a circuit. For instance, a polygon edge to the left of another polygon edge before the migration process will be positioned either closer or farther from this polygon edge, but still to the left after migration. This means the netlist remains unchanged and the functional behavior of a circuit before migration equals the functional behavior of a circuit after migration. The relative timing on a chip is the most critical aspect enabling a chip to work correctly. Relative timing concerns the timing relationship between the edges, the transitions of the signals. Parameters such as setup and hold times are directly dependent on relative timing. For a linear shrink, all layout dimensions, the dimensions of the active parts as well as the interconnects, change by the same proportionality factor. The relationships, the ratios between the layout geometries such as interconnect lengths and widths, transistor areas, do not change. This suggests that the physical layout-dependent relative timing relationships do not change either because they depend on geometrical ratios. The absolute timing changes but the relative timing should not. For compaction, a nonlinear shrink, changes in relative timing can not be completely avoided but they can be kept to a minimum. This will be very helpful when an existing layout with blocks and routing is migrated as an entity. Based on these observations, the ratios in the relationships between the lengths and widths of interconnects, routed or otherwise, also remain largely unchanged. This produces enormous benefits by preserving relative timing on chips being migrated. The significance of this can not be overestimated and it will increase substantially as the DSM technology gets deeper. This issue is examined in more detail in Chapter 2. Finally, absolute timing is the measure for the speed performance of a chip. It is the absolute timing that determines the maximum clock rate of a digital circuit. We want it to change in migration and it does. In all of the following discussions, we will attempt to speed up the absolute timing, preserve the relative timing of workable chips and fix with layout manipulations any undesired signal skews that might have been introduced by migration or might even have existed in the old chip.
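As a back-of-the-envelope illustration of this ratio argument (not a formula from the book), consider two same-layer interconnects whose first-order RC delays grow with the square of their lengths, with a common process-dependent prefactor f. A linear shrink by a factor s changes the prefactor and both absolute delays, but the prefactor cancels in the ratio, so the relative timing between the two paths is preserved:

```latex
\tau_i \propto f(\text{process})\, l_i^{\,2}
\quad\Longrightarrow\quad
\frac{\tau_1'}{\tau_2'} = \frac{f'\,(s\,l_1)^2}{f'\,(s\,l_2)^2}
                        = \frac{l_1^{\,2}}{l_2^{\,2}}
                        = \frac{\tau_1}{\tau_2}
```

Compaction is not a uniform scaling, so this cancellation is only approximate there, which is exactly the point made above: relative timing changes can not be completely avoided, but they can be kept small.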
1.2
ECONOMIC CONSIDERATIONS FOR BIGGER, FASTER, MORE COMPLEX CHIPS
Today's hi-tech electronics industry requires bigger, faster, more complex and "reasonably" priced chips to be put on the market faster and predictably to meet time-to-market requirements. Because so many areas in our hi-tech society depend on VLSI chips, the Electronic Design Automation (EDA) industry, although a financially relatively small industry, is on the critical path for a great number of industries. However, many of the latest indicators in this small industry point to difficulties in supplying what these other industries require fast enough. The efforts required to design some of the latest new chips are growing out of proportion in comparison to design efforts required in the past and, to make matters worse, the market windows are getting shorter. Short market windows mean short periods for recouping investments made in developing chips. Accordingly, new chips have to be designed as rapidly and economically as possible or new ways have to be found to extend the useful life of these chips or at least parts of them. Both of these approaches are being vigorously pursued by the EDA industry. Design efficiency is addressed by streamlining, standardization, and higher and higher levels of abstraction in the specification and design processes. Useful lifetime extensions are obviously addressed with IP reuse through retargeting.
The time-to-market requirement is also addressed with IP reuse, largely by eliminating redesign steps. This means that IP reuse is a big step forward in helping to meet time-to-market requirements and other related challenges. To supply increased performance as required by the market, the dimensions of critical minimum physical device geometries on VLSI designs must continue to shrink, while the complexity of these chips has to continue to grow; and it is growing dramatically. Minimum sizes of critical layout dimensions have traditionally been the dominant factor in determining maximum chip performance. In addition, the total level of functionality that can be placed on a chip critically depends on how small the layout features and how large the maximum chip sizes can be made. As a result of these market forces, the level of functionality of a single chip is reaching into the millions of transistors today and continues to increase rapidly. With the high numbers of transistors on chips, innovative, more productive techniques for placing large numbers of devices on silicon are constantly needed. IP reuse will help to place large numbers of devices on a chip more rapidly by simply reusing and remapping existing designs. Placing more functionality on fewer chips or even a single chip offers many desirable features, such as increased miniaturization and increased packing density. Maximum packing density and miniaturization are very important for many applications. Minimizing the number of chips for a given function not only increases the packing density, it also reduces the number of times the electronic signal has to leave and get back to a chip. If signals propagate only in one chip and do not have to propagate between chips, it lowers the size of the system substantially and also increases system reliability and speed. Improvements are dramatic when reducing the number of PC Boards (PCBs) and still significant when reducing the number of Multi-Chip Modules (MCMs), although MCMs are a much better high speed solution. However, besides the advantages of packing density, reliability and speed, PCBs and MCMs are rather expensive and the timing analysis with all these interconnects is less precise than what is possible with chips alone.
1.2.1
ECONOMICS BY SAVING ON SIMULATION AND TESTING THROUGH IP REUSE Once a complex chip is designed, verification of its functionality and performance are other serious challenges. Verification or validation can be performed with an increasing number of methodologies, each claiming to offer the ultimate convenience, speed and accuracy. Timing analysis, simulation on various levels of abstraction from the functional to the physical, cycle-based simulation, formal verification, emulation and, of course, fortunately, new methods continue to appear on the horizon. This increasing number of available methodologies indicates the difficulty of the tasks and the need to minimize the risk of in-field failures in a world that increasingly relies on hi-tech gadgets. Yet the tasks of verification and testing are generally so difficult that one can often only obtain high probabilities of failure-free parts, especially in testing. This is yet another good argument in favor of reusing previously field-tested parts, although there is no guarantee that even a field-tested part is completely fault-free. However, at least there is a track record that provides an additional sense of security. Of course, the verification challenges are growing every day. While it is already very difficult to properly verify and test today's chips with around one million transistors, the multimillion transistor chips will be even more difficult to verify, validate and test. It is a trend to use several verification tools, not just one. Most complex designs probably require a combination of all the different verification and testing methods to achieve a sufficient level of confidence. Until now, an educated selection of just one or two of the available methods was often satisfactory. And combining all the different methods,
i.e. "throwing everything you have got at the chip," will make verification and testing very costly and time-consuming, and it requires a wide spectrum of skills. Considering all of this, it should not be surprising that any new ways to keep the efforts of simulation and verification for these large chips under control are welcome. Since the topology and netlist of a chip remain unchanged through migration with Hard IP reuse, simulation and test vector suites can generally be reused, and timing changes caused by a change in technology tend to stay within manageable limits. Hierarchical approaches are often used to keep complexities within manageable limits. This is particularly true for one of the most difficult and time-consuming steps in the design process, i.e. verification. To be able to perform verification hierarchically, Hard IP migration that is fully hierarchical would help. Fortunately, fully hierarchical migration is, in fact, becoming available right now. We examine what hierarchy maintenance means for migration in later chapters. In fact, we discuss and examine questions of limited hierarchy maintenance in Chapter 2 and complete hierarchy maintenance in Chapter 5. 1.2.2
IP REUSE TO KEEP PACE WITH PROCESSING TECHNOLOGY ADVANCES Although the EDA and chip production industries are very young, there is now a substantial arsenal of very recent, excellent designs ranging from microprocessors to digital signal processors to controller chips and more. These designs are not outdated from the design concept point of view. Most of them are strictly state of the art and they are known to work. If anything is outdated about these chips, it is that they were fabricated by "yesterday's" processing technology. Processing technology has moved at a fierce pace, from 0.5 micron minimum critical dimensions only recently to 0.18 micron, and is rapidly going to smaller minimum dimensions. Processing technology is moving so fast that design innovation can not keep pace with it. In fact, it is estimated that only 25 percent of the advances in chip performance are due to design innovation, while an amazing 75 percent is due to advances in processing capabilities. This means that the large "mountain" of previously designed chips has only the minor flaw of having been laid out according to obsolete processing layout design rules. Another way to keep pace and profit from processing technology advances and the rather fluid set of process and layout parameters is to be able to implement changes rapidly when retargeting. Considering the extremely competitive environment and the enormous investments, these processing lines are constantly tweaked to get the best possible performance and yield. Consequently, some "last minute" changes in design rules are always possible and indeed highly probable. Fortunately, for retargeting, these minor tweaks can be implemented "on the fly" by minor changes in the compactor technology file with a quick rerun to fully benefit from the latest processing changes. Even in the case of manually trimmed cells, such minor adjustments can very often still squeeze a few more megahertz out of a chip or increase the yield. Faster chips can be fabricated with minimum effort by retargeting to newer processes, using the exact same design from netlist to floorplan to routing.
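As a purely hypothetical illustration of what a "minor change in the compactor technology file with a quick rerun" might look like, the sketch below tweaks one spacing value in an in-memory rule table and reruns a stand-in compaction step; the rule names, values and the run_compaction() helper are all invented for this example and do not correspond to any particular tool or foundry deck:

```python
# Hypothetical sketch of a "last minute" technology-file tweak followed by a rerun.
def run_compaction(layout_name, rules):
    # Stand-in for the compactor run described in the text: a real run would
    # re-solve the layout against the updated rule values.
    print(f"compacting {layout_name} with:")
    for name, value in sorted(rules.items()):
        print(f"  {name} = {value} um")

base_rules = {
    "metal1.min_width":   0.24,
    "metal1.min_spacing": 0.24,
    "via1.min_size":      0.26,
}

# Foundry tweak for better yield: relax metal1 spacing slightly, then rerun.
tweaked_rules = dict(base_rules, **{"metal1.min_spacing": 0.26})
run_compaction("reused_dsp_core", tweaked_rules)
```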
1.2.3
THE CHALLENGE OF FILLING FABS FOR PROFITABILITY There are many benefits from very small, minimal physical layout dimensions in VLSI chips, but they come at a very high price. The cost of the latest technology in semiconductor processing lines is going
through the roof. There are many consequences of such costly lines. First and foremost, once in use, these lines should never be idle. Running or not running, they cost millions a day and they function better if running continuously. However, it is obviously difficult to fill these lines with new designs, when it takes years of design to finish a new, high-complexity chip. In addition, with the increase in yield and wafer size, the number of chips coming off a single wafer is growing enormously, while the wafer count is accordingly decreasing. It takes a very large demand for chips to keep the fabrication sites running continuously. Clearly,
shortening the design
cycle through Hard and Soft
IP reuse
is one way to address this problem. One of the most astonishing feats in the field of hi-tech chip production is the relentless progress made in processing technologies. The smallest feasible physical dimensions keep getting smaller with no end in sight, and the chips still work. This may come as a surprise to some who have spent years studying semiconductor physics in depth. It does to the author. It would be expected that by now at least some of the "fundamental" physical limits might have resulted in some "interesting," potentially "deadly" effects beyond just the dominance of interconnects for timing. Of course, many extremely challenging effects appear all the time but, apparently, clever engineers always seem to find ways around them without major disruptions. As a direct consequence of DSM technology, it can be said that understanding many more details of physics matters again, not only for processing today's VLSI digital circuits but also for designing them. An increasing number of intricate details needs to be taken into account when designing and processing these chips. However, aside from the timing closure problems caused by interconnect delays, the dominant factors are not so much related to how the semiconductor devices function as layout dimensions get smaller. Some of the more important factors presently seem related to yield, some of which can be addressed directly through the use of compaction techniques. We discuss yield improvements through flexible layout rules in Chapter 5. However, we will focus on rules related to layout issues, because they can be elegantly addressed with compaction. Physical layout and design rule manipulations for yield improvements fall under the relatively new term of Design for Manufacturing (DfM). It is a topic that is currently generating a lot of interest.
1.2.4
PLANNING IN THE FACE OF UNCERTAINTIES A nontechnical factor that makes it difficult to anticipate profitability, the level of activity of the processing line and justification of heavy investments is the potential state of the economy at the time the processing line is put into service. Today, a large percentage of the chips are used in consumer electronics. It is a large volume market, which, as such, is good. The military market used to act somewhat as a buffer against fluctuations in the economy for this hi-tech industry, when military, highly priced, high-quality parts helped profit margins. However, this was before enormous volumes of chips were being produced, as is the case today. Because of this strong dependency on consumers who are willing to spend money on gadgets they do not absolutely need, the chip industry is extremely vulnerable to economic ups and downs. For an industry that is as capital-intensive as the chip manufacturing industry, this economic reality is hard to deal with. Without any buffers, the industry is held hostage by the whims of the consumer market and its fickle supply and demand. Large expenditures for new fabrication site construction are a very high risk to take. Of course, it is well known that not taking large risks in hi-tech is just as disastrous.
Long-range planning in the chip industry is extremely difficult and painful. Japan is a good example of how difficult it is to predict the economy. Until recently, it was thought that a country such as Japan would never experience a prolonged recession. Well, we all know better now. So, what to do? Again, as we will see, Hard IP reuse especially, with its shortened reuse cycle, makes chips available faster for processing. It shortens the planning cycle. This will help a lot to fill fabrication sites. But, it also gives a lot of flexibility to companies without fabrication sites by allowing them to change from one process to another very quickly. Just resubmit a compaction run with different process parameters and the chip is ready to be processed by another foundry. This flexibility for the company without a fabrication site also lightens the burden of scheduling "the right amount" of fabrication capacity. After all, it sometimes happens that the commercial success of a design far exceeds expectations. Can Hard IP reuse help if a foundry suddenly experiences yield problems? Or worse, what if your foundry of choice is hit by calamity? The flexibility afforded by Hard IP reuse lightens this worry, but it also creates a very tough and competitive environment for the foundries. Finally, as will become clear later, once a chip is set up for migration, changes in your favorite foundry's process rules, due to adjustments that help improve yield, are not a serious problem for chips that have already been laid out. A few computer runs and a new layout that benefits from the "new, improved" layout rules becomes available for processing. We also show in Chapter 2 that feedback from migration runs may suggest such yield or performance enhancing adjustments in the technology file that contains the process parameters.
1.3
A PREVIEW OF AREAS OF HARD IP ENGINEERING Now that we have discussed many of the challenges of designing DSM VLSI chips productively and with optimum performance, we will preview some areas of application for IP reuse, IP creation, IP optimization and fabrication yield enhancements via physical layout manipulations. To describe these approaches, we suggest the term Hard IP engineering. In the following chapters, we discuss these applications in more detail.
1.3.1
HARD IP RETARGETING AND DESIGN FOR DSM TECHNOLOGIES AND YIELDS The databases describing the physical layout, the Hard IP, can be in one of several formats. These databases contain at least the information necessary to eventually produce masks for fabricating chips. This data describes the position of every polygon edge on the physical layout of a chip. If none of the placements of the polygon edges violates any of the processing design rules (minimum metal widths, minimum distance between metal lines, minimum size of contact openings, etc.), the layout contains no layout design rule errors and is design rule correct (DRC). These rules enable the silicon to be fabricated successfully with an acceptable percentage of good chips (yield) on a processed wafer from the processing point of view. We should realize that these rules have never been cast in concrete. In situations where design engineers and processing engineers have worked hand-in-hand, these layout rules have always been "negotiable" for pre-DSM technologies and even much more so for the present DSM technologies. However, with the current increasing challenges of ultrasmall layout dimensions, the great number of devices per chip, the large size of wafers and chips and the extreme need for competitiveness, DfM has become a very important issue. Fortunately, compaction methodology lends a hand in determining the best trade-offs.
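To make the notion of a design rule correct layout concrete, here is a toy sketch (illustrative only; the rule values are hypothetical and real DRC engines work on arbitrary polygons, not just rectangles) that checks minimum width and minimum spacing for axis-aligned metal rectangles:

```python
# Toy design-rule check on axis-aligned rectangles (x1, y1, x2, y2), in microns.
MIN_WIDTH = 0.24      # hypothetical minimum metal width
MIN_SPACING = 0.24    # hypothetical minimum metal-to-metal spacing

def width_ok(rect):
    x1, y1, x2, y2 = rect
    return min(x2 - x1, y2 - y1) >= MIN_WIDTH

def spacing(r1, r2):
    # Separation between two rectangles; 0 if they touch or overlap.
    x1a, y1a, x2a, y2a = r1
    x1b, y1b, x2b, y2b = r2
    dx = max(x1b - x2a, x1a - x2b, 0.0)
    dy = max(y1b - y2a, y1a - y2b, 0.0)
    return (dx * dx + dy * dy) ** 0.5

def drc(rects):
    errors = []
    for i, r in enumerate(rects):
        if not width_ok(r):
            errors.append(f"rect {i}: below minimum width")
        for j in range(i + 1, len(rects)):
            s = spacing(r, rects[j])
            if 0.0 < s < MIN_SPACING:   # touching shapes count as the same net
                errors.append(f"rects {i}/{j}: spacing {s:.2f} below minimum")
    return errors

# Three wires: the first two are too close together and the third is too narrow.
print(drc([(0.0, 0.0, 0.3, 2.0), (0.5, 0.0, 0.8, 2.0), (1.1, 0.0, 1.2, 2.0)]))
```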
While we address these issues in some detail in Chapters 2 and 5, we will just briefly describe the general idea of DfM. Going from an existing to a more aggressive process, it does not make sense to reduce all or perhaps even most of the layout dimensions to the smallest values allowed by the new process. Some layout dimensions affect the performance of a circuit much less than others. If these dimensions are laid out unnecessarily small, it will just result in a waste of manufacturing yield. On the other hand, some of them may increase the size of the layout. For DSM technologies, truly intelligent trade-offs between reducing layout dimensions selectively to gain speed performance without reducing manufacturing yield or paying a serious price in chip size have become a commonly discussed issue in DfM. In addition to the processing-related layout design rules, there are rules concerning electrical and reliability issues. The source of concern is not just the fabrication of a chip. We have already mentioned electromigration and keeping the resistance of interconnects within acceptable limits. We have suggested that certain interconnects need a minimum distance between them to restrict capacitive cross-coupling to acceptable limits. They may need to be balanced to maintain the time skew between certain signals within specified limits. There are normally many more such specifications. These inputs are user controls specified by layout specialists and circuit designers with the aid of analysis tools and other data. 1.3.2
IP REUSE AND THE FRONT-END/BACK-END
CONNECTION Ever since IC design complexity has become a bigger and bigger challenge, one approach to higher design productivity has been to describe intent at a higher and higher level of abstraction. Higher levels of abstraction coupled with modular and hierarchical thinking have been cornerstones for managing the increasing complexity of VLSI circuits and system designs. Unfortunately, as a basic rule, the higher the level of abstraction, the greater the distance of the description from the details of physical implementation. From the point of view of design methodology, this is of course part of the strategy, since the higher the level of abstraction, the less cluttered the thinking. The price for this higher level of abstraction is a lack of control over physical aspects of the chip. Even for DSM technologies, this functional thinking works fine to achieve one design objective, a functionally correct design. However, it does not help to achieve the desired timing aspects of the chip. With DSM, physical aspects of the layout are so important for the desired performance of the chip that it is a serious challenge to the behavioral level of design methodologies. It is a major challenge to keep a close connection between high-level design methodologies (the front-end) and the physical implementation of the design (the back-end). As we compare Hard and Soft IP reuse, the degree of linkage between the front-end and the back-end will be the key. In fact, Hard IP is the back-end since it is in GDS2 or a different physical layout format. Soft IP is based on a high-level software description of an existing design and has difficulties predicting or controlling the performance of the back-end. However, both approaches, Soft IP and Hard IP, are important for a successful IP reuse strategy. They both have distinct advantages and disadvantages. Soft IP keeps the complexity manageable. Hard IP deals with the details of the physical design. Together, they present a certain continuity from the high-level behavioral description to the physical layout of a chip. We later show how to combine the Soft and the Hard IP methodologies to win at both ends. In combining the two, we can leave at least part of the detailed timing requirements, timing refinement, to an optimization phase, using the Hard IP methodology. We discuss this optimization phase in Chapter 3.
Combining Soft IP principles with Hard IP optimization, we can more easily achieve the design requirements and lower the risk factors, such as with an accurate prediction of the time-to-market, performance and costs. It might be helpful to look at Soft IP versus Hard IP as follows: All Soft IP eventually becomes Hard IP. Soft IP is front-end. Hard IP is back-end.
Soft IP and Hard IP methodologies may act as complementary partners. However, the link between front-end and the physical implementation needs to be closely monitored anyway. A floorplan that is incorrect in terms of timing can not be rectified with postlayout steps. There is a limit to the degree of postlayout manipulations, although compaction can fix large timing discrepancies to achieve timing closure. As chip design ventures deeper and deeper into the submicron territory, interconnects will increasingly determine the time behavior of a chip. Interconnect design will affect signal integrity through cross-coupling, increased power consumption through capacitive loading and perhaps through inductive effects on circuit performance. While the transistors (referred to as active elements) do all the work, interconnects (the passive elements), which do nothing but carry the information from transistor to transistor, will control the timing. In Chapter 7, we compare chip design from scratch with Soft IP and Hard IP reuse. We make this comparison by looking at design flows. It is already clear that layout information such as floorplanning and placement have to be largely known. Even a rough estimate of chip timing makes sense. Clearly, one key goal for high-level descriptions of design intent is to create as much coupling as possible between the front-end and the back-end, such as timing driven floorplanning, placement, routing. Simply put: The front-end, the Soft IP methodology, has to take care of the "big factors" for timing. The back-end, the Hard IP methodology, will take care of the "small factors," the optimization of the timing. That is why integration of the two methodologies is of utmost importance.
1.3.3 IP REUSE FOR A SYSTEM-ON-CHIP (S-o-C)
The idea of moving existing chips to newer technologies can be extended to combining several of them in one chip. Normally, the working silicon chips that are part of a company's inventory have been laid out in technologies based on many different design rule sets. These chips represent what it was possible to place on one chip at the time they were designed. Through migration, they will all be laid out according to a common design rule set. With the smaller minimum critical layout dimensions and the increase in the maximum chip size that still results in acceptable yields today, several of these "old" working silicon chips now fit on a single chip. This, of course, is a perfect S-o-C scenario, as illustrated in Figure 1.2, and a very powerful way to address today's design productivity crisis.
Fig. 1.2 The S-o-C System Design Becomes Reality
For the migration of several blocks or chips onto one chip, there is the S-o-C approach shown in Figure 1.2, where the various components need to be interconnected on the new chip. We have to go through floorplanning and routing steps once the blocks to be migrated are all placed on the new piece of silicon. Of course, the new chip might contain only some blocks that are reused through migration, while other blocks are new designs. While the preservation of relative timing discussed before still applies to each migrated block, the dominance of the inter-block interconnects for the timing of the new chip will require a completely new timing analysis of the S-o-C solution. However, nothing has been lost in migration, since timing existed only for the individual blocks before they were migrated, and not for the fully integrated chip with the combination of all these blocks that is about to emerge.
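Whether a set of proven blocks plausibly fits on one new die can be estimated before any migration work starts. The following is a minimal back-of-the-envelope sketch of such a feasibility check; the block names, areas, scale factors and die budget are all hypothetical placeholders, not data from any real process or project.

```python
# Back-of-the-envelope feasibility check for an S-o-C scenario: do several
# migrated blocks plausibly fit on one die in the target process?
# All numbers below are hypothetical placeholders, not real process data.

blocks_mm2 = {            # area of each proven block in its ORIGINAL process
    "dsp_core":  12.0,
    "uart":       1.5,
    "sram_64k":   8.0,
}
area_scale = {            # rough per-block area scale factor after migration
    "dsp_core":  0.45,
    "uart":      0.45,
    "sram_64k":  0.55,    # memories often shrink less than random logic
}

TARGET_DIE_MM2 = 25.0     # assumed maximum economical die size
UTILIZATION    = 0.7      # fraction left after pads and inter-block routing

migrated = sum(blocks_mm2[b] * area_scale[b] for b in blocks_mm2)
budget = TARGET_DIE_MM2 * UTILIZATION

print(f"migrated block area: {migrated:.1f} mm^2, budget: {budget:.1f} mm^2")
print("fits" if migrated <= budget else "does not fit")
```

Such a rough estimate only answers the area question; the new inter-block routing and its timing, as discussed above, still require their own analysis.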
1.3.4 AN ULTIMATE MIX AND MATCH S-O-C METHODOLOGY
So far, we have discussed placing (integrating) on one chip a number of blocks or chips known to work, which may have been fabricated with a variety of technologies, so that they can be produced using the latest processing technology. This, of course, is an interesting challenge. In fact, aside from some serious technical challenges, this S-o-C scenario offers rather interesting possibilities. The level of integration can range from mixing and matching digital Hard IP components to combining digital and analog components on the same chip. Although far from trivial, this has been done and, in fact, is now being done routinely by some companies. It is discussed in Chapter 5. Another possibility is the mixing and matching of Soft IP and Hard IP on the same chip. This approach still seems to be more in the planning stage than an actuality at this time. Although some of these IP reuse applications for S-o-C may seem a bit futuristic, projects using these approaches have already been carried out. In general, feasibility is not the question; the actual challenges depend very much on the particular project. Later, we will attempt to list some guidelines related to the major difficulties and examine how it may be possible to overcome them.
1.3.5 PRODUCTIVE HARD IP CREATION
So far, we have talked mostly about how to reuse or optimize existing IP. Compaction in conjunction with physical layout also allows a very productive and painless layout or Hard IP creation. We discuss this in Chapter 4.
1.4 BARRIERS TO AND LIMITATIONS OF HARD IP REUSE
Many discussions about IP reuse prompt questions about whether it offers significant relief in major areas of concern, where it makes sense and where it does not.
1.4.1 PROBLEMS WITH ATTITUDE
A careful analysis of available resources, time-to-market requirements, obsolescence of the circuit or chip design and the risks of a completely new design is required to choose among Hard IP reuse, Soft IP reuse and design from scratch. Such objectivity is a real challenge for management and engineers. Engineers always know ways to "improve" a circuit no matter how well it works. Engineers literally hate engineering "warm ups." An engineer's top priority is to create the most elegant solution that offers the highest possible performance. And engineers almost always underestimate the effort required; they tend to be too optimistic about what is possible. Management, on the other hand, generally leans towards issues such as minimizing cost and minimizing or guaranteeing the time-to-market with a performance that is "good enough." However, because good engineers are hard to find, it is often better to keep them happy and let them do some creative design work from scratch. In periods when the demand for higher performance chips is greatest, suggesting an emphasis on reuse, engineering talent is also in short supply. This is a real dilemma, and it is difficult to find the best objective compromise. So who wins? Most of the time design from scratch and Soft IP win, unless the time pressure is simply too great. Besides, there is also a compromise solution for the design of an entire system: at least some circuits can be migrated for reuse, while others are redesigned. There is another possibility for keeping a balance: let the engineers be creative and the computers do the Hard IP migration. IP creation itself requires creativity; polygon pushing and DRC fixing do not. In all fairness, because progress in processing technology is so incredibly rapid, the time span between newer, more aggressive processes is so short that obsolescence in the design philosophy used for reusable IP often has not had time to become a major issue.
1.4.2 PROBLEMS WITH INFRASTRUCTURE
Today, synthesis and Soft IP based design methodologies are well established in the EDA industry and the associated skills are extremely marketable. Hard IP migration requires a different set of skills that address only a niche market. Besides, although Hard IP reuse provides undeniable benefits in many situations, being good at Hard IP migration requires a respectable level of skill and dedication. At present, finding engineers and managers willing to commit their talent to Hard IP migration is still difficult, and the infrastructure in design companies does not adequately support it. This will undoubtedly change with time. Finally, one of the top priorities for providers of Hard IP migration software will have to be seamless integration with Soft IP flows and the overall design environment, to minimize the threshold between the various design and reuse approaches. It has to be painless to switch between the various paths leading to a higher performance chip!
1.4.3 FUNDAMENTAL TECHNICAL LIMITATIONS
Clearly, Hard IP reuse has its limitations. The design to be reused has to be taken more or less as it is. If the gap between the old process and the new process is too large, it may not make sense to migrate. If the number of metal layers changes from the source to the target process, migration can not take full advantage of the change. If the floorplan of the source chip is not acceptable, it can not be changed, except in the S-o-C scenario. If the aspect ratios of the source blocks are not acceptable, they can not be changed for the target design. Minimizing the risks, the time-to-market, the engineering resources and the need for very expensive verification tools does have a price! However, a rapid retargeting, an adjustment to changing process-induced layout rules, an optimization of chip timing or the need to reuse the painfully developed software that is part of a VLSI system may make a Hard IP reuse approach well worth considering. To justify any new design methodology, it has to be technologically sound and, in today's stock market driven economy, it has to bring about higher productivity and savings in terms of the time-to-market and the investment of expensive resources, such as highly skilled engineers and expensive design tools.
1.4.4 SUMMARY OF CONCLUSIONS
In the following chapters, we attempt to show compaction as a useful approach in several different areas, which we discuss as follows:
1. We show how to use compaction on Hard IP through retargeting of existing, previously used designs in Chapters 2 and 7. In Chapter 2, we demonstrate a well established retargeting methodology to gain an understanding of what is involved in such a retargeting approach. In Chapter 7, we compare a Hard IP retargeting flow with the more classical approaches of designing chips from scratch and Soft IP reuse.
2. We show how to use compaction to optimize chip layouts in Chapter 3. This will demonstrate a strong complementary value to any of the design methodologies, be it synthesis-based, standard cell-based or fully custom.
3. We show how compaction provides a methodology for very productive IP creation in Chapter 4.
4. We show how useful compaction is in trading off certain minimum layout dimensions, increasing manufacturing yield without sacrificing performance, based on the DfM principles discussed in Chapter 5.
5. Finally, we show how to use compaction to streamline the S-o-C approach in Chapter 5 by reusing some of the designs placed on the S-o-C. We propose a hybrid approach, integrating and retargeting Hard IP, Soft IP and other designs and saving valuable engineering time.
CHAPTER 2
HARD IP MIGRATION

HARD IP MIGRATION WITH A PROVEN SYSTEM AND METHODOLOGY
The discussions in this chapter provide an overview of what a typical retargeting system can do. While they are based on an actually existing, commercially available system, the discussion is kept as general as reasonably possible. Unlike some other VLSI chip design tools, such as simulation or synthesis, the choice of fully functional retargeting systems is rather limited, and permission to write about them could be obtained for only one. However, what can traditionally be done with migrating Hard IP is pretty much covered here. This chapter does not cover some of the latest advances, such as what is becoming available in hierarchy maintenance and some of the latest algorithms in layout optimization. These more specific subjects are covered separately in later chapters. For now, the goal is to establish an understanding of the most important functions that should be included in a retargeting environment.
2.1 HARD IP REUSE, LINEAR SHRINK OR COMPACTION?
Effective and technically sophisticated retargeting and postlayout optimization methodologies are very much at the heart of the reuse of existing VLSI designs in the form of Hard IP. However, calling retargeted Hard IP "hard," and calling "Hard IP reuse" completely new, is potentially misleading. While the existing data describes the actual laid-out hard silicon, we will show that the physical layout dimensions of Hard IP can be manipulated and optimized, emphasizing the features that are the most important for supplying substantial improvements in performance and yield. Also, while "Hard IP reuse" may at first glance appear to be a completely new approach to reusing existing designs, it is not. A simplistic approach to Hard IP reuse, the "linear shrink," has been used, but discussed little, long before IP reuse became such a popular notion. The linear shrink has always been and is still being practiced extensively today. Many highly desirable circuits that designers are unwilling to abandon are adjusted to newer processes by using a linear shrink.
The term linear shrink implies a reduction. However, for some applications, there might be interest in a linear enlargement. Unless specifically stated, we will generally assume a reduction in layout dimensions, consistent with trying to push the limits of performance. A linear shrink simply adjusts all the layout geometries of all the layers according to some proportionality factor until one of the layout dimensions on the chip reaches the smallest allowable value. This is sometimes called an "optical shrink," for obvious reasons. The process is, of course, very straightforward. It can be done very quickly and with minimum risk. The linear shrink offers the advantage of minimally "disturbing" the geometrical proportionalities of the layout dimensions of a proven layout of a working chip. Maintaining the geometrical proportionalities of a physical layout implies the reasonable underlying assumption that the relative timing relationships of the shrunken chip are also maintained. Accordingly, the circuit should still work after a linear shrink, but faster. The linear shrink has been very useful to the engineering community for a long time.
However, for DSM technologies, a linear shrink generally results in improvements that are insufficient in comparison to the substantial investments in equipment necessary to improve processing capabilities, because it lacks the required flexibility. It is no longer adequate to perform a shrink until one of the chip's critical dimensions hits the allowable minimum, because once this first dimension reaches the minimum, no other dimension can be reduced any farther either. Although some companies push the limits of linear shrinks by using "creative linear shrinks," applying different proportionality factors to different features on the chip, this approach only somewhat delays the inevitable. The more processing technologies move into the DSM area, the more inadequate a "creative linear shrink" becomes. A more powerful retargeting methodology is now needed: a polygon-by-polygon-based postlayout manipulation methodology.
A polygon-by-polygon-based postlayout manipulation is done with the help of computers and the appropriate software. The underlying methodology of this software-driven retargeting or migration is polygon-based compaction, the capability of repositioning individual polygon edges according to new process rules. It is the basis for all of the Hard IP engineering discussed in this book. There are many reasons why an approach more sophisticated than a linear shrink is needed for retargeting a physical layout. We address many of these issues in the following discussions and in the remaining chapters. The following observations serve as basic guidelines and stress some of the benefits of polygon-based compaction:
1. The more the performance of a VLSI circuit design depends on physical layout parameters, the more important it becomes for the retargeting methodology to allow a very high level of control over layout geometries. Hard IP migration with compaction allows unprecedented control of layout geometries and the freedom to adjust any number of layout features individually and at practically any time. This enables a concentration on the features that are the most critical for DSM technologies.
2. With the fast-moving evolution of processing technology, many process parameters discussed in the previous section are in a constant state of flux. If processing engineers recognize rules that significantly and negatively impact the yield, they may have to change those rules. On the other hand, there may be layout rules that are too conservative and that could be tightened up a bit. Using compaction for retargeting requires the circuit designer to work with processing engineers to find layout rules that optimally satisfy both the performance needs of the circuit and the yield required in terms of processing, as practiced in DfM. This will become even more significant as technologies move deeper into DSM processing.
Considering that the processes are constantly tuned, a "last minute" retargeting based on the absolute latest process parameters can produce significant benefits. Hard IP retargeting allows such changes as long as the user is willing or able to make some compaction reruns. It also depends on just how much the user wants to "squeeze out" of the design or how sensitively it depends on the last few percentage points of performance. With today's state-of-the-art migration software, most reruns can be done overnight or faster. Such rerun times can be predicted rather accurately because of the way migration projects can be organized. Complex migration projects can be roughly organized in three phases: two setup phases and one run phase. The first is the setup phase for the process files. The second phase, and the effort it requires, depends on the layout to be migrated; migrating libraries is different from migrating memories or other layouts. Determining how to best migrate a layout is something of a trial and error phase. We explore these phases in more detail later. The final phase is the computer runs; at that point everything is set up correctly, so if process parameters are tuned to provide an updated layout based on the latest process parameters, only this last phase is required, with some straightforward batch-type computer runs.
2.1.1 KEEPING WHAT'S GOOD / IMPROVING THE REST
Let us assume for the moment that the timing behavior of an existing layout in an advanced technology is totally dominated by interconnects. This assumption is not so farfetched, since some presently estimate that 70 to 80% of the timing is determined by interconnects, even for 0.18 micron technologies. This means that, already for today's 0.18 micron technologies, most of the leverage for timing optimization in an existing layout is in the wiring and not in the active part of the circuit. When migrating a chip for Hard IP reuse, the existing floorplan of the blocks on the chip and the routing, the entire layout, is retargeted as such. This retargeting can be done at various levels of sophistication. Retargeting to a new process can be done to satisfy the process rules and the electrical requirements; this is what is generally meant when one talks about retargeting, and in this chapter we discuss only this type of retargeting. We discuss optimization issues in conjunction with retargeting later on. On a more sophisticated level, a chip or a block can also be optimized for performance when it is retargeted to a new process. This approach is discussed in Chapter 3. Such an optimization can be done efficiently only with software-driven retargeting that is polygon-based, not with a linear or a creative linear shrink. Optimization algorithms are also needed to determine the required optimal layout dimensions; compaction by itself can only reposition individual polygon edges according to inputs from analysis tools. Of course, migration with optimization is an additional effort. However, it is worthwhile because, as we discuss in Chapter 3, substantial performance improvements can be achieved with simultaneous manipulation of the geometry of transistors and interconnects. Apart from mitigating the shortcomings of a linear or creative linear shrink, new processes may also bring additional challenges. True retargeting to a new process may in fact require that the layout satisfy additional design rules that did not exist for older processes. Design rules are getting more complicated and new features are constantly being added. Again, this issue can only be addressed with polygon-based compaction.
Figure 2.1 shows a high-level conceptual image of Hard IP retargeting. We can see the inputs needed to start the retargeting process: the target process design rules are required, and we also need the original layout data and some performance/timing parameters specified by the user. The original design rules for the various chips to be retargeted are not necessary. We will discuss the details of what is required and what we can expect as outputs in the following sections.
Fig. 2.1 High-level Components of Hard IP Retargeting (inputs: the source layout, the target process design rules and user-specified performance/timing parameters; the retargeting software produces the migrated layout)
In the previous chapters, we outlined some of the reasons Hard IP migration represents a powerful methodology for IP reuse. We will now look at the "conceptual flow" of how migration is done, based on a well established methodology that has been used successfully for many different migration projects. Projects for which this methodology has been used range from standard cell library migrations to chips with well over a million transistors. Although the discussion reflects experience gained from a specific migration environment, most of the ideas presented are generic and typical of how the principles of Hard IP migration are applied. In Chapter 1, we introduced the concept of Hard IP migration without even discussing the input data required, the steps in the migration process, or the output data generated. The goal was just to establish a framework for the discussions in this and the following chapters. Now, we will examine some of the steps of Hard IP migration.
The process of Hard IP engineering is a methodology that offers many variations in how to proceed for a particular project. Because there are so many possible variations in layout manipulation, mastering the details requires working with a particular tool on a regular basis. Pushing today's limits in a technology, and this is what hi-tech chip design is all about, is no less of a challenge than mastering the piano to perform a beautiful classical masterpiece. Clearly, those who refer to such sophistication in design as a push-button operation do not play the piano that well, either. Accordingly, it should not come as a surprise that dealing with every layout feature that could ever be encountered in Hard IP migration requires skill and experience, the same as for any design methodology dealing with the complexity of multimillion-transistor chips.
In this chapter, and in fact in this book, there is a constant trade-off between describing the details of how to exactly adjust a certain layout feature and describing how Hard IP engineering is done conceptually. The goal is to give a good idea of the basics of Hard IP migration. For the very detailed knowledge needed for the actual day-to-day problems faced by those doing Hard IP migration, there are manuals and, it is to be hoped, good engineering support. The manual and the engineering support come into the picture when a user of a migration methodology gets "stuck." In the following discussions, we remain on the conceptual level to avoid getting "stuck," while still providing a good understanding of the process. We show details, such as the exact format or syntax of the data required for a particular job, only when they contribute to understanding "the big picture."
2.1.2 LAYOUT FLEXIBILITY AND CONTROL WITH POLYGON-BASED COMPACTION
Compaction allows a high degree of freedom where it is most needed for DSM-technology-based layouts to take full advantage of advances in processing technology. Below are some of the critical benefits of compaction:
1. To avoid "compacting" all layout dimensions to the minimum listed in the process file, the functionality of certain devices has to be recognizable from the layout data. This is possible with compaction. These devices can then be resized with scale factors, sized according to process-specific or electrical-rule-specific criteria, or resized based on text labels that are preserved and kept together with the features they label. These capabilities are critically important for such things as enhancing yield or optimizing performance.
2. Power and ground can be identified and sized differently.
3. Oversizing of metal widths (wide metal), tubs and other desired features is possible.
4. Netlist-dependent design rules are possible, where the netlist is identified from text labels and built from the layer connectivity information in the process file.
5. Two-dimensional abutment can be done automatically, based on layer matching.
6. Within the newly achieved reduced area of the migrated block, the local geometries can be optimized according to area-, capacitance- and resistance-related cost factors.
7. All polygon edges, and particularly contacts (pins), will be placed on a grid to make abutment and area-based router interconnections possible.
8. If there are conflicts in the layout, so that certain layout features do not fit in the available space, graphical feedback will guide the user to eliminate these conflicts. These kinds of conflicts can also be eliminated automatically by the system if so specified by the user.
Figure 2.2 indicates some of this flexibility: the gate lengths of a group of transistors, the gate length of an individual transistor and the supply and clock nets can all be resized independently. The upper part is premigration; the lower part is postmigration. A small sketch after Figure 2.2 illustrates this kind of selective resizing.
Fig. 2.2 Flexibility in Resizing Features with Compaction versus Shrink (before and after migration: supply and clock nets, a group of transistors and an individual transistor are resized independently)
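To make the contrast with a uniform shrink concrete, here is a minimal sketch of how per-feature sizing directives might be expressed. The feature classes, modes and values are invented for illustration; a real retargeting system reads equivalent directives from its own process and command files.

```python
# Hypothetical per-feature sizing directives, in the spirit of Figure 2.2.
# A uniform shrink applies one factor everywhere; compaction-style retargeting
# lets each class of feature carry its own rule.

sizing_rules = {
    # feature class        : (mode, value)
    "signal_metal_width"   : ("min",   None),   # compact to the process minimum
    "power_ground_width"   : ("scale", 0.9),    # keep wide, shrink only 10%
    "clock_net_width"      : ("fixed", 0.60),   # hold at 0.60 um (assumed)
    "logic_gate_length"    : ("min",   None),   # use the new minimum gate length
    "analog_gate_length"   : ("scale", 1.0),    # do not touch labeled devices
}

def target_width(feature, old_width, process_min):
    """Return the width a feature should get in the migrated layout."""
    mode, value = sizing_rules[feature]
    if mode == "min":
        return process_min
    if mode == "scale":
        return max(old_width * value, process_min)
    if mode == "fixed":
        return max(value, process_min)
    raise ValueError(mode)

# Example: a labeled power rail of 2.4 um with a 0.28 um process minimum.
print(target_width("power_ground_width", 2.4, 0.28))   # -> 2.16
```

The point of the sketch is only that each class of feature carries its own sizing decision, which is exactly what a linear shrink can not provide.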
2.1.3 LINEAR SHRINK VERSUS COMPACTION
It should be obvious that the linear shrink, and even the "creative" linear shrink, will run out of steam as DSM advances. Migrating layouts to technologies with increasingly smaller minimum dimensions can not possibly mean just linearly scaling all dimensions, because process improvements do not follow a linear scale. If we consider taking a design from, for example, a 0.28 micron process to a 0.18 micron process, only certain minimum dimensions change from 0.28 microns to 0.18 microns. One of these is generally the minimum transistor gate length, the dimension that dominates the electrical characteristics of the channel between a MOS transistor's source and drain. For reasons of sanity (or to avoid a discussion that will most certainly border on insanity), we assume that every manufacturer advertising these minimum dimensions means the same thing. So, while the gate dimensions for two 0.18 micron processes are the same, many of the other minimum allowed layout dimensions are probably not. Therefore, a chip migrated to the 0.18 micron processes of two different foundries will have different layout densities, different sizes and different power and speed performance, because a comparison of 0.18 micron processes from different vendors will reveal significant differences in the various sets of process rules. As a result, migration to the different 0.18 micron processes of different foundries will yield blocks of different overall dimensions and different performance parameters for one and the same original (source) layout.
Because of the very great influence of layout geometries on chip behavior, we must be able to adjust each and every layout dimension on the chip to the most appropriate value, based on criteria such as speed, power, signal integrity, etc. We can not tolerate the typical behavior of a linear shrink, where the reduction stops as soon as any one of the layout dimensions on the chip reaches its minimum, no matter how oversized all the other layout dimensions may be. Figure 2.3 gives a simple illustration of how the results of a shrink differ from compaction. A to B shows a change of every layout dimension according to a common shrink factor. B to C shows the reduction of the overall dimensions of the cell, using compaction, without any change in the dimensions of the device or supply lines. The difference here is not great, but the benefits add up for an entire chip. As we proceed, the benefits of compaction versus shrink will gradually become more convincing. Chapter 3, on layout manipulations for optimization, will particularly add a lot to the arsenal of pro-compaction arguments.
Fig. 2.3 There is Shrink, and Then There is Compaction
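The way a linear shrink runs out of steam can be captured in a few lines of arithmetic. The sketch below compares, for each feature class, its smallest drawn dimension in a source layout against the minimum the target process allows; the single tightest ratio limits the whole chip. All feature names and values are hypothetical, chosen only to illustrate the mechanism.

```python
# Minimal sketch of why a uniform (linear/optical) shrink runs out of steam.
# The uniform shrink factor is limited by the single tightest ratio between a
# feature's smallest drawn dimension and its target-process minimum.
# All numbers are hypothetical.

source_min_drawn = {      # smallest drawn value of each feature class (um)
    "gate_length":   0.28,
    "metal1_width":  0.40,
    "metal1_space":  0.45,
    "contact_size":  0.32,
}
target_process_min = {    # minimum allowed by the target process (um)
    "gate_length":   0.18,
    "metal1_width":  0.24,
    "metal1_space":  0.24,
    "contact_size":  0.26,   # contacts often scale less aggressively
}

ratios = {f: target_process_min[f] / source_min_drawn[f] for f in source_min_drawn}
limiting = max(ratios, key=ratios.get)       # the feature that stops the shrink
shrink = ratios[limiting]

print(f"uniform shrink factor: {shrink:.3f}, limited by {limiting}")
for f, r in sorted(ratios.items(), key=lambda kv: kv[1]):
    slack = shrink / r - 1.0
    print(f"  {f:13s} could shrink to {r:.3f}; left {slack*100:4.1f}% above its own minimum")
```

Compaction removes exactly this limitation: every edge pair is pushed to its own applicable rule value instead of being held back by the single worst-case proportionality factor.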
2.2 RETARGETING TO A NEW PROCESS WITH COMPACTION
Since migration is generally done to move to a more advanced processing technology for the purpose of improving the performance of the circuit, the layout dimensions most critical for improving performance should become as small as the new process allows. The area of the migrated layout should also become as small as possible, because this minimizes parasitic elements such as parasitic capacitances, and smaller chips are more cost effective. This overall reduction in size generally speeds up the circuits, a desired goal. At the same time, however, one of the key goals of IP reuse is to minimize or even eliminate the amount of rework on a chip after migration. After all, improvement of productivity and time-to-market are some of the key motivators for IP reuse. In other words, the chip should work as before, just faster, have a smaller footprint and, it is hoped, even use less power. While maximizing speed and achieving other desired features, the changes in the layout parameters should not render the chip nonfunctional. Timing is one of the major concerns. Of course, Hard IP migration or Hard IP engineering can only affect chip performance parameters to the extent that they depend on physical layout dimensions. Hard IP engineering has become such a powerful tool largely due to the DSM-technology-induced layout dependency of chip performance. To maximize the chances of a fully functional migrated chip, at least the following criteria will have to be satisfied by migration. Some of them are satisfied automatically by the migration process; other criteria have to be met through the proper specification of certain parameters:
1. The migrated chip will be functionally identical to the source chip.
2. The migrated chip must conform to all target process design rules.
3. The migrated chip must conform to a set of electrical design rules.
4. The migrated chip should exhibit correct timing behavior.
We will now examine the four points above in more detail.
2.2.1 RETARGETING FOR UNCHANGED FUNCTIONALITY
As we suggested in Chapter 1, migrating a chip does not change the chip's topology. With an unchanged topology, the netlist and, therefore, the functionality of the migrated chip remain unchanged. The functionality is unchanged because, when migrating a layout from one process to another, polygons are only shifted according to the new design rules and other constraints specified by the user. The polygon edges move either closer together or farther apart, but they can not "jump over each other." For this reason, connections from any element in the layout to any other do not change. Because of this functional equivalence, the functional simulation and test vectors existing for the "old" chip can be used for the migrated chip. This is substantial, since simulation vector and test vector generation are both extremely time-consuming and costly.
What does this mean for correct functionality? The migrated chip is neither more nor less functionally correct than the premigration chip. It is well known that neither functional simulation nor fault simulation guarantees a functionally correct chip. Nothing can be done with migration to improve the chances of a functionally correct chip.
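The claim that shifted polygon edges preserve connectivity can be spot-checked by comparing netlists extracted before and after migration. The sketch below assumes a hypothetical extracted-netlist form of (device, terminal, net) tuples and simply compares the two connectivity sets; it illustrates the idea and is not a substitute for a real LVS run.

```python
# Illustrative connectivity comparison between pre- and post-migration
# extractions. Each entry is (device, terminal, net); the format is assumed.

def connectivity(netlist):
    """Reduce an extracted netlist to a canonical set of connections."""
    return {(dev, term, net) for (dev, term, net) in netlist}

pre = [
    ("MN1", "gate",  "IN"),  ("MN1", "drain", "OUT"), ("MN1", "source", "VSS"),
    ("MP1", "gate",  "IN"),  ("MP1", "drain", "OUT"), ("MP1", "source", "VDD"),
]
# After migration only the geometry changed, so the extraction should match.
post = list(pre)

missing = connectivity(pre) - connectivity(post)
extra   = connectivity(post) - connectivity(pre)
print("connectivity preserved" if not (missing or extra)
      else f"mismatch: missing={missing}, extra={extra}")
```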
What does this mean for correct chip timing? As opposed to guaranteeing functional equivalence, migration can not guarantee timing equivalence. We will discuss this later. However, the good news is that migration can fix timing problems, whether they are new problems or existed in the premigration chip.
2.2.2 RETARGETING TO CONFORM TO TARGET PROCESS DESIGN RULES
If all the processing rules were entered correctly in the process file, the migration software will guarantee that the migrated chip conforms to the target process design rules. Since entering process design rules in the process file is a manual operation, it is prone to error. However, the correctness of the process file can be tested by migrating only a small part of the layout. This is much more efficient than having to verify an entire layout, and problems in the process file can be corrected literally "on the fly." Once the process file is proven correct, the migrated layout, and hence the migrated chip, will be design rule correct, even if the "old" chip had some minor known or unknown violations of the layout rules.
The most obvious rules to be specified are the processing-based physical layout rules. We will focus on the ones that pertain to and are required for Hard IP migration. Processing rules reflect the limits of the processing capability to achieve the highest possible chip layout density with an acceptable manufacturing yield. These processing rules focus on what is possible in terms of processing, such as optical resolution, etching resolution, diffusion limits, ion implant limits and much more. As such, improvements in processing technology aim at improving performance or easing the manufacturing of chips. However, the performance of a processing technology is not measured by the resulting speed of circuits, because the speed of a certain chip depends on both the process and the design techniques. The performance of processes is measured by the minimum achievable dimensions of certain layout geometries, which are considered critical. Smaller minimum critical dimensions have traditionally resulted in higher speed performance circuits, especially for the active devices. Although smallness in layout geometry is still crucial to achieving high speed, other potentially more critical issues concerning layout geometry are playing an increasingly important role in determining speed and other critical chip parameters. We examine these issues in more detail in Chapter 3, when we discuss optimization, and in Chapter 5, when we discuss DfM. However, irrespective of the question of speed, smaller dimensions still mean an increase in the number of functions that can be squeezed onto a chip. The number of possible devices that can be placed on a chip is also growing, because larger and larger chips can be successfully fabricated.
The secret to setting minimum allowable layout dimensions is to adjust them so that chips can get larger and still be manufactured in volume with an acceptable yield, in conjunction with an increasingly cleaner and better controlled processing environment. Of course, what is an acceptable yield depends on many factors. Some of these factors are trade-offs between the cost of processing, the complexity of a chip and how much can be charged for it, the manufacturing volume required at the time, what the competition is doing, etc. In addition, special adjustments are made in a processing line to achieve the best possible performance, depending on the desired final product. This means that a process for the fabrication of, for instance, DRAMs will be tweaked differently than one in which microprocessors are the desired product. So, what is an acceptable yield and/or performance for a certain chip complexity or a certain production volume may not be for another.
Since many of the potential yield and performance problems encountered are related to layout, Hard IP migration is a way to directly address and minimize such problems. By just focusing on layout and Hard IP migration, the number of rules required to specify horizontal geometry, i.e. layout rules, turns out to be quite small and conceptually simple. Of course, with the evolution of processing farther into the area of DSM, more complex rules will most probably emerge. There is consequently no need to look into all the intricacies and details specified in a foundry-generated processing manual. We focus only on the ones that are important for the Hard IP migration task to be performed. The layout rules/dimensions specified for Hard IP migration are:
1. The minimum width of a layout feature, such as metal width, diffusion or implant width, contact openings, etc.
2. The minimum spacing between two layout features on the same layer, such as the separation between two metal1 lines, poly lines, etc.
3. The minimum spacing between two layout features on different layers.
4. The minimum overlap of layout features on two different layers.
5. The minimum overhang of one layer combination over a combination of that layer and other layers.
Some of the more esoteric rules, such as antenna rules and minimum area rules, also need to be considered.
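As an illustration of how compact such a rule set is, here is a minimal sketch of a rule table covering the five categories just listed, together with a width and a spacing check. The rule names, layers and values are invented; real process files are far richer and foundry-specific.

```python
# Minimal sketch of a target-process rule table covering the five rule
# categories listed above. Names and values are invented for illustration.

rules = {
    ("width",    "metal1"):            0.24,   # minimum metal1 width (um)
    ("space",    "metal1", "metal1"):  0.24,   # metal1-to-metal1 spacing
    ("space",    "poly",   "contact"): 0.12,   # spacing across two layers
    ("overlap",  "metal1", "contact"): 0.06,   # metal1 must overlap contact
    ("overhang", "poly",   "active"):  0.18,   # poly endcap past active
}

def check_width(layer, drawn):
    need = rules[("width", layer)]
    return drawn + 1e-9 >= need, need

def check_space(layer_a, layer_b, drawn):
    need = rules[("space", layer_a, layer_b)]
    return drawn + 1e-9 >= need, need

# A compactor uses the same table the other way around: instead of flagging a
# violation, it pushes the two polygon edges apart until the rule value is met.
ok, need = check_width("metal1", 0.30)
print("width OK" if ok else f"width violation: 0.30 < required {need}")
ok, need = check_space("metal1", "metal1", 0.20)
print("spacing OK" if ok else f"spacing violation: 0.20 < required {need}")
```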
2.2.3 RETARGETING FOR RELIABILITY
Some layout geometries determine whether a circuit will function, and whether it will function reliably over a long period of time and under adverse environmental conditions. Many of these layout parameters are literally prescribed by the processing technology used to fabricate the chip and, because of this very close connection to processing, they are specified in the process file. As such, they tell the layout or migration tools what they are allowed to do, and they need to be part of the specifications for any chip, irrespective of its electrical function. They are parameters that generally need to be respected to prevent catastrophic chip failures, not just inadequate performance. Some of these issues are:
1. Latch-up has been a curse for an otherwise almost perfect technology, CMOS. Years of experience have largely eliminated this problem. For any particular process, foundries give strict layout guidelines; if they are followed, latch-up will not present any problems. These layout rules are simply put into the process file, and the layout tool or the migration tool will take them into account. It generally boils down to placing a lot of contacts in strategic places, which is particularly easy to do with retargeting. We will later show an example of how this can be accomplished elegantly and time-efficiently in the migration process.
2. Electromigration is another very destructive phenomenon in chips. The maximum allowable current density for minimizing electromigration is a key factor that determines the minimum metal width. Obtaining the correct values is the responsibility of more than one person. In fact, the specification of a maximum current density through metal interconnects to prevent electromigration involves many disciplines. Just how much current can flow through a certain metal width depends on the thickness, the profile of the etched metal line, the temperature of the metal when the circuit is in operation, the chemical composition of the metal lines, etc. Also, the tendency for hotspots to occur on chips, and not just the overall temperature of the metal during operation, is particularly crucial to the long-term reliability of a circuit. Data on electromigration are generally based on years of research and experimentation. That is why the way to create metalization with the highest possible current density is part of the carefully guarded technical know-how of chip fabrication companies. This is a subject of intensive research, because it is so crucial to chip performance, and maximum allowable current densities keep increasing as a benefit of the latest technological advances and the latest discoveries in metalization "additives." This is why the close cooperation of many disciplines, reflecting a lot of experience, is necessary to get the most appropriate values. Needless to say, there may be substantial differences between the current-carrying capabilities of various foundries' metalization. Since the current-carrying capability of chip metalization is so critical to layout density and reliability, it is well worth paying attention to potentially sporadic advances in this area and taking advantage of them. This factor alone may justify changing foundries. It is a strong argument for retargeting with the compaction methodology, since this allows such a switch after a few computer runs to implement the new process design rules and optimize timing as needed.
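The electromigration constraint translates directly into a minimum width for each power or signal line. A rough first-order sketch of that calculation follows; the current-density limit and metal thickness are placeholder values, not data from any foundry.

```python
# First-order electromigration sizing: the width of a metal line must keep the
# current density I / (w * t) below the foundry's limit J_max.
# The numbers below are placeholders, not real foundry data.

J_MAX_A_PER_CM2 = 2.0e5      # assumed max DC current density (A/cm^2)
METAL_THICKNESS_UM = 0.5     # assumed metal thickness (um)

def min_width_um(current_a):
    """Minimum metal width (um) that keeps the current density below J_max."""
    t_cm = METAL_THICKNESS_UM * 1e-4          # um -> cm
    w_cm = current_a / (J_MAX_A_PER_CM2 * t_cm)
    return w_cm * 1e4                          # cm -> um

# A supply rail carrying 5 mA:
print(f"required width: {min_width_um(5e-3):.2f} um")
```

If a new foundry offers a higher current-density limit, the same calculation immediately yields narrower allowable rails, which is one reason a quick rule-driven rerun of the compactor can pay off.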
2.3 CORRECT ELECTRICAL AND TIMING BEHAVIOR OF MIGRATED CHIPS
"Migration engineering" is such a powerful methodology because there is a very direct and close relationship between the performance of a chip and its layout geometries. This is particularly true for MOS technologies. It is less true for chips with bipolar junction devices or with a mixture of various types of devices, where vertical geometries such as junction depths are just as important or even more so. This observation immediately suggests that chips with a mixture of devices must be treated differently. We will focus here on MOS technologies. Of course, parameters related to vertical geometries, such as oxide thickness, k-factors of the oxide, interconnect metal thickness, junction depths, etc., are also critical to MOS technologies. However, they are primarily process-related parameters and, in terms of chip design or migration, they are simply design parameters that have to be accepted. We will concentrate on horizontal (layout) geometries, focusing on postmigration first-time success and performance optimization.
With a clear focus on layout geometries, horizontal geometries have various degrees of influence on the migrated layout. They can be ranked in significance from destructive, if not satisfied, to values required to achieve optimized performance of a migrated chip. We will now examine these layout parameters. How well can one guarantee a migrated chip that is both fully functional and correct in terms of timing? Of course, "guarantee" really means that the claim is based on the best engineering analysis and judgment. Even a full simulation/verification cycle may not be able to guarantee first-time success for a complex VLSI chip. In the DSM area, first-time success is even more difficult to guarantee. We know the physics, but this knowledge is not always easy to apply. Even in pre-DSM days, the promise of "correct by construction" was often more a marketing slogan than reality. However, since DSM "surprises" are fundamentally layout-geometry-based, intelligent layout and postlayout engineering can alleviate many of the problems. The tighter the link to the back-end in the design cycle, the more successfully DSM surprises can be avoided. Clearly, Hard IP engineering is not just a powerful link to the back-end.
Hard IP is the back-end! Some layout geometries are enforced to influence the electrical behavior of a chip, others to affect the timing behavior. Electrical requirements, such as not to exceed a certain resistance or a certain voltage drop in a metal interconnect, are satisfied by specification of certain parameters. The preservation of timing relationships through the migration process is intimately related to how migration or postlayout optimization is done. We will now discuss how to affect both of them.
2.3.1 RETARGETING FOR CORRECT INTERCONNECT TIMING
As previously stated, a key reason for IP reuse is to benefit from former engineering investments, to minimize additional engineering investments while possibly enhancing performance, to increase the number of functionalities with S-o-C and, at the same time, to minimize the time-to-market. We have already explained that Hard IP migration preserves the functionality of a chip. That a migrated chip satisfies all the layout rules is enforced by the migration software, if all the processing and migration files are set up correctly. This needs to be checked; the good news is that checking a very small piece of a migrated layout is sufficient to check the correctness of the process file setup. The remaining issue is the timing of the migrated chip. Why should the timing of the migrated chip still be OK?
Timing in digital circuits and timing in analog circuits can not be treated the same. We will discuss the challenge of IP reuse for analog circuits later; timing is definitely not the only difficulty that needs to be addressed for analog circuits. For now, we will address only Hard IP migration of digital circuits. Having to deal with the signal processing of only digital signals has many advantages. Many of these advantages are well known and need not be enumerated here. There are, however, some interesting issues that need to be pointed out that are critical for explaining why digital chips keep on working, even if parasitic capacitances (especially interconnect parasitics) have not been accurately determined. This is particularly interesting for DSM technologies, where interconnects are starting to determine the limits of the speed of a chip. For now, the following broad statement gives an idea of one key timing issue: the most critical aspect of chip timing affected by interconnect delays is the relative timing between the various signal paths. While the chip speeds up, the timing relationships between the signals should be maintained.
Accordingly, for digital circuits to work with respect to timing, the relative timing between signals is much more critical than the absolute delay of any one of them. The actual delays, on the other hand, are more critical for the overall speed of the circuit. In other words, a circuit fails when the time relationship between certain signals is off. When a circuit's absolute delays are off, it runs either faster or slower than expected but will still work. The more the interconnects on a chip dominate the timing, the more the balance and the relationships between the lengths and the parasitics of the interconnects should be maintained. For a linear shrink, this is very much the case. For compaction, this is sufficiently maintained for the relative timing to also remain unchanged. This is in big contrast to what would happen if the chip were rerouted as it is taken from one process to another. Therefore, keeping the routing of the "old" chip not only saves time and money but also significantly lowers the risk that the chip in the new process will have completely different absolute and relative timing.
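The distinction between relative and absolute timing can be made concrete with a toy check: scale all path delays by a common factor, roughly what a well-behaved migration does, and verify that the ordering and ratios between competing paths survive even though every absolute delay changed. The path names, delays and speed-up factor below are invented.

```python
# Toy illustration: a uniform speed-up changes absolute delays but preserves
# the relative timing between competing paths (e.g. data vs clock paths).
# Delays are invented, in nanoseconds.

pre_delays  = {"data_path": 4.0, "clock_path": 5.0, "reset_path": 6.5}
speedup     = 0.6                                  # assumed overall scaling
post_delays = {p: d * speedup for p, d in pre_delays.items()}

# The absolute delays all changed...
print(post_delays)

# ...but the relationships that make the circuit work did not:
pre_margin  = pre_delays["clock_path"]  - pre_delays["data_path"]
post_margin = post_delays["clock_path"] - post_delays["data_path"]
print(f"data arrives before clock pre: {pre_margin > 0}, post: {post_margin > 0}")
print(f"ratio clock/data pre: {pre_delays['clock_path']/pre_delays['data_path']:.2f}, "
      f"post: {post_delays['clock_path']/post_delays['data_path']:.2f}")
```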
Signal delays must be calculated accurately to predict the timing performance of a digital circuit. Thus, if interconnects dominate the timing, the delay characteristics of the interconnects must be accurately calculated. A discussion of how to determine delay and other characteristics is presented in Chapter 3. For now, we merely wish to state that the delay characteristics turn out quite accurately, even with the approximations generally made to obtain calculated results with manageable effort. We will see why in Chapter 3. We will also see that some other critical parameters suffer in this approximation process and can be way off.
2.3.2 RETARGETING FOR CORRECT TRANSISTOR TIMING
As a first-order approach, as both the W and L of transistors are reduced in accordance with the new process rules, the W/L ratio of the transistor gates should be kept at the premigration values, instead of both W and L being changed to the minimum allowed dimensions. This should maintain timing relationships between the transistors consistent with the original, premigration circuit. For a linear shrink, W/L remains unchanged automatically. For compaction, it has to be specified; the advantage, however, is that the W/L ratio can then be anything we want it to be. This flexibility will be useful for optimization. Furthermore, while the W/L ratios of the transistor gates are the most obvious layout dimensions not to be minimized as allowed by processing, other layout dimensions may also have to be different from the minimum values allowed by processing: some, as mentioned before, because of reliability, others because of the electrical criteria of the chip or for reasons of manufacturability. Some of these layout dimensions are generic to the process and the same for all the chips to be fabricated in a certain process. Others are related to the performance of a particular circuit. Even with EDA layout software, layouts are often automatically generated with the minimum dimensions allowed by the process file.
Finally, let us assume a chip is migrated from an old process (e.g. 0.8 microns) to an aggressive technology (e.g. 0.18 microns). The timing of the migrated chip will change due to the change in the speed of the active devices as well as of the interconnects. Even in this case, some of the arguments for maintaining the relative timing may still apply to a certain degree, especially for the interconnects, since they all get reduced together. Of course, for such a large step in technology the timing of the chip undergoes a rather radical change, from being active-device-dominated to interconnect-dominated. This is a good example of a situation where large changes in transistor W/L may be needed to "get the timing back." However, adjustments in transistor W/L ratios alone may not suffice to fix the timing discrepancies resulting from such a large jump in technologies. Fortunately, more than just W/L can be adjusted in a layout to bring the timing in line, as we suggest in the next section and discuss in more detail in Chapter 3.
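The first-order resizing rule is easy to state in code: pick the new gate length (usually the new process minimum) and choose W to preserve the premigration W/L ratio, never dropping below the new minimum width. The function below is a minimal sketch with placeholder dimensions.

```python
# First-order transistor resizing for migration: keep W/L constant while
# moving L to the new process minimum. All values are placeholders (um).

def resize_gate(w_old, l_old, l_new_min, w_new_min):
    """Return (W, L) in the target process that preserves the W/L ratio."""
    ratio = w_old / l_old
    l_new = l_new_min                 # most digital gates go to minimum length
    w_new = max(ratio * l_new, w_new_min)
    return w_new, l_new

# A 2.8/0.35 um device migrated to a process with a 0.18 um minimum length:
print(resize_gate(w_old=2.8, l_old=0.35, l_new_min=0.18, w_new_min=0.24))
# -> (1.44, 0.18): the W/L ratio of 8 is preserved, both dimensions shrink.
```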
2.3.3 RETARGETING FOR CORRECT INTERCONNECT AND TRANSISTOR TIMING
That transistors traditionally controlled most of the timing in VLSI circuits is still evidenced by the fact that there are currently no commercial tools available that go beyond transistor modifications to adjust the timing. While compaction can modify any layout geometry to any desired value within the available space, the decision on how much to change must come from other tools. Presently, however, only university-level algorithms are able to analyze more than transistors. This is unfortunate, as we will see in Chapter 3. We will show that the layout timing can be changed dramatically if we modify both the transistors' W/L ratios and the load they are driving, the interconnects. These simultaneous modifications will be the basis for some serious layout-based performance optimization.
2.4 INPUTS, FEEDBACK AND LEVERAGE ON THE LAYOUT
We will now examine the various steps of a migration. We consider the input data required, how to influence the path the migration process follows and, finally, how we can use the data emerging from the migration process to reiterate the migration steps and even to modify processing. We start the Hard IP migration process with data describing a physical layout, the "source" layout. This source layout represents a fully functional block or chip in some MOS technology. The layout data contains everything needed to produce a chip; every polygon-edge position in this layout is precisely defined. The source of this database may be a VLSI circuit designed with one or a mixture of design methodologies. Circuits based on any design methodology can be migrated, ranging from handcrafted to field programmable, although it may make sense to migrate a field-programmable device only in conjunction with other circuitry on a chip. It all depends on how much value there is in reuse compared to redesign of a particular circuit function. In fact, what represents a design worthwhile to migrate as Hard IP will be easier to decide later in this book, after we have a feel for the effort required for Hard IP migration versus the benefits.
Depending on the methodology used for a design, the layout may be more or less optimized. A fully custom design may be optimized based on the best criteria at the time of the design or just for the particular process for which it was designed. Other designs may have been done based on older libraries. Whatever the reasons, there is usually room for improvement. Hard IP engineering allows such improvements at postlayout, no matter what design methodology was used. The knowledge gained from the layout-based performance improvements can then often even be used to fine-tune a process, as we will discuss in more detail in Sections 2.4.3 and 2.4.4. We all know that the demands for performance made on today's VLSI-based designs and chips are very high. Let us look at what type of improvements we can achieve with the compaction methodologies discussed here. First, we examine the input data required to migrate a design, including the user-specified parameters that steer the compaction process towards the desired results. In contrast to the layout optimization discussed in Chapter 3, the main goal here is to retarget an existing design and obtain a reliable, functional and timing-correct migrated design. We will now explore the various steps and approaches we can take towards achieving the desired results.
2.4.1 SETUP FOR THE MIGRATION PROCESS
We suggested earlier that there are three phases in a migration without an optimization cycle. The first phase is setting up a system for migrating a layout and setting up the process files. This phase may take some labor but is not difficult; we just need to make sure there are no errors in the process files. The second phase concerns exactly how to go about migrating a design. This is the phase for setting up the proper conditions for a compaction run. Migrating a simple little block, a library, a memory, a complex chip or even a mixture of chips for an S-o-C solution are all possibilities. There are many variations, depending on the nature of the design to be migrated. The trade-offs are generally between the time and effort required for the setup versus the quality of the results and the ease with which the resulting layout can be verified. Should we migrate a chip or block hierarchically or flat? Should regularity in the layout be taken advantage of during migration? Should the routing be migrated separately or as part of some blocks? Or would rerouting even make sense in some cases?
Time spent properly setting up for migration is comparable to good planning for any project: if done well, there is usually a good return on the investment. And, as with any engineering tool, experienced users will get better results. The third and final phase is the compaction run itself. This should be push-button and very predictable. Once the retargeting environment has been properly set up and good results have been achieved, a rerun of the compactor with modified process rules is straightforward and just a matter of computer run time. This is the reason why making last minute adjustments is easy and fast and can produce great benefits in performance and, potentially, optimization of the process. Considering the rate at which processing technologies are changing, last minute retargeting always seems beneficial.
2.4.2 A BIRD'S EYE VIEW OF A MIGRATION
In Figure 2.4, we show the migration environment that serves as a basis for the discussions in this chapter. Figure 2.4 illustrates the basic process and the data components to be specified for Hard IP migration. The polygon-compaction engine illustrated in Figure 2.4 is at the heart of the migration process. Most of the discussions will be based on one-dimensional compaction. One-dimensional compaction is compaction along the directions of the orthogonal coordinate system: polygons move along the x-coordinate and along the y-coordinate, one direction at a time. To show the benefits derived from compaction and to understand how to optimally affect the results, we have to examine the input data required, the process of compaction and, finally, the outputs resulting from the compaction process.
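The core idea behind one-dimensional compaction can be sketched as a longest-path computation on a constraint graph: each minimum width or spacing rule becomes an arc that forces one edge position to stay at least a given distance beyond another. The node names, geometry and rule values below are invented, and production compactors handle jogs, two-dimensional interactions and cost functions far beyond this sketch.

```python
# Minimal sketch of one-dimensional (x-direction) compaction as a
# longest-path problem on a constraint graph. Each node is a layout element's
# x-position; each arc (a, b, d) requires x[b] >= x[a] + d, where d comes from
# a minimum width or spacing rule. Example values are invented.

from collections import defaultdict, deque

def compact_1d(nodes, constraints):
    """Return minimal legal x-positions satisfying x[b] >= x[a] + d."""
    succ = defaultdict(list)
    indeg = {n: 0 for n in nodes}
    for a, b, d in constraints:
        succ[a].append((b, d))
        indeg[b] += 1
    x = {n: 0.0 for n in nodes}                 # everything pushed to the left
    queue = deque(n for n in nodes if indeg[n] == 0)
    while queue:                                # longest path in a DAG
        a = queue.popleft()
        for b, d in succ[a]:
            x[b] = max(x[b], x[a] + d)
            indeg[b] -= 1
            if indeg[b] == 0:
                queue.append(b)
    return x

# Left cell edge -> poly gate -> metal1 line -> right cell edge
# (spacings in um, hypothetical target-process values).
nodes = ["left", "poly", "m1", "right"]
constraints = [
    ("left", "poly", 0.20),   # poly must sit 0.20 from the cell boundary
    ("poly", "m1",   0.30),   # gate-to-metal1 spacing
    ("left", "m1",   0.45),   # metal1 clearance from the boundary
    ("m1",  "right", 0.25),   # metal1 to the right cell edge
]
print(compact_1d(nodes, constraints))
# -> {'left': 0.0, 'poly': 0.2, 'm1': 0.5, 'right': 0.75}
```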
Fig. 2.4 The Major Components of a Migration Environment
Data to be specified for compaction
Figure 2.4 makes it clear that we need to specify:
* The source layout to be migrated.
* The target process and its layout rules.
* Layout rules related to electrical behavior.
Figure 2.4 is a simplistic, high-level conceptual view of both the inputs and the outputs in a migration environment. This is the minimum data that has to be specified to make migration possible, and we will discuss it now. Later, we will examine additional possibilities for using layout manipulations to optimize circuit performance or yield in a new process, utilizing the techniques discussed here, with or without migrating the layouts.
2.4.3 OUTPUT DATA (FEEDBACK) FROM THE COMPACTOR
The compaction engine works on the layout of the blocks or chip to be migrated. If the compaction is one-dimensional, it will be performed on the source data in any desired order: the x-axis can be compacted first and then the y-axis, or vice versa, as chosen by the user. The quality of the results may differ depending on the order. Since we have been discussing process-parameter-based design rules, we now explore how feedback from the compactor can show the influence of certain design rules on the retargeted layout. In Figure 2.5, we show a migrated layout along with feedback from the migration engine. In the layout, we see straight, fine, yellow lines that highlight polygon edges. In the illustration on the left, these lines are vertical and the separations between them along the x-direction indicate a critical path in the x-direction. In the illustration on the right, the lines are horizontal and the separations between them indicate a critical path in the y-direction. In both cases, these paths represent a layout-related critical path. Critical path means that all polygon edges along this path are at the minimum distance from each other allowed by the process-based or any other design rules. This is the highest density that can be achieved without violating the design rules along the critical path. The critical path can be an indication of the quality of a layout. For instance, in a memory array, the polygons delimiting the memory cells should be placed as closely to each other as the process allows, because they determine the packing density of the memory array. Any dimensions larger than the minimum possible for the cells are multiplied many times in the array, yielding a memory array that is larger than necessary.
Fig. 2.5 The Concept of 'Critical Path' in Layout Migration
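A minimal sketch of how such critical path feedback could be derived, assuming the simplified edge/constraint model used in the earlier sketch: a constraint is "tight" when the two edges it relates end up at exactly the minimum allowed distance, and chaining tight constraints from the boundary to the farthest edge yields the highlighted path. The positions and rule values below are invented.

```python
# Sketch of critical-path extraction from a compacted result: walk backwards
# along constraints that are exactly at their minimum ("tight") from the
# rightmost edge to the fixed boundary.
def critical_path(x, constraints):
    """x: dict edge -> position; constraints: (src, dst, min_dist) triples."""
    tight = {}
    for a, b, d in constraints:
        if x[b] == x[a] + d:              # this pair sits at the rule minimum
            tight.setdefault(b, a)
    end = max(x, key=x.get)               # rightmost edge
    path = [end]
    while path[-1] in tight:
        path.append(tight[path[-1]])
    return list(reversed(path))

# Positions as a compactor might produce them (nm), plus the rules used.
x = {"left": 0, "m1_a": 230, "m1_b": 510, "m2_a": 400}
rules = [("left", "m1_a", 230), ("m1_a", "m1_b", 280), ("left", "m2_a", 300)]
print(critical_path(x, rules))   # ['left', 'm1_a', 'm1_b']: all at minimum
```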
In Figure 2.6, we show how knowledge of the critical path in a layout, obtained as feedback from a compaction run, can be used to improve the layout. We see this in the chain of cells on the left side of the illustration. In the cell farthest to the right, the L-shaped metal line in the upper right corner gets as close to the metal line next to it as the process rules allow. The cell farthest to the right is therefore the tallest cell and determines the height of all the other cells, because pitch matching requires this. With a minor manual intervention in the layout, we can change an entire standard cell row. Moving the "knee" of the L-shaped metal line shown in Figure 2.6 along the critical path in the metal layer to the right by just a small amount gives the compactor the freedom to reduce the height of the cell farthest to the right and, accordingly, of all the cells shown on the right of Figure 2.6. A small increase in the length of the chain of cells may result in significant savings in the total area of the standard cell chain.
Fig. 2.6 Critical Path Information for a Chain of Abutted Cells
In Figure 2.5, we showed a path along which all dimensions are at a minimum. It is also interesting to know which of the minimum geometries of the process parameters cause the highest number of polygon edges in a layout to be placed at the minimally allowed separations. This may be a "point of negotiation" with a processing engineer. We show this type of information in the following section on statistical feedback.
2.4.4 STATISTICAL FEEDBACK ON THE LAYOUT The statistical data displayed in Figure 2.7 lists the number of times a certain layout geometry reaches the minimum prescribed by a particular process rule. This data is based on a specific layout and varies from layout to layout. For the data in Figure 2.7, the layout is a RAM cell. The table contains the following information:
1. The first column is the name of a design rule defined in the process file.
2. The second column is the value of this rule.
3. The third column is the number of layout geometries using this design rule.
4. The fourth column is the percentage of the cell dimensions using this rule.
5. The fifth column is the part of the cell dimension governed by this design rule.
The most obviously useful column is the fourth, showing the percentage. In the example given here, no single rule strongly dominates in limiting a further reduction of the layout dimensions of the cell. Still, if the leading rule could be relaxed a bit from the processing point of view, the RAM cell could become smaller. For a RAM or ROM cell that is repeated millions of times, even a minor change can make a big difference.
Rule    Value   Number   Percentage   Distance
W701     1200    95.28        20.36    9528.32
A702      150    36.00        11.54    5400.00
A802      150    31.79        10.19    4768.75
H703       75    58.00         9.29    4349.99
H203      300    14.00         8.97    4200.00
E205      300    13.00         8.33    3900.00
H804       75    30.83         4.94    2312.50
W801      150    13.33         4.27    2000.00
A504      150    10.80         3.46    1620.00
E707      100    12.33         2.46    1233.33
H506      100    11.80         2.52    1180.00
W501      100    10.97         2.34    1096.67
H705       75    14.00         2.24    1050.00
W753      100     7.58         1.62     758.33
E507       50    12.80         1.37     640.00
A202      200     3.00         1.28     600.00
H760       75     7.08         1.14     531.25
A852      150     2.37         0.76     356.25
H853       75     4.75         0.76     356.25
Total                        100.00%   46800.00
Fig. 2.7 Feedback on Where Maximum Density has Been Reached
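A sketch of how a table like Figure 2.7 could be accumulated from the tight constraints found along critical paths; the rule names and counts below are illustrative, not the actual RAM cell data.

```python
# Sketch of Figure 2.7-style statistics: for each process rule, count how
# often a constraint derived from it ends up "tight" (at the rule minimum)
# in the compacted layout, and report each rule's share of the total
# critical distance.  Rule names and counts are invented.
def rule_statistics(tight_rule_usage):
    """tight_rule_usage: list of (rule_name, rule_value, occurrences)."""
    rows = [(rule, value, n, value * n) for rule, value, n in tight_rule_usage]
    total = sum(r[3] for r in rows)
    rows = [(rule, value, n, 100.0 * d / total, d) for rule, value, n, d in rows]
    return sorted(rows, key=lambda r: r[4], reverse=True)

for rule, value, number, pct, distance in rule_statistics(
        [("W701", 1200, 8), ("A702", 150, 36), ("H703", 75, 58)]):
    print(f"{rule:5} {value:6} {number:6} {pct:7.2f} {distance:10.2f}")
```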
Statistical feedback suggests the following observations:
1. The one layout dimension that "pushes" hardest against a process design rule (here W701) may be tweaked in the process. For a process specifically designed for memories, this may be worthwhile.
2. As a consumer looking for foundries, it might be worthwhile to compare various competing foundries on this basis. Needless to say, this gives a consumer some valuable negotiating power.
3. The statistical data might suggest to the designer of the migrated circuit how to make some minor changes in the design to further reduce chip size. However, a dimension that occurs in a layout so many times may be difficult to change.
2.4.5
KEEPING POLYGON EDGES ON A GRID Another advantage of polygon compaction over a linear shrink is that all the features in a layout can be kept on a grid whose resolution is specified by the user. Having every polygon edge on a grid is critical for pitch matching between migrated standard cells and for guaranteeing connections with area-based routers. It is often easier for routers to interconnect features that are on a grid. Even for proportional compaction, which at first seems like a linear shrink, every polygon edge is still on a grid. Proportional compaction is often used for metal sizing on a layer-by-layer basis and for layouts of analog circuits, for which symmetries are maintained on the basis of proportionality. Remember, a linear shrink can not change any one layer proportionally without changing all the others.
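One simple way to picture how gridding can be enforced is to round every minimum distance in the process rules up to a multiple of the chosen grid before compaction, so all resulting edge positions fall on the grid. A production compactor handles this inside its constraint solver; the sketch below shows only this preprocessing idea, with invented rule values.

```python
# Sketch of rule preprocessing for gridded compaction: every minimum distance
# is rounded up to a grid multiple, so a longest-path solve starting from an
# on-grid boundary produces only on-grid positions and never under-spaces
# any pair of edges.
import math

def grid_rules(constraints, grid):
    """constraints: (src, dst, min_dist) triples; grid: resolution in nm."""
    return [(a, b, math.ceil(d / grid) * grid) for a, b, d in constraints]

print(grid_rules([("left", "m1_a", 230), ("m1_a", "m1_b", 283)], grid=5))
# [('left', 'm1_a', 230), ('m1_a', 'm1_b', 285)]
```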
2.4.6 COMPACTION-INDUCED JOGGING Besides serving as an indicator for possible process rule or foundry trade-offs, the critical path can serve other purposes. One of them, shown in Figure 2.8, is helping to achieve a denser layout than would be possible without jogging. Another benefit is the elimination of conflicting layout demands, as shown in Figure 2.9. In Figure 2.8, we show how the compactor introduces "jogs," sometimes called "doglegs," to decrease the height of this cell. This is done automatically by the compactor in connection with a critical path that suggests an improvement in the size of a cell. Jogs will only be introduced where benefits result, i.e. around a critical path. The critical path is indicated by the dotted line in Figure 2.8. Furthermore, jogging can be turned off by the software user, if undesired.
Fig. 2.8 Jogging (Doglegs) for a Denser Layout
Introducing jogging to resolve conflicting demands, generally referred to as overconstraints, is also very useful. Here, as illustrated in Figure 2.9, the compactor encounters conflicting demands, one from the process rules and the other from the electrical rules. As we can see from the dimensions in the illustration (50+30+50), the contacts to the left and the right of the gate can not fit within the length of the gate (100). To satisfy both design rules, jogging is needed. The end result is satisfaction of all the required dimensions. Overconstraints are graphically highlighted in the layout as feedback from the compactor. They are one of the greatest batch run challenges for compaction. Unfortunately, an overconstraint violation arising from conflicting demands can very easily happen when setting up the compaction run. Depending on the user's settings, such a violation may stop the batch process. However, this feature can be disabled by the user, or the system may be allowed to relax the conflicting constraints. This is often desirable, because it is not very pleasant when the process is stopped in the middle of something like an overnight computer run.
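The arithmetic behind such an overconstraint is simple enough to sketch. The numbers below follow the ones in Figure 2.9, and the check itself is only an illustration of the kind of condition a compactor reports, not its actual diagnostic code.

```python
# Sketch of overconstraint detection for Figure 2.9: the sum of the minimum
# distances demanded along one chain of edges exceeds the span the enclosing
# geometry provides, so no jog-free solution exists.
def check_overconstraint(chain_min_dists, available_span, label=""):
    required = sum(chain_min_dists)
    if required > available_span:
        print(f"overconstraint {label}: need {required}, have {available_span} "
              f"-> introduce a jog or relax a rule")
        return False
    return True

check_overconstraint([50, 30, 50], 100, "contacts around gate")
```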
Fig. 2.9 Jogging to Eliminate Conflicting Specifications
With relaxed overconstraint checking, the compactor still flags the problems it encounters but lets the run continue. This at least enables the run to reach its conclusion. Full overconstraint checking is then turned on again only at the very end, when it appears that all overconstraints have been fixed.
2.4.7 CONTACT OPTIMIZATION AFTER MIGRATION For reliability and other reasons, electrical contacts in VLSI designs are often made of multiple contacts, as opposed to the single big one shown in the rectangle at the far left of Figure 2.11. Either of the two contact geometries can be chosen by the user. If a multiple-contact structure is desired, a lot of work can be saved by generating it automatically during compaction, while implementing only the number of contacts that still fits the dimensions of the new technology. This is shown from left to right in Figure 2.10.
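The counting involved in automatic recontacting can be sketched as follows, using hypothetical target-process values for contact size, spacing and enclosure.

```python
# Sketch of the recontacting arithmetic: how many minimum-size contacts fit
# into the migrated geometry.  The default values (nm) are invented and stand
# in for the contact cut size, cut-to-cut spacing and metal enclosure rules
# of the target process.
def contacts_that_fit(region_length, cut=160, space=180, enclosure=60):
    usable = region_length - 2 * enclosure
    if usable < cut:
        return 0
    # The first contact takes `cut`, every further one takes `space + cut`.
    return 1 + (usable - cut) // (space + cut)

print(contacts_that_fit(1500))   # 4 contacts fit in a 1500 nm run
```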
Fig. 2.10 Automatic Recontacting: A Labor Saving Feature
2.4.8 MINIMIZATION OF PARASITIC CAPACITANCE OR RESISTANCE
Visualizing compaction in the x-coordinate direction for the moment, polygon compaction "pulls" all the vertical polygon edges as closely towards x=0 (to the left) in the coordinate system as the process and the electrical design rules allow (compaction in the y-coordinates would pull horizontal edges towards y=0). The result of this is illustrated in the center rectangle in Figure 2.11. For many reasons this is undesirable. The migrated layout should closely resemble the original layout.
Fig. 2.11 Maintaining Relative Positioning Through Migration
We need to optimize the layout so that it approaches the geometric relationships of the source layout. After all, we know that the relationships, the proportions between the layout geometries of the original layout, led to a workable design. Very similar proportions can be achieved in the migrated layout by asking the compactor to optimize the new layout so that it resembles the old one while maintaining the newly achieved layout area. This process is called wire length or area optimization and is done automatically as part of the compaction process.
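One common way to formalize this kind of wire length or area optimization is as a weighted linear program: hold the compacted outline fixed, then pull every edge as close to its position in the (scaled) source layout as its weight and the design rules allow. The sketch below, which assumes SciPy is available and uses invented edge names, rules, targets and weights, only illustrates the formulation; it is not the optimizer of any particular tool.

```python
# Sketch of wire-length / area optimization as a weighted LP: minimize the
# weighted deviation of each edge from its target position while keeping all
# minimum-distance rules and the compacted outline (0..800 nm) fixed.
import numpy as np
from scipy.optimize import linprog

# Two movable edges a, b.  Rules: a >= 230 from the left boundary,
# b - a >= 280, and b must stay 200 away from the right boundary at 800.
targets = np.array([300.0, 650.0])   # where the scaled source layout wants them
weights = np.array([1.0, 5.0])       # "cost factors" of Figure 2.12

# Variables z = [xa, xb, ua, ub]; u are absolute deviations |x - target|.
c = np.concatenate([np.zeros(2), weights])
A_ub = np.array([
    [-1,  0,  0,  0],   # -xa <= -230          (xa >= 230)
    [ 1, -1,  0,  0],   # xa - xb <= -280      (spacing between a and b)
    [ 0,  1,  0,  0],   # xb <= 600            (800 - 200 to the right edge)
    [ 1,  0, -1,  0],   # xa - ua <= target_a
    [-1,  0, -1,  0],   # -xa - ua <= -target_a
    [ 0,  1,  0, -1],   # xb - ub <= target_b
    [ 0, -1,  0, -1],   # -xb - ub <= -target_b
], dtype=float)
b_ub = np.array([-230, -280, 600, targets[0], -targets[0], targets[1], -targets[1]])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 4, method="highs")
print(res.x[:2])   # ~[300, 600]: as close to the targets as the rules allow
```

In this toy case edge b would like to sit at 650 but the rules cap it at 600, so the optimizer gives up exactly the amount the new process forces it to give up, which is the behavior described for Figure 2.11.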
Fig. 2.12 Trade-offs in Layout Through Weighting Functions
There is more that can be done with area optimization than just maintaining the basic geometric proportions as they were in the premigrated layout. In Figure 2.12, we see a structure where, over a given length, we have a choice between using metal or diffusion. Using cost factors (sometimes referred to as weighting functions) during area optimization, the compactor can trade one material for another and thereby adjust the structure's resistance over its length. Of course, other structures with other trade-off parameters could be created. Designers of VLSI circuits sometimes place these types of structures to minimize rework in case a chip does not meet its intended performance parameters. In Figure 2.12, the left hand structure is achieved by assigning a large cost factor to the metal; it has maximum resistance, while the right structure has minimum resistance. The results in Figure 2.11 were achieved by keeping the same cost factors as the premigration layout. The results in Figure 2.12 were achieved by "playing" with different cost factors.
2.4.9 SOME OTHER CHALLENGES If a layout to be retargeted has 45 degree layout features, it often represents something of a challenge. One problem is that some foundries do not accept 45 degree features. The other is additional migration complexity. It is not sufficient to just be able to compact 45 degree layout features, as Figure 2.13 indicates. Just performing a 45 degree compaction would waste area, as shown in the center of Figure 2.13. The 45 degree layout feature also needs to be elongated. This is perfectly possible for compaction. As an alternative, it might be desirable to convert 45 degree layout features to 90 degrees or to staircases.
Fig. 2.13 Forty-Five Degrees are no Problem for Compaction
Actually, 45 degree layout features often present some additional challenges like, for example, two 45 degree layers crossing at 90 degrees. However, some of these challenges illustrate an important point about compaction. The compactor can be taught new tricks.
Looking at all the layout features that could ever occur, it should not be surprising that new geometries never seen before could appear. In fact, they do. As previously mentioned, completely new layout rules could appear with some of the latest technologies, or just a very strange layout geometry. Such new situations may require some setup modifications. The flexibility inherent in compaction allows such modifications. Yes, this will be reflected in a longer setup time before compaction can be run initially. However, as already suggested, once set up, reruns of the compaction of a chip to adjust for process parameter changes will be just computer time. This great flexibility enables solutions for even the "strangest" layouts.
2.4.10 SUMMARY ON COMPACTION We have discussed some of the typical challenges and capabilities of compaction. Since this book is neither a catalogue nor a manual, many of the additional possibilities and capabilities of compaction have not been discussed. In many ways, every compaction project has its own special idiosyncrasies, most of which an experienced operator can overcome. Fortunately, many projects are straightforward. Hard IP reuse through compaction saves a lot of time and investment in tools and engineering talent and is a very useful approach to IP reuse. The key is to judge intelligently when Hard IP reuse is the most beneficial approach. I believe that with growing awareness of and experience with Hard IP reuse, it will be judged the best solution more and more often.
2.5
EVOLUTION AND APPLICATIONS OF HARD IP MIGRATION METHODOLOGY Compaction, which is the heart of modern Hard IP migration, began in silicon compilation, where compaction was one way to achieve foundry independence and portability. A few years ago it became obvious that compaction could stand on its own as a methodology. It became clear that semiconductor processing technology was moving at such a rapid pace that design methodologies could not produce new designs fast enough to keep up. Progress in semiconductor processing is happening due to many contributing factors, such as lithography, electron beam technology applications, new etching techniques and advancements in the materials used. Since the chips in use were perfectly good, except that their physical layout was based on outdated processes, linear shrink and, lately, compaction started to look attractive for quickly taking advantage of the rapid advances in processing capability. Hard IP retargeting for IP reuse was born. Of course, a lot of development work was needed to move from having a compaction engine packaged somewhere in silicon compilation software to creating a complete, standalone Hard IP migration environment. Then, there was and remains the challenge of continuously improving compaction algorithms to keep up with the rapid progress in semiconductor processing technology and the resulting, immense increase in device counts on chips. Of course, as for all other hi-tech design methodologies, this continued learning process is a must for staying in business. We will now review the evolution of the compaction methodology over recent years and assess its present status and where it is going. We will then look at some of the possible applications of Hard IP migration. Clearly, as with any evolution, we start with the easiest, early applications and move towards more difficult, present and future projects. We will first discuss library migration, an application area that has been possible for quite some time now and one that is still very important today. We will then move towards the migration of bigger blocks, such as memories, data paths, and eventually entire chips. Clearly, the rapidly increasing size and complexity of today's chips is one of the major difficulties to face.
Increasing chip complexity will force us to very quickly address one of the major issues in the migration of complex layouts: maintenance of the hierarchy of the source layout through the migration process. We will see that hierarchy in layout is quite a different concept from hierarchy as normally discussed in design methodologies, where top-down and bottom-up and "high-level" for functional or behavior-level design versus "low-level" for transistor-level design are considered. We will introduce the concept of hierarchy maintenance in layouts for regular structures in Section 2.5.2, the difficulty of maintaining hierarchy for complex layout structures in Section 2.5.3 and the limited hierarchy maintenance for complete chips in Section 2.5.5, based on today's and yesterday's compaction engines. Finally, in Chapter 5, we discuss unlimited hierarchy maintenance that is currently becoming available with the latest compaction technologies.
2.5.1
STANDARD CELL LIBRARY MIGRATION Because compaction is compute-intensive, the first retargeting projects were standard cell libraries. This area of application was an obvious one, since libraries used in synthesis were becoming obsolete as fast as processing capability was advancing. Figure 2.14 graphically shows a standard cell migration. Just two years ago, library migration was a challenge. But the current migration environment and the computer power now available have made library migrations a routine task, yielding enormous time and risk savings and excellent results. Because of these benefits, compaction is extensively used by almost all library vendors.
Fig. 2.14 A Standard Cell Migration
One of the benefits of compaction over linear shrink previously discussed, i.e. adherence to a grid, is critical for contacting the routing around the cells. Supply and ground bars are enlarged as needed and contacts are optimized for gates and well/substrate taps. All of this is done automatically for the whole process run in batch mode. Nevertheless, if human intervention is needed, it can be done easily, as we will discuss later when we consider Hard IP creation (not Hard IP reuse) in Chapter 4.
2.5.2
MIGRATION OF REGULAR STRUCTURES Structures with a degree of repetitiveness or regularity of cells can be migrated more efficiently than others and, as we will discuss, it is relatively easy to maintain a clear cell identity through the migration process. This means we know exactly which polygons make up a certain cell before and after it is migrated, and the migrated cell is as clearly a building block within the layout of the regular array before and after the migration process as, for example, a brick in a wall of bricks. This maintenance of identity is generally referred to as hierarchy maintenance. Repetition does not mean all cells have to be identical. There can be several groups of different cells, but all of the cells within each of the groups have to be identical, as illustrated in Figure 2.15. Some obvious examples are memory arrays, FIFO "arrays," the repetitive one-bit data path cells, or a mixture of these, or any other structure with enough regularity to make it advantageous to maintain through the migration process.
As Figure 2.15 suggests, the blocks making up the cell array and the repetitive pieces in the driver and sensing sections surrounding the memory will not be migrated as a "sea of cells," but the individual cells will be migrated. The layout will then be put back together like Lego blocks.
Fig. 2.15 Migration of Regular Layout Structures
The software recognizes repetitive cells, which means it recognizes where one cell ends and the next begins. It then places cutting lines accordingly between them. After the migration of each of the various cells, the array will be put back together again by abutment. In fact, even hand-designed cells, as in memory core cells, can be fitted to the peripheral cells by the migration software. All of this can be done automatically by the software, although human intervention is occasionally necessary to find the best cut lines and often beneficial. When we previously mentioned three basic steps in migration, human intervention was included in the second part of the setup phase for "trial and error." To evaluate the migrated design, critical path information and statistical data will be used, as previously discussed concerning the number of geometries being pushed to the process design rule limits. Because of the very efficient way of handling large array data by migrating the very small individual cells in each group of repetitive cells, very large structures can be migrated very rapidly in this way. The interactive process of finding the best possible cut lines may be the most time-consuming part, but it is ultimately very beneficial, and it has to be done only once in the initial setup phase.
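The "Lego block" reassembly can be sketched as follows; migrate_cell here is only a placeholder standing in for a full compaction of one cell, and the cell names and dimensions are invented.

```python
# Sketch of regular-structure migration: migrate each unique cell once, then
# rebuild the array by abutment, so the identity of every cell (the shallow
# hierarchy) survives the migration.
def migrate_cell(cell_layout, shrink=0.7):
    """Placeholder for a real per-cell compaction run."""
    return {"name": cell_layout["name"],
            "w": cell_layout["w"] * shrink,
            "h": cell_layout["h"] * shrink}

def migrate_array(array, cell_library):
    migrated = {name: migrate_cell(c) for name, c in cell_library.items()}
    placements = []
    y = 0.0
    for row in array:                         # array: rows of cell names
        x, row_h = 0.0, 0.0
        for name in row:
            cell = migrated[name]
            placements.append((name, x, y))   # each cell keeps its identity
            x += cell["w"]                    # abutment in x
            row_h = max(row_h, cell["h"])
        y += row_h                            # abutment in y (pitch-matched rows)
    return placements

library = {"bit":   {"name": "bit",   "w": 2.0, "h": 3.0},
           "sense": {"name": "sense", "w": 4.0, "h": 3.0}}
print(migrate_array([["bit", "bit", "sense"], ["bit", "bit", "sense"]], library))
```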
2.5.3
HIERARCHY MAINTENANCE IN HARD IP MIGRATION When discussing the process of migrating regular structures, we took advantage of a very important concept in Hard IP migration: the maintenance of hierarchy in the physical layout through the migration process.
Maintaining the hierarchy of the source layout through the migration steps is a very challenging migration engineering problem. It means that we will still know after the migration the association of every single polygon in the layout with the block it was part of before the migration. Of course, the degree of difficulty varies considerably from one layout to another. We have seen that it was quite simple for the regular arrays discussed above. But to be completely accurate, it should be stated that, while we maintained the hierarchy of the array structure, any hierarchy inside any of the cells of the
array is still lost after the migration. In other words, hierarchy no longer exists within the cells after the migration. The cells themselves are flat. So what is hierarchy maintenance in layout and migration engineering? And what are the various degrees of hierarchy maintenance?
What hierarchy maintenance or its loss means can be easily explained when migrating a regular structure such as a memory. Before we migrate a memory layout, we "know" exactly what parts of the layout are the memory cells and what parts are the sense amps and the other drivers. In this big "sea of polygons" in the layout database, memory cells are arranged in a nicely regular array with a clearly defined position in the layout, with clearly defined boundaries between them and the polygons that belong to them. We know how to identify these parts because we placed them on the chip with function-specific means. If we now migrate a memory as discussed above by taking advantage of the regularity, this regularity is maintained and the association of the polygons with each cell remains intact. This is, of course, the maintenance of a rather shallow hierarchy. When we discuss the migration of a chip, we will achieve a higher level of hierarchy maintenance.
In contrast to any hierarchy maintenance, in flat migration we migrate an entire memory as one block "in one scoop," as opposed to benefiting from the regularity as discussed above. We still know roughly where the cells are located after the migration process. After all, the cells are "boxed in," and they can't go anywhere. However, the polygons in each cell have moved to compact the layout according to the new process design rules. Boundaries between cells are no longer sharply defined, and some polygons that were near the boundary of a cell before migration may have moved across the straight line boundary that existed before the migration between memory cells. Such polygons now look like they belong to a different cell than before migration. We can no longer associate every single polygon of a certain memory cell xy before migration with that same migrated cell x'y', because we can not keep track of each polygon through the migration process. In fact, by migrating a memory cell array flatly, we have declared that the boundaries between cells are no longer significant. We no longer keep track of them. The layout of the entire memory resembles a sea of unidentifiable polygons; only the entire array now has an identity. And we can no longer deal with cells in any subsequent work on the array, such as verification. We have to deal with a solid memory block.
Since a picture is worth a thousand words, we show in Figure 2.16 how hierarchy can be lost in a more complicated layout. On the left side of Figure 2.16, we have a layout with several sizable interconnected blocks that constitute a chip. Each one of these blocks, A, B, C and D, fulfills a very specific function. Every polygon in each of these blocks is assigned to its appropriate block by a design specification. Although the blocks overlap and polygons from one functional block lie in the middle of polygons from another functional block, via the initial specification each polygon "belongs" to its respective block.
Fig. 2.16 Too Intermingled for Hierarchy Maintenance?
If the blocks in Figure 2.16 did not overlap, it would be easy to migrate them one at a time and maintain their identity and one level of hierarchy. We will show this in Figure 2.18 when we migrate a chip. Since they do overlap, we get what we see on the right side of Figure 2.16 when retargeting with a traditional compaction engine. The resulting layout is completely flat. However, if the migration software can assign and remember the association of each and every polygon edge with a certain functional block, any level of hierarchy can be preserved through migration. The latest migration software technology allows this kind of association, which will be discussed later. For now, we assume that the migration software can maintain hierarchy in regular structures, as we have seen in Figure 2.15. The concept of hierarchy maintenance in any regular array is easy to understand, and such hierarchy can always be maintained if desired. Of course, the situation depicted on the left side of Figure 2.16 is particularly challenging in terms of hierarchy maintenance, and it does not occur very commonly in reality. However, it is a good example for graphically illustrating how difficult it may be to maintain hierarchy in a layout. Hierarchy maintenance is particularly important for keeping layout verification, and verification in general, within manageable limits.
2.5.4
FLAT HARD IP MIGRATION In a layout whose functional regularity is also reflected in the layout, such as a memory, the polygons that contribute to a certain electrical function performed by the chip are placed in an identifiable functional block. It might be said that the layout is modular. For random or irregular layouts, polygons that contribute to a certain function are located all over the chip. This immediately suggests that later, when a layout has to be verified for correctness, it can not be done modularly. This can create problems and is one of the main motivations for trying to maintain a substantial degree of the hierarchy of the source layout during migration. Of course, highly irregular layouts can be migrated. They are migrated flatly. However, a random layout is not the only reason for considering a flat layout migration. Below are three reasons for considering a flat migration:
1. As mentioned above, there is no discernible regularity in a layout.
2. A block is so large that it can not be migrated as a whole. The block needs to be cut into manageable pieces, and each smaller piece of the large block is migrated separately. An efficient way to do this is through parallel processing over a network, using several processors. The migration software then puts the many pieces back together to reconstitute the original block. However, the layout of the large block is now totally flattened.
3. While a hierarchical migration has substantial advantages in terms of complexity management, a flat layout is smaller than a hierarchical one. This is because, in retaining its modularity through migration, there are clearly defined geometrical boundaries across which polygons of a certain cell can not move, even if there is empty space that could be filled with polygons from a neighboring cell. So a final flat run, after a full verification of a layout has produced confidence in its "correctness by construction," might be worth considering.
In Figure 2.17, we illustrate the typical migration of a highly irregular block. As seen in Figure 2.17, the block is cut into manageably sized pieces, each of which is migrated. The cut lines can be chosen intelligently and automatically by the compaction environment. The maximum size of the pieces depends on the state of compaction technology, computer performance and, perhaps, somewhat on the characteristics of the layout to be migrated. Certain geometrical features in layouts are more compute-intensive than others. We should keep in mind that for a truly irregular layout, a real layout hierarchy does not exist anyway. The fact that the migration process again yields a completely flat result is not an issue. In addition, flat migration does offer yet another advantage. Flat migration is the most straightforward in terms of setup. What we earlier called the second step in the setup phase, the "trial and error" step, falls away. Because of this, even modular layouts are sometimes migrated flatly. With no setup time to determine the best way to "cut up" a layout into regular arrays, a simple brute force migration is performed, "e basta!"
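A sketch of the cut-and-reassemble idea behind flat migration of a large block, with parallel processing of the pieces; migrate_piece again only stands in for a real compaction run, and a real flow would also reconcile geometry along the cut lines.

```python
# Sketch of flat migration of a large block: cut it into vertical slices,
# migrate the slices in parallel over several processors, then concatenate
# the results into one flat layout.
from concurrent.futures import ProcessPoolExecutor

def cut_into_pieces(block, n_pieces):
    """Split a block (list of polygons with an 'x' key) into vertical slices."""
    width = max(p["x"] for p in block) + 1
    stripe = width / n_pieces
    return [[p for p in block if i * stripe <= p["x"] < (i + 1) * stripe]
            for i in range(n_pieces)]

def migrate_piece(piece, shrink=0.7):
    """Placeholder for compacting one piece to the new process rules."""
    return [{**p, "x": p["x"] * shrink} for p in piece]

def migrate_flat(block, n_pieces=4):
    pieces = cut_into_pieces(block, n_pieces)
    with ProcessPoolExecutor() as pool:            # one piece per processor
        migrated = list(pool.map(migrate_piece, pieces))
    return [p for piece in migrated for p in piece]   # flattened result

if __name__ == "__main__":
    block = [{"x": float(i), "layer": "metal1"} for i in range(100)]
    print(len(migrate_flat(block)))                # 100 polygons, now flat
```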
Fig. 2.17 Illustration of a Flat Migration for Random or Regular Layouts
2.5.5
MIGRATION OF ENTIRE CHIPS Entire chips can be migrated with substantial benefits. In Figure 2.18, we show how such a migration can be performed. Figure 2.18 graphically depicts how a chip can be migrated while maintaining a relatively "shallow" but rather typical hierarchy. The chip, shown at the top, contains several blocks and the routing to interconnect them. Each of the blocks may represent any kind of function. The chip in Figure 2.18 shows one possibility: regular blocks like RAM, ROM, PLA and data path, and an irregular random logic block. For the migration, the chip layout will be split into the various major blocks and into the entire routing. The blocks will then be migrated using the most appropriate of the methodologies previously discussed, either maintaining the hierarchy to the level discussed for regular structures or being migrated flatly. The identity of the blocks and the hierarchy on the chip is maintained. The routing will be migrated flatly but
maintains connectivity with the blocks by being migrated together with the empty shell of the blocks with just the connection points on the boundary (the shell) present. Such empty blocks are generally referred to in the literature as abstract cells. Accordingly, a logical name for this approach is Abstract Cell Compaction or ACC. All of this is quite evident in Figure 2.18.
Bounding Boxes (Abstract Cells) / Levels of Hierarchy / Migrated Chip
Fig. 2.18 Hierarchical Migration of a Chip
In terms of IP reuse, ACC offers some significant benefits. One of the critical questions in IP reuse is the chip timing after migration. This is a particularly difficult question if the migrated chip has to be rerouted. As discussed before, timing will be increasingly dominated by interconnects. While we have migrated both the functional blocks and the routing for the chip shown in Figure 2.18, this may not always be possible. When migrating from one technology to another, rerouting will strongly affect and change the timing of the circuit to be reused. All the uncertainties associated with timing rear their ugly heads. Of course, the expected changes are not as dramatic as when starting from scratch. Furthermore, being able to take advantage of additional layers of metal, compared to the "old" layout, may offer enough benefits to justify rerouting. The decision of whether to do a Hard IP migration that conserves the existing routing or to reroute may be affected by time-to-market needs, which could be a forceful argument in favor of Hard IP reuse. As suggested before, migration will maintain the relative timing of the chip because the relationship between the relative lengths of wires hardly changes at all, although their absolute lengths will change somewhat since everything gets smaller. That is why the important timing, the relative timing relationships in the circuit, should not change much either. This does not guarantee that a timing analysis would be superfluous, but it might be. What it most certainly does mean is that only a minimal change in the layout should be needed to correct a timing problem. For example, such an adjustment could be accomplished with merely a buffer size adjustment. This kind of postlayout (actually postmigration) change can be accomplished easily, as we will show when we discuss layout optimization in Chapter 3 and the available tools in Chapter 6. We have discussed the migration of standard cell libraries, memories and other regular structures, totally irregular structures (a class that covers any block ever encountered) and, finally, chips. All of this retargeting was done in Hard IP. What about mixing Soft IP and Hard IP on one chip? And what about analog circuits? What if one of the blocks in Figure 2.18 were an analog block? For now, the answer is that it can be done and is done routinely by some companies. We will show an example of this kind of migration in Chapter 5, when we examine some of the issues of analog Hard IP migration.
Another challenge: What if the chip migration depicted in Figure 2.18 is extended to take blocks from different sources, blocks that were not on the same chip before and not even processed by the same foundry? Such a truly S-o-C scenario is possible, but without a doubt challenging. Possible? Probably says the skeptic, most certainly says the optimist. We shall examine some of the arguments in Chapter 5. Finally, how about mixing and matching Soft IP, Hard IP, analog and making all of this work well in a chip while keeping within the power budget and guaranteeing a highly testable circuit? Well??? Conceptually, even this is possible. Feasible and practical? A marketing department would respond with: Good question! We will address even this issue in Chapter 5.
HARD IP PERFORMANCE AND YIELD OPTIMIZATION
HARD IP OPTIMIZATION In the last few years, Hard IP reuse has been focusing on retargeting. Even now, when the engineering community talks about Hard IP, retargeting always seems to be the primary focus. The main reason for this may be the gap that has developed between the manufacturing capability and the design productivity of DSM VLSI chips, calling for ways to bridge that gap. Hard IP reuse is viewed as an approach that at least narrows the gap. Compaction technology is at the heart of sophisticated Hard IP reuse or retargeting. Since compaction allows a manipulation of physical layout geometries at the polygon level, and since DSM VLSI chip performance and manufacturing yield are very sensitive to layout geometries, compaction should allow us to optimize both performance and yield in addition to providing just a re-layout of an existing Hard IP according to new process parameters. In this chapter, we discuss how we can improve performance through layout manipulations. In Chapter 4, we discuss how we can improve yield through layout manipulations. We already suggested some of the capabilities for improving yield in Chapter 2; in Chapter 4 we look at this issue in considerably more detail, discussing Design for Manufacturing (DfM), in which compaction plays a key role. Of course, every additional step in a design or reuse flow adds more time. In a world in which the time-to-market aspect is potentially the most critical aspect of the success of a VLSI chip, this is a legitimate concern. On the other hand, if a migrated chip, or any chip that might have been designed from scratch or with Soft IP reuse, does not meet performance specifications, using compaction for postlayout optimization could provide a relatively painless and quick fix instead of more drastic, riskier or time-consuming measures. Similar arguments are applicable if the manufacturing yield of a chip is not high enough. Finally, it is to be expected that with advances in processing technology towards smaller and smaller minimum dimensions, the challenges, especially for timing closure, will only become greater. Large VLSI chips that are correct the first time may be even harder to achieve. Compaction as a postlayout step can be of great help in eliminating these kinds of problems. Only one question remains: How much can actually be done with postlayout manipulations using compaction in terms of performance? This chapter examines this issue.
3.1
WHAT TO OPTIMIZE IN A LAYOUT AND HOW
We have discussed how VLSI chips are now the core building blocks for information management in hi-tech electronics. We have stressed the need for high performance, emphasizing key parameters such as speed, power dissipation and miniaturization, which is also packing density. We will now explore how to manipulate a VLSI layout to affect these performance parameters. For this, we need to examine analyses performed over the years to evaluate the influence of physical layout parameters on key VLSI chip performance parameters.
As we examine some of these analyses, we will constantly confront the dilemma of having to make approximations. Today's VLSI chips are simply so complex that most analysis will be too time-consuming or impossible without approximations.
Approximating, while sacrificing as little accuracy as possible, generally requires setting priorities that favor certain parameters. We need to try to achieve as much accuracy as possible, and, we hope, simplicity, for the parameters that are most critical for a particular application. Accordingly, we need to establish the most critical performance parameters for various applications. Later in this chapter, we will see how focusing on determining certain performance parameters requires approximations that, while perfectly acceptable for the desired results, would lead to invalid or inaccurate results if blindly used to determine different performance parameters. It is therefore critical for layout optimization to use data compatible with the goal of the optimization. Different areas of application require an emphasis on different performance measures. For computers, a dominant measure of performance is the speed with which operations can be performed. For wireless telephony, smallness and power consumption are for obvious reasons dominant factors, although speed, generally measured in bandwidth requirements, is also terribly important. Of course, power consumption is in general extremely critical for anything portable, due to battery life. However, even if battery life is not a concern, limiting power dissipation is gradually becoming a simple question of "survival" for a chip. There are many more areas of application with other key factors, but many of them are related in some way to the ones we have listed as major concerns. The extremely rapid progress in VLSI chip performance, due to enormous advancements in processing technology, does not come without a substantial price tag for modern and improved processing lines. Yet, in spite of the astronomical cost of newer processing lines, there presently does not seem to be any slowdown in processing capability advances. In fact, when comparing performance improvements due to advances in clever design techniques versus processing technology, processing technology wins hands down. The least we can do, and have to do, to reward such progress and take these enormous investments into account is to try everything possible to get the maximum benefits out of these expensive processing lines.
3.2
LEVERAGE OF LAYOUT ON PERFORMANCE Actually, the statements above already emphasize the importance of physical layout geometries for performance, without any further discussion. Why would anybody spend so much money if there were no substantial benefits? The manufacture of smaller and smaller minimum critical physical dimensions and of larger chips are the most directly visible measures considered responsible for increased performance. However, the ultimate performance of a VLSI chip is actually a very important function not only of the minimum critical dimensions but also of many other dimensions of the physical layout. In addition, optimizing the layout geometries of a chip takes only some engineering effort and time, the expense of the software and some computer runs. We already know that the lengths of interconnects play a dominant role in performance and that the placement of intercommunicating blocks is very critical. This is rather obvious. However, do we also know how the length of interconnects and the remaining dimensional parameters of interconnects affect DSM VLSI chip performance?
A considerable amount of research, mostly at the university level, has been done to determine how to maximize the speed and minimize the power consumption of DSM VLSI chips with detailed physical layout manipulations [3]. The focus of the research was on how to dimension interconnects, the drivers, or even interconnects and drivers as pairs simultaneously. Other parameters such as signal integrity, reliability and yield will also be affected. We discuss some of the results in detail later in the chapter. The physical layout of a DSM VLSI chip has to be developed with a lot of care during floorplanning and the placement and route design phase. This is by now universally recognized for hi-tech VLSI chip design. The ability to intelligently address the physical layout design very early in the design cycle is a hot issue for front-end, high-level methodologies like synthesis, and progress is being made. However, as the minimum layout dimensions become smaller, the timing uncertainties grow until a layout is complete. Fortunately, there is a lot of postlayout optimization that can be done at the back-end using compaction. This is of interest even for VLSI chips that have just been designed using the latest, most advanced, functional level synthesis techniques that have taken account of the back-end, the physical layout. Needless to say, it is also of considerable interest for IP reuse, especially Soft IP. These "established" chips may not have benefited from the latest front-end tools that take account of physical layout, but they still have too much to offer to just be thrown away. In addition, it is clear that chips migrated to faster, more advanced processes as discussed in Chapter 2 could use a postlayout tune-up. This type of layout manipulation can maximize the speed of both of these candidates or minimize their power consumption, or both. Just how much postlayout optimization contributes to performance improvements and how much is needed to make it worthwhile depends on many factors. If we were to assume only, let's say, a 10% improvement, this 10% could persuade a customer to choose one product over another. With the big bucks in electronics being increasingly spent on consumer electronics, this is clearly important. If the improvement were only 5% but would save a complex microprocessor chip from failure, it would be worthwhile. As we will see later in this chapter, most of the time there is much more to be gained in performance than just a 5 or 10% improvement. We will now discuss how layout manipulations will help improve performance and manufacturing yield of a fully laid-out chip by optimally dimensioning interconnects, drivers or by optimizing the interconnects with the transistor stages driving them as a pair.
3.2.1
THE INCREASING INFLUENCE OF LAYOUT ON PERFORMANCE
Much important research has been conducted over the years to examine the electrical and timing characteristics of interconnect structures. Actually, some of the most useful work analyzing interconnect parasitic capacitances was done as far back as 1973 [4]. Of course, in those days the performance of VLSI chips was not affected by interconnects on the chip. However, for various other structures such as PCBs, MCMs and especially strip lines for memory arrays, problems existed that are very similar to what happens now on DSM VLSI chips. However, while the work to be discussed here addresses physical layout issues and covers such a long time period (compared to the newness of VLSI
technology), the points of emphasis and interest kept shifting over the years, as we will see. It started with layout-related timing analysis, and only very recently has there been a significant interest in true physical layout optimization, which still seems to be limited to the academic community [3]. The following discussions are based on some of the work done over the last twenty-five years that is related in one way or another to signal propagation in VLSI chips. However, despite the early 1973 work, the literature search was very focused. Only the most pertinent, most recent work that directly addresses the interconnect issues in DSM VLSI chips has been reviewed. Furthermore, for now the discussions will largely concern timing and power consumption issues. Other important issues such as cross-talk and signal integrity will also be addressed. In the 70s, as suggested above, the focus was on determining the parasitic capacitance values of interconnects different from what we find on DSM VLSI chips today. However, the work dating back to 1973 does an excellent job of yielding many results that are difficult to find in the current literature. We will make considerable use of this information. Although this was long before DSM effects became critical for VLSI chips, the interconnect capacitance parasitics that the active parts of the circuit (the transistors) have to drive are conceptually very similar. This is especially true of the currently popular MOS technologies, for which resistance-capacitance time constants are the dominant effect because of the high impedance levels. Interconnects could still be modeled with a simple, discrete RC load. Also, ICs were still small enough to be simulated with SPICE or SPICE-like simulators. In the late 70s and early 80s, all VLSI chip components could be modeled as discrete components, all except for the base resistance in bipolar transistors, which was already a nonlinear, distributed RC load. Accordingly, the challenge of dealing with distributed loads in VLSI chip analysis has been around for quite some time. Simple discrete RC models were used for interconnects, except for very fast circuits and critical interconnects. By 1983 [5], as chips became too large for SPICE-like simulators, switch-level models and Timing Analyzers (TAs) became popular. By the mid 80s, MOS technologies started to dominate bipolar technologies. However, by this time, interconnects started to behave like distributed RC loads. In fact, for high-speed applications, interconnects started to behave like lossy RC transmission lines. Determining signal propagation along RC transmission lines presented a serious mathematical challenge. There exists no closed form solution in the time domain [6]. Timing analysis had to settle for determining bounds as opposed to exact timing information and exact pulse shapes. Contributions to time delays were more or less equally shared between the active and the passive parts of the LSI circuits. However, the ability to model one lossy RC line between two drivers was not enough. The signal distribution in a VLSI circuit was done by complicated RC tree structures. Models were needed for RC tree structures and became available in combination with switch-level-based TAs in the mid 80s [7]. By the latter part of the 80s, interconnect delays became critical and dominant enough to require a rather detailed timing analysis for critical RC trees, such as clocking trees.
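For readers who want to see what such an RC tree model looks like in practice, here is a sketch of the classic Elmore (first-moment) delay approximation, one common way timing analyzers bound delays in RC trees. It is given for illustration only, with invented component values, and is not necessarily the model used by the tools referenced in this book.

```python
# Sketch of the Elmore (first-moment) delay approximation for an RC tree:
# the delay to node i is approximated by summing, over all nodes k, the
# capacitance at k times the resistance of the path shared between the
# root-to-i and root-to-k paths.
def elmore_delays(tree):
    """tree: node -> (parent, R_to_parent, C_node); returns delay per node."""
    def path(n):                       # nodes from root down to n
        p = []
        while n is not None:
            p.append(n)
            n = tree[n][0]
        return p

    delays = {}
    for i in tree:
        d = 0.0
        path_i = set(path(i))
        for k in tree:
            shared = path_i & set(path(k))
            r_shared = sum(tree[n][1] for n in shared)   # resistance on shared path
            d += r_shared * tree[k][2]
        delays[i] = d
    return delays

# Tiny clock-tree-like example (resistances in ohms, capacitances in fF).
tree = {"root": (None,   0.0,  5.0),
        "a":    ("root", 100.0, 10.0),
        "b":    ("a",     50.0, 20.0),
        "c":    ("a",     80.0, 15.0)}
print(elmore_delays(tree))   # first-moment delay estimates (ohm x fF)
```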
However, as processing technology kept marching relentlessly forward, it became clear that interconnects would eventually dominate the timing behavior of VLSI chips. Some simple, straightforward measures were needed to gauge the effects of shrinking layout geometries on VLSI chip behavior. Scaling factors became popular for showing trends in chip performance, as a result of smaller critical layout dimensions. The scaling factors showed how changes in physical dimensions, such as the interconnect thickness, width, separation, length, the oxide thickness, etc., would affect the behavior of the VLSI chip [8].
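As an illustration of the kind of scaling relation meant here, the following sketch applies textbook first-order formulas for wire resistance and capacitance under a scale factor s; the exact factors depend on the scaling scenario assumed, so the numbers are indicative only.

```python
# Sketch of first-order wire-delay scaling (illustrative; exact factors depend
# on the assumed scaling scenario).  With all lateral dimensions scaled by
# s < 1: R_wire ~ L/(W*T) and C_wire ~ L, so a local wire whose length also
# shrinks keeps roughly constant RC, while a global wire whose length stays
# fixed sees its RC grow roughly as 1/s^2, which is one reason interconnect
# gradually comes to dominate chip timing.
def wire_rc_scale(s, length_scales_with_die=True, thickness_scales=True):
    length = s if length_scales_with_die else 1.0
    r = length / (s * (s if thickness_scales else 1.0))   # R ~ L / (W * T)
    c = length                                            # C ~ L (fixed aspect ratios)
    return r * c

for s in (0.7, 0.5):
    print(f"s={s}: local RC x{wire_rc_scale(s):.2f}, "
          f"global RC x{wire_rc_scale(s, length_scales_with_die=False):.2f}")
```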
The end of the 80s and early 90s were marked by attempts to approximate parasitic capacitance values and the time delays caused by them with relatively simple analytical expressions [9]. These analytical expressions have to be applied with full knowledge of the assumptions made in their approximations. In studying the effects of interconnects on time delay, it became clear that short interconnects between drivers would not substantially increase signal delays, but that long ones would. Studies were conducted on the statistical frequency of the occurrence of long versus short interconnects. The conclusion was that short interconnects are much more frequent than long ones [8]. Buffer stages would be inserted for the long ones, to keep them from getting too long [8]. At least, this is the present conventional wisdom.
3.2.2
OPTIMIZATION REQUIRES A DETAILED LAYOUT ANALYSIS
A good floorplan and a good place and route based on timing-driven layout tools is about as much as one can do to approach the desired performance of a DSM VLSI chip. However, most timing information available before a layout is complete is statistical and based on previous similar designs, of which there are generally very few. Although constant progress is being made in forward annotating tools, the processing technology also keeps moving at a rapid rate. The challenges of obtaining first-time timing closure are growing, and so is the need for some type of postlayout optimization. Of course, the closer the chip is to correct timing based on the synthesis, floorplan, and place and route, the greater the chance for success. And postlayout optimization can not work if the timing is completely off. In the 90s and beyond, moving beyond intuition, a substantial amount of research has fortunately been conducted on true layout optimization [3]. As we shall see, some of the results are amazing and impressive. As chip speeds keep increasing, as packing density increases and chips get larger, dealing with the heat generated in these chips is one of the most serious challenges. In fact, at least one startup claims that power dissipation is the single most limiting parameter seriously hampering - if not blocking - some DSM VLSI chips from becoming a reality [10]. So we need to explore every possibility for minimizing the power generated on the chip without giving up speed. Clearly, inserting buffer stages into the interconnects increases power consumption and takes up real estate, increasing chip size and cost without adding functionality. It is taking us in an undesirable direction. Of course, we have to pose the legitimate questions: How can we achieve the necessary speed without these buffer stages? Could we find other ways to maximize speed without an increase in power consumption? The most obvious question concerns the optimal sizing of every transistor in a VLSI chip. Every transistor that has more driving power than absolutely necessary burns unnecessary power and is also wasting real estate. Transistor sizes are generally adjusted by searching for a critical path, the slowest path on a chip, in order to then adjust its driving capacity. The search is not primarily for devices that are potentially too large. The critical path determines the transistors that are too small. To minimize power consumption without slowing down a chip, the goal has to be to optimize the size of every transistor in a chip. A less obvious question concerns the dimensioning of interconnects. After all, we know that they greatly affect the speed of a VLSI chip. But how and by how much do interconnects affect power dissipation?
The answer lies in the load the interconnects represent, which the transistors have to drive. An optimal dimensioning of the interconnects in conjunction with the transistors that drive them may allow smaller transistors and less power consumption without sacrificing speed performance, or even improve both parameters. Recent research supports such statements [3] and has shown some remarkable results. As opposed to changing a given design, the emphasis is on optimizing existing designs with a sharp focus on physical layout optimization. We will discuss some of the results later in this chapter and see how the methodologies in Hard IP migration can implement the layout modification suggested by such findings to optimize DSM VLSI chip performance. This brief overview should motivate us to explore just how much layout-related parameters can affect final DSM VLSI chip performance. We will examine both the front-end and back-end leverage to assess their level of significance.
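A toy version of such a joint wire/driver sweep, using a lumped first-order delay model and purely illustrative coefficients, shows the kind of trade-off that this optimization research formalizes.

```python
# Toy co-dimensioning of a driver and the wire it drives with a lumped
# first-order (Elmore-style) model; all coefficients are invented.  Widening
# the wire lowers its resistance but raises its capacitance, and a bigger
# driver is faster but switches more capacitance (more dynamic power).
R0, C0 = 200.0, 2.0             # unit driver: output resistance (ohm), input cap (fF)
RS, CA, CF = 0.08, 0.05, 0.10   # wire: sheet res (ohm/sq), area and fringe cap (fF/um)
L = 2000.0                      # wire length (um)
C_LOAD = 20.0                   # receiver load (fF)

def delay_and_power(drv_size, wire_w):
    r_wire = RS * L / wire_w
    c_wire = (CA * wire_w + CF) * L
    r_drv = R0 / drv_size
    delay = 0.69 * (r_drv * (c_wire + C_LOAD) + r_wire * (c_wire / 2 + C_LOAD))
    power = C0 * drv_size + c_wire + C_LOAD      # proportional to switched capacitance
    return delay, power

candidates = [(delay_and_power(s, w), s, w)
              for s in (1, 2, 4, 8, 16) for w in (0.2, 0.4, 0.8, 1.6)]
(d, p), s, w = min(candidates, key=lambda t: t[0][0])
print(f"fastest: driver x{s}, wire width {w} um, delay {d:.0f}, switched cap {p:.0f}")
```

Minimizing delay alone picks the largest driver and the widest wire in this toy sweep; adding a weight on switched capacitance, much like the cost factors of Section 2.4.8, shifts the optimum toward a smaller driver and a narrower wire.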
3.2.3
FRONT-END LEVERAGE Before we compare some of the benefits of the front-end with those of the back-end in the design flow of a DSM VLSI chip, we have to reemphasize that the focus here is on the back-end. There is generally a lot of attention given to the front-end when a chip is designed. Synthesis has always been and continues to be a fascinating field, and almost the entire established EDA world is focusing on making it do all the things people want and need it to do. Only a handful of small startups are focusing on the performance issues that only the back-end can properly address. However, shrinking layout geometries are gradually bringing about a close partnership between the front-end and the back-end. It is hoped that the following discussion will help to clarify what should be a front-end to back-end synergistic relationship. The trade-off between high levels of abstraction in the design process versus control over the details of a design is critical and needs to be carefully monitored. It is clear that, while a very high level of abstraction results in great benefits for the design process in terms of the management of complexity, its direct control over the physical aspects of a layout tends to be relatively weak. In the past, this presented few problems for the chance of first-time success of a VLSI chip design, because the active elements of the design determined the performance and not the layout. Because of the importance of layout parameters, these issues need to be taken increasingly seriously in design disciplines for DSM technologies, such as synthesis and floorplanning. This is emphasized by the fact that timing-driven layout is a new design discipline. Front-end work allows a lot of freedom. Blocks can be placed and rotated, aspect ratios can be changed if needed, contacts can be moved through feed-throughs, metal layers can be judiciously chosen, etc. This is a lot of freedom indeed. However, it is unfortunate that, while every step has far reaching consequences, it is difficult to know early in the design flow what these choices will mean exactly in terms of timing by the time the physical layout is finished. Because it is so difficult to judge the parasitics resulting from a certain placement and route, the physical layout achieved at the front-end is not optimal and normally leaves a lot of room for improvement. Anything that deals with placement of objects in a layout, like post-floorplanning insertion of buffers during routing, as suggested by various authors [3,8], is nevertheless front-end work because decisions have to be made based on the best estimates. Buffers inserted into long interconnects do improve chip
performance, but their dimensioning can only be done according to the best estimates, and they also increase power consumption and take up real estate. Buffer insertion was proposed quite some time ago and is still practiced. Buffer insertion to speed up long interconnects on the chip is generally done in conjunction with routing. If done later, it amounts to surgery, because in a well laid-out chip it will be difficult to find empty space to place the buffers. Since buffer insertion and dimensioning are based on estimates, the sizes of these buffers often need to be adjusted once the layout is finished. Since there are optimization algorithms to dimension them for optimal performance, this is a good application for some postlayout optimization using compaction. The literature indicates that buffer insertion was a step in the evolution of VLSI chip performance optimization. While it seems a good approach for very long interconnects, some procedures in layout optimization will be discussed that might also solve timing problems adequately without introducing more power dissipation and taking up real estate on the chip.
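One widely quoted set of such algorithms is the closed-form repeater insertion result usually credited to Bakoglu. The sketch below uses those textbook formulas with invented numbers and is meant only to illustrate how a buffer count and size could be chosen before compaction resizes the inserted buffers.

```python
# Sketch of classic closed-form repeater insertion: for a line with total
# resistance R_w and capacitance C_w, driven by inverters with output
# resistance R_d and input capacitance C_d, the delay-optimal repeater count
# and size are approximately
#   k = sqrt(0.4 * R_w * C_w / (0.7 * R_d * C_d))
#   h = sqrt(R_d * C_w / (R_w * C_d))
# Numbers below are illustrative only.
from math import sqrt

def optimal_repeaters(R_w, C_w, R_d, C_d):
    k = sqrt(0.4 * R_w * C_w / (0.7 * R_d * C_d))   # how many repeaters
    h = sqrt(R_d * C_w / (R_w * C_d))               # how large (x unit inverter)
    return max(1, round(k)), h

k, h = optimal_repeaters(R_w=2000.0, C_w=400.0, R_d=5000.0, C_d=2.0)
print(k, "repeaters, each about", round(h, 1), "x a unit inverter")
```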
3.2.4
BACK-END LEVERAGE The major functional blocks are in place, the chip is routed. What can we still do to change the performance of a chip or block at the back-end? With the methodologies discussed here, we can only effect changes in the layout database in what we call back-end operations. Only through "polygon pushing" can changes still be made to a chip at this point. In other words, any operations we perform on GDS2 or on any other layout database are back-end operations. We know that the length of an interconnect is one of the parameters determining its behavior, particularly its delay characteristics. However, we can not do much about the lengths of interconnects at the back-end, after routing. We also know that the width of an interconnect and its proximity to other interconnects affect its behavior, and after the place and route phase, interconnect dimensions may be far from optimal. Fortunately, we can still adjust the widths of interconnects and their proximity to other interconnects quite a bit in the postlayout phase by using compaction and other algorithms working at the polygon level. The same may be true for the reuse of an existing chip with a Hard IP migration from process to process. So there is still quite a lot that can be adjusted on a "finished" layout. We need to look at the proper dimensioning of interconnects and a balancing of lengths or, more accurately, a balancing of time delays on some of them. We should briefly mention here that balancing the lengths of interconnects will generally not assure equal time delays in these interconnects [3]. We simply need to take a very good look at many of the characteristics of interconnects in the interest of maximizing chip performance. Clearly, the degree of freedom for major changes is largest at the front-end, during synthesis, floorplanning and routing. Because of this, we may have to restart with these early steps if we need to fix a chip that totally missed its target. Unfortunately, a lot of rework has to be done with such changes, and in the worst cases the desired timing may be approached only after several retries. Maybe we can fix a timing problem by just inserting a buffer here and there in long interconnects and, if that alone
does not quite fix the problem, the strength of the buffer stage can be adjusted with compaction. Actually, as we discuss in the next section, we may even insert a buffer stage in the postlayout phase. Finally, while the degrees of freedom for postlayout corrections are more limited, an advantage is that the results of such changes are very predictable. In the case of less dramatic deviations from timing, we may just use some of the optimization steps that are discussed in this chapter.
3.2.5
FEATURES COMPACTION CHANGES, FEATURES IT DOES NOT Within the available real estate on a chip, any polygon edge can be shifted around as long as its position satisfies all the process rules. Also, as pointed out before, none of the polygon edges can "jump over" any of their neighboring polygon edges. This flexibility in placing polygon edges can be very useful. It is in sharp contrast to a linear shrink and provides the following benefit: Compaction can actually
free up space for additional components!
When performing a linear shrink, all polygon edges move together according to some proportionality factor. When compaction is applied, each polygon edge moves to a place prescribed by the process rules or by user input. We could, for instance, insert a miniature buffer stage into a long interconnect that exhibits timing problems in the postlayout phase. This insertion would most probably be done manually with a layout editor. Compaction could then enlarge this buffer stage to the desired size by pushing other polygon edges aside while enforcing all the process layout rules. In Figure 3.1, we show how a miniature feature can be inserted in the left layout. Compaction then enlarges it to do what it is supposed to do while satisfying all the layout design rules, as shown in the right layout. While the inserted feature in Figure 3.1 is not a buffer, any shape that potentially fits is possible, such as a buffer for a long interconnect.
Fig. 3.1 Compaction Can Make Space for Inserted Features (annotation in figure: draw very tiny objects, made DRC-correct by compaction)
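The mechanism behind Figure 3.1 can be suggested with a one-dimensional toy version of compaction: polygon left edges are pushed to their leftmost legal positions by propagating spacing constraints in order. The widths, spacings and object names below are hypothetical; a real compactor works in two dimensions, over many layers and with the full design-rule set.

```python
# Minimal 1-D compaction sketch: place object left edges at their leftmost
# legal x positions by propagating spacing constraints left to right.
from collections import defaultdict

def compact(widths, min_space, order):
    """order: left-to-right sequence of object names.
    Constraint: x[b] >= x[a] + width[a] + min_space for consecutive a, b."""
    edges = defaultdict(list)
    for a, b in zip(order, order[1:]):
        edges[a].append((b, widths[a] + min_space))
    x = {name: 0.0 for name in order}
    for a in order:                      # order is already topological here
        for b, w in edges[a]:
            x[b] = max(x[b], x[a] + w)
    return x

widths = {"left_block": 0.4, "tiny_buffer": 0.1, "metal1_wire": 0.3}
print(compact(widths, min_space=0.2, order=["left_block", "tiny_buffer", "metal1_wire"]))
# Enlarging the inserted buffer just changes its width; the neighbors are pushed aside.
widths["tiny_buffer"] = 0.8
print(compact(widths, min_space=0.2, order=["left_block", "tiny_buffer", "metal1_wire"]))
```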
As we will see when discussing spreading out interconnects in Figure 3.9, space is often available. Of course, it is a trade-off. Some space has to be sacrificed for the benefit of the whole, which means that the procedure is limited. However, pushing neighboring polygons aside for the benefit of others is not possible with linear shrink. Needed space is actually created, although it is a matter of give and take. Of course, back-end optimization can not save a badly designed chip! With all the leverage possible with back-end optimization, the degree of leverage after layout may not be enough to eliminate timing
problems in a very bad floorplan. It is better to redesign from scratch to obtain a better starting point for the layout optimization to follow. But with a "reasonable" floorplan and a "reasonable" route, back-end adjustments to a layout can profoundly affect and improve chip timing performance. At the back-end, we start either with a chip that needs to be retargeted to a more advanced process or with a chip that has been newly designed up through place and route. To be able to retarget a chip, we already know a lot about its behavior. The chip has a track record. We know that it is functionally correct. We can also compare the way the chip actually worked to the chip analysis conducted before fabrication. This actual performance data, which is generally difficult to predict accurately, is very valuable, especially for the timing. But now that the chip is to be fabricated in a new process, the timing will undoubtedly shift and the layout has to be adjusted. Fortunately, constant advancement in tools now provides us with optimization algorithms for the physical layout that did not even exist during the initial design of the chip. This means we may now have much improved means to get a better chip. Of course, what we can not change is the "topology" of the layout, how the blocks are functionally connected together. This is good, because we do not want to redesign
- we want to reuse!
For a chip that has just been designed and laid out, the architecture is fixed, the floorplan is fixed, the routing and all the transistors are in place and, based on estimated parasitics for the process to be used, its timing works. So the focus for such a chip is not just to make it work but to make it work even better, to optimize its performance. To achieve this, we can adjust the geometrical dimensions and shapes of transistors and the widths and separation of interconnects. We can change the sizes and shapes of capacitors and resistors. Another important aspect of layout that has received increased attention lately is the fact that VLSI circuits often use the minimally allowed layout dimensions in many places where this does not result in any increased performance. This often just lowers the yield and negatively affects the reliability that can be achieved for the fabrication process. In fact, it may even lower the performance of a chip, as will be discussed later. It is critical that layout dimensions be optimized for maximum density, but without sacrificing yield or performance. This type of optimization is easily done with compaction and is very powerful. Finally, the reasons for late layout adjustments may go beyond just optimizing the performance of a chip. There may be some serious timing problems in the chip that are due to changes in some process parameters, perhaps changes in metal resistance, perhaps changes in the permittivity of the oxide (the k value), to mention just a few possibilities. Many things could be responsible for a chip not working. There is no point in guessing in advance what they might be. The point is, if adjustments in layout dimensions can solve the problems, the methodology discussed here provides the means to fix them.
3.2.6 SYNERGY BETWEEN FRONT-END AND BACK-END?
There is a German saying and a song that says a shoemaker should stick to his craft, which he obviously does very well, instead of trying to live the life of a Bohemian. Of course, it sounds great in German and loses everything in translation. It is the German version of the English saying: Do what you do well and stay focused.
Applied to the subject at hand, there is no point for either Hard IP migration or Soft IP reuse to diminish the other's value. The key is focus: Soft IP and synthesis for the front-end, Hard IP and compaction for the back-end. In fact,
Hard IP migration and Soft IP reuse are very complementary!
Each discipline should focus on its strengths and on working synergistically with the other. The result will be a higher performance chip, reaching the market faster and less expensively, with a smiling customer at the receiving end. There have been some heated discussions about the virtue of Soft IP versus Hard IP reuse. Usually, the Hard IP side loses because the world today focuses on synthesis, and Hard IP reuse is more of a niche solution. And, not to be overlooked, infrastructures in companies are set up for synthesis, not for Hard IP reuse with optimization. But as always, where there is debate, even if lopsided, both sides have some valid arguments. Otherwise, it would not come to a debate in the first place. In the discussion to follow, we will see that Soft IP reuse and Hard IP related techniques are synergistic. Starting out at the functional level in a top-down approach, the design process eventually evolves towards using actual components that can be associated with a physical implementation. However, even if placement and routing are as good as possible, choosing the perfect dimension for a transistor or the perfect size of an interconnect is difficult at this point in the design process. It is easier to pick the best estimate and then later, after the first-order layout is done, use compaction to optimize the dimensions of devices and interconnects. From this point of view, it becomes quite obvious that back-end layout manipulations in the form of compaction are very synergistic with the front-end. What about the convenience and ease with which timing can be adjusted with compaction? How else would timing be adjusted at the back-end anyway? Rerouting would be too drastic, and it could dramatically change the timing of parts of the chip that were OK before rerouting. Changing the floorplan would be even more drastic. Adjusting transistor and interconnect sizes is a graceful and effective way to make such adjustments. Another important aspect is what we suggested in Chapter 2, when we discussed substituting poly for metal with weighting functions in compaction, as shown in Figure 2.13. There, we proposed an adjustment of the resistance between two points on a chip by varying how much of this interconnect is poly or diffusion instead of metal. This is a neat on-chip trimming mechanism that can be done with compaction. Other trade-offs, such as adjustments in resistor lengths, capacitor areas and source/drain regions, are possible. In fact, any adjustments that are possible by moving polygon edges within the available space should be used to achieve the desired performance adjustments. Large percentage changes in timing can be achieved in this way. Of course, it has to be part of the layout planning in the first place. In summary, there is generally quite a lot of room for performance optimization. The back-end layout manipulations are rather complementary to the front-end. In other words, if there is a timing problem in a particular path, just "massaging" the layout geometries of the transistors and interconnects for a critical path for which the timing is off may have a sufficient effect on circuit timing. Accordingly, a timing problem can be eliminated without a more drastic change in the circuit. This is also particularly useful for Hard IP migration, where only physical layout parameters can be manipulated.
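As a rough illustration of the poly-for-metal trimming idea (the sheet-resistance numbers below are placeholders, not process data), re-assigning even a short piece of a route from metal to poly changes the path resistance by a large factor, which is exactly the kind of knob a compaction weighting function can turn:

```python
# Resistance trimming by mixing metal and poly in one route.
# Sheet resistances below are illustrative placeholders, not process data.
R_SHEET = {"metal1": 0.08, "poly": 8.0}     # ohms per square

def route_resistance(width_um, segments):
    """segments: list of (layer, length_um). Resistance = Rsheet * L / W."""
    return sum(R_SHEET[layer] * length / width_um for layer, length in segments)

# 100 um route, 0.5 um wide: all metal vs. 5 um re-assigned to poly
print(route_resistance(0.5, [("metal1", 100.0)]))                 # about 16 ohms
print(route_resistance(0.5, [("metal1", 95.0), ("poly", 5.0)]))   # about 95 ohms
```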
So far, all the discussions have been qualitative. As previously mentioned, intuition would suggest that only minute improvements can be achieved with back-end adjustments. Later in this chapter, we will show that this is not the case. However, before we can do this, we have to discuss the techniques with which performance is analyzed in a DSM VLSI chip to see which layout geometries need to be adjusted for optimal performance.
3.3
THE MODELING OF INTERCONNECTS
With the big influence of interconnects on circuit performance, their electrical behavior has to be adequately modeled. If we want to optimize the layout with respect to interconnects, we need to know which geometrical features to adjust and how. To determine this, we have to take a small detour into the world of interconnect modeling. To "adequately" model interconnects, we need to find a good compromise between sufficient accuracy and computational complexity. As we can see in Figure 3.2, an interconnect is not just a simple discrete circuit element but actually a distributed load, more like a transmission line. For interconnects in today's VLSI chips, the series resistance of the interconnect normally dominates the inductive component. In the following discussions, we will assume this to be the case and will focus on capacitive and resistive effects only. This will be adequate for now. The issue of a distributed load, as opposed to a simple RC network, will be discussed when we examine timing analysis. Figure 3.2 shows the partitioning of the signal transport into the active parts, the buffers, and the passive parts, the interconnects. The illustration does not show where the capacitive components that determine the signal propagation along interconnects come from.
Fig. 3.2 Partitioning a Circuit into Active Parts and Interconnects (annotation in figure: just a metal line with parasitics)
Of course, to examine layout optimization, we need to look at a cross section perpendicular to the interconnect. A typical structure with all or at least the most significant interconnects surrounding and influencing the behavior of the interconnect is depicted in Figure 3.3. This is quite a realistic illustration of an interconnect structure. It is a cross section of a possible geometrical arrangement of interconnects in a multilayer metal chip. It is capacitances like the ones shown in Figure 3.3 that we have to determine. Again, we can already sense the need for approximations.
The model needed is an electrical representation of interconnects to allow a simulation of electrical behavior. This electrical model has to act electrically as the actual interconnect does, within a certain degree of accuracy and a certain range of validity. For cross-sectional geometry similar to Figure 3.3, we now discuss how to determine capacitances and resistances.
Fig. 3.3 A Cross Section of a Typical Interconnect Structure (metal 2, metal 1 and poly layers over the substrate)
Once we have found the parasitic elements for the interconnects, we will explore two aspects that, although related, serve different needs: 1. We need to compute the delay caused by the interconnects between the active elements, the transistors, on a VLSI chip and reduce these delays where needed with layout manipulation. For the time delay, the total capacitive "loading" on an interconnect is important, including the contributions from capacitive coupling with interconnects close to the interconnect we are evaluating. These terms will be discussed further. 2. We need to address cross-coupling between interconnects on a VLSI chip because of signal integrity. Ever since the DSM "scare," the problem of interconnects has received a lot of attention. For now, we are focusing mainly on digital circuits, and the digital VLSI world has for quite some time been dominated by the microprocessor. As everybody knows, speed is one of the key microprocessor selling points, and delay is the key parameter for speed. As layout geometries kept shrinking, interconnect delays became a critical issue long before signal integrity became critical. Much of the focus on interconnects was on delay models as opposed to signal integrity models. As we will see later, this need to determine the delays accurately favored certain approximations unsuitable for signal integrity issues.
3.3.1
PARASITIC COMPONENTS OF THE INTERCONNECTS
Looking at Figures 3.2 and 3.3, we will now discuss how one can determine the capacitances associated with connectors in such an arrangement. We will focus only on parasitic capacitances for now. The resistance part of the interconnect will be determined easily later, although in the end finding the model to be used for the delay analysis is not that straightforward, because of the distributed character of the interconnects. Fortunately, years of timing analysis research on VLSI chips have led to elegant ways to deal with this problem, as we shall see a bit later [6]. The problem of determining the parasitic capacitances surrounding conductors is an electrostatic field problem. It is based on the same simple principles as Coulomb's law. For anybody with a good
understanding of electromagnetic field theory, a look at the cross-sectional view in Figure 3.3 will make it apparent that it is not difficult to conceptually understand these parasitic capacitances. However, solving the mathematical problem of determining their values over a large range of geometrical layout dimensions is quite difficult and computationally demanding. Figure 3.3 illustrates most of the capacitance components surrounding a particular interconnect; the interconnects are held in place by the oxide (the dielectric) surrounding them. The oxide only changes (unfortunately increases) the degree of capacitive coupling between interconnects, because its relative permittivity is greater than one. For an oxide that is uniform, this does not distort the field distribution. For an analysis of time delays and of the coupling affecting signal integrity, we must focus on the dominant capacitive effects to simplify the analysis as much as is consistent with the desired accuracy. In Figure 3.4, we show an interconnect with the capacitive effects of only its nearest neighbors. This is generally sufficient for both the time delay and the coupling analyses. Figure 3.4 shows all the critical physical effects. It shows:
* The capacitance directly under the conductor to the substrate.
* The capacitance due to fringing effects, contributing capacitance from the side walls of the interconnect to the substrate but not coupling with the adjacent interconnects.
* The coupling capacitances to adjacent interconnects.
* Crossover capacitances to different layers.
* Shielding effects that lower capacitance values.
Fig. 3.4 A Realistic Model for Determining the Important Parasitics (annotations in figure: lateral direct coupling and fringe; vertical direct coupling and fringe; vertical shielding, a function of the vertical alignment or non-alignment of conducting strips; lateral shielding, a function of conducting strips on the same layer)
Coupling capacitances beyond the nearest neighbor interconnects are not shown, to simplify the picture at least somewhat and because the shielding effects greatly lower their significance. Finally, to start to "develop a feel" for the geometrical dependency of some of the major capacitive components of interconnects, Figure 3.5 shows a simpler but clearly pertinent geometrical structure for which there are exact solutions [11]. The theoretical basis for understanding the curves in Figure 3.5 is presented in the next section.
Fig. 3.5 Exact Values for Parasitic Capacitances for a "Simpler" Structure (interconnects over a substrate)
Clearly, the partitioning of the capacitances in Figure 3.5 is not as sophisticated as what was shown in Figure 3.4. It is a simplification of what needs to be determined for DSM technologies, and it was published back in 1975, just as an aside. In addition, because it is published data and focuses primarily on showing basic ideas, it can be openly discussed, while most such data based on current DSM geometries is proprietary information. The data in Figure 3.5 is an accurate solution, and it beautifully shows how at least some of the capacitive components illustrated in Figures 3.4 and 3.5 change as a function of one of the important parameters, the spacing between interconnects. The focus of the discussions here is to show the basic ideas of how useful data for the eventual layout optimization of a VLSI chip can be generated. The goal is not to come up with exact results, which are different in every case anyway, but to show and understand some of the assumptions and limitations made in determining DSM parasitic parameters, as well as the advantage of knowing how to "play" with the trade-offs. The curves in Figure 3.5 show certain capacitance values as a function of interconnect spacing. Every point on these curves has been generated with extensive numerical calculations. Clearly, we can not invoke a process of numerical solutions of the partial differential equations and integral equations that are the basis for the results in Figure 3.5 within an optimization algorithm using compaction. Instead, we need to find the simplest possible analytical expressions that are sufficiently accurate, describing these curves through a curve-fitting process. Such an analytical expression can then be used by a compaction engine. Because we can vary the interconnect spacing and simultaneously the width of the interconnects, we need slightly more sophisticated curves. We need curves that also contain the effects due to interconnect width variations. Such curves, as well as the analytical expressions that fit them, are generally part of chip-making companies' know-how. In the next section, we discuss in more detail how to obtain such curves and comment on some of the published results on the subject.
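The curve-fitting step itself can be sketched very simply. The data points below are fictitious, and the chosen form C(s) = a/s^b + c is just one plausible candidate, but the flow - tabulated field-solver results in, a compact analytical expression out - is the one a compaction engine needs:

```python
# Fit a simple analytic form C(s) = a/s**b + c to tabulated coupling
# capacitance vs. spacing. The data points are made up for illustration only.
import numpy as np
from scipy.optimize import curve_fit

spacing = np.array([0.2, 0.3, 0.5, 0.8, 1.2, 2.0])     # um
c_coup  = np.array([95., 62., 37., 23., 15., 9.])       # aF/um (fictitious)

def model(s, a, b, c):
    return a / s**b + c

params, _ = curve_fit(model, spacing, c_coup, p0=(20.0, 1.0, 5.0))
a, b, c = params
print("C(s) ~ %.1f / s^%.2f + %.1f aF/um" % (a, b, c))
print("fitted value at s = 0.4 um: %.1f aF/um" % model(0.4, *params))
```

A real fit would of course be multidimensional, covering interconnect width and thickness as well as spacing, as discussed in the text.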
3.3.2
DETERMINING VALUES FOR THE PARASITICS
To find accurate values for the capacitive components shown in Figure 3.4, complex boundary value problems involving second-order partial differential equations, such as Poisson's or Laplace's
equation, and integral equations, such as Gauss's law, need to be solved. For all but the simplest geometries, there are no closed form solutions for these equations. In other words, there are no exact solutions that would yield a neat, completely accurate analytical expression resulting in nice curves that express the needed capacitance values as a function of any range of the geometrical dimensions, such as the width, the separation from the substrate, the distance from the nearest neighbor connectors and other significant parameters. A quick review of any book on field theory, such as [12], in conjunction with a book containing a section on boundary value problems [13] will show interested readers the following:
1. Only conductors whose geometrical shapes (equipotential boundary surfaces) strictly follow one of the three coordinate systems, Cartesian, cylindrical or spherical, have exact solutions in the form of exact analytical expressions, and these can be found relatively easily. A good example of a cylindrical system is a coax cable. A capacitor with infinitely large parallel flat plates is an example of a Cartesian system.
2. For a rather limited set of shapes, a mathematical discipline called conformal mapping, often referred to as the Schwarz-Christoffel transformation, can be used to "bend" geometries into shapes that correspond to one of the three coordinate systems, which can lead to mathematically tractable forms and closed form solutions. A good reference is also [13]. This is a fascinating area of mathematics. Some interesting problems have been solved using this transformation, but not many. It is generally difficult to apply.
3. Finally, for most "real world" problems, as shown in Figures 3.3, 3.4 and 3.5, only approximate numerical solutions can be found, which are nevertheless very accurate. Numerical solutions are based on solving the boundary value problem for small pieces of the space filled with the electrical field between the positive and negative pole of the structure to be analyzed. These pieces are small enough that their boundaries, as a linear approximation, follow imaginary equipotential surfaces. Such shapes should follow rectangular (Cartesian), cylindrical or spherical coordinates, depending on the problem to be solved. Every little piece of space yields partial capacitances, the sum of which equals the total capacitance associated with the total electrical field between the two poles. It takes a lot of computer power to achieve accurate results. Commercially available tools that do this are called field solvers. It is obvious that the finer the mesh for determining partial capacitances, the larger the investment in computer power and the more accurate the results. Also, the more contorted the electrical field lines are, the finer the mesh has to be. For every pair of poles, all the partial capacitances add up to the total capacitance. If there are several pairs of poles, the capacitances for each one of them have to be determined. The total capacitance from all pairs of poles is the superposition of all these capacitances. This is a big but unavoidable job for current DSM VLSI chips. Once the electrical field configurations and the partial capacitances have been calculated and summed up, curves result such as those shown in Figure 3.5. They give us the values for many of the capacitances we might have to know. That's the good news. The bad news is that the curves in Figure 3.5 are only a function of the spacing between interconnects.
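For readers who want to see the field-solver idea in miniature, the following heavily simplified two-dimensional sketch relaxes Laplace's equation on a coarse grid for one rectangular conductor in a grounded box and then sums the charge on the conductor boundary via Gauss's law. The geometry, grid and iteration count are arbitrary choices for illustration; production field solvers use far more sophisticated meshing, convergence control and three-dimensional structures.

```python
# Toy 2-D field solver: one rectangular conductor in a grounded box,
# uniform dielectric. Jacobi relaxation of Laplace's equation, then a
# rough capacitance-per-unit-length estimate from Gauss's law.
import numpy as np

NX, NY, H = 80, 60, 0.05e-6          # grid size and spacing (m), illustrative
EPS = 3.9 * 8.854e-12                # SiO2 permittivity

V = np.zeros((NY, NX))
cond = np.zeros((NY, NX), dtype=bool)
cond[30:34, 35:45] = True            # conductor cross section, held at 1 V
V[cond] = 1.0

for _ in range(5000):                # fixed iteration count for brevity
    Vn = V.copy()
    Vn[1:-1, 1:-1] = 0.25 * (V[:-2, 1:-1] + V[2:, 1:-1] + V[1:-1, :-2] + V[1:-1, 2:])
    Vn[cond] = 1.0                                       # re-impose conductor potential
    Vn[0, :] = Vn[-1, :] = Vn[:, 0] = Vn[:, -1] = 0.0    # grounded box walls
    V = Vn

# Charge per unit length: integrate the normal E field just outside the conductor.
Ey, Ex = np.gradient(-V, H)
q = 0.0
for y, x in zip(*np.where(cond)):
    for dy, dx, E in ((1, 0, Ey), (-1, 0, Ey), (0, 1, Ex), (0, -1, Ex)):
        if not cond[y + dy, x + dx]:
            q += EPS * abs(E[y + dy, x + dx]) * H
print("C ~ %.3e F/m (toy estimate)" % q)     # divided by 1 V
```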
This means that more work is required. Eventually, we need curves for all the relevant capacitances as a function of all the significant geometrical variables. Then, once we have all of these curves, we have to find an analytical expression that describes these curves. This is a curve-fitting process. Design tools can work with analytical expressions that are based on such curves. This process of curve-fitting has led to many analytic expressions that have been used in the industry over the years. It is important for the user of such curves to understand the approximations in these
mathematical expressions. To obtain a good fit over a large range of physical dimensions, curve-fitting often needs to be skewed to give good results for certain parameters or for use in an analysis with specific needs. For instance, we have seen the various capacitive components in Figure 3.4 and in a simpler model in Figure 3.5. These capacitances are due to different physical phenomena. Because of this, each one varies differently with the process variables and layout parameters. When we use curve-fitting, do we look for a good fit for every one of these capacitances, or do we use it to model a particular physical phenomenon as closely as possible? If we are analyzing the time delay properties of a connector, we will want to use curve-fitting to find the total capacitive loading on the connector. If we are making an analysis for signal integrity, we will want to use curve-fitting for the coupling capacitances, to maximize the accuracy for coupling between connectors. When interconnects started to affect the performance of VLSI chips, the first effects related to timing, due to the time delays coming from the interconnects. Because of this time delay focus, most of the curve-fitting done to determine parasitic capacitances was, until very recently, for the total capacitive loading on interconnects. Consequently, most published analytical expressions are accurate for the total capacitive loading and the time delay, but not for coupling between interconnects [9,14]. This was probably acceptable for the state of the technology at the time these results were published. However, for layout optimization, often with a focus on minimizing cross-coupling to maintain signal integrity, the coupling capacitance as a function of the proximity of the nearest neighbors may be just as important. Especially as minimum dimensions on chips continue to shrink and supply voltages continue to be reduced because of problems with power dissipation, signal integrity becomes increasingly an issue. Smaller power supply voltages mean smaller signal amplitudes in chips, making them more vulnerable to noise and coupling. With the current rapidly changing technologies, constant re-evaluation of the approximations used in analytical expressions for capacitive components is needed. The most accurate analytical expressions used for today's most advanced processes are most certainly refinements of the data published in past years. Unfortunately, many of them are proprietary and not accessible to outsiders. One word about the enormous investment of computer resources required to generate accurate data for the process and electrical parameters needed for the analysis and optimization of layouts: this investment continuously adds to a manufacturer's knowledge base and supports the continuous development of better and better processes and models with which customers can analyze their designs. It is, therefore, well worth the effort.
3.3.3
CAPACITANCES AFFECTING INTERCONNECTS Figure 3.4 shows the capacitances that affect the interconnect in the center. For the coming discussion on timing and cross-coupling, we will assume a structure as depicted in Figure 3.4, with the interconnect in the center being driven by a signal source. The dominant capacitances are the center interconnect's capacitance to the substrate, the cross-coupling capacitances to the nearest neighbors and the nearest neighbors' capacitances to the substrate. For curve-fitting, it may be useful to focus on certain physical effects, one at a time. For time delays and dynamic power consumption, the total capacitive loading on the center interconnect is important. The individual capacitances in Figure 3.4 should be examined when focusing on cross-coupling and signal integrity issues. For the following discussions on layout optimization, we will focus on parameters that can be varied with layout manipulations. Since we are interested primarily in postlayout optimization, the choice of
geometries that can be manipulated is even more limited. The only interconnect parameters we can change are the width and, to some extent, the shape of an interconnect, and its proximity to other interconnects. Of course, we can also and will change the dimensions of the active devices. However, we will focus the discussion for now on interconnects. In the following discussions, we start with timing issues for digital circuits.
3.4
TIME DELAY ANALYSIS IN DIGITAL VLSI CIRCUITS Since we are focusing on the performance optimization of VLSI chips through layout manipulation after place and route, we need to reliably determine those layout parameters that have the largest effect on the timing of a chip fabricated with a DSM technology. Therefore, we examine only the most appropriate timing analysis methods used for VLSI chips, to determine which layout dimensions are the dominant parameters affecting timing. These parameters then give us the information necessary for layout optimization. We also need to understand the limitations and underlying assumptions in these timing analysis techniques. Looking at timing analysis and the dominant layout parameters, we will learn that substantial speed optimization can be achieved through adjustments of interconnect dimensions in conjunction with the buffer stages driving them. The most appropriate timing analysis for a VLSI chip depends on the character of the circuit and the timing information needed. For now, we will focus on digital circuits. The timing information needed to determine and optimize the performance of a digital circuit is very different from what we would need to know for an analog circuit. We will see, however, that effects such as interconnect cross-coupling turn out to be analog effects, requiring detailed pulse-shape information even for digital circuits. Focusing on digital circuits, the next questions concern the level at which we want to determine and verify correct circuit timing behavior. Do we need a timing analysis at a high, functional level, or do we have to go all the way down to the transistor level? Since we are interested in physical layout optimization involving the polygon level, the analysis will have to be at the lowest and most detailed level, the transistor level. We will see later what this means in terms of the complexity of the transistor models used for the timing analysis. Normally, analyzing the time behavior of a digital circuit involves determining how a circuit moves through its digital - its binary - states with time, since digital circuits are state machines. Unfortunately, processing state information for timing analysis is too time-consuming for most situations, especially for the current complex VLSI circuits. In addition, if we wanted to perform a state-dependent simulation, we would also need to generate the appropriate simulation vector suites before we could even get started. Fortunately, the changes made for physical layout optimization do not affect a circuit's functional, state-dependent behavior. This means that we can focus on timing alone, and just on time delays for now. To know the timing, the highest clock frequency at which a VLSI digital circuit can operate, we have to determine the longest time delay among all the paths between the circuits that latch the information on the clock edges. The longest delay path found still has to fit within the shortest clock cycle desired for the circuit. It will determine the highest clock frequency at which the circuit can run. Figure 3.2 showed the typical configuration of a circuit, simply to highlight the parasitics in the active parts of a circuit and to suggest the distributed nature of the passive parts, the interconnects. Figure 3.2 does not show how signals get latched with clocked registers or latches. This is, however, how
signals propagate through digital circuits. This process of signal propagation is described in detail in many books on digital systems, where parameters such as setup and hold times are carefully explained. The concepts are just summarized here: the time delays along all the signal paths in a circuit must enable every signal coming from a latch at the beginning of a path to pass through the corresponding latch at the end of that path before the latch "closes." It closes with the clock edge. In other words, the data has to be available and stable so that the clock latches the correct data. The longest path of this type is called the critical path. The path delays between clocked latches determine a circuit's maximum possible clocking frequency. By focusing exclusively on time delay, we will determine the paths in the digital circuit that may present problems, paths that are marginal. The tools that determine only the delays in digital circuits, and no state information, are the TAs (timing analyzers). TAs became popular even before interconnects started to dominate timing. In a "well controlled" environment, such as a silicon compiler, they were used successfully twenty years ago. The other main reason for their widespread usage was that TAs are literally designed for MOS technology, which became the unquestionably dominant design methodology at about that time. Today, the TA approach is also very well suited to timing verification for the exceedingly complex VLSI chips, because just determining time delays is much simpler than simulating a circuit through its states. It is also very fast, and TAs are a very good fit for layout optimization in DSM technologies. No simulation vectors need to be generated and, in return, no state information is determined. The resulting data is strictly time delays. However, while TAs are very useful for determining time delays in digital circuits, their results are valid only under certain physical assumptions. We will examine the physical assumptions for which TAs apply later, when we discuss interconnect modeling for time delay. In summary, digital simulation is needed to determine the functionality of digital circuits. It is well known that TAs can report critical paths in a circuit that are not logically possible. This is, of course, due to the lack of state information. The person using the TA has to know whether a very slow critical path indicated by the TA is actually a logically possible path. However, many of these shortcomings of TAs are far outweighed by their advantages. The result of the timing analysis is a knowledge of the critical timing paths. Layout optimization can then focus on these paths in correcting timing problems.
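The core of a TA is nothing more than a longest-path computation over a graph of gate and interconnect delays, as in the following minimal sketch (the pin names and delay values are invented, and no false-path or state analysis is attempted, exactly as noted above):

```python
from collections import defaultdict

# Timing graph: node -> [(successor, delay_ns)]; delays are made-up examples.
edges = {
    "FF1.Q": [("U1.A", 0.12)],
    "U1.A":  [("U1.Z", 0.30)],
    "U1.Z":  [("U2.A", 0.45), ("U3.A", 0.20)],
    "U2.A":  [("U2.Z", 0.25)],
    "U2.Z":  [("FF2.D", 0.10)],
    "U3.A":  [("U3.Z", 0.35)],
    "U3.Z":  [("FF2.D", 0.15)],
    "FF2.D": [],
}

def critical_path(edges):
    """Propagate arrival times in topological order; keep the worst predecessor."""
    arrival = defaultdict(float)
    pred = {}
    for u in edges:                      # the dict order is topological in this example
        for v, d in edges[u]:
            if arrival[u] + d > arrival[v]:
                arrival[v] = arrival[u] + d
                pred[v] = u
    end = max(arrival, key=arrival.get)
    path = [end]
    while path[-1] in pred:
        path.append(pred[path[-1]])
    return arrival[end], path[::-1]

delay, path = critical_path(edges)
print("critical path: %s  (%.2f ns)" % (" -> ".join(path), delay))
```

The result is exactly the kind of information layout optimization needs: the marginal paths whose transistors and interconnects are worth "massaging."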
3.4.1
MODELING FOR TIMING ANALYSIS So far, we have discussed some of the techniques for determining approximate but accurate capacitance values for interconnects. Together with other parameters, such as the interconnect series resistance, these parasitic capacitances will be needed to model interconnects. The series resistance of interconnects requires only a simple calculation based on sheet resistance. Now that we have accurate values for capacitances and resistances, we need to find appropriate models for timing analysis. We will focus on finding models for layout optimization that are accurate enough yet computationally manageable. Such models do exist, and they will be discussed below. The main challenge, as indicated in Figure 3.2, is that the interconnect part of the chip increasingly behaves like a lossy but linear transmission line, as layout geometries continue to shrink and clocking frequencies increase. We assume that the series resistance in the interconnect dominates to such an extent that the inductance can be neglected, even for the latest VLSI geometries and operating speeds. As suggested in the
literature, this is at present a very reasonable assumption. Also, most of the modeling discussions here and the published information on interconnects and buffers make this assumption. New challenges will arise if and when series inductance starts to become important. Lossy RC transmission lines in conjunction with the other circuit components on a chip create computational difficulties. Intelligent, discrete approximate circuits need to be found whenever distributed loads are present in a model. This is the path that was pursued for interconnects. Elmore and PRH have done pioneering work [6] to find acceptable trade-offs between computational complexity and accuracy. We will first look at some results and then mention some of the assumptions made. Before clocking frequencies were so high, when accurate models for interconnects were not so critical, the model for an interconnect was a simple RC circuit with a single R and a single C component. We will refer to this as a "lumped" equivalent circuit of the interconnect. The capacitance value was the total "parallel plate" capacitance of the interconnect over the length between buffer stages, and the resistance was the total series resistance over the same length. Gauging the range of possible solutions, this would be one limiting case. The other end of the spectrum would be a lossy RC transmission line, a continuum. But remember, this means solving a partial differential equation! We definitely need to find something in between. An obvious compromise is: as few sections as possible with acceptable accuracy.
Elmore and PRH found a good compromise. Assuming a step function at the input of such an interconnect and focusing strictly on delay (not the detailed pulse shape), we show in Figure 3.6 the fruits of Elmore and PRH's work.
Fig. 3.6 The Result of Intelligent Curve-Fitting (step response of a distributed RC line vs. a lumped RC and simple T- and pi-network approximations; delay to 90%: distributed 1.0 RC, lumped 2 RC; delay to 50%: distributed 0.4 RC, lumped 0.7 RC; for the three-section pi-network the error is only 3%)
The slower rising signal at the left side of the illustration (the one marked lumped) is the response to a step input of a simple RC circuit with a single R and a single C component, as discussed above. Both C and R are equal to the total capacitance and total resistance of the interconnect. The faster rising signal at the left of the illustration is the response to a step input of an exact representation of a distributed, lossy RC transmission line. Obviously, a simple, lumped RC circuit is inadequate in terms of accuracy. At the other extreme, a transmission line is not computationally tolerable. However, numerical calculations coupled with intelligent curve-fitting yield accurate responses for relatively simple circuits consisting of a few lumps, as shown at the right side of the illustration in Figure 3.6.
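The convergence from the lumped extreme toward the distributed line can be checked numerically with the Elmore delay of an n-section ladder, as in this small sketch (R and C are illustrative totals): a single lump gives 1.0 RC, while many sections approach the distributed limit of 0.5 RC. The 50% and 90% response-time factors quoted in Figure 3.6 come from the actual step response rather than from this delay metric.

```python
def elmore_ladder(R, C, n):
    """Elmore delay of n equal RC sections: each capacitor times the
    resistance upstream of it, summed over all sections."""
    r_seg, c_seg = R / n, C / n
    return sum((i * r_seg) * c_seg for i in range(1, n + 1))

R, C = 1000.0, 1e-12            # 1 kOhm, 1 pF total (illustrative)
for n in (1, 2, 3, 10, 100):
    print("n=%3d  Elmore delay = %.3f * RC" % (n, elmore_ladder(R, C, n) / (R * C)))
# n=1 is the single lump (1.0*RC); large n approaches the distributed limit of 0.5*RC.
```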
One of the key assumptions for these approximations is a step-function input. Large errors start to occur for a slowly rising signal at the input of such an equivalent circuit. The small percentage error is really remarkable, considering that a lossy transmission line is being approximated with just a few discrete components. Good delay and speed information can be calculated using these "simple" approximations. Some of the layout-related issues are: 1. As for the critical capacitance values, curve-fitting was used to obtain good accuracy for the Elmore-based capacitance and resistance values. This clearly demonstrates how powerful curve-fitting based on numerical results can be. Generally, SPICE is the "standard" for accuracy comparisons. 2. As previously discussed, curve-fitting is generally done with a particular application in mind, just as for the parasitic capacitance calculations discussed in Section 3.3.2. Let us summarize some of the assumptions underlying the models used for TAs, without going into complicated network-theoretic arguments. We should also review what can and can not be done with these results: 1. The results contain information on the delay along an RC combination of elements, and this delay is a result of the total capacitive loading caused by this RC combination. No inductances are assumed. 2. We do not obtain information about the details of the signal shape. If the signal response contains any "ringing," we do not know it. Ringing can be very detrimental to performance, especially to cross-coupling and signal integrity. 3. Since we do not know the signal shape, we do not know the rate of change, the first derivative with time, of the signal. For cross-coupling, this is key. Before chips became as large as they are today, the most popular and most accurate timing analysis was based on SPICE. It is still the standard of reference today. SPICE determines signal rise and fall times, measured from the 10% point to the 90% point of a signal, and the time delay from point to point, measured between the 50% points of the signal. However, SPICE yields additional, very useful information, such as the exact waveform and, therefore, the rate of change of the signal at any point. This type of information will be of paramount importance for matters related to cross-coupling and signal integrity. While SPICE can analyze any circuit with resistors, capacitors, inductors, current and voltage sources, a SPICE simulation of major parts of a chip rapidly becomes impractical because it is simply too slow, even with current computer power. It is suitable only for a rather small number of transistors. That is the reason why SPICE has to be used judiciously, where accuracy and detailed timing information are needed. Fortunately,
detailed time behavior provided by SPICE is not needed most of the time for digital circuits, particularly for time delay analysis. This is the key to the success of TAs.
3.5
PERFORMANCE OPTIMIZATION WITH LAYOUT PARAMETERS To optimize the performance of a chip, the physical dimensions that affect performance can be adjusted at different points in the process.
For the fabrication of the chips, there are all the geometrical and physical parameters that are manipulated when a chip fabrication process is designed. In general, there is a lot of talk about visible changes, such as the minimum possible channel length for a MOS transistor. However, there are many others that are constantly balanced against each other, parameters such as metal thickness, oxide thickness, metal composition to lower resistance and increase current carrying capability, and the k factor of the oxide to lower capacitive coupling. There are many more, but some of them only became important with DSM capability. So why talk about processing when the focus of our discussion is layout manipulation using compaction? The reason is that, although compaction can not change processing parameters, some of the statistical feedback discussed in Chapter 2 can be used to help optimize some of these choices. We will not, however, discuss these issues further here, although it is important to know about these possibilities to facilitate cooperation between design and processing engineers.
3.5.1. FRONT-END OPTIMIZATION Although the front-end of chip design is not layout design, the information provided here is directly linked to back-end optimization. In a sense, the boundary between front-end and back-end is routing. In fact, compaction is a "silent partner" in routing for two reasons: 1. Buffer insertion may involve both front-end and back-end (postlayout) operations. If the insertion happens before or during routing, the buffer sizing is based on estimates and is a front-end procedure. If the inserted buffer stage is sized correctly and the timing problem eliminated, compaction is unnecessary. However, since the buffer sizing is based on estimates, this may not happen. If the buffer size needs to be tweaked, it becomes a back-end optimization problem, discussed in the next section. 2. As we will discuss in this optimization chapter, the second basis for the partnership between routing and optimization is compaction, which, applied to the interconnect itself, is very powerful. Compaction performed on the interconnect alone may solve many timing problems. Inserting buffer stages into longer interconnects is an established and proven approach that is still practiced to speed up long interconnect paths [8]. Unfortunately, it adds power to already overheated chips. Selecting the best routing algorithm for a particular chip layout is another complex challenge that is getting worse with DSM. The following discussion will highlight the extreme flexibility required in routing to keep up with rapidly changing processing technologies. The goals for floorplanning or placement for DSM processes are quite clear: key interconnects should be as short as possible. The goals for routing are not only short key interconnects but also predictable lengths and timing, in addition to some other requirements. This is a difficult task for a router and, since routing is one of the more compute-intensive tasks, there are limits on how many constraints can be forced on a router. And some aspects of routing are getting even more complicated than that. For instance, balancing the lengths of clock trees may not result in equal delays. Some of the criteria used for routing algorithms that yielded the highest performance chips before DSM technology are no longer satisfactory. Extensive research into interconnect topology optimization shows that there are different "best" approaches, depending on the application and on shrinking layout geometries. Obtaining an optimally laid out and routed chip is indeed a difficult and multidimensional task. The interested reader may find the following references useful for gaining an appreciation of the complexity of the problem.
The abbreviations of the titles given here are only partly indicative and largely confusing. They are the names of routers, most of them the initials of their creators. They are all routing algorithms based on different cost (performance) criteria: bounded-radius bounded-cost trees, AHHK trees, maximum performance trees, A-trees, low-delay trees, IDW/CFD trees [3]. This large variety of possibilities strongly suggests that there will be room for back-end optimization for quite some time to come and that it is a constantly changing target - if, of course, you want to squeeze the maximum performance out of a chip with optimization.
3.5.2
BACK-END OPTIMIZATION It took years of painstaking research [3] to show the significance of optimizing what already seemed to be close to the optimum: a timing-driven, laid-out VLSI chip. One of the key issues is that all this timing-driven work is based too extensively on statistical data, in other words on past performance. We already know what this kind of data means in the stock market. It may not be as bad for chips, but it is certainly not perfect. Of course, progress is constantly being made. Timing-driven approaches are getting better all the time. However, the technology also keeps advancing. As we have seen in Section 3.5.1, there are plenty of routing algorithms. The best time to truly optimize the performance of a well designed chip is when it is completely laid out. This makes perfect sense. Of course, if the timing is completely off, postlayout optimization is not going to "save the chip." However, it is truly amazing how much leverage is still possible at this point in time. The simplest approach to back-end "optimization" is to enlarge transistor sizes in slower paths. This is done commercially with tools such as AMPS. We discuss this tool in Chapter 6. Another approach already discussed is the insertion and optimization of buffers. Both of these approaches increase the speed of a chip at the price of increased chip area and power consumption. It seems that none of the commercial approaches takes the geometrical dimensions of interconnects into account. Very recent research suggests that the best results are achieved if both the interconnects and the transistors driving them are optimized simultaneously, like a matched pair. Actually, two distinct approaches have been suggested. One algorithm optimizes the interconnect/transistor pair for the fastest speed irrespective of power consumption. Another algorithm optimizes speed while at the same time minimizing power dissipation [3]. Experimental and computational results have shown substantial improvements in these vital performance parameters. The improvements are based solely on back-end optimization, independent of the improvements that can be achieved with optimal routing. Since the exact numbers depend on the particular application, it is adequate for this discussion to put the potential reduction in both delay and power consumption at around 50% for a path that has been optimized. This
is simply too enormous to remain nonchalant about it!
Such results, based so far on university research, deserve serious consideration.
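The flavor of such a simultaneous interconnect/driver optimization can be conveyed with a brute-force sweep over driver size and wire width, using a crude Elmore-style delay and a dynamic-power estimate. The coefficients are placeholders, and this is not the algorithm of [3], only an illustration of the trade-off being searched:

```python
# Joint driver-size / wire-width sweep: find the fastest point and a
# delay*power compromise. Device and wire coefficients are illustrative only.
R0, C0 = 10e3, 1e-15                     # unit driver output R and input C
RS, CA, CF = 0.08, 0.04e-15, 0.02e-15    # wire: ohm/square, area/fringe cap per um
L = 1000.0                               # wire length in um
VDD, F = 1.2, 1e9                        # supply voltage and switching frequency

def delay_power(k, w):
    r_w = RS * L / w
    c_w = (CA * w + CF) * L
    d = 0.69 * (R0 / k) * (c_w + C0) + 0.38 * r_w * c_w   # Elmore-style delay
    p = (c_w + k * C0) * VDD**2 * F                       # dynamic power
    return d, p

candidates = [(k, w, *delay_power(k, w))
              for k in (1, 2, 4, 8, 16, 32, 64) for w in (0.2, 0.4, 0.8, 1.6)]
fastest  = min(candidates, key=lambda t: t[2])
best_dp  = min(candidates, key=lambda t: t[2] * t[3])     # energy-delay style metric
print("fastest:        k=%d w=%.1f  d=%.2e s  p=%.2e W" % fastest)
print("best delay*P:   k=%d w=%.1f  d=%.2e s  p=%.2e W" % best_dp)
```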
3.6
CAPACITIVE EFFECTS BETWEEN INTERCONNECTS Capacitive effects between interconnects have become much more important lately. Not only do they introduce new phenomena, such as signals on closely spaced interconnects affecting each other, they also increase the effective capacitive loading, slowing down DSM VLSI chips.
3.6.1
CROSS-COUPLING BETWEEN INTERCONNECTS
In terms of processing, one of the limiting factors in fabricating higher and higher density chips is how closely metal interconnects can be placed together. Metal interconnects occupy a sizable percentage of the real estate on a VLSI chip. Fabrication-limited layout density is restricted by such factors as clean room classification, the type of photoresist used, the status of the etching technology, and the optical resolution or, alternatively, the e-beam techniques, to name just a very few. A challenge for narrow, closely spaced interconnects is to avoid bridging that would create electrical shorts. In terms of reliability and voltage supply, there are other limiting factors. While making the metal lines as narrow as possible, we still need to guarantee a certain current carrying capability to minimize electromigration and to minimize resistive voltage drops along the metal interconnects. This is all the more critical today, because power supply voltages are continuously being reduced to minimize power consumption in VLSI chips. Problems with both bridging and current carrying capability can be minimized with plasma etching, a huge step forward from classical wet-etch techniques. Plasma etching allows metal interconnects with nice, rectangular cross sections as opposed to a trapezoidal and, for narrow widths, even a triangular cross section. Current carrying capability is also optimized by making the metal as thick as is compatible with processing challenges such as step coverage. The other key factor for increasing current carrying capability is progress in finding the best metal composition or doping, such as with copper or other metals. Finally, while there have been great advancements in interconnect layout density while minimizing voltage drops and maximizing reliability with large current carrying capability, they have brought with them a serious challenge for the dynamic performance of VLSI chips. The positive geometrical factors of thick, rectangular interconnects in close proximity to each other also maximize the capacitive coupling between interconnects. Figure 3.7 shows such an "ideal" cross section with the associated capacitance components. For pre-DSM technologies, the capacitive component to the substrate is large but the capacitive component between interconnects is small. For DSM technologies it is, of course, the opposite. Dynamic performance parameters such as cross-talk, which must be controlled to maintain signal integrity, and excessive capacitive loading are rapidly becoming major issues.
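The current-carrying constraints mentioned above are easy to check once a wire's geometry is known. The sketch below uses placeholder sheet-resistance, thickness and electromigration-limit values to show how width trades off against IR drop and current density:

```python
# Quick current-density and IR-drop check for a metal line.
# Numbers (sheet resistance, thickness, J limit) are illustrative only.
def wire_check(width_um, length_um, current_mA,
               r_sheet=0.08, thickness_um=0.5, j_limit_mA_per_um2=1.0):
    resistance = r_sheet * length_um / width_um            # ohms
    ir_drop = resistance * current_mA * 1e-3               # volts
    j = current_mA / (width_um * thickness_um)             # mA per um^2
    return resistance, ir_drop, j, j <= j_limit_mA_per_um2

for w in (0.3, 0.6, 1.2):
    r, drop, j, ok = wire_check(w, length_um=500.0, current_mA=0.8)
    print("w=%.1f um: R=%.1f ohm, IR drop=%.3f V, J=%.2f mA/um^2, EM ok: %s"
          % (w, r, drop, j, ok))
```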
Fig. 3.7 The Change of Interconnect Cross Sections and Parasitics (pre-DSM geometry at 0.5 µm vs. DSM geometry at 0.25 µm and 0.18 µm)
Cross-talk induces noise, and noise is increasingly a challenge in VLSI chips, due to the lowering of power supply voltages and the resulting smaller signal amplitudes, which are more susceptible to noise. However, it is not just the cross-coupling between closely spaced interconnects that creates problems. While there is a variety of noise sources in VLSI chips, most of them related to the physical layout, our discussion here is limited exclusively to effects caused by placing interconnects closely together. This increases the capacitive coupling between interconnects and affects signal integrity and capacitive
loading. Capacitive loading increases signal delay and power consumption. We focus on these two effects because they can be strongly influenced by layout compaction. These compaction steps are best performed at the back-end, after floorplanning and place & route. Of course, smaller signal amplitudes due to smaller power supply voltages do not mean less cross-coupling, because the degree of capacitive coupling depends on the rate of signal change and not on the signal amplitudes. Because of the increases in speed, these rates of change also increase. Since the capacitive coupling causing cross-talk and capacitive loading between closely spaced interconnects occurs only when the voltage between these interconnects changes (as when two lines switch simultaneously in opposite directions), there are several possibilities: 1. If a signal is propagating on a primary interconnect while there is no signal on a nearby secondary interconnect, capacitive coupling will induce noise on the secondary interconnect proportional to the coupling capacitance between them. 2. If signals of opposite polarities meet at any time on two closely spaced interconnects, capacitive coupling will occur that is twice as strong as with only one signal on one interconnect. This is known as the Miller effect and is the worst-case coupling. 3. If signals with the same polarity meet at any time on two closely spaced interconnects, no capacitive coupling will occur between the interconnects. Of course, anything between these limits can occur if the signals are shifted in time against each other or if their rates of change, their rise times, differ. Because of the uncertainty of exactly what might happen, the design should account for the worst case, such as opposite-polarity signals, and be based on the largest possible rate of change of the signals. In terms of cross-coupling analysis, this depends very sensitively on the exact signal shape and especially on any ringing or spikes in a signal. For instance, ringing or spikes can occur for fast rise times on longer interconnects due to transmission line impedance-mismatch-induced reflections. The very successful first-order distributed Elmore delay models are a good approximation for "well-behaved" signals when determining delay times and rise times. Different types of timing analyses should be used to check for noise and dynamic coupling.
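These three scenarios are usually folded into an effective coupling capacitance through a Miller factor of 0, 1 or 2, as in the following small sketch (the capacitance values are made up):

```python
# Effective capacitive load on a victim interconnect for the three
# switching scenarios in the text. Capacitance values are illustrative.
C_GND, C_COUPLE = 20e-15, 30e-15     # to substrate, to one neighbor (farads)

def effective_load(neighbor_activity):
    """'quiet'    -> Miller factor 1 (couples to a static node),
       'opposite' -> 2 (worst case),
       'same'     -> 0 (no voltage change across the coupling capacitance)."""
    miller = {"quiet": 1.0, "opposite": 2.0, "same": 0.0}[neighbor_activity]
    return C_GND + miller * C_COUPLE

for case in ("same", "quiet", "opposite"):
    print("%-9s neighbor: C_eff = %.0f fF" % (case, effective_load(case) * 1e15))
```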
3.6.2 MINIMIZING CROSS-COUPLING
We have sought to maximize the metal interconnect packing density while at the same time taking cross-coupling into account. Again the question: What can be done at the front-end versus the back-end?
Clearly, as previously discussed, the main leverage is at the front-end. Any back-end adjustment on a layout that has not been designed with noise sources in mind may be too little, too late. This is particularly true for noise problems, because the coupling between adjacent interconnects discussed here is only one of several possible noise sources in a VLSI chip. But again, back-end adjustments can be performed for optimization based on complete knowledge of a finished layout. Now focusing only on interconnect coupling, increasing the thickness (height) of interconnects helps with current carrying capability and minimizing voltage drops, while the capacitance between interconnects for a given interconnect separation will increase. The challenge is as follows:
Can we lower this cross-coupling component by increasing the distance between interconnects without paying a penalty on packing density? One approach used by the industry is to take advantage only of "empty" space. Often, metal interconnects are placed closely together on a chip even when there are actually unoccupied areas surrounding the interconnects. So why not spread these interconnects within the available area without affecting the placement of any of the rest of the layout? Figure 3.8 shows an example of such spreading. On the left side is the layout before spreading, while the layout after spreading is shown on the right side. Such an adjustment is not only "free" in terms of layout density, but much is gained in terms of lowering the cross-coupling and the capacitive-loading-induced additional power consumption, and fabrication yield may also increase. Of course, not all interconnects are equally susceptible to coupling. Another level of sophistication in spreading interconnects is to prioritize the interconnects that more urgently need to be spread apart in comparison to others. This is also possible. We discuss the features of a tool allowing such spreading in Chapter 6. So far, we have only minimized the degree of cross-coupling. After spreading the interconnects to the maximum, we have minimized the coupling as much as possible by modification at the back-end. Before moving to masks, we need to determine whether a VLSI chip actually works, whether the noise problems have been sufficiently eliminated. At present, tools are reaching the market that analyze at the layout level whether a VLSI chip is expected to pass the noise test.
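As a rough, one-dimensional illustration of the spreading idea, the Python sketch below re-spaces equal-width parallel wires evenly across the free space of a channel without changing their order. It is a simplification under stated assumptions, not the behavior of any commercial spreader; the function and its parameters are hypothetical.

# Minimal 1-D sketch (not a commercial tool): spread parallel wires evenly
# across the free space of a channel without changing their order, so that
# coupling between neighbors is reduced at no cost in area.

def spread_wires(positions, width, channel_min, channel_max, min_space):
    """Re-space wires (given by their left edges) evenly inside the channel.

    positions   -- sorted left-edge coordinates of equal-width wires
    width       -- wire width
    channel_min -- left boundary of the free region
    channel_max -- right boundary of the free region
    min_space   -- minimum legal spacing (never violated)
    """
    n = len(positions)
    free = (channel_max - channel_min) - n * width
    # Distribute the free space into n+1 equal gaps, but never below min_space.
    gap = max(free / (n + 1), min_space)
    new_positions = []
    x = channel_min + gap
    for _ in range(n):
        new_positions.append(x)
        x += width + gap
    return new_positions

if __name__ == "__main__":
    # Three 1-unit-wide wires packed at minimum spacing on the left of a
    # 20-unit-wide channel; spreading uses the empty space on the right.
    print(spread_wires([0.0, 2.0, 4.0], width=1.0,
                       channel_min=0.0, channel_max=20.0, min_space=1.0))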
Fig. 3.8 Lower Cross-Coupling & Higher Yield Keeping the Same Area

3.7 OPTIMIZING THE ACTIVE PART
The usefulness and success of today's simple switch-level transistor models in TAs are based on extensive use and a rather limited focus on only determining time delay in digital circuits. For more detailed analyses requiring an accurate knowledge of the signal shape, the authoritative comparison is still a SPICE run, which uses complicated SPICE equivalent circuits for the transistors. Even if limited to digital circuits, transistor models will have to continue to evolve with shrinking minimum layout geometries, if for no other reason than to know when a new physics effect may start to affect DSM VLSI circuits in unexpected ways. Fortunately, current switch-level-based TAs work just fine for now for delay analysis. We have also determined that it is the interconnects that dominate the timing on DSM VLSI circuits. Does that mean we should only look at the interconnects when we optimize the layout geometry for minimum delay and power consumption?
This question is prompted largely by the fact that the commercial solutions available at present modify only the transistors to achieve the above goals.
3.7.1 OTHER OPTIMIZATION ISSUES
With the push towards increased layout density for all the previously mentioned reasons, the geometrical separation between some of the elements in a layout is often smaller than necessary for optimal performance. Needless to say, the other key fallout from such a layout is a lower yield than what could optimally be achieved for a VLSI chip. Lately, there have also been allusions to this in the literature. Optimizing yield is clearly a big financial issue.
3.8 CONCLUSIONS TO PERFORMANCE OPTIMIZATION
We have reviewed some of the issues dealing with optimizing the physical layout of DSM VLSI circuits. The focus of this optimization was primarily performance optimization and managing runaway power consumption, since these VLSI chips pack more and more functionality into smaller areas. It is self-evident that for something as complicated as fabricating a multimillion-transistor VLSI chip, there have to be many process steps along the way that could be optimized. Listening to person after person and speaker after speaker in the EDA field gives the distinct impression that all design problems will be solved through more intelligent synthesis and place and route techniques. To put it diplomatically, this is tunnel vision. Just as doctors should examine the entire person, the VLSI design community should examine the entire design process.
In this chapter, we have specified that by front-end we mean everything up to and including place and route. Back-end addresses a VLSI chip design after everything has been put in place at the GDS2 level. Back-end layout manipulations literally amount to what could be called "massaging" the layout. Mathematicians know that "massaging" equations can do a lot of good, even in an exact science such as mathematics. The same is true for VLSI chips, especially if they have been fabricated with a DSM technology. Extensive research has already demonstrated that for an otherwise well laid out VLSI chip, time delay and power consumption reductions of over 50% can be achieved merely with back-end layout manipulations. This is substantial and can not be ignored in the long or short run.
When we discuss design flows in Chapter 7, we talk about levels of abstraction in the design process. We suggest that, while a very high level of abstraction yields great benefits for the design process in terms of complexity management, its direct control over the physical aspects of a layout tends to be relatively weak. In the past, this presented few obstacles to the chance of first time success for a VLSI chip design. Because of the importance of layout parameters, these issues have to be taken increasingly seriously in design disciplines such as synthesis and timing-driven layout, and especially during the floorplanning phase in DSM technologies. At present, there are some solutions in the industry seeking optimal dimensioning of transistors in a VLSI circuit as a postlayout optimization step. These tools focus only on transistor sizing. We discuss some of these tools in a separate section on available industrial solutions.
3.9 LAYOUT GEOMETRY TRADE-OFFS FOR BETTER YIELD
A key measure for successful DSM VLSI chip design and manufacturing is the percentage of defect-free chips at the end of the process line. No matter how well chips are designed to meet performance specifications, if the percentage of good chips coming off the processing line is too low in comparison to the bad, nonworking chips, it is a losing proposition.
Of course, we should try to increase manufacturing yield without sacrificing performance, if possible. We will see that some layout dimensions can, in fact, often be enlarged to improve yield without any loss in performance. Alternatively, a trade-off between yield and a minor sacrifice in performance might be acceptable. With increased density, the probability of a chip defect being large enough to cause a failure is considerably increased. Other important contributing factors are the large sizes of today's chips and the enormous number of devices now placed on a chip. The discussion here is not intended to be comprehensive. It is focused on just some of the design-related steps that can be taken to improve manufacturing yield.
Depending on the nature of a chip, different approaches can be taken to increase its yield. As in other design disciplines, redundancy has often been viewed as a good approach to overcome the debilitating effects of a failure of certain components. Redundancy is being used to bring about "self-repair" of a failing system that is in use at the time of failure. For chips containing one or more defects after manufacturing, defects could be bypassed by designing redundancy into the chip. Such redundancies are sometimes referred to as "swapping redundancies" [15]. As the name implies, the design would have to allow for substitution of an operable part of a structure for a failing one through exchange. Redundancy can work well for highly repetitive structures. Any array-type structure could be a suitable candidate. A very good example is the frequently very large portion of very densely laid out embedded static/dynamic RAMs and/or flash memories added for S-o-C designs. After all, adding on the order of 256 Mbits of memory, possible in a 0.18 micron technology, is a nonnegligible amount of defect density exposure. We discuss the S-o-C approach in conjunction with Hard IP reuse in Chapter 5.
What other design-related steps can be taken to increase manufacturing yield? We will now discuss the following three techniques that use compaction to improve manufacturing yield:
1. We have already discussed a technique in this chapter that improves yield through wire spreading, as shown in Figure 3.8. Not only does wire spreading increase manufacturing yield, but it does so without impacting performance. In fact, in many cases it even improves chip performance.
2. EDA tools often design a physical layout according to the smallest process rules allowed. A discipline called Design for Manufacturing (DfM) has become popular lately. With the aid of compaction, basic building blocks such as embedded memories, library cells, or the layout of any other functional blocks can be designed or retargeted using what is generally referred to as "preferred process rules." The best choice of preferred process rules are, of course, layout rules that do not negatively affect the performance of the resulting circuit, while increasing manufacturing yield.
3. A third technique for improving yield through compaction is to minimize the areas that are particularly prone to having higher defect densities.
3.9.1 YIELD ENHANCEMENT THROUGH PREFERRED PROCESS RULES, USING COMPACTION
With the introduction of 0.18 micron CMOS process technology, a new phenomenon in circuit manufacturing becomes more important than when minimum layout dimensions were larger:
Design rule values as specified in design rule manuals are no longer "hard" numbers. Actually, anybody who has worked with foundries knows that they never were. There were always "gray" areas, but these issues were not that critical with the larger layout dimensions of the previous processes. For older processes, designers and EDA tool developers considered process rule values as strict limits when creating mask layouts. Now, such "fixed" process rules have been replaced by "preferred" process rules. They have turned into gray areas around the specified rule values. This concept is illustrated in Figure 3.9 [16, 17].
Fig. 3.9 The Gray Areas Leading to Preferred Design Rules (yield versus feature distance for submicron and deep submicron (≤0.35 µm) processes, indicating minimum and preferred rule values)
For the shrinking layout dimensions of DSM processes, the choice of design rule values is increasingly pushed towards the high end of the yield range. Choosing a larger value guarantees a higher yield for a particular rule, but it results in less dense designs. A lower rule value means the opposite: manufacturing yield will be less, but designs are denser. Many foundries now also specify a preferred design rule value together with the minimum allowable rule value. If preferred values are used wherever space permits in the final layout, a substantially higher manufacturing yield can be obtained. A necessary design strategy for DfM in DSM processes is to avoid implementing minimum design rule values wherever possible. The minimum allowable design rule values should only be used when design density or a substantial loss in performance is at stake.
In Chapter 2, we discussed the critical path data resulting from compaction. This type of data can be very useful here. Accepting the yield drawback of minimum rule values only pays off along the critical path that determines the block dimensions. This is because using larger rule values on the critical path would result in a larger silicon area for the design, which would lead to a higher cost of silicon and reduced yield because of a larger die size. Other than chip or block size, the critical path does not address penalties in terms of performance. Performance may be an even more critical parameter than chip area. At all locations that are not dimensionally critical in the final mask layout, larger than minimum rule values should be used. If implemented properly and consistently, a defect falling randomly on the wafer during fabrication simply has less of a chance of producing a fatal circuit malfunction. In addition, mask layout postprocessing before manufacturing, such as optical proximity correction and the use of phase shifting masks, will be facilitated.
Introducing nonminimum rule values in the design is not something that can be implemented by the foundry after tapeout. The consequences in terms of design performance and functionality are too great for that. The implementation of preferred rules will have to be an integral part of the design flow for enhanced manufacturability. Only then is the designer able to fully verify the final design, including
the consequences of something such as using a wire-spreading tool on the final routing of standard cell blocks. How can such preferred design rules be implemented?
It is obvious that automation is needed to implement larger than minimum design rule values effectively and efficiently. If the fact that certain preferred rule values are more preferred than others - because of a difference in yield gain - is added to the complexity of the problem, it becomes clear that a manual approach to the problem is doomed to produce suboptimal results. In addition, it will consume too much in terms of precious human resources. EDA tools for implementing nonminimum rule values should assist designers in the following areas:
- Routing
- Design of custom cells
- Definition of design rule values
- Verification for DfM
When routing a design, the distances between adjacent wires should be made nonminimum wherever possible. This is not feasible while constructing the routing: if nonminimum rules were used at that stage, many signals would end up not being connected. Instead, postprocessing of the routed design has to be done, known as wire-spreading. We have already shown this in Figure 3.8.
Custom cell creation is still very much dominated by manual layout design, whether for standard cells or regular blocks like memories. Again, introducing nonminimum spacing while drawing the layout is a highly complex task. Designers would face a high risk of having to "create space" to get the cell to fit its required footprint. The step of introducing nonminimum rules in a custom cell layout should be a postprocessing task following fully custom layout design. This will allow designers to focus on the main task of creating the densest possible cell layout. The enhancement of a layout to ensure better manufacturing yield can be automated by using layout compaction software, because a compactor is able to reposition each individual polygon edge in order to produce a design that is correct in terms of the design rules. Optimization for yield is done by analyzing the layout for the available "freedom of movement" at each polygon edge. The available space at each polygon edge is then prioritized to get the highest return on yield enhancement. This is similar to the concept of "critical areas," discussed in the next section.
Design rules are defined when designing a new process. It is the process engineer's task to pick a suitable number in the manufacturability range measured for each rule. A full evaluation of the consequences of choosing a particular set of rule values is a far from trivial task. Essentially, it requires the construction of test designs that use the proposed rules optimally. Layout compaction software can help in this respect, due to its ability to quickly implement a set of new minimum (and preferred) design rule values on a collection of given test designs. This allows a proper trade-off between manufacturing yield and design density to be made already in the process definition stage.
Design verification for DfM is the last stage requiring EDA tool assistance. A manufacturability analysis of a particular cell that highlights the hot spots for yield is needed, in addition to DRC and LVS reports. Ideally, locations would be flagged where minimum rule values are unnecessarily used.
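To make the critical-path trade-off discussed above concrete, here is a hedged Python sketch that assigns the foundry's preferred spacing everywhere except on edges reported as lying on the compaction critical path, which keep the minimum rule so the block dimensions do not grow. The edge names and rule values are illustrative assumptions, not data from any foundry or tool.

# Hedged sketch (illustrative only): pick a spacing value per layout edge.
# Edges on the compaction critical path keep the minimum rule so block size
# does not grow; all other edges get the foundry's preferred (larger) value.

def choose_spacing(edges, on_critical_path, min_rule, preferred_rule):
    """Return a {edge: spacing} map implementing 'preferred rules where free'.

    edges            -- iterable of edge identifiers (hypothetical)
    on_critical_path -- set of edges whose spacing determines block dimensions
    min_rule         -- minimum legal spacing from the design rule manual
    preferred_rule   -- larger, yield-friendly spacing from the same manual
    """
    return {e: (min_rule if e in on_critical_path else preferred_rule)
            for e in edges}

if __name__ == "__main__":
    edges = ["m1_gap_a", "m1_gap_b", "m1_gap_c"]
    critical = {"m1_gap_b"}              # assumed output of a compaction run
    print(choose_spacing(edges, critical, min_rule=0.24, preferred_rule=0.30))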
In summary, the use of nonminimum design rule values will be an important aspect of DfM in future DSM processes. Layout compaction and wire-spreading tools clearly help to implement preferred design rule values on mask layouts for enhanced manufacturability. Since this area of DfM is still quite a new concept, areas such as verification for manufacturability still need to be further explored.
3.9.2 YIELD ENHANCEMENT BY MINIMIZING CRITICAL AREAS, USING COMPACTION
The number of point defects in IC layouts is related to the surface area of a chip. This is evidenced by the fact that increasingly larger chips could only be manufactured as the technology matured and the industry learned how to lower defect densities over the years. However, studies have shown that not all areas on a chip are equally likely to have defects. In addition, even if present, defects do not cause failures equally in all areas. For instance, if there is no circuitry present where a defect occurs, it will probably not cause a chip failure. Areas that are more prone to defects are often referred to as critical areas [18]. Accordingly, minimizing the dimensions of such critical areas will increase manufacturing yield. Compaction is the process for minimizing such areas. What is needed, then, is a compactor coupled with an algorithm that compacts only those areas on a chip where the defect density is expected to be high, while at the same time respecting the process rules and the performance criteria of the circuit. The first significant work in this area dates back to just 1992 [19], and our discussion has been stimulated by work presented at DATE 2000 [18]. With the rapidly growing complexity of DSM VLSI designs, this is a critical area to be studied further. It is discussed here because DfM is becoming quite a hot issue and because of the present work's focus on application areas of compaction. It is clearly an additional and interesting application area whose importance is bound to grow significantly with larger and denser chips.
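Although no formula is given above, the commonly used Poisson yield model illustrates why shrinking critical areas pays off. The short Python sketch below is an assumption-laden illustration with made-up numbers, showing how the expected yield rises as the critical area is reduced at a fixed defect density.

# Illustrative sketch (not taken from the book) of the widely used Poisson
# yield model: yield falls exponentially with the product of defect density
# and critical area, so shrinking critical area directly raises yield.

import math

def poisson_yield(critical_area_cm2, defect_density_per_cm2):
    """Expected fraction of good chips for one failure mechanism."""
    return math.exp(-critical_area_cm2 * defect_density_per_cm2)

if __name__ == "__main__":
    d0 = 0.5                              # defects per cm^2 (assumed)
    for a_crit in (1.0, 0.8, 0.6):        # cm^2 of critical area (assumed)
        print(f"A_crit={a_crit:.1f} cm^2 -> yield={poisson_yield(a_crit, d0):.2%}")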
CHAPTER 4
IC LAYOUT, HARD IP CREATION
So far, we have discussed Hard IP reuse and Hard IP optimization. We will now examine Hard IP creation: how to create it more efficiently, more conveniently and how to create denser layouts than otherwise possible. Creating Hard IP produces an IC layout of a new standard cell, memory cell, generator, instance or custom block. When designing the layout of such building blocks that are used over and over again, it is worth the time and investment to create the densest possible layout. Every micron counts, especially for memory cells that are placed in arrays in great numbers of repetitive cells. Just as critical are performance and power consumption parameters, but those will be discussed later. The bottom line, the key to company success, is of course to create denser, higher performance layouts than the competition and to create them faster. Also, as processing parameters keep changing, and they are currently changing rapidly, layouts have to be rapidly retargeted. A design approach well balanced in terms of human input and computer aid, such as a powerful compaction engine seamlessly integrated into a state-of-the-art layout editor, is worth considering.
4.1 HARD IP CREATION USING COMPACTION
We have so far discussed the use of compaction when a layout already exists and needs either to be retargeted to a new process or to be optimized for the best possible chip performance. We will now introduce an approach using compaction that offers the benefits of both IC layout design rule correctness and optimization of layout density, built directly into the IC layout design flow. First, we compare a traditional IC layout flow with one where compaction is part of the design flow. On the left side, Figure 4.1 shows Hard IP creation based on a traditional approach, and on the right side using compaction. Both flows start with what is generally referred to as topological design. During this phase, the layout topology is drawn on a trial and error basis, using approximate locations to determine how the polygons and larger pieces can be fit together to create a nicely laid out cell. In this step, the layout designer should not have to worry about design rules imposed by processing or rules dictated by the desired electrical behavior of this building block, possibly including yield criteria, maximum power, current consumption, etc. These are a lot of complicated rules to bear in mind. The left side of Figure 4.1 shows a traditional IC layout design flow, where a designer puts a lot of effort into fixing DRC errors. On the right side is a compaction-based IC layout design flow, where the designer is free from worries about process-imposed layout rules. As we can see from the compaction-assisted IC layout flow on the right of Figure 4.1, the focus so far is only on design rule correctness (DRC), which can also address yield enhancements, as previously discussed for DfM, and on maximizing layout density.
Fig. 4.1 Traditional and Compaction-Based IC Layout Design
This range of solutions is commercially available today in products used in the field. Although these solutions address only a subset of what is possible with compaction, they already represent great steps forward for many reasons, which will be enumerated in the next section. Looking a bit into the future, we should keep in mind the postlayout optimizations for performance and power, as discussed in Chapter 3. In principle, these additional features are not difficult to implement but tools including them are not presently commercially available.
4.2 IC LAYOUT BENEFITS FROM COMPACTION
The main goal of compaction-assisted IC layout is, of course, to help the layout designer create the densest IC layout he can, as fast as possible and with minimum risk. With the traditional layout flow, the multiplicity of rules to be respected clouds a layout designer's vision, reducing his layout creativity because he can not focus fully on the topological design process. After all, the layout will have to pass DRC for the cell layout to be acceptable. As we know from experience, the best work is always done when one can focus on the most important aspects of a task. The key is to separate the tasks into those that require what humans can offer, such as creativity, and those that the computer can do much faster and more reliably with the aid of software, such as keeping track of data. That is what is shown in the flow on the right side of Figure 4.1. The compaction engine with its process files keeps track of all the complex process layout rules, while the layout designer can freely experiment with various trials without design rule errors, focusing on the most clever topological design that fits the rest of the circuitry.
Using the compaction-based flow, the designer can concentrate fully on the layout activity. The compaction engine acts as an online checker, giving instant feedback to allow corrections to be made interactively. The layout designer can leave it to a compaction step immediately after the "experimental," loosely drawn placement of the layout features to enforce all the rules and user inputs. As we can see on the right side of Figure 4.1, there exists a density optimization step in the compaction-assisted flow. As already discussed in Chapter 2, compaction yields immediate feedback on critical paths, the paths that show where minor modifications in a physical layout can lead to significant density improvements. We have seen in Figure 2.6 in Chapter 2 how a minor change in layout can significantly change the layout density. Figures 2.7 and 2.9 showed how jogging (doglegs) can help layout density along critical paths. Jogging is automatically inserted into the layout by the compaction engine, if the user so desires. This requires no manual effort from the user.
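For readers who want to see where such a critical path comes from, the following Python sketch shows the textbook view of one-dimensional compaction as a longest-path computation over a constraint graph. It is a simplified illustration under the assumption of an acyclic graph of minimum-distance constraints, not the algorithm of any specific compaction engine; all node names and spacings are hypothetical.

# Minimal, textbook-style sketch: 1-D compaction as a longest-path problem on
# a constraint graph. Each edge (a, b, d) requires x[b] >= x[a] + d (a minimum
# spacing or width); the longest path from the left edge fixes every x, and
# the overall longest path is the critical path that limits the block width.

from collections import defaultdict

def compact_1d(nodes, constraints):
    """nodes: iterable of element names; constraints: (a, b, min_dist) edges.
    Returns the leftmost legal x coordinate for every node (graph must be acyclic)."""
    succ = defaultdict(list)
    indeg = {n: 0 for n in nodes}
    for a, b, d in constraints:
        succ[a].append((b, d))
        indeg[b] += 1
    x = {n: 0.0 for n in nodes}
    # Kahn-style topological order, relaxing longest-path distances.
    ready = [n for n in nodes if indeg[n] == 0]
    while ready:
        a = ready.pop()
        for b, d in succ[a]:
            x[b] = max(x[b], x[a] + d)
            indeg[b] -= 1
            if indeg[b] == 0:
                ready.append(b)
    return x

if __name__ == "__main__":
    nodes = ["left_edge", "poly1", "poly2", "right_edge"]
    constraints = [("left_edge", "poly1", 0.3),   # hypothetical spacings
                   ("poly1", "poly2", 0.5),
                   ("poly2", "right_edge", 0.3),
                   ("left_edge", "poly2", 1.0)]   # a tighter, "critical" constraint
    print(compact_1d(nodes, constraints))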
Finally, since compaction is one dimensional, rapid "what if" compactions - x first then y, or y first then x - can show the layout designer which compaction sequence may lead to a better layout. Accordingly, the designer can modify his layout on the fly to achieve a design-rule-correct and a much higher density layout than what was possible without this feedback in the traditional layout approach. Furthermore, as indicated in Figure 4.1, the traditional approach results in a recursive loop between the initial layout and the DRC just to get the layout to the point where it might pass DRC. Recursive loops cost valuable time. The compaction-based IC layout flow shown in Figure 4.1 demonstrates the basic steps required. For a complete design environment using compaction, the flow in Figure 4.1 needs to be integrated seamlessly into current DSM VLSI chip design environments, as shown in Figure 4.2.
Fig. 4.2 Compaction-Based, Commercial Layout Design Environment
In Chapter 6, when discussing actual commercial solutions, we will show the same flow integrated with commercially available software.
4.3 WHERE TO GO FROM HERE
Although the solution shown in Figure 4.2 is a workable flow for compaction-assisted IC layout design, it shows only part of what is possible in principle. It really shows only the IC layout rules perspective. In Figure 4.2, there is no device or interconnect sizing analysis software for power and performance optimization. Furthermore, currently employed design flows for complex DSM VLSI chips generally do not take advantage of compaction, but this should change. Compaction can implement many desired changes to achieve the performance needed at a time when all other means of change have been exhausted. Even with the limited compaction-assisted flow shown in Figure 4.2, the following present and future compaction-induced consequences are worth mentioning. At present, the following technical and organizational aspects of IC layout design (and also many aspects of DSM VLSI chip design) can be made easier, faster and more reliable using a compaction-assisted IC layout design flow:
1. The focus of the flow on the right side of Figure 4.1 is limited to design rule correctness and density maximization. Both of these aspects of IC layout design will benefit from compaction.
2. All the increasingly complex process-imposed layout and electrical rules can be kept in a central process file for all design tools to be used, including IC layout. Since design rules frequently change in DSM technologies, a CAD or processing specialist can be given the responsibility of keeping these files current, supporting many design engineers in the entire DSM VLSI circuit organization and
design flows from front-end to back-end. There will no longer be difficulties tracking changes in technology or process manuals, and there will be consistency "across the board." In addition, many VLSI chip designers and even IC layout designers may not keep on top of all the details of such process files. This is a problem that is due to the stress on high-level design in today's education process. Putting a specialist in charge minimizes the risks associated with this.
3. Any cell or block IC layout being designed will automatically reflect not only the latest process rules but also DfM rules for increasing yield. All of this can be built directly into the design process.
Future, potential benefits that do not presently seem to be implemented (except perhaps in some of the more progressive companies) are sophisticated performance and power optimization features that address transistor and interconnect dimensions simultaneously, as discussed in Chapter 3. These features are technically already feasible. A flow should be created that analyzes cells, larger blocks and custom designs for optimally dimensioned transistors and interconnects. This type of information should then be fed back interactively to the IC layout designer. This could be done as soon as the industry has adopted some of the optimization algorithms discussed in Chapter 3. Why not create an optimal IC layout, not just one with maximum density, good yield and no layout design rule violations? Why not optimize speed and power performance during the IC layout?
4.4 WHAT COMPACTION IN IC LAYOUT CAN AND CAN NOT DO
In summary, all the typical features contained in a state-of-the-art compaction engine, as discussed in Chapter 2, are available to an IC layout designer. Features such as the abutment of cells, gridding for ports to guarantee connections to routed interconnects and automatic jog insertion in critical paths are useful for getting a top quality layout in minimum time and with minimum effort. Keep-out regions are respected, as often required for metals, ports, analog layouts, etc. The design database indicated in Figure 4.2 contains not only physical layout data but also pin and connectivity information, and this data is maintained through the compaction process. Compaction-assisted IC layout can also create additional space in a layout for adding a forgotten feature when it appears as though there is not a square micron left. This possibility was mentioned when discussing buffer insertion for optimization in Chapter 3. Figure 4.3 shows how an additional feature in the left part of the illustration is undersized to make it fit into the IC layout. Once inserted, it is "inflated" through compaction to satisfy all the IC design layout rules. Features can be inserted that are desired or required, but may have been forgotten during IC layout design. In addition, a process change could dictate the insertion of some feature (e.g. a diode). This kind of flexibility can be very useful.
Fig. 4.3 Insertion of Nonfitting Feature With Compaction
Finally, although compaction will enforce all layout and electrical design rules, it can not verify the correct topology or guarantee that LVS is correct. These features have to be tested independently of compaction. The IC layout designer has to make sure the layout he has created is not only laid out correctly but also functions correctly.
CHAPTER 5
ANALOG, HIERARCHY, S-O-Cs, REUSE GUIDELINES
We will now discuss some of the special challenges Hard IP migration faces. The focus up to this point has been on digital circuits. What about retargeting analog or mixed signal designs? Although digital designs are currently dominant for very many applications, digital circuits often need to work together with some analog circuits on the same chip. Can Hard IP retargeting address these kinds of design requirements? Another interesting challenge in Hard IP retargeting is hierarchy maintenance in the physical layout. For much of the Hard IP migration currently done, the hierarchy of the source layout gets lost during the retargeting process. What about the possibility of maintaining the source layout hierarchy during Hard IP migration? An extremely efficient method for increasing design productivity would be to reuse designs processed in an outdated technology and to integrate several designs on one chip as an S-o-C in a state-of-the-art technology. These designs could be just Hard IP, just Soft IP or - most challenging - Hard and Soft IP mixed and matched. Although a very promising S-o-C scenario, this approach presents some interesting challenges. We will examine some of these challenges. Guidelines for "good design" have evolved over many years. Comprehensive guidelines for facilitating Soft IP reuse have just recently been presented in the RMM [1]. Because Hard IP reuse and Hard IP-based optimization have been an important aspect of DSM VLSI chip design for only a relatively short time, the available data on how to design to facilitate working with Hard IP is somewhat limited, still in flux and changing with the evolution of compaction technology. We will discuss what we know now about how to facilitate Hard IP reuse.
5.1 RETARGETING ANALOG AND MIXED SIGNAL DESIGNS
In the discussions to follow, we limit the scope to one of the more common fundamental challenges in VLSI chip design and IP reuse: placing analog and digital circuits on the same chip and changing the technology in the process. Let us assume that we are in a basically digital world, but we need some analog capability on a chip that is mostly digital, a typical mixed signal scenario. Analog capability is often needed in conjunction with digital functions. However, we must exclude high-precision analog circuits from a discussion about migrating analog circuits. High precision in analog circuits may mean microvolt-level balancing between certain devices. Such circuits are difficult enough to design and produce as standalone chips and are manufactured in processes specially designed for analog ICs. Such circuits can not and should not be "mixed and matched" with digital circuits on the same chip. If the analog circuit we are about to migrate is a necessary part of the original, mostly digital chip that now needs to be retargeted, it has already been designed to "live" with the digital functions on the same chip. Most likely, this case will be manageable. If the analog circuit is a standalone IC, finding itself together with digital functions in an S-o-C scenario may present serious technical difficulties.
Long before physical layout became so critical for digital designs, layout was of paramount importance for VLSI analog designs. While today's digital VLSI circuits consist almost entirely of transistors alone, analog functions require circuits to contain "all" the electrical elements: transistors, resistors, capacitors and inductors. However, because it is difficult to make inductors and sizable capacitors in VLSI circuits, design techniques evolved to generally do without inductors and to live with small capacitance values. Instead of getting into a full-blown discussion on analog VLSI circuit design, let us review some of the key layout-related parameters for analog circuit performance and then examine how they may be addressed in a VLSI circuit environment and, in particular, for retargeting. We will not talk about the effects of interconnects for now.
5.1.1 LAYOUT CONSIDERATIONS FOR ANALOG
Some of the more critical physical layout considerations for analog VLSI circuits are:
1. Because analog circuits need to produce a continuum of DC and AC signal levels accurately, and not just binary ones and zeros, the values of the electrical elements need to be accurate. However, while it is difficult to control absolute resistance and junction capacitance values in IC technology, the relationship, or ratio, between similar value resistances and similar value capacitances can be controlled easily. Well designed analog circuits work based on ratios rather than absolute values. Some of the most popular components in analog circuits are differential amplifiers because their performance is closely related to ratios and relationships between pairs of components of equal size or value. We need to examine how to retarget without disturbing these ratios.
2. Symmetry and its maintenance are also very critical. Thus, we must examine how migration might affect symmetries or how it can be performed to maintain them. This is also particularly critical for transistor pairs (a small post-migration check of such pair symmetry is sketched after this list).
3. When measuring the performance of analog circuits, speed is generally measured as a frequency response that consists of amplitude and phase components. The behavior of many analog circuits is particularly sensitive to the phase component. Thus RC time constants are very critical. Again, it is the ratios between time constants that are the most critical.
4. The DC-related parameters of transistor and resistor pairs need to match. However, even if pairs of transistors, capacitors and resistors are well matched, the orientation in the layout needs to be such that the matching pairs are on equithermal and equipotential lines in the chip. Matching pairs mean nothing if they are at different temperatures or if externally generated voltage differences cause different biases in the circuit. We should mention here that a mere 26 millivolts of forward bias across a junction doubles the current in that device. When signal- and power-generating digital circuits are placed in the neighborhood of analog circuitry, deleterious voltage and temperature gradients may occur.
The need to satisfy the above conditions suggests that there are two scenarios that look promising for successful retargeting of analog blocks:
1. A chip that contains digital and analog is migrated to a different process. Obviously, analog and digital worked together successfully in a previous process on such a chip. Thus, the layout considerations just discussed were either satisfied or some of them did not apply. "Intelligent" compaction should be all that is required to get these chips to work.
2. As in an S-o-C scenario, blocks from more than one chip are migrated onto one. Obviously, this is a much more challenging situation. The manufacturing processes for these circuits may have been different. The orientation and the location of the migrated blocks on the new chip are crucial for the analog part. Thus, not only intelligent compaction but intelligent floorplanning will be required.
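As a small illustration of the symmetry and matching considerations above, the Python sketch below verifies that matched device pairs in a migrated block are still mirror images about a common vertical axis, within a tolerance. The data format and function names are hypothetical and not part of any migration tool.

# Hedged sketch (hypothetical names and data): verify that matched device
# pairs are still mirror images about a shared symmetry axis after migration,
# so ratios and matching are preserved.

def pairs_still_symmetric(pairs, axis_x, tol=0.001):
    """pairs: [((xa, ya, w, h), (xb, yb, w, h)), ...] device bounding boxes
    (center coordinates plus width/height) after migration; axis_x: intended
    vertical symmetry axis. Returns a list of offending pair indices."""
    bad = []
    for i, ((xa, ya, wa, ha), (xb, yb, wb, hb)) in enumerate(pairs):
        same_size = abs(wa - wb) <= tol and abs(ha - hb) <= tol
        mirrored = abs((xa - axis_x) + (xb - axis_x)) <= tol  # equal, opposite offsets
        same_row = abs(ya - yb) <= tol
        if not (same_size and mirrored and same_row):
            bad.append(i)
    return bad

if __name__ == "__main__":
    diff_pair = [((9.0, 5.0, 2.0, 1.0), (11.0, 5.0, 2.0, 1.0))]  # assumed data
    print(pairs_still_symmetric(diff_pair, axis_x=10.0))          # -> [] means OK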
When migrating analog circuits, one of the most fundamental requirements is to be able to "recognize" the function of devices on a chip. The compactor needs to recognize transistors, capacitors and resistors amidst the sea of polygons in the layout database. If we can recognize these components in the layout, we can specify what should happen to them during migration. Having stated all these constraints, can analog be successfully migrated? The answer is actually yes, irrespective of the above conceptual statements, because it has been done by several users of migration tools. Let's review how reasonable it is to consider analog migration.
5.1.2 CAN ANALOG DESIGNS BE SUCCESSFULLY MIGRATED?
Considering all the constraints to be satisfied, analog migration seems difficult. In addition to the apparent difficulties, analog design is a somewhat specialized, not too common skill. Furthermore, one would expect a reasonable level of skepticism about using migration on analog designs. After all, there has been a rather common resistance to using hi-tech EDA-type tools for analog in general, especially for layout. This is partly justified because analog design is tricky and partly because analog tool design has not received as much attention as tool design for digital. Of course, it is true that analog has never been mainstream like digital technology and that it is more delicate than digital. In fact, there is a strong trend towards replacing analog with digital wherever possible. So there are technical and psychological obstacles to overcome. Whatever the arguments, analog is almost always needed in conjunction with digital. We shall try to make an objective assessment of the realistic chances of successful analog migration.
5.1.3 A PRACTICAL VIEW OF ANALOG MIGRATION
Since analog functions in VLSI chips can not generally be avoided, we will explore some compromise approaches to migrating mixed signal designs. But first we will summarize the key issues to keep in mind:
1. Maintain symmetry, locally and globally.
2. Maintain electrical matching (same loads, size, relative position, orientation). Since matching is key, one should look at components and devices in pairs.
3. Keep elements on isotherms (same distance from heat sources) and equipotential lines (same distance from significant current sources or sinks).
4. Apply similar arguments concerning serious noise sources. One needs to look very seriously at cross-coupling and the resulting loss of signal integrity.
5. Keep interconnects symmetrical.
We mention many of the above caution flags not so much because of retargeting. They are part of the basic principles of careful, knowledgeable mixed signal or analog VLSI circuit design. In fact, a legitimate question that comes to mind is: just how feasible is mixed signal VLSI design, irrespective of migration? We all know it can be done and can be done well. We also know that relatively few people are really good at it. The following are possible approaches to analog migration:
1. One can always take a minimalist approach to using migration, since no less than ninety percent of a chip will generally be digital. Accordingly, one can use the "keep out" or "don't touch" approach for the analog part of a circuit, migrate the digital part as usual, and just substitute the analog part with a new analog layout created by manual design.
2. One can use less process-dependent (parameterizable) analog circuits. Such circuits have been designed using tools like GDT (for old-timers who were around back then). Of course, such circuits can address some of the problems, but not all of them.
3. Apply a linear shrink or a "creative linear shrink" by scaling some parameters differently than others and then fixing remaining problems by hand. The positive aspect of a linear or optical shrink is that it does not disturb the symmetries and proportionality relationships. The negative aspects are that the new design rules are not a scaling of the old design rules and that layout features will not be on a grid after a linear shrink.
4. Do a proportional compaction. This will keep the symmetries and changes proportional while keeping all layout features on grid (the contrast with a plain linear shrink is sketched after this list).
5. Develop an analog compaction engine that allows for specification of the major analog constraints stated above. This would be the most elegant solution, but there seems to be nothing like that on the horizon. Until analog plays a more dominant role and there is promise of a big market for such a product, there may not be sufficient motivation. In fact, the need for a sufficiently large market for analog compaction is probably the most problematic issue. Some efforts do exist but, so far, they have not gone much beyond the university level. At a time when compaction of digital circuits seems ahead of its time, it is doubtful that anybody will invest heavily in analog compaction.
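The sketch promised in item 4 follows. It is an illustrative Python fragment, not an actual tool, contrasting a plain linear shrink, which generally leaves coordinates off the target grid, with a proportional scale-and-snap of the spacings, which keeps equal source spacings equal and every edge on the new grid.

# Minimal sketch of the contrast drawn above (assumed, not from any tool):
# a plain linear shrink scales every coordinate and generally leaves edges
# off the target layout grid; the proportional variant scales the spacings,
# snaps each to the new grid and rebuilds the coordinates.

def linear_shrink(coords, factor):
    """Plain optical/linear shrink: coordinates may land off-grid."""
    return [c * factor for c in coords]

def proportional_on_grid(coords, factor, grid):
    """Scale the spacings between consecutive edges, snap each spacing to the
    grid, then rebuild coordinates; equal source spacings stay equal."""
    out = [coords[0] * factor]
    for prev, cur in zip(coords, coords[1:]):
        spacing = round(((cur - prev) * factor) / grid) * grid
        out.append(out[-1] + spacing)
    return out

if __name__ == "__main__":
    edges = [0.0, 0.60, 1.20, 2.40]      # matched, equal spacings (assumed)
    print(linear_shrink(edges, 0.7))               # 0.42, 0.84, ... off a 0.05 grid
    print(proportional_on_grid(edges, 0.7, 0.05))  # snapped to the 0.05 grid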
5.2 HIERARCHY IN HARD IP MIGRATION
In many design disciplines, hierarchy plays a key role in managing complexity. Of course, hierarchy and hierarchy maintenance mean different things in different design disciplines. We will now discuss what hierarchy and hierarchy maintenance mean for Hard IP migration, creation or optimization. Maintaining hierarchy in Hard IP migration has been a serious challenge for many years and is one of the hotly debated issues in Hard IP reuse. Most users want to maintain all levels of the layout hierarchy during migration, at least that is the initial demand when approaching a possible migration project, even though there are pros and cons to maintaining all levels of a layout hierarchy. In fact, the number of levels of hierarchy one eventually wishes to maintain will often (and indeed should) become an intelligent trade-off between the pros and cons to be discussed below. In the discussions to follow, we will differentiate between what has been possible in hierarchy maintenance up to now (we will call it "traditional migration"), and what is just around the corner (we will call it "fully hierarchical migration"). We will discuss both, and the reasons are simple. Both the fully hierarchical and the traditional migration have and will continue to have justified areas of application. For traditional migration, two to three levels of hierarchy could be maintained. Only in early 2000 has the technology advanced to the point where maintaining any number of hierarchical levels has become possible. Before we discuss details, we need to review what hierarchy and hierarchy maintenance mean in physical layout.
5.2.1 HIERARCHY MAINTENANCE IN LAYOUTS
In a block diagram, schematic or similar description of a VLSI chip, blocks are clearly recognized by their symbols. The hierarchy of the design is evident. At the highest level, for instance, we have CPUs, controllers and memory arrays. As we traverse the hierarchy from top to bottom, the next level may be memory cells, registers, etc. Going even lower, there are gates and finally transistors and other components. Looking at the proper description in the hierarchy of diagrams, we can also see what is inside these functional blocks. Thus the hierarchical representation of an electronic or other system shows what parts in the system belong together functionally, at various levels of abstraction.
In the physical layout of a chip, the same principles apply. During the IC layout of a chip or a smaller functional block, depending on the design approach and the functionality required, entities are placed on the silicon in a building block fashion, suggesting a similar kind of modularity as we see in a block diagram or schematic. Thus, even in physical layout, pieces that belong together functionally are placed together physically. There are, therefore, physical boundaries in the layout identifying functional entities, although they are generally much less easily identifiable in a layout than in a block diagram. In a memory array or any other regular structure, it is easy to see where one cell ends and the next begins. In designs based on standard cells or designs where entire CPU or controller blocks are placed on a chip, the boundaries are also easily identifiable. For random logic without hierarchy, it is very difficult to do the same.
There are many reasons, some of them psychological, why designers want to be able to identify which physical pieces belong functionally to which major parts. Just as hierarchy is critical to design as well as to synthesis, verification and simulation of a design, hierarchy maintenance can be very helpful for the verification of a physical layout. Some of the verification tasks to be done on a chip can be done modularly, as in the case of functional simulation of a major part in a block diagram. So, what about maintaining this modularity, generally referred to as hierarchy, when migrating a chip? As we have discussed, this hierarchy can easily get lost when we push around polygons, as is done in compaction. Of course, dealing with hierarchy in a layout may mean cutting at the block boundaries, as we have shown before, or maintaining links with the data describing the physical features in a layout. Either approach will maintain some hierarchy.
Maintaining any level of hierarchy during migration means knowing which pieces of the silicon belong to which functional blocks. So, full knowledge and maintenance of a hierarchy means knowing this association down to the polygon level. After all, it is the association of each and every one of these polygons within the blocks that creates an identifiable function, however small it may be. The polygons of a functional entity logically belong together. The difference between the design phase and the migration phase is that, initially, during the design phase, each of these polygons is assigned to a certain functional block through links in the description of the design, such as a layout-description language, schematic, block diagram, etc.
As long as this link is maintained, we have a layout that still has the hierarchical information in it.
For the migration phase, one may start out with just a GDS2-type database of polygons. Unfortunately, GDS2 drops all connectivity and properties data of a design, leaving only polygons. Fortunately, more recent physical layout databases do maintain connectivity and property data. To maintain the hierarchy, these links need to be maintained in the migration process. Thus to maintain the hierarchy, we need to "keep track" of the assignment of every polygon to the functional block to which it belongs even after any changes to the layout and migration of the block to a different process. Having described what hierarchy and hierarchy maintenance mean in somewhat abstract terms, let us demonstrate with actual layout structures what this means for traditional migration and then for a fully hierarchical migration now available.
In terms of traditional migration, we discussed in Chapter 2 how regularity in layouts such as arrays can be used to maintain some hierarchy in the array structure. We did this by migrating an entire array by taking a single cell out of an array, migrating it by itself and then tiling the array back together. The migrated cell will still be as identifiable in the migrated array as it was in the original array. Maintaining the identity of these repetitive cells is generally referred to as maintaining a single level of the hierarchy of the array. A single level of hierarchy can be preserved even for a collection of much larger blocks. The identity of the large blocks among the many can be maintained. However, with a single level of hierarchy maintenance, the inside of the repetitive cell and the inside of the various blocks have changed. The inside of the repetitive cell and the various blocks are now a flat sea of polygons from the layout point of view. Also, while tiling together an array of memory cells is straightforward, putting a number of big blocks back together as a chip, especially with the routing in between, may be challenging at times.
In terms of fully hierarchical migration, the data inside the repetitive cells and the large blocks is not flat. Thus, for large blocks especially, the individual functional blocks inside the block are still identifiable and can be worked with individually. Does this suggest why all this hierarchy maintenance is so important? We will discuss some of these hierarchy issues when we look at the many pros and cons of hierarchy maintenance. For now, just to render all these abstract arguments somewhat concrete, recognizing a block in a layout in its entirety and as a separable part from the rest of the layout allows for analysis and verification on this block without mixing everything up with the rest of the layout. For fully hierarchical migration in a large block consisting, for instance, of an ALU, some registers, and some control logic, we can analyze and verify each one of these smaller entities. This cuts a large layout into manageably sized pieces.
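As a toy illustration of the array case just described, the Python sketch below re-tiles an array from a single migrated cell, so the cell keeps its identity as one preserved level of hierarchy. All names and numbers are hypothetical.

# Sketch of the array case described above (all names hypothetical): migrate
# one representative cell, then rebuild the array as instances of the new
# cell at the new pitch, so the cell's identity survives as a hierarchy level.

def retile_array(migrated_cell_size, rows, cols):
    """Return instance placements (row, col, x, y) for the migrated cell.

    migrated_cell_size -- (width, height) of the cell after migration
    rows, cols         -- array dimensions, unchanged by the migration
    """
    w, h = migrated_cell_size
    return [(r, c, c * w, r * h) for r in range(rows) for c in range(cols)]

if __name__ == "__main__":
    # Suppose the cell compacted from 2.0 x 3.0 to 1.4 x 2.1 in the new process.
    for inst in retile_array((1.4, 2.1), rows=2, cols=3):
        print(inst)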
5.2.2
CHALLENGES IN MAINTAINING LAYOUT HIERARCHY In traditional migration, whether we like it or not, a migrated layout ends up flat if there is no regularity in the source layout, such as a layout for random logic, or if blocks overlap. Nothing is lost from the random logic block, since there was no hierarchy in the source layout. We may or may not lose the hierarchy of a layout with overlapping regions, depending on how we proceed. In Figure 2.17 of Chapter 2 and in the center and bottom of Figure 5.1, we can see what the overlap-induced flat migration yields. The postmigration layout will be flat. The top part of Figure 5.1 illustrates how a regular array is migrated by preserving the identity of each cell. At the top of Figure 5.1, blocks A and B are migrated as individual blocks. They maintain their identity. In the center of Figure 5.1, we show how blocks A and B share some empty space. The resulting compacted layout for a flat migration will be denser than if it were done hierarchically. This gain in density comes at the expense of the loss of hierarchy. The blocks in the array are no longer identifiable as such.
Fig. 5.1 Contrasting Hierarchical Versus Flat Migration of a Regular Array
After flat migration, the layout shown at the top has become a flat sea of polygons, as symbolically suggested at the bottom of Figure 5.1. This kind of one- or two-level hierarchy maintenance is fine for a regular array as long as the cells of the array are not too large. However, many designs do not have such regularity. They do not have nice straight-line "functional boundaries" that allow identification of functional entities in a layout by nicely shaped rectangles. They may be a completely random arrangement of polygons. Such layouts look much more complicated than even the overlapping ones in Figure 2.17. With the latest compaction engines on the market, such "generic" layouts can be migrated while maintaining as many levels of the hierarchy as desired. However, we all know that there is no such thing as a free lunch. Given the choice, migrating hierarchically versus flat has both advantages and disadvantages.
5.2.3
PROS AND CONS OF MAINTAINING HIERARCHY IN MIGRATION There are many reasons why hierarchy and its maintenance are important. We will briefly list some of the reasons, some of which are layout-migration-specific. However, hierarchy maintenance also has a price and sometimes the price may be too high. First, let's look at the positive aspects of keeping the hierarchy of a design. All the reasons listed are based on surveys of compaction users.
Some of the reasons given are generic, others are more user-specific. However, all of them are well founded.
1. Hierarchy means that the details of repetitive cells exist in the database only once, throughout the design flow. It saves disk space, loading time and run time. Example: a memory cell repeated 1,000,000 times.
2. Hierarchy allows structured design work. Every block can be designed and characterized separately, enabling partitioning of the chip design into tasks to be done by different designers, and in parallel, to improve time-to-market. Partitioning also allows the creation of "building blocks" to be used for various projects.
3. Hierarchy makes big designs manageable. Flat designs and, therefore, flat compaction yield very large databases.
4. Verification: hierarchical DRC and LVS are only possible after hierarchical migration. Though flat LVS/DRC is always an option, hierarchical verification has become very desirable because of the huge designs.
5. Maintaining hierarchy allows the use of the same timing verification approach before and after migration. It allows a better "apples to apples" comparison than if the hierarchy were to be lost.
6. Psychology of habit. As one user put it: "Although a design can be migrated flat and verified flat with no loss of accuracy or reliability, designers prefer a hierarchical design. They consider it to be more reliable because IP to be reused has always been hierarchical."
In addition to giving up density when migrating hierarchically, hierarchy maintenance has its negative aspects as well. Accordingly, these aspects need to be weighed against the positive aspects.
1. Hierarchical designs are larger than flat designs because every cell master has to fit its worst instance.
2. Performance optimization: any optimization you do on the electrical side, be it for speed, power consumption or signal integrity, will be suboptimal on hierarchical designs. Ideally, you'd like to tune each transistor and signal to its exact load and environment, something that is only possible for flat designs.
3. Hierarchical compaction is very compute-intensive and requires a lot of RAM to run. Clearly, in the spectrum from fully hierarchical to cell-based hierarchy maintenance to flat migration, the trade-offs need to be examined. For flat migration, the relationship between compute time and size is not exactly linear, but it is much more so than it is for hierarchical compaction.
4. After a flat compaction, timing verification can be done on "critical nets" only. For the rest, one should rely on proper design. This way, no full verification of the compacted design is attempted. Extraction for flat designs can be done for gates and for the routing; then, after back-annotation, timing analysis can be performed as for hierarchical layouts. Doing these steps hierarchically just seems to be much more "comfortable".
5. Hierarchical analysis and design needs to be supported by EDA tools. This is not always the case. It may, in fact, be one of the limiting factors in choosing how hierarchically one wants to work.
6. Hierarchical migration may require too much setup time for the migration to be performed. Sometimes it is more convenient to migrate flat if the resulting difficulties with verification can be handled.
7. For hierarchical layouts, a lot of characterization is needed. This characterization depends on the partitioning and the granularity of the hierarchy. In other words, for standard cell designs, the cell units need to be characterized. For memories, they are the memory cells. For full custom, larger blocks can be used in the hierarchy. Once characterized, the blocks can be reused throughout the hierarchy. So, every unit and block needs to have its own verification and characterization environment. There is a link between the methodology and tools used, the preferred hierarchical structure of the design, and the ability to work in parallel.
5.3
THE S-o-C: MIXING AND MATCHING IN RETARGETING The S-o-C scenario for IP reuse has already been mentioned in Chapters 1 and 2. This is such an obvious scenario and yet it is still practiced only on a relatively limited scale. Of course, it is not easy to do, considering the many different parameters involved if we want to mix and match chips from different manufacturers, mix Hard IP and Soft IP, and even throw in some analog capabilities. However, considering the large amount of available IP, the tremendous need for productivity improvements, and the fact that, by using the latest processes, several single chip designs can now fit onto one chip, there are certainly abundant good reasons to explore this approach further. Also, some of the challenges are similar to what has been done with PCBs for a long time, just more difficult. Some of the advantages of S-o-C are:
1. Some of the more significant reliability problems with VLSI-based systems come from the interconnects between chips and from the chip interconnections inside the package. With S-o-C, the total number of external interconnects will be reduced significantly.
2. One of the biggest penalties in speed in VLSI-based systems comes from getting on and off the chip. Besides, this timing is difficult to determine accurately. Both problems can be minimized with S-o-C.
3. The higher the level of S-o-C integration, the more chips can be placed into fewer packages and the more compact VLSI solutions become, with obvious and tremendous potential for systems design. Savings would also be substantial because packages are expensive.
Some of the challenges of a higher level of VLSI chip integration are:
1. With that many chips placed that close together, dissipating the heat generated is a substantial difficulty. The power generated on the chip will also introduce temperature and potential gradients not previously present on the individual chips. This can lead to problems for digital circuits and total failure for analog circuits in the S-o-C solution. Excessive power generation and excessive IR drops may be the single most significant challenge to S-o-C solutions that would otherwise be possible by combining the retargeting of various technologies with Soft IP reuse [10].
2. Access to the individual chips becomes much more difficult, making an already difficult problem, testing, even more difficult. This problem needs to be addressed and it can be. Scan circuitry (boundary scan) around the individual chips will need to be considered. However, as mentioned before, the test vector suites that already exist for the Hard IP retargeted chips in the package can be reused.
3. When chips that have been manufactured using a broad range of processes are placed onto one chip, they will all have to work together in a single process. Therefore, the performance of some of these chips is going to change much more dramatically than that of others. This will require careful timing analysis. The voltage levels also change, requiring a careful analysis of the interface question and possibly the design of interface circuits. This is a substantially more difficult challenge than placing many chips on a PCB.
5.4
DESIGNING VLSI CHIPS FOR EASE OF REUSE We will discuss some guidelines for how proactive design techniques can simplify and speed up Hard IP reuse. Over the years, experience in VLSI design has taught us what to avoid in order for designs to be robust. Concepts such as synchronous vs. asynchronous design and using flip-flops as opposed to latches are even taught in school as good design practices for minimizing surprises later on and for simplifying verification, testing and other issues. Such guidelines are scattered here and there in the literature but are stated concisely and collectively in the recently published RMM [1], which focuses strongly on Soft IP reuse. Such guidelines are not mandatory for chips to work, but they do "make life easier". When it comes to recommending design guidelines, we may want to separate them into three areas of focus:
1. Guidelines to assure robust designs are critical and useful for any circuits, whether they are designed from scratch, ported via Soft IP, or retargeted via Hard IP. The focus of these guidelines is not primarily reuse, although they are part of the critical aspects for designs to still work as processes change (one typical factor for reuse).
2. Then there are guidelines to facilitate the process of retargeting for Soft IP reuse. Both guidelines for robust designs in general and guidelines for Soft IP reuse are discussed in the RMM.
3. Finally, there are guidelines to facilitate the process of retargeting for Hard IP. These are guidelines often related, but not limited, to making the compaction process easier and more efficient.
We will focus on the Hard IP guidelines below. While the guidelines for robust design are based on years of experience and, thus, on an enormous database, guidelines for reuse are based on a relatively short period and an as yet limited number of
designs. This makes it all the more important to share what has been learned thus far. It is clear that the S-o-C approach will become more popular over time. While it is without a doubt a serious engineering challenge, the benefits are so great that it is too tempting to leave unexplored. When it happens, it will involve both Soft IP and Hard IP, because of the enormous investments in existing designs and the confidence that every single reused design will work. Another important issue to consider is that a lot of software, such as control software, simulation and test software, is already available and its reliability and usefulness have been established. Finally, some Hard IP will be reused because it has to satisfy certain standards, and a redesign would require a large investment in recertification.
5.4.1
SOME GUIDELINES TO FACILITATE HARD IP MIGRATION The guidelines that follow only scratch the surface, but they may prompt a more lively exchange of ideas in the future. They are also changing all the time as compaction technology and processing limits keep advancing.
How to Partition a Layout?
We have already seen how easily defined boundaries in the layout help when it comes to migrating hierarchically, without any increase in computational complexity to speak of or a lot of setup time. The following suggestions also help make retargeting easier: Even if they work together in a particular application, units performing different functions should be placed in separate physical boxes in a layout. Such modular design makes it easier to "plug and play" later on. This functional separation should even be done when designing Soft IP through synthesis. We have already discussed the issue of careful placement of analog blocks. An additional precaution to be taken during the layout design phase is to introduce shielding for the analog blocks. Also, since analog drivers are more sensitive to process variations, one should allow sufficient space for their drive strength to be adjustable.
Custom Layout Design Guidelines
Give all transistors the same physical orientation, especially transistors whose performance needs to be tracked. When migrating a block, design rules might scale differently along the gate direction and perpendicular to it. When a cell has transistors in both orientations, less density can be achieved during migration.
Avoid Nonportable Constructs
Butted contacts are not supported by all processes and should, therefore, be avoided. The same is true for 45 degree polygons. They are not supported by all processes.
Logistical Guidelines
These are guidelines that make sense to a user of compaction tools. One should use consistent layer names to avoid confusion or errors. In fact, an effort should be made to standardize a set of layer names and their numbering. It helps avoid mistakes and makes data exchange easier, especially in companies where design efforts are partitioned by using well organized hierarchical design methodologies.
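As a small, purely illustrative sketch of the last point (the layer names and numbers below are invented, not an industry standard), a shared mapping table can normalize the layer names used by different groups before data is handed to a migration or verification tool:

```python
# Normalize layer names against an agreed company-wide map so that "metal1",
# "M1" and "MET1" do not end up being treated as three different layers.
STANDARD_LAYERS = {"poly": 10, "m1": 31, "m2": 32, "via1": 51}   # assumed names/numbers
ALIASES = {"metal1": "m1", "met1": "m1", "metal2": "m2", "polysilicon": "poly"}

def normalize_layer(name: str) -> int:
    key = ALIASES.get(name.lower(), name.lower())
    if key not in STANDARD_LAYERS:
        raise ValueError(f"unknown layer name: {name!r}")  # catch typos early
    return STANDARD_LAYERS[key]

print(normalize_layer("Metal1"), normalize_layer("m1"))  # both map to 31
```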
To summarize, just as for any well organized design organization following proven engineering and design practices, one should systematically build up, document and disseminate guidelines learned with every project. This is no different for Hard IP reuse.
CHAPTER 6
A PARTIAL OVERVIEW OF AVAILABLE TOOLS: SOLUTIONS NEEDED FOR RETARGETING AND LAYOUT OPTIMIZATION The focus in all the discussions up to now has been on DSM VLSI chip performance issues related to the physical layout. The areas of application have included Hard IP retargeting to take advantage of existing designs and the latest technologies, efficient, worry-free IP creation that follows design rules, postlayout performance optimization through layout manipulations, and design for manufacturing yield improvements (DfM). In this chapter, we will examine commercial tools that help in addressing these issues. We have reviewed the potential for affecting various performance parameters in DSM VLSI chips by manipulating the layout. Since both yield and performance are directly related to layout at the most detailed level, the polygon level, we are seeking tools that allow the necessary analysis related to physical layout parameters and then manipulation of the physical layout on the polygon level. So far in this book, we have talked about concepts without naming commercially available tools with which to perform the necessary layout modifications. As one would expect, only some of what is theoretically known to be possible in layout manipulation and optimization has actually been implemented in commercially available tools. Additionally, no information could be obtained for some tools known to exist or rumored to be on the horizon. Accordingly, they will not be discussed here. This leaves us with a somewhat limited but still useful set of commercially available tools to be discussed. Certain layout geometries in pre-DSM VLSI designs have traditionally been viewed as critical to the performance of ICs. Processing technology has therefore been pressed to make those critical layout geometries as small as possible, and remarkable success has been achieved in a very short time. Guidelines for the most critical process parameters have been based on computer simulations using the best known models for these circuits. At the heart of these models are active devices such as transistors. An enormous selection of models and corresponding software have been developed and are at the designer's disposal. Lately, because of DSM effects in designs, the "ballgame" has changed dramatically due to the significant contribution of interconnects in determining the performance of designs fabricated in DSM processes. Because of these relatively recent developments, an appreciation of the dominant effects determining the performance of DSM designs is rather limited and, therefore, the corresponding software to design and properly optimize these designs is also largely unavailable. This is a bit surprising since, every year for the past several years, an entire day at DAC has been dedicated to the subject of DSM layout optimization, presented by the main contributor [3] of new insights, and extensive material has been published over the past ten years suggesting algorithms that capture the latest understanding of the main factors determining the performance parameters of DSM VLSI designs. Is anybody out there in the commercial world listening?!
To reiterate, the dominant concerns in the VLSI chip industry over the last years have been:
1. A conflict between the enormous investments in engineering resources and time required to come out with state-of-the-art VLSI chips and a shortening market window, jeopardizing a guaranteed return on investment.
2. Accelerated time-to-market requirements.
3. A growing discrepancy between the rate of progress in processing technology and the speed of design of new chips that benefit from it.
4. An inability to design DSM VLSI chips to meet performance expectations without extensive rework. Often, a major redesign is even required.
One of the problems with these requirements is that their priorities shift constantly with time. Time-to-market is probably the most constant "squeaky wheel". Whatever the crisis "du jour" may be, we will now discuss some available solutions, regardless of the priorities of the moment.
6.1
THE POSTLAYOUT OPTIMIZATION PROCESS The postlayout optimization process may require any one, any combination of, or all of the following three steps, depending on the task to be performed. The three steps are:
1. The first step is an analysis of the nature of the problem. When performance problems arise or when increased performance is desired for whatever reason, an analysis is required to determine the problem. It may be a timing problem, an excessive power consumption problem, a yield problem, or another, more recent hot analysis issue such as a problem with signal integrity.
2. The second step is to determine what and how much in the layout needs to be changed. It is this second step that tells "compaction" which polygons to move and by how much.
3. The third step is the actual layout modification. Once the desired changes in the layout dimensions are known, these changes need to be implemented. This step is performed by means of "compaction" (an enlargement or reduction in layout dimensions).
6.1.1
THE THEORETICALLY IDEAL OPTIMIZATION APPROACH
In Chapter 3, we discussed some of the research results showing that, though they may be good starting points, neither an optimization of transistor geometries nor manipulations of interconnect geometries alone will lead to the best possible performance. A key realization gained from this research is that simultaneous optimization of interconnects and of the transistors driving them results in the greatest improvements with respect to speed, power dissipation and even layout density. Thus, based on these rather startling results, particularly with respect to the magnitude of the effects, we should expect some newer commercial tools that take into account both transistors and interconnects working together to come along in the near future. Another key focus for layout manipulation is to improve manufacturing yield, preferably without sacrificing performance or chip size. We have already suggested in Chapters 3 and 5 that not all layout dimensions need to be the minimal layout dimensions allowed by the corresponding process. The trick is to relax the layout rules where little or no performance is sacrificed while significant yield improvements can be achieved. Such efforts will require close cooperation between processing and design engineers and the availability of good layout optimization analysis software. This raises the following obvious and immediate question: what tools are available to make the desired changes and what tools are available to efficiently analyze the required modifications? We have already answered the first part of the question. The available compaction tools are exactly what we need to make the appropriate layout changes. All we need now is a few names of commercially available tools.
The second part of the question can not be answered as easily. With the main focus of the VLSI chip design community on synthesis, there is still only limited emphasis on layout optimization tools. The commercial tools currently available are limited to a traditional approach to layout optimization. The traditional approach is and has been to optimize the layout geometry of the active devices, the transistors. This was fine for pre-DSM processes, but the tools are only a starting point for DSM VLSI chips. Nevertheless, they are still useful and we will discuss them in this chapter. As mentioned earlier, extensive work is being performed at the university level and very promising results have been published and algorithms found. Thus, even though many useful algorithms for optimization of DSM VLSI layouts have been published [3], they have not been "picked up" by the industry. At this point in time, these university level results are the only concrete results we can report that deal with physical layout optimization the way it should be done. The main challenge in layout optimization seems to be that today only a handful of people in the industry believe in or are aware of the possibility of significantly improving the performance of a chip through back-end layout manipulation. To be fair, it is indeed almost counterintuitive to expect improvements larger than around 5 to 10% at that point in the design flow. However, one would think that if such improvements would save a chip that would otherwise have to be significantly redesigned, even 5% should be worth considering. After all, there are plenty of difficulties with issuing new DSM VLSI chip designs without significant rework and, with the continuous reduction in minimum layout geometries, the situation will become much more challenging. In fact, it is not going too far to say that some back-end optimization will be inevitable at some time in the near future, whatever the design methodology may be. Now that we know what type of layout manipulations are most desirable, we will discuss commercially available tools capable of the following:
1. Tools exist for retargeting existing chips. We will suggest two: one for limited hierarchy maintenance in Hard IP migration, another for fully hierarchical migration.
2. Next we will suggest a tool allowing productive Hard IP creation.
3. We will then suggest tools that allow transistor geometries to be analyzed and adjusted for optimal performance.
4. Finally, we will suggest a tool focusing on spreading out interconnects to minimize capacitive coupling and improve yield.
The weakness in terms of commercial tools is really on the analysis side. Compaction allows any kind of layout adjustment we demand of the compactor. The problem lies with not knowing what the best layout geometry would be. Transistors can be adjusted because there are analysis tools to determine by how much. Interconnects can be spread out to minimize capacitive coupling by simply using all the available space without increasing the total chip area. This can be done without any detailed analysis, although analysis would be very helpful. The ultimate performance, however, is achieved when analyzing the best combination of transistor size together with the load, the interconnect, it is driving. No commercial tool today addresses their simultaneous adjustment for optimal performance.
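To make this interplay a little more concrete, the following minimal sketch uses a first-order, Elmore-style RC estimate with invented, order-of-magnitude parameter values. It is only an illustration of the trade-off, not any tool's delay model: a larger driver lowers its own output resistance, but the wire resistance limits the benefit and the driver's larger input capacitance loads the stage before it, so the best size depends on driver and interconnect together.

```python
# Illustrative driver sizing against an RC interconnect (assumed numbers).
R_DRV_UNIT = 10_000.0   # ohms for a minimum-size driver (assumed)
C_IN_UNIT  = 2e-15      # farads of input capacitance per unit of drive (assumed)
R_WIRE     = 2_000.0    # total wire resistance, ohms (assumed)
C_WIRE     = 300e-15    # total wire capacitance, farads (assumed)
C_LOAD     = 20e-15     # receiver input capacitance, farads (assumed)
R_PREV     = 5_000.0    # output resistance of the stage driving our driver (assumed)

def path_delay(size: float) -> float:
    """Delay of the previous stage driving our driver, plus the driver driving the wire."""
    stage1 = R_PREV * (C_IN_UNIT * size)                   # a bigger driver loads the stage before it
    r_drv = R_DRV_UNIT / size                              # a bigger driver has lower output resistance
    stage2 = r_drv * (C_WIRE + C_LOAD) + R_WIRE * (C_WIRE / 2 + C_LOAD)  # Elmore estimate for the RC wire
    return stage1 + stage2

best = min(range(1, 65), key=path_delay)
print("best drive strength:", best, "delay (ps):", round(path_delay(best) * 1e12, 1))
```

Note that the wire-resistance term is untouched by driver sizing, which is exactly why sizing the transistor alone cannot recover all of the delay in a DSM interconnect.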
We will now look at commercial tools that help with Hard IP retargeting, Hard IP creation by enforcing layout rules with compaction, Hard IP optimization with a limited focus on transistor size optimization, and reduction of capacitive coupling through wire spreading of global interconnects, which also affects yield. The combination of these tools also allows the suggested DfM optimizations. For DfM, it is not so much a particular tool that is needed but rather close cooperation between the processing and the design staff.
6.2
IP REUSE THROUGH RETARGETING The concept of Hard IP reuse through retargeting was discussed in Chapter 2. IP reuse, part of which is Hard IP reuse, was the first approach believed to solve the design productivity crisis, and it has been quietly practiced for many years by using a linear shrink. For DSM VLSI chips, however, a linear shrink is inadequate and needs to be replaced by compaction, since compaction takes full advantage of the changes possible in layout geometries. Compaction became popular when the desire to create process-portable designs emerged and was first used on a large scale in silicon compilation.
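The difference is easy to see in a toy one-dimensional example (the numbers below are invented and do not represent any real rule deck): a linear shrink scales every coordinate by the same factor, so features whose rules do not scale uniformly end up violating the new rules or wasting space, while compaction re-places each feature at the minimum position the new rules allow.

```python
# Linear shrink versus rule-driven compaction in one dimension (toy numbers).
old_widths   = [2.0, 6.0, 2.0]     # microns: contact, wire run, contact (assumed)

# New process: wires may shrink to 60%, but contacts only to 80%, spacing to 0.9 um.
new_min_width   = [1.6, 3.6, 1.6]
new_min_spacing = [0.9, 0.9]

shrink = 0.6
linear = [w * shrink for w in old_widths]      # 1.2 um contacts -> violates the 1.6 um rule

def compact(min_widths, min_spacings):
    """Greedy left-to-right placement: each feature gets its minimum legal width and spacing."""
    x, spans = 0.0, []
    for i, w in enumerate(min_widths):
        spans.append((x, x + w))
        x += w + (min_spacings[i] if i < len(min_spacings) else 0.0)
    return spans, x

spans, total = compact(new_min_width, new_min_spacing)
print("linear shrink widths:", linear, "(contacts now too small)")
print("compacted spans:", spans, "total length:", round(total, 2))
```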
6.2.1
A TRADITIONAL, ROBUST RETARGETING PRODUCT In the late 70s and early 80s, silicon compilation was pioneered by Carver Mead and Dave Johannsen at Caltech. Silicon compilation then became the core technology of start-ups like Silicon Compilers in the Silicon Valley and Sagantec in the Netherlands. When silicon compilation did not "take off" as expected, Sagantec decided to use the compaction technology inside their silicon compiler engine to attack Hard IP retargeting, long before anybody in engineering knew that IP could mean anything more than a networking term. Through much research and a lot of experience gained from migrating libraries, a product called "DREAM", an acronym for Design Rule Enforcement And Migration, was created. DREAM is today the most widely used tool for Hard IP retargeting and is very robust. DREAM has been expanding steadily, thanks to the steadily increasing complexity of processing rules and to feedback from customers worldwide. DREAM is now a solid product capable of retargeting designs as simple as libraries and as complex as chips in the one to two million transistor range. Since the description of Hard IP retargeting in Chapter 2 was based on DREAM, the description given there basically reflects its functional capabilities. We will not use any more space talking about the various features of DREAM here. As is well known when talking about software, well established products, while very robust, can ultimately be surpassed with newer, previously unknown algorithms. Sagantec recently introduced a new product called Hurricane, a vintage 2000 product that is based on a completely newly designed compaction engine, with features paralleling DREAM but running much faster and handling much larger layouts. As is generally known, software speeds often follow a linear curve with capacity until they "hit" that famous breakpoint (knee) where the relationship between speed and capacity becomes nonlinear and the process slows down. For Hurricane, that breakpoint has been extended by orders of magnitude compared with DREAM. Thus, increasingly, DREAM will be used for library migrations while Hurricane will be used for the migration of large Hard IP blocks and ICs. We have already talked about the setup phase for migration jobs. When interacting with a terminal, some engineers like to type. However, engineers who are especially conceptually and visually minded generally love a nice graphical user interface (GUI). A nice GUI called EnCORe (Environment for Core Optimization and Reuse) has recently been introduced. EnCORe makes it easier for designers of all skill levels to set up technology and design migration parameters. With EnCORe, all Sagantec products utilize a common process database that serves as a central repository, accessible through templates containing all the basic data belonging to certain foundries. It makes the process of adjusting certain process parameters "on the fly" particularly easy.
6.2.2 STATE-OF-THE-ART, FULLY HIERARCHICAL RETARGETING
Today's most sophisticated DSM VLSI chips consist of a lot of polygons. For most steps in the design of complex chips, the best strategy is to stay as far away as possible from the low-level details of the physical implementation of the desired functions. Thus, when designing complex VLSI chips, design
work will be organized whenever possible in a modular fashion and based on a high-level functional or behavioral approach. For the modular approach, large designs will be divided into major functional blocks. This approach, referred to as "divide and conquer", has advantages such as dealing with smaller blocks and letting each block be designed by a specialist for the particular function, among many others. Each block in itself may have additional hierarchical levels. Most designers want to maintain the original identity/modularity and hierarchy of these blocks through migration. For many years, it was impossible to migrate fully hierarchically. Now, a fully hierarchical retargeting tool is available on the market. It is a tool from Sagantec named SiClone. One simple example of the migration of a layout will demonstrate one aspect of full hierarchy maintenance. In Chapter 2, we discussed how we can maintain the hierarchy for a traditional compaction engine such as DREAM, and therefore the identity of cells in a regular array such as a memory array. To achieve this result, the user has to identify cut lines or let the compactor do it automatically for him or her. Then the array is migrated by migrating a single cell, with the subsequent tiling of the array coming together like Lego blocks. User input is not required to define hierarchy for a compaction engine that maintains all levels of a hierarchy. In this simple example, this means no cut lines need to be defined and the array will be migrated as a whole, and yet all cells retain the original hierarchy and are clearly identifiable in the migrated array. While it is difficult to show the complete hierarchy maintenance in a picture of such a migrated array, the identity of the cells is clearly visible in Figure 6.1, which shows a memory array migrated hierarchically with SiClone.
Fig. 6.1 A Memory Array Migrated While Maintaining Hierarchy
With traditional compaction, a flat migration is sometimes performed because the user does not want to bother with the required setup or because he or she does not know how. With SiClone, this setup is no longer needed, saving time. For layouts with overlaps, it is not possible to introduce cut lines. SiClone maintains hierarchy for layouts with overlaps as well.
6.3
TOOLS FOR HIGH PRODUCTIVITY IP CREATION ON THE LAYOUT LEVEL
To create a new layout, we need a layout editor. There are several on the market. Using a layout editor is most productive if the designer does not have to worry about layout rules. A combination of a compactor and a layout editor provides this type of solution. As with DREAM, all the design rules are in the database of the compactor. In fact, a compaction engine just like the one used for DREAM can be integrated into a layout editor and, voila, the problem is solved.
Sagantec offers just such a product called "Companion" because it is a companion to the layout editor. Enormous productivity improvements can be achieved using such a combination of tools. We have discussed the basic ideas about Companion in Chapter 4. In Figure 6.2, we show a commercial version of Companion seamlessly integrated into the Cadence Virtuoso Layout Editor environment.
Fig. 6.2 Sagantec's Companion in Cadence's Virtuoso Environment
6.4
OPTIMIZATION OF PHYSICAL LAYOUT FOR PERFORMANCE & BETTER YIELD
At present, there are no commercial products that optimize layout geometries by varying the drivers (transistors) and the load (interconnect) simultaneously in order to find the optimal sizing of their combination, but there are products at the university level, as mentioned [3]. As we have suggested in Chapter 4, transistors and interconnects must be optimized simultaneously for the ultimate layout optimization. However, there are intermediate commercial solutions and we will have a look at them now. We will discuss a product that allows optimization of transistor sizes without addressing the interconnects, and a product that allows certain manipulations on interconnects without addressing the transistor sizing. First, transistor sizing. There is a commercial product for finding the best combination of transistor sizes. The product, called AMPS, is available from Synopsys. AMPS determines the best transistor sizes to optimize a VLSI chip layout for power, speed and area simultaneously. For interconnects, another commercial product called XTREME helps improve VLSI chip performance by manipulating the physical characteristics of interconnects. This product is available from Sagantec. We will discuss some of the details of these tools below and examine where that leaves us in terms of performance optimization of DSM VLSI chips.
6.4.1
OPTIMIZING TRANSISTOR SIZES A traditional approach to obtaining the desired performance in a VLSI chip is based on changing transistor sizes by exchanging them for larger or smaller devices to obtain stronger or weaker buffers. This is a rather established technique that was clearly the way to go before DSM technologies. This approach worked well for synthesis or even custom designs, especially in view of the lack of more sophisticated layout optimization algorithms. It is a generally established pattern in performance optimization, and designers feel comfortable with this approach. Based on this established way of thinking, we would expect to see a similar initial thrust in physical layout optimization tools even for DSM technologies. This is indeed the case. The only commercially available tool for optimizing physical layout focuses on transistor geometries. This tool, called AMPS, is available from Synopsys. Of course, there is a significant difference between the traditional attempts to find the best transistor drive strengths and AMPS. While a tedious and time-consuming trial-and-error approach was traditionally used in
order to (hopefully) find the best transistor sizes, AMPS uses the power of the computer and optimization algorithms to do the work for us. AMPS can simultaneously optimize digital CMOS designs following user-specified requirements for delay, power, timing slack or power/delay/area cost functions. In conjunction with another Synopsys product called ACE, AMPS can also serve to optimize analog circuits. Since AMPS is an analysis tool, it only analyzes the transistor sizes. AMPS does not change the physical layout but rather modifies the schematic. Changes in the circuit can then occur by means of resynthesis in a prelayout mode or modification of the physical layout in a postlayout mode. We have already discussed the two options of going through a resynthesis flow versus simply changing the transistor sizes with compaction. For prelayout, resynthesis-type replacement of transistors, interconnect effects are estimated, and there needs to be enough room available for placement of the differently sized transistors. This approach literally requires leaving empty room around transistors in anticipation of timing difficulties, which is really difficult to plan for. If there is not enough space, rerouting may be required, which could seriously change the timing of the chip. This may generate a significant amount of work and, in fact, it may force the designer into a rework cycle to achieve timing closure. On the other hand, with compaction, postlayout adjustment of the transistor sizes is very easy. It is also much more accurate, since the values for the parasitics of interconnects are extracted from the layout. Of course, space is also needed with compaction. However, with compaction, space can be created, as we discussed in Chapter 3 and demonstrated in Figure 3.1. Space can be created through the compaction process because layout features other than the transistors are allowed to change to make things fit. Polygons can be moved wherever there is some space, as long as no process layout rules are violated. That does not mean that the transistor enlargements suggested by AMPS can always be accommodated. But it does mean that the chances are much greater than they are when just trying to exchange any of the transistors with larger ones. Thus, the combination of AMPS with a compaction engine such as DREAM is very appropriate and powerful. AMPS requires input from a timing analyzer and a power analyzer, such as PathMill or PowerMill from Synopsys, in order to determine the required transistor modifications for either timing or power optimization. In conjunction with compaction tools, AMPS also addresses the issue of hierarchy. When seeking the critical timing or power information, it can proceed on a block-by-block basis or it can ignore the functional boundaries between the various blocks on a chip, which is like flattening the design. The pros and cons are similar to what one experiences with other tools. There is a trade-off between finding the best possible total design parameters for a flattened design versus accepting a less optimal solution that simplifies the task and requires less computational effort. Finally, AMPS working with compaction can respect keep-out areas defined in an existing layout or in a layout to be migrated, in order not to disturb established critical timing, such as in an analog block or for timing-critical clock lines.
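As a purely illustrative sketch of the kind of decision such a slack-driven sizing step makes (this is not AMPS's actual algorithm, and all names and numbers below are invented), gates on negative-slack paths can be upsized first, subject to a budget on how much extra area the subsequent compaction can be expected to absorb:

```python
# Toy slack-driven upsizing pass: worst slack first, within an area budget.
gates = [  # (name, slack_ps, delay_gain_ps_if_upsized, extra_area_um2) -- all invented
    ("U17", -120.0, 60.0, 4.0),
    ("U42",  -80.0, 45.0, 3.0),
    ("U03",   10.0, 30.0, 2.5),   # positive slack: no need to touch it
    ("U29",  -30.0, 20.0, 1.5),
]
area_budget = 6.0  # um^2 of growth we assume compaction can absorb (assumed)

def pick_upsizes(gate_list, budget):
    """Greedy: most negative slack first, take the upsize if it fits the budget."""
    chosen, used = [], 0.0
    for name, slack, gain, area in sorted(gate_list, key=lambda g: g[1]):
        if slack < 0 and used + area <= budget:
            chosen.append((name, slack + gain))   # record gate and its new slack estimate
            used += area
    return chosen, used

changes, used = pick_upsizes(gates, area_budget)
print("upsized:", changes, "area used:", used)
```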
6.4.2 INTERCONNECT LAYOUT ADJUSTMENTS With interconnects being so critical for DSM VLSI chip performance, it is clear that their characteristics must be carefully analyzed. However, as we have seen in Chapter 3, analyzing interconnects is also very complicated because of their distributed electrical characteristics. It took years to develop the understanding that led to the relatively simple and yet powerful methodologies for analyzing timing characteristics of DSM VLSI interconnects as shown in Chapter 3.
Of course, time delays are only part of what we would like to know about interconnects. The continued, rapid push towards smaller layout geometries keeps designers guessing about electromigration, larger than expected IR (voltage) drops (see [10] for some enlightening information), and questions of signal integrity due to cross-coupling, with the resulting increase in power dissipation and lowering of the speed of chips. The worst thing about these issues is that they tend to show up unexpectedly after a chip has been fabricated, because manufacturing technology progresses faster than EDA tool development. How about a tool that works more or less without any analysis? Could capacitive coupling between interconnects, with all the resulting benefits for speed, power and signal integrity, be reduced without much analysis? Could manufacturing yield be improved without much analysis? There is such a tool, called XTREME, from Sagantec.
Fig. 6.3 Routing Before and After Spreading With XTREME
XTREME pushes interconnects apart, spreading them evenly within the available space, resulting in smaller capacitive coupling and better manufacturing yield. We have seen the results of using XTREME in Figure 3.8, Chapter 3. Another example is shown above in Figure 6.3, a small section of routing before and after using XTREME. Clearly, interconnects that were spaced closely along parallel lines before using XTREME are spaced farther apart after the process. They are no longer tightly coupled as they were before. Also, on average, they are farther apart without increasing the overall routing area. Actual applications of XTREME in the field support what is visually evident: a substantial improvement in performance and yield. Why is a tool such as XTREME needed? Couldn't routers just do what we see on the right side of Figure 6.3 as opposed to what we see on the left? The answer has to do with the complexity of the task. Routing is already very compute-intensive, and one of the primary, difficult tasks of a router is the interconnection of all the points of connection in the layout. Finishing a route is often no easy feat. If the router were given all these additional constraints, the task would simply become too overwhelming. It is easier to do the spreading as a postroute step with XTREME. We all know that hindsight is easier than foresight. It has been a general and desirable trend to constantly increase the layout density on chips, and the space occupied by interconnects has been watched particularly carefully because it is a large portion of a chip layout. To minimize this space, design automation tools often place interconnects as closely together as possible (and often closer than necessary, as we will see). This increases the chances of cross-coupling between neighboring interconnects. Additionally, interconnects are made as narrow as possible in order to further maximize the packing density. All without violating any process layout rules, of course.
Placing interconnects closely together will also increase the probability of shorts between metals through "bridging". One mechanism causing this is defects, the probability of which depends on the defect density of the particular process. We will examine this further below. Although bridging is only one of many possible failures, it is an important one. However, others can be caused by keeping layout dimensions as small as possible, even in places where no performance benefits result. We discussed possible alternatives in Chapter 5 when we talked about DfM. Well, XTREME allows the routing at least to be done in such a manner as to optimize performance and yield.
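The basic idea behind such spreading can be sketched in a few lines (this is not XTREME's algorithm; the channel and wire dimensions below are invented): wires packed at minimum pitch are redistributed evenly across the routing span that already exists, so the smallest wire-to-wire gap grows without enlarging the routing area.

```python
# Toy wire spreading: redistribute parallel wires evenly inside a fixed channel.
def spread(wire_centers, wire_width, channel_lo, channel_hi):
    """Space n wires evenly between the channel edges, keeping their order."""
    n = len(wire_centers)
    usable = (channel_hi - channel_lo) - n * wire_width
    gap = usable / (n + 1)                      # equal gap at both edges and between wires
    new_centers = [channel_lo + gap * (i + 1) + wire_width * (i + 0.5) for i in range(n)]
    return new_centers, gap

# Four 0.5 um wires originally packed at a 1.0 um pitch inside a 10 um channel (assumed numbers).
packed = [0.75, 1.75, 2.75, 3.75]
centers, gap = spread(packed, wire_width=0.5, channel_lo=0.0, channel_hi=10.0)
print("spread centers:", [round(c, 2) for c in centers], "gap:", round(gap, 2))
```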
6.4.3 YIELD ENHANCEMENTS WITH XTREME It is well known that one of the key culprits in lowering the yield in chip manufacturing is the much-talked-about defect density. This is talked about only in generalities in public, however. When it comes to actual, real numbers, it is one of the more carefully guarded secrets about a particular process, because disclosure would reveal pricing and profit information for chips that companies wish to keep to themselves. In the present discussion, it is perfectly fine to talk in generalities about numbers and trends. Qualitatively, it is "intuitively obvious" (don't you love this phrase?) that spreading interconnects apart (as with XTREME) will increase manufacturing yield. The following figures, however, will lend the intuition a more quantitative character. The goal is to show a relationship between interconnect spacing, defect density, defect sizes and yield. The general relationship between yield, chip area and defect density is well known and can be found in many textbooks about VLSI chip design, as well as in some very recent publications containing much more than just the yield equation. Semiconductor manufacturing equipment is now capable of monitoring defects, defect density and their size during wafer processing. The following information will serve to show the value of interconnect spreading. Readers interested in this recent and fascinating subject of yield monitoring should consult [19]. The equation describing yield is as follows:

Yield = e^(-A_CRIT * D)

where A_CRIT is the total critical area with possible defect occurrence of a layer, and D is the defect density.
The critical area is not just the total chip area. It is the area in which a defect would do harm. This was explained in Chapter 5 and in more detail in [18]. Clearly, in empty space in a layout, nothing can be harmed by a defect. In Figure 6.4, we see a qualitative curve for the probability of a defect of a certain size occurring. It shows that, as defect size increases from zero up to a certain defect size, the probability of it occurring on a chip increases until it reaches a maximum value. Then, as we look for larger and larger defects, the chances of them occurring decrease.

Fig. 6.4 Probability of a Defect (for One Layer) Versus Defect Size [Microns]
In Figure 6.4, we can see that if we make the minimum design rule for the separation between interconnects larger than the value indicated by the arrow, we substantially lower the probability of a defect being big enough to cause a short. In Figure 6.5, we illustrate how a defect of a certain size would cause problems while a smaller defect would not.
Fig. 6.5 Defect Size Compared With a Typical Interconnect Separation
The critical area of a given defect size is shown in Figure 6.5. The concept is very simple. If the defect is smaller than the separation, no short will occur. This demonstrates the value of spreading interconnects with XTREME. Thus, to summarize, XTREME only does the spreading to take advantage of the available space. Of course, XTREME could benefit from some intelligent interconnect spacing analysis tool. It seems that some of these tools are on the horizon. But again, it was impossible to obtain sufficient and credible information to describe their virtues here. Thus,
XTREME does postlayout reengineering and has the following features:
1. It can handle any number of interconnect layers (one at a time) and supports a variety of routing styles.
2. It preserves hierarchy and, as is always the case with compaction, connectivity.
3. It accepts shape, symbolic data and LEF/DEF as inputs. DEF contains routing data; LEF contains technology and library data.
4. It provides full control of spacing and wiring trade-offs.
5. It automatically determines which wires are the most critical for cross-talk.
XTREME has been used extensively in the industry and has shown substantial reductions in cross-talk.
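As a worked example of the yield equation above, with assumed numbers chosen only to show the direction and rough size of the effect, reducing the critical area of a layer, for instance by spreading its interconnects, translates directly into a higher layer yield:

```python
# Worked example of the Poisson-style yield model: yield = exp(-A_crit * D).
# All numbers are assumed, for illustration only.
import math

defect_density = 0.5            # defects per cm^2 for the layer (assumed)
chip_area = 1.0                 # cm^2 (assumed)
crit_fraction_packed = 0.30     # fraction of chip area that is critical at minimum spacing (assumed)
crit_fraction_spread = 0.18     # after spreading, fewer defects are large enough to bridge (assumed)

def layer_yield(crit_fraction):
    a_crit = chip_area * crit_fraction          # critical area in cm^2
    return math.exp(-a_crit * defect_density)

y_packed = layer_yield(crit_fraction_packed)
y_spread = layer_yield(crit_fraction_spread)
print(f"yield at minimum spacing: {y_packed:.3f}")
print(f"yield after spreading:    {y_spread:.3f}")
print(f"relative improvement:     {100 * (y_spread / y_packed - 1):.1f}%")
```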
CHAPTER 7
PRODUCTIVITY AND RISKS WITH HARD IP OR SOFT IP REUSE: SOME GENERAL OBSERVATIONS The complete sequence of steps one has to go through to design a chip is generally referred to as a "design flow". This is when one starts a design from scratch. Although we focus on reuse in this book, design from scratch often needs to serve as a reference. Thus, it will be mentioned as such. We have already stated that all Soft IP will eventually become Hard IP, suggesting that Hard IP reuse allows us to go directly to the back-end of a design or reuse flow. This might suggest to some an immediate savings in time and resources when comparing Soft IP versus Hard IP reuse. However, such a statement would immediately and justly infuriate a Soft IP proponent, make a Hard IP proponent very happy, and serve only to create a situation in which the two perfectly legitimate engineering approaches could never again be viewed objectively. The fact is, both Hard IP and Soft IP reuse have strengths and weaknesses. Let us attempt to make an objective comparison. After all, it makes no sense to make a comparison between engineering approaches if it becomes an exercise in marketing for one of them. The number of steps and the time required to create the type of chip desired is only one aspect of comparison. It is an important one, but only one among many. Let's see what else we need to look at.
7.1
SOME REASONS FOR LOOKING AT BOTH HARD IP AND SOFT IP REUSE When comparing Soft and Hard IP reuse, some of the many points to examine are: risks, time-to-market and its predictability, flexibility in terms of how much freedom one has to choose the target technology, the flexibility in engineering changes one can make to a design to fit new requirements, etc. In fact, flexibility and risk are often viewed as the two focal points for comparison. Of course, it is very important not to take such a limiting view when comparing these two engineering approaches. We have already seen in the discussions in previous chapters, especially when talking about optimization, that there are many not so obvious factors to consider. There are also issues such as tool costs and the type of engineering talent required, which are much different for the two approaches. Of the many points of comparison, we will hopefully touch upon the most important subset. Soft IP reuse offers a very high level of flexibility. This flexibility alone may be enough reason for some to take that path. Engineers especially love that flexibility. However, the high degree of flexibility often brings with it the highest level of uncertainty, especially for Soft IP reuse, in terms of the timing of the physical layout. When comparing Soft IP reuse with design from scratch, it seems obvious that chips designed from scratch, starting with only a very high-level functional description of what one expects from the chip, will require the highest level of resources. For a design from scratch, simulation and test vectors also need to be generated, while for Soft IP reuse they can generally be reused. High fault coverage in fault simulation is required in order to get acceptable assurance that a chip actually works as needed. This can be a time-consuming, expensive task that does not really guarantee a working chip, for reasons too test-specific to be discussed here. Thus, again, our conclusion is that a sensible IP reuse
methodology can mean substantial savings in resources. The quality of the simulation and test suites is already established and the desired performance can often be achieved by simply migrating to a higher performance process. Hard IP reuse offers considerably less flexibility than Soft IP reuse. This lack of flexibility often leads to an immediate rejection of the idea of Hard IP reuse. However, this lack of flexibility also has positive aspects. It eliminates potential functional errors and minimizes highly probable timing errors in a Hard IP reused part. We have also seen just how much can actually be accomplished with postlayout optimization. This alone should convince the rather biased synthesis world to look at compaction, the methodology used for Hard IP, to see its merit as a valuable complement to Soft IP methodologies. For Hard IP reuse, there are limitations to being able to benefit from some drastic topological changes from process to process, such as additional layers of metal. Furthermore, there are very limited means for changing any functionality of a reused Hard IP block. Of course, if functional changes are made to Soft IP, many of the advantages of Soft IP reuse are also lost. In Hard IP, floorplans and the aspect ratios of blocks can not be changed, or only in a very limited way. Thus, an open mind and some willingness to make compromises are required in order to benefit from both Soft and Hard IP reuse and get better, faster, less expensive and more predictable results. In a nutshell, with Soft IP reuse, we can preserve the functionality at the expense of flexibility, or gain more flexibility at the expense of guaranteed functionality. We can not very well predict the timing of reused Soft IP in DSM technologies. With Hard IP reuse, we preserve the functionality at all times and the timing most of the time. If the timing is off somewhat, it is easy to fix, with the additional benefit of being able to optimize the reused Hard IP with respect to timing, power, signal integrity and area as a postlayout step. Cost, performance, risks, and time-to-market requirements are some of the factors that influence the decision about which blocks to reuse as Soft IP and which to migrate as Hard IP. For reuse of a certain existing IP, an objective evaluation of the state of this IP is required. We already know one very important positive point: we know that the existing circuit works, it has a proven track record. It takes a very objective approach to decide if the design is otherwise sound. Is the architecture still up to date? Does it have the required levels of testability? Is the test coverage satisfactory? The questions that need to be asked are highly contingent on the application of the circuit. The questions also depend again on whether the retargeting is done as Soft IP or Hard IP. This evaluation is often also a real challenge between management and engineers. Engineers always want and know ways in which to "improve" a circuit no matter how well it works. They feel stifled by reuse, by the apparent inability to reengineer a system, especially for Hard IP reuse. The deciding engineers are also often front-end, high-level, architectural, creative types. They don't think of back-end layout optimization. The engineer's top priority is to create the most elegant solution with the highest possible performance. On the other hand, management generally leans more towards issues such as minimizing cost and minimizing or guaranteeing time-to-market with "good enough" performance.
When designing an entire system, there are usually at least some circuits that need not and should not be redesigned, and that is key. Reuse what can be reused and redesign what needs to be redesigned. Besides, progress in processing technology is so incredibly fast that the time between newer, more
aggressive processes is so short that obsolescence often has not had time to become a major issue in the design philosophy used for reusable IP. Having looked at some of the ideas concerning redesign versus reuse, we need to look at the costs, risks and time issues in more detail. Trying to maximize the useful lifetime of a well designed, field-proven chip is one of the key motivators for Hard IP reuse. Just where the boundary lies between doing a redesign versus reuse will have to be determined by evaluating the cost/performance/time-to-market benefits and trade-offs. As we mentioned, this chapter discusses both Soft and Hard IP, but the discussion will be limited mostly to comparing the two methodologies and pointing out the complementary values of Soft and Hard IP. We still have not addressed the question of what types of circuits are the best candidates for reuse. Performance of circuits, questions of costs, and silicon usage are all very strongly influenced by the way a chip is laid out and what design methodology was used. Full custom design allows for the ultimate attention to the chip layout, but the cost of full custom design may be prohibitive. However, the large initial investment for a full custom design is also one of the most important arguments for reuse. After looking at design and reuse flows with the associated cost and risk factors throughout these chapters, we will be in a better position to judge. It would certainly be nice to be able to amortize the high initial investment for a sophisticated VLSI design over more than one generation of processing technology through IP reuse and, in particular, Hard IP reuse.
7.2
DESIGN FLOWS: A VISUAL PERCEPTION OF DESIGN EFFORTS
Design flows give an intuitive picture of the steps involved in designing a VLSI chip. There are questions about how many steps are required, what kinds of tasks have to be performed at each of the steps, and what skill levels are required. There are questions about the sophistication and cost of the required tools. There are questions about the level of control over the outcome, the time required and so on. Just as there are many possible design methodologies, there are just as many possible design flows. The best we can hope for is to choose design flows here that help demonstrate the major challenges with respect to IP reuse. The main motivation for discussing design and reuse flows here is to attempt to show other angles justifying IP reuse. So far, our discussion has been primarily from a somewhat technical perspective. We also want to see whether IP reuse makes economic and business sense. Do these approaches really yield major productivity improvements? Do we really get a shorter time-to-market? So the main focus here is a comparison between some of the major design flows in general terms. We will limit ourselves to three areas, because they are the major focus of this book. They are:
1. A generic design flow using synthesis for pre-DSM chips, for which delays caused by interconnects could be ignored.
2. A generic design flow using synthesis for DSM chips, for which delays caused by interconnects significantly affect the performance of the chip.
3. A Hard IP reuse flow for retargeting any type of physical layout, pre-DSM or DSM, to a new process.
Comparing design flows between pre-DSM and DSM technologies will give us an idea of how many steps are required in order to design a chip in either of the technologies.
Comparing design from scratch with Soft IP reuse and Hard IP reuse will show us how many of the steps in pre-DSM and DSM design flows can be skipped because the information is already available and proven, and how many uncertainties and risk factors can be minimized. Keeping in mind the need for improvements in design productivity, we will look at the steps involved in designing today's complex DSM VLSI chips versus IP reuse methodologies. The need for a substantial increase in design productivity is what triggered the reuse of already existing, previously used designs and their retargeting to the latest, most advanced processes. Of course, the only part that would literally be reused is the design content, which is in fact the reuse of the IP content. While additional improvements in performance could be achieved through layout optimization at the same time we retarget a chip, for now we will only discuss simple retargeting that takes advantage of the tighter, higher performance process layout rules. This will allow us to focus primarily on profit-related issues of Soft IP and Hard IP reuse as standalone processes. For Soft IP reuse, existing high-level software descriptions of chips are retargeted to newer processes. What is reused is much - if not most - of the high-level programming, test scripts, simulation scripts, models and programs controlling the Soft IP. For Hard IP reuse, the existing layout databases are transformed to conform to the new physical layout design rules. Existing simulation and test vector suites and any control program that was developed for the part in question will be reused. In addition, any parts that were developed to satisfy established standards, or that had to pass some elaborate incoming inspection (anybody who has tried to sell chips to some French company knows what I mean), will be accepted much more easily after "just" retargeting. The refabricated designs will then have higher performance and be denser, and thus smaller, than in the original layout. Also, several of these retargeted designs can now be combined onto one chip in an S-o-C approach, yielding much higher levels of integration than previously possible. This retargeting of proven designs to newer technologies has become one of the more promising approaches for achieving the needed productivity gains. It seems obvious that such reuse of existing, proven designs through retargeting to more advanced technologies, as opposed to starting from scratch, should result in substantial savings in engineering and reduced risks, while still taking full advantage of the progress in processing technologies. This should also address the hottest issue in the market at present: shortening time-to-market. The main goal is to benefit from the rapid advances in processing technology, without paying the heavy price of constantly having to redo a design, by reusing the knowledge previously invested in these chips.
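To make the notion of retargeting a layout database more concrete, here is a minimal sketch in Python of the purely geometric part of the job: scaling layout shapes toward a tighter process and snapping them back onto the target manufacturing grid. The shrink factor, grid, rectangle format and minimum-width check are hypothetical placeholders; a real Hard IP migration tool works rule by rule on polygons, as described in Chapter 2, rather than with a single scale factor.

# A minimal, illustrative sketch of the geometric core of retargeting:
# scale layout rectangles by a shrink factor and snap them to the target grid.
# The numbers and the rectangle representation are hypothetical; real Hard IP
# migration applies the full set of target design rules, not one scale factor.

def snap(value, grid):
    """Snap a coordinate to the nearest point on the manufacturing grid."""
    return round(value / grid) * grid

def retarget_rect(rect, shrink, grid):
    """Scale one rectangle (x1, y1, x2, y2) and keep its edges on-grid."""
    return tuple(snap(coord * shrink, grid) for coord in rect)

def retarget_layout(rects, shrink, grid, min_width):
    """Scale a list of rectangles; flag any that violate the minimum width."""
    out, violations = [], []
    for rect in rects:
        x1, y1, x2, y2 = retarget_rect(rect, shrink, grid)
        if min(abs(x2 - x1), abs(y2 - y1)) < min_width:
            violations.append((x1, y1, x2, y2))   # needs rule-driven fixing
        out.append((x1, y1, x2, y2))
    return out, violations

if __name__ == "__main__":
    # Hypothetical 0.35 um shapes shrunk toward a 0.25 um process.
    source = [(0.0, 0.0, 0.35, 2.1), (0.7, 0.0, 1.05, 2.1)]
    scaled, flagged = retarget_layout(source, shrink=0.7, grid=0.005, min_width=0.25)
    print(scaled)
    print("rule violations needing compaction:", flagged)

The flagged rectangles are exactly the shapes a naive linear shrink cannot handle and where rule-driven compaction has to step in.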
7.2.1 CONSIDERATIONS WHEN LOOKING AT DESIGN FLOWS
In order to go from the concept of a VLSI chip we want to design to the fully tested final chip, we will have to come up with some "typical" design flow. Of course, what may seem typical to us may not seem typical to others. We also do not want to make the design flow too complicated, since a rather straightforward flow should do the job when it comes to comparing "the three ways" of getting a chip: design from scratch, Soft IP reuse and Hard IP reuse. An entire book can be devoted to design methodologies and design flows. An excellent, very recent book from 1998 [2] does just that. We will focus here on just the bare minimum.
It is not very difficult to predict the number of steps in going through a design flow once. However, it is often extremely difficult to predict how many iterations certain steps in the flow will require before we finally get a chip that meets all the desired specifications. This is exactly one of the difficulties with today's DSM VLSI chip designs. It is a problem in terms of issues such as time-to-market, budgeting, engineering resources, tools required and planning for processing line capacity to get the chip fabricated. In our attempt to compare the efforts and risks of the three flows considered here, it will be difficult to come up with absolute measures. Absolute measures are too unpredictable considering the many possibilities. We will therefore attempt to come up with some meaningful comparative measures between the three approaches. Accordingly, below we will attempt to assess:
* Just how many steps and iterations it may take to get a workable chip. We can highlight the steps and iterations that one can avoid through reuse.
* The time it takes and some of the challenges faced in going through some of the steps.
* The risks of a first-time failure requiring rework. One big factor here is clearly the uncertainty about timing until the physical layout is complete.
* The different skills needed for design creation versus reuse. This factor is rather critical in setting up an infrastructure to deal with design, on the one hand, and reuse, on the other. However, one can also see some strong advantages in taking notice of the different skills required for the different tasks.
* The design tools, some of which are very different for the different flows. This may be a barrier to Hard IP reuse, because company infrastructures are set up for the well established synthesis route.
There is extensive, detailed literature, including project descriptions, on the steps required for the design and fabrication of a VLSI chip. Of course, most of the really interesting ones, the latest ones, are not available to the public but are accessible only inside companies. Thus, we are forced to speak in generalities. There are many different approaches. They depend on the complexity of the design, the projected volume demanded in the market place, and the price one is expected to be able to charge for a chip. Then there are the most important requirements for the chip to meet, such as extremely high performance, extreme miniaturization, extreme reliability, extremely low power consumption or very rough environmental demands, many of which are in conflict with each other. In terms of flow, a chip could be designed with gate arrays, with standard cells, with programmable logic or, possibly, fully customized. It could be a system-on-chip (S-o-C) solution using a combination of some or all of the suggested design approaches. This makes it difficult to talk about one, or even a typical, design flow. Whatever we choose, evaluating the design process will still give us a feel for some of the critical issues and help us understand how IP reuse could speed up the process and save engineering resources. Even if we are off on some of these "guesstimates", we will still get the "big picture". In any case, it is pretty much the best we can do and, no matter what we say, the situation will have changed tomorrow anyway. The data from the flows may be sufficient to provide an indication of the Return On Investment (ROI). That would be quite useful.
Such an assessment is only meaningful, however, on a company-specific basis. One has to know the specific cost of doing business. This obviously varies from company to company.
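Because the break-even point is company-specific, the comparison is perhaps easiest to picture as a small calculation. The sketch below, with purely hypothetical cost and schedule figures, only illustrates the kind of arithmetic involved; the inputs must come from a company's own cost of doing business and market data.

# Purely illustrative ROI arithmetic for redesign versus reuse.
# All figures are hypothetical placeholders; substitute company-specific data.

def roi(revenue, cost):
    """Return on investment as a simple ratio."""
    return (revenue - cost) / cost

def project_value(revenue_per_month, months_of_sales, engineering_cost):
    """ROI of a project whose revenue window opens when the design is done."""
    revenue = revenue_per_month * months_of_sales
    return roi(revenue, engineering_cost)

if __name__ == "__main__":
    # Hypothetical: a redesign takes longer and costs more; a retargeted
    # part costs less and leaves a longer market window open.
    print("redesign ROI: %.2f" % project_value(1.0e6, 24, 8.0e6))
    print("retarget ROI: %.2f" % project_value(1.0e6, 36, 2.0e6))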
7.2.2 A PRE-DSM DESIGN FLOW
In Figure 7.1, we can look at a simplified pre-DSM flow. Immediately, one may ask the legitimate question: Why should we even look at a pre-DSM flow? Almost none of today's chips are fabricated using such a flow. True, but it does demonstrate the total lack of timing closure problems caused by the strong influence of physical layout on timing. It also shows just how much more involved the design of a DSM chip has become. Besides, we will spend very little time discussing it. Thus, the main characteristic of such a flow is the lack of repeated timing analysis once the physical layout phase starts. Once synthesis has chosen characterized cells that satisfy the timing requirements, timing analysis is finished. If the timing is off, one resynthesizes. However, once the timing is correct, we do not have to return to it after subsequent design steps.
Fig. 7.1 A Simplified Pre-DSM Design Flow (specification in a language, possibly English; a synthesizable subset of VHDL or Verilog; high-level simulation; logic and test synthesis, with re-synthesis as needed; simulation, timing analysis and ATPG; test vectors)
The major verification activities in the flow are functional and timing verification, as shown. There are many additional verification steps within the flow that are not shown in Fig. 7.1. There is also much more that could be said about the individual steps in this flow. However, the same issues and many additional ones will be part of a modern DSM flow, and we will discuss them in the section about DSM. As mentioned above, possibly the most significant fact about this pre-DSM flow is that all the timing information is in the active parts of the chip, so the design can be handed over to processing, a foundry or another chip maker, based on a functionally and timing-verified netlist.
7.2.3 A DSM DESIGN FLOW
In Figure 7.2, we see a simplified DSM flow. It shows many more steps than a pre-DSM flow, but still only the essential steps. The flow diagram for DSM shows a lot more timing analysis than the one for pre-DSM. Also, what is not really indicated is that all the timing analysis is based on estimates for the dominant timing element, the interconnects. We do not really know what to use for the interconnects until after parasitic extraction. Note that in Figure 7.2, IPO means In Place Optimization. The flow diagrams should contain the key ingredients for a fair evaluation of the time and effort necessary to successfully design a DSM chip. Of course, such design flows are "moving targets" in today's fast moving hi-tech world. Ideas for new tools are constantly emerging and, with the increasing complexity, whatever new tools emerge will probably arrive not a moment too soon. Yet, however tentative a flow may be, we need to establish a basis from which we can discuss the major differences between the three flows discussed here.
Fig. 7.2 A Simplified DSM Design Flow (key steps include: specification in a language, possibly English; a synthesizable subset of VHDL or Verilog; high-level simulation; logic and test synthesis using statistical wire loading to meet timing; synthesis refinement using estimated wire loading to meet timing; ATPG; final synthesis, which should only require IPO)
7.2.4 HARD IP REUSE FLOW
In Figure 7.3, we see a Hard IP reuse flow. It is a very simple flow. Minor adjustments may or may not be needed. The same is true for the setup step. As we discussed in Chapter 2, if we want to maintain certain levels of layout hierarchy, the layout needs to be cut into smaller pieces such as memory cells, multiplier cells, barrel shifters, etc. The pieces could also be major blocks of a chip, like CPUs, complete RAMs, etc. Cut lines defining the boundaries of these cells or blocks can be determined by a compactor automatically or with manual assistance. Often, manual assistance has a positive effect on the compacted results. The feedback loop could also be used for optimization, but other pieces of software, such as EDA analysis tools, are then often needed to automate and analyze some of the optimization steps.
Figure 7.3 A Simple Hard IP Reuse Flow (with a feedback loop for minor adjustments)
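To give a feel for what the compactor in Figure 7.3 actually does, here is a minimal sketch of one-dimensional, constraint-graph compaction in Python. It treats layout elements as intervals along x, turns spacing rules into minimum-distance constraints between neighbours, and solves them with a longest-path pass. The element list, the single spacing rule per pair and the assumed left-to-right ordering are hypothetical simplifications of the polygon-based, multi-rule compaction described in Chapter 2.

# A minimal 1-D constraint-graph compaction sketch (hypothetical rules).
# Each element is (name, width); a constraint (a, b, gap) requires that the
# left edge of b sit at least width(a) + gap to the right of the left edge of a.
# Relaxing the constraints in topological order is a longest-path computation,
# which pushes every element as far left as the spacing rules allow.

from collections import defaultdict

def compact_1d(elements, constraints):
    """elements: list of (name, width); constraints: list of (a, b, min_gap)."""
    position = {name: 0.0 for name, _ in elements}
    width = dict(elements)
    edges = defaultdict(list)
    for a, b, gap in constraints:
        edges[a].append((b, width[a] + gap))
    # Assumes the element list is already in topological (left-to-right) order.
    for name, _ in elements:
        for succ, delta in edges[name]:
            position[succ] = max(position[succ], position[name] + delta)
    return position

if __name__ == "__main__":
    cells = [("poly1", 0.25), ("contact", 0.30), ("metal1", 0.40)]
    rules = [("poly1", "contact", 0.20), ("contact", "metal1", 0.25)]
    print(compact_1d(cells, rules))   # minimum legal left-edge positions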
We will now look at the essential steps in the design flows and comment later on some of the additional challenges that are on the horizon. Rather than show recursive steps in the flow diagrams, we will discuss them in the text. The main focus in this and the following sections is to identify measures of comparison between the efforts needed to get a workable chip using any of these three approaches. For these reasons, we will limit the observations to steps in the flows that are critical for comparing the flows with respect to the major challenges in bringing a chip to the market. While it is difficult to decide, a priori, where it "will hurt the most" across the large range of requirements, we do have a pretty good general idea of the main challenges VLSI chips face. For most of them, it is time-to-market; for some, it may be economics. Then again, for some special requirements, it could simply be the maximum performance one can squeeze out of a chip. For others, such as applications in space, the goal might be to maximize reliability. Finally, the choice of flow could be made based on a shortage of the highly skilled engineers required to design a totally new chip. The priorities may be many. Because of this multitude of potential priorities, we will focus below primarily on the strengths of the three flows suggested above, stressing IP reuse as opposed to starting from scratch every time we need a higher performance chip.
7.3 EXAMINING FLOWS: DESIGN FROM SCRATCH, SOFT IP AND HARD IP REUSE
We have seen design flows for pre-DSM and DSM technologies. The main difference between the two is that, for DSM technologies, we try to shift timing analysis toward the end, once the physical layout is complete or almost complete. For pre-DSM, we can determine accurate timing as soon as we have selected the active components for the circuit. Clearly, for any design, the sooner we have reliable timing information, the better. When using compaction as a postlayout step, timing data that is not "too far off" is good enough to achieve timing closure.
7.3.1 DESIGN FROM SCRATCH
We will look at the steps required to design a DSM VLSI circuit and will use the information describing these steps to understand the benefits of a flow in which these steps do not have to be taken when following the path of IP reuse.
Specification
At the specification level, the description of what one "wants" may have one of many possible formats, depending on needs and preferences. Although the specification could start out with a human-language description, it will generally be translated immediately into a more precise formulation.
Coding Into High-Level Behavioral Language
Once the specification is clear and understood by everybody, the "intent" needs to be translated into a format that is technically doable. It needs to be coded. The desired description could be in a high-level behavioral hardware description language (HDL), a flow diagram or a state diagram, etc. If it is in Verilog or VHDL, cycle-to-cycle clock activity will not be defined at this point, because the description is strictly behavioral and contains no timing information. It also does not have to be a synthesizable subset of Verilog or VHDL, but it should adhere strictly to the IEEE standard. If the specification is in an HDL, a block diagram or a flow diagram, the behavioral code can be machine-generated in VHDL or Verilog. Because there is no link to any particular physical version, we still have the freedom to generate various functionally equivalent architectures. This is very efficient and extremely useful. However, machine-generated code does not contain comment lines, and the resulting code will be difficult to interpret. Since reuse is the "flavor of the month" these days, this may be a serious drawback, because it can make reuse, especially of archived code, difficult once the original designers have left the company. The alternative to machine-generated code is to generate the code "by hand". In the interest of reuse, guidelines such as those suggested in the RMM [1] should be followed. Now we may want to proceed with synthesis. If this is the case, the language in which the desired design is coded must be a synthesizable subset of behavior-level VHDL or Verilog. However, there is still no "attachment" to an actual physical implementation. It is a behavioral description.
Functional Verification
This highest-level functional description of the desired project is the direct result of a specification that may have been done in a human language such as English. Translating this intent into a more technical, more mathematical format may add ambiguities. However, no matter how the functional intent was specified, it needs to be verified - and the earlier, the better. A functional simulation is needed to make sure the ultimate product eventually does what it needs to do, at least functionally. To verify this functional-level specification, we need a functional simulator. There are several on the market. So far, the designs do not include any timing information. As a result, any verification of the timing will have to be done later, although the desired speed has already been projected in the specification.
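As a toy illustration of what a functional (zero-delay) simulation does, the sketch below checks a small behavioral model, here a hypothetical 2-bit adder written as a plain Python function, against its specification for every input combination. Real functional simulators work on VHDL or Verilog and on far larger state spaces, but the principle is the same: verify the intent before any timing or implementation detail exists.

# Toy functional verification of a behavioral model against its specification.
# The "design" is a hypothetical 2-bit adder with carry-out; no timing is modeled.

from itertools import product

def adder2_behavior(a, b):
    """Behavioral model: 2-bit add, returns (sum mod 4, carry-out)."""
    total = a + b
    return total & 0b11, (total >> 2) & 0b1

def specification(a, b):
    """Golden reference derived directly from the written spec."""
    return (a + b) % 4, 1 if a + b > 3 else 0

def functional_simulation():
    failures = []
    for a, b in product(range(4), repeat=2):   # exhaustive for a tiny design
        if adder2_behavior(a, b) != specification(a, b):
            failures.append((a, b))
    return failures

if __name__ == "__main__":
    print("mismatches:", functional_simulation())   # expected: []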
Synthesis
This initial phase of synthesis (in a new design) has to do without very much timing data. After checking the functionality at the behavioral level, logic and test synthesis will be performed. As we pointed out, the coding must be a synthesizable subset of Verilog or VHDL. If the data describing the design is behavior-level VHDL or Verilog, it can be translated into RTL code. The resulting RTL code is a preparation for synthesis. RTL code specifies clock-to-clock cycle operations. It also becomes linked to an architecture, as opposed to a behavior-level description. Thus, architectural trade-offs have to happen before this step. In essence, synthesis is the same as going to the "parts department" (called the technology library in VLSI design) to find the physical building blocks with which to put our system together. The range of parts will generally run from large macros to standard cells to gates. The granularity is, of course, much finer for silicon compilers. Since we are now at an implementation level, such as RTL code synthesized directly into gates or other building blocks, timing starts to play a role. We need to select those components from a technology library that have the proper timing characteristics for the entire chip. Since layout is so critical in DSM technologies, we also need some estimate of the timing of the interconnects. Since there is no physical layout, the only data available at this point for timing is what is generally referred to as a "statistical wire-load model". This model is an attempt to specify some timing before any floorplanning or placement. Such statistical models have no relationship to the design under consideration. They are based on past designs, there are few of them, and the technology is constantly and rapidly changing. This is like predicting stock prices based on past performance. A better approach is often referred to as "custom wire models". With these models, interconnect timing estimates are based on projected physical placements of the building blocks in the present chip, the chip that is actually being designed. No routing has been done, and no extractions have been done. These models are better than statistical wire-load models, but timing convergence is still highly unlikely. Since the routing of the interconnects has such a dramatic effect on timing, their accuracy is still seriously questionable.
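The following sketch illustrates, in Python, the idea behind a statistical wire-load model: with no layout available, a net's capacitance and resistance are guessed from its fanout, using a table calibrated on past designs, and the guess is folded into a simple gate-plus-net delay estimate. The table values, gate parameters and linear delay expression are hypothetical; real libraries carry their own calibrated tables and delay models.

# A hypothetical statistical wire-load model: estimate net parasitics from
# fanout alone (no layout exists yet) and fold them into a crude delay estimate.

# Fanout -> (estimated capacitance in pF, estimated resistance in ohms).
# These numbers are placeholders standing in for a table calibrated on past designs.
WIRE_LOAD_TABLE = {1: (0.02, 20.0), 2: (0.05, 45.0), 4: (0.11, 95.0), 8: (0.25, 210.0)}

def estimate_net_parasitics(fanout):
    """Pick the closest table entry at or above the requested fanout."""
    for key in sorted(WIRE_LOAD_TABLE):
        if fanout <= key:
            return WIRE_LOAD_TABLE[key]
    return WIRE_LOAD_TABLE[max(WIRE_LOAD_TABLE)]

def estimated_stage_delay_ns(cell_delay_ns, drive_res_ohm, pin_cap_pf, fanout):
    """Cell delay plus an RC estimate for the yet-unrouted net."""
    net_cap, net_res = estimate_net_parasitics(fanout)
    load_cap = net_cap + fanout * pin_cap_pf
    rc_ns = (drive_res_ohm + net_res) * load_cap * 1e-3   # ohm * pF = 1e-3 ns
    return cell_delay_ns + rc_ns

if __name__ == "__main__":
    # Hypothetical cell: 0.15 ns intrinsic delay, 1 kohm drive, 0.01 pF input pins.
    print("fanout 2:", estimated_stage_delay_ns(0.15, 1000.0, 0.01, 2))
    print("fanout 8:", estimated_stage_delay_ns(0.15, 1000.0, 0.01, 8))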
Verification Between Levels of Abstraction
The design flow shown in Fig. 7.2 starts at a high level of abstraction, a behavioral or functional level, and proceeds towards lower levels of abstraction, eventually the physical layout. The translation between levels requires verification to make sure the initial intent is not lost in the process. This might best be done with formal verification between levels, where such a test is an equivalence test of logic functions. Thus, the following steps are required at some levels and between levels of abstraction:
1. Create the intended design at a certain level of abstraction.
2. Verify the desired function at that level.
3. Translate to the lower level.
4. Verify consistency throughout these three steps.
As pointed out in step 2, there need to be verifications at some levels in the flow, besides the verification of the "translation" between all the levels of abstraction. The highest level of verification is to check whether the system designed does what we want it to do. This will be done first at the functional level.
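A formal equivalence check between two adjacent levels can be pictured, for a tiny combinational block, as an exhaustive comparison of two logic functions. The sketch below compares a hypothetical higher-level expression with a hypothetical gate-level counterpart over all input values; production equivalence checkers use BDDs or SAT rather than enumeration, but the question they answer is the same.

# Toy logic equivalence check between two levels of abstraction.
# Both functions are hypothetical stand-ins: a "high-level" expression and the
# gate network a synthesis step might have produced for it.

from itertools import product

def high_level(a, b, c):
    """Intended function: majority of three inputs."""
    return (a and b) or (b and c) or (a and c)

def gate_level(a, b, c):
    """A NAND-based implementation of the same majority function."""
    nand2 = lambda x, y: not (x and y)
    x, y, z = nand2(a, b), nand2(b, c), nand2(a, c)
    return not (x and y and z)          # 3-input NAND of the first-level gates

def equivalent(f, g, n_inputs):
    """Exhaustively compare two combinational functions of n_inputs bits."""
    for bits in product([False, True], repeat=n_inputs):
        if bool(f(*bits)) != bool(g(*bits)):
            return False, bits           # counterexample found
    return True, None

if __name__ == "__main__":
    print(equivalent(high_level, gate_level, 3))   # expected: (True, None)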
Floorplanning and Placement
We are now in the early stages of the physical layout of a chip. Fig. 7.2 suggests that floorplanning, placement and routing are separate tasks. Ideally, these tasks should be done together, interactively.
This is not done in practice, because each of these tasks is already extremely computer-intensive by itself. This is especially true for routing (discussed later). However, we will see in the discussion here that it is conceptually difficult to separate them, because the end result depends so much on their working well together. With floorplanning, one tries to get an early idea of how the major blocks are going to fit together and how the shapes and aspect ratios of the various blocks will affect putting the puzzle together. A critical question is how easily the blocks will interconnect. Are the intercommunicating contacts close to each other or not? Many blocks might want to use feed-throughs to ease the task. Feed-throughs are much more important for DSM VLSI chips than for earlier processes. If the floorplanning is done with manual interaction, visual aids such as a rat's nest display are used to get an indication of congestion and the paths of major interconnects. The placement actually places the various building blocks and therefore determines, among other things, the space available for the router to place the interconnects. The quality of a floorplan, in conjunction with the space reserved in the placement for the router, can make the difference between a good route, a bad route or a route that does not even complete. It also has a big effect on timing in DSM technology chips. After floorplanning, the relative positions of these blocks are set and we have a general idea of the lengths the interconnects will have.
Refined Synthesis
After floorplanning and placement, net loads are still estimates, now based on the placement and orientation of the blocks. Using these data, a more refined synthesis can be performed. Net loads are backannotated, and a more informed selection of cells can be made by synthesis. Net and cell delays may be specified in a format such as SDF (Standard Delay Format); net resistances and physical cluster and location information may be passed via PDEF (Physical Data Exchange Format). However, at this point it is still only the active parts of the circuit that have accurate delays. The net delays are still an estimate, though an educated estimate. Based on the available data, a timing analysis will show whether the timing is in the ballpark. If the timing is way off, routing - a very compute-intensive and time-consuming step - makes no sense. It will probably be better to consider rearchitecting the chip at this point to at least approach the desired timing. Of course, this decision is up to the designer.
Routing
Global routing (loose wiring) determines the regions traversed by the wiring. Detailed routing determines the specific locations of the wires. The success of routing is highly contingent on the floorplanning and the placement. Timing-driven routing is desirable because of the challenges of DSM technologies. In addition to the space constraints on the router, this means the router has additional constraints on critical interconnects to stay within certain delay limits once the routing is finished. Considering the complexity of the distributed RC load of the interconnects, and the fact that standard routing is already compute-intensive, this may be difficult to do well. However, it is one of the possibilities with today's latest tools.
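Timing-driven routing needs a quick way to judge whether a proposed route keeps a critical net within its delay limit. A widely used first-order figure is the Elmore delay of the net's RC tree, developed in Chapter 3 and in [5], [6]. The sketch below computes it for a small, hypothetical tree: each node carries the resistance of the wire segment from its parent, and the delay to a sink is the sum, over the path from the root, of segment resistance times all downstream capacitance. The tree and its values are illustrative only.

# Elmore delay of an RC tree (first-order interconnect delay estimate).
# The tree, resistances and capacitances below are hypothetical example values.
# tree: node -> (parent, segment resistance in ohms, node capacitance in fF)

TREE = {
    "n1": (None, 50.0, 5.0),     # driver-side segment
    "n2": ("n1", 80.0, 8.0),
    "sinkA": ("n2", 60.0, 12.0),
    "sinkB": ("n1", 120.0, 10.0),
}

def subtree_capacitance(node, tree):
    """Total capacitance at this node plus everything downstream of it."""
    cap = tree[node][2]
    for child, (parent, _, _) in tree.items():
        if parent == node:
            cap += subtree_capacitance(child, tree)
    return cap

def elmore_delay_fs(sink, tree):
    """Sum segment R times downstream C over the path from the root to the sink."""
    delay = 0.0
    node = sink
    while node is not None:
        parent, resistance, _ = tree[node]
        delay += resistance * subtree_capacitance(node, tree)
        node = parent
    return delay   # ohms * fF = femtoseconds

if __name__ == "__main__":
    print("delay to sinkA (fs):", elmore_delay_fs("sinkA", TREE))
    print("delay to sinkB (fs):", elmore_delay_fs("sinkB", TREE))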
Parasitic Extraction
Now we are at the point where we can determine, through a Layout Parasitic Extraction (LPE), all the information necessary to analyze the exact timing. The data will generally be specified in DSPF (Detailed Standard Parasitic Format). Extraction is also a very compute-intensive task. However, a lot depends on whether the layout data is hierarchical. It also depends on whether the extraction can be performed hierarchically, even for layout data that is hierarchical. Hierarchy in layouts was discussed in Chapters 2 and 5. Complexity and computational intensity also increase because, for DSM technologies, extraction in 3D is so important. We saw in Chapter 3 how significant the 3D effects are and how they complicate things. After the parasitic extraction, we can model the interconnects and can determine the timing of the chip to see if we are close to the desired timing parameters. Now we can decide, based on realistic data, which of the following situations we are facing: 1. The timing is close enough for us to believe that, without exchanging cells and simply by adjusting the physical layout dimensions of certain devices and interconnects, we can achieve the desired timing. Such adjustments are generally referred to as IPOs (In Place Optimizations). The big question here is: just how much can we change the timing this way? The answer, assuming we use all the latest available knowledge, is an amazing amount, as we have suggested in Chapter 3. 2. The timing is "reasonably" off. We have at least four choices:
- We leave the routing and do a postlayout insertion of tiny buffers into interconnects, using compaction to "inflate" them so as to satisfy both the layout design rules and the driving strength, as we have suggested in Chapters 2 and 3.
- We resynthesize the design with different library elements and reroute the chip.
- If the timing is still off, which is entirely possible since rerouting may throw off the timing completely, we may then want to use optimization or buffer insertion, both with compaction.
- If there is any way to resynthesize without a reroute, that would be a nice solution, too.
3. If the timing is way off, the chip will probably be rearchitected or, worse, the spec needs to be reviewed. 4. We could look to another foundry and migrate the design to the new process rules. A more careful review of the processing technology may be in order; there may now be a faster one. The above steps are really what is called Final "Synthesis" in the flow in Figure 7.2.
Fabrication and Test
What happens now really depends on what happened within the flow. It depends a lot on the changes that had to be made to meet the timing requirements of the design. The big question now is: have any of the required changes made after the third step in the flow (Functional Verification) affected the functionality in any way, and how can we be sure that they did not? If functional changes could have happened, both functional simulation and ATPG (Automatic Test Pattern Generation) need to be redone. Such steps would involve major investments in engineering effort and time. Also, test synthesis gets into the picture because it affects the timing (the capacitive loading) of the design. It might be reasonable to postpone ATPG until the physical design is complete; test patterns are not needed until the end anyway. Thus, when it comes to circuits designed for DSM technologies, we need to be vigilant about when we really know that a chip is ready for fabrication based on simulation results, and about which test vectors to use to guarantee the required fault coverage. The functional simulation can be done repeatedly in the flow
with more and more definitive results. After all, only the last functional simulation is the basis for a signoff. Generating a good set of test vectors is very time-consuming, but it needs to be done with diligence and as late in the flow as possible. The only problem with late test vector generation is discovering that the present design cannot guarantee the required coverage, or that the test requires too many vectors and, therefore, too much time. Then a redesign with scan insertion may be needed, which will greatly change the timing. It is not easy to make a complex, workable DSM VLSI chip!
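To make the notion of fault coverage concrete, here is a minimal sketch of single stuck-at fault grading for a tiny, hypothetical gate-level netlist: every line is forced to 0 and to 1 in turn, the given test vectors are simulated, and a fault counts as detected if any vector produces an output that differs from the fault-free circuit. ATPG tools work the other way round, generating vectors until the desired coverage is reached, but they measure coverage in essentially this way.

# Toy single stuck-at fault coverage for a hypothetical 2-gate netlist:
#   n1 = a AND b ;  out = n1 OR c
# A fault forces one named line to a constant; coverage is the fraction of
# faults for which at least one test vector changes the observed output.

from itertools import product

LINES = ["a", "b", "c", "n1", "out"]

def simulate(a, b, c, fault=None):
    """Evaluate the netlist; 'fault' is (line_name, stuck_value) or None."""
    def f(name, value):
        return fault[1] if fault and fault[0] == name else value
    a, b, c = f("a", a), f("b", b), f("c", c)
    n1 = f("n1", a & b)
    return f("out", n1 | c)

def fault_coverage(vectors):
    faults = [(line, value) for line in LINES for value in (0, 1)]
    detected = 0
    for flt in faults:
        if any(simulate(*v, fault=flt) != simulate(*v) for v in vectors):
            detected += 1
    return detected / len(faults)

if __name__ == "__main__":
    partial = [(1, 1, 0)]                                  # one vector only
    full = [(1, 1, 0), (0, 1, 0), (1, 0, 0), (0, 0, 1)]    # a richer set
    print("coverage with 1 vector :", fault_coverage(partial))
    print("coverage with 4 vectors:", fault_coverage(full))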
7.3.2 SOFT IP REUSE
Soft IP reuse has been discussed extensively in the RMM [1]. Clearly, savings in time and engineering resources can be obtained with Soft IP reuse. One of the questions for Soft IP reuse is what, exactly, in an existing design available as code can be reused. In principle, there is data from architectural-level synthesis (HDL, flow diagram, ...), logic-level synthesis (state transition diagrams, schematics, etc.) and geometrical/physical-level synthesis (floorplans, routing, layout). Thus, depending on the type of data available for reuse, we have more or less freedom of implementation. For a behavioral description of the design intent, we are still totally independent of the implementation; for a structural description, we are already bound by the interconnections of components such as gates; and, finally, for a physical description, we have given up a lot of freedom. So, whether we talk of Soft IP or Hard IP, the more definitively the data we reuse specifies a "new" design, the less freedom we will have, but the less design time we will spend and the more certainty we will have that the reused part will actually work as expected. So, Soft or Hard IP, there is "no such thing as a free lunch". Some of what we gain in productivity and time-to-market, we lose in degrees of freedom for the design. This is a fundamental rule of life for any reuse. Thus, if we look at Figure 7.2, the early steps, such as specification and coding, clearly do not have to be repeated. We may be able to reuse the simulation and test vector suites for some designs. Just exactly where the total reuse approach breaks down is difficult to generalize. Of course, one should reuse as much as possible. As for the chances of designing correct silicon the first time, some say that the disconnect between the front-end logical database and the back-end physical database is almost complete. The design industry is making great efforts to close this gap. However, it is difficult to know (and even more difficult to write about) the latest developments that might be happening inside the chip design and design tool industries. A better connection between front-end and back-end is a must in order to achieve convergence between prelayout and postlayout timing (timing closure). The communication of changes in the netlist during synthesis to back-end tools, in order to effect incremental placement and routing, is generally called an Engineering Change Order (ECO). The good news is that, for synthesis at any level, the active parts of a circuit can be accurately characterized a priori. The bad news is that, even for an "ancient" process such as 0.35 microns, interconnects account for 70% of the delay in a circuit. New formats have been introduced by leading EDA vendors to carry more physical data earlier in the design process. These formats help both Soft and Hard IP reuse.
Some of the formats that are used, and are becoming the standard in some cases, are:
- LEF: Library Export Format
- LLEF: Logic Library Export Format (an extension of LEF)
- DEF: Design Export Format (a netlist format from Cadence, in ASCII)
- SDF: Standard Delay Format (an OVI standard)
- PLEF: Physical Library Exchange Format (an extension of LEF)
- PDEF: Physical Data Exchange Format
- SPF: Standard Parasitic Format (an OVI standard)
These formats help, and they will be refined with time. However, we have to keep in mind that while PDEF gives synthesis visibility into the physical locations of cells on a die, so that delays can be calculated for each cluster, it carries no information on routing resources, congestion or die size. SDF communicates delay information between floorplanning and synthesis. Estimated or extracted net capacitance and resistance information is communicated to synthesis via SPF, as are synthesis-specific scripts. The floorplanning environment can interact with back-end place and route tools using DEF. Thus, communication formats are gradually being put in place to interconnect the front-end with the back-end. However, until a physical layout is complete, all the values are really guesstimates, to put it kindly. Accumulating experience from previously designed chips will help a lot. The rest can be done with postlayout optimization using compaction.
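The point of these exchange formats is simply to replace guessed net parasitics with measured ones as they become available. The sketch below shows that idea generically in Python: a per-net delay estimate is recomputed after estimated capacitance and resistance values are overwritten by extracted ones. The dictionaries and the delay expression are hypothetical; in a real flow the same information would travel in SPF/DSPF and SDF files rather than in Python structures.

# Generic back-annotation sketch: replace estimated net parasitics with
# extracted ones and recompute a simple per-net RC delay. The data and the
# delay expression are placeholders for what SPF/DSPF and SDF actually carry.

def net_delay_ns(drive_res_ohm, net):
    """Crude lumped-RC delay: (Rdrive + Rnet) * Cnet, in nanoseconds."""
    return (drive_res_ohm + net["res_ohm"]) * net["cap_pf"] * 1e-3

def back_annotate(estimated, extracted):
    """Overwrite estimated parasitics wherever extraction supplies real data."""
    merged = {name: dict(values) for name, values in estimated.items()}
    for name, values in extracted.items():
        merged.setdefault(name, {}).update(values)
    return merged

if __name__ == "__main__":
    estimated = {"net_clk": {"cap_pf": 0.08, "res_ohm": 60.0},
                 "net_d0":  {"cap_pf": 0.03, "res_ohm": 25.0}}
    extracted = {"net_clk": {"cap_pf": 0.19, "res_ohm": 140.0}}   # post-layout reality
    nets = back_annotate(estimated, extracted)
    for name, net in nets.items():
        print(name, "delay (ns): %.3f" % net_delay_ns(500.0, net))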
7.3.3 HARD IP REUSE
We have seen the short and simple flow for Hard IP reuse. It only suggests some of the applications of the Hard IP compaction methodology. And even the three areas of application...
1. starting with a design that exists as the layout of a chip, or part of a chip, that has already been fabricated and is in use, generally referred to as Hard IP reuse or retargeting,
2. starting with a design that has been laid out after having been designed by whatever means (fully customized, synthesis or any other approach), to be optimized with respect to performance, power, density and signal integrity, and
3. the stage in a design where we are ready to do the layout, or to modify an existing layout "by hand", using compaction with its layout design rule database,
... may represent only part of where postlayout modifications will become necessary as minimum chip layout dimensions continue to shrink. Of course, this is speculative, but it is certainly a possibility. For comparing the various flows, we will focus primarily on Hard IP retargeting.
7.4 SUMMARIZING REMARKS
As we have seen, it is difficult to quantify exactly the advantages of one method over the others. There are too many variables, and now we are going to add some more. However, we do have some ideas about the competing factors for any particular project, for a given company organization and for the available resources in terms of tools, engineering staff, infrastructure, etc. Thus far, we have not talked about the cost of the tools needed to do the job: software and hardware. We have only seen that, to verify the correct behavior of a chip alone, we may need simulators of
various levels and flavors, such as cycle-based, event-driven and worst case. We may need to use formal verification. We may need to use emulation. We certainly need to use timing analysis tools. We have not looked at the cost of all the synthesis-related tools, the routers, the floorplanners. Anyway, there is no point in developing a detailed catalog of what is needed here. All these tools and more are needed for Soft IP reuse. For Hard IP reuse, we need timing analysis and the migration software, and quite some computer power to do the migrations. For both Soft IP and Hard IP reuse, we need layout verification tools.
Then there is the question of the skills required. For Soft IP reuse, engineers primarily need to have computer science backgrounds with a proclivity for architectural creativity. This is what everybody learns in school today. For Hard IP reuse, the skills are more related to semiconductor physics and processing, with a penchant for details. This is what used to be popular in university curricula. The question is how to organize a company to do both areas well. At this point, Hard IP migration is not easy enough that it can be done "once in a while" by whoever has nothing else to do. It takes dedication. Finally, just how much useful, hard-to-sacrifice Hard IP is there floating around in a company? Probably a lot in some companies. Can management convince engineers to reuse this Hard IP as opposed to doing what they like most: designing a new chip, no matter how long it takes or what it costs? To conclude, very objective decisions are needed, many of them on the managerial level. On the other hand, Hard IP retargeting still has to get easier to use, and it needs to be integrated into overall Soft IP/Hard IP reuse flows in such a way that, in an S-o-C approach for instance, the mixing and matching becomes natural. In many projects, there are major blocks for which there is simply no reason to redesign as opposed to reusing them with Hard and Soft IP. During 1999 and 2000, it has become very evident that advanced semiconductor manufacturers and design houses are increasingly introducing IP reuse flows, both Soft and Hard IP. The complexity of the chips that are being designed and will be designed, combined with the resources required to do so and the critical time-to-market windows being faced, simply does not offer any alternative but to seriously consider at least a partial IP reuse strategy. In addition, we are seeing fast progress in new process technologies, which will further stimulate strong growth in the Hard IP flow. There are some significant developments in the area of resynthesis as well, with newly developed design flows that create "correct timing" layout directly from the netlist level, but these developments are too immature to be considered seriously at this stage. Also, these developments are mainly geared towards cell-based designs and consequently do nothing for large, fully custom designs. IP reuse is therefore bound to increase, and there are substantive estimates that the total IP reuse market worldwide will reach over 2 billion dollars in 2001, 75% of which will be Hard IP reuse. This is a significant development which will finally draw the attention of the chip design industry to the back-end, or layout. At the end of the day, all that matters is whether a company is able to produce and deliver the right products at the right time. Only then will it do well.
OVERVIEW OF THE CHAPTERS

CHAPTER 1 IP REUSE IN ADDRESSING THE NEED FOR INCREASED VLSI DESIGN PRODUCTIVITY
1 HARD IP AND SOFT IP REUSE
1.1 Some General Observations About VLSI Chips
1.1.1 Some Major Challenges in VLSI Chip Design
1.1.2 Design Issues for Pre-DSM Technologies
1.1.3 Design Issues for DSM Technologies
1.1.4 Going From Pre-DSM to DSM Requires Changes
1.1.5 The Concept of Hard IP Reuse
1.1.6 With Hard IP Migration, Only Some Circuit Properties Change
1.2 Economics Considerations for Bigger, Faster, More Complex Chips?
1.2.1 Economics by Saving on Simulation and Testing Through IP Reuse
1.2.2 IP Reuse to Keep Pace with Processing Technology Advances
1.2.3 The Challenge of Filling Fabs for Profitability
1.2.4 Planning in the Face of Uncertainties
1.3 A Preview of Areas of Hard IP Engineering
1.3.1 Hard IP Retargeting and Designing for DSM Technologies and Yield
1.3.2 IP Reuse and the Front-End/Back-End Connection
1.3.3 IP Reuse for a System-on-Chip (S-o-C)
1.3.4 An Ultimate Mix and Match S-o-C Methodology
1.3.5 Productive Hard IP Creation
1.4 Barriers to and Limitations of Hard IP Reuse
1.4.1 Problems With Attitude
1.4.2 Problems With Infrastructure
1.4.3 Fundamental Technical Limitations
1.4.4 Summary of Conclusions

CHAPTER 2 HARD IP MIGRATION
2 HARD IP MIGRATION WITH A PROVEN SYSTEM AND METHODOLOGY
2.1 Hard IP Reuse, Linear Shrink or Compaction?
2.1.1 Keeping What's Good / Improving the Rest
2.1.2 Layout Flexibility and Control with Polygon Based Compaction
2.1.3 Linear Shrink Versus Compaction
2.2 Retargeting to a New Process with Compaction
2.2.1 Retargeting for Unchanged Functionality
2.2.2 Retargeting to Conform to Target Process Design Rules
2.2.3 Retargeting for Reliability
2.3 Correct Electrical and Timing Behavior of Migrated Chips
2.3.1 Retargeting for Correct Interconnect Timing
2.3.2 Retargeting for Correct Transistor Timing
2.3.3 Retargeting for Correct Interconnect and Transistor Timing
2.4 Inputs, Feedback and Leverage on the Layout
2.4.1 Setup for Migration Process
2.4.2 A Bird's Eye View of a Migration
2.4.3 Output Data (Feedback) From the Compactor
2.4.4 Statistical Feedback on the Layout
2.4.5 Keeping Polygon Edges on a Grid
2.4.6 Compaction-Induced Jogging
2.4.7 Contact Optimization After Migration
2.4.8 Minimization of Parasitic Capacitance or Resistance
2.4.9 Some Other Challenges
2.4.10 Summary on Compaction
2.5 Evolution and Applications of Hard IP Migration
2.5.1 Standard Cell Library Migration
2.5.2 Migration of Regular Structures
2.5.3 Hierarchy Maintenance in Hard IP Migration
2.5.4 Flat Hard IP Migration
2.5.5 Migration of Entire Chips

CHAPTER 3 HARD IP PERFORMANCE AND YIELD OPTIMIZATION
3 HARD IP OPTIMIZATION
3.1 What to Optimize in a Layout and How
3.2 Leverage of Layout on Performance
3.2.1 The Increasing Influence of Layout on Performance
3.2.2 Optimization Requires a Detailed Layout Analysis
3.2.3 Front-end Leverage
3.2.4 Back-end Leverage
3.2.5 Features Compaction Changes, Features It Does Not
3.2.6 Synergy Between Front-end and Back-end?
3.3 The Modeling of Interconnects
3.3.1 Parasitic Components of the Interconnects
3.3.2 Determining Values for the Parasitics
3.3.3 Capacitances Affecting Interconnects
3.4 Time Delay Analysis in Digital VLSI Circuits
3.4.1 Modeling for Timing Analysis
3.5 Performance Optimization with Layout Parameters
3.5.1 Front-end Optimization
3.5.2 Back-end Optimization
3.6 Capacitive Effects Between Interconnects
3.6.1 Cross-Coupling Between Interconnects
3.6.2 Minimizing Cross-Coupling
3.7 Optimizing the Active Part
3.7.1 Other Optimization Issues
3.8 Conclusions to Performance Optimization
3.9 Layout Geometry Tradeoffs for Better Yield
3.9.1 Yield Enhancement Through Preferred Process Rules, Using Compaction
3.9.2 Yield Enhancement Through Minimizing Critical Areas, Using Compaction

CHAPTER 4 PRODUCTIVE HARD IP CREATION
4 IC LAYOUT, HARD IP CREATION
4.1 Hard IP Creation Using Compaction
4.2 IC Layout Benefits From Compaction
4.3 Where to go From Here
4.4 What Compaction in IC Layout Can and Can Not Do

CHAPTER 5 SPECIAL CHALLENGES AND OPPORTUNITIES WITH HARD IP REUSE
5 ANALOG, HIERARCHY, S-O-Cs, GUIDELINES
5.1 Retargeting Analog and Mixed Signal Designs
5.1.1 Layout Considerations for Analog
5.1.2 Can Analog Designs be Successfully Migrated?
5.1.3 A Practical View of Analog Migration
5.2 Hierarchy in Hard IP Migration
5.2.1 Hierarchy Maintenance in Layouts
5.2.2 Challenges in Maintaining Layout Hierarchy
5.2.3 Pros and Cons of Maintaining Hierarchy in Migration
5.3 The S-o-C Mixing and Matching in Retargeting
5.4 Designing VLSI Chips for Ease of Reuse
5.4.1 Some Guidelines to Facilitate Hard IP Migration

CHAPTER 6 A PARTIAL OVERVIEW OF AVAILABLE TOOLS
6 SOLUTIONS NEEDED FOR RETARGETING AND LAYOUT OPTIMIZATION
6.1 The Postlayout Optimization Process
6.1.1 The Theoretically Ideal Optimization Approach
6.2 IP Reuse Through Retargeting
6.2.1 A Traditional, Robust, Retargeting Product
6.2.2 State-of-the-art, Fully Hierarchical Retargeting
6.3 Tools for High Productivity IP Creation on the Layout Level
6.4 Optimization of Physical Layout for Performance & Better Yield
6.4.1 Optimizing Transistor Sizes
6.4.2 Interconnect Layout Adjustments
6.4.3 Yield Enhancements With XTREME

CHAPTER 7 PRODUCTIVITY AND RISKS WITH HARD IP OR SOFT IP REUSE
7 SOME GENERAL OBSERVATIONS
7.1 Some Reasons for Looking at Both Hard IP and Soft IP Reuse
7.2 Design Flows: A Visual Perception of Design Efforts
7.2.1 Considerations When Looking at Design Flows
7.2.2 A Pre-DSM Design Flow
7.2.3 A DSM Design Flow
7.2.4 Hard IP Reuse Flow
7.3 Examining Flows: Design from Scratch, Soft IP and Hard IP Reuse
7.3.1 Design from Scratch
7.3.2 Soft IP Reuse
7.3.3 Hard IP Reuse
7.4 Summarizing Remarks
INDEX
45 degree, absolute timing, abstract cells, ACC (Abstract Cell Compaction), AMPS, ATPG (Automatic Test Pattern Generation), boundary scan, capacitive coupling, capacitive effects, capacitive loading, clock tree, compaction, Companion, contact optimization, critical area, critical path, cross-coupling, crossover, cross-talk, current density, curve-fitting, DEF, defect size, design cycle, design from scratch, design rules, DfM (Design for Manufacturing), distributed, doglegs, don't touch, DRC (Design Rule Correct), DREAM, DSM (Deep SubMicron), dynamic, electromigration, Elmore, EnCORe, equipotential, flat, fringing, front-end design, Gauss, GDS2, gridding, Hard IP Engineering, Hard IP reuse, hierarchical migration, hierarchy, hierarchy maintenance, Hurricane, inductive effects, interconnect timing, IP Creation, IP optimization, IPO (In Place Optimization), jogging, jogs, Laplace, LEF, libraries, linear shrink, lithography, LLEF, localized timing, logic minimization, lossy, lumped, market window, migration, mixed signal, nonlinear shrink, overconstraint, overlap, partition, PathMill, PDEF, plasma etching, PLEF, Poisson, polygon-based, postlayout optimization, PowerMill, process file, random, RC load, recontacting, regular structure, regularity, relative timing, resizing, retargeting, ROI (Return On Investment), scaling, scan, SDF, selectively, setup phase, shielding, shrink factor, SiClone, signal integrity, signal skew, simultaneous optimization, S-o-C (System on Chip), Soft IP reuse, source layout, SPF, Spice, standard cells, state-dependent, statistical wire-load, substrate, switch-level, TA (Timing Analyzer), technology file, temperature gradients, timing closure, timing-driven, topology, unchanged functionality, university level, vector-dependent, voltage gradients, wide metals, XTREME, yield
BIBLIOGRAPHY
[1] M. Keating, P. Bricaud, Reuse Methodology Manual, Kluwer, 1998.
[2] P. Kurup et al., It's the Methodology, Stupid!, Bytek Design Inc., www.bytek.com.
[3] J. Cong, many pertinent papers: http://cadlabs.cs.ucla.edu-cong.
[4] A. E. Ruehli, P. A. Brennan, "Accurate metallization capacitances for integrated circuits and packages," IEEE Journal of Solid-State Circuits, vol. SC-8, pp. 289-290, Aug. 1973.
[5] J. Rubinstein, P. Penfield, M. A. Horowitz, "Signal delay in RC tree networks," IEEE Transactions on Computer-Aided Design, vol. CAD-2, no. 3, pp. 202-210, July 1983.
[6] P. Penfield, J. Rubinstein, "Signal delay in RC tree networks," IEEE 18th Design Automation Conference, pp. 613-617, 1981.
[7] T. M. Lin, C. A. Mead, "Signal Delay in General RC Networks," IEEE Transactions on CAD, vol. CAD-3, no. 4, pp. 331-349, Oct. 1984.
[8] H. B. Bakoglu, Circuits, Interconnections and Packaging for VLSI, Addison-Wesley, 1990.
[9] T. Sakurai, K. Tamaru, "Simple formulas for two- and three-dimensional capacitances," IEEE Transactions on Electron Devices, vol. ED-30, pp. 183-185, Feb. 1983.
[10] A. E. Ruehli, P. A. Brennan, "Capacitance models for integrated circuit metallization wires," IEEE Journal of Solid-State Circuits, vol. SC-10, Dec. 1975.
[11] IOTA Technology Inc., Low Power, Low Noise, High Performance EDA Solutions, www.iota.com.
[12] J. D. Krauss, Electromagnetics, McGraw Hill, 1991.
[13] C. R. Wylie, L. C. Barrett, Advanced Engineering Mathematics, McGraw Hill, 1995.
[14] J. H. Chern et al., "Multilevel Metal Capacitance Models For CAD Synthesis Systems," IEEE Electron Device Letters, vol. 13, no. 1, Jan. 1992.
[15] Y. Zorian, "Yield Improvements and Repair Trade-Off For Large Embedded Memories," DATE 2000.
[16] C. Strolenberg, "Stay Away from Minimum Design Rules Values," DATE 2000.
[17] K. Veelenturf, "The Road to Better Reliability and Yield Embedded DfM Tools," DATE 2000.
[18] Y. Bourai, C.-J. R. Shi, "Layout Compaction for Yield Optimization via Critical Area Minimization," DATE 2000.
[19] G. A. Allen et al., "A yield improvement technique for IC layout using local design rules," IEEE Transactions on Computer-Aided Design, vol. 11, pp. 1355-1360, Nov. 1992.
ABOUT THE AUTHOR
Peter Rohr holds a B.S. degree in Telecommunications and M.S. and Ph.D. degrees in Electrical Engineering, with an emphasis on semiconductor physics and device modeling. Dr. Rohr has over 30 years of experience in hi-tech industries in electronic design automation and in fields such as high-speed digital and GPS circuit design, applications of DSP, microprocessors, analog circuits, silicon compilation and VLSI chip DFT. As a Member of the Graduate Faculty of the Department of Electrical Engineering at LSU, he has lectured on analog circuit design and semiconductor physics. As a consultant he has conducted workshops on VLSI DFT and deep sub-micron methodologies. Most recently he was Vice President of Business Development at Sagantec, in the field of Hard IP reuse of VLSI chips.
Dr. Rohr is now President of MATTERHORN CONSULTING specializing in International Business Development. He is fluent in several foreign languages.