By Jim McCanny, Altos Design Automation
To address this problem, more corners were added to the mix, such as fast process with high temperature, or slow process with low temperature. This led to a major increase in ECO iterations, in which designers got to play "whack-a-mole": fixing a problem at one corner only to have the fix cause a new problem at a different corner. The impact of voltage drop and signal integrity on delay was treated in an ad hoc fashion: fix the biggest glitches that occur at the corners and account for the remainder as components of OCV (on-chip variation). The very foundation of corner analysis was showing severe cracks. Chips were mostly working, but a few leading-edge designs had failures or very low yields due to unexplained electrical or process side-effects.
At 65 nm, semiconductor manufacturing process tolerances and their margins have shrunk to the point that the corner-characterization and static timing analysis flow is no longer able to accurately predict silicon performance. Process variations, both random and systematic, play a much larger role in electrical performance. If corner methods are used, the result will be multiple design spins, cost overruns and production delays.
While more intelligent modeling of variation, such as location-based OCV, will help, the better approach to predicting the impact of random and systematic variation is to create a design flow that accounts for statistical variation. This new flow must include statistical device modeling, statistical cell modeling, and statistical analysis and optimization. It is a well-known fact that the semiconductor manufacturing process is statistically controlled, with each process step held within specified limits; the technique is known as Statistical Process Control (SPC). Stay between the lines with statistical control, and you get consistency and high yields. For accurate and predictable timing closure, the "lines" need to change from corner descriptions to statistical distributions.
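The "stay between the lines" idea can be made concrete with a short sketch. In classical SPC, the lines are control limits, conventionally the historical mean plus or minus three standard deviations; a measurement outside them flags an out-of-control process step. The parameter and the numbers below are purely illustrative, not data from any real fab.

```python
# Minimal SPC sketch: flag process measurements that fall outside
# mean +/- 3 sigma control limits. All numbers are illustrative.
from statistics import mean, stdev

# Hypothetical historical gate-oxide thickness measurements (nm)
history = [2.01, 1.98, 2.03, 2.00, 1.99, 2.02, 2.01, 1.97, 2.00, 2.02]

mu = mean(history)
sigma = stdev(history)
lcl, ucl = mu - 3 * sigma, mu + 3 * sigma  # lower/upper control limits

def in_control(measurement: float) -> bool:
    """Stay between the lines: True if the measurement is inside the limits."""
    return lcl <= measurement <= ucl

print(f"control limits: [{lcl:.3f}, {ucl:.3f}] nm")
print(in_control(2.02))   # inside the limits
print(in_control(2.25))   # a large excursion outside the limits
```

The article's point is that the same discipline should apply to timing: replace fixed corner values with distributions and verify the design against those distributions rather than against the extremes.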
To adopt a statistical design flow, a couple of key components need to be replaced by their statistical counterparts, namely the cell models and the timing analyzer. Together, these two pieces can replace or augment traditional corner-based signoff. The statistical design flow should look and feel like a corner-based flow, except that it is more productive and creates a better design. It should also provide more choices for performance, leakage, power and yield tradeoffs.
A key challenge of the new flow is the creation of the statistical cell models, in which each cell is characterized for its sensitivity to both systematic and random variation. This could potentially increase characterization run-times by two or three orders of magnitude, depending on the number of parameters that must be modeled and the characterization techniques used. Thankfully, there are newer, smarter characterization methods that keep statistical characterization run-times close to those of regular cell characterization. Consequently, the characterization cost is manageable, especially if the same characterization process is used to create both statistical and non-statistical models.
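One common form such a statistical cell model can take (a sketch of the standard first-order approach, not necessarily the specific models discussed here) is a linear sensitivity model: each cell's delay is a nominal value plus a sensitivity to each shared global process parameter, plus an independent random term. Along a path, the correlated sensitivities add linearly while the independent random sigmas add in quadrature. All numbers below are hypothetical.

```python
# First-order (linear) statistical delay model for cells on a path.
# Each cell: (nominal ps, sensitivities to shared global params in
# ps per sigma, independent random sigma in ps). Hypothetical values.
import math

def path_delay(cells):
    """Sum first-order delay models along a path: correlated
    sensitivities add linearly, random variances add."""
    nominal = sum(c[0] for c in cells)
    n_params = len(cells[0][1])
    sens = [sum(c[1][i] for c in cells) for i in range(n_params)]
    rand_sigma = math.sqrt(sum(c[2] ** 2 for c in cells))
    return nominal, sens, rand_sigma

def total_sigma(sens, param_sigmas, rand_sigma):
    """Fold parameter sensitivities and the random term into one sigma."""
    var = sum((s * p) ** 2 for s, p in zip(sens, param_sigmas))
    return math.sqrt(var + rand_sigma ** 2)

# Two cells, two global parameters (e.g. threshold voltage, channel
# length) expressed in normalized units so each parameter sigma is 1.0.
param_sigmas = (1.0, 1.0)
inv  = (50.0, (4.0, 3.6), 1.5)
nand = (70.0, (5.0, 2.7), 2.0)

nominal, sens, rand_sigma = path_delay([inv, nand])
sigma = total_sigma(sens, param_sigmas, rand_sigma)
print(f"path: {nominal:.1f} ps nominal, {sigma:.2f} ps sigma")
print(f"3-sigma delay: {nominal + 3 * sigma:.1f} ps")
```

Characterization cost comes from fitting those sensitivity coefficients per cell, per arc, which is why smarter fitting methods matter so much for run-time.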
Adopting a statistical static timing analysis (SSTA) sign-off tool along with a statistical cell library is a good start. To truly take advantage of statistical models, a complete statistical design flow requires the comprehensive use of statistical logic descriptions from RTL to tapeout. During statistical RTL (SRTL) design, critical timing limits are defined that set the tolerances, or constraints, for downstream simulation, implementation and verification. SSTA uses the SRTL timing constraints, along with a statistically characterized library, to reach statistical timing closure and to set the implementation limits for place and route.
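The pessimism gap that motivates SSTA can be seen in a tiny Monte Carlo experiment (purely illustrative numbers). A corner bound stacks every stage at its own 3-sigma worst simultaneously; statistically, independent variations partially cancel, so the true 3-sigma path delay is noticeably tighter.

```python
# Corner pessimism vs. statistical 3-sigma on a toy path.
# Illustrative numbers only; not from any real library or design.
import random
random.seed(0)

# Path of 10 stages; each stage delay is 100 ps nominal with an
# independent random component of sigma = 10 ps.
N_STAGES, NOMINAL, SIGMA = 10, 100.0, 10.0

# Corner-style bound: every stage at its own 3-sigma worst at once.
corner = N_STAGES * (NOMINAL + 3 * SIGMA)  # 1300 ps

# Statistical view: independent sigmas add in quadrature, so the
# path sigma is sqrt(10) * 10 ps, or about 31.6 ps. Estimate the
# ~3-sigma (99.87th percentile) delay by Monte Carlo.
samples = sorted(
    sum(random.gauss(NOMINAL, SIGMA) for _ in range(N_STAGES))
    for _ in range(20000))
p9987 = samples[int(0.9987 * len(samples))]

print(f"corner bound:        {corner:.0f} ps")
print(f"statistical 3-sigma: {p9987:.0f} ps")
```

With purely independent variation the statistical bound lands near 1095 ps versus the 1300 ps corner bound; real designs mix correlated and random variation, which is exactly what the sensitivity-based models capture.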
During the critical verification step of the statistical timing closure flow, statistical timing limits are validated between the implementation RTL and the SSTA results. This top-down methodology propagates timing tolerances down the design flow to the final verification phase, ensuring that the timing tolerances, or specifications, are met. During design-for-manufacturing (DFM) validation, the statistical geometric tolerances are also checked against optical proximity correction (OPC) and chemical mechanical polishing (CMP) results derived from the physical implementation.
The statistical timing closure flow can be characterized as "staying between the lines." By describing and characterizing all electrical and logical descriptions with statistical and geometric parameters, the design flow constrains the implementation to stay within the lines, that is, within the product specifications, thus ensuring a robust, high-performance, high-yielding semiconductor product.
Of course there are costs involved in deploying a statistics-based flow, but these costs are minuscule compared with the gains in design productivity, chip yield and market competitiveness that a statistical flow can deliver. The current corner-based flow has major holes, and over-design is no longer an option, nor is it sufficient to ensure a working, competitive chip. Statistical design inherently increases the probability of success. Nobody likes statistics, but when near-chaos reigns, statistics are our only chance of making accurate predictions. Changing to a statistical design flow, while inconvenient, will bring you closer to the truth of actual silicon performance.
Jim McCanny is CEO of Altos Design Automation Inc.