Designers and foundries eye process-migration challenges

By Bill Murray


SAN JOSE, Calif. — Moving from one process node to another creates design and manufacturing challenges that require improved communication between designers and foundries, according to presenters at the recent International Conference on Computer-Aided Design (ICCAD) 2007. Designers and foundry managers highlighted intellectual property (IP) reuse and foundry portability, analog/mixed-signal (AMS) design, process variability, and design for manufacturing (DFM) as primary challenges.

According to Betina Hold, principal design engineer at ARM, IP reuse is “taking a given architecture as a starting point for design in multiple foundries, not taking the same layout and porting directly to non-aligned foundries.” At the 130 nm node, she said, both design and layout were portable to a dozen fabs, using relatively straightforward optimizations. At 90 nm and 65 nm, design and layout portability is still possible across perhaps a half-dozen fabs, albeit with more complex optimizations. At 45 nm, the design will be partially portable, the layout will be hardly portable, and porting will require foundry-specific optimizations. At 32 nm, there will be fewer fabs; some design portability will remain, along with portability of the design concept and layout topology.

“We used to have one main design effort and multiple foundry ports,” Hold continued. “Now we target individual foundries – this is cost-effective given the smaller number of foundries.” This approach offers flexibility of design transfer between foundries, while “getting the most out of the silicon” through individual foundry optimizations that do not sacrifice performance, power and area, she said.

Walter Ng, responsible for customer and partner alliances at Chartered Semiconductor, had a different approach to foundry portability – a common technology platform developed jointly by multiple foundries. “Chartered, IBM and Samsung, together with other partners including Freescale, Infineon and STMicroelectronics, have jointly developed one process technology. It has the same design rules, same Spice models, same interconnect models, same depths.” The common platform members have collaborated with EDA vendors to make available common design kits, DRC/LVS extraction, common libraries, and common power management techniques. “Customers can decide where they want to manufacture,” he noted.

“This is real portability,” he said. “If you’re shipping tons of product, you don’t want to rely on one manufacturer. You need to mitigate the supply risk – it could mean the difference between a customer succeeding or going out of business.”

The stability – or lack thereof – of new foundry processes is another source of uncertainty. In answer to a question from the audience, ARM’s Hold said that she would like to see far fewer Spice model revisions. “While developing IP, you’ll probably have thirty to thirty-five Spice revisions. You can have revisions every week,” she said.

Chartered’s Ng pointed out that “there are fewer early-adopters using the most advanced processes than before, but they’re starting much earlier. And there is also a higher volume potential at this end of the adoption curve than before.” Consequently, new processes are being adopted and stressed during their development “when they are not fully stable and not fully mature.”

An audience member questioned whether there will be a tendency for foundries to “pull the plug on R&D” because there might be insufficient return on investment with fewer customers adopting the latest process technologies.

Ng responded “TI [Texas Instruments] is going out at 32 nm and Sony just sold their fabs to Toshiba. They all understand that bringing a new process technology to market is extremely costly. In our [common platform] model, we have six companies dividing that cost. We are working very heavily with 32 nm and we expect to be first to market with 32 nm.”

Analog/mixed-signal design

Kevin Jones, engineering director at Rambus, delivered a somewhat tongue-in-cheek description of the analog designers’ plight. The analog design flow, he said, is essentially a manual one of “draw it, Spice it, and repeat.” Jones compared the analog design flow to the digital design flow, where digital designers start “at a fairly abstract level – RTL – then have this huge range of tools underneath that make it work down to GDSII.”

The foundry provides the information that the tools require, Jones said, so the “digital designer is pretty much isolated from a lot of what goes on in the actual process node that they’ve targeted. Of course, that’s a very good thing, because without that level of abstraction it wouldn’t be possible to do the designs that we do today.”

In contrast, he said, “If you look at the analog tool flows that most EDA companies offer, underneath there is pretty much only one tool from each provider.” Analog design begins with schematic entry followed by polygon-pushing. Everything is manual, which is why analog designs don’t scale as easily across multiple processes as digital designs.

The relative effort, time, people, and dollars required in a typical AMS design project – a PHY design – are shown in Figure 1. The ratios are multiples of some basic unit “however you want to quantify it,” said Jones. “Pretty much all of our designs are right on the edge. We’re almost never on stable processes, and we’re never designing anything slow – if the standard speed is 3 GHz, we’re probably trying to do 20 GHz.”

In the first generation design, circuit design requires 80x basic units of effort and layout requires 60x – together about 60% of total effort. In porting the first generation design to another process, circuit design is “almost as much effort as in the original design,” said Jones. “The important thing is that layout effort is exactly the same. We’ve found that there could be one new design rule in a process that means the entire layout must be redone – and it must be done by hand.” The result is that the porting effort is about two-thirds of the original design effort.
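Jones’s ratios hang together, as a back-of-envelope check shows. Only the 80x/60x figures, the roughly 60% share, and the roughly two-thirds porting cost come from the talk; the port’s circuit effort and the “other tasks” split below are hypothetical fill-ins.

```python
# Back-of-envelope check of the Rambus effort ratios quoted above.
# Hypothetical fill-ins are marked; the rest is from the talk.
gen1_circuit, gen1_layout = 80, 60
gen1_total = (gen1_circuit + gen1_layout) / 0.60  # circuit + layout ~ 60% of total

port_circuit = 70           # "almost as much effort as in the original design"
port_layout = gen1_layout   # "layout effort is exactly the same"
port_other = 26             # hypothetical remaining verification/integration work
port_total = port_circuit + port_layout + port_other

print(f"original ~{gen1_total:.0f}x, port ~{port_total}x "
      f"({port_total / gen1_total:.0%} of original)")
```

With these fill-ins the port lands at just over two-thirds of the original effort, consistent with the figure Jones quoted.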


Figure 1: PHY port requires almost as much design and layout effort as the original design (Source: Rambus)

Jones then outlined what AMS designers really want from EDA, including:

  • A repository of specified and validated process-independent circuit topologies with enough information to make the topologies usable and reusable
  • Fast circuit optimizers for basic sizing
  • Automated layout
  • Design feedback and automatic iteration via tool intercommunication and interoperability
  • Validation of digital, analog and AMS based on device characteristic analysis, not simulation
  • Effective simulation at all levels from Spice to system

And from the foundries, Jones wants process-specific information – complete and tool-usable – available in EDA tools for use up-front in the design phase.

Jones pointed out that the R&D invested in AMS tools and methodologies, versus that invested in digital, reflects the relative size of the two markets. But the demand for AMS designs is increasing.

He observed that the major AMS design issues to be cracked are reuse and validation, along with the mindset that every design must start from scratch. This doesn’t scale in a world in which AMS is becoming ever more prolific and important, he said.

Process variability

Both ARM’s Hold and Dr. Azeez Bhavnagarwala, research staff member in the Low Power Circuits and Technology group at IBM’s TJ Watson Research Center, emphasized that process technology characteristics no longer fit the Gaussian – or normal – distribution curve. According to Bhavnagarwala, “the statistical models that we use to predict yields and sigmas can be off by orders of magnitude if we don’t have models that replicate what we actually observe in silicon. You cannot predict MOS fluctuation statistics without measured data.”

Consequently, said Bhavnagarwala, “There is a significant challenge using conventional statistical EDA tools to predict SRAM cell variability and yield. We need accurate SRAM variability characterization as necessary inputs to EDA tools to enable circuit design and development.”
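Bhavnagarwala’s “off by orders of magnitude” point can be illustrated with a small Monte Carlo sketch. The contaminated-Gaussian model and all numbers below are illustrative choices, not IBM’s measured data: fitting a normal distribution to the sample moments of a heavy-tailed parameter grossly underestimates the tail fail rate.

```python
import math
import random

random.seed(42)  # reproducible sketch

# Contaminated-Gaussian model of a device parameter (e.g. Vt mismatch):
# 95% of cells follow a tight normal, 5% a much wider one.
# Illustrative numbers only, not measured silicon data.
N = 200_000
samples = [random.gauss(0.0, 4.0) if random.random() < 0.05
           else random.gauss(0.0, 1.0)
           for _ in range(N)]

mean = sum(samples) / N
std = math.sqrt(sum((x - mean) ** 2 for x in samples) / N)

THRESH = 5.0  # hypothetical "cell fails" threshold, in the same units

# Fail rate a purely Gaussian model fitted to the sample moments predicts:
z = (THRESH - mean) / std
gaussian_tail = 0.5 * math.erfc(z / math.sqrt(2.0))

# Fail rate the (simulated) silicon actually exhibits:
measured_tail = sum(x > THRESH for x in samples) / N

print(f"Gaussian-predicted fail rate: {gaussian_tail:.2e}")
print(f"Observed fail rate:           {measured_tail:.2e}")
print(f"Underestimated by a factor of ~{measured_tail / gaussian_tail:.0f}")
```

Because yield targets live many sigmas out in the tail, even a modest fraction of outlier cells makes the Gaussian extrapolation wrong by a large factor – which is why measured data must feed the statistical models.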

Bhavnagarwala pointed out that SRAM accounts for more than 50% of a microprocessor’s transistors, so it significantly affects overall chip size and cost. SRAM also accounts for a large proportion of the processor’s leakage current, and affects chip performance, operating voltage and voltage scaling. And because the SRAM uses the smallest-geometry transistors, it is the most susceptible to process variations. So, SRAM design must be co-developed and co-optimized with the process technology.

The IBM team had specific SRAM design goals to reduce risk and cost. “We wanted no changes to the cell technology, footprint and size, and no changes to the existing circuit architecture (word path and bit path). We wanted to maintain performance, reduce leakage, eliminate dual-supply requirements, and reduce pressure on the bit cell technology development. We were prepared to accept some area overhead,” Bhavnagarwala said.

First, the team measured actual SRAM cell voltage and current fluctuations, and their dependency on cell terminal bias. From this, they developed circuit biasing schemes to improve fluctuation immunity; optimized cell tailor implants to balance read and write sigma; and built and demonstrated dynamic biasing within performance, leakage reduction, area and complexity constraints. The team observed the aforementioned departure of many measured parameters from the Gaussian model, including a significant dependence of the parametric asymmetry on terminal bias, and bimodal distributions, probably resulting from the simultaneous contribution of multiple asymmetrical stochastic distributions (see Figure 2).

Figure 2: Parameter distribution departs from the Gaussian model (Source: IBM)

The team implemented the following circuit techniques to achieve the stated objectives. These techniques were implemented successfully in nine experimental arrays and one production array.

  • Write assist with a dynamic virtual GND (International Electron Devices Meeting 2005)
  • Reduced bit-line pre-charge voltage (International Electron Devices Meeting 2005)
  • Strong cell PFET
  • Power line bootstrap (VLSI Circuits 2004)

ARM’s Hold observed that “global” variations are rapidly becoming “local” variations, as demonstrated by Figures 3 and 4. The figures show the saturation current as a function of leakage at the 90 nm and 45 nm nodes, respectively. In the latter, there is a much greater overlap between the global (blue) results and the local (red) results.

Figure 3: Variation of saturation current with leakage at 90 nm. (Source: ARM)

Figure 4: Variation of saturation current with leakage at 45 nm. (Source: ARM)

Hold pointed out that local variation can cause higher-than-predicted leakage and lower-than-predicted read currents in memories. So, there was a need to (for instance) restrict the number of rows to allow for sufficient bit-line differential. At 90 nm, row count was fixed simply by optimizing the architecture. At 65 nm and below, leakage variation between processes requires architectures to be flexible. At 45 nm and below, rows, columns, datapath and architecture must be both process- and performance-specific.
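The row-count restriction Hold describes follows from a first-order bit-line model: the differential the sense amplifier sees is roughly dV = I_read × t_sense / C_bl, and bit-line capacitance grows with the number of rows. A minimal sketch, with all constants hypothetical:

```python
# First-order bit-line model behind the row-count restriction.
# All constants are hypothetical; only the dV = I_read * t / C_bl
# relation and the "fewer rows -> more differential" point matter.
C_PER_CELL = 0.6e-15  # bit-line capacitance added per row (F)
T_SENSE = 200e-12     # sense window (s)
V_MIN = 0.1           # minimum differential the sense amp needs (V)

def bitline_differential(i_read, rows):
    """dV developed on the bit line: I_read * t_sense / C_bl."""
    return i_read * T_SENSE / (rows * C_PER_CELL)

def max_rows(i_read_worst):
    """Largest row count that still yields V_MIN of differential."""
    rows = 1
    while bitline_differential(i_read_worst, rows + 1) >= V_MIN:
        rows += 1
    return rows

# Lower-than-predicted read current (local variation) forces fewer rows:
print(max_rows(20e-6), "rows at nominal worst-case read current")
print(max_rows(10e-6), "rows when local variation halves read current")
```

When local variation pushes the worst-case read current down, the affordable row count drops with it – hence architectures that must flex per process and performance point.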

“The key at 45 and 32 nm is to really understand the foundry requirements. It’s not just size, basic shapes and neighbors, anymore. It’s also Vt requirements, logic versus bit-cell requirements, different definitions of models, signals, variation and sigma,” she said.

Design for manufacturability

According to Chartered’s Ng, “In the great majority of designs taping out at 65 nm, there hasn’t been a heavy use of DFM tools.” Chartered expects that to change in the move to 45 nm.

K. C. Wang, chief engineer of system and architecture support at UMC, said, “Our DFM infrastructure supports customers with both pre- and post-tapeout DFM analyses and fixes.” Using an analog flow as an example (see Figure 5), he outlined the kind of “DFM-aware” design tools UMC uses to ensure a fast yield ramp.

Figure 5: DFM Design Flow for Analog IP (Source: UMC)

These include:

  • Critical area analysis (CAA) to identify likely failure candidates due to random conductive particles causing shorts and random non-conductive particles causing opens.
  • Optional lithography simulation checks (LSC) with contour-based device and interconnect extraction to enable parametric yield improvement.
  • Optical proximity correction (OPC) to improve resolution and reduce process variation.
  • Chemical Mechanical Polishing (CMP) simulation to evaluate thickness variations.
  • Automatic dummy fill to reduce topology variations.
  • Modeling and measurement of well proximity effects, wire edge effects, and length-of-diffusion effects.
  • Double-via insertion to reduce the effect of “via-open” defects.
  • Statistical static timing analysis.
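The critical-area analysis in the first item above typically feeds a Poisson yield model, Y = exp(−Σᵢ D0ᵢ · A_crit,ᵢ). A sketch with hypothetical per-layer numbers (not UMC’s data) shows why double-via insertion, which shrinks the via-open critical area, lifts yield:

```python
import math

# Poisson/critical-area yield model commonly paired with CAA output:
#   Y = exp(-sum_i D0_i * A_crit_i)
# All defect densities and critical areas below are hypothetical.
def caa_yield(layers):
    """layers: iterable of (critical_area_cm2, defects_per_cm2) pairs."""
    return math.exp(-sum(a * d0 for a, d0 in layers))

layers = [
    (0.12, 0.25),  # metal shorts (conductive particles)
    (0.08, 0.20),  # metal opens (non-conductive particles)
    (0.05, 0.30),  # via opens -- the target of double-via insertion
]
y_before = caa_yield(layers)

# Double-via insertion roughly halves the via-open critical area:
layers_fixed = layers[:2] + [(0.025, 0.30)]
y_after = caa_yield(layers_fixed)

print(f"yield before: {y_before:.3f}, after double-via insertion: {y_after:.3f}")
```

The same model explains why CAA runs early in the flow: shrinking any layer’s critical area, via spacing, widening, or redundancy, compounds directly into die yield.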

In response to an audience question about the benefit to the customer of using DFM tools and methodologies, Wang cited fast yield ramp and reduced time to volume.


Device physics dominated this session. Because of process variability, design methodologies no longer isolate digital functional designers from the details of the manufacturing process, and analog designers never had such methodologies. Specifically:

  • IP reuse is now defined as IP architecture reuse, with the downstream implementation increasingly subject to customer and foundry optimizations.
  • The difficulty of foundry portability for digital designs may ease at 45 nm and 32 nm because there are fewer foundries, and some of those foundries are collaborating to produce a common platform. The difficulty of foundry and process portability for analog designs will continue at the present level so long as the analog design flow remains largely manual.
  • AMS design methodology and tools lag those for digital design, but innovation and progress are essential if designers are to achieve the productivity and scalability necessary to meet the growing demand for AMS designs.
  • Fewer early adopters are taking up new processes, but they are doing so earlier, on unstable processes, and with higher production volumes.
  • DFM accelerates the yield ramp and time to volume – both extremely important in high volume markets – and the foundries have quite sophisticated DFM flows.

Related Articles
PDF CEO calls for restricted layouts
International Electron Devices Meeting 2005: Fluctuation Limits & Scaling Opportunities for CMOS SRAM Cells. Azeez Bhavnagarwala, Stephen Kosonocky, Carl Radens, Kevin Stawiasz, Randy Mann, Qiuyi Ye, Ken Chin.

VLSI Circuits 2004: A Sub-600mV, Fluctuation Tolerant 65nm CMOS SRAM Array with Dynamic Cell Biasing. Azeez Bhavnagarwala.
