A New Approach to Circuit Simulation

11 May, 2021

It’s time for EDA tool providers to close the gaps that arise when disparate tools and disparate environments are applied to hyper-convergent designs, and to provide a new circuit simulation methodology.

Sponsored by Synopsys. By Raja Tabet, Sr. VP of Engineering, and Anand Thiruvengadam, Product Marketing Director, Custom Design and Physical Verification Group

In our data-driven world, applications like high-performance computing (HPC) and artificial intelligence (AI) are taking center stage, delivering intelligence and insights that are transforming our lives. However, the growing complexities of HPC and AI designs are driving the need for much more complex semiconductor devices. Increasingly, multiple components and technologies are coming together in hyper-convergent designs to meet demands for bandwidth, performance, and power for these compute-intensive applications.

To achieve power, performance, and area (PPA) targets, such complex chips need to be analyzed as a single system—an approach that’s difficult to support via traditionally disparate tools. In this post, originally published in the “From Silicon to Software” blog, we’ll examine the trend of IC hyperconvergence and explain why the traditional, disaggregated approach to circuit simulation is no longer sufficient.

What is IC hyperconvergence? Simply put, a hyper-convergent IC design is one that comprises disparate components integrated on the same die or in the same package. It’s like a system-on-chip (SoC), but packed with a lot more functionality. A single die or package, for instance, can feature a diverse set of analog, digital, and mixed-signal components, some built on different process nodes. The complexity increases further when the various components are integrated vertically using 2.5D/3D architectures in a system-in-package (SiP).

From one technology generation to the next, SoCs have grown more complex, integrating more functionality in response to application needs. As recently as 2015, advanced-node SoCs were primarily digital designs, with separate discrete analog components on mature nodes and fairly low data rates for on-chip I/O. Fast-forward to 2020, and advanced-node SoCs increasingly featured integrated analog components, larger and faster embedded memory, and complex I/Os with 100+ Gb/s data rates. And today, we’re seeing the emergence of high-bandwidth memory (HBM) designs consisting of large 3D stacked DRAM integrated with the SoC on a 3DIC or in a SiP.

While today’s highly integrated designs give designers a way to stretch the limits of Moore’s Law, this evolution also brings increased scale complexity and system complexity. From a scale standpoint, we’re seeing reduced margins and increased parasitics at advanced nodes. Larger and more complex circuits also demand better quality of results (QoR), faster time-to-results, and lower cost-of-results.

On the system side, complex multi-function and multi-technology silicon integrations are driving designers’ need for unified workflows around a common circuit simulation solution. In other words, the disparate tools that we’ve long been accustomed to are not adequate to meet the evolving needs in this environment.

Performing Multi-Dimensional Analysis at the Component and Sub-System Levels

To illustrate the circuit simulation needs of today’s complex designs, let’s consider HBM. Adopted by JEDEC as an industry standard in 2013, HBM provides a high-speed memory interface for 3D stacked synchronous DRAM (SDRAM). It’s used with high-performance graphics accelerators, AI ASICs and FPGAs in high-performance datacenters, network devices, and some supercomputers. In these memory chips, multiple DRAM dies are vertically stacked with a memory controller, all interconnected by through-silicon vias (TSVs) and microbumps on a silicon interposer. This architecture makes it possible to deliver higher bandwidth with less power in a smaller form factor than DDR4 or GDDR5.
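To see why this architecture matters, a back-of-the-envelope bandwidth comparison helps. The sketch below uses representative public figures (an HBM2 stack’s 1024-bit interface at roughly 2 Gb/s per pin versus a GDDR5 chip’s 32-bit interface at roughly 7 Gb/s per pin); these numbers are illustrative assumptions, not taken from this article.

```python
# Illustrative peak-bandwidth comparison: a wide, slower stacked interface
# (HBM) versus a narrow, faster discrete one (GDDR5). Representative
# public figures, used here only as assumptions for the arithmetic.

def bandwidth_gb_per_s(bus_width_bits: int, pin_rate_gbps: float) -> float:
    """Peak bandwidth in GB/s = (bus width x per-pin rate) / 8 bits per byte."""
    return bus_width_bits * pin_rate_gbps / 8

hbm2_stack = bandwidth_gb_per_s(1024, 2.0)  # 1024-bit interface, ~2 Gb/s/pin
gddr5_chip = bandwidth_gb_per_s(32, 7.0)    # 32-bit interface, ~7 Gb/s/pin

print(f"HBM2 stack: {hbm2_stack:.0f} GB/s")  # 256 GB/s
print(f"GDDR5 chip: {gddr5_chip:.0f} GB/s")  # 28 GB/s
```

The wide interface is exactly what the TSV-and-interposer architecture enables: thousands of short vertical connections at modest per-pin rates, rather than a few long, power-hungry high-speed traces.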

In a hyper-convergent design, HBM designers need to verify the entire memory sub-system present in a SiP, which means performing complex multi-dimensional analysis at the component and sub-system levels. New complexities bring more difficult and stringent constraints that must be addressed to achieve power and performance targets. Circuit simulation tools need to be able to support:

    • Analysis of multiple technologies and multiple components (logic, analog, memory, I/O)
    • Different types of analyses (analog, digital, mixed-signal)
    • Large capacities for sub-system and chip-level analysis
    • Advanced reliability analyses (electrical, thermal, electro-thermal, temporal)
    • Signal integrity
    • Variability analysis (process, structural)
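To give a flavor of the last item, variability analysis is often done with Monte Carlo sampling: vary a process parameter across its statistical distribution, re-evaluate the circuit, and estimate yield against a spec. The toy sketch below does this for a single gate; the delay model and every parameter value are hypothetical illustrations, not any vendor’s actual model.

```python
# Toy Monte Carlo variability analysis: sample threshold voltage (Vth)
# from a Gaussian process distribution, compute a simplified gate delay,
# and estimate yield against a timing spec. All values are hypothetical.
import random
import statistics

def gate_delay_ps(vth: float, vdd: float = 0.8, k: float = 40.0) -> float:
    # Simplified delay model: delay blows up as Vth approaches Vdd.
    return k * vdd / (vdd - vth) ** 2

random.seed(0)  # reproducible sampling
# 10,000 samples of Vth ~ N(300 mV, 20 mV)
samples = [gate_delay_ps(random.gauss(0.30, 0.02)) for _ in range(10_000)]

mean = statistics.mean(samples)
sigma = statistics.stdev(samples)
spec_ps = 160.0  # hypothetical timing spec
yield_est = sum(d <= spec_ps for d in samples) / len(samples)

print(f"mean delay {mean:.1f} ps, sigma {sigma:.1f} ps, yield {yield_est:.1%}")
```

At advanced nodes, the same idea must run across millions of devices and many correlated parameters at once, which is precisely why simulation volume (and the need for unified, high-capacity tools) grows so sharply.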

What’s more, as these designs continue to scale to advanced nodes, substantially more simulations are needed to ensure that the design will be reliable and meet yield targets. Familiar challenges remain but are exacerbated. Signal integrity, for example, must now be analyzed through the interposer. Issues such as electrothermal stress and larger parasitics must be addressed to achieve the chip reliability that manufacturing at scale requires.

From a design enablement perspective, this presents a multi-dimensional challenge that calls for workflows optimized for PPA and cost convergence. As a result, design teams and electronic design automation (EDA) tool providers must collaborate closely to address the complexity and costs of developing these hyperconverged designs.

Hyperconvergence Redefines Circuit Simulation

IC hyperconvergence is redefining how circuit simulation should be done. To meet the design and signoff requirements of hyper-convergent designs, circuit simulation tools need to come together in a unified workflow that:

    • Enables a holistic and cohesive verification of complex multi-technology/multi-function designs
    • Delivers greater performance while supporting much more capacity
    • Understands both the digital and analog worlds—and what happens when both are integrated in a complex device
    • Delivers a rich and consistent verification experience across all tools

It’s time for EDA tool providers to close the gaps that arise when disparate tools and disparate environments are applied to hyper-convergent designs. As silicon chip designers continue to find innovative ways to extend—or go beyond—Moore’s Law, a unified workflow is needed to support PPA, reliability, and yield targets while also reducing design costs and turnaround time to meet the increasing verification demands of hyper-convergent designs.

More Details from SNUG World 2021

The challenges of hyper-convergent designs were discussed at our recent SNUG® World 2021. Recordings from the virtual experience are available online.

