University Program Nurtures Next Generation of Engineers

By Patrick Haspel, Global Program Director, Academic Partnerships and University Programs, Synopsys

The COVID-19 pandemic has accelerated our digital migration, moving more of our activities online. Ajit Manocha, president and CEO of SEMI, has discussed how critical it will be for the industry to close the talent gap. Investing in science, technology, engineering, and math (STEM) education is one way to nurture the interests and skillsets that are needed to bring more engineers into the workforce.

Collaborative business/university relationships, where businesses provide resources that complement or augment educational programs, form a natural bridge between the two worlds. One such example is the Synopsys Electronic Design University Program, which provides academic and research institutions with access to electronic design automation (EDA) software, technical support, curriculum, and more.

The university bundle consists of more than 200 tools for a nominal fee and licensing agreement in support of fundamental research and education efforts. In this article, which was originally published on the From Silicon to Software blog, I’ll highlight some key examples that illustrate the mutually beneficial outcomes that are resulting from close collaboration between the business and academic worlds.

VLSI Training Course at Tel Aviv University

Creating the next generation of chip design engineers needs to start at the university level. Consider a project involving a complex 5nm design, which would require a team for implementation, verification, software design, and more. Such an endeavor could involve more than 100 people who have the latest skills. However, it’s not always easy to find the right mix of engineers.

Israel, for example, is in a region of the world where the shortage of electronic design talent is acute. To help create a pipeline of engineers, Zvi Webb, a retired applications engineering director from Synopsys, is serving as VLSI lab manager at Tel Aviv University and is developing an introductory very large-scale integration (VLSI) course based on the latest chip design tools. Students there, Webb noted, hadn’t been exposed to a digital design workflow and tool chain. Instead, they were building their designs manually.

Webb’s course will be offered in the spring of 2022 and will cover topics such as Verilog, logic synthesis, static timing analysis, and placement and routing, providing students with real-world expertise that can help open doors once they’re ready for the workforce. The training outline was derived from material prepared by Professor Adam Teman from Bar-Ilan University. “The new course will bring student engineers more knowledge – they will gain an understanding of what VLSI means, what the steps are, how to perform checks,” Webb said.

NC State University Creates PDK for Physical Verification at 3nm

What constitutes an effective 3nm node? According to research conducted by the Electrical & Computer Engineering Department at North Carolina State University, which based its examinations on several IMEC papers, the 3nm node is marked by a gate length of approximately 15nm, a cell track height of 5.5T, and a contacted poly pitch of 42nm. Scaling has been enabled by design technology co-optimization to achieve the desired benefits; however, as Moore’s Law slows down, it’s now also important to look at system technology co-optimization, examining ways to reengineer the power grid and utilize new device structures (such as gate-all-around FETs).
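The cited parameters translate directly into standard-cell footprint arithmetic (cell height = track count × metal pitch; cell width is a multiple of the contacted poly pitch). A quick sketch, where the 24nm metal-2 pitch is an assumed illustrative value, not a figure from the research:

```python
# Standard-cell footprint arithmetic for the 3nm node parameters cited above.
# Cell height = track count x metal (M2) pitch; cell width is a multiple of
# the contacted poly pitch (CPP). The 24 nm M2 pitch is an illustrative
# assumption, not a figure from the article.

TRACKS = 5.5          # cell track height (from the article)
CPP_NM = 42           # contacted poly pitch in nm (from the article)
M2_PITCH_NM = 24      # assumed metal-2 pitch in nm

cell_height_nm = TRACKS * M2_PITCH_NM    # 5.5 tracks x 24 nm = 132 nm
inverter_width_nm = 2 * CPP_NM           # a 2-CPP-wide inverter: 84 nm
area_um2 = (cell_height_nm * inverter_width_nm) / 1e6  # nm^2 -> um^2

print(f"cell height: {cell_height_nm:.0f} nm, "
      f"inverter footprint: {area_um2:.4f} um^2")
```

Even rough numbers like these show why a sub-0.02 µm² logic cell leaves no room for manual layout and demands a rule-driven PDK.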

Dr. Rhett Davis, a professor at the university’s Electrical & Computer Engineering Department, has teamed up with graduate students, other faculty, and the Synopsys University Program to create an open-source 3nm process design kit (PDK) for education and industry research. Specifically, the team wanted to explore the impact of new structures like gate-all-around FETs and scaling boosters like buried power rails and the 5.5T cell height.

A cross-section of a single transistor in the FreePDK3's vision of a 3nm process (front and side views)

“What we found when making this kit is that transistors aren’t really shrinking anymore. Instead, they’re getting taller. That is, foundries are finding economical ways to stack them. Our kit compiles the best available public data into a set of rules that show us how to work with this new technology,” explained Davis.

To create the resulting FreePDK3, the team used Synopsys IC Validator for physical verification, Synopsys Custom Compiler for layout and schematic entry, Synopsys StarRC for parasitic extraction, and HSPICE® technology for circuit simulation. The FreePDK3 is published on GitHub.

Engaging the Next Generation of Engineers

These examples illustrate the work that academia is engaging in with the business world. Through our Electronic Design University Program, Synopsys provides full-semester coursework for undergraduate and graduate programs in IC design and EDA development; teaching resources such as libraries and PDKs; and technical support and training. In addition, Synopsys offers academic programs in the areas of optical design and static analysis software.

The Synopsys Foundation is committed to advancing STEM education opportunities that contribute to the growth and development of our future technology leaders. Through close collaboration, businesses and universities can help nurture the next generation of engineers for semiconductor and electronics industries that are continuing to embark on new innovations that are fueling our smart, connected world.

Optimizing PPA for 3DICs Requires a New Approach

By Raja Tabet, Sr. VP of Engineering, and Anand Thiruvengadam, Product Marketing Director, Custom Design and Physical Verification Group

Sponsored by Synopsys.

3DIC architectures are enjoying a surge in popularity as product developers look to their inherent advantages in performance and cost and their ability to combine heterogeneous technologies and nodes into a single package. As designers struggle to scale past the complexity and density limitations of traditional flat IC architectures, 3D integration offers an opportunity to continue improving functional diversity and performance while meeting form-factor and cost constraints.

3D structures offer a variety of specific benefits. For example, performance is often dominated by the time and power needed to access memory. With 3D integration, memory and logic can be integrated into a single 3D stack. This approach dramatically increases the width of memory busses through fine-pitch interconnects, while decreasing the propagation delay through the shorter interconnect line. Such connections can lead to memory access bandwidth of tens of Tbps for 3D designs, as compared with hundreds of Gbps bandwidth in leading 2D designs.
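The bandwidth gap described above falls straight out of width-times-rate arithmetic: fine-pitch 3D interconnects allow buses that are orders of magnitude wider. A quick sketch, with illustrative assumed figures rather than vendor data:

```python
# Back-of-the-envelope comparison of memory access bandwidth for a 2D
# off-package interface versus a fine-pitch 3D stacked interconnect.
# All figures below are illustrative assumptions, not measured data.

def bandwidth_gbps(bus_width_bits: int, data_rate_gtps: float) -> float:
    """Peak bandwidth in Gb/s = bus width (bits) x per-pin data rate (GT/s)."""
    return bus_width_bits * data_rate_gtps

# 2D example: a 64-bit DDR-style channel at 6.4 GT/s per pin.
bw_2d = bandwidth_gbps(64, 6.4)        # ~410 Gb/s -> "hundreds of Gbps"

# 3D example: fine-pitch vertical interconnects allow far wider buses at
# modest per-pin rates, e.g. 16,384 wires at 2 GT/s each.
bw_3d = bandwidth_gbps(16_384, 2.0)    # ~32.8 Tb/s -> "tens of Tbps"

print(f"2D: {bw_2d:,.0f} Gb/s, 3D: {bw_3d / 1000:,.1f} Tb/s")
```

Note that the 3D case gains its two-orders-of-magnitude advantage from bus width alone, even at a slower per-pin rate, which is exactly what fine-pitch stacking makes economical.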

From a cost perspective, a large system with different parts has various sweet spots in terms of silicon implementation. Rather than having the entire chip at the most complex and/or expensive technology node, heterogeneous integration allows the use of the ‘right’ node for different parts of the system, e.g., advanced/expensive nodes for only the critical parts of the system and less expensive nodes for the less critical parts.

In this post, which was originally published on the “From Silicon to Software” blog, we’ll look at 3DIC’s ability to leverage designs from heterogeneous nodes – and the opportunities and challenges of a single 3D design approach to achieving optimal power, performance, and area (PPA).

Vertical Dimension Changes the Design Strategy

While 3D architectures elevate workflow efficiency and efficacy, 3DIC design does introduce new challenges. Because of the distinct physical characteristics of 3D design and stacking, traditional tools and methodologies are not sufficient to solve these challenges; a more integrated approach is required. In addition, the system must be viewed far more holistically than a typical flat 2D design. Simply stacking 2D chips on top of each other does not address the issues of true 3D design and packaging.

Since the designs must be considered in three dimensions, as opposed to the typical x and y aspects of a flat 2D design, everything must be managed with the addition of the z dimension – from architectural design to logic verification and route connection – including bumps and through-silicon vias (TSVs), thermal, and the power delivery network (PDN), which opens opportunities for new tradeoffs (such as interposer-based versus 3D stacks, memory-on-logic versus logic-on-memory, and hybrid bonding versus bumps). Optimizing the ‘holy grail’ of PPA is still the critical guiding factor; with 3DICs, however, it becomes optimization per cubic millimeter, because the vertical dimension, not just the two horizontal ones, must figure in every tradeoff decision.

The Need for a Co-Design Methodology

Further complicating matters, the higher levels of integration available with 3DICs render obsolete traditional manual board- and package-level techniques such as bump layout and custom layout for high-speed interconnects, which become additional bottlenecks. Most importantly, the interdependency of previously distinct disciplines now needs to be addressed through a co-design methodology (both people and tools) across all stages: architecture, chip design, package, implementation, and system analysis.

Let’s look at an example of a specific design challenge – the goal to improve memory bandwidth. Traditionally, designers would look at how to connect the memory and CPU to get the highest possible bandwidth. But with 3DICs, they need to look at the memory and CPU together to figure out the optimal placement in the physical hierarchy, as well as how they connect – through through-silicon vias (TSVs), for example. While performance is critical, designers need a way to evaluate the power and thermal impact of stacking these elements together in different ways, which introduces new levels of complexity and new design options.

Taking a Silicon-First Approach

While it might seem obvious to consider a 3D architecture in a similar manner as a printed circuit board (PCB) design, 3DICs should ideally take a silicon-first approach – that is, optimize the design IP (of the entire silicon) and co-design this silicon system with the package. Within our approach to 3DICs, Synopsys is bringing key concepts and innovations of IC design into the 3DIC space. This includes looking at aspects of 3DICs such as architectural design, bringing high levels of automation to manual tasks, scaling the solution to embrace the high levels of integration from advanced packaging, and integrating signoff analysis into the design flow.

3DICs integrate the package, traditionally managed by PCB-like tools, with the chip. PCB tools are not built to deal with this scale and process complexity. A typical PCB may have 10,000 connections; a complex 3DIC has hundreds of millions, a level of scale that far outpaces what older, PCB-centric approaches can manage. Existing PCB tools also offer no assistance for stacking dies, where no package or PCB is involved at all, and they cannot reason about RTL or system design decisions.

The reality is that there cannot be one single design tool for all aspects of a 3DIC (IC, interposer, package), yet there is an acute need for assembling and visualizing the complete stack. The Synopsys 3DIC Compiler does just that. It is a platform that has been built for 3DIC system integration and optimization. The solution focuses on multi-chip systems, such as chip-on-silicon interposer (2.5D), chip-on-wafer, wafer-on-wafer, chip-on-chip, and 3D SoC.

The PPA Trifecta

Typically, when you think of large, complex chips, the first optimization considered is area. SoC designers want to integrate as much functionality into the chip and deliver the highest performance possible. But there are always the required power and thermal envelopes, which are particularly critical in applications such as mobile and IoT (and also high-performance computing). Implementing 3D structures lets designers continue to add functionality to the product without exceeding area constraints, while at the same time lowering silicon costs.

But a point-tool approach only addresses sub-sections of the complex challenges in designing 3DICs. This creates large design feedback loops that prevent timely convergence to the best PPA per cubic millimeter. In a multi-die environment, the full system must be analyzed and optimized together; it isn’t enough to perform power and thermal analysis of each die in isolation. A more effective and efficient solution is a unified platform that integrates system-level signal, power, and thermal analysis into a single, tightly coupled solution.

This is where 3DIC Compiler really shines – by enabling early analysis with a suite of integrated capabilities for power and thermal analysis. The solution reduces the number of iterations through its full set of automated features while providing power integrity, thermal, and noise-aware optimization. This helps designers better understand the performance of the system and facilitates exploration of the system architecture. It also offers a more efficient way to understand how to stitch together the various elements of the design, and even lets design engineers apply familiar 2D design techniques where appropriate.

Ideal Platform for Achieving Optimal PPA per Cubic Millimeter

Through the vertical stacking of silicon wafers into a single packaged device, 3DICs are proving their potential as a means to deliver the performance, power, and footprint required to continue scaling Moore’s law. While 3D architectures introduce new design nuances, the possibility of achieving the highest performance at the lowest achievable power with an integrated design platform makes them appealing. 3DICs are poised to become even more widespread as chip designers strive for the optimum PPA per cubic millimeter.

Synopsys Announced a New Simulator for Converged ICs

Synopsys announced the PrimeSim Continuum solution, a unified workflow of circuit simulation technologies to accelerate the creation of hyper-convergent designs. PrimeSim Continuum is built on SPICE and FastSPICE architectures with proven GPU acceleration technology, providing runtime improvements at signoff accuracy. “PrimeSim Continuum represents a revolutionary breakthrough in circuit simulation,” said Sassine Ghazi, Chief Operating Officer at Synopsys.

Today’s hyper-convergent SoCs consist of larger and faster embedded memories, analog front-end devices, and complex I/O circuits that communicate at 100Gb+ data rates with the DRAM stack connected on the same piece of silicon in a system-in-package design. This results in more simulations with longer runtimes at higher accuracy. PrimeSim Continuum addresses this complexity with a unified workflow of signoff-quality simulation engines tuned for analog, mixed-signal, RF, and custom digital and memory designs. Synopsys said it optimizes the use of CPU and GPU resources and improves time-to-results and cost of results.

“As modern compute workloads evolve, the size and complexity of analog designs have moved beyond the capacity of traditional circuit simulators,” said Edward Lee, vice president of Mixed Signal Design at NVIDIA. “Using NVIDIA GPUs enables PrimeSim SPICE to accelerate circuit simulation, notably minimizing signoff time of analog blocks from days to hours.” The Synopsys PrimeSim Continuum solution is now available. For more information: PrimeSim Continuum.

Synopsys Delivers New ZeBu Empower Emulation System for Hardware-Software Power Verification

Synopsys, Inc. (Nasdaq: SNPS) announced the immediate availability of ZeBu® Empower emulation system, delivering breakthrough technology for fast hardware-software power verification of multi-billion gate SoC designs. The performance of ZeBu Empower enables multiple iterations per day with actionable power profiling in the context of the full design and its software workload. With ZeBu Empower, software and hardware designers can utilize the power profiles to identify substantial power improvement opportunities for dynamic and leakage power much earlier. The ZeBu Empower emulation system also feeds forward power-critical blocks and time windows into Synopsys’ PrimePower engine to accelerate RTL power analysis and gate-level power sign-off.

Traditionally, power analysis with realistic software workloads is performed post-silicon, introducing a high risk of missing critical high-power situations and exposing companies to significant cost and product-adoption risk. By taking advantage of high-speed emulation in ZeBu Empower, design teams can perform verification earlier in the design cycle, dramatically reducing the risk of power bugs and missed SoC power goals.

“The industry’s need to shift-left software development from post-silicon to pre-silicon has driven tremendous adoption of our ZeBu Server over the last five years,” said Manoj Gandhi, general manager of the Verification Group at Synopsys. “Our breakthrough technology in ZeBu Empower addresses our customers’ need for hardware-software power verification, enabling them to develop a new generation of power-optimized SoCs.”

“As high-performance designs and workloads continue to grow in complexity, achieving leadership performance within a thermal envelope is important for our products,” said Alex Starr, Corporate Fellow, Technology and Engineering at AMD. “Solutions that allow us to efficiently profile power consumption across real workloads in a pre-silicon environment help us achieve our product goals. Synopsys’ ZeBu Empower, operating in collaboration with servers using 2nd Gen AMD EPYC™ processors, has enabled us to perform pre-silicon power analysis more efficiently in a quicker time.”

The Synopsys ZeBu Empower emulation system for hardware-software power verification solution is available now.

Synopsys offers a comprehensive solution for low power design and verification, from RTL-based early power exploration to the industry’s golden power signoff, and from early static verification to emulation-based hardware-software power verification. Synopsys’ innovative low power solutions are deployed across some of the most demanding designs globally.

Synopsys Acquired Light Tec

Synopsys has acquired Light Tec, a France-based provider of optical scattering measurement solutions. The acquisition allows Synopsys to combine its optical design software tools with Light Tec’s solutions and expands customer access to precision light scattering data for materials and media used in optical systems.

The terms of the deal were not disclosed. Light scattering data provides designers with accurate information to predict how light reflects and transmits in an optical system. It is used to obtain high-precision simulation results for a wide range of applications such as optical sensors, displays, semiconductors, and luminaires. Light scattering data is also important for demonstrating optical product spectral behavior in photorealistic renderings.

“Light Tec’s optical measurement capabilities provide our customers with robust new tools for high-accuracy optical product simulations and visualizations,” said Dr. Howard Ko, general manager of Synopsys’ Silicon Engineering Group. Synopsys is in the process of expanding its presence in the optical design tools market. In September 2020, it launched the OptoCompiler platform for photonic integrated circuit (PIC) design.

Unified Electronics and Photonics Design Tool

OptoCompiler is one of the industry’s first unified electronic and photonic design platforms, combining industry-proven electronic design tools with optical design tools. Widespread implementation of PICs has, until now, been impeded because many design tools were intended for electronics rather than photonics. As a result, photonic design has largely been the domain of experts who could build their own tools or repurpose a disparate toolset.

OptoCompiler provides support for electronic-photonic co-design to ensure scalable design processes. “With OptoCompiler, we aim to make photonic design as productive as digital,” said Tom Walker, group director of Synopsys’ Photonic Solutions. Synopsys started this move in 2012, when it acquired RSoft Design Group, a provider of photonics design and simulation software headquartered in New York.

Arbe Robotics Selected EV62 Processor from Synopsys for Its Imaging Radar

The next radar SoC for vehicles will be based on a suite of solutions from Synopsys’ DesignWare IP. Last month, the Israeli startup Arbe Robotics announced it had raised $10 million in a round led by French VC 360 Capital Partners. The company said it will use the proceeds to enhance its imaging technology and to expand its operations in the US and China. With this round, the total funding raised by the firm stands at $23 million.

The company recently revealed more details about its upcoming radar device. Arbe Robotics’ imaging radar is designed for advanced driver assistance systems (ADAS) and autonomous vehicles. The imaging radar can sense the environment across a wide 100-degree field of view in high resolution in all weather conditions, including fog, heavy rain, pitch darkness, and air pollution. It is able to create a detailed image of the road at a range of more than 300 meters (1,000 feet) and capture the size, location, and velocity of objects surrounding the vehicle with an accuracy of 10–30 cm.

Synopsys said that Arbe Robotics selected its DesignWare ARC EM Safety Island, EV6x Embedded Vision Processor with Safety Enhancement Package, Ethernet Quality-of-Service Controller IP, and STAR Memory System and STAR Hierarchical System. “Radar chips for autonomous vehicles require both a high level of processing capabilities and integrated safety features, to detect and prevent system failures,” said Kobi Marenko, CEO at Arbe Robotics.

Arbe Robotics selected the ARC EM6SI Safety Island because of its integrated self-checking safety monitor, error correcting code (ECC), and programmable watchdog timer. The EV62 Embedded Vision Processor with SEP provides a dual-core, 512-bit-wide SIMD vector DSP and the required functional safety capabilities without sacrificing performance. Both processors feature lockstep capabilities for detection of system failures and runtime faults.