In 2020 COVID-19 had a negative impact on the global economy, yet the semiconductor market grew larger than expected. This growth was driven by demand for the networking, cloud, and computing solutions required to support the work-from-home environment. In 2020 we also witnessed the introduction of 5G mobile devices and the emergence of the new infrastructure required to support 5G, further fueling semiconductor market growth. Today, semiconductor design is experiencing higher levels of integration combined with an increasing number of new requirements, driving complexity beyond levels we have ever seen. Adopting proven solutions to achieve functional correctness has become critical.
Hardware verification today is a relatively mature topic, both in research and in industrial practice. Verification research dates back at least three decades, with a rich body of literature. In industrial practice, verification is now firmly established as an integral component of the system development flow. Unfortunately, in spite of these advancements, there remains a significant gap between the state of the art in the technology today and the verification needs of modern industrial designs. The situation is exacerbated by a rapidly changing design ecosystem as we move inevitably toward the era of automated vehicles, smart cities, and the Internet of Things (IoT). In particular, this new era has ushered in an environment where an electronic device first collects, analyzes, and stores some of our most intimate personal information, such as location, health, fitness, and sleep patterns, and then communicates that information through a network of billions of other computing devices.
In spite of this maturity, verification tools today do not scale up to the needs of modern SoC verification problems. While some of the challenges are driven by complexity (e.g., tool scalability, particularly for formal), others are driven by the needs of rapidly changing technology trends.
With 60% of functional verification time spent on testbench development and debug, and up to 40% of that time devoted to testbench bring-up and coverage closure, anything you can do to shorten these durations without missing bugs would be a welcome scenario. But as chip designs grow larger and methodologies get more complex, it has become common for chip verification to consume thousands of CPU hours and for closure to take longer than ever. Given the relentless time-to-market pressures for new products, however, so much time spent on one activity—albeit a critical one—is more time than you might be able to afford.
How ML Can Increase Chip Verification Efficiency
- Reducing repeat stimuli generation
- Increasing hit rates for hard-to-hit and rarely or never hit coverage bins
- Providing stimuli distribution diagnostics and root-cause analysis
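As an illustration of the first two points, the sketch below shows one simple way a tool might avoid regenerating stimuli that exercise already-covered behavior: keep a record of hit coverage bins and discard candidate stimuli that offer nothing new. All names and the stimulus-to-bin mapping here are hypothetical, chosen only to make the idea concrete.

```python
class CoverageModel:
    """Toy coverage model: each stimulus maps to a set of coverage bins."""

    def __init__(self):
        self.hit_bins = set()

    def bins_for(self, stimulus):
        # Hypothetical mapping from an integer stimulus to the bins it
        # exercises; a real flow would derive this from coverage data.
        return {stimulus % 8, (stimulus // 8) % 8}

    def is_novel(self, stimulus):
        # A stimulus is worth running if it hits at least one new bin.
        return not self.bins_for(stimulus) <= self.hit_bins

    def record(self, stimulus):
        self.hit_bins |= self.bins_for(stimulus)


def filter_stimuli(candidates, model):
    """Greedily keep only stimuli expected to hit at least one new bin."""
    kept = []
    for s in candidates:
        if model.is_novel(s):
            kept.append(s)
            model.record(s)
    return kept
```

With this toy bin mapping, only the first eight of a hundred sequential candidates survive the filter; the rest would repeat already-hit bins, which is exactly the redundant simulation time an ML-guided flow aims to eliminate.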
As ML technologies accumulate knowledge from test to test, their ability to deliver increasing optimization benefits over regression runs strengthens. Reinforcement learning, in which the learner independently discovers the sequence of actions that yields maximum reward, facilitates faster and improved coverage. It exposes more bugs, including latent issues in the testbench and sometimes in the DUT as well, and it reduces regression turnaround time. Instead of spending your time writing thousands of tests, using a tool infused with AI/ML technologies supports a holistic approach to coverage closure, as shown in Figure 2, allowing you to:
- Gain testbench insights
- Accelerate coverage closure
- Find more bugs early
- Triage testbench issues quickly
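The learn-from-reward loop described above can be made concrete with a minimal epsilon-greedy sketch: the learner scores a set of test templates, usually picks the one with the best estimated payoff, occasionally explores another, and updates its estimate from the observed reward (e.g., the number of new coverage bins a generated test hit). The function names and the reward definition are assumptions for illustration, not any particular tool's API.

```python
import random


def choose_template(q_values, epsilon, rng):
    """Epsilon-greedy selection: usually exploit the best-scoring test
    template, occasionally explore a random one."""
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=q_values.__getitem__)


def update(q_values, idx, reward, alpha=0.5):
    """Move the chosen template's value estimate toward the observed
    reward, e.g., new coverage bins hit by the test it generated."""
    q_values[idx] += alpha * (reward - q_values[idx])
```

Over many regression runs, templates that keep hitting new coverage accumulate higher estimates and are chosen more often, which is the mechanism behind the faster coverage closure the text describes.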