ARM TechCon Video: Beer, New Meridian CDC, and Arnold Schwarzenegger ?!
Graham Bell Vice President of Marketing at Real Intent
At ARM Tech Con 2014, I discussed beer, the new release of our Real Intent clock-domain crossing software Meridian CDC, and a new spokesperson for our company, with Sean O’Kane of ChipEstimate.TV. Enjoy!
New CDC Verification: Less Filling, Picture Perfect, and Tastes Great!
Graham Bell Vice President of Marketing at Real Intent
Real Intent will release our greatly extended Meridian CDC clock domain crossing software in November with new capabilities headlined by more hierarchical firepower and the launch of a user-configurable debugger.
The 2014.A edition, announced last week (on my wife’s birthday), will deliver 30% higher performance than the existing tool and a 40% smaller memory footprint. The formal analysis engine within Meridian has also been given a 10X boost in throughput.
In the YouTube video interview below, Ramesh Dewangan, vice-president of application engineering, points out that the bottom-up hierarchical flow is key to Meridian CDC’s giga-scale capacity (though the tool is equally capable of handling designs ‘flat’).
The hierarchical approach means that the complete design view of the SoC is available for CDC analysis at any time. No abstraction or approximation is used that could miss bugs; to be specific, there is neither abstract modeling nor reliance on waivers.
New iDebug for CDC debugging
Our new debugger specifically leverages this hierarchical approach. Named iDebug (the ‘i’ standing for ‘intent’), it draws upon a Meridian CDC database that captures all phases of clock domain crossing verification for a hierarchical analysis of the design’s intent.
The iDebug software identifies root causes and then presents issues to users in an easy-to-assess and easy-to-debug environment. We think of it as a next-generation debug environment: it has an integrated GUI, and it offers user-configurability and programmability through a command-line interface (CLI). All of the CDC analysis data is stored in a database that can be accessed through the CLI, so you are not stuck with the one debug methodology the tool provides. Instead, you can create your own debug methodologies, custom to your own design flows, which may include spreadsheet reports, graphical reports, scripting, and so on.
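As a toy illustration of the kind of custom reporting a CLI-accessible results database enables, the sketch below tallies issues per block from a hypothetical CSV export. The field names and format are invented for this example; the actual iDebug database schema and access commands will differ.

```python
import csv
import io
from collections import Counter

# Hypothetical CSV export of CDC results; the real iDebug database
# fields and access commands are not shown here and will differ.
sample = """block,check,severity
usb_ctrl,unsynchronized_crossing,error
usb_ctrl,reconvergence,warning
ddr_phy,unsynchronized_crossing,error
"""

def summarize(report_text):
    """Tally CDC issues per (block, severity): the kind of custom
    spreadsheet-style report that scripted database access allows."""
    rows = csv.DictReader(io.StringIO(report_text))
    tally = Counter()
    for row in rows:
        tally[(row["block"], row["severity"])] += 1
    return dict(tally)

print(summarize(sample))
```

From here a team could just as easily emit a spreadsheet, a graphical dashboard, or feed the tally into its own sign-off scripts.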
Run-time for CDC analysis
With recent envelope-pushing designs from AMD and NVIDIA both exceeding 5 billion gates, the tool has been designed to allow CDC checks to be undertaken at speed. The 40% decrease in memory and other performance improvements should mean that most projects can be run overnight on a reasonably sized machine with a few hundred gigabytes of RAM.
ARM Fueling the SoC Revolution and Changing Verification Sign-off
Dr. Pranav Ashar Chief Technology Officer
ARM TechCon was in Santa Clara this week and Real Intent was exhibiting at the event. TechCon was enjoying its 10th anniversary and ARM was celebrating the fact that it is at the center of the System-on-Chip (SoC) revolution.
The SoC ecosystem spans the gamut of designs from high-end servers to low-power mobile consumer segments. A large and heterogeneous set of players (foundries, IP vendors, SoC integrators, etc.) has a stake in fostering the success of the ecosystem model. While the integrated device manufacturer (IDM) model has undeniable value in terms of bringing to bear large resources in tackling technology barriers, one could argue that the rapid-fire smartphone revolution we have experienced in the last five years owes in large part to the broad-based innovation enabled by the SoC ecosystem model. How are the changing dynamics of SoCs driving changes in verification requirements, tools and flows and thereby changing the timing sign-off paradigm?
ARM should be applauded for the significant role it has played in bootstrapping and further enabling the SoC ecosystem model. By licensing its processor and digital IP instead of manufacturing its own chips, ARM has freed its partners to aggressively build and refine their products without being reliant on general-purpose devices or rigid form factors. And it did not hurt that ARM’s advantage in the mobile space was a low-power architecture and instruction set designed to sip power rather than glug it. Going forward, the optimization and perpetuation of this model will depend on a deep commitment by the EDA industry to recognize and fill the specific needs inherent to this model. ARM has made a significant commitment on its side and works closely with EDA companies to develop reference methodologies (RMs) for implementation flows that enable ARM licensees to customize, implement, verify and characterize ARM processors for their chosen process technologies.
A case in point is that an SoC today is really a sea of interfaces. This is a consequence of the building-block design style used to create them. Since system timing is dominated by the delays found in long interconnect wiring paths, large monolithic ICs have given way to designs that use a number of small blocks with signal crossing interfaces. Besides the timing issue, each block can be optimized for low-power operation using an independent supply voltage and clock frequency control.
As a consequence, much of the performance optimization is being targeted at these interfaces in the form of aggressive design of protocols and their implementation. A second consequence is that most of these interfaces are asynchronous or need to be modeled that way. In other words, it would be more correct to say that an SoC today is really a sea of aggressively designed asynchronous interfaces!
This prominence of interfaces in the modern SoC has big implications on verification requirements, tools and flows and is changing the sign-off paradigm.
For starters, Clock-Domain-Crossing (CDC) verification is now a first-order sign-off requirement. CDC bugs are insidious in that they can remain unnoticed until after tape out and deployment. Their difficulty lies in that they are at the intersection of functionality and timing and neither functional simulation nor static timing analysis meets their challenge. It used to be the case that the number of clock crossings in a chip was small enough that manual review sufficed. Not any more. With more than fifty clock domains per SoC, and SoC sizes in the hundreds of millions of gates, it is absolutely essential that CDC sign-off be automated by means of a specialized tool that has deep and first-principles domain expertise in asynchronous interface design techniques and the typical implementation idioms therein.
An important point to register is that correct clock-crossing interface design is predicated not just on correct circuit implementation, but also on correct protocol design. As a result, the first CDC sign-off must happen at the pre-synthesis RTL abstraction level to intercept any protocol design bugs.
A further truism is that CDC sign-off is only as good as the environment setup feeding into it. Improper clock grouping, clock propagation, mode setup, reset propagation, etc. can lead to incorrect CDC analysis and a bad sign-off.
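For illustration, much of this environment setup is typically declared in SDC. A minimal fragment (the clock names, ports and periods below are invented) might look like:

```tcl
# Illustrative SDC setup; names and periods are made up for this example.
create_clock -name clk_cpu -period 2.0 [get_ports cpu_clk]
create_clock -name clk_usb -period 8.0 [get_ports usb_clk]

# Declare the domains asynchronous so CDC analysis groups them correctly.
# Omitting or mis-stating this relationship is exactly the kind of setup
# error that silently invalidates a CDC sign-off.
set_clock_groups -asynchronous -group {clk_cpu} -group {clk_usb}
```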
There are two implications of the above statements on timing-constraints closure in the modern SoC.
First, until recently, timing-constraints setup fed only into the Quality-of-Results (QoR) steps of synthesis, physical design and static timing analysis. Going forward, timing-constraints closure also feeds a black-and-white verification sign-off step. The timing-constraints specification exercise is, therefore, no longer just a question of dealing with over-designed paths and logic, or of compromising on the QoR spec. It is now part of a verification sign-off step, with the implication of possibly very-expensive-to-fix field bugs if done incorrectly.
Second, CDC sign-off now starts at the pre-synthesis RT level. That is possible only if SoC-level timing-constraints are available at that stage. Basically, the obligation to plan, create and manage timing constraints has moved up an abstraction level.
All of the above points to the need for tools for precise CDC analysis and for full-featured timing-constraints creation and management starting at the RT level. Real Intent is very much in the business of filling this need with its Meridian family of CDC and Constraints tools.
There are other changes in the SoC sign-off paradigm. What have you seen? And what is your number one concern today? I am very much interested in hearing your comments.
Graham Bell Vice President of Marketing at Real Intent
Recently, Real Intent put out a new release of our Ascent Lint tool, which checks your RTL to make sure it meets the standards for good coding practice. Linting delivers very quick feedback on troublesome and even dangerous coding styles that cause problems which show up in simulation but take much longer to uncover there. With the right lint tool, you can catch the “low-hanging fruit” before tackling functional errors. In a recent blog, we discussed how a staged analysis, starting with Initial checks followed by Mature and Handoff checks, can very efficiently get you to ‘hardened’ RTL code that is ready to be integrated with the rest of the design.
Our latest release of Ascent Lint supports this staged analysis through a series of different policy files, and this is very effective for hand-coded RTL that designers create.
Another important aspect of linting is the requirement to verify that RTL code automatically generated by high-level synthesis (HLS) tools is ‘clean’ as well. You might think that automatically generated code does not need linting. In fact, the freedom that HLS environments give designers permits them to generate RTL code that should be checked.
One example flow is MathWorks’ HDL Coder™. It generates portable, synthesizable Verilog® and VHDL code from MATLAB functions, Simulink models, and Stateflow® charts. We announced in May that Ascent Lint is integrated with the HDL Coder user interface, which automates the setup of files and commands for Ascent Lint. This tight integration lets users verify that the RTL code generated by HDL Coder is compliant with their coding conventions and industry standards, providing a safe and reliable implementation flow for the digital synthesis tools used by ASIC and FPGA designers.
Another flow is the integration of Calypto’s Catapult high-level synthesis tool with Ascent™ Lint, which we announced last year. Catapult synthesizes ANSI C++ and SystemC to RTL code. The advantage of starting with C code is that verification can be done over 100X faster than at RTL, and designers get early feedback on their design architecture in terms of performance, area, and power. By integrating with Calypto’s synthesis tool, Ascent Lint enables designers to quickly go from the system level to gates, secure in the knowledge that their RTL code meets all of the industry quality standards in their implementation flow.
Besides our new integration with MathWorks, a number of other features and enhancements were introduced to Ascent Lint to keep it the highest-performance linting tool in the marketplace. For a perspective on these new capabilities, please view the video below by Srinivas Vaidyanathan, staff technical engineer at Real Intent.
It’s Time to Embrace Objective-driven Verification
Dr. Pranav Ashar Chief Technology Officer
This article was originally published on TechDesignForums and is reproduced here by permission.
Consider the Wall Street controversy over High Frequency Trading (HFT). Set aside its ethical (and legal) aspects. Concentrate on the technology. HFT exploits customized IT systems that allow certain banks to place ‘buy’ or ‘sell’ stock orders just before rivals, sometimes just milliseconds before. That tiny advantage can make enough difference to the share price paid that HFT users are said to profit on more than 90% of trades.
Now look back to the early days of electronic trading. Competitive advantage then came down to how quickly you adopted an off-the-shelf, one-size-fits-all e-trading package.
Banking has long been at computing’s cutting edge. What HFT illustrates today is a progressive shift in the strategy it uses to develop systems from tool-based (‘We have bought an e-trading system’) to objective-driven (‘Make our e-trades the fastest and most profitable’).
As I said, I want to set aside the fair/unfair debate around HFT and take it simply as a high-profile illustration of how Wall Street’s approach to IT is evolving. Banks are continuously developing other systems based on objective-driven thinking. My point is that we can draw important lessons for SoC design from this overall shift, because we are moving – and need to move – in the same direction toward objective-driven verification. The stakes are less controversial (thankfully), but we should still follow the trend more aggressively.
Wall Street’s riches point the way for objective-driven verification
‘Objective-driven verification’ defined
What do we mean by ‘objective-driven’? At a high level, the mindset of the system architect has changed: He has gone from identifying useful tools and deploying them in isolation to starting with a pre-defined goal that is achieved through a customized synthesis of available tools and methods.
Going deeper, one can identify two triggers:
A recognition that systemic tasks have become so complex it is very unlikely that you can fully realize them using a single raw tool, or even a few. Multiple tools and techniques must be combined and used in a fuller context.
A deeper understanding of the inner workings of complex systems that allows architects to isolate the processes and cause-effect relationships relevant to their objectives.
These triggers describe IT trends in logic verification as well as in banking.
The ‘system’ in verification is the SoC. The raw tools are, first, simulation, but also static-timing analysis and formal analysis. After a healthy run of around 25 years, SoC complexity has caught up with and overtaken this coarse-grain raw-tool model.
Objective-driven verification begins with that deeper understanding of the SoC architecture and the processes involved in putting it together. The objectives themselves emerge from today’s greater knowledge of failure modes and hard-to-achieve verification goals.
The model moves away from treating logic verification as monolithic. It focuses instead on specific goals. For each, we now know that custom solutions are more effective. Objective-driven verification rewards us with a much deeper, much cheaper process.
Raw tools play a role but have become interchangeable and commoditized. The productivity of an SoC design group is no longer determined by the use of a particular simulator. Rather, productivity and the viability of the design depend on how well the group adopts objective-driven solutions.
The value today therefore resides in a layer that sits on top of commoditized raw tools which contains a deep knowledge of different failure-modes within a structured workflow. This is where your big verification dollars need to be spent.
It is a disruption of a logic verification business model long based on selling raw tools. Nevertheless, the assertion that future growth will come from objective-driven verification is already well illustrated in two specific instances.
Objective-driven verification is already here
Take verification for failures caused by asynchronous clock-domain crossing (CDC). Until recently, it entailed manual design review and the use of specialized synchronizer library cells in simulation. You bought a fast simulator and then pounded stimuli onto the special cell-equipped model. This worked for crossings up to, say, the dozens. But as they grew in number and complexity, the approach broke down. Asynchronous-crossing failures increased alarmingly.
In response SoC designers, aided by vendors like Real Intent, have carved out asynchronous-CDC as a distinct objective-driven verification task. They have adopted dedicated solutions and workflows that address the problem to sign-off. Objective: “There will be no failures caused by asynchronous crossings.”
Real Intent’s asynchronous CDC solution stack illustrates an objective-driven verification process. It starts with a first-principles understanding of the failure modes. Around that is built a synergy of structural analysis methods, formal analysis methods and simulation hooks. A workflow then guides the user through an iterative chip-environment setup and the progressive refinement of verification results until full-chip sign-off is achieved.
This workflow component shows that objective-driven verification goes beyond simply a rediscovery of the ‘point tool’. Context, relationships with other ‘objectives’ and their solutions, relevance to the overall goal, and even the UI play subtle but important roles that they did not in the point-tool era.
Every SoC taped out today goes through an explicit asynchronous CDC sign-off based on a dedicated static solution of this type. However, I would note that the workflows associated with different solutions are materially different and lead to measurably different levels of productivity and quality of final results.
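To make the idea of a dedicated structural CDC check concrete, here is a deliberately tiny Python sketch: it flags flop-to-flop paths that cross clock domains without landing on a recognized synchronizer. The netlist model and synchronizer recognition are toys, far simpler than what a production solution such as Meridian CDC actually implements.

```python
# Toy structural CDC check. A real tool works on the full netlist and
# recognizes many synchronization idioms; this sketch checks only one.

flops = {                      # flop name -> clock domain
    "tx_data":  "clk_a",
    "sync_ff1": "clk_b",
    "sync_ff2": "clk_b",
    "rx_raw":   "clk_b",
}
edges = [                      # direct flop-to-flop connections
    ("tx_data", "sync_ff1"),   # crossing into a recognized synchronizer
    ("sync_ff1", "sync_ff2"),
    ("tx_data", "rx_raw"),     # raw crossing: a CDC bug
]
synchronizers = {"sync_ff1"}   # first stage of a known 2-flop synchronizer

def unsynchronized_crossings(flops, edges, synchronizers):
    """Report every domain crossing that does not land on a synchronizer."""
    bugs = []
    for src, dst in edges:
        if flops[src] != flops[dst] and dst not in synchronizers:
            bugs.append((src, dst))
    return bugs

print(unsynchronized_crossings(flops, edges, synchronizers))
# -> [('tx_data', 'rx_raw')]
```

The point of the sketch is the first-principles framing: the check encodes knowledge of a failure mode (metastability on raw crossings) rather than relying on stimulus to stumble onto it.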
Objective-driven verification is also becoming the norm in X propagation. Logic simulation has long been an imperfect tool here: It can still incorrectly turn a deterministic value into an X, or an X into a deterministic value. The second effect is worse because it can mask bugs, giving false confidence in the chip’s correctness.
These insidious failures make it imperative that SoC design teams deploy objective-driven verification to catch them early. The same template applies as for asynchronous CDC: Synergistic structural and formal analysis with simulation hooks are joined to an intuitive and iterative workflow. This delivers progressively better results.
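A toy three-valued model shows why plain simulation mishandles X. The functions below are illustrative only; real simulators and X-accurate analyses are far more involved.

```python
# Three-valued logic sketch: values are 0, 1, or "X" (unknown).

def mux(sel, a, b):
    """Simulator-style if/else: an X on the select optimistically falls
    through to the else branch, which can hide a real bug (X-optimism)."""
    return a if sel == 1 else b

def mux_accurate(sel, a, b):
    """X-accurate mux: an unknown select yields a known value only when
    both data inputs agree."""
    if sel == "X":
        return a if a == b else "X"
    return a if sel == 1 else b

# With the select unknown, plain simulation silently picks one branch...
print(mux("X", 1, 0))           # -> 0 (looks deterministic, may mask a bug)
# ...while the accurate model reports the uncertainty.
print(mux_accurate("X", 1, 0))  # -> X
print(mux_accurate("X", 1, 1))  # -> 1 (genuinely known either way)
```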
The list of high-value goals to which we can apply objective-based verification is getting longer. The broader concept is spreading quickly out from Wall Street’s deep-pocketed IT pioneers. And importantly for SoC design, objective-based verification techniques for asynchronous CDC and X-effects already demonstrate a value you can – well – take to the bank.
Autoformal: The Automatic Vacuum for Your RTL Code
Graham Bell Vice President of Marketing at Real Intent
The Roomba automatic vacuum cleaner may be the most popular home robot in the world. It wakes up, wanders around your house collecting ‘dust bunnies’ and other dirt and then parks itself, where it can recharge and be ready for the next cleaning cycle.
Real Intent also offers an automatic tool that cleans up your RTL code. Ascent IIV is an autoformal tool that automatically analyzes the implied intent of your RTL code. It verifies different kinds of sequences and reports back on those that are suspicious. Because the analysis is smart and hierarchical, it reports primary errors that, when corrected, can remove a cascade of secondary errors.
Here is a quick list of checks that Ascent IIV automatically performs:
FSM deadlocks and unreachable states
Bus contention and floating busses
Full- and Parallel-case pragma violations
Constant RTL expressions, nets & state vector bits
SystemVerilog ‘unique’, ‘unique0’, and ‘priority’ checks for if and case constructs
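At their core, the FSM checks in the list above reduce to reachability questions on the state graph. A toy Python sketch (the state names and transitions are invented for illustration):

```python
from collections import deque

# Toy versions of two autoformal FSM checks: unreachable states (never
# entered from reset) and deadlock states (no way out once entered).
transitions = {
    "IDLE": {"RUN"},
    "RUN":  {"IDLE", "HALT"},
    "HALT": set(),        # deadlock: no outgoing transition
    "DBG":  {"IDLE"},     # unreachable from reset
}

def reachable(fsm, reset):
    """Breadth-first search from the reset state."""
    seen, work = {reset}, deque([reset])
    while work:
        for nxt in fsm[work.popleft()]:
            if nxt not in seen:
                seen.add(nxt)
                work.append(nxt)
    return seen

live = reachable(transitions, "IDLE")
print(sorted(set(transitions) - live))                 # -> ['DBG']  (unreachable)
print(sorted(s for s in live if not transitions[s]))   # -> ['HALT'] (deadlock)
```

A real autoformal tool extracts this graph from the RTL and reasons about guard conditions, rather than being handed an explicit transition table.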
In July, Real Intent announced a new release of Ascent IIV. Here is a video interview with Lisa Piper, senior technical marketing manager, discussing how IIV makes debug even easier with new features such as causation trees and focused custom reports.
It is a fact of life that as soon as RTL designers start writing the code for their modules, they will begin to introduce unintended errors. To eliminate these errors, designers use a variety of tools to ensure the code is correct before hand-off. Functional errors are typically caught by a mix of static tools (auto-formal and assertion-based) and simulation. But before designers start to uncover functional errors, their code should pass RTL linting. Linting delivers very quick feedback on troublesome and even dangerous coding styles that cause problems which show up in simulation but take much longer to uncover there. With the right lint tool, you can catch the “low-hanging fruit” before tackling functional errors.
As the code goes through refinement by a developer, Real Intent’s Ascent Lint is applicable at any stage of RTL maturity. Designers can be working with a mix of internally developed and external IPs with different levels of maturity and compatibility. And they can check their RTL early and often through development, confident it is ready for integration with other modules.
To bring this mix of IPs together under one umbrella, Real Intent recommends using a succession of lint policy files. Each policy file applies a set of lint rules intended to move the code to a significantly greater level of maturity. The policies are tailored to apply across a broad spectrum of design types, but may be adjusted as needed. Design teams, after careful consideration, may skip individual steps in the flow in keeping with their priorities. Additionally, the sequence of policies is optimized for early detection, faster debug and low noise. Here again, design teams may choose to re-order the recommendations based on their best practices.
HDL maturity is broadly classified into three stages of Initial, Mature and Handoff with an associated policy file. The three stages are defined as follows:
Initial RTL – Initial RTL represents the early phase, where the requirements may still be evolving. The Initial checks ensure that regression and build failures are caught early.
Mature RTL – Modeling costs, simulation-synthesis mismatches, FSM complexity, etc. are higher order aspects of freeze-ready RTL that can significantly impact the design quality. The Mature RTL checks ensure necessary conditions for downstream interoperability.
Handoff RTL – At the handoff stage, the checks are geared towards compliance with industry standards or internal conventions, to allow easy integration and reuse.
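A staged policy flow can be pictured as a growing set of enabled rule IDs per stage. The rule names below are invented for illustration and are not Ascent Lint’s actual rule identifiers.

```python
# Sketch of staged lint policies: each later stage enables more rules.
# Rule IDs are hypothetical, not real Ascent Lint rule names.
POLICIES = {
    "initial": {"SYNTAX", "UNDRIVEN_NET"},
    "mature":  {"SYNTAX", "UNDRIVEN_NET", "SIM_SYNTH_MISMATCH",
                "FSM_COMPLEXITY"},
    "handoff": {"SYNTAX", "UNDRIVEN_NET", "SIM_SYNTH_MISMATCH",
                "FSM_COMPLEXITY", "NAMING_CONVENTION"},
}

def run_lint(findings, stage):
    """Keep only the findings whose rule is enabled at this stage."""
    enabled = POLICIES[stage]
    return [f for f in findings if f[0] in enabled]

findings = [("SYNTAX", "mod.v:10"), ("NAMING_CONVENTION", "mod.v:3")]
print(run_lint(findings, "initial"))   # naming noise suppressed early on
print(run_lint(findings, "handoff"))   # everything enforced at handoff
```

The design choice this models is noise control: early in development the tool stays quiet about style, so real breakage is not buried.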
By fixing errors earlier in the design flow, with static verification such as Lint, significant project timeline savings can be achieved. Designers realize maximum productivity through using a staged set of policy files that address each level of code maturity. And designers can be confident that when their code is integrated into the project, they will not “look bad” to other team members, since they are delivering quality RTL for downstream simulation and implementation.
Graham Bell Vice President of Marketing at Real Intent
In our last post in this series, part 4, we looked at the costs associated with debugging and sign-off verification. In this final posting, we propose a practical and efficient CDC verification methodology.
Template recognition vs. report quality trade-off
First-generation CDC tools employed structural analysis as the primary verification technology. Given the lack of precision of this technology, users are often required to specify structural templates for verification. With the size and complexity of today’s SoCs, template specification becomes a cumbersome process in which debugging cost is traded for setup cost. The checking limitations imposed by templates may reduce the report volume, but they also increase the risk of missing errors. In general, template-based checking requires significant manual effort to be used effectively.
Top-level vs. block-level verification trade-off
Top-level verification reduces the setup requirements for CDC verification, but it can result in higher debugging cost as the design matures through iterations. Block-level verification, on the other hand, identifies errors earlier and at smaller complexity levels, leading to a cleaner top-level verification: the top-level debugging cost is reduced, but the overall setup and run-time cost increases.
RTL vs. netlist verification trade-off
As mentioned earlier, netlist analysis can cover all the CDC error sources, but the debugging cost is very high at the netlist level. Also, delaying error detection until much later in the design cycle can have a serious impact on schedules. RTL analysis, however, does not cover all CDC error sources, and this requires that CDC verification also be run on netlists.
A practical and efficient CDC verification methodology
After evaluating the various considerations as mentioned above, we recommend the following CDC-verification methodology to accomplish high-quality verification with minimal engineering cost:
Automatically create the functional setup for the top-level design, leveraging SDC.
Automatically complete the functional setup.
Use setup verification techniques to refine top-level functional setup.
Identify the sub-blocks for initial CDC verification.
Automatically generate block-level functional setup from the top-level.
Run thorough block level CDC verification.
Examine the generated functional setup for correctness.
Run structural analysis.
Identify and fix gross design errors or refine functional setup.
Run formal analysis for precise error identification.
Debug and fix design or refine functional setup.
Iterate verification steps until clean.
Run thorough top-level CDC verification with block-level result inheritance.
Run thorough netlist CDC verification.
Figure 16. A top-down, bottom-up verification flow.
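In skeleton form, the flow above pairs automatic top-down setup with bottom-up block verification before the final top-level run. The Python below is an orchestration stand-in only; every function body abstracts away an entire tool step.

```python
# Stand-in skeleton of the top-down/bottom-up CDC flow; the real steps
# (setup generation, structural/formal analysis) are tool runs, not code.

def verify_block(block, setup):
    """Iterate structural then formal analysis until the block is clean."""
    issues = ["structural", "formal"]        # stand-in result lists
    while issues:
        issues = []                          # 'fix design / refine setup'
    return {"block": block, "status": "clean", "setup": setup}

def cdc_signoff(top, blocks):
    top_setup = "derived-from-SDC"           # automatic top-level setup
    results = [verify_block(b, top_setup) for b in blocks]   # bottom-up
    # the top-level run then inherits the clean block-level results,
    # followed by the netlist-level run (not modeled here)
    return all(r["status"] == "clean" for r in results)

print(cdc_signoff("soc_top", ["usb", "ddr", "cpu"]))  # -> True
```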
Figure 17 compares the characteristics of first- and second-generation CDC tools across seven different categories. It summarizes the advantages of this new generation of design verification, with the most dramatic changes being in the efficiency of sign-off warnings, debug and verification methodology. We believe that sign-off verification is now possible and, more importantly, is a requirement for complex SoC designs.
Figure 17. Spider chart for first-generation and second-generation CDC tools.
Today, the number of clock domains in a complex SoC design can easily exceed 100, and the gate count is well over 100 million instances. The first generation of CDC tools was not engineered to handle this kind of complexity, and a second-generation tool-set is essential to reduce CDC failure risk and avoid wasting engineering resources. This second generation maximizes automation, using special formal techniques and automatic generation of top-level and block-level setups to accomplish high-quality verification. A hierarchical top-down, bottom-up methodology that takes advantage of the inherited results of both top- and block-level analysis minimizes the manual debug effort in CDC verification.
Video Keynote: New Methodologies Drive EDA Revenue Growth
Graham Bell Vice President of Marketing at Real Intent
Wally Rhines from Mentor gave an excellent keynote at the 51st Design Automation Conference on how EDA grows by solving new problems. In his short talk, he references an earlier keynote he gave back in 2004 and what has changed in the EDA industry since that time.
Here is a quick quote from his presentation: “Our capability in EDA today is largely focused on being able to verify that a chip does what it’s supposed to do. The problem of verifying that it doesn’t do anything it’s NOT supposed to do is a much more difficult one, a bigger one, but one for which governments and corporations would pay billions of dollars for to even partially solve.”
Where do you think future growth will come in EDA?
The original video is from the DAC web-site video archive and can be seen here. Wally’s full presentation is here.
WALDEN C. RHINES is Chairman and Chief Executive Officer of Mentor Graphics, a leader in worldwide electronic design automation with revenue of $1.2 billion in 2013. During his tenure at Mentor Graphics, revenue has more than tripled, and Mentor has grown solutions holding the industry’s number-one market share in four of the ten largest product segments of the EDA industry.
Prior to joining Mentor Graphics, Rhines was Executive Vice President of Texas Instruments’ Semiconductor Group, sharing responsibility for TI’s Components Sector, and having direct responsibility for the entire semiconductor business with more than $5 billion of revenue and over 30,000 people.
During his 21 years at TI, Rhines managed TI’s thrust into digital signal processing and supervised that business from inception with the TMS320 family of DSPs through growth to become the cornerstone of TI’s semiconductor technology. He also supervised the development of the first TI speech synthesis devices (used in “Speak & Spell”) and is co-inventor of the GaN blue-violet light emitting diode (now important for DVD players and low-energy lighting). He was President of TI’s Data Systems Group and held numerous other semiconductor executive management positions.
Rhines has served five terms as Chairman of the Electronic Design Automation Consortium and is currently serving as co-vice-chairman. He is also a board member of the Semiconductor Research Corporation and First Growth Family & Children Charities. He has previously served as chairman of the Semiconductor Technical Advisory Committee of the Department of Commerce, as an executive committee member of the board of directors of the Corporation for Open Systems and as a board member of the Computer and Business Equipment Manufacturers’ Association (CBEMA), SEMI-Sematech/SISA, Electronic Design Automation Consortium (EDAC), University of Michigan National Advisory Council, Lewis and Clark College and SEMATECH.
Dr. Rhines holds a Bachelor of Science degree in metallurgical engineering from the University of Michigan, a Master of Science and Ph.D. in materials science and engineering from Stanford University, a master of business administration from Southern Methodist University and an Honorary Doctor of Technology degree from Nottingham Trent University.
Ramesh Dewangan Vice President of Application Engineering at Real Intent
Weird things can happen during a presentation to a customer!
I was visiting a customer site giving an update on the latest release of our Ascent and Meridian products. It was taking place during the middle of the day, in a large meeting room, with more than 30 people in the audience. Everything seemed to be going smoothly.
Suddenly there was an uproar, with clapping and cheers coming from an adjacent break room. Immediately, everyone in my audience opened their laptops, and grinned or groaned at the football score.
The 2014 FIFA World Cup soccer championship game was in full swing!
As Germany scored at will against Brazil, I lost count of the reactions by the end of the match! The final score was a crushing 7-1.
It disturbed my presentation alright, but it also gave me some food for thought.
If I look at SoC design as a SoCcer game, the bugs hiding in the design are like potential goals scored against us, the chip designers. We are defending our chip against bugs. Bugs can relate to design rules (bus contention), state machines (unreachable states, dead code), X-optimism (X propagating through X-sensitive constructs), clock-domain crossings (re-convergence or glitches on asynchronous crossings), and so on.
Bugs can be found quickly, when the attack formation of our opponent is easy to see, or hard to find if the attack formation is very complex and well-disguised.
It is obvious that more goals will be scored against us if we are poorly prepared. The only way to avoid bugs (scores against us) is to build a good defense. What are some defenses we can deploy for successful chips?
We need to have design RTL that is free from design rule issues, free of deadlocks in its state machines, free from X-optimism and pessimism issues, and employs properly synchronized CDC for both data and resets and have proper timing constraints to go with it.
Can’t we simply rely on smart RTL design and verification engineers to prevent bugs? No, that’s only the first line of defense. We must have the proper tools and methodologies. Just as having good players is not enough: you need a good defense strategy that the players will follow.
If you do not use proper tools and methodologies, you increase the risk of chip failure and a certain goal against the design team. That is like inviting a penalty kick. Would you really want to leave your defense to a poor lone goalkeeper? Wouldn’t you rather build a methodology with multiple defensive resources in play?
So what tools and methodologies are needed to prevent bugs? Here are some of the key needs:
RTL analysis (Linting) – to create RTL free of structural and semantic bugs
Clock domain crossing (CDC) verification – to detect and fix chip-killing CDC bugs
Functional intent analysis (also called auto-formal) – to detect and correct functional bugs well before the lengthy simulation cycle
X-propagation analysis – to reduce functional bugs due to unknown X’s in the design and ensure correct power-on reset
Timing constraints verification – to reduce the implementation cycle time and prevent chip killer bugs due to bad exceptions
Proven EDA tools like Ascent Lint, Ascent IIV, Ascent XV, Meridian CDC and Meridian Constraints meet these needs effectively and keep bugs from crossing the mid-field of your design success.
Next time, you have no excuse for scores against you (i.e. bugs in the chip). You can defend and defend well using proper tools and methodologies.
Don’t let your chips be a defenseless victim like Brazil in that game against Germany! :-)