Real Talk Blog
Blog Archive
August 2014
8/29/2014: Fundamentals of Clock Domain Crossing: Conclusion
8/21/2014: Video Keynote: New Methodologies Drive EDA Revenue Growth
8/15/2014: SoCcer: Defending your Digital Design
8/08/2014: Executive Insight: On the Convergence of Design and Verification
July 2014
7/31/2014: Fundamentals of Clock Domain Crossing Verification: Part Four
7/24/2014: Fundamentals of Clock Domain Crossing Verification: Part Three
7/17/2014: Fundamentals of Clock Domain Crossing Verification: Part Two
7/10/2014: Fundamentals of Clock Domain Crossing Verification: Part One
7/03/2014: Static Verification Leads to New Age of SoC Design
June 2014
6/26/2014: Reset Optimization Pays Big Dividends Before Simulation
6/20/2014: SoC CDC Verification Needs a Smarter Hierarchical Approach
6/12/2014: Photo Booth Blackmail at DAC in San Francisco!
6/06/2014: Quick Reprise of DAC 2014
May 2014
5/01/2014: Determining Test Quality through Dynamic Runtime Monitoring of SystemVerilog Assertions
April 2014
4/24/2014: Complexity Drives Smart Reporting in RTL Verification
4/17/2014: Video Update: New Ascent XV Release for X-optimization, ChipEx show in Israel, DAC Preview
4/11/2014: Design Verification is Shifting Left: Earlier, Focused and Faster
4/03/2014: Redefining Chip Complexity in the SoC Era
March 2014
3/27/2014: X-Verification: A Critical Analysis for a Low-Power World (Video)
3/14/2014: Engineers Have Spoken: Design And Verification Survey Results
3/06/2014: New Ascent IIV Release Delivers Enhanced Automatic Verification of FSMs
February 2014
2/28/2014: DVCon Panel Drill Down: “Where Does Design End and Verification Begin?” – Part 3
2/20/2014: DVCon Panel Drill Down: “Where Does Design End and Verification Begin?” – Part 2
2/13/2014: DVCon Panel Drill Down: “Where Does Design End and Verification Begin?” – Part 1
2/07/2014: Video Tech Talk: Changes In Verification
January 2014
1/31/2014: Progressive Static Verification Leads to Earlier and Faster Timing Sign-off
1/30/2014: Verific’s Front-end Technology Leads to Success and a Giraffe!
1/23/2014: CDC Verification of Fast-to-Slow Clocks – Part Three: Metastability Aware Simulation
1/16/2014: CDC Verification of Fast-to-Slow Clocks – Part Two: Formal Checks
1/10/2014: CDC Verification of Fast-to-Slow Clocks – Part One: Structural Checks
1/02/2014: 2013 Highlights And Giga-scale Predictions For 2014
December 2013
12/13/2013: Q4 News, Year End Summary and New Videos
12/12/2013: Semi Design Technology & System Drivers Roadmap: Part 6 – DFM
12/06/2013: The Future is More than “More than Moore”
November 2013
11/27/2013: Robert Eichner’s presentation at the Verification Futures Conference
11/21/2013: The Race For Better Verification
11/18/2013: Experts at the Table: The Future of Verification – Part 2
11/14/2013: Experts At The Table: The Future Of Verification Part 1
11/08/2013: Video: Orange Roses, New Product Releases and Banner Business at ARM TechCon
October 2013
10/31/2013: Minimizing X-issues in Both Design and Verification
10/23/2013: Value of a Design Tool Needs More Sense Than Dollars
10/17/2013: Graham Bell at EDA Back to the Future
10/15/2013: The Secret Sauce for CDC Verification
10/01/2013: Clean SoC Initialization now Optimal and Verified with Ascent XV
September 2013
9/24/2013: EETimes: An Engineer’s Progress With Prakash Narain, Part 4
9/20/2013: EETimes: An Engineer’s Progress With Prakash Narain, Part 3
9/20/2013: CEO Viewpoint: Prakash Narain on Moving from RTL to SoC Sign-off
9/17/2013: Video: Ascent Lint – The Best Just Got Better
9/16/2013: EETimes: An Engineer’s Progress With Prakash Narain, Part 2
9/16/2013: EETimes: An Engineer’s Progress With Prakash Narain
9/10/2013: SoC Sign-off Needs Analysis and Optimization of Design Initialization in the Presence of Xs
August 2013
8/15/2013: Semiconductor Design Technology and System Drivers Roadmap: Process and Status – Part 4
8/08/2013: Semiconductor Design Technology and System Drivers Roadmap: Process and Status – Part 3
July 2013
7/25/2013: Semiconductor Design Technology and System Drivers Roadmap: Process and Status – Part 2
7/18/2013: Semiconductor Design Technology and System Drivers Roadmap: Process and Status – Part 1
7/16/2013: Executive Video Briefing: Prakash Narain on RTL and SoC Sign-off
7/05/2013: Lending a ‘Formal’ Hand to CDC Verification: A Case Study of Non-Intuitive Failure Signatures — Part 3
June 2013
6/27/2013: Bryon Moyer: Simpler CDC Exception Handling
6/21/2013: Lending a ‘Formal’ Hand to CDC Verification: A Case Study of Non-Intuitive Failure Signatures — Part 2
6/17/2013: Peggy Aycinena’s interview with Prakash Narain
6/14/2013: Lending a ‘Formal’ Hand to CDC Verification: A Case Study of Non-Intuitive Failure Signatures — Part 1
6/10/2013: Photo Booth Blackmail!
6/03/2013: Real Intent is on John Cooley’s “DAC’13 Cheesy List”
May 2013
5/30/2013: Does SoC Sign-off Mean More Than RTL?
5/24/2013: Ascent Lint Rule of the Month: DEFPARAM
5/23/2013: Video: Gary Smith Tells Us Who and What to See at DAC 2013
5/22/2013: Real Intent is on Gary Smith’s “What to see at DAC” List!
5/16/2013: Your Real Intent Invitation to Fun and Fast Verification at DAC
5/09/2013: DeepChip: “Real Intent’s not-so-secret DVcon’13 Report”
5/07/2013: TechDesignForum: Better analysis helps improve design quality
5/03/2013: Unknown Sign-off and Reset Analysis
April 2013
4/25/2013: Hear Alexander Graham Bell Speak from the 1880's
4/19/2013: Ascent Lint rule of the month: NULL_RANGE
4/16/2013: May 2 Webinar: Automatic RTL Verification with Ascent IIV: Find Bugs Simulation Can Miss
4/05/2013: Conclusion: Clock and Reset Ubiquity – A CDC Perspective
March 2013
3/22/2013: Part Six: Clock and Reset Ubiquity – A CDC Perspective
3/21/2013: The BIG Change in SoC Verification You Don’t Know About
3/15/2013: Ascent Lint Rule of the Month: COMBO_NBA
3/15/2013: System-Level Design Experts At The Table: Verification Strategies – Part One
3/08/2013: Part Five: Clock and Reset Ubiquity – A CDC Perspective
3/01/2013: Quick DVCon Recap: Exhibit, Panel, Tutorial and Wally’s Keynote
3/01/2013: System-Level Design: Is This The Era Of Automatic Formal Checks For Verification?
February 2013
2/26/2013: Press Release: Real Intent Technologist Presents Power-related Paper and Tutorial at ISQED 2013 Symposium
2/25/2013: At DVCon: Pre-Simulation Verification for RTL Sign-Off includes Automating Power Optimization and DFT
2/25/2013: Press Release: Real Intent to Exhibit, Participate in Panel and Present Tutorial at DVCon 2013
2/22/2013: Part Four: Clock and Reset Ubiquity – A CDC Perspective
2/18/2013: Does Extreme Performance Mean Hard-to-Use?
2/15/2013: Part Three: Clock and Reset Ubiquity – A CDC Perspective
2/07/2013: Ascent Lint Rule of the Month: ARITH_CONTEXT
2/01/2013: “Where Does Design End and Verification Begin?” and DVCon Tutorial on Static Verification
January 2013
1/25/2013: Part Two: Clock and Reset Ubiquity – A CDC Perspective
1/18/2013: Part One: Clock and Reset Ubiquity – A CDC Perspective
1/07/2013: Ascent Lint Rule of the Month: MIN_ID_LEN
1/04/2013: Predictions for 2014, Hier. vs Flat, Clocks and Bugs
December 2012
12/14/2012: Real Intent Reports on DVClub Event at Microprocessor Test and Verification Workshop 2012
12/11/2012: Press Release: Real Intent Records Banner Year
12/07/2012: Press Release: Real Intent Rolls Out New Version of Ascent Lint for Early Functional Verification
12/04/2012: Ascent Lint Rule of the Month: OPEN_INPUT
November 2012
11/19/2012: Real Intent Has Excellent EDSFair 2012 Exhibition
11/16/2012: Peggy Aycinena: New Look, New Location, New Year
11/14/2012: Press Release: New Look and New Headquarters for Real Intent
11/05/2012: Ascent Lint HDL Rule of the Month: ZERO_REP
11/02/2012: Have you had CDC bugs slip through resulting in late ECOs or chip respins?
11/01/2012: DAC survey on CDC bugs, X propagation, constraints
October 2012
10/29/2012: Press Release: Real Intent to Exhibit at ARM TechCon 2012 – Chip Design Day
September 2012
9/24/2012: Photos of the space shuttle Endeavour from the Real Intent office
9/20/2012: Press Release: Real Intent Showcases Verification Solutions at Verify 2012 Japan
9/14/2012: A Bolt of Inspiration
9/11/2012: ARM blog: An Advanced Timing Sign-off Methodology for the SoC Design Ecosystem
9/05/2012: When to Retool the Front-End Design Flow?
August 2012
8/27/2012: X-Verification: What Happens When Unknowns Propagate Through Your Design
8/24/2012: Article: Verification challenges require surgical precision
8/21/2012: How To Article: Verifying complex clock and reset regimes in modern chips
8/20/2012: Press Release: Real Intent Supports Growth Worldwide by Partnering With EuropeLaunch
8/06/2012: SemiWiki: The Unknown in Your Design Can be Dangerous
8/03/2012: Video: “Issues and Struggles in SOC Design Verification”, Dr. Roger Hughes
July 2012
7/30/2012: Video: What is Driving Lint Usage in Complex SOCs?
7/25/2012: Press Release: Real Intent Adds to Japan Presence: Expands Office, Increases Staff to Meet Demand for Design Verification and Sign-Off Products
7/23/2012: How is Verification Complexity Changing, and What is the Impact on Sign-off?
7/20/2012: Real Intent in Brazil
7/16/2012: Foosball, Frosty Beverages and Accelerating Verification Sign-off
7/03/2012: A Good Design Tool Needs a Great Beginning
June 2012
6/14/2012: Real Intent at DAC 2012
6/01/2012: DeepChip: Cheesy List for DAC 2012
May 2012
5/31/2012: EDACafe: Your Real Intent Invitation to Fast Verification and Fun at DAC
5/30/2012: Real Intent Video: New Ascent Lint and Meridian CDC Releases and Fun at DAC 2012
5/29/2012: Press Release: Real Intent Leads in Speed, Capacity and Precision with New Releases of Ascent Lint and Meridian CDC Verification Tools
5/22/2012: Press Release: Over 35% Revenue Growth in First Half of 2012
5/21/2012: Thoughts on RTL Lint, and a Poem
5/21/2012: Real Intent is #8 on Gary Smith’s “What to see at DAC” List!
5/18/2012: EETimes: Gearing Up for DAC – Verification demos
5/08/2012: Gabe on EDA: Real Intent Helps Designers Verify Intent
5/07/2012: EDACafe: A Page is Turned
5/07/2012: Press Release: Graham Bell Joins Real Intent to Promote Early Functional Verification & Advanced Sign-Off Circuit Design Software
March 2012
3/21/2012: Press Release: Real Intent Demos EDA Solutions for Early Functional Verification & Advanced Sign-off at Synopsys Users Group (SNUG)
3/20/2012: Article: Blindsided by a glitch
3/16/2012: Gabe on EDA: Real Intent and the X Factor
3/10/2012: DVCon Video Interview: “Product Update and New High-capacity ‘X’ Verification Solution”
3/01/2012: Article: X-Propagation Woes: Masking Bugs at RTL and Unnecessary Debug at the Netlist
February 2012
2/28/2012: Press Release: Real Intent Joins Cadence Connections Program; Real Intent’s Advanced Sign-Off Verification Capabilities Added to Leading EDA Flow
2/15/2012: Real Intent Improves Lint Coverage and Usability
2/15/2012: Avoiding the Titanic-Sized Iceberg of Downton Abbey
2/08/2012: Gabe on EDA: Real Intent Meridian CDC
2/08/2012: Press Release: At DVCon, Real Intent Verification Experts Present on Resolving X-Propagation Bugs; Demos Focus on CDC and RTL Debugging Innovations
January 2012
1/24/2012: A Meaningful Present for the New Year
1/11/2012: Press Release: Real Intent Solidifies Leadership in Clock Domain Crossing
August 2011
8/02/2011: A Quick History of Clock Domain Crossing (CDC) Verification
July 2011
7/26/2011: Hardware-Assisted Verification and the Animal Kingdom
7/13/2011: Advanced Sign-off…It’s Trending!
May 2011
5/24/2011: Learn about Advanced Sign-off Verification at DAC 2011
5/16/2011: Getting A Jump On DAC
5/09/2011: Livin’ on a Prayer
5/02/2011: The Journey to CDC Sign-Off
April 2011
4/25/2011: Getting You Closer to Verification Closure
4/11/2011: X-verification: Conquering the “Unknown”
4/05/2011: Learn About the Latest Advances in Verification Sign-off!
March 2011
3/21/2011: Business Not as Usual
3/15/2011: The Evolution of Sign-off
3/07/2011: Real People, Real Discussion – Real Intent at DVCon
February 2011
2/28/2011: The Ascent of Ascent Lint (v1.4 is here!)
2/21/2011: Foundation for Success
2/08/2011: Fairs to Remember
January 2011
1/31/2011: EDA Innovation
1/24/2011: Top 3 Reasons Why Designers Switch to Meridian CDC from Real Intent
1/17/2011: Hot Topics, Hot Food, and Hot Prize
1/10/2011: Satisfaction EDA Style!
1/03/2011: The King is Dead. Long Live the King!
December 2010
12/20/2010: Hardware Emulation for Lowering Production Testing Costs
12/03/2010: What do you need to know for effective CDC Analysis?
November 2010
11/12/2010: The SoC Verification Gap
11/05/2010: Building Relationships Between EDA and Semiconductor Ventures
October 2010
10/29/2010: Thoughts on Assertion Based Verification (ABV)
10/25/2010: Who is the master who is the slave?
10/08/2010: Economics of Verification
10/01/2010: Hardware-Assisted Verification Tackles Verification Bottleneck
September 2010
9/24/2010: Excitement in Electronics
9/17/2010: Achieving Six Sigma Quality for IC Design
9/03/2010: A Look at Transaction-Based Modeling
August 2010
8/20/2010: The 10 Year Retooling Cycle
July 2010
7/30/2010: Hardware-Assisted Verification Usage Survey of DAC Attendees
7/23/2010: Leadership with Authenticity
7/16/2010: Clock Domain Verification Challenges: How Real Intent is Solving Them
7/09/2010: Building Strong Foundations
7/02/2010: Celebrating Freedom from Verification
June 2010
6/25/2010: My DAC Journey: Past, Present and Future
6/18/2010: Verifying Today’s Large Chips
6/11/2010: You Got Questions, We Got Answers
6/04/2010: Will 70 Remain the Verification Number?
May 2010
5/28/2010: A Model for Justifying More EDA Tools
5/21/2010: Mind the Verification Gap
5/14/2010: ChipEx 2010: a Hot Show under the Hot Sun
5/07/2010: We Sell Canaries
April 2010
4/30/2010: Celebrating 10 Years of Emulation Leadership
4/23/2010: Imagining Verification Success
4/16/2010: Do you have the next generation verification flow?
4/09/2010: A Bug’s Eye View under the Rug of SNUG
4/02/2010: Globetrotting 2010
March 2010
3/26/2010: Is Your CDC Tool of Sign-Off Quality?
3/19/2010: DATE 2010 – There Was a Chill in the Air
3/12/2010: Drowning in a Sea of Information
3/05/2010: DVCon 2010: Awesomely on Target for Verification
February 2010
2/26/2010: Verifying CDC Issues in the Presence of Clocks with Dynamically Changing Frequencies
2/19/2010: Fostering Innovation
2/12/2010: CDC (Clock Domain Crossing) Analysis – Is this a misnomer?
2/05/2010: EDSFair – A Successful Show to Start 2010
January 2010
1/29/2010: Ascent Is Much More Than a Bug Hunter
1/22/2010: Ascent Lint Steps up to Next Generation Challenges
1/15/2010: Google and Real Intent, 1st Degree LinkedIn
1/08/2010: Verification Challenges Require Surgical Precision
1/07/2010: Introducing Real Talk!

Fundamentals of Clock Domain Crossing: Conclusion

Graham Bell
Vice President of Marketing at Real Intent

In our last post in the series, part 4, we looked at the costs associated with debugging and sign-off verification. In this final posting, we propose a practical and efficient CDC verification methodology.

Template recognition vs. report quality trade-off

First-generation CDC tools employed structural analysis as their primary verification technology. Because this technology lacks precision, users are often required to specify structural templates for verification. Given the size and complexity of today's SOCs, template specification becomes a cumbersome process in which debugging cost is traded for setup cost. The checking limitations imposed by templates may reduce report volume, but they also increase the risk of missing errors. In general, template-based checking requires significant manual effort to use effectively.

Top-level vs. block-level verification trade-off

Top-level verification reduces the setup requirements for CDC verification, but it can incur high debugging cost while the design is still maturing through iterations. Block-level verification, on the other hand, identifies errors earlier and at lower complexity, which leads to a cleaner top-level verification: top-level debugging cost is reduced, but the overall setup and run-time cost increases.

RTL vs. netlist verification trade-off

As mentioned earlier, netlist analysis can cover all the CDC error sources, but debugging at the netlist level is very expensive, and delaying error detection until that late in the design cycle can seriously impact schedules. RTL analysis, however, does not cover all CDC error sources, so CDC verification must also be run on netlists.

A practical and efficient CDC verification methodology

After evaluating the various considerations as mentioned above, we recommend the following CDC-verification methodology to accomplish high-quality verification with minimal engineering cost:

  • Automatically create the functional setup for the top-level design, leveraging SDC.
  • Automatically complete the functional setup.
  • Use setup verification techniques to refine top-level functional setup.
  • Identify the sub-blocks for initial CDC verification.
  • Automatically generate block-level functional setup from the top-level.
  • Run thorough block-level CDC verification.
    • Examine the generated functional setup for correctness.
    • Run structural analysis.
    • Identify and fix gross design errors or refine functional setup.
    • Run formal analysis for precise error identification.
    • Debug and fix design or refine functional setup.
    • Iterate verification steps until clean.
  • Run thorough top-level CDC verification with block-level result inheritance.
  • Run thorough netlist CDC verification.
Figure 16. A top-down, bottom-up verification flow.

Figure 17 compares the characteristics of first- and second-generation CDC tools across seven different categories. It summarizes the advantages of this new generation of design verification, with the most dramatic changes being in the efficiency of sign-off warnings, debug and verification methodology. We believe that sign-off verification is now possible and, more importantly, is a requirement for complex SOC designs.

 

Figure 17. Spider chart for first-generation and second-generation CDC tools.

In summary

Today, the number of clock domains in a complex SOC design can easily exceed 100, and the gate count is well over 100 million instances. The first generation of CDC tools was not engineered to handle this kind of complexity, and a second-generation tool-set is essential to reduce CDC failure risk and avoid wasting engineering resources. This second generation maximizes automation and uses special formal techniques and automatic generation of top-level and block-level setups to accomplish high-quality verification. A hierarchical top-down, bottom-up methodology that takes advantage of the inherited results of both top- and block-level analysis minimizes the manual debug effort in CDC verification.



Aug 29, 2014


Video Keynote: New Methodologies Drive EDA Revenue Growth

Graham Bell
Vice President of Marketing at Real Intent

Wally Rhines from Mentor gave an excellent keynote at the 51st Design Automation Conference on how EDA grows by solving new problems. In his short talk, he referenced an earlier keynote he gave back in 2004 and discussed what has changed in the EDA industry since that time.

Here is a quick quote from his presentation: “Our capability in EDA today is largely focused on being able to verify that a chip does what it’s supposed to do. The problem of verifying that it doesn’t do anything it’s NOT supposed to do is a much more difficult one, a bigger one, but one for which governments and corporations would pay billions of dollars for to even partially solve.”

Where do you think future growth will come in EDA?


The original video is from the DAC web-site video archive and can be seen here.  Wally’s full presentation is here.

Biography

WALDEN C. RHINES is Chairman and Chief Executive Officer of Mentor Graphics, a leader in worldwide electronic design automation with revenue of $1.2 billion in 2013. During his tenure at Mentor Graphics, revenue has more than tripled, and Mentor has grown to hold the industry's number-one market share in four of the ten largest product segments of the EDA industry.

Prior to joining Mentor Graphics, Rhines was Executive Vice President of Texas Instruments’ Semiconductor Group, sharing responsibility for TI’s Components Sector, and having direct responsibility for the entire semiconductor business with more than $5 billion of revenue and over 30,000 people.

During his 21 years at TI, Rhines managed TI's thrust into digital signal processing and supervised that business from inception with the TMS320 family of DSPs through its growth to become the cornerstone of TI's semiconductor technology. He also supervised the development of the first TI speech synthesis devices (used in "Speak & Spell") and is co-inventor of the GaN blue-violet light emitting diode (now important for DVD players and low-energy lighting). He was President of TI's Data Systems Group and held numerous other semiconductor executive management positions.

Rhines has served five terms as Chairman of the Electronic Design Automation Consortium and is currently serving as co-vice-chairman. He is also a board member of the Semiconductor Research Corporation and First Growth Family & Children Charities. He has previously served as chairman of the Semiconductor Technical Advisory Committee of the Department of Commerce, as an executive committee member of the board of directors of the Corporation for Open Systems and as a board member of the Computer and Business Equipment Manufacturers’ Association (CBEMA), SEMI-Sematech/SISA, Electronic Design Automation Consortium (EDAC), University of Michigan National Advisory Council, Lewis and Clark College and SEMATECH.

Dr. Rhines holds a Bachelor of Science degree in metallurgical engineering from the University of Michigan, a Master of Science and Ph.D. in materials science and engineering from Stanford University, a master of business administration from Southern Methodist University and an Honorary Doctor of Technology degree from Nottingham Trent University.



Aug 21, 2014


SoCcer: Defending your Digital Design

Ramesh Dewangan
Vice President of Application Engineering at Real Intent

Weird things can happen during a presentation to a customer!

I was visiting a customer site giving an update on the latest release of our Ascent and Meridian products. It was taking place during the middle of the day, in a large meeting room, with more than 30 people in the audience. Everything seemed to be going smoothly.

Suddenly there was an uproar, with clapping and cheers coming from an adjacent break room. Immediately, everyone in my audience opened their laptops, and grinned or groaned at the football score.

The 2014 FIFA World Cup soccer championship game was in full swing!

As Germany scored at will against Brazil, I lost count of the reactions by the end of the match! The final score was a crushing 7-1.

It disturbed my presentation alright, but it also gave me some food for thought.

If I look at SoC design as a SoCcer game, the bugs hiding in the design are like potential scores against us, the chip designers. We are defending our chip against bugs. Bugs can be related to various issues with design rules (bus contention), state machines (unreachable states, dead code), X-optimism (X propagating through X-sensitive constructs), clock domain crossing (re-convergence or glitches on asynchronous crossings), and so on.

Bugs can be found quickly, when the attack formation of our opponent is easy to see, or hard to find if the attack formation is very complex and well-disguised.

It is obvious that more goals will be scored against us if we are poorly prepared. The only way to avoid bugs (scores against us) is to build a good defense. What are some defenses we can deploy for successful chips?

We need to have design RTL that is free from design rule issues, free of deadlocks in its state machines, free from X-optimism and pessimism issues, and employs properly synchronized CDC for both data and resets and have proper timing constraints to go with it.

Can't we simply rely on smart RTL design and verification engineers to prevent bugs? No, that's only the first line of defense. We must have the proper tools and methodologies. Just as in soccer, having good players is not enough; you need a good defense strategy that the players will follow.

If you do not use proper tools and methodologies, you increase the risk of chip failure and a certain goal against the design team. That is like inviting a penalty kick. Would you really want to leave your defense to a poor lone goalkeeper? Wouldn't you rather build a methodology with multiple defensive resources in play?

So what tools and methodologies are needed to prevent bugs? Here are some of the key needs:

  • RTL analysis (Linting) – to create RTL free of structural and semantic bugs
  • Clock domain crossing (CDC) verification – to detect and fix chip-killing CDC bugs
  • Functional intent analysis (also called auto-formal) – to detect and correct functional bugs well before the lengthy simulation cycle
  • X-propagation analysis – to reduce functional bugs due to unknown values (X's) in the design and ensure correct power-on reset
  • Timing constraints verification – to reduce the implementation cycle time and prevent chip-killing bugs due to bad exceptions

Proven EDA tools like Ascent Lint, Ascent IIV, Ascent XV, Meridian CDC and Meridian Constraints meet these needs effectively and keep bugs from crossing the mid-field of your design success.

Next time, you have no excuse for scores against you (i.e. bugs in the chip). You can defend and defend well using proper tools and methodologies.

Don't let your chips be a defenseless victim like Brazil in that game against Germany! :-)



Aug 15, 2014


Executive Insight: On the Convergence of Design and Verification

Dr. Pranav Ashar
CTO of Real Intent

This article was originally published on TechDesignForums and is reproduced here by permission.

Sometimes it’s useful to take an ongoing debate and flip it on its head. Recent discussion around the future of simulation has tended to concentrate on aspects best understood – and acted upon – by a verification engineer. Similarly, the debate surrounding hardware-software flow convergence has focused on differences between the two.

Pranav Ashar, CTO of Real Intent, has a good position from which to look across these silos. His company is seen as a verification specialist, particularly in areas such as lint, X-propagation and clock domain crossing. But talk to some of its users and you find they can be either design or verification engineers.

How Real Intent addresses some of today’s challenges – and how it got there – offer useful pointers on how to improve your own flow and meet emerging or increasingly complex tasks.

“We’ve seen this and said this before, but for today’s big systems, you don’t want to do a lot of separate design and verification,” Ashar says. “Each represents a major project in itself and until now each has required its own process. When things become as complex as they have, you have to interweave them.

“This isn’t just because it is inherently more efficient. The level of complexity is such that it becomes predictable that the boundary between the two will blur. That’s happening and it will continue to happen. It’s critical to understand that it is almost a natural evolution.”

The next issue is how to communicate this and the flow changes it requires on both sides of the D&V divide. In some cases, you don’t. Instead, you present information to different communities in the way they most easily understand given existing working practices.

In Real Intent’s latest update to Ascent XV (its X-verification and reset suite), the company worked from the assumption that different disciplines look at things in different ways. The verification engineer concentrates on X-related issues; the design engineer wants detail on resets, power management schemes and proliferating clocks. The company tailored the tool’s interfaces and outputs accordingly.

Real Intent is not alone in adopting this approach. But perhaps it is only a beginning.

Fuzzy verification boundaries

Ashar draws a useful comparison with the ongoing debate over hardware-software co-design, and the similar tailoring of tools to users that it has seen.

“The underlying technologies for hardware and software are in many respects very similar. For example, execution paths are important on both sides. Having said that, though, the computational paradigms are different as are the data management procedures. Aspects like that, right now, explain why debug tools have different flavors, why they are presented to the user in different ways,” he notes.

“But, in terms of this whole hardware/software debate, we still seem to talk more about two separate worlds. Where there seems to be less discussion is, again, in terms of these fuzzy boundaries. So, we don’t talk much about how the hardware is increasingly looking like the software. Yet, the abstraction layers above RTL do look more and more like software algorithms, and they are becoming a lot more important in terms of how a system is assembled.”

Coming back to the world of verification, Ashar suggests an approach that, while it may not define two different disciplines, could more closely align them.

“Simulation,” he says, “is a last resort. It largely comes about because of things that we do not understand. It is a back stop.”



Aug 8, 2014


Fundamentals of Clock Domain Crossing Verification: Part Four

Graham Bell
Vice President of Marketing at Real Intent

Last time we discussed the practical considerations for CDC verification. In this posting, we look at the costs associated with debugging and sign-off verification.

Design setup cost

Design setup starts with importing the design. With the increasing complexity of SOCs, designs include RTL and netlist blocks in a Verilog and VHDL mixed-language environment. In addition, functional setup is required for good quality of verification. A typical SOC has multiple modes of operation characterized by clocking schemes, reset sequences and mode controls. Functional setup requires the design to be set up in functionally valid modes for verification, by proper identification of clocks, resets and mode select pins. Bad setup can lead to poor quality of verification results.

Given the management complexity for the multitude of design tasks, it is highly desirable that there be a large overlap between setup requirements for different flows. For example, design compilation can be accomplished by processing the existing simulation scripts. Also, there is a large overlap between the functional setup requirements for CDC and that for static timing analysis. Hence, STA setup, based upon Synopsys Design Constraints (SDCs), can be leveraged for cost-effective functional setup.

Design constraints are usually either requirements or properties in your design. You use constraints to ensure that your design meets its performance goals and pin assignment requirements. Traditionally these are timing constraints but can include power, synthesis, and clocking.

Timing constraints represent the performance goals for your designs. Designer software uses timing constraints to guide the timing-driven optimization tools (synthesis) in order to meet these goals. You can set timing constraints either globally or to a specific set of paths in your design. You can apply timing constraints to:

  • Specify the required minimum speed of a clock domain.
  • Set the input and output port timing information.
  • Define the maximum delay for a specific path.
  • Identify paths that are considered false and excluded from the analysis.
  • Identify paths that require more than one clock cycle to propagate the data.
  • Provide the external load at a specific port.

Correct functional setup of large designs may require setup of a very large number of signals. This cumbersome and time-consuming drudgery can be avoided with automatic setup generation. Also, setup has a first-order effect on the quality of verification. Hence, early feedback on setup quality can lead to easy and effective setup refinement for high-quality verification.

Figure 14. Design setup flow.

 

Debugging and sign-off cost

The debugging cost depends upon the number of errors flagged by the CDC tool. Assuming good setup, this in turn depends upon the size, CDC complexity and maturity of the design. Typically, the debugging cost for top-level runs on immature designs will be high, because the design may contain a large number of immature CDC interfaces. These can generate a large number of failures requiring significant debugging effort, and the ownership of the CDC interfaces may be distributed among multiple designers.

Debugging cost is heavily dependent upon the reporting style of the tools. Source-code-oriented reporting relates the errors to the real source, i.e., HDL functionality, and it produces much more compact reports. CDC verification employs multiple technologies of increasing sophistication, such as structural analysis and formal analysis, so a composite report is essential to determine the overall quality of CDC verification. For waveform review, most waveform viewers can read the industry-standard waveform database known as Value Change Dump (VCD).

Good clock-domain, functional, structural and VCD visualization is essential for effective debugging. Automated and advanced pre-processing of these views, to isolate the error context, further reduces the debugging cost. Finally, debugging support requires advanced sign-off capabilities so that the same issues are not analyzed multiple times in the iterative verification flow.

Verification run-time cost

CDC checking is based upon multiple technologies with varying degrees of precision. In the first step, structural techniques are used to identify clock-domain crossings and to identify possible error sources in the design. Structural analysis tends to be relatively fast and is very useful at detecting gross errors in the design. To guarantee design correctness, however, structural analysis identifies all potential errors in the design. This set can be very large.

As an example, consider the design in Figure 12. This reduced-latency design can operate correctly or can be erroneous depending upon the relative frequency of the clock domains. Also, this structure can be included in a more complex interface that handles stalls and other issues, making precise structural identification difficult. To avoid compromising the quality of checking, a structural technique has to flag this interface for manual review and sign-off.

Formal analysis is an excellent technology for filtering out false failures from structural analysis and for precisely identifying failures in the design. As mentioned earlier, traditional formal analysis is built to analyze steady-state design behavior, and such formal techniques are incapable of formally analyzing uncertain behavior caused by metastability and glitches. As a result, special formal-analysis techniques that are capable of handling behavioral uncertainty are needed for CDC applications. For example, consider the failure shown in Figure 13. Here the MCP on the data path is violated because of a hazard. Vanilla formal analysis will pass the data stability check (MCP) for this structure; data stability for CDC interfaces can only be proven with glitch-sensitive formal-analysis techniques.

Formal analysis needs to be seamlessly integrated into the application all the way from invocation to reporting and debugging. This eliminates the huge overhead of integrating external formal-analysis tools into the flow and of correlating the results from these different tools to arrive at an integrated view of the verification status.

As the computational complexity of formal analysis is very high, this can require a large amount of computation time. This cost is well worth it, however, as it provides significant savings in debugging and sign-off cost.

 

Figure 15. Verification and debug flow.

Next time we will look at a practical and efficient CDC verification methodology.



Jul 31, 2014


Fundamentals of Clock Domain Crossing Verification: Part Three

Graham Bell
Vice President of Marketing at Real Intent

Last time we looked at design principles and the design of CDC interfaces. In this posting, we will look at verifying CDC interfaces and the practical considerations for deploying CDC verification.

Verifying CDC interfaces

A typical SOC is made up of a large number of CDC interfaces. From the discussion above, CDC verification can be accomplished by executing the following steps in order:

  • Identification of CDC signals.
  • Classification of CDC signals as control and data.
  • Hazard/ glitch robustness of control signals.
  • Verification of single signal transition (gray coding) of control signals.
  • Verification of control stability (pulse-width requirement).
  • Verification of MCP operation (stability) of data signals.

All verification processes are iterative and achieve design quality by iteratively identifying design errors, debugging and fixing errors and re-running verification until no more errors are detected.
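
To make a few of these checks concrete, here is a hedged sketch of how the gray-coding, control-stability and MCP data-stability checks listed above might be written as SystemVerilog assertions. All names (gray_ptr, ctrl, tx_data, load_en) are illustrative assumptions, not the output of any particular tool:

```systemverilog
// Illustrative assertion sketches for three of the CDC checks listed above.
module cdc_checks #(parameter W = 8) (
  input wire         tx_clk, rx_clk, rst_n,
  input wire [3:0]   gray_ptr,  // multi-bit control crossing (e.g. a FIFO pointer)
  input wire         ctrl,      // single-bit control crossing
  input wire [W-1:0] tx_data,   // data bus under a multi-cycle-path (MCP) protocol
  input wire         load_en    // receiving-domain load enable for tx_data
);
  // Gray coding: at most one bit of the crossing control bus changes per launch clock.
  assert property (@(posedge tx_clk) disable iff (!rst_n)
    $countones(gray_ptr ^ $past(gray_ptr)) <= 1);

  // Control stability: once ctrl changes, it must hold for more than one receiving-clock cycle.
  assert property (@(posedge rx_clk) disable iff (!rst_n)
    !$stable(ctrl) |=> $stable(ctrl));

  // MCP data stability: the data bus must not change in the cycle it is captured.
  assert property (@(posedge rx_clk) disable iff (!rst_n)
    load_en |-> $stable(tx_data));
endmodule
```

In practice, a CDC tool derives and scopes such checks automatically from its structural analysis; the sketch only shows the intent of each check.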

Practical considerations for CDC verification

Effective deployment of CDC tools in the design flow requires due consideration of multiple factors. We have discovered that first-generation CDC tools were not being used effectively in design flows. Based upon feedback from users, we have identified the following factors as the most important considerations for CDC deployment:

  • Coverage of error sources.
  • Design setup cost.
  • Debugging and sign-off cost.
  • Verification run-time cost.
  • Template recognition vs. report quality trade-off.
  • Top-level vs. block-level verification trade-off.
  • RTL vs. netlist verification trade-off.

There is consistent feedback from users that minimizing the engineering cost of high-quality verification is critical for effective deployment of CDC tools.

Coverage of error sources

CDC errors can creep into a design from multiple sources. The first is inadvertent clock-domain crossing, where there is an assumption mismatch or oversight at block interfaces. The second is faulty block-level design: designers, whether through oversight or under pressure to build correct, high-performance interfaces, can make design errors. As an example, consider the protocol in Figure 12. Here, tapping Feedback Signal from an earlier flop stage can reduce the latency across the interface. But correct operation of this interface requires that the transmitting clock frequency be lower than the receiving clock frequency. Otherwise, it is possible to signal New Data before Load Data is completed.

Figure 12. Reduced latency protocol.

These two error sources are properly covered by RTL analysis, and they can also be covered by netlist analysis. But not all CDC error sources are covered by RTL analysis, because some CDC errors depend upon glitches and hazards. It is a well-known phenomenon that synthesis transformations can introduce hazards in the design, and hazards in CDC logic lead to CDC failures. Figure 13 shows an example of a design failure caused by synthesis. Here, the multiplexor implementation created a logic hazard that violated the multi-cycle path requirement on the data bus. We are aware of multiple design failures because of this phenomenon.

 

Figure 13. Logic hazard caused CDC failure.
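
To illustrate this failure mode, the sketch below (a hypothetical circuit, not the one in Figure 13) shows how a mux that looks glitch-free at RTL can acquire a hazard once synthesis decomposes it into AND-OR logic:

```systemverilog
// Hypothetical sketch of a synthesis-introduced hazard on a CDC data path.
module mux_hazard_sketch (
  input  wire sel,            // select arriving asynchronously to the receiving clock
  input  wire bus_a, bus_b,   // data sources that happen to carry the same value
  output wire data_rtl,       // RTL view of the crossing signal
  output wire data_net        // one possible post-synthesis view
);
  // RTL view: when bus_a == bus_b, toggling sel produces no event in simulation,
  // so the multi-cycle-path assumption on the crossing appears to hold.
  assign data_rtl = sel ? bus_a : bus_b;

  // Post-synthesis AND-OR form: while sel and its complement overlap during a
  // transition, both product terms can briefly be 0, glitching the crossing even
  // though bus_a == bus_b. If the receiving flop samples the glitch, the MCP
  // protocol is violated.
  assign data_net = (sel & bus_a) | (~sel & bus_b);
endmodule
```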

With the growing complexity of SOCs and the growing number of CDC interfaces on a chip, the contribution of this risk factor keeps increasing. As a result, CDC verification must be run on both RTL and netlist views of the design.

Design setup cost

Design setup starts with importing the design. With the increasing complexity of SOCs, designs include RTL and netlist blocks in a Verilog and VHDL mixed-language environment. In addition, functional setup is required for good quality of verification. A typical SOC has multiple modes of operation characterized by clocking schemes, reset sequences and mode controls. Functional setup requires the design to be set up in functionally valid modes for verification, by proper identification of clocks, resets and mode select pins. Bad setup can lead to poor quality of verification results.

Given the management complexity for the multitude of design tasks, it is highly desirable that there be a large overlap between setup requirements for different flows. For example, design compilation can be accomplished by processing the existing simulation scripts. Also, there is a large overlap between the functional setup requirements for CDC and that for static timing analysis. Hence, STA setup, based upon Synopsys Design Constraints (SDCs), can be leveraged for cost-effective functional setup.

Design constraints are usually either requirements or properties in your design. You use constraints to ensure that your design meets its performance goals and pin assignment requirements. Traditionally these are timing constraints but can include power, synthesis, and clocking.

Timing constraints represent the performance goals for your designs. Designer software uses timing constraints to guide the timing-driven optimization tools (synthesis) in order to meet these goals. You can set timing constraints either globally or to a specific set of paths in your design. You can apply timing constraints to:

  • Specify the required minimum speed of a clock domain.
  • Set the input and output port timing information.
  • Define the maximum delay for a specific path.
  • Identify paths that are considered false and excluded from the analysis.
  • Identify paths that require more than one clock cycle to propagate the data.
  • Provide the external load at a specific port.

Correct functional setup of large designs may require setup of a very large number of signals. This cumbersome and time-consuming drudgery can be avoided with automatic setup generation. Also, setup has a first-order effect on the quality of verification. Hence, early feedback on setup quality can lead to easy and effective setup refinement for high-quality verification.

 

Figure 14. Design setup flow.

In the next posting we will discuss the costs associated with debugging and sign-off verification.



Jul 24, 2014


Fundamentals of Clock Domain Crossing Verification: Part Two

Graham Bell
Vice President of Marketing at Real Intent

Last time we looked at how metastability is unavoidable and the nature of the clock domain crossing (CDC) problem.   This time we will look at design principles.

CDC design principles

Because metastability is unavoidable in CDC designs, robust CDC interfaces must follow some strict design principles.

Metastability can be contained with “synchronizers” that prevent metastability effects from propagating into the design. Figure 9 shows the configuration of a double-flop synchronizer which minimizes the load on the metastable flop. The single fan-out protects against loss of correlation because the metastable signal does not fan out to multiple flops. The probability that metastability will last longer than time t is governed by the following equation:

$P(t) = e^{-t/\tau}$

where $\tau$ is the resolution time constant, dependent upon the latch characteristics and ambient noise. This configuration resolves metastability with a very high probability, leading to a very large mean time between failures (MTBF), as governed by the equation:

$\mathrm{MTBF} = \frac{1}{T_W \cdot f_{clk} \cdot f_{data} \cdot P}$

where $T_W$ is the metastability susceptibility window of the capturing flop, $f_{clk}$ and $f_{data}$ are the receiving-clock and data-transition rates, and $P$ is the probability that metastability is not resolved within one clock cycle (the expression above evaluated at one receiving-clock period). Triple- or higher-flop configurations may be used for very fast designs.

Figure 9. Double flop synchronizer contains metastability.
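
For reference, the double-flop synchronizer of Figure 9 can be sketched in a few lines of Verilog. The module and signal names below are illustrative assumptions, not code from the original article:

```systemverilog
// Minimal double-flop synchronizer sketch: the first flop may go metastable; the
// second gives it a full receiving-clock period to resolve, and only the second
// flop's output fans out into the receiving domain.
module sync_2ff (
  input  wire rx_clk,    // receiving-domain clock
  input  wire rx_rst_n,  // receiving-domain reset, active low
  input  wire d_async,   // signal arriving from the transmitting clock domain
  output reg  q_sync     // synchronized version, safe to use in the rx_clk domain
);
  reg meta;  // first stage: single fan-out, may go metastable

  always @(posedge rx_clk or negedge rx_rst_n)
    if (!rx_rst_n) begin
      meta   <= 1'b0;
      q_sync <= 1'b0;
    end else begin
      meta   <= d_async;  // may violate setup/hold of this flop
      q_sync <= meta;     // one clock period of resolution time before use
    end
endmodule
```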

 

Designing CDC interfaces

A CDC interface is designed for reliable transfer of correlated data across the data bus, and a reliable CDC interface must follow a simple set of rules:

  • The CDC data bus must be designed for 2-cycle multi-cycle-path operation (MCP). This means that data is captured in the CDC flops on the second clock edge or later, following the launch of data. This also gives one clock cycle of the receiving clock as the timing constraint on the path. Static timing analysis should ensure that the timing constraints are met on these paths. This rule eliminates metastability for these paths. As data-bus signals are correlated, their CDC flops can not be allowed to become metastable.
  • The control signals implementing the MCP protocol can become metastable and hence must obey the following rules:
    • The controls must be properly synchronized to prevent propagation of metastability in the design.
    • The MCP is enabled by one and only one control-signal transition to eliminate loss of correlation errors (gray coding).
    • The control signals should be free of hazards/ glitches.
    • The control signals must be stable for more than one clock cycle of the receiving clock.

 

These principles can be implemented using handshake protocols or FIFO-based protocols. Figure 10 shows a simple handshake CDC protocol. This interface transmits data from the CLK1 domain to the CLK2 domain. While Data Ready is asserted, the data on the bus Data In is transmitted across the clock domain. The data availability is signaled by a transition on Control Signal, and Transmit Data is launched on the same clock edge. Control Signal is synchronized in the CLK2 domain and the transition is detected to signal Load Data. Since synchronization requires at least one cycle of CLK2, Transmit Data is received at the second edge of CLK2 or later. This creates a multi-cycle path for Transmit Data across the interface. Feedback Signal completes the handshake.

 

Figure 10. Simple handshake CDC protocol.

 

A transition on Feedback Signal is detected to drive Next Data to the interface. Figure 11 shows the timing diagram for the protocol. It should be noted that this is a simplified view of the interface: we have not incorporated the logic for initializing the interface, detecting transitions on Data Ready and dealing with stalling conditions. All these considerations, combined with latency minimization, add complexity to the design of the interface.

 

Figure 11. CDC protocol timing diagram.
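
A hedged Verilog sketch of this toggle-style handshake is shown below. As noted above, initialization details, Data Ready edge detection and stall handling are omitted, and all names are illustrative:

```systemverilog
// Simplified sketch of the handshake protocol of Figure 10 (illustrative names).
module cdc_handshake #(parameter W = 8) (
  input  wire         clk1, clk2, rst_n,
  input  wire         data_ready,          // CLK1 domain: new word available on data_in
  input  wire [W-1:0] data_in,
  output reg  [W-1:0] rx_data,             // CLK2 domain: captured word
  output wire         next_data            // CLK1 domain: previous word acknowledged
);
  // CLK1 domain: launch Transmit Data and toggle Control Signal on the same edge.
  reg [W-1:0] tx_data;
  reg         ctrl;
  always @(posedge clk1 or negedge rst_n)
    if (!rst_n) begin
      tx_data <= {W{1'b0}};
      ctrl    <= 1'b0;
    end else if (data_ready && next_data) begin
      tx_data <= data_in;
      ctrl    <= ~ctrl;                    // single-bit transition signals availability
    end

  // CLK2 domain: synchronize Control Signal and detect the transition (Load Data).
  reg [2:0] ctrl_sync;
  always @(posedge clk2 or negedge rst_n)
    if (!rst_n) ctrl_sync <= 3'b000;
    else        ctrl_sync <= {ctrl_sync[1:0], ctrl};
  wire load_data = ctrl_sync[2] ^ ctrl_sync[1];

  always @(posedge clk2)
    if (load_data) rx_data <= tx_data;     // multi-cycle path: tx_data is stable here

  // Feedback Signal: toggle back into the CLK1 domain to complete the handshake.
  reg fb;
  always @(posedge clk2 or negedge rst_n)
    if (!rst_n) fb <= 1'b0;
    else if (load_data) fb <= ~fb;

  reg [1:0] fb_sync;
  always @(posedge clk1 or negedge rst_n)
    if (!rst_n) fb_sync <= 2'b00;
    else        fb_sync <= {fb_sync[0], fb};
  assign next_data = (ctrl == fb_sync[1]); // Next Data: the crossing is idle again
endmodule
```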

Next time we will start the discussion on verifying CDC interfaces.



Jul 17, 2014


Fundamentals of Clock Domain Crossing Verification: Part One

Graham Bell
Vice President of Marketing at Real Intent

The growth of SOC designs is leading to the extensive use of asynchronous clock domains. Clock-domain-crossing (CDC) interfaces must follow strict design principles for reliable operation, and verification of proper CDC design is not possible using standard simulation and static timing-analysis (STA) techniques. As a result, CDC-verification tools have become essential in design flows.

A good understanding of the CDC problem requires an understanding of metastability and the associated design challenge.

Metastability

When the input signal to a data latch changes within the setup-and-hold window around the transition of the latching clock, the latch output can become metastable at an intermediate voltage between logical zero and one. Figure 1 shows a simplified latch implementation. The metastable state is a very high-energy state as shown in Figure 2. Because of noise in the chip environment, this metastable voltage gets disturbed and eventually resolves to a logical value. The resolution time is dependent upon the load on the latch output and the gain through the feedback loop. It is impossible, however, to predict this logical value. Also, there is an inherent delay in the resolution of the metastable output as shown in the timing diagram of Figure 3. This logical and timing uncertainty introduces unreliable behavior in the design and, without proper protection, can cause it to fail in unpredictable ways.

 

Figure 1. A simplified latch.

 

 

Figure 2. The metastability energy curve.

 

 

Figure 3. Metastability timing diagram.

 

For synchronous clock designs, timing closure with static timing analysis ensures that all paths meet timing specifications; metastability is avoided and the designs operate reliably.

Limitations of functional verification

The prevalent functional-verification methodology is based upon functional simulation. A simplified view of the simulation model is that the design behavior is evaluated using zero-delay evaluation for logic, unit-delay for flops and ideal clock behavior. Also, formal analysis makes use of the same evaluation assumptions. But both of these techniques have an inherent limitation because they only analyze the steady-state behavior of the design.

Functional verification makes a fundamental assumption that static timing analysis will account for the uncertainty in clock behavior caused by jitter and skews, and ensure that all hazards in the design subside before the clock event (timing closure). This is the default timing rule. Functional verification will be invalidated if this assumption is violated. Static timing analysis lets users specify exceptions to the default timing rules. These exceptions invalidate the functional-verification and default-timing assumptions. It is imperative that these exceptions be properly verified using timing-closure verification (TCV) for a robust design methodology. Because static timing of CDC interfaces is not possible and requires timing exceptions, CDC verification is a unique and essential component of TCV.

CDC terminology

A clock domain is defined as the set of all flops that are clocked by the associated clock. A clock-domain crossing (CDC) is defined as a flop-to-flop path where the transmitting flop is triggered by a clock that is asynchronous to the receiving flop clock. These two clock domains are considered to be relatively asynchronous. Figure 4 describes the CDC terminology used in this article. The receiving flops are referred to as CDC flops. The signals feeding the CDC flops are referred to as CDC signals.

 

 

Figure 4. Defining CDC terminology.

 

Unavoidable metastability and the CDC problem

Asynchronous clocks operate without any mutual frequency and phase relationships. As a result, it is impossible to guarantee timing on CDC paths because the launch- and capture-clock edges can be arbitrarily close, and metastability is unavoidable for CDC designs. This invalidates the assumptions of both functional simulation and formal verification, and robust design behavior cannot be assured using simulation and static timing analysis. Without proper design, CDC errors can cause random and unpredictable failures in a chip that are impossible to debug.

Metastability introduces the following failure modes in the design:

  • Loss of correlation (error E1). This happens when two or more correlated CDC flops become metastable as shown in Figures 5a and 5b. Figure 6 shows the timing diagram where these flops resolve to arbitrary logical values and lose correlation, leading to a bad design state.
  • Hazard (glitch) capture (error E2). A hazard on a CDC path can get captured in the CDC flop leading to bad design state as shown in Figure 7.
  • Loss of signal (error E3). CDC signals that are stable for less than one clock cycle of the receiving clock may not get captured in the receiving domain because of clock network uncertainties, clock alignment and metastability. Figure 8 shows a situation where the functional-verification view concludes that the signal was transmitted. In reality, the transmission can fail, leading to a bad state in the design.
  • Metastability propagation (error E4). Metastability may propagate to the next level of flops in the design if it is not resolved in a timely manner. The resolution time is dependent upon the load on the flop. Propagation of metastability may lead to a cascading of errors E1-E3.
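
As an illustration of error E1, here is a hedged sketch of the classic anti-pattern that produces loss of correlation: passing each bit of a correlated bus through its own synchronizer, so that individual bits can resolve metastability in different cycles (names are illustrative):

```systemverilog
// Anti-pattern sketch for loss of correlation (E1): every bit of a correlated bus
// gets its own two-flop synchronizer, so the bits can land in different cycles.
module bus_sync_bad #(parameter W = 4) (
  input  wire         rx_clk,
  input  wire [W-1:0] bus_async,   // multi-bit value from another clock domain
  output reg  [W-1:0] bus_sync
);
  reg [W-1:0] meta;
  always @(posedge rx_clk) begin
    meta     <= bus_async;  // each bit resolves metastability independently...
    bus_sync <= meta;       // ...so bus_sync can briefly hold a value that was never sent
  end
  // Robust designs instead gray-code the bus (one bit changes at a time) or move it
  // under an MCP/handshake protocol, as described in Part Two of this series.
endmodule
```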
Figure 5a. Loss of correlation.

 

 

Figure 5b. Loss of correlation.

 

 

Figure 6. Loss of correlation timing diagram.

 

 

Figure 7. Glitch capture.

 

Figure 8. Loss of signal.

In the next posting, we will look at CDC design principles.



Jul 10, 2014


Static Verification Leads to New Age of SoC Design

Dr. Pranav Ashar
CTO of Real Intent

SoC companies are coming to rely on RTL sign-off of many verification objectives as a means to achieve a sensible division of labor between their RTL design team and their system-level verification team. Given the sign-off expectation, the verification of those objectives at the RT level must absolutely be comprehensive.

Increasingly, sign-off at the RTL level can be accomplished using static-verification technologies. Static verification stands on two pillars: Deep Semantic Analysis and Formal Methods. With the judicious synthesis of these two, the need for dynamic analysis (a euphemism for simulation) gets pushed to the margins. To be sure, dynamic analysis continues to have a role, but increasingly as a backstop rather than the main thrust of the verification flow. Even where simulation is used, static methods play an important role in improving its efficacy.

Deep Semantic Analysis is about understanding the purpose or role of RTL structures (logic, flip-flops, state machines, etc.) in a design in the context of the verification objective being addressed. This type of intelligence is at the core of everything that Real Intent does, to the extent that it is even ingrained into the company’s name. Much of sign-off happens based just on the deep semantic intelligence in Real Intent’s tools without the invocation of classical formal analysis.

 

Further, Deep Semantic intelligence and Formal analysis play a symbiotic role in completing the sign-off. Formal analysis benefits from the precisely scoped and contextually well-structured checks generated by virtue of the Deep Semantic intelligence, and Formal analysis, in turn, proves these generated checks.

This combination is efficient for numerous verification objectives in the SoC era.

A key area is X-propagation verification. RTL simulation by its very nature is X-optimistic and can hide bugs or cause RTL and gate-level simulation results to differ. Designers need to understand the X-sensitive constructs in their design and how they can be affected by upstream X-sources. Another area of concern is ensuring that designs come out of power-up in a known state in a given number of clock cycles, and that powered-down blocks do not cause illicit behavior in the active blocks. Static analysis that combines Deep Semantic intelligence with judicious application of Formal methods is the only way to sign off on X-verification objectives in a reasonable amount of time.
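
As a small, generic illustration of RTL X-optimism (not taken from any particular design):

```systemverilog
// X-optimism sketch: in RTL simulation, an X on 'sel' makes the 'if' condition
// unknown, so the 'else' branch is taken and the X never reaches 'q'. The
// synthesized gates are not so forgiving, so RTL and gate-level results can differ.
module x_optimism_sketch (
  input  wire       clk,
  input  wire       sel,   // may be X, e.g. driven by an uninitialized flop
  input  wire [7:0] a, b,
  output reg  [7:0] q
);
  always @(posedge clk)
    if (sel) q <= a;
    else     q <= b;       // taken whenever sel is X in RTL simulation
endmodule
```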

Another iconic example is the verification of clock-domain crossings. Whereas the basic failure modes here have a textbook simplicity, identifying these failures in real-life RTL so that all potential failures are reported in acceptable run time, without drowning the engineer in noise, is a challenging ask. This is an area where the Deep Semantic intelligence in Real Intent's Meridian CDC tool shines. It is the only product that performs full-chip comprehensive CDC analysis without resorting to abstractions, while also providing a full-featured hierarchical and distributed workflow. For example, when doing full-chip SoC integration, the details of the IP blocks must be retained intelligently to ensure that "sneak paths" that may be lurking in the IP and only come into play at the SoC level can be uncovered. Abstraction models are infamous for ignoring the essential detail that may be needed for top-level analysis. Real Intent has developed data models that allow its analyses to represent even gigascale designs with all the necessary details for comprehensive verification. We like to say that if you are not signing off on CDC with Real Intent's Meridian, you are not signing off!

Even for RTL linting, which has been a verification tool in use for over 20 years, new data models are needed to deliver gigascale capacity and performance. With the new levels of performance combined with Real Intent's Deep Semantic intelligence, designers can have answers in minutes and can quickly resolve chip-scale issues that would otherwise have been missed or taken days to resolve. For example, it is often the case that undesired combinational loops get added as IPs are integrated into the SoC. Without tools like Real Intent's Ascent Lint, such problems would go undetected and manifest as field failures.
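
Such a loop typically appears only when blocks are stitched together; the sketch below uses hypothetical block and signal names:

```systemverilog
// Each block looks loop-free on its own; wiring them together at the SoC level
// closes a purely combinational cycle that only a chip-level lint run will flag.
module block_a (input wire req_in, output wire grant_out);
  assign grant_out = ~req_in;              // combinational path in -> out
endmodule

module block_b (input wire grant_in, output wire req_out);
  assign req_out = grant_in;               // combinational path in -> out
endmodule

module soc_top;
  wire req, grant;
  block_a u_a (.req_in (req),   .grant_out(grant));
  block_b u_b (.grant_in(grant), .req_out  (req));  // closes the loop: req -> grant -> req
endmodule
```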

Related to the above, we see a fundamental shift in chip verification away from a tool-based mindset toward a verification-objective-driven mindset, which is facilitating sign-off at RTL and anchoring the use of static verification methods. This is supremely beneficial for the SoC paradigm, and it would not be an exaggeration to say that without it the SoC design process would have broken down. Static methods shine when the objective is clearly stated and failure modes are deeply understood. Real Intent has experienced this first hand over the past decade as it has watched the static verification for CDC and early functional verification that it pioneered become entrenched in the SoC verification flow.

The objective-driven approach also points to another reality for SoC design houses: insuring your SoCs against respins is not about having the fastest simulator, ABV or STA tool any more. Neither is it about having an all-in-one tool that does a little bit of a lot of things. Rather, it is about deploying the best-in-class solution with leading-edge performance, capacity, workflow and sign-off quality for key SoC-verification objectives like CDC and X-safe design. We are seeing this message take hold in the high-end SoC design houses. It is imperative that SoC design companies across the full spectrum of SoC types accept this message.

Real Intent is a verification-solutions provider that emphasizes early static verification sign-off. Mostly that means signing off at RTL, but sometimes it can also mean signing off at the gate level in order to get an independent validation of the synthesis steps. It also means signing off on as much as possible before simulation: any simulation you must do has to be absolutely necessary and tied to a companion static-analysis step. With its best-in-class verification-solutions focus, Real Intent sees itself as an enabler of the new age of SoC design.



Jul 3, 2014


Reset Optimization Pays Big Dividends Before Simulation

Dr. Pranav Ashar
CTO of Real Intent

Dr. Pranav Ashar is chief technology officer at Real Intent. He previously worked at NEC Labs developing formal verification technologies for VLSI design. With 35 patents granted and pending, he has authored about 70 papers and co-authored the book ‘Sequential Logic Synthesis’.

This article was originally published on TechDesignForums and is reproduced here by permission.

Reset optimization is another one of those design issues that has leapt in complexity and importance as we have moved to ever more complex system-on-chips. Like clock domain crossing, it is one that we need to resolve to the greatest degree possible before entering simulation.

The traditional approach to resets might have been to route a reset to every flop. Back in the day, you might have done this even though it has always entailed a large overhead in routing. It would help avoid X 'unknown' states arising during simulation for every memory location that was not reinitialized at restart, and it was a hedge against optimistic behavior by simulation that could hide bugs.
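
A hedged sketch of that trade-off (names are illustrative): a flop with a routed reset comes up in a known state, while a flop without one saves routing but starts as X in simulation until it is first written:

```systemverilog
// Reset trade-off sketch: ctrl_q gets a reset and powers up deterministically;
// data_q saves reset routing but is X in simulation until the first write.
module reset_tradeoff_sketch (
  input  wire       clk, rst_n,
  input  wire       wr_en,
  input  wire [7:0] d,
  output reg        ctrl_q,
  output reg  [7:0] data_q
);
  always @(posedge clk or negedge rst_n)
    if (!rst_n) ctrl_q <= 1'b0;   // reset routed here: known power-up state
    else        ctrl_q <= wr_en;

  always @(posedge clk)
    if (wr_en) data_q <= d;       // no reset: unknown (X) until first written
endmodule
```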

Our objectives today, though, include not only conserving routing resources but also catching problems as we bring up RTL for simulation, to avoid infeasible run times there at both RTL and – worse still – the gate level.

There is then one other important factor for reset optimization: its close connection to power optimization.

Matching power and performance increasingly involves the use of retention cells. These retain the state of design elements even when a block appears to be powered off; in fact, to allow for a faster restart bring-up, they must continue to consume static power even when the SoC is 'at rest'. So, controlling the use of retention cells cuts power consumption and extends battery life.

Reset the ‘endless’ threat

Resolving such complex issues based purely on simulations will no longer work. It will put you on the path toward so-called ‘endless verification’.

A thorough and intelligent pre-simulation analysis of your reset scheme can now point both to the best reset routing and the minimum number of expensive retention cells you need to implement.

At the pre-simulation stage, tools like Ascent XV from my company, Real Intent, can undertake a pretty smart heuristic analysis of the dependency of one flop's reset on another and of the relationships between different blocks. They will then produce a report with further insights and characterization, based on formal and structural techniques, that goes some way beyond just 'a best guess'.

The objective is to inform the designer on either the specifics or the flavor of the potential problems in the design. He can then review this report – which ideally should offer some alternatives itself – and undertake reset and related power optimization before moving into full simulation.

Orders of magnitude do apply

The time-savings available are significant. Unresolved reset issues lead, of course, to X states, uncertainties post-simulation that will take considerable time to address. The familiar ‘Rule of 10’ applies: catch a problem earlier and it is a 10X easier fix.

Beyond that, pre-simulation techniques are becoming more powerful with each generation. Our latest release of Ascent XV has enhanced algorithms that in themselves offer a 10X improvement in run-time against the previous generation.

Preparing your code carefully for simulation has a direct benefit at the bottom line by leveraging increasingly mature strategies. Can you afford not to consider them within your flow?



Jun 26, 2014