Real Talk Blog
Blog Archive
June 2014
6/20/2014: SoC CDC Verification Needs a Smarter Hierarchical Approach
6/06/2014: Quick Reprise of DAC 2014
May 2014
5/01/2014: Determining Test Quality through Dynamic Runtime Monitoring of SystemVerilog Assertions
April 2014
4/24/2014: Complexity Drives Smart Reporting in RTL Verification
4/17/2014: Video Update: New Ascent XV Release for X-optimization, ChipEx show in Israel, DAC Preview
4/11/2014: Design Verification is Shifting Left: Earlier, Focused and Faster
4/03/2014: Redefining Chip Complexity in the SoC Era
March 2014
3/27/2014: X-Verification: A Critical Analysis for a Low-Power World (Video)
3/14/2014: Engineers Have Spoken: Design And Verification Survey Results
3/06/2014: New Ascent IIV Release Delivers Enhanced Automatic Verification of FSMs
February 2014
2/28/2014: DVCon Panel Drill Down: “Where Does Design End and Verification Begin?” – Part 3
2/20/2014: DVCon Panel Drill Down: “Where Does Design End and Verification Begin?” – Part 2
2/13/2014: DVCon Panel Drill Down: “Where Does Design End and Verification Begin?” – Part 1
2/07/2014: Video Tech Talk: Changes In Verification
January 2014
1/31/2014: Progressive Static Verification Leads to Earlier and Faster Timing Sign-off
1/30/2014: Verific’s Front-end Technology Leads to Success and a Giraffe!
1/23/2014: CDC Verification of Fast-to-Slow Clocks – Part Three: Metastability Aware Simulation
1/16/2014: CDC Verification of Fast-to-Slow Clocks – Part Two: Formal Checks
1/10/2014: CDC Verification of Fast-to-Slow Clocks – Part One: Structural Checks
1/02/2014: 2013 Highlights And Giga-scale Predictions For 2014
December 2013
12/13/2013: Q4 News, Year End Summary and New Videos
12/12/2013: Semi Design Technology & System Drivers Roadmap: Part 6 – DFM
12/06/2013: The Future is More than “More than Moore”
November 2013
11/27/2013: Robert Eichner’s presentation at the Verification Futures Conference
11/21/2013: The Race For Better Verification
11/18/2013: Experts at the Table: The Future of Verification – Part 2
11/14/2013: Experts At The Table: The Future Of Verification Part 1
11/08/2013: Video: Orange Roses, New Product Releases and Banner Business at ARM TechCon
October 2013
10/31/2013: Minimizing X-issues in Both Design and Verification
10/23/2013: Value of a Design Tool Needs More Sense Than Dollars
10/17/2013: Graham Bell at EDA Back to the Future
10/15/2013: The Secret Sauce for CDC Verification
10/01/2013: Clean SoC Initialization now Optimal and Verified with Ascent XV
September 2013
9/24/2013: EETimes: An Engineer’s Progress With Prakash Narain, Part 4
9/20/2013: EETimes: An Engineer’s Progress With Prakash Narain, Part 3
9/20/2013: CEO Viewpoint: Prakash Narain on Moving from RTL to SoC Sign-off
9/17/2013: Video: Ascent Lint – The Best Just Got Better
9/16/2013: EETimes: An Engineer’s Progress With Prakash Narain, Part 2
9/16/2013: EETimes: An Engineer’s Progress With Prakash Narain
9/10/2013: SoC Sign-off Needs Analysis and Optimization of Design Initialization in the Presence of Xs
August 2013
8/15/2013: Semiconductor Design Technology and System Drivers Roadmap: Process and Status – Part 4
8/08/2013: Semiconductor Design Technology and System Drivers Roadmap: Process and Status – Part 3
July 2013
7/25/2013: Semiconductor Design Technology and System Drivers Roadmap: Process and Status – Part 2
7/18/2013: Semiconductor Design Technology and System Drivers Roadmap: Process and Status – Part 1
7/16/2013: Executive Video Briefing: Prakash Narain on RTL and SoC Sign-off
7/05/2013: Lending a ‘Formal’ Hand to CDC Verification: A Case Study of Non-Intuitive Failure Signatures — Part 3
June 2013
6/27/2013: Bryon Moyer: Simpler CDC Exception Handling
6/21/2013: Lending a ‘Formal’ Hand to CDC Verification: A Case Study of Non-Intuitive Failure Signatures — Part 2
6/17/2013: Peggy Aycinena’s interview with Prakash Narain
6/14/2013: Lending a ‘Formal’ Hand to CDC Verification: A Case Study of Non-Intuitive Failure Signatures — Part 1
6/10/2013: Photo Booth Blackmail!
6/03/2013: Real Intent is on John Cooley’s “DAC’13 Cheesy List”
May 2013
5/30/2013: Does SoC Sign-off Mean More Than RTL?
5/24/2013: Ascent Lint Rule of the Month: DEFPARAM
5/23/2013: Video: Gary Smith Tells Us Who and What to See at DAC 2013
5/22/2013: Real Intent is on Gary Smith’s “What to see at DAC” List!
5/16/2013: Your Real Intent Invitation to Fun and Fast Verification at DAC
5/09/2013: DeepChip: “Real Intent’s not-so-secret DVcon’13 Report”
5/07/2013: TechDesignForum: Better analysis helps improve design quality
5/03/2013: Unknown Sign-off and Reset Analysis
April 2013
4/25/2013: Hear Alexander Graham Bell Speak from the 1880′s
4/19/2013: Ascent Lint rule of the month: NULL_RANGE
4/16/2013: May 2 Webinar: Automatic RTL Verification with Ascent IIV: Find Bugs Simulation Can Miss
4/05/2013: Conclusion: Clock and Reset Ubiquity – A CDC Perspective
March 2013
3/22/2013: Part Six: Clock and Reset Ubiquity – A CDC Perspective
3/21/2013: The BIG Change in SoC Verification You Don’t Know About
3/15/2013: Ascent Lint Rule of the Month: COMBO_NBA
3/15/2013: System-Level Design Experts At The Table: Verification Strategies – Part One
3/08/2013: Part Five: Clock and Reset Ubiquity – A CDC Perspective
3/01/2013: Quick DVCon Recap: Exhibit, Panel, Tutorial and Wally’s Keynote
3/01/2013: System-Level Design: Is This The Era Of Automatic Formal Checks For Verification?
February 2013
2/26/2013: Press Release: Real Intent Technologist Presents Power-related Paper and Tutorial at ISQED 2013 Symposium
2/25/2013: At DVCon: Pre-Simulation Verification for RTL Sign-Off includes Automating Power Optimization and DFT
2/25/2013: Press Release: Real Intent to Exhibit, Participate in Panel and Present Tutorial at DVCon 2013
2/22/2013: Part Four: Clock and Reset Ubiquity – A CDC Perspective
2/18/2013: Does Extreme Performance Mean Hard-to-Use?
2/15/2013: Part Three: Clock and Reset Ubiquity – A CDC Perspective
2/07/2013: Ascent Lint Rule of the Month: ARITH_CONTEXT
2/01/2013: “Where Does Design End and Verification Begin?” and DVCon Tutorial on Static Verification
January 2013
1/25/2013: Part Two: Clock and Reset Ubiquity – A CDC Perspective
1/18/2013: Part One: Clock and Reset Ubiquity – A CDC Perspective
1/07/2013: Ascent Lint Rule of the Month: MIN_ID_LEN
1/04/2013: Predictions for 2014, Hier. vs Flat, Clocks and Bugs
December 2012
12/14/2012: Real Intent Reports on DVClub Event at Microprocessor Test and Verification Workshop 2012
12/11/2012: Press Release: Real Intent Records Banner Year
12/07/2012: Press Release: Real Intent Rolls Out New Version of Ascent Lint for Early Functional Verification
12/04/2012: Ascent Lint Rule of the Month: OPEN_INPUT
November 2012
11/19/2012: Real Intent Has Excellent EDSFair 2012 Exhibition
11/16/2012: Peggy Aycinena: New Look, New Location, New Year
11/14/2012: Press Release: New Look and New Headquarters for Real Intent
11/05/2012: Ascent Lint HDL Rule of the Month: ZERO_REP
11/02/2012: Have you had CDC bugs slip through resulting in late ECOs or chip respins?
11/01/2012: DAC survey on CDC bugs, X propagation, constraints
October 2012
10/29/2012: Press Release: Real Intent to Exhibit at ARM TechCon 2012 – Chip Design Day
September 2012
9/24/2012: Photos of the space shuttle Endeavour from the Real Intent office
9/20/2012: Press Release: Real Intent Showcases Verification Solutions at Verify 2012 Japan
9/14/2012: A Bolt of Inspiration
9/11/2012: ARM blog: An Advanced Timing Sign-off Methodology for the SoC Design Ecosystem
9/05/2012: When to Retool the Front-End Design Flow?
August 2012
8/27/2012: X-Verification: What Happens When Unknowns Propagate Through Your Design
8/24/2012: Article: Verification challenges require surgical precision
8/21/2012: How To Article: Verifying complex clock and reset regimes in modern chips
8/20/2012: Press Release: Real Intent Supports Growth Worldwide by Partnering With EuropeLaunch
8/06/2012: SemiWiki: The Unknown in Your Design Can be Dangerous
8/03/2012: Video: “Issues and Struggles in SOC Design Verification”, Dr. Roger Hughes
July 2012
7/30/2012: Video: What is Driving Lint Usage in Complex SOCs?
7/25/2012: Press Release: Real Intent Adds to Japan Presence: Expands Office, Increases Staff to Meet Demand for Design Verification and Sign-Off Products
7/23/2012: How is Verification Complexity Changing, and What is the Impact on Sign-off?
7/20/2012: Real Intent in Brazil
7/16/2012: Foosball, Frosty Beverages and Accelerating Verification Sign-off
7/03/2012: A Good Design Tool Needs a Great Beginning
June 2012
6/14/2012: Real Intent at DAC 2012
6/01/2012: DeepChip: Cheesy List for DAC 2012
May 2012
5/31/2012: EDACafe: Your Real Intent Invitation to Fast Verification and Fun at DAC
5/30/2012: Real Intent Video: New Ascent Lint and Meridian CDC Releases and Fun at DAC 2012
5/29/2012: Press Release: Real Intent Leads in Speed, Capacity and Precision with New Releases of Ascent Lint and Meridian CDC Verification Tools
5/22/2012: Press Release: Over 35% Revenue Growth in First Half of 2012
5/21/2012: Thoughts on RTL Lint, and a Poem
5/21/2012: Real Intent is #8 on Gary Smith’s “What to see at DAC” List!
5/18/2012: EETimes: Gearing Up for DAC – Verification demos
5/08/2012: Gabe on EDA: Real Intent Helps Designers Verify Intent
5/07/2012: EDACafe: A Page is Turned
5/07/2012: Press Release: Graham Bell Joins Real Intent to Promote Early Functional Verification & Advanced Sign-Off Circuit Design Software
March 2012
3/21/2012: Press Release: Real Intent Demos EDA Solutions for Early Functional Verification & Advanced Sign-off at Synopsys Users Group (SNUG)
3/20/2012: Article: Blindsided by a glitch
3/16/2012: Gabe on EDA: Real Intent and the X Factor
3/10/2012: DVCon Video Interview: “Product Update and New High-capacity ‘X’ Verification Solution”
3/01/2012: Article: X-Propagation Woes: Masking Bugs at RTL and Unnecessary Debug at the Netlist
February 2012
2/28/2012: Press Release: Real Intent Joins Cadence Connections Program; Real Intent’s Advanced Sign-Off Verification Capabilities Added to Leading EDA Flow
2/15/2012: Real Intent Improves Lint Coverage and Usability
2/15/2012: Avoiding the Titanic-Sized Iceberg of Downton Abbey
2/08/2012: Gabe on EDA: Real Intent Meridian CDC
2/08/2012: Press Release: At DVCon, Real Intent Verification Experts Present on Resolving X-Propagation Bugs; Demos Focus on CDC and RTL Debugging Innovations
January 2012
1/24/2012: A Meaningful Present for the New Year
1/11/2012: Press Release: Real Intent Solidifies Leadership in Clock Domain Crossing
August 2011
8/02/2011: A Quick History of Clock Domain Crossing (CDC) Verification
July 2011
7/26/2011: Hardware-Assisted Verification and the Animal Kingdom
7/13/2011: Advanced Sign-off…It’s Trending!
May 2011
5/24/2011: Learn about Advanced Sign-off Verification at DAC 2011
5/16/2011: Getting A Jump On DAC
5/09/2011: Livin’ on a Prayer
5/02/2011: The Journey to CDC Sign-Off
April 2011
4/25/2011: Getting You Closer to Verification Closure
4/11/2011: X-verification: Conquering the “Unknown”
4/05/2011: Learn About the Latest Advances in Verification Sign-off!
March 2011
3/21/2011: Business Not as Usual
3/15/2011: The Evolution of Sign-off
3/07/2011: Real People, Real Discussion – Real Intent at DVCon
February 2011
2/28/2011: The Ascent of Ascent Lint (v1.4 is here!)
2/21/2011: Foundation for Success
2/08/2011: Fairs to Remember
January 2011
1/31/2011: EDA Innovation
1/24/2011: Top 3 Reasons Why Designers Switch to Meridian CDC from Real Intent
1/17/2011: Hot Topics, Hot Food, and Hot Prize
1/10/2011: Satisfaction EDA Style!
1/03/2011: The King is Dead. Long Live the King!
December 2010
12/20/2010: Hardware Emulation for Lowering Production Testing Costs
12/03/2010: What do you need to know for effective CDC Analysis?
November 2010
11/12/2010: The SoC Verification Gap
11/05/2010: Building Relationships Between EDA and Semiconductor Ventures
October 2010
10/29/2010: Thoughts on Assertion Based Verification (ABV)
10/25/2010: Who is the master who is the slave?
10/08/2010: Economics of Verification
10/01/2010: Hardware-Assisted Verification Tackles Verification Bottleneck
September 2010
9/24/2010: Excitement in Electronics
9/17/2010: Achieving Six Sigma Quality for IC Design
9/03/2010: A Look at Transaction-Based Modeling
August 2010
8/20/2010: The 10 Year Retooling Cycle
July 2010
7/30/2010: Hardware-Assisted Verification Usage Survey of DAC Attendees
7/23/2010: Leadership with Authenticity
7/16/2010: Clock Domain Verification Challenges: How Real Intent is Solving Them
7/09/2010: Building Strong Foundations
7/02/2010: Celebrating Freedom from Verification
June 2010
6/25/2010: My DAC Journey: Past, Present and Future
6/18/2010: Verifying Today’s Large Chips
6/11/2010: You Got Questions, We Got Answers
6/04/2010: Will 70 Remain the Verification Number?
May 2010
5/28/2010: A Model for Justifying More EDA Tools
5/21/2010: Mind the Verification Gap
5/14/2010: ChipEx 2010: a Hot Show under the Hot Sun
5/07/2010: We Sell Canaries
April 2010
4/30/2010: Celebrating 10 Years of Emulation Leadership
4/23/2010: Imagining Verification Success
4/16/2010: Do you have the next generation verification flow?
4/09/2010: A Bug’s Eye View under the Rug of SNUG
4/02/2010: Globetrotting 2010
March 2010
3/26/2010: Is Your CDC Tool of Sign-Off Quality?
3/19/2010: DATE 2010 – There Was a Chill in the Air
3/12/2010: Drowning in a Sea of Information
3/05/2010: DVCon 2010: Awesomely on Target for Verification
February 2010
2/26/2010: Verifying CDC Issues in the Presence of Clocks with Dynamically Changing Frequencies
2/19/2010: Fostering Innovation
2/12/2010: CDC (Clock Domain Crossing) Analysis – Is this a misnomer?
2/05/2010: EDSFair – A Successful Show to Start 2010
January 2010
1/29/2010: Ascent Is Much More Than a Bug Hunter
1/22/2010: Ascent Lint Steps up to Next Generation Challenges
1/15/2010: Google and Real Intent, 1st Degree LinkedIn
1/08/2010: Verification Challenges Require Surgical Precision
1/07/2010: Introducing Real Talk!

SoC CDC Verification Needs a Smarter Hierarchical Approach

This article was originally published on TechDesignForums and is reproduced here by permission.

Thanks to the widespread reuse of intellectual property (IP) blocks and the difficulty of distributing a system-wide clock across an entire device, today’s system-on-chip (SoC) designs use a large number of clock domains that run asynchronously to each other. A design involving hundreds of millions of transistors can easily incorporate 50 or more clock domains and hundreds of thousands of signals that cross between them.

Although the use of smaller individual clock domains helps improve verification of subsystems apart from the context of the full SoC, the checks required to ensure that the full SoC meets its timing constraints have become increasingly time consuming.

Signals involved in clock domain crossing (CDC), for example a path where a flip-flop driven by one clock signal feeds data to a flop driven by a different, asynchronous clock, raise the potential for metastability and data loss. Tools based on static verification technology exist to perform CDC checks and recommend the inclusion of more robust synchronizers or other changes that remove these risks.
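As an illustration of the kind of structure such tools look for, here is a sketch of the classic two-flop synchronizer for a single-bit crossing (module and signal names are illustrative, not taken from any specific tool or design):

```systemverilog
// Two-flop synchronizer: a common structure CDC tools recognize on
// single-bit crossings. The first flop may go metastable when d_async
// changes near a clk_dst edge; the second flop gives it a full cycle
// to resolve before the value is used downstream.
module sync_2ff (
  input  logic clk_dst,   // destination-domain clock
  input  logic rst_n,
  input  logic d_async,   // signal arriving from another clock domain
  output logic q_sync
);
  logic meta;
  always_ff @(posedge clk_dst or negedge rst_n) begin
    if (!rst_n) begin
      meta   <= 1'b0;
      q_sync <= 1'b0;
    end else begin
      meta   <= d_async;  // may sample a changing value
      q_sync <= meta;     // metastability resolved with high probability
    end
  end
endmodule
```

Multi-bit buses need different treatment (gray coding, handshakes or FIFOs), since the two flops only bound metastability on a single bit.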

Runtime issues
Conventionally, the verification team would run CDC verification on the entire design database before tapeout, as this is the point at which it becomes possible to perform a holistic check of the clock-domain structure and ensure that every single domain-crossing path is verified. However, on designs that incorporate hundreds of millions of gates, this is becoming impractical: the compute runtime alone can run into days, at a point in the schedule where every hour saved or spent is precious. And if CDC verification waits for this point, the number of violations – some of which may be false positives – will potentially generate many weeks of remedial effort, after which another CDC verification cycle needs to be run. To cope with the complexity, CDC verification needs a smarter strategy.

By grouping modules into a hierarchy, the verification team can apply a divide-and-conquer strategy. Not only that, the design team can play a bigger role in ensuring that potential CDC issues are trapped early and checked automatically as the design progresses.

A hierarchical methodology makes it possible to perform CDC checks early and often to ensure design consistency such that, following SoC database assembly, the remaining checks can pass quickly and, most likely, result in a much more manageable collection of potential violations.

Hierarchical obstacles
Traditionally, teams have avoided hierarchical management of CDC issues because of the complexity of organizing the design and ensuring that paths are not missed. A potential trap is that all known CDC paths within a block may be deemed clean, so the block is declared ‘CDC clean’. But some paths may escape attention because they cross hierarchy boundaries in ways that cannot be caught easily – largely because the tools do not have sufficient information about the logic on the unimplemented side of the interface, and the designer has made incorrect clock-related assumptions about the incoming paths.

If those sneak paths were not present, it would be possible to present the already-verified modules as black boxes to higher levels of hierarchy such that only the outer interfaces need to be verified with the other modules at that level of hierarchy. For hierarchical CDC verification to work effectively, a white- or grey-box abstraction is required in which the verification process at higher levels of hierarchy is able to reach inside the model to ensure that all potential CDC issues are verified.

As the verification environment does not have complete information about the clocking structure before final SoC assembly, reporting will tend to err on the side of caution, flagging up potential issues that may not be true errors. Traditionally, designers would apply waivers to flops on incoming paths they believe not to be problematic, to stop them causing repeated errors in later verification runs as the module changes. However, this is a risky strategy, as it relies on assumptions about the overall SoC clocking structure that may not be borne out in reality.

Refinements to the model
The waiver model needs to be refined to fit a smart hierarchical CDC verification strategy. Rather than apply waivers, designers with a clear understanding of the internal structure of their blocks can mark flops and related logic to reflect their expectations. Paths that they believe not to be an issue and therefore not require a synchronizer can be marked as such and treated as low priority, focusing attention on those paths that are more likely to reveal serious errors as the SoC design is assembled and verified.

However, unlike paths marked with waivers, these paths are still in the CDC verification environment database. Not only that, they have been categorized by the design engineer to reflect their assumptions. If the tool finds a discrepancy between that assumption and the actual signals feeding into that path, errors will be generated instead of being ignored. This database-driven approach provides a smart infrastructure for CDC verification and establishes a basis for smarter reporting as the project progresses.

Smart reporting
Reporting can then be organized around the specification rather than presented as a long list of uncategorized errors that may or may not be false positives. This not only accelerates reviews but also allows the work to be distributed among engineers. As the specification is created and paths are marked and categorized, engineers establish what they expect to see in the CDC results, providing the basis for smart reporting from the verification tools.

When structural analysis finds a problematic path that was previously thought to be unaffected by CDC issues, the engineer can zoom in on the problem and deploy formal technologies to establish the root cause and potential solutions. Once fixed, the check can be repeated to ensure that the fix has worked.

The specification-led approach also allows additional attention to be paid to blocks that are likely to lead to verification complications, such as those that employ reconvergent logic. Whereas structural analysis will identify most problems on normal logic, these areas may need closer analysis using formal technology. Because the database-driven methodology allows these sections to be marked clearly, the right verification technology can be deployed at the right time.

By moving away from waivers and black-box models, the database-driven hierarchical CDC methodology encourages design groups to take SoC-oriented clocking issues into account earlier in the design cycle. Concerns about interfaces to modules designed by groups located elsewhere, or even at different companies, are carried forward to the critical SoC-level analysis without the overhead of repeatedly re-verifying each port on the module. Through earlier CDC analysis and verification, the team reduces the risk of encountering a large number of schedule-killing violations immediately prior to tapeout, and can be far more confident that design deadlines will be met.

Jun 20, 2014 | Comments

Quick Reprise of DAC 2014

Graham Bell, Vice President of Marketing at Real Intent

Thanks to everyone who came to the 2014 Design Automation Conference. It was a successful show, with peak traffic on Tuesday afternoon. At the Real Intent booth we were giving away roses (yes, they were real!) and had a photo booth as well. Visitors could dress up in World Cup soccer jerseys and hoist the World Cup 2014 Trophy.


We also had the Official FIFA World Cup Soccer video game to challenge our visitors.


We also had a Partner Passport that visitors could get stamped to win prizes. At the MathWorks stand in the Automotive Pavilion, visitors could see our Ascent Lint tool integrated with MathWorks’ MATLAB HDL Coder synthesis. Similarly, Calypto’s Catapult also has an integration with Ascent Lint to qualify designers’ synthesized RTL code. We were also demonstrating an integration with our partner Defacto, where Real Intent’s Meridian CDC can send environmental setup information to Defacto’s STAR DFT tool.


Real Intent was the organizer of the “Asymptote of Verification” panel. It was well attended, with over 80 designers and engineers in the room. I think this was quite an achievement, as it was the last panel of the day and the beer and wine receptions were already underway. The panelists – Brian Hunter from Cavium, San Jose, CA; Holger Busch from Infineon, Munich, Germany; and Wolfgang Roesner from IBM, Austin, TX – brought their considerable industry experience to the discussion. Attendees got to hear “the unique attributes of graph-based scenario models, including starting from the intended goal and being able to deterministically generate a test case to get there, and that graphs are an effective way to communicate design intent between designers and verification engineers.”

There will be more to tell of what we saw and heard at DAC 2014 in San Francisco in future blog postings. Until then, please let me know what you saw and heard.

Jun 6, 2014 | Comments

Determining Test Quality through Dynamic Runtime Monitoring of SystemVerilog Assertions

Graham Bell, Vice President of Marketing at Real Intent

It’s only one month until the Design Automation Conference in San Francisco, June 1-5, and the process of getting ready is keeping me busy. This week, I would like to highlight the DVCon 2014 Best Oral Presentation by Kelly D. Larson from NVIDIA on “Determining Test Quality through Dynamic Runtime Monitoring of SystemVerilog Assertions.”

The paper describes an entirely different way to use SystemVerilog assertions (SVA). While the standard use of SystemVerilog assertions typically targets DESIGN QUALITY, the paper describes how to effectively use assertions to target individual TEST QUALITY. In many cases, the same SystemVerilog assertions that were written for measuring design quality can also be used to measure test quality, but it is important to realize that the fundamental goal is quite different.
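As a rough sketch of the idea (the property and signal names below are hypothetical, not taken from Kelly’s paper), the same SVA property can serve both goals: the assert checks the design, while the cover reveals whether a given test ever exercised the behavior at all:

```systemverilog
// Hypothetical fragment, assumed to live inside a module or checker
// with clk, rst_n, req and ack in scope.
property p_req_gets_ack;
  @(posedge clk) disable iff (!rst_n)
    req |-> ##[1:4] ack;   // every request acknowledged within 4 cycles
endproperty

a_req_ack: assert property (p_req_gets_ack);  // design quality: must always hold
c_req_ack: cover  property (p_req_gets_ack);  // test quality: did this test hit it?
```

Counting how often each cover fires per test yields a per-test quality metric, which is broadly the direction runtime monitoring of assertions takes.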

The PowerPoint slides for Kelly’s presentation are here and are excellent in explaining his intent.

Reproduced below is the paper outlining Kelly’s talk.

May 1, 2014 | Comments

Complexity Drives Smart Reporting in RTL Verification

Lisa Piper, Technical Marketing Manager at Real Intent

This article was originally published on TechDesignForums and is reproduced here by permission.

It’s an increasingly complex world in which we live and that seems to be doubly true of state-machine design.

With protocols such as USB3 and PCI Express, and a growing number of cache-coherent multiprocessor on-chip buses and networks, the designer has been greeted with a state-space explosion. USB3, for example, has added an entire link layer and, with it, the Link Training and Status State Machine. This is, in itself, a complex entity: although it has only 12 states in total, it can move between them using a variety of different arcs.

Within the SoC itself, to maximize bandwidth, we are seeing highly complex processor-to-memory interconnect schemes that allow transactions to be split into smaller entities, with the ability for each master or slave on the interconnect to respond out of order. Not only that, to maintain cache coherency, data may need to be reflected to other nodes as it is returned. State machines that can control this level of activity are, by nature, highly complex. Because of the way that transactions can be split, prioritized and reordered, FSMs are potentially prone to design-killing problems such as deadlock and livelock.

Although it is technically possible to write assertions that hunt for deadlock conditions or unreachable states, avoiding these situations is clearly the intent of every designer. Furthermore, writing detailed, comprehensive assertions is not something a domain expert in cache coherency or bus interface design has much time to do. It makes far more sense to use a tool that can parse and understand state machines to infer these common intents from the RTL source code, leaving the design and verification teams to concentrate on writing test code that ensures states are connected by the right transition arcs.
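A minimal sketch of the kind of implied-intent bug such a tool can find without any hand-written assertion (the FSM below is invented purely for illustration):

```systemverilog
// Hypothetical FSM fragment: clk, rst_n, start, err and rdy are
// assumed to be in scope. If 'err' rises in WAIT, the machine enters
// HALT and no arc ever leaves it -- a deadlock state that a tool which
// understands FSMs can flag automatically, with no assertion written.
typedef enum logic [1:0] {IDLE, WAIT, DONE, HALT} state_t;
state_t st;

always_ff @(posedge clk or negedge rst_n) begin
  if (!rst_n) st <= IDLE;
  else unique case (st)
    IDLE: st <= start ? WAIT : IDLE;
    WAIT: st <= err ? HALT : (rdy ? DONE : WAIT);
    DONE: st <= IDLE;
    HALT: st <= HALT;   // no exit transition: deadlock
  endcase
end
```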

Verification Automation

Automated checking makes it possible to deploy verification tools across a wider group of engineers, in both design and verification, so that they can erase bugs in their designs faster and earlier. The technology also improves their ability to harden IP before it is released to other SoC groups that need to use these complex controllers.

A potential hazard of automated intent checking is that the tool may not prioritize the errors that really matter. A problem in one condition in part of the RTL may trigger a number of ancillary errors that the tool dutifully reports, but which obscure the root cause that, if fixed, would also solve many of the secondary problems. This is where smart reporting plays an important role.

Smart reporting looks one level deeper at the design and assembles the errors that really matter so that the designer is not forced to wade through a series of reports that, in reality, are simply shadows of the root cause. This smart reporting is a key component of the latest release of the Ascent Implied Intent Verification (IIV) automatic formal tool.

In a project at a major customer, Ascent IIV found some 3,000 failures in a block of 130,000 gates. More importantly, rather than forcing the designer to look at each one in detail, it narrowed the causes of those errors down to fewer than 200 – cutting out 94 percent of the reporting noise that the design team would have seen from a tool without such smart analysis and reporting technology.

To ease debugging once the errors have been flagged up, Ascent IIV lets the user trace back to state-transition assignments, making it easier and faster to make changes to the RTL. To support the latest design and verification flows, Ascent IIV adds support for SystemVerilog 1800-2009. The result is that, even as state machines become ever more complex, verification tools are more than keeping pace.


Apr 24, 2014 | Comments

Video Update: New Ascent XV Release for X-optimization, ChipEx show in Israel, DAC Preview

Graham Bell, Vice President of Marketing at Real Intent

I stopped in at the EELive! show in San Jose on April 2, 2014 and spoke with Sanjay Gangal, President of Internet Business Systems, about the latest release of the Ascent XV X-verification system and its new design reset optimization features. I also gave a preview of the activities at the ChipEx conference in Israel on April 30, 2014 and the Design Automation Conference in San Francisco on June 2-4, 2014. Click below to play the interview.


Apr 17, 2014 | Comments

Design Verification is Shifting Left: Earlier, Focused and Faster

Graham Bell, Vice President of Marketing at Real Intent

Recently, we have seen announcements by the Big Three EDA companies about new initiatives in the area of SoC verification. Synopsys, for example, has started talking about Verification Compiler, how it introduces static and formal checks for the first time, and how it relies on the Verdi debugging environment (acquired from SpringSoft) to tie it all together. Real Intent has been delivering solutions focused on static and formal verification for several years now (and also relies on Verdi for debug). The industry really started taking notice of this static verification trend at DVCon in 2013, and we have seen it grow through DAC 2013 in Austin. We are now talking about designs crossing the billion-gate threshold and what can be done not only to control this explosion of complexity, but also to achieve sign-off for RTL code.

RTL and gate-level simulation theoretically can be used to fully test a billion-gate SoC, but the cost of complete RTL testing is beyond what design teams can afford. To reduce the testing cost and the risk of missing critical tests, abstract modeling and pre-simulation static analysis of RTL have now become imperative in SoC design flows. Integration of heterogeneous IP and design units requires confirmation of protocols, power budgets, testability and the correct operation of multiple interfaces and clock domain crossings (CDC).

The goal is to “shift left” and find more problems earlier in the design cycle. Improving the quality and robustness of RTL before simulation and synthesis requires a number of tools:

  • Syntax and semantic checking with Lint that covers loop detection, FSM, low-power, and mixed-language issues;
  • Automatic formal analysis to verify design functional intent and uncover unintended behavior;
  • Reset flop analysis and later optimizations to reduce the number of required flops;
  • Timing constraints (SDC) correctness and consistency verification, especially after RTL changes from power and clock-gating optimizations and top-level integration of IP;
  • CDC sign-off flow using formal and structural methods;
  • Testability sign-off, DFT verification and planning, and proper DFT implementation;
  • Correct X-hygiene in preparation for simulation including optimism/pessimism correction, and
  • Power estimation and optimization.

Let’s look at these in more detail and discuss their importance to sign off.

Modern Lint tools have evolved to the point where they can handle full-chip designs and yet still offer concise hierarchical reporting. The availability of low-noise reporting means less time waiving violations and more time cleaning up easy-to-fix issues. Because of the lower noise, designers can use the tool earlier and more often. However, an RTL Lint tool requires only rule setup and therefore cannot provide a deep analysis.

Automatic formal RTL analysis builds on Lint cleaning for early detection of functional issues and takes advantage of clock definitions for the design. Because automated formal performs a sequential analysis and does constant propagation, it can do a deeper design exploration to uncover potential problems. Formal analysis can eliminate potential failures reported in Lint. Designers benefit from early static analysis of problems such as potential FSM deadlocks, bus issues and even X-value propagation.

Billion-gate designs have millions of flip flops to initialize. Many of the IP blocks used in such designs also have their own initialization schemes. It is neither practical nor desirable to wire a reset signal to every single flop. It makes more sense to route resets to an optimal minimum set of flops, and initialize the rest through the logic, but this is a significant RTL coding challenge.

Flip-flop reset analysis ensures that the SoC design will come up in a known good state, and in later iterations of the design it may be used to save chip area and routing resources through a more intelligent application of reset signals. The analysis of any system with such a reset and initialization scheme is bound to identify many Xs. For designers, the issue is knowing which ones matter, because dealing with unnecessary Xs wastes time and resources. However, missing an X state that does matter can increase the likelihood of late-stage debug, cause insidious functional failures and, ultimately, respins.

As a last step, it is important to manage the way simulation and synthesis processes handle the unknown (X) states thrown up by power management strategies that turn blocks on and off, and adjust clocks crossing between domains. A proper analysis of this issue can reveal functional bugs that have been hidden at the RTL level by too much optimism about the impact of X states, and also reduce the impact of excessive pessimism given to X states after synthesis.
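The optimism/pessimism distinction can be made concrete with three-valued logic, where 'x' stands for an unknown. The following is an illustrative sketch only (the function names are invented for this example); it mirrors the standard semantics in which an RTL `if` treats an X condition as false, while X-accurate analysis propagates the unknown:

```python
# Hedged sketch of X-optimism vs. X-accurate semantics using
# three-valued (0/1/'x') logic. Illustrative only.

def and3(a, b):
    if a == 0 or b == 0:
        return 0          # a controlling 0 masks the unknown (safe)
    if a == 'x' or b == 'x':
        return 'x'        # result genuinely unknown
    return 1

def rtl_if(cond, then_val, else_val):
    # RTL simulation optimism: an 'x' condition silently behaves as false.
    return then_val if cond == 1 else else_val

def xprop_if(cond, then_val, else_val):
    # X-accurate semantics: an unknown condition makes the output
    # unknown unless both branches agree.
    if cond == 'x':
        return then_val if then_val == else_val else 'x'
    return then_val if cond == 1 else else_val

# The optimistic simulation reports a clean 0 and the bug is hidden;
# X-aware analysis reports 'x', exposing the dependence on an
# uninitialized flop.
print(rtl_if('x', 1, 0))    # 0  (bug hidden)
print(xprop_if('x', 1, 0))  # x  (bug exposed)
print(and3(0, 'x'))         # 0  (X masked by a controlling value)
```

Gate-level simulation errs the other way: it would report 'x' even for `and3(cond, 0)`-style cases where the output is provably known, which is the excessive pessimism mentioned above.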

Timing constraints (SDC) are a key input to the gate-level synthesis of designs, so SDC management and checking ensure correct timing at the block and full-chip levels, provided any changes in the RTL are reflected in the SDC files for the design. The SDC itself also needs to be verified for correctness and consistency, which is essential for other analyses such as clock domain crossing.
Clock domain crossing analysis, so important for design reuse, IP, and complex power management schemes, can be carried out using a combination of formal and structural methods. It helps trap the corner case combinations of timing and functionality that lead to errors.
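The classic corner case at an asynchronous crossing is metastability: if the source signal toggles near the destination clock edge, the capturing flop may settle unpredictably. The toy model below (an illustrative abstraction only; real CDC tools use structural and formal analysis, not random simulation, and the names here are invented) shows why the standard remedy is a two-flop synchronizer, which gives the first flop's metastable output a full cycle to resolve:

```python
import random

# Toy cycle-by-cycle model of a 2-flop synchronizer at an asynchronous
# crossing. 'x' marks a metastable capture. Illustrative sketch only.

def sync_2ff(samples, rng):
    ff1, ff2, out = 0, 0, []
    prev = samples[0]
    for s in samples:
        raw = s
        if s != prev:                    # transition near the capture edge:
            raw = rng.choice([s, 'x'])   # first flop may go metastable
        ff1_next = raw
        # Metastability resolves to a stable 0 or 1 before the next edge,
        # so the second flop only ever captures a defined value.
        ff2_next = ff1 if ff1 != 'x' else rng.choice([0, 1])
        out.append(ff2)
        ff1, ff2, prev = ff1_next, ff2_next, s
    return out

rng = random.Random(1)
wave = [0, 0, 1, 1, 1, 0, 0]
synced = sync_2ff(wave, rng)
assert 'x' not in synced   # downstream logic never sees metastability
print(synced)
```

The synchronizer trades two cycles of latency for a defined value; CDC tools verify that every asynchronous crossing has such a structure (or an equivalent handshake/FIFO scheme) and that no glitch-prone combinational logic sits in front of it.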

Power analysis and optimization techniques address issues such as retention flop and isolation-cell analysis and optimization, clock/power gating, and sequential/combinational optimizations. These interventions can be so extensive that it makes sense to go back to the other static analyses to recheck the design.

Combining these static verification steps can enable signoff of the RTL to reduce the simulation burden of testing functionality and the synthesis burden of trying to implement conflicted code from disparate IP. It means the design will be as correct as possible as soon as possible, with reduced risk of failure at the implementation stage. And billion-gate SoC signoff is now a reachable goal, not an impossibility.

Does the industry want a monolithic solution from one vendor?  Certainly history has shown that is not the case.  The ability to create a best-in-class solution that mixes mature industry tools with leading-edge products from smaller, more innovative EDA companies is very desirable.  Semiconductor companies support open flows because they provide the easiest pathway to incorporating that next 10x tool in their design suite.   May it ever be so.

Apr 11, 2014 | Comments

Redefining Chip Complexity in the SoC Era

Ramesh Dewangan   Ramesh Dewangan
   Vice President of Application Engineering at Real Intent

I am old enough to recall the Pentium versus AMD processor rivalry of the 1990s. Back then, chip complexity was all about the number of transistors and clock speed. Ever more complex Pentiums were rolling out of the factory faster than we replaced the dress shirts in our closets!

In today’s SoCs, complexity is not just about clock speed or the number of transistors packed onto a tiny die. We don’t hear much about the clock speed of the processor in the Apple iPhone or the Samsung Galaxy S4 smartphone, do we?

Are we building less complex chips? Have our applications become simpler?

Quite the opposite.

We are building chips with complexity that is orders of magnitude higher than in the past. The definition of complexity is changing, though. While old measures such as smaller process nodes and die size are still in play, many more attributes now determine your SoC’s complexity.

First, let’s acknowledge that we continue to have to pack huge numbers of transistors into tiny dies. Look at the sheer size of some popular consumer devices. A techreport.com1 article published in Aug. 2013 reported from the Hot Chips conference that the Xbox One’s AMD-designed SoC, employing eight Jaguar CPU cores and GCN-class integrated graphics, is 363 mm² in area, comprises roughly 5 billion transistors, and is produced at TSMC on a 28-nm process. Comparable in size, the AMD Tahiti GPU that powers the Radeon HD 7970, also produced on a 28-nm process at TSMC, packs 4.3 billion transistors into 365 mm². Nvidia’s GK110 GPU, also made on TSMC’s 28-nm process, has 7.1 billion transistors and is 551 mm².  These are big chips!

We continue to march towards lower nodes to meet the performance and capacity demands on today’s chips for applications in mobile, gaming, automotive, cloud computing and others. According to EETimes2, quoting Aart de Geus of Synopsys, the respected EDA industry leader, tapeouts on advanced nodes continue to rise:

Number of Tapeouts Per Process Node (per Synopsys, Inc.)

So, what’s new?

For one, today’s SoCs have a high degree of re-use and integration of multiple IPs. Look at the variety of functions our smartphones perform! Here is a typical application processor from VIA, as described by tgdaily3:

Typical Application Processor for a Smartphone (courtesy VIA, Inc.)

In the past, we needed multiple chips to perform that range of functions. Yesterday’s SoCs have become today’s IPs. Now we have multiple IPs taking care of diverse functions, integrated into a single SoC. No wonder, according to the same EETimes2 report, IP reuse is set to rise significantly as we move toward more complex chips on lower nodes:

Increase of IP Reuse with Semiconductor Process Node (courtesy of International Business Strategies)

What that means is that IPs need to be signed off with a rigorous methodology before integration, and there must be a sound methodology for integrating the IPs smoothly into the SoC. This methodology must account for clocks, clock domains, timing constraints, low power and test structures, both between the IPs and between the IPs and the SoC.

The consumer mobile revolution brings more critical design challenges, such as the proliferation of low-power design and asynchronous interfaces. Low-power requirements have become acute in order to ensure the longest possible battery life for smartphone users. As a result, the number of voltage domains and power domains has gone up rapidly. Similarly, the rise in asynchronous IP components within a chip means the number of asynchronous clocks and clock domains is higher than in the past and can exceed 100. You have to not only ensure that the asynchronous clock domain crossings are properly synchronized, but also address the risk of signal glitches that will be missed in simulation.

These new attributes of chip complexity are going to explode with the arrival of the Internet of Things. A recent Gartner4 report forecasts nearly 26 billion devices on the Internet of Things by 2020, and according to ABI Research there will be more than 30 billion wirelessly connected devices by that time.

Our dress shirts may still be getting replaced at the same pace as in the past, but we need new methodologies and techniques to reel out today’s chips at a significantly faster pace!




Apr 3, 2014 | Comments

X-Verification: A Critical Analysis for a Low-Power World (Video)

Graham Bell   Graham Bell
   Vice President of Marketing at Real Intent

The problem logic designers have with X’s is that RTL simulation is optimistic in its behavior, and this optimism can hide real bugs in a design when you go to tapeout.  Some engineers point out that we have always had to deal with X’s and that nothing has really changed.

In fact, today’s SoCs employ different power management schemes that wake up or suspend IP blocks.  As any designer knows, when powering up logic, any X’s must be cleared on reset or within a specific short number of cycles afterward.   Designers now face much more uncertainty about whether all possible power scenarios have been considered and whether all X’s will be cleared correctly.

Sorting all of this out with your simulator is too much work and comes too late in the design process.  So the temptation is to supply a reset to every flop in your design, but this is costly in terms of precious routing resources and power.  Ideally, you would have a static tool that could analyze the reset scheme of your design and then suggest the minimum subset of flops that need reset lines.  This week, on March 25, Real Intent unveiled major enhancements in its Ascent XV product for early detection and management of unknowns (X’s) in digital designs, which address this issue.

Lisa Piper, senior manager of technical marketing at Real Intent summarized the new release as follows: “Analysis and optimization of design reset and initialization is a new requirement for SoC sign-off due to the presence of X’s that can arise from modern power-management techniques.  Ascent XV can ensure that the initialization sequences are complete and optimal for various power states in an SoC and identify only those areas of risk that need attention by the designer, ignoring trivial X’s. With this new release we are continuing to innovate to deliver best-in-class verification performance and debug efficiency.”

To hear why X’s in SoC designs are becoming more of a problem, what features in the new 2014 Ascent XV software release address these issues, why simulation is not up to the challenge, and what her favorite feature in Ascent XV is, please watch the video of Lisa Piper below.

Mar 27, 2014 | Comments

Engineers Have Spoken: Design And Verification Survey Results

Graham Bell   Graham Bell
   Vice President of Marketing at Real Intent

Previously I have blogged about the verification surveys that Real Intent runs at tradeshows throughout the year.  We find them useful for tracking trends in tool needs and revealing the pain points designers are feeling.  I last reported to you a year ago, in the blog article Clocks and Bugs, where I focused on clock-domain crossing (CDC) errors causing re-spins.

This year, I would like to add some additional highlights and trends that I see from new survey data.

For “What verification technologies will you adopt or change in 2013?”, Lint led with 27% of respondents, followed closely by X-propagation at 26% and CDC at 22%. The 2012 data shows a different mix with CDC leading at 25% followed by both automatic-formal checking and SDC constraint analysis at roughly 22%. My thought on this result is that designers are looking for a set of applications that tackle specific verification challenges, with CDC being a continuing source of concern.

CDC bugs are a problem because they result in late-stage ECOs or product re-spins. Year over year, we see that around 65% of respondents have run into these late-stage bugs.  This is consistent with a poll that ran at Chip Design Magazine, which reported 68% had problems.

Why is CDC an ongoing concern? Besides the raw increase in the number of gates and signals in a design, the number of clock domains is continuing to grow. In our surveys we have seen that more than 36% of designs have more than 50 different clock domains, and more than 7% have 100 domains. This combination of design size and number of domains and synchronizers that need to be checked is straining incumbent tools at designers’ work sites.

We also asked whether CDC verification is a nice-to-have or a sign-off criterion. Again, the answers have been very consistent, with about 70% calling it sign-off. This confirms that respondents have been burned in the past by CDC bugs and need to use design automation tools to avoid getting burned again. The new technologies in our Meridian CDC product tackle these challenges head-on.

What about the other application areas for verification?

We asked “Are you using automatic formal analysis currently?” and 41% said yes. While we didn’t drill down into the specifics of the kind of analysis they were using, we did ask a follow-on question: “When doing full-chip verification, are you still finding block-level bugs?” More than 85% reported yes, which leads me to conclude that more exhaustive verification needs to be done at the block level, and that automatic-formal tools are a good candidate for it. Our announcement on Feb. 26 of the new release of Real Intent’s Ascent IIV tool is the latest answer to this continuing need for early verification of RTL bugs at the block level.

While RTL linting has long been standard design practice, there is still dissatisfaction among designers. When asked “Does your current Lint tool have limited usage because of speed, capacity, or poor reporting?” more than 60% said yes. This leads me to conclude that our Ascent Lint product is answering a need in the marketplace.

Finally, I would like to mention the results from our query “What X (unknown) issues affect your designs?” The most popular response was X-optimism at 36%, followed by X-pessimism (26%), power management (21%), and the need to reset all flops to clear Xs (16%). Our Ascent XV product has some new technologies that address these issues and we expect to see growing customer success with our offering.

Overall, I see that designers are still looking for better ways to find bugs early in their design cycles and to sign off critical issues such as CDC.  We are sharing our newest innovations at industry events in Silicon Valley in the month of March, including the Design and Verification Conference (DVCon), Cadence CDNLive and Synopsys SNUG. Come by and see how we can help meet your verification challenges.

Mar 14, 2014 | Comments

New Ascent IIV Release Delivers Enhanced Automatic Verification of FSMs

Graham Bell   Graham Bell
   Vice President of Marketing at Real Intent

Chris Morrison, Chief Architect at Real Intent, speaks with Graham Bell, about the new 2014 release of the Ascent IIV automatic formal verification product. They discuss the trends in automatic formal verification, the new finite-state machine (FSM) checks in the release, what makes Ascent IIV unique in the marketplace, and lastly, customer experience with the tool.

On Feb. 26, Real Intent announced a new version of its Ascent Implied Intent Verification (IIV) tool for early functional analysis of digital designs, delivering significant enhancements for users. Ascent products find elusive bugs and eliminate sources of uncertainty that are difficult to uncover using traditional Verilog or VHDL simulation, improving both QoR and the productivity of design teams.

New Ascent IIV features and enhancements include:

  • Improved root cause analysis minimizes time spent debugging FSMs
  • New FSM transition checks for deeper analysis of the design
  • New FSM debug reporting with direct trace back to state transition assignments
  • SystemVerilog 1800-2009 language support for easier adoption into existing design flows

Lisa Piper, senior manager of technical marketing at Real Intent, said, “The enhanced FSM checks and associated debug of IIV mean designers can find more bugs automatically without the need for any test benches. IIV’s root cause analysis dramatically reduces debug time by focusing the effort on the real design problems, without being distracted by related secondary issues. The enhancements we made to our SystemVerilog 2009 language support and file processing make it easier for design teams to adopt it into their existing design flows. Our Ascent products remain the fastest and highest-capacity verification solutions available for uncovering issues prior to digital simulation.”


The latest release of Ascent IIV is available immediately for download from the Real Intent website.

About Ascent IIV

Ascent IIV is a state-of-the-art automatic RTL verification tool. It finds bugs using an intelligent hierarchical analysis of design intent. No test bench is needed, making it easy and efficient to find RTL bugs earlier in the design flow before they become more expensive to uncover. The analysis minimizes debug time by identifying the root cause of issues, and provides the VCD traces that show the sequence of events that lead to an undesired state. Ascent IIV has the speed and capacity to handle design blocks exceeding 250K gates and provides a wide variety of complex checks including FSM deadlocks, bus issues, and constant bits and nets. If SVA or VHDL assertions written in PSL are available, Ascent IIV can use these as constraints to enhance the analysis. Please click here for a recent announcement about how Real Intent’s Ascent IIV software accelerates design debug for a customer.
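One of the checks named above, FSM deadlock detection, reduces to a reachability question on the state-transition graph: find states the FSM can enter from reset but never leave. The sketch below is a simplified illustration of that idea (the FSM and function names are invented for this example; Ascent IIV's actual analysis works on the RTL and needs no such manual extraction):

```python
# Sketch of an FSM deadlock check: flag states that are reachable from
# the reset state but have no transition to any other state, so the
# FSM can enter them and never leave. The FSM below is a made-up example.

def fsm_deadlocks(transitions, reset):
    # transitions: state -> set of possible next states
    reached, stack = {reset}, [reset]
    while stack:                         # DFS for reachable states
        for nxt in transitions.get(stack.pop(), set()):
            if nxt not in reached:
                reached.add(nxt)
                stack.append(nxt)
    # A deadlock state's only outgoing transition (if any) is to itself.
    return sorted(s for s in reached
                  if transitions.get(s, set()) - {s} == set())

fsm = {"IDLE": {"RUN"}, "RUN": {"IDLE", "ERR"}, "ERR": {"ERR"}}
print(fsm_deadlocks(fsm, "IDLE"))   # ['ERR'] -- reachable, no way out
```

A formal tool goes further by deriving the transition relation automatically and checking whether the guarding conditions can actually be exercised, but the underlying question is the same.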

CDC:      Clock Domain Crossing
EDA:      Electronic Design Automation
FSM:      Finite-State Machine
PSL:      Property Specification Language
QoR:      Quality of Results
RTL:      Register Transfer Level
SoC:      Systems-on-Chip
SVA:      SystemVerilog Assertions
VCD:      Value Change Dump
VHDL:     VHSIC Hardware Description Language

Ascent and Meridian are trademarks of Real Intent, Inc.
All other trademarks and trade names are the property of their respective owners.

Mar 6, 2014 | Comments