Real Talk Blog

Fundamentals of Clock Domain Crossing Verification: Part Three

Graham Bell
   Vice President of Marketing at Real Intent

Last time we looked at design principles and the design of CDC interfaces.  In this posting, we will look at practical considerations for verifying CDC interfaces.

Verifying CDC interfaces

A typical SOC is made up of a large number of CDC interfaces. Based on the design principles discussed in Part Two, CDC verification can be accomplished by executing the following steps in order:

  • Identification of CDC signals.
  • Classification of CDC signals as control and data.
  • Hazard/glitch robustness of control signals.
  • Verification of single signal transition (gray coding) of control signals.
  • Verification of control stability (pulse-width requirement).
  • Verification of MCP operation (stability) of data signals.
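As a toy illustration of the first two steps (this is a sketch, not Real Intent's implementation; the netlist model and signal names are invented), identification and classification amount to a structural traversal of flop-to-flop paths:

```python
# Hypothetical, simplified netlist model: each flop records its clock and the
# flops in its fanin. Illustrates steps 1-2 above: find crossings, classify them.
flops = {
    "tx_data":  {"clock": "clk_a", "fanin": []},
    "tx_ctrl":  {"clock": "clk_a", "fanin": []},
    "sync1":    {"clock": "clk_b", "fanin": ["tx_ctrl"]},   # control crossing
    "rx_data":  {"clock": "clk_b", "fanin": ["tx_data"]},   # data crossing
    "rx_local": {"clock": "clk_b", "fanin": ["sync1"]},     # same-domain path
}

def find_cdc_signals(flops):
    """Step 1: a flop-to-flop path is a CDC if the two clocks differ."""
    crossings = []
    for rx, info in flops.items():
        for tx in info["fanin"]:
            if flops[tx]["clock"] != info["clock"]:
                crossings.append((tx, rx))
    return crossings

def classify(crossings, control_signals):
    """Step 2: split crossings into control and data. Here the split is a
    user-supplied name set; real tools infer the roles structurally."""
    return {
        "control": [c for c in crossings if c[0] in control_signals],
        "data":    [c for c in crossings if c[0] not in control_signals],
    }

groups = classify(find_cdc_signals(flops), control_signals={"tx_ctrl"})
print(groups["control"])
print(groups["data"])
```

The remaining steps (glitch robustness, gray coding, stability, MCP checks) are then run per crossing, on the control and data groups respectively.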

All verification processes are iterative: design quality is achieved by identifying design errors, debugging and fixing them, and re-running verification until no more errors are detected.

Practical considerations for CDC verification

Effective deployment of CDC tools in the design flow requires due consideration of multiple factors. We have discovered that first-generation CDC tools were not being used effectively in design flows. Based upon feedback from users, we have identified the following factors as the most important considerations for CDC deployment:

  • Coverage of error sources.
  • Design setup cost.
  • Debugging and sign-off cost.
  • Verification run-time cost.
  • Template recognition vs. report quality trade-off.
  • Top-level vs. block-level verification trade-off.
  • RTL vs. netlist verification trade-off.

Users consistently report that minimizing the engineering cost of high-quality verification is critical for effective deployment of CDC tools.

Coverage of error sources

CDC errors can creep into a design from multiple sources. The first is inadvertent clock-domain crossing, where there is an assumption mismatch or oversight at block interfaces. The second is faulty block-level design: through oversight, or under pressure to build correct and high-performance interfaces, designers can make errors. As an example, consider the protocol in Figure 12. Here, tapping Feedback Signal from an earlier flop stage can reduce the latency across the interface. But correct operation of this interface requires that the transmitting clock frequency be lower than the receiving clock frequency. Otherwise, it is possible to signal New Data before Load Data is completed.

Figure 12. Reduced latency protocol.

These two error sources are properly covered by RTL analysis. They can also be covered by netlist analysis. But not all CDC error sources are covered by RTL analysis. This is because CDC errors are dependent upon glitches and hazards. It is a well-known phenomenon that synthesis transformations can introduce hazards in the design. Hazards in CDC logic lead to CDC failures. Figure 13 shows an example of a design failure caused by synthesis. Here, the multiplexor implementation created a logic hazard that violated the multi-cycle path requirement on the data bus. We are aware of multiple design failures because of this phenomenon.


Figure 13. Logic hazard caused CDC failure.

With the increasing complexity of SOCs and the increasing number of CDC interfaces on the chip, the contribution of this risk factor is increasing. As a result, CDC verification must be run on both RTL and netlist views of the design.
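The hazard mechanism behind Figure 13 can be made concrete with a toy unit-delay gate simulation (the delays and coding are invented for illustration; this is not how synthesis or a CDC tool models timing). A mux implemented as (sel & a) | (~sel & b) can glitch low when a = b = 1 and sel falls, because the inverter delays the ~sel path so both AND terms are briefly low:

```python
# Toy unit-delay simulation of y = (sel & a) | (~sel & b) with a = b = 1.
# Ideally y stays at 1, but the extra inverter delay creates a 0 glitch,
# which would violate a multi-cycle stability assumption on a CDC bus.
def simulate(steps=6):
    a = [1] * steps                 # both mux inputs held at 1
    b = [1] * steps
    sel = [1] + [0] * (steps - 1)   # select falls at t = 1
    # Gate outputs, index 0 initialised to the sel = 1 steady state;
    # the loop overwrites every later index.
    n  = [0] * steps                # inverter:  ~sel (one delay behind sel)
    t1 = [1] * steps                # AND gate:  sel & a
    t2 = [0] * steps                # AND gate: ~sel & b (two delays behind sel)
    y  = [1] * steps                # OR gate:   t1 | t2
    for t in range(1, steps):
        n[t]  = 1 - sel[t - 1]
        t1[t] = sel[t - 1] & a[t - 1]
        t2[t] = n[t - 1] & b[t - 1]
        y[t]  = t1[t - 1] | t2[t - 1]
    return y

print(simulate())   # the transient 0 is the glitch: both AND terms briefly low
```

In a synchronous path this glitch settles before the next clock edge and is harmless; on an asynchronous crossing it can be captured, which is why netlist-level CDC analysis is needed.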

Design setup cost

Design setup starts with importing the design. With the increasing complexity of SOCs, designs include RTL and netlist blocks in a Verilog and VHDL mixed-language environment. In addition, functional setup is required for good quality of verification. A typical SOC has multiple modes of operation characterized by clocking schemes, reset sequences and mode controls. Functional setup requires the design to be set up in functionally valid modes for verification, by proper identification of clocks, resets and mode select pins. Bad setup can lead to poor quality of verification results.

Given the management complexity for the multitude of design tasks, it is highly desirable that there be a large overlap between setup requirements for different flows. For example, design compilation can be accomplished by processing the existing simulation scripts. Also, there is a large overlap between the functional setup requirements for CDC and that for static timing analysis. Hence, STA setup, based upon Synopsys Design Constraints (SDCs), can be leveraged for cost-effective functional setup.

Design constraints capture requirements or properties of your design. You use constraints to ensure that your design meets its performance goals and pin-assignment requirements. Traditionally these are timing constraints, but they can also cover power, synthesis, and clocking.

Timing constraints represent the performance goals for your design. Implementation software uses timing constraints to guide timing-driven optimization (such as synthesis) toward these goals. You can set timing constraints either globally or on a specific set of paths in your design. You can apply timing constraints to:

  • Specify the required minimum speed of a clock domain.
  • Set the input and output port timing information.
  • Define the maximum delay for a specific path.
  • Identify paths that are considered false and excluded from the analysis.
  • Identify paths that require more than one clock cycle to propagate the data.
  • Provide the external load at a specific port.
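Each item in the list above maps onto a Synopsys Design Constraints (SDC) command. A hedged sketch, with made-up clock, port, and instance names (clk_core, clk_aux, din, dout, u_tx, u_rx):

```tcl
# Illustrative SDC fragment; all object names are hypothetical.
create_clock -name clk_core -period 10.0 [get_ports clk_core]        ;# minimum clock speed (100 MHz)
set_input_delay  2.0 -clock clk_core [get_ports din]                 ;# input port timing
set_output_delay 3.0 -clock clk_core [get_ports dout]                ;# output port timing
set_max_delay 5.0 -from [get_pins u_tx/q] -to [get_pins u_rx/d]      ;# maximum delay on a specific path
set_false_path -from [get_clocks clk_core] -to [get_clocks clk_aux]  ;# exclude a path from analysis
set_multicycle_path 2 -setup -from [get_pins u_tx/q] -to [get_pins u_rx/d] ;# path needing >1 cycle
set_load 0.05 [get_ports dout]                                       ;# external load at a port
```

The false-path and multicycle-path exceptions are exactly the ones CDC verification must validate, since they disable the default timing checks on those paths.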

Correct functional setup of a large design may require specifying a very large number of signals. This cumbersome, time-consuming work can be avoided with automatic setup generation. Setup also has a first-order effect on the quality of verification, so early feedback on setup quality enables easy and effective setup refinement for high-quality verification.


Figure 14. Design setup flow.

In the next posting we will discuss the costs associated with debugging and sign-off verification.

Jul 24, 2014 | Comments

Fundamentals of Clock Domain Crossing Verification: Part Two

Graham Bell
   Vice President of Marketing at Real Intent

Last time we looked at why metastability is unavoidable and at the nature of the clock domain crossing (CDC) problem. This time we will look at design principles.

CDC design principles

Because metastability is unavoidable in CDC designs, robust CDC interfaces must follow some strict design principles.

Metastability can be contained with “synchronizers” that prevent metastability effects from propagating into the design. Figure 9 shows the configuration of a double-flop synchronizer, which minimizes the load on the metastable flop. The single fan-out protects against loss of correlation because the metastable signal does not fan out to multiple flops. The probability that metastability will last longer than time t is governed by the following equation:

    P(t) = e^(−t/τ)

where τ (tau) is the resolution time constant, dependent upon the latch characteristics and ambient noise. This configuration resolves metastability with a very high probability, leading to a very large mean time between failures (MTBF), as governed by the equation:

    MTBF = 1 / (f_clk × f_data × P)

where f_clk is the receiving-clock frequency, f_data is the data-toggle frequency, and P is the probability that metastability is not resolved within one clock cycle. Triple or higher flop configurations may be used for very fast designs.

Figure 9. Double flop synchronizer contains metastability.
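As a numeric illustration of why synchronizers work so well, one can plug the standard resolution model, P = e^(−t/τ) with MTBF = 1/(f_clk · f_data · P), into a few lines of Python. All numbers here are invented for illustration, not taken from the article:

```python
import math

def mtbf(t_resolve, tau, f_clk, f_data):
    """MTBF of a synchronizer under the standard resolution-time model."""
    p_unresolved = math.exp(-t_resolve / tau)  # P(metastability lasts > t_resolve)
    return 1.0 / (f_clk * f_data * p_unresolved)

# Assumed numbers: tau = 20 ps, 500 MHz receive clock, 100 MHz data toggling.
# One flop grants one clock period (2 ns) of resolution time; a second flop
# grants another full period, squaring the already tiny failure probability.
one_flop = mtbf(t_resolve=2e-9, tau=20e-12, f_clk=500e6, f_data=100e6)
two_flop = mtbf(t_resolve=4e-9, tau=20e-12, f_clk=500e6, f_data=100e6)

print(f"one flop: {one_flop:.3g} s, two flops: {two_flop:.3g} s")
```

The exponential dependence on resolution time is why adding a second (or, for very fast designs, a third) flop takes the MTBF from astronomically large to absurdly so.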


Designing CDC interfaces

A CDC interface is designed for reliable transfer of correlated data across the data bus, and a reliable CDC interface must follow a simple set of rules:

  • The CDC data bus must be designed for 2-cycle multi-cycle-path (MCP) operation. This means that data is captured in the CDC flops on the second clock edge or later, following the launch of data. This also gives one clock cycle of the receiving clock as the timing constraint on the path. Static timing analysis should ensure that the timing constraints are met on these paths. This rule eliminates metastability for these paths. Because data-bus signals are correlated, their CDC flops cannot be allowed to become metastable.
  • The control signals implementing the MCP protocol can become metastable and hence must obey the following rules:
    • The controls must be properly synchronized to prevent propagation of metastability in the design.
    • The MCP is enabled by one and only one control-signal transition to eliminate loss of correlation errors (gray coding).
    • The control signals should be free of hazards/glitches.
    • The control signals must be stable for more than one clock cycle of the receiving clock.
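The gray-coding rule can be sketched in a few lines of Python (illustrative only; FIFO pointers are the classic case). Because successive gray-coded values differ in exactly one bit, a metastable sample can only resolve to the old or the new value, never to an unrelated third state:

```python
def bin2gray(n: int) -> int:
    """Standard binary-to-gray conversion."""
    return n ^ (n >> 1)

def single_bit_transitions(values):
    """Check that each consecutive pair of samples differs in exactly one bit,
    i.e. the single-signal-transition (gray coding) rule above."""
    return all(bin(a ^ b).count("1") == 1 for a, b in zip(values, values[1:]))

binary_count = list(range(8))
gray_count = [bin2gray(n) for n in range(8)]

print(single_bit_transitions(binary_count))  # False: e.g. 3 -> 4 flips three bits
print(single_bit_transitions(gray_count))    # True: one bit flips per step
```

A CDC checker applies essentially this check to the sequence of values a multi-bit control signal takes as it crosses the domain boundary.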


These principles can be implemented using handshake protocols or FIFO-based protocols. Figure 10 shows a simple handshake CDC protocol. This interface is transmitting data from the CLK1 domain to the CLK2 domain. While Data Ready is asserted, the data on the bus Data In is transmitted across the clock domain. The data availability is signaled by a transition on Control Signal. Transmit Data is launched on the same clock edge. Control Signal is synchronized in the CLK2 domain and the transition is detected to signal Load Data. Since synchronization requires at least one cycle of CLK2, Transmit Data is received at the second edge of CLK2 or later. This creates a multi-cycle path for Transmit Data across the interface. Feedback Signal completes the handshake.


Figure 10. Simple handshake CDC protocol.


A transition on Feedback Signal is detected to drive Next Data to the interface. Figure 11 shows the timing diagram for the protocol. It should be noted that this is a simplified concept of the interface. We have not incorporated the logic for initializing the interface, detecting transitions in Data Ready, or dealing with stall conditions. All these considerations, combined with latency minimization, add complexity to the design of the interface.


Figure 11. CDC protocol timing diagram.
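The receive side of the handshake can be sketched as a cycle-based Python model (a deliberately simplified toy: one sample per CLK2 edge, data assumed stable after launch, names invented). Control Signal is double-flop synchronized, a transition is detected to assert Load Data, and the data lands two or more CLK2 edges after it changed, which is the multi-cycle path:

```python
def rx_domain(control_stream, data_stream):
    """Model the CLK2 domain: two sync flops plus one edge-detect flop.
    Each loop iteration is one CLK2 edge sampling the CLK1-domain signals."""
    s1 = s2 = s3 = 0
    captured = []
    for ctrl, data in zip(control_stream, data_stream):
        load = s2 ^ s3          # transition detect on the synchronized control
        if load:
            captured.append(data)
        s1, s2, s3 = ctrl, s1, s2   # shift through the synchronizer
    return captured

# Control toggles once per transfer; the data bus is already stable when the
# toggle is launched, so each capture lands on the second CLK2 edge or later.
control = [0, 1, 1, 1, 1, 0, 0, 0, 0]
data    = ["A", "B", "B", "B", "B", "C", "C", "C", "C"]
print(rx_domain(control, data))
```

Note that only the single-bit Control Signal ever risks metastability; the data bus is captured only once Load Data asserts, which is the whole point of the protocol.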

Next time we will start the discussion on verifying CDC interfaces.

Jul 17, 2014 | Comments

Fundamentals of Clock Domain Crossing Verification: Part One

Graham Bell
   Vice President of Marketing at Real Intent

The increase in SOC designs is leading to the extensive use of asynchronous clock domains. The clock-domain-crossing (CDC) interfaces are required to follow strict design principles for reliable operation. Also, verification of proper CDC design is not possible using standard simulation and static timing-analysis (STA) techniques. As a result, CDC-verification tools have become essential in design flows.

A good understanding of the CDC problem requires an understanding of metastability and the associated design challenge.


When the input signal to a data latch changes within the setup-and-hold window around the transition of the latching clock, the latch output can become metastable at an intermediate voltage between logical zero and one. Figure 1 shows a simplified latch implementation. The metastable state is a very high-energy state as shown in Figure 2. Because of noise in the chip environment, this metastable voltage gets disturbed and eventually resolves to a logical value. The resolution time is dependent upon the load on the latch output and the gain through the feedback loop. It is impossible, however, to predict this logical value. Also, there is an inherent delay in the resolution of the metastable output as shown in the timing diagram of Figure 3. This logical and timing uncertainty introduces unreliable behavior in the design and, without proper protection, can cause it to fail in unpredictable ways.


Figure 1. A simplified latch.



Figure 2. The metastability energy curve.



Figure 3. Metastability timing diagram.


For synchronous clock designs, timing closure with static timing analysis ensures that all paths meet timing specifications; metastability is avoided and the designs operate reliably.

Limitations of functional verification

The prevalent functional-verification methodology is based upon functional simulation. A simplified view of the simulation model is that the design behavior is evaluated using zero-delay evaluation for logic, unit-delay for flops and ideal clock behavior. Also, formal analysis makes use of the same evaluation assumptions. But both of these techniques have an inherent limitation because they only analyze the steady-state behavior of the design.

Functional verification makes a fundamental assumption that static timing analysis will account for the uncertainty in clock behavior caused by jitter and skews, and ensure that all hazards in the design subside before the clock event (timing closure). This is the default timing rule. Functional verification will be invalidated if this assumption is violated. Static timing analysis lets users specify exceptions to the default timing rules. These exceptions invalidate the functional-verification and default-timing assumptions. It is imperative that these exceptions be properly verified using timing-closure verification (TCV) for a robust design methodology. Because static timing of CDC interfaces is not possible and requires timing exceptions, CDC verification is a unique and essential component of TCV.

CDC terminology

A clock domain is defined as the set of all flops that are clocked by the associated clock. A clock-domain crossing (CDC) is defined as a flop-to-flop path where the transmitting flop is triggered by a clock that is asynchronous to the receiving flop clock. These two clock domains are considered to be relatively asynchronous. Figure 4 describes the CDC terminology used in this article. The receiving flops are referred to as CDC flops. The signals feeding the CDC flops are referred to as CDC signals.



Figure 4. Defining CDC terminology.


Unavoidable metastability and the CDC problem

Asynchronous clocks operate without any mutual frequency and phase relationships. As a result, it is impossible to guarantee timing on CDC paths because the launch- and capture-clock edges can be arbitrarily close, and metastability is unavoidable for CDC designs. This invalidates the assumptions of both functional simulation and formal verification, and robust design behavior cannot be assured using simulation and static timing analysis. Without proper design, CDC errors can cause random and unpredictable failures in a chip that are impossible to debug.

Metastability introduces the following failure modes in the design:

  • Loss of correlation (error E1). This happens when two or more correlated CDC flops become metastable as shown in Figures 5a and 5b. Figure 6 shows the timing diagram where these flops resolve to arbitrary logical values and lose correlation, leading to a bad design state.
  • Hazard (glitch) capture (error E2). A hazard on a CDC path can get captured in the CDC flop leading to bad design state as shown in Figure 7.
  • Loss of signal (error E3). CDC signals that are stable for less than one clock cycle of the receiving clock may not get captured in the receiving domain because of clock network uncertainties, clock alignment and metastability. Figure 8 shows a situation where the functional-verification view concludes that the signal is transmitted. In reality, the transmission can fail, leading to a bad state in the design.
  • Metastability propagation (error E4). Metastability may propagate to the next level of flops in the design if it is not resolved in a timely manner. The resolution time is dependent upon the load on the flop. Propagation of metastability may lead to a cascading of errors E1-E3.
Figure 5a. Loss of correlation.



Figure 5b. Loss of correlation.



Figure 6. Loss of correlation timing diagram.



Figure 7. Glitch capture.


Figure 8. Loss of signal.
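Error E1 can be illustrated with a small Monte-Carlo sketch (invented encoding and numbers): a 2-bit one-hot control value changes from 01 to 10 exactly at the receiving clock edge, so both CDC flops are in their setup/hold window and each resolves independently to the old or the new bit value:

```python
import random

def sample_crossing(rng):
    """Each bit of the crossing independently resolves to its old or new value."""
    old, new = (0, 1), (1, 0)          # one-hot encoded states
    return tuple(rng.choice([o, n]) for o, n in zip(old, new))

rng = random.Random(1)                  # fixed seed for reproducibility
samples = [sample_crossing(rng) for _ in range(1000)]
bad = [s for s in samples if s in ((0, 0), (1, 1))]   # non-one-hot = bad state
print(f"{len(bad)} of {len(samples)} samples lost correlation")
```

Roughly half the samples land in a state that neither domain ever drove, which is exactly the loss-of-correlation failure that gray coding (one transition per transfer) eliminates.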

In the next posting, we will look at CDC design principles.

Jul 10, 2014 | Comments

Static Verification Leads to New Age of SoC Design

Dr. Pranav Ashar
   CTO of Real Intent

SoC companies are coming to rely on RTL sign-off of many verification objectives as a means to achieve a sensible division of labor between their RTL design team and their system-level verification team. Given the sign-off expectation, the verification of those objectives at the RT level must absolutely be comprehensive.

Increasingly, sign-off at the RTL level can be accomplished using static-verification technologies. Static verification stands on two pillars: Deep Semantic Analysis and Formal Methods. With the judicious synthesis of these two, the need for dynamic analysis (a euphemism for simulation) gets pushed to the margins. To be sure, dynamic analysis continues to have a role, but increasingly as a backstop rather than the main thrust of the verification flow. Even where simulation is used, static methods play an important role in improving its efficacy.

Deep Semantic Analysis is about understanding the purpose or role of RTL structures (logic, flip-flops, state machines, etc.) in a design in the context of the verification objective being addressed. This type of intelligence is at the core of everything that Real Intent does, to the extent that it is even ingrained into the company’s name. Much of sign-off happens based just on the deep semantic intelligence in Real Intent’s tools without the invocation of classical formal analysis.


Further, Deep Semantic intelligence and Formal analysis play a symbiotic role to complete the sign-off. Formal analysis benefits from the precisely scoped and contextually well-structured checks generated by virtue of the Deep Semantic intelligence, and Formal analysis proves the supposition of these generated checks.

This combination is efficient for numerous verification objectives in the SoC era.

A key area is X-propagation verification. RTL simulation by its very nature is X-optimistic and can hide bugs or cause RTL and gate-level simulation results to differ. Designers need to understand the X-sensitive constructs in their design and how they can be affected by upstream X-sources. Another area of concern is ensuring that designs come out of power-up in a known state within a given number of clock cycles, and that powered-down blocks do not cause illicit behavior in the active blocks. Static analysis that combines Deep Semantic intelligence with judicious application of Formal methods is the only way to sign off on X-verification objectives in a reasonable amount of time.
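To make the X-optimism problem concrete, here is a small illustrative model in Python (not RTL, and not any tool's algorithm) contrasting pessimistic handling of an unknown control value with the optimistic behavior of an RTL `if`:

```python
# Three-valued logic: True/1, False/0, or "X" (unknown).
X = "X"

def mux(sel, a, b):
    """X-pessimistic mux: if the select is unknown and the inputs
    disagree, the output is unknown."""
    if sel is X:
        return a if a == b else X
    return a if sel else b

def rtl_if(sel, a, b):
    """X-optimistic 'if' as in RTL simulation: an X select is
    silently treated as false, taking the else-branch."""
    if sel is True:
        return a
    return b  # an X select falls through here, hiding the uncertainty

# An unknown select with differing data inputs:
print(mux(X, 1, 0))     # "X"  -- pessimistic: the result is unknown
print(rtl_if(X, 1, 0))  # 0    -- optimistic: the X is masked
```

The second call is the dangerous one: the simulation happily produces a definite 0 even though the hardware's behavior is genuinely unknown, which is exactly how X-related bugs slip past RTL simulation.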

Another iconic example is the verification of clock-domain crossings. Whereas the basic failure modes here have a textbook simplicity, identifying them in real-life RTL so that all potential failures are reported in acceptable run time and without drowning the engineer in noise is a challenging ask. This is an area where the Deep Semantic intelligence in Real Intent’s Meridian CDC tool shines. It is the only product that performs full-chip comprehensive CDC analysis without resorting to abstractions, while also supporting a full-featured hierarchical and distributed workflow. For example, when doing full-chip SoC integration, the details of the IP blocks must be retained intelligently to ensure that “sneak paths” that may be lurking in the IP, and only come into play at the SoC level, can be uncovered. Abstraction models are infamous for ignoring the essential detail that may be needed for top-level analysis. Real Intent has developed data models that allow its analyses to represent even gigascale designs with all the details necessary for comprehensive verification. We like to say that if you are not signing off on CDC with Real Intent’s Meridian, you are not signing off!

Even for RTL linting, which has been a verification tool in use for over 20 years, new data models are needed to deliver gigascale capacity and performance. With the new levels of performance combined with Real Intent’s Deep Semantic intelligence, designers can have answers in minutes and can quickly resolve chip-scale issues that would otherwise have been missed or taken days to resolve. For example, it is often the case that undesired combinational loops get added as IPs are integrated into the SoC. Without tools like Real Intent’s Ascent Lint, such problems would go undetected and manifest as field failures.
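The combinational-loop check that a lint tool performs can be sketched as a cycle search over the signal fanout graph. The Python fragment below is illustrative only; it is not Ascent Lint's implementation, and the net names are invented:

```python
def comb_loops(netlist):
    """Find combinational cycles by depth-first search with coloring
    over a {signal: [fanout signals]} graph.  Illustrative sketch only."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {n: WHITE for n in netlist}
    loops = []

    def dfs(n, path):
        color[n] = GRAY                      # on the current DFS path
        for m in netlist.get(n, []):
            if color.get(m, WHITE) == GRAY:  # back edge: a loop closes
                loops.append(path[path.index(m):] + [m])
            elif color.get(m, WHITE) == WHITE:
                dfs(m, path + [m])
        color[n] = BLACK                     # fully explored

    for n in list(netlist):
        if color[n] == WHITE:
            dfs(n, [n])
    return loops

# 'ack' feeds back into 'req' combinationally after IP integration:
net = {"req": ["grant"], "grant": ["ack"], "ack": ["req"], "data": ["out"]}
print(comb_loops(net))  # [['req', 'grant', 'ack', 'req']]
```

A real lint tool does this over an elaborated netlist with far more bookkeeping, but the core check is this kind of graph traversal, which is why it scales to full-chip runs in minutes.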

Related to the above, we see a fundamental shift in chip verification away from a tool-based mindset toward a verification-objective-driven mindset, which is facilitating sign-off at RTL and anchoring the use of static verification methods. This is supremely beneficial for the SoC paradigm; it would not be an exaggeration to say that without it the SoC design process would have broken down. Static methods shine when the objective is clearly stated and the failure modes are deeply understood. Real Intent has experienced this first hand over the past decade as it has watched the static verification for CDC and early functional verification that it pioneered become entrenched in the SoC verification flow.

The objective-driven approach also points to another reality for SoC design houses: insuring your SoCs against respins is not about having the fastest simulator, ABV or STA tool any more. Neither is it about having an all-in-one tool that does a little bit of a lot of things. Rather, it is about deploying best-in-class solutions with leading-edge performance, capacity, workflow and sign-off quality for key SoC-verification objectives like CDC and X-safe design. We are seeing this message take hold in the high-end SoC design houses. It is imperative that SoC design companies across the full spectrum of SoC types accept this message.

Real Intent is a verification-solutions provider that emphasizes early static verification sign-off. Mostly that means signing off at RTL, but sometimes it could also mean signing-off at the gate-level in order to get an independent validation of the synthesis steps. It also means signing-off on as much as possible before simulation. Any simulation you must do has to be absolutely necessary and tied to a companion static analysis step. With its best-in-class verification-solutions focus, Real Intent sees itself as an enabler of the new age of SoC design.

Jul 3, 2014 | Comments

Reset Optimization Pays Big Dividends Before Simulation

Dr. Pranav Ashar
   CTO of Real Intent

Dr. Pranav Ashar is chief technology officer at Real Intent. He previously worked at NEC Labs developing formal verification technologies for VLSI design. With 35 patents granted and pending, he has authored about 70 papers and co-authored the book ‘Sequential Logic Synthesis’.

This article was originally published on TechDesignForums and is reproduced here by permission.

Reset optimization is another one of those design issues that has leapt in complexity and importance as we have moved to ever more complex system-on-chips. Like clock domain crossing, it is one that we need to resolve to the greatest degree possible before entering simulation.

The traditional approach to resets might have been to route a reset to every flop. Back in the day, you might have done this even though it has always entailed a large overhead in routing. That would help avoid X ‘unknown’ states arising during simulation for every memory location that was not reinitialized at restart. It was a hedge against optimistic behavior by simulation that could hide bugs.

Our objectives today, though, include not only conserving routing resources but also capturing problems as we bring up RTL for simulation, to avoid infeasible run times at both the RT level and – worse still – the gate level.

There is then one other important factor for reset optimization: its close connection to power optimization.

Matching power and performance increasingly involves the use of retention cells. These retain the state of elements of the design even when a block appears to be powered off: in fact, to allow for a faster restart bring-up, these cells must continue to consume static power even when the SoC is ‘at rest’. So, minimizing the use of retention cells cuts power consumption and extends battery life.

Reset the ‘endless’ threat

Resolving such complex issues based purely on simulations will no longer work. It will put you on the path toward so-called ‘endless verification’.

A thorough and intelligent pre-simulation analysis of your reset scheme can now point both to the best reset routing and the minimum number of expensive retention cells you need to implement.

At the pre-simulation stage, tools like Ascent XV, from my company Real Intent, can undertake a pretty smart heuristic analysis of the dependency of one flop’s reset on another and of the relationships between different blocks. They will then produce a report with further insights and characterization, based on formal and structural techniques, that go some way beyond just ‘a best guess’.
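The kind of reset-dependency reasoning described above can be sketched as a fixed-point computation over a flop fanin graph: a flop with no reset pin still reaches a known state once all of its fanin flops are known. This Python fragment is a simplified illustration under that assumption, not the Ascent XV algorithm, and the flop names are invented:

```python
def init_closure(fanin, reset_flops):
    """Propagate 'known-ness' through a {flop: [fanin flops]} graph.
    Flops in reset_flops are known at cycle 0; a flop with fanin
    becomes known one cycle after all of its sources are known.
    Returns {flop: cycle at which its state is deterministic}."""
    known = {f: 0 for f in reset_flops}
    changed = True
    while changed:
        changed = False
        for flop, sources in fanin.items():
            if flop in known or not sources:
                continue
            if all(s in known for s in sources):
                known[flop] = 1 + max(known[s] for s in sources)
                changed = True
    return known  # flops absent from the result never initialize

# Toy design: r0 has a reset; a depends on r0; b on a; c on b and r0.
fanin = {"r0": [], "a": ["r0"], "b": ["a"], "c": ["b", "r0"]}
print(init_closure(fanin, {"r0"}))  # {'r0': 0, 'a': 1, 'b': 2, 'c': 3}
```

A report built from this closure tells the designer which flops genuinely need a routed reset (or a retention cell) and which will settle on their own within a bounded number of cycles.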

The objective is to inform the designer on either the specifics or the flavor of the potential problems in the design. He can then review this report – which ideally should offer some alternatives itself – and undertake reset and related power optimization before moving into full simulation.

Orders of magnitude do apply

The time-savings available are significant. Unresolved reset issues lead, of course, to X states, uncertainties post-simulation that will take considerable time to address. The familiar ‘Rule of 10’ applies: catch a problem earlier and it is a 10X easier fix.

Beyond that, pre-simulation techniques are becoming more powerful with each generation. Our latest release of Ascent XV has enhanced algorithms that in themselves offer a 10X improvement in run-time against the previous generation.

Preparing your code carefully for simulation has a direct benefit at the bottom line by leveraging increasingly mature strategies. Can you afford not to consider them within your flow?

Jun 26, 2014 | Comments

SoC CDC Verification Needs a Smarter Hierarchical Approach

This article was originally published on TechDesignForums and is reproduced here by permission.

Thanks to the widespread reuse of intellectual property (IP) blocks and the difficulty of distributing a system-wide clock across an entire device, today’s system-on-chip (SoC) designs use a large number of clock domains that run asynchronously to each other. A design involving hundreds of millions of transistors can easily incorporate 50 or more clock domains and hundreds of thousands of signals that cross between them.

Although the use of smaller individual clock domains helps improve verification of subsystems apart from the context of the full SoC, the checks required to ensure that the full SoC meets its timing constraints have become increasingly time consuming.

Signals involved in clock domain crossing (CDC), for example where a flip-flop driven by one clock feeds data to a flop driven by a different, asynchronous clock, raise the potential issues of metastability and data loss. Tools based on static verification technology exist to perform CDC checks and recommend the inclusion of more robust synchronizers or other changes to remove the risk of metastability and data loss.
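As background, the standard mitigation is a two-flop synchronizer. The following Python fragment is a toy behavioral model of why it works; it is not how any verification tool analyzes the circuit, and `None` here marks a capture that violated setup/hold:

```python
import random

def two_flop_sync(samples, seed=0):
    """Toy model of a two-flop synchronizer in the destination domain.
    A violating capture (None) makes the first flop resolve randomly
    to 0 or 1, but the second flop only ever sees a settled value, so
    metastability never propagates into the downstream fanout."""
    rng = random.Random(seed)
    ff1, ff2, out = 0, 0, []
    for s in samples:
        ff2_next = ff1                                 # ff2 copies old ff1
        ff1 = rng.choice([0, 1]) if s is None else s   # metastable resolve
        ff2 = ff2_next
        out.append(ff2)                                # fanout sees only ff2
    return out

# One capture violates timing, yet the output stream is always binary:
print(two_flop_sync([0, 1, None, 1, 1]))
```

The model makes the trade-off visible: the downstream logic pays one cycle of latency and may see the old or the new value after a violation, but it never sees an indeterminate one, which is exactly the property CDC tools check that each crossing provides.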

Runtime issues
Conventionally, the verification team would run CDC verification on the entire design database before tapeout, as this is the point at which it becomes possible to perform a holistic check of the clock-domain structure and ensure that every single domain-crossing path is verified. However, on designs that incorporate hundreds of millions of gates, this is becoming impractical, as the compute runtime alone can run into days at a stage where every hour saved or spent is precious. And if CDC verification waits until this point, the number of violations – some of which may be false positives – will potentially generate many weeks of remedial effort, after which another CDC verification cycle needs to be run. To cope with the complexity, CDC verification needs a smarter strategy.

By grouping modules into a hierarchy, the verification team can apply a divide-and-conquer strategy. Not only that, the design team can play a bigger role in ensuring that potential CDC issues are trapped early and checked automatically as the design progresses.

A hierarchical methodology makes it possible to perform CDC checks early and often to ensure design consistency such that, following SoC database assembly, the remaining checks can pass quickly and, most likely, result in a much more manageable collection of potential violations.

Hierarchical obstacles
Traditionally, teams have avoided hierarchical management of CDC issues because of the complexity of organizing the design and of ensuring that paths are not missed. A potential problem is that all known CDC paths within a block may be deemed clean, so the block is considered ‘CDC clean’. But there may be paths that escape attention because they cross the hierarchy boundaries in ways that cannot be caught easily – largely because the tools do not have sufficient information about the logic on the unimplemented side of the interface, and the designer has made incorrect clock-related assumptions about the incoming paths.

If those sneak paths were not present, it would be possible to present the already-verified modules as black boxes to higher levels of hierarchy such that only the outer interfaces need to be verified with the other modules at that level of hierarchy. For hierarchical CDC verification to work effectively, a white- or grey-box abstraction is required in which the verification process at higher levels of hierarchy is able to reach inside the model to ensure that all potential CDC issues are verified.

As the verification environment does not have complete information about the clocking structure before final SoC assembly, reporting will tend to err on the side of caution, flagging up potential issues that may not be true errors. Traditionally, designers would provide waivers for flops on incoming paths that they believe not to be problematic, to avoid them causing repeated errors in later verification runs as the module changes. However, this is a risky strategy as it relies on assumptions about the overall SoC clocking structure that may not be borne out in reality.

Refinements to the model
The waiver model needs to be refined to fit a smart hierarchical CDC verification strategy. Rather than apply waivers, designers with a clear understanding of the internal structure of their blocks can mark flops and related logic to reflect their expectations. Paths that they believe not to be an issue and therefore not require a synchronizer can be marked as such and treated as low priority, focusing attention on those paths that are more likely to reveal serious errors as the SoC design is assembled and verified.

However, unlike paths marked with waivers, these paths are still in the CDC verification environment database. Not only that, they have been categorized by the design engineer to reflect their assumptions. If the tool finds a discrepancy between that assumption and the actual signals feeding into that path, errors will be generated instead of being ignored. This database-driven approach provides a smart infrastructure for CDC verification and establishes a basis for smarter reporting as the project progresses.
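The contrast between waivers and database-driven assumptions can be sketched as follows. This Python fragment is purely illustrative; the record fields and signal names are invented, not any tool's actual schema:

```python
def check_paths(paths, actual_clocks):
    """Each crossing carries the designer's clocking assumption instead
    of a blanket waiver.  If the assumption no longer matches the
    assembled SoC's clocking, the path is escalated to an error; a
    waiver would simply have silenced it.  Illustrative sketch only."""
    errors, low_priority = [], []
    for p in paths:
        assumed = p["assumed_src_clk"]
        actual = actual_clocks[p["src"]]
        if assumed != actual:
            errors.append((p["src"], p["dst"], assumed, actual))
        elif p.get("no_sync_needed"):
            low_priority.append((p["src"], p["dst"]))
    return errors, low_priority

paths = [
    {"src": "ipA.cfg", "dst": "top.ctrl", "assumed_src_clk": "clk_a",
     "no_sync_needed": True},
    {"src": "ipB.dat", "dst": "top.fifo", "assumed_src_clk": "clk_b"},
]
# At SoC assembly, ipB.dat turned out to be on clk_c, not clk_b:
actual = {"ipA.cfg": "clk_a", "ipB.dat": "clk_c"}
errs, low = check_paths(paths, actual)
print(errs)  # the stale assumption is reported, not waived away
print(low)   # the still-valid assumption stays low priority
```

The key design choice is that a designer's judgment is recorded as a checkable assumption rather than as a suppression, so the tool can tell the difference between "reviewed and still valid" and "reviewed but now wrong".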

Smart reporting
Reporting can then be organized around the specification rather than presented as a long list of uncategorized errors that may or may not be false positives. This not only accelerates the review process but allows the work to be distributed among engineers. As the specification is created and paths are marked and categorized, engineers establish what they expect to see in the CDC results, providing the basis for smart reporting from the verification tools.

When structural analysis finds a problematic path that was previously thought to be unaffected by CDC issues, the engineer can zoom in on the problem and deploy formal technologies to establish the root cause and potential solutions. Once fixed, the check can be repeated to ensure that the fix has worked.

The specification-led approach also allows additional attention to be paid to blocks that are likely to lead to verification complications, such as those that employ reconvergent logic. Whereas structural analysis will identify most problems on normal logic, these areas may need closer analysis using formal technology. Because the database-driven methodology allows these sections to be marked clearly, the right verification technology can be deployed at the right time.

By moving away from waivers and black-box models, the database-driven hierarchical CDC methodology encourages design groups to take SoC-oriented clocking issues into account earlier in the design cycle. It also ensures that any concerns about interfaces – which may involve modules designed by groups located elsewhere, or even by different companies – are carried forward to the critical SoC-level analysis without the overhead of repeatedly re-verifying each port on the module. Through earlier CDC analysis and verification, the team reduces the risk of encountering a large number of schedule-killing violations immediately prior to tapeout, and can be far more confident that design deadlines will be met.

Jun 20, 2014 | Comments

Photo Booth Blackmail at DAC in San Francisco!

Graham Bell
   Vice President of Marketing at Real Intent

Real Intent had a photo booth at its exhibit in San Francisco at the Design Automation Conference.  We thought it would be cool to give a photo souvenir of the 51st conference to anyone who strolled by, and to celebrate the 2014 FIFA World Cup.  On hand to work the booth was Jeremy, who helped everyone with funny props or with choosing the right World Cup team jersey.

Between Jeremy and myself we were able to get some great photos.  Here are just a few for your viewing pleasure.   And at the bottom of the page, you can click on the link to see all the blackmail photos for your fellow conference attendees and exhibitors.   Enjoy!

Happy Patriot!


Real Intent is Taking Over!


Who Will Win the World Cup?

Click Here to  See the Full Blackmail Photo Gallery!

Jun 12, 2014 | Comments

Quick Reprise of DAC 2014

Graham Bell
   Vice President of Marketing at Real Intent

Thanks to everyone who came to the 2014 Design Automation Conference.  It was a successful show, with maximum traffic on Tuesday afternoon.  At the Real Intent booth we were giving away roses (yes, they were real!) and had a photo booth as well.  Visitors could dress up in World Cup soccer jerseys and hoist the World Cup 2014 Trophy.


We also had the Official FIFA World Cup Soccer video game to challenge our visitors.


We also had a Partner Passport that visitors could get stamped to win prizes.  At the MathWorks stand in the Automotive Pavilion, visitors could see our Ascent Lint tool integrated with MathWorks’ MATLAB HDL Coder synthesis.  Similarly, Calypto Catapult also has an integration with Ascent Lint to qualify designers’ synthesized RTL code.  We were also demonstrating an integration with our partner Defacto, where Real Intent’s Meridian CDC can send environmental setup information to Defacto’s STAR DFT tool.


Real Intent was the organizer of the “Asymptote of Verification” panel.  It was well attended, with over 80 designers and engineers in the room.  I think this was quite an achievement, as it was the last panel of the day and beer and wine receptions were already underway.  The panelists – Brian Hunter from Cavium, San Jose, CA; Holger Busch from Infineon, Munich, Germany; and Wolfgang Roesner from IBM, Austin, TX – brought their considerable industry experience to the discussion.  Attendees got to hear “the unique attributes of graph-based scenario models, including starting from the intended goal and being able to deterministically generate a test case to get there and that graphs are an effective way to communicate design intent between designers and verification engineers.”

There will be more to tell of what we saw and heard at DAC 2014 in San Francisco in future blog postings.  Until then, please let me know what you saw and heard.

Jun 6, 2014 | Comments

Determining Test Quality through Dynamic Runtime Monitoring of SystemVerilog Assertions

Graham Bell
   Vice President of Marketing at Real Intent

It’s only one month until the Design Automation Conference in San Francisco, June 1-5, and the process of getting ready is keeping me BUSY.  This week, I would like to highlight the DVCon 2014 Best Oral Presentation by Kelly D. Larson from NVIDIA on “Determining Test Quality through Dynamic Runtime Monitoring of SystemVerilog Assertions.”

This paper describes an entirely different way to use SVA assertions. While the standard use of SystemVerilog assertions is typically targeted towards DESIGN QUALITY, this paper describes how to effectively use assertions to target individual TEST QUALITY. In many cases the same SystemVerilog assertions that were written for measuring design quality can also be used to measure test quality, but it’s important to realize that the fundamental goal is quite different.

The PowerPoint slides for Kelly’s presentation are here and are excellent in explaining his intent.

Reproduced below is the paper outlining Kelly’s talk.

May 1, 2014 | Comments

Complexity Drives Smart Reporting in RTL Verification

Lisa Piper, Technical Marketing Manager at Real Intent


This article was originally published on TechDesignForums and is reproduced here by permission.

It’s an increasingly complex world in which we live and that seems to be doubly true of state-machine design.

With protocols such as USB3, PCI Express and a growing number of cache coherent multiprocessor on-chip buses and networks, the designer has been greeted with a state-space explosion. USB3 has, for example, added an entire link layer and, with it, the Link Training and Status State Machine. This is, in itself, a complex entity, which although it has only 12 states in total can move between them using a variety of different arcs.

Within the SoC itself, to maximize bandwidth, we are seeing highly complex processor-to-memory interconnect schemes that allow transactions to be split into smaller entities, with the ability for each master or slave on the interconnect to respond out of order. Not only that, to maintain cache coherency, data may need to be reflected to other nodes as it is returned. State machines that can control this level of activity are, by nature, highly complex. Because of the way that transactions can be split, prioritized and reordered, FSMs are potentially prone to design-killing problems such as deadlock and livelock.

Although it is technically possible to write assertions that can hunt for deadlock conditions or unreachable states, it is generally clear that avoiding these situations is the intent of every designer. Furthermore, writing detailed, comprehensive assertions is not something that a domain expert in cache coherency or bus-interface design has much time to do. It makes far more sense to use a tool that can parse and understand state machines to infer these common intents from the RTL source code, leaving the design and verification teams to concentrate on writing test code to ensure that states are connected by the right transition arcs.
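Two of the checks such a tool can infer automatically, unreachable states and deadlock states, amount to simple graph analyses once the FSM has been extracted from the RTL. This Python sketch is illustrative only, not Ascent IIV's algorithm, and the state names are invented:

```python
def fsm_checks(transitions, start):
    """Given {state: [next states]} and the reset state, report
    (states unreachable from reset, reachable states with no exit arc).
    Illustrative sketch of the inferred-intent checks only."""
    # Reachability by depth-first search from the reset state.
    reachable, stack = set(), [start]
    while stack:
        s = stack.pop()
        if s in reachable:
            continue
        reachable.add(s)
        stack.extend(transitions.get(s, []))
    all_states = set(transitions) | {d for ds in transitions.values() for d in ds}
    unreachable = sorted(all_states - reachable)
    deadlocks = sorted(s for s in reachable if not transitions.get(s))
    return unreachable, deadlocks

# Toy FSM: ERROR has no exit arc (deadlock), TEST is never entered.
t = {"IDLE": ["RUN"], "RUN": ["IDLE", "ERROR"], "ERROR": [], "TEST": ["IDLE"]}
print(fsm_checks(t, "IDLE"))  # (['TEST'], ['ERROR'])
```

Livelock detection is harder (it needs cycle analysis under fairness assumptions rather than plain reachability), which is where the formal engines behind such tools earn their keep.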

Verification Automation

Automated checking makes it possible to deploy verification tools across a wider group of engineers, in both design and verification, so that they can erase bugs in their designs faster and earlier. The technology also improves their ability to harden IP before it is released to other SoC groups that need to use these complex controllers.

A potential hazard of automated intent checking is that the tool may not prioritize the errors that really matter. A problem in one condition in part of the RTL may trigger a number of ancillary errors that the tool dutifully reports, but which obscure the root cause that, if fixed, would also solve many of the secondary problems. This is where smart reporting plays an important role.

Smart reporting looks one level deeper at the design and assembles the errors that really matter so that the designer is not forced to wade through a series of reports that, in reality, are simply shadows of the root cause. This smart reporting is a key component of the latest release of the Ascent Implied Intent Verification (IIV) automatic formal tool.

In a project at a major customer, Ascent IIV found some 3,000 failures in a block of 130,000 gates. More importantly, rather than forcing the designer to look at each one in detail, it narrowed down the causes of those errors to fewer than 200 – cutting out 94 per cent of the reporting noise that the design team would have seen from a tool without such smart analysis and reporting technology.

To ease debugging once the errors have been flagged up, Ascent IIV lets the user trace back to state-transition assignments, making it easier and faster to make changes to the RTL. To support the latest design and verification flows, Ascent IIV adds support for SystemVerilog 1800-2009. The result is that, even as state machines become ever more complex, verification tools are more than keeping pace.


Apr 24, 2014 | Comments