Real Talk Blog

Blog Archive
September 2014
9/19/2014: It’s Time to Embrace Objective-driven Verification
9/12/2014: Autoformal: The Automatic Vacuum for Your RTL Code
9/04/2014: How Bad is Your HDL Code? Be the First to Find out!
August 2014
8/29/2014: Fundamentals of Clock Domain Crossing: Conclusion
8/21/2014: Video Keynote: New Methodologies Drive EDA Revenue Growth
8/15/2014: SoCcer: Defending your Digital Design
8/08/2014: Executive Insight: On the Convergence of Design and Verification
July 2014
7/31/2014: Fundamentals of Clock Domain Crossing Verification: Part Four
7/24/2014: Fundamentals of Clock Domain Crossing Verification: Part Three
7/17/2014: Fundamentals of Clock Domain Crossing Verification: Part Two
7/10/2014: Fundamentals of Clock Domain Crossing Verification: Part One
7/03/2014: Static Verification Leads to New Age of SoC Design
June 2014
6/26/2014: Reset Optimization Pays Big Dividends Before Simulation
6/20/2014: SoC CDC Verification Needs a Smarter Hierarchical Approach
6/12/2014: Photo Booth Blackmail at DAC in San Francisco!
6/06/2014: Quick Reprise of DAC 2014
May 2014
5/01/2014: Determining Test Quality through Dynamic Runtime Monitoring of SystemVerilog Assertions
April 2014
4/24/2014: Complexity Drives Smart Reporting in RTL Verification
4/17/2014: Video Update: New Ascent XV Release for X-optimization, ChipEx show in Israel, DAC Preview
4/11/2014: Design Verification is Shifting Left: Earlier, Focused and Faster
4/03/2014: Redefining Chip Complexity in the SoC Era
March 2014
3/27/2014: X-Verification: A Critical Analysis for a Low-Power World (Video)
3/14/2014: Engineers Have Spoken: Design And Verification Survey Results
3/06/2014: New Ascent IIV Release Delivers Enhanced Automatic Verification of FSMs
February 2014
2/28/2014: DVCon Panel Drill Down: “Where Does Design End and Verification Begin?” – Part 3
2/20/2014: DVCon Panel Drill Down: “Where Does Design End and Verification Begin?” – Part 2
2/13/2014: DVCon Panel Drill Down: “Where Does Design End and Verification Begin?” – Part 1
2/07/2014: Video Tech Talk: Changes In Verification
January 2014
1/31/2014: Progressive Static Verification Leads to Earlier and Faster Timing Sign-off
1/30/2014: Verific’s Front-end Technology Leads to Success and a Giraffe!
1/23/2014: CDC Verification of Fast-to-Slow Clocks – Part Three: Metastability Aware Simulation
1/16/2014: CDC Verification of Fast-to-Slow Clocks – Part Two: Formal Checks
1/10/2014: CDC Verification of Fast-to-Slow Clocks – Part One: Structural Checks
1/02/2014: 2013 Highlights And Giga-scale Predictions For 2014
December 2013
12/13/2013: Q4 News, Year End Summary and New Videos
12/12/2013: Semi Design Technology & System Drivers Roadmap: Part 6 – DFM
12/06/2013: The Future is More than “More than Moore”
November 2013
11/27/2013: Robert Eichner’s presentation at the Verification Futures Conference
11/21/2013: The Race For Better Verification
11/18/2013: Experts at the Table: The Future of Verification – Part 2
11/14/2013: Experts At The Table: The Future Of Verification Part 1
11/08/2013: Video: Orange Roses, New Product Releases and Banner Business at ARM TechCon
October 2013
10/31/2013: Minimizing X-issues in Both Design and Verification
10/23/2013: Value of a Design Tool Needs More Sense Than Dollars
10/17/2013: Graham Bell at EDA Back to the Future
10/15/2013: The Secret Sauce for CDC Verification
10/01/2013: Clean SoC Initialization now Optimal and Verified with Ascent XV
September 2013
9/24/2013: EETimes: An Engineer’s Progress With Prakash Narain, Part 4
9/20/2013: EETimes: An Engineer’s Progress With Prakash Narain, Part 3
9/20/2013: CEO Viewpoint: Prakash Narain on Moving from RTL to SoC Sign-off
9/17/2013: Video: Ascent Lint – The Best Just Got Better
9/16/2013: EETimes: An Engineer’s Progress With Prakash Narain, Part 2
9/16/2013: EETimes: An Engineer’s Progress With Prakash Narain
9/10/2013: SoC Sign-off Needs Analysis and Optimization of Design Initialization in the Presence of Xs
August 2013
8/15/2013: Semiconductor Design Technology and System Drivers Roadmap: Process and Status – Part 4
8/08/2013: Semiconductor Design Technology and System Drivers Roadmap: Process and Status – Part 3
July 2013
7/25/2013: Semiconductor Design Technology and System Drivers Roadmap: Process and Status – Part 2
7/18/2013: Semiconductor Design Technology and System Drivers Roadmap: Process and Status – Part 1
7/16/2013: Executive Video Briefing: Prakash Narain on RTL and SoC Sign-off
7/05/2013: Lending a ‘Formal’ Hand to CDC Verification: A Case Study of Non-Intuitive Failure Signatures — Part 3
June 2013
6/27/2013: Bryon Moyer: Simpler CDC Exception Handling
6/21/2013: Lending a ‘Formal’ Hand to CDC Verification: A Case Study of Non-Intuitive Failure Signatures — Part 2
6/17/2013: Peggy Aycinena’s interview with Prakash Narain
6/14/2013: Lending a ‘Formal’ Hand to CDC Verification: A Case Study of Non-Intuitive Failure Signatures — Part 1
6/10/2013: Photo Booth Blackmail!
6/03/2013: Real Intent is on John Cooley’s “DAC’13 Cheesy List”
May 2013
5/30/2013: Does SoC Sign-off Mean More Than RTL?
5/24/2013: Ascent Lint Rule of the Month: DEFPARAM
5/23/2013: Video: Gary Smith Tells Us Who and What to See at DAC 2013
5/22/2013: Real Intent is on Gary Smith’s “What to see at DAC” List!
5/16/2013: Your Real Intent Invitation to Fun and Fast Verification at DAC
5/09/2013: DeepChip: “Real Intent’s not-so-secret DVcon’13 Report”
5/07/2013: TechDesignForum: Better analysis helps improve design quality
5/03/2013: Unknown Sign-off and Reset Analysis
April 2013
4/25/2013: Hear Alexander Graham Bell Speak from the 1880′s
4/19/2013: Ascent Lint rule of the month: NULL_RANGE
4/16/2013: May 2 Webinar: Automatic RTL Verification with Ascent IIV: Find Bugs Simulation Can Miss
4/05/2013: Conclusion: Clock and Reset Ubiquity – A CDC Perspective
March 2013
3/22/2013: Part Six: Clock and Reset Ubiquity – A CDC Perspective
3/21/2013: The BIG Change in SoC Verification You Don’t Know About
3/15/2013: Ascent Lint Rule of the Month: COMBO_NBA
3/15/2013: System-Level Design Experts At The Table: Verification Strategies – Part One
3/08/2013: Part Five: Clock and Reset Ubiquity – A CDC Perspective
3/01/2013: Quick DVCon Recap: Exhibit, Panel, Tutorial and Wally’s Keynote
3/01/2013: System-Level Design: Is This The Era Of Automatic Formal Checks For Verification?
February 2013
2/26/2013: Press Release: Real Intent Technologist Presents Power-related Paper and Tutorial at ISQED 2013 Symposium
2/25/2013: At DVCon: Pre-Simulation Verification for RTL Sign-Off includes Automating Power Optimization and DFT
2/25/2013: Press Release: Real Intent to Exhibit, Participate in Panel and Present Tutorial at DVCon 2013
2/22/2013: Part Four: Clock and Reset Ubiquity – A CDC Perspective
2/18/2013: Does Extreme Performance Mean Hard-to-Use?
2/15/2013: Part Three: Clock and Reset Ubiquity – A CDC Perspective
2/07/2013: Ascent Lint Rule of the Month: ARITH_CONTEXT
2/01/2013: “Where Does Design End and Verification Begin?” and DVCon Tutorial on Static Verification
January 2013
1/25/2013: Part Two: Clock and Reset Ubiquity – A CDC Perspective
1/18/2013: Part One: Clock and Reset Ubiquity – A CDC Perspective
1/07/2013: Ascent Lint Rule of the Month: MIN_ID_LEN
1/04/2013: Predictions for 2014, Hier. vs Flat, Clocks and Bugs
December 2012
12/14/2012: Real Intent Reports on DVClub Event at Microprocessor Test and Verification Workshop 2012
12/11/2012: Press Release: Real Intent Records Banner Year
12/07/2012: Press Release: Real Intent Rolls Out New Version of Ascent Lint for Early Functional Verification
12/04/2012: Ascent Lint Rule of the Month: OPEN_INPUT
November 2012
11/19/2012: Real Intent Has Excellent EDSFair 2012 Exhibition
11/16/2012: Peggy Aycinena: New Look, New Location, New Year
11/14/2012: Press Release: New Look and New Headquarters for Real Intent
11/05/2012: Ascent Lint HDL Rule of the Month: ZERO_REP
11/02/2012: Have you had CDC bugs slip through resulting in late ECOs or chip respins?
11/01/2012: DAC survey on CDC bugs, X propagation, constraints
October 2012
10/29/2012: Press Release: Real Intent to Exhibit at ARM TechCon 2012 – Chip Design Day
September 2012
9/24/2012: Photos of the space shuttle Endeavour from the Real Intent office
9/20/2012: Press Release: Real Intent Showcases Verification Solutions at Verify 2012 Japan
9/14/2012: A Bolt of Inspiration
9/11/2012: ARM blog: An Advanced Timing Sign-off Methodology for the SoC Design Ecosystem
9/05/2012: When to Retool the Front-End Design Flow?
August 2012
8/27/2012: X-Verification: What Happens When Unknowns Propagate Through Your Design
8/24/2012: Article: Verification challenges require surgical precision
8/21/2012: How To Article: Verifying complex clock and reset regimes in modern chips
8/20/2012: Press Release: Real Intent Supports Growth Worldwide by Partnering With EuropeLaunch
8/06/2012: SemiWiki: The Unknown in Your Design Can be Dangerous
8/03/2012: Video: “Issues and Struggles in SOC Design Verification”, Dr. Roger Hughes
July 2012
7/30/2012: Video: What is Driving Lint Usage in Complex SOCs?
7/25/2012: Press Release: Real Intent Adds to Japan Presence: Expands Office, Increases Staff to Meet Demand for Design Verification and Sign-Off Products
7/23/2012: How is Verification Complexity Changing, and What is the Impact on Sign-off?
7/20/2012: Real Intent in Brazil
7/16/2012: Foosball, Frosty Beverages and Accelerating Verification Sign-off
7/03/2012: A Good Design Tool Needs a Great Beginning
June 2012
6/14/2012: Real Intent at DAC 2012
6/01/2012: DeepChip: Cheesy List for DAC 2012
May 2012
5/31/2012: EDACafe: Your Real Intent Invitation to Fast Verification and Fun at DAC
5/30/2012: Real Intent Video: New Ascent Lint and Meridian CDC Releases and Fun at DAC 2012
5/29/2012: Press Release: Real Intent Leads in Speed, Capacity and Precision with New Releases of Ascent Lint and Meridian CDC Verification Tools
5/22/2012: Press Release: Over 35% Revenue Growth in First Half of 2012
5/21/2012: Thoughts on RTL Lint, and a Poem
5/21/2012: Real Intent is #8 on Gary Smith’s “What to see at DAC” List!
5/18/2012: EETimes: Gearing Up for DAC – Verification demos
5/08/2012: Gabe on EDA: Real Intent Helps Designers Verify Intent
5/07/2012: EDACafe: A Page is Turned
5/07/2012: Press Release: Graham Bell Joins Real Intent to Promote Early Functional Verification & Advanced Sign-Off Circuit Design Software
March 2012
3/21/2012: Press Release: Real Intent Demos EDA Solutions for Early Functional Verification & Advanced Sign-off at Synopsys Users Group (SNUG)
3/20/2012: Article: Blindsided by a glitch
3/16/2012: Gabe on EDA: Real Intent and the X Factor
3/10/2012: DVCon Video Interview: “Product Update and New High-capacity ‘X’ Verification Solution”
3/01/2012: Article: X-Propagation Woes: Masking Bugs at RTL and Unnecessary Debug at the Netlist
February 2012
2/28/2012: Press Release: Real Intent Joins Cadence Connections Program; Real Intent’s Advanced Sign-Off Verification Capabilities Added to Leading EDA Flow
2/15/2012: Real Intent Improves Lint Coverage and Usability
2/15/2012: Avoiding the Titanic-Sized Iceberg of Downton Abbey
2/08/2012: Gabe on EDA: Real Intent Meridian CDC
2/08/2012: Press Release: At DVCon, Real Intent Verification Experts Present on Resolving X-Propagation Bugs; Demos Focus on CDC and RTL Debugging Innovations
January 2012
1/24/2012: A Meaningful Present for the New Year
1/11/2012: Press Release: Real Intent Solidifies Leadership in Clock Domain Crossing
August 2011
8/02/2011: A Quick History of Clock Domain Crossing (CDC) Verification
July 2011
7/26/2011: Hardware-Assisted Verification and the Animal Kingdom
7/13/2011: Advanced Sign-off…It’s Trending!
May 2011
5/24/2011: Learn about Advanced Sign-off Verification at DAC 2011
5/16/2011: Getting A Jump On DAC
5/09/2011: Livin’ on a Prayer
5/02/2011: The Journey to CDC Sign-Off
April 2011
4/25/2011: Getting You Closer to Verification Closure
4/11/2011: X-verification: Conquering the “Unknown”
4/05/2011: Learn About the Latest Advances in Verification Sign-off!
March 2011
3/21/2011: Business Not as Usual
3/15/2011: The Evolution of Sign-off
3/07/2011: Real People, Real Discussion – Real Intent at DVCon
February 2011
2/28/2011: The Ascent of Ascent Lint (v1.4 is here!)
2/21/2011: Foundation for Success
2/08/2011: Fairs to Remember
January 2011
1/31/2011: EDA Innovation
1/24/2011: Top 3 Reasons Why Designers Switch to Meridian CDC from Real Intent
1/17/2011: Hot Topics, Hot Food, and Hot Prize
1/10/2011: Satisfaction EDA Style!
1/03/2011: The King is Dead. Long Live the King!
December 2010
12/20/2010: Hardware Emulation for Lowering Production Testing Costs
12/03/2010: What do you need to know for effective CDC Analysis?
November 2010
11/12/2010: The SoC Verification Gap
11/05/2010: Building Relationships Between EDA and Semiconductor Ventures
October 2010
10/29/2010: Thoughts on Assertion Based Verification (ABV)
10/25/2010: Who is the master who is the slave?
10/08/2010: Economics of Verification
10/01/2010: Hardware-Assisted Verification Tackles Verification Bottleneck
September 2010
9/24/2010: Excitement in Electronics
9/17/2010: Achieving Six Sigma Quality for IC Design
9/03/2010: A Look at Transaction-Based Modeling
August 2010
8/20/2010: The 10 Year Retooling Cycle
July 2010
7/30/2010: Hardware-Assisted Verification Usage Survey of DAC Attendees
7/23/2010: Leadership with Authenticity
7/16/2010: Clock Domain Verification Challenges: How Real Intent is Solving Them
7/09/2010: Building Strong Foundations
7/02/2010: Celebrating Freedom from Verification
June 2010
6/25/2010: My DAC Journey: Past, Present and Future
6/18/2010: Verifying Today’s Large Chips
6/11/2010: You Got Questions, We Got Answers
6/04/2010: Will 70 Remain the Verification Number?
May 2010
5/28/2010: A Model for Justifying More EDA Tools
5/21/2010: Mind the Verification Gap
5/14/2010: ChipEx 2010: a Hot Show under the Hot Sun
5/07/2010: We Sell Canaries
April 2010
4/30/2010: Celebrating 10 Years of Emulation Leadership
4/23/2010: Imagining Verification Success
4/16/2010: Do you have the next generation verification flow?
4/09/2010: A Bug’s Eye View under the Rug of SNUG
4/02/2010: Globetrotting 2010
March 2010
3/26/2010: Is Your CDC Tool of Sign-Off Quality?
3/19/2010: DATE 2010 – There Was a Chill in the Air
3/12/2010: Drowning in a Sea of Information
3/05/2010: DVCon 2010: Awesomely on Target for Verification
February 2010
2/26/2010: Verifying CDC Issues in the Presence of Clocks with Dynamically Changing Frequencies
2/19/2010: Fostering Innovation
2/12/2010: CDC (Clock Domain Crossing) Analysis – Is this a misnomer?
2/05/2010: EDSFair – A Successful Show to Start 2010
January 2010
1/29/2010: Ascent Is Much More Than a Bug Hunter
1/22/2010: Ascent Lint Steps up to Next Generation Challenges
1/15/2010: Google and Real Intent, 1st Degree LinkedIn
1/08/2010: Verification Challenges Require Surgical Precision
1/07/2010: Introducing Real Talk!

It’s Time to Embrace Objective-driven Verification

Dr. Pranav Ashar, CTO of Real Intent

This article was originally published on TechDesignForums and is reproduced here by permission.

Consider the Wall Street controversy over High Frequency Trading (HFT). Set aside its ethical (and legal) aspects. Concentrate on the technology. HFT exploits customized IT systems that allow certain banks to place ‘buy’ or ‘sell’ stock orders ahead of rivals, sometimes by mere milliseconds. That tiny advantage can make enough difference to the share price paid that HFT users are said to profit on more than 90% of trades.

Now look back to the early days of electronic trading. Competitive advantage then came down to how quickly you adopted an off-the-shelf, one-size-fits-all e-trading package.

Banking has long been at computing’s cutting edge. What HFT illustrates today is a progressive shift in the strategy it uses to develop systems from tool-based (‘We have bought an e-trading system’) to objective-driven (‘Make our e-trades the fastest and most profitable’).

As I said, I want to set aside the fair/unfair debate around HFT, and take it simply as a high-profile illustration of how Wall Street’s approach to IT is evolving. Banks are continuously developing other systems based on objective-driven thinking. My point is that we can draw important lessons for SoC design from this overall shift, because we are moving – and need to move – in the same direction toward objective-driven verification. Our version is less controversial (thankfully), but we should still follow the trend more aggressively.

Wall Street’s riches point the way for objective-driven verification

‘Objective-driven verification’ defined

What do we mean by ‘objective-driven’? At a high level, the mindset of the system architect has changed: He has gone from identifying useful tools and deploying them in isolation to starting with a pre-defined goal that is achieved through a customized synthesis of available tools and methods.

Going deeper, one can identify two triggers:

  1. A recognition that systemic tasks have become so complex it is very unlikely that you can fully realize them using a single raw tool, or even a few. Multiple tools and techniques must be combined and used in a fuller context.
  2. A deeper understanding of the inner workings of complex systems that allows architects to isolate the processes and cause-effect relationships relevant to their objectives.

These triggers describe IT trends in logic verification as well as in banking.

The ‘system’ in verification is the SoC. The raw tools are, first and foremost, simulation, but also static timing analysis and formal analysis. After a healthy run of around 25 years, SoC complexity has caught up with and overtaken this coarse-grain raw-tool model.

Objective-driven verification begins with that deeper understanding of the SoC architecture and the processes involved in putting it together. The objectives themselves emerge from today’s greater knowledge of failure modes and hard-to-achieve verification goals.

The model moves away from treating logic verification as monolithic. It focuses instead on specific goals. For each, we now know that custom solutions are more effective. Objective-driven verification rewards us with a much deeper, much cheaper process.

Raw tools play a role but have become interchangeable and commoditized. The productivity of an SoC design group is no longer determined by the use of a particular simulator. Rather, productivity and the viability of the design depend on how well the group adopts objective-driven solutions.

The value today therefore resides in a layer that sits on top of the commoditized raw tools and embeds deep knowledge of different failure modes within a structured workflow. This is where your big verification dollars need to be spent.

It is a disruption of a logic verification business model long based on selling raw tools. Nevertheless, the assertion that future growth will come from objective-driven verification is already well illustrated in two specific instances.

Objective-driven verification is already here

Take verification for failures caused by asynchronous clock-domain crossing (CDC). Until recently, it entailed manual design review and the use of specialized synchronizer library cells in simulation. You bought a fast simulator and then pounded stimuli onto the special cell-equipped model. This worked for crossings up to, say, the dozens. But as they grew in number and complexity, the approach broke down. Asynchronous-crossing failures increased alarmingly.

In response SoC designers, aided by vendors like Real Intent, have carved out asynchronous-CDC as a distinct objective-driven verification task. They have adopted dedicated solutions and workflows that address the problem to sign-off. Objective: “There will be no failures caused by asynchronous crossings.”

This 2012 survey shows why CDC has seen fast adoption of objective-driven verification

Real Intent’s asynchronous CDC solution stack illustrates an objective-driven verification process. It starts with a first-principles understanding of the failure modes. Around that is built a synergy of structural analysis methods, formal analysis methods and simulation hooks. A workflow then guides the user through an iterative chip-environment setup and the progressive refinement of verification results until full-chip sign-off is achieved.

This workflow component shows that objective-driven verification goes beyond a simple rediscovery of the ‘point tool’. Context, relationships with other ‘objectives’ and their solutions, relevance to the overall goal, and even the UI play subtle but important roles they did not play in the point-tool era.

Every SoC taped out today goes through an explicit asynchronous CDC sign-off based on a dedicated static solution of this type. However, I would note that the workflows associated with different solutions are materially different and lead to measurably different levels of productivity and quality of final results.

Objective-driven verification is also becoming the norm in X propagation. Logic simulation has long been an imperfect tool here: It can still incorrectly turn a deterministic value into an X, or an X into a deterministic value. The second effect is worse because it can mask bugs, giving false confidence in the chip’s correctness.
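
To see the second effect in miniature, consider this small hypothetical Verilog fragment. Under standard simulation semantics, an X condition in an if statement sends execution down the else branch as though the condition were 0, so the unknown never reaches the output and a real initialization bug can be masked:

    module x_optimism_demo (
      input            sel,     // may be X, e.g. from an uninitialized register
      input      [7:0] a, b,
      output reg [7:0] y
    );
      always @* begin
        if (sel)
          y = a;
        else
          y = b;  // also taken when sel is X: the X never propagates to y
      end
    endmodule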

These insidious failures make it imperative that SoC design teams deploy objective-driven verification to catch them early. The same template applies as for asynchronous CDC: Synergistic structural and formal analysis with simulation hooks are joined to an intuitive and iterative workflow. This delivers progressively better results.

The list of high-value goals to which we can apply objective-based verification is getting longer. The broader concept is spreading quickly out from Wall Street’s deep-pocketed IT pioneers. And importantly for SoC design, objective-based verification techniques for asynchronous CDC and X-effects already demonstrate a value you can – well – take to the bank.



Sep 19, 2014 | Comments


Autoformal: The Automatic Vacuum for Your RTL Code

Graham Bell, Vice President of Marketing at Real Intent

The Roomba automatic vacuum cleaner may be the most popular home robot in the world.   It wakes up, wanders around your house collecting ‘dust bunnies’ and other dirt and then parks itself, where it can recharge and be ready for the next cleaning cycle.


Real Intent also offers an automatic tool that cleans up your RTL code. Ascent IIV is an autoformal tool that automatically analyzes the implied intent of your RTL code. It verifies different kinds of sequences and reports back on those that are suspicious. Because the analysis is smart and hierarchical, it reports primary errors that, when corrected, can remove a cascade of secondary errors.

Here is a quick list of checks that Ascent IIV automatically performs, with a small sketch of one such issue after the list:

  • FSM deadlocks and unreachable states
  • Bus contention and floating busses
  • Full- and Parallel-case pragma violations
  • X-value propagation
  • Array bounds
  • Constant RTL expressions, nets & state vector bits
  • Dead code
  • SystemVerilog ‘unique’, ‘unique0’, and ‘priority’ checks for if and case constructs
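
As a concrete (and hypothetical) example of the first check, the FSM below deadlocks: once ERR is entered, no transition ever leaves it, something an autoformal tool can report without any testbench:

    module fsm_example (
      input  clk, rst_n, start, fail,
      output busy
    );
      localparam [1:0] IDLE = 2'd0, RUN = 2'd1, DONE = 2'd2, ERR = 2'd3;
      reg [1:0] state;

      always @(posedge clk or negedge rst_n) begin
        if (!rst_n)
          state <= IDLE;
        else
          case (state)
            IDLE: state <= start ? RUN : IDLE;
            RUN : state <= fail  ? ERR : DONE;
            DONE: state <= IDLE;
            ERR : state <= ERR;  // deadlock: ERR has no exit transition
          endcase
      end

      assign busy = (state == RUN);
    endmodule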

In July, Real Intent announced a new release of Ascent IIV. Here is a video interview with Lisa Piper, senior technical marketing manager, discussing how IIV makes debug even easier with new features such as causation trees and focused custom reports.

Sep 12, 2014 | Comments


How Bad is Your HDL Code? Be the First to Find out!

Graham Bell, Vice President of Marketing at Real Intent

(Cartoon courtesy of Andy Glover, cartoontester.blogspot.com)

It is a fact of life that as soon as RTL designers start writing the code for their modules, they will begin to introduce unintended errors. To eliminate these errors, designers use a variety of tools to ensure the code is correct before hand-off. Functional errors are typically caught by a mix of static tools (auto-formal and assertion-based) and simulation. Before designers start to uncover functional errors, however, their code should pass RTL linting. Linting delivers very quick feedback on troublesome and even dangerous coding styles that cause problems which surface in simulation but take far longer to diagnose there. With the right lint tool, you can catch the “low-hanging fruit” before tackling functional errors.
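
For illustration, here is a small hypothetical module containing three classic findings an early lint pass would flag: an incomplete sensitivity list, a lost carry bit, and an unintended latch:

    module lint_example (
      input      [3:0] a, b,
      input            sel,
      output reg [3:0] y
    );
      always @(a or sel) begin  // 'b' is missing from the sensitivity list
        if (sel)
          y = a + b;            // carry bit of the add is lost in 4-bit 'y'
                                // no else branch: 'y' infers a latch
      end
    endmodule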

As the code goes through refinement by a developer, Real Intent’s Ascent Lint is applicable at every stage of RTL maturity. Designers can be working with a mix of internally developed and external IPs with different levels of maturity and compatibility. And they can check their RTL early and often through development, confident it is ready for integration with other modules.

In order to bring this mix of IPs together under one umbrella, Real Intent recommends using a succession of lint policy files. Each policy file applies a set of lint rules intended to bring the code to a significantly greater level of maturity toward quality RTL. The policies are tailored to apply across the broad spectrum of design types, but may be adjusted as needed. Design teams, after careful consideration, may skip individual steps in the flow in keeping with their priorities. Additionally, the sequence of policies is optimized for early detection, faster debug and low noise. Here again, design teams may choose to re-order the recommendations based on their best practices.

HDL maturity is broadly classified into three stages of Initial, Mature and Handoff with an associated policy file. The three stages are defined as follows:

  1. Initial RTL – Initial RTL represents the early phase where the requirements may still be evolving. Checks at this stage ensure that regression and build failures are caught early.
  2. Mature RTL – Modeling costs, simulation-synthesis mismatches, FSM complexity, etc. are higher order aspects of freeze-ready RTL that can significantly impact the design quality. The Mature RTL checks ensure necessary conditions for downstream interoperability.
  3. Handoff RTL – At the handoff stage, the checks are geared towards compliance with industry standards or internal conventions, to allow easy integration and reuse.

RTL Lint stages

By fixing errors earlier in the design flow, with static verification such as Lint, significant project timeline savings can be achieved. Designers realize maximum productivity through using a staged set of policy files that address each level of code maturity. And designers can be confident that when their code is integrated into the project, they will not “look bad” to other team members, since they are delivering quality RTL for downstream simulation and implementation.



Sep 4, 2014 | Comments


Fundamentals of Clock Domain Crossing: Conclusion

Graham Bell, Vice President of Marketing at Real Intent

In our last post in the series, part four, we looked at the costs associated with debugging and sign-off verification. In this final posting, we propose a practical and efficient CDC verification methodology.

Template recognition vs. report quality trade-off

The first-generation CDC tools employed structural analysis as the primary verification technology. Given the lack of precision of this technology, users are often required to specify structural templates for verification. Given the size and complexity of today’s SOCs, this template specification becomes a cumbersome process where debugging cost is traded for setup cost. Also, the checking limitations imposed by templates may reduce the report volume, but they also increase the risk of missing errors. In general, template-based checking requires significant manual effort for effective utilization.

Top-level vs. block-level verification trade-off

Top-level verification reduces the setup requirements for CDC verification but can result in higher debugging cost while the design is still maturing. On the other hand, block-level verification identifies errors earlier and at smaller complexity levels, creating a cleaner top-level verification. The top-level debugging cost is reduced but the overall setup and run-time cost increases.

RTL vs. netlist verification trade-off

As mentioned earlier, netlist analysis can cover all the CDC error sources, but the debugging cost is very high at the netlist level. Also, delaying the detection of errors until much later in the design cycle can have a serious impact on schedules. RTL analysis, however, does not cover all CDC-error sources, and this requires that CDC verification also be run on netlists.

A practical and efficient CDC verification methodology

After evaluating the various considerations as mentioned above, we recommend the following CDC-verification methodology to accomplish high-quality verification with minimal engineering cost:

  • Automatically create the functional setup for the top-level design by leveraging SDC.
  • Automatically complete the functional setup.
  • Use setup verification techniques to refine top-level functional setup.
  • Identify the sub-blocks for initial CDC verification.
  • Automatically generate block-level functional setup from the top-level.
  • Run thorough block level CDC verification.
    • Examine the generated functional setup for correctness.
    • Run structural analysis.
    • Identify and fix gross design errors or refine functional setup.
    • Run formal analysis for precise error identification.
    • Debug and fix design or refine functional setup.
    • Iterate verification steps until clean.
  • Run thorough top-level CDC verification with block-level result inheritance.
  • Run thorough netlist CDC verification.

Figure 16. A top-down, bottom-up verification flow.

Figure 17 compares the characteristics of first- and second-generation CDC tools across seven different categories. It summarizes the advantages of this new generation of design verification with the most dramatic change being in the efficiency of sign-off warnings, debug and verification methodology. We believe that sign-off verification is now possible and more importantly is a requirement for complex SOC designs.


Figure 17. Spider chart for first-generation and second-generation CDC tools.

In summary

Today, the number of clock domains in a complex SOC design can easily exceed 100 and the gate-count is well over 100 million instances. The first generation of CDC tools were not engineered to handle this kind of complexity and a second-generation tool-set is essential to reduce CDC failure risk and to avoid wasting engineering resources. This second generation maximizes automation and uses special formal techniques and automatic generation of top-level and block-level setups to accomplish high-quality verification. A hierarchical top-down, bottom-up methodology that takes advantage of the inherited results of both top- and block-level analysis minimizes the manual debug effort in CDC verification.



Aug 29, 2014 | Comments


Video Keynote: New Methodologies Drive EDA Revenue Growth

Graham Bell, Vice President of Marketing at Real Intent

Wally Rhines from Mentor gave an excellent keynote at the 51st Design Automation Conference on how EDA grows by solving new problems.  In his short talk, he references an earlier keynote he gave back in 2004 and what has changed in the EDA industry since that time.

Here is a quick quote from his presentation: “Our capability in EDA today is largely focused on being able to verify that a chip does what it’s supposed to do. The problem of verifying that it doesn’t do anything it’s NOT supposed to do is a much more difficult one, a bigger one, but one for which governments and corporations would pay billions of dollars to even partially solve.”

Where do you think future growth will come in EDA?


The original video is from the DAC web-site video archive and can be seen here.  Wally’s full presentation is here.

Biography

WALDEN C. RHINES is Chairman and Chief Executive Officer of Mentor Graphics, a leader in worldwide electronic design automation with revenue of $1.2 billion in 2013. During his tenure at Mentor Graphics, revenue has more than tripled and Mentor has grown to hold the industry’s number-one market-share solutions in four of the ten largest product segments of the EDA industry.

Prior to joining Mentor Graphics, Rhines was Executive Vice President of Texas Instruments’ Semiconductor Group, sharing responsibility for TI’s Components Sector, and having direct responsibility for the entire semiconductor business with more than $5 billion of revenue and over 30,000 people.

During his 21 years at TI, Rhines managed TI’s thrust into digital signal processing and supervised that business from its inception with the TMS320 family of DSPs through its growth to become the cornerstone of TI’s semiconductor technology. He also supervised the development of the first TI speech-synthesis devices (used in “Speak & Spell”) and is co-inventor of the GaN blue-violet light-emitting diode (now important for DVD players and low-energy lighting). He was President of TI’s Data Systems Group and held numerous other semiconductor executive management positions.

Rhines has served five terms as Chairman of the Electronic Design Automation Consortium and is currently serving as co-vice-chairman. He is also a board member of the Semiconductor Research Corporation and First Growth Family & Children Charities. He has previously served as chairman of the Semiconductor Technical Advisory Committee of the Department of Commerce, as an executive committee member of the board of directors of the Corporation for Open Systems and as a board member of the Computer and Business Equipment Manufacturers’ Association (CBEMA), SEMI-Sematech/SISA, Electronic Design Automation Consortium (EDAC), University of Michigan National Advisory Council, Lewis and Clark College and SEMATECH.

Dr. Rhines holds a Bachelor of Science degree in metallurgical engineering from the University of Michigan, a Master of Science and Ph.D. in materials science and engineering from Stanford University, a master of business administration from Southern Methodist University and an Honorary Doctor of Technology degree from Nottingham Trent University.



Aug 21, 2014 | Comments


SoCcer: Defending your Digital Design

Ramesh Dewangan, Vice President of Application Engineering at Real Intent

Weird things can happen during a presentation to a customer!

I was visiting a customer site giving an update on the latest release of our Ascent and Meridian products. It was taking place during the middle of the day, in a large meeting room, with more than 30 people in the audience. Everything seemed to be going smoothly.

Suddenly there was an uproar, with clapping and cheers coming from an adjacent break room. Immediately, everyone in my audience opened their laptops, and grinned or groaned at the football score.

The 2014 FIFA World Cup soccer championship game was in full swing!

As Germany scored at will against Brazil, I lost count of the reactions by the end of the match! The final score was a crushing 7-1.

It disturbed my presentation alright, but it also gave me some food for thought.

If I look at SoC design as a SoCcer game, the bugs hiding in the design are like potential scores against us, the chip designers. We are defending our chip against bugs. Bugs can be related to various issues with design rules (bus contention), state machines (unreachable states, dead code), X-optimism (X propagating through X-sensitive constructs), clock domain crossing (re-convergence or glitches on asynchronous crossings), and so on.

Bugs can be found quickly, when the attack formation of our opponent is easy to see, or hard to find if the attack formation is very complex and well-disguised.

It is obvious that more goals will be scored against us if we are poorly prepared. The only way to avoid bugs (scores against us) is to build a good defense. What are some defenses we can deploy for successful chips?

We need to have design RTL that is free from design rule issues, free of deadlocks in its state machines, free from X-optimism and pessimism issues, and employs properly synchronized CDC for both data and resets and have proper timing constraints to go with it.

Can’t we simply rely on smart RTL design and verification engineers to prevent bugs? No, that’s only the first line of defense. We must have the proper tools and methodologies. Having good players is not enough; you need a good defense strategy that the players will follow.

If you do not use proper tools and methodologies, you increase the risk of chip failure and a certain goal against the design team. That is like inviting a penalty kick. Would you really want to leave your defense to a poor lone goalkeeper? Wouldn’t you rather build a methodology with multiple defensive resources in play?

So what tools and methodologies are needed to prevent bugs? Here are some of the key needs:

  • RTL analysis (Linting) – to create RTL free of structural and semantic bugs
  • Clock domain crossing (CDC) verification – to detect and fix chip-killing CDC bugs
  • Functional intent analysis (also called auto-formal) – to detect and correct functional bugs well before the lengthy simulation cycle
  • X-propagation analysis – to reduce functional bugs due to unknown X’s in the design and ensure correct power-on reset
  • Timing constraints verification – to reduce the implementation cycle time and prevent chip killer bugs due to bad exceptions

Proven EDA tools like Ascent Lint, Ascent IIV, Ascent XV, Meridian CDC and Meridian Constraints meet these needs effectively and keep bugs from crossing the mid-field of your design success.

Next time, you have no excuse for scores against you (i.e. bugs in the chip). You can defend and defend well using proper tools and methodologies.

Don’t let your chips be a defenseless victim like Brazil in that game against Germany! :-)



Aug 15, 2014 | Comments


Executive Insight: On the Convergence of Design and Verification

Dr. Pranav Ashar, CTO of Real Intent

This article was originally published on TechDesignForums and is reproduced here by permission.

Sometimes it’s useful to take an ongoing debate and flip it on its head. Recent discussion around the future of simulation has tended to concentrate on aspects best understood – and acted upon – by a verification engineer. Similarly, the debate surrounding hardware-software flow convergence has focused on differences between the two.

Pranav Ashar, CTO of Real Intent, has a good position from which to look across these silos. His company is seen as a verification specialist, particularly in areas such as lint, X-propagation and clock domain crossing. But talk to some of its users and you find they can be either design or verification engineers.

How Real Intent addresses some of today’s challenges – and how it got there – offer useful pointers on how to improve your own flow and meet emerging or increasingly complex tasks.

“We’ve seen this and said this before, but for today’s big systems, you don’t want to do a lot of separate design and verification,” Ashar says. “Each represents a major project in itself and until now each has required its own process. When things become as complex as they have, you have to interweave them.

“This isn’t just because it is inherently more efficient. The level of complexity is such that it becomes predictable that the boundary between the two will blur. That’s happening and it will continue to happen. It’s critical to understand that it is almost a natural evolution.”

The next issue is how to communicate this and the flow changes it requires on both sides of the D&V divide. In some cases, you don’t. Instead, you present information to different communities in the way they most easily understand given existing working practices.

In Real Intent’s latest update to Ascent XV (its X-verification and reset suite), the company worked from the assumption that different disciplines look at things in different ways. The verification engineer concentrates on X-related issues; the design engineer wants detail on resets, power management schemes and proliferating clocks. The company tailored the tool’s interfaces and outputs accordingly.

Real Intent is not alone in adopting this approach. But perhaps it is only a beginning.

Fuzzy verification boundaries

Ashar draws a useful comparison with the ongoing debate over hardware-software co-design, and the similar tailoring of tools to users that it has seen.

“The underlying technologies for hardware and software are in many respects very similar. For example, execution paths are important on both sides. Having said that, though, the computational paradigms are different as are the data management procedures. Aspects like that, right now, explain why debug tools have different flavors, why they are presented to the user in different ways,” he notes.

“But, in terms of this whole hardware/software debate, we still seem to talk more about two separate worlds. Where there seems to be less discussion is, again, in terms of these fuzzy boundaries. So, we don’t talk much about how the hardware is increasingly looking like the software. Yet, the abstraction layers above RTL do look more and more like software algorithms, and they are becoming a lot more important in terms of how a system is assembled.”

Coming back to the world of verification, Ashar suggests an approach that, while it may not define two different disciplines, could more closely align them.

“Simulation,” he says, “is a last resort. It largely comes about because of things that we do not understand. It is a back stop.”



Aug 8, 2014 | Comments


Fundamentals of Clock Domain Crossing Verification: Part Four

Graham Bell, Vice President of Marketing at Real Intent

Last time we discussed practical considerations for designing CDC interfaces.  In this posting, we look at the costs associated with debugging and sign-off verification.

Design setup cost

Design setup starts with importing the design. With the increasing complexity of SOCs, designs include RTL and netlist blocks in a Verilog and VHDL mixed-language environment. In addition, functional setup is required for good quality of verification. A typical SOC has multiple modes of operation characterized by clocking schemes, reset sequences and mode controls. Functional setup requires the design to be set up in functionally valid modes for verification, by proper identification of clocks, resets and mode select pins. Bad setup can lead to poor quality of verification results.

Given the management complexity for the multitude of design tasks, it is highly desirable that there be a large overlap between setup requirements for different flows. For example, design compilation can be accomplished by processing the existing simulation scripts. Also, there is a large overlap between the functional setup requirements for CDC and that for static timing analysis. Hence, STA setup, based upon Synopsys Design Constraints (SDCs), can be leveraged for cost-effective functional setup.

Design constraints are usually either requirements or properties in your design. You use constraints to ensure that your design meets its performance goals and pin assignment requirements. Traditionally these are timing constraints but can include power, synthesis, and clocking. Timing constraints represent the performance goals for your designs. Designer software uses timing constraints to guide the timing-driven optimization tools (synthesis) in order to meet these goals. You can set timing constraints either globally or to a specific set of paths in your design. You can apply timing constraints to (a brief SDC sketch follows the list):

  • Specify the required minimum speed of a clock domain.
  • Set the input and output port timing information.
  • Define the maximum delay for a specific path.
  • Identify paths that are considered false and excluded from the analysis.
  • Identify paths that require more than one clock cycle to propagate the data.
  • Provide the external load at a specific port.
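
For reference, here is a minimal SDC sketch, with hypothetical port and pin names and illustrative values, showing how each constraint type above is typically expressed:

    # Minimum speed of a clock domain: 100 MHz clock on port clk_a
    create_clock -name clk_a -period 10.0 [get_ports clk_a]
    # Input and output port timing relative to clk_a
    set_input_delay  2.0 -clock clk_a [get_ports din]
    set_output_delay 3.0 -clock clk_a [get_ports dout]
    # Maximum delay for a specific path
    set_max_delay 5.0 -from [get_pins u_tx/data_reg/Q] -to [get_pins u_rx/capt_reg/D]
    # Paths considered false and excluded from the analysis
    set_false_path -from [get_clocks clk_a] -to [get_clocks clk_b]
    # Paths that need more than one clock cycle to propagate data
    set_multicycle_path 2 -setup -from [get_pins u_tx/data_reg/Q]
    # External load at a specific port
    set_load 0.05 [get_ports dout]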

Correct functional setup of large designs may require setup of a very large number of signals. This cumbersome and time-consuming drudgery can be avoided with automatic setup generation. Also, setup has the first-order effect on the quality of verification. Hence, early feedback on setup quality can lead to easy and effective setup refinement for high quality of verification.

Figure 14. Design setup flow.

 

Debugging and sign-off cost

The debugging cost is dependent upon the number of errors flagged by the CDC tool. Assuming good setup, this in turn depends upon the size, CDC complexity and maturity of the design. Typically, the debugging cost for top-level runs on immature designs will be high, because the design may contain a large number of immature CDC interfaces. These can generate a large number of failures requiring significant debugging effort. Also, the ownership of these CDC interfaces may be distributed between multiple designers.

Debugging cost is heavily dependent upon the reporting style of the tools. Source-code-oriented reporting relates the errors to the real source, i.e., HDL functionality, and produces much more compact reports. CDC verification employs multiple technologies of increasing sophistication, such as structural analysis and formal analysis. As a result, a composite report is essential to determine the overall quality of CDC verification. Most waveform viewers can read an industry-standard waveform database known as Value Change Dump (VCD).
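
For instance, a simulation run can produce that database with the standard Verilog system tasks below (testbench name hypothetical):

    initial begin
      $dumpfile("cdc_debug.vcd");  // VCD database for the waveform viewer
      $dumpvars(0, tb_top);        // dump all signals under tb_top
    end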

Good clock-domain, functional, structural and VCD visualization is essential for effective debugging. Automated and advanced pre-processing of these views, to isolate the error context, further reduces the debugging cost. Finally, debugging support requires advanced sign-off capabilities so that the same issues are not analyzed multiple times in the iterative verification flow.

Verification run-time cost

CDC checking is based upon multiple technologies with varying degrees of precision. In the first step, structural techniques are used to identify clock-domain crossings and to identify possible error sources in the design. Structural analysis tends to be relatively fast and is very useful at detecting gross errors in the design. To guarantee design correctness, however, structural analysis identifies all potential errors in the design. This set can be very large.

As an example, consider the design in Figure 12. This reduced-latency design can operate correctly or can be erroneous depending upon the relative frequency of the clock domains. Also, this structure can be included in a more complex interface that handles stalls and other issues, making precise structural identification difficult. A structural technique that does not compromise the quality of checking therefore has to flag this interface for manual review and sign-off.

Formal analysis is an excellent technology to filter out false failures from structural analysis and to precisely identify failures in the design. As mentioned earlier, traditional formal analysis is built to analyze steady-state design behavior, and these formal techniques are incapable of formally analyzing uncertain behavior caused by metastability and glitches. As a result, special formal-analysis techniques that are capable of handling behavioral uncertainty are needed for CDC applications. For example, consider the failure shown in Figure 13. Here the MCP on the data path is violated because of a hazard. Vanilla formal analysis will pass the data stability check (MCP) for this structure. Data stability for CDC interfaces can only be proven with glitch-sensitive formal-analysis techniques.
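
For illustration, the cycle-level half of such a data-stability (MCP) check can be written as a SystemVerilog assertion like the sketch below (signal names hypothetical, bound into the receiving domain). As discussed above, this captures only the steady-state requirement; glitch effects still need glitch-sensitive formal techniques:

    // When the receiving domain loads CDC data, the data bus must not
    // have changed since the previous receiving-clock edge.
    property p_mcp_data_stable;
      @(posedge rx_clk) disable iff (!rst_n)
        load_data |-> $stable(tx_data);
    endproperty
    a_mcp_data_stable: assert property (p_mcp_data_stable);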

Formal analysis needs to be seamlessly integrated into the application all the way from invocation to reporting and debugging. This eliminates the huge overhead of integrating external formal-analysis tools into the flow and to correlate the results from these different tools to arrive at an integrated view of the verification status.

As the computational complexity of formal analysis is very high, this can require a large amount of computation time. This cost is well worth it, however, as it provides significant savings in debugging and sign-off cost.

 

Figure 15. Verification and debug flow.

Next time we will look at a practical and efficient CDC verification methodology.



Jul 31, 2014 | Comments


Fundamentals of Clock Domain Crossing Verification: Part Three

Graham Bell, Vice President of Marketing at Real Intent

Last time we looked at design principles and the design of CDC interfaces.  In this posting, we will look at practical considerations for designing CDC interfaces.

Verifying CDC interfaces

A typical SOC is made up of a large number of CDC interfaces. From the discussion in the preceding posts, CDC verification can be accomplished by executing the following steps in order:

  • Identification of CDC signals.
  • Classification of CDC signals as control and data.
  • Hazard/glitch robustness of control signals.
  • Verification of single signal transition (gray coding) of control signals.
  • Verification of control stability (pulse-width requirement).
  • Verification of MCP operation (stability) of data signals.

All verification processes are iterative and achieve design quality by iteratively identifying design errors, debugging and fixing errors and re-running verification until no more errors are detected.

Practical considerations for CDC verification

Effective deployment of CDC tools in the design flow requires due consideration of multiple factors. We have discovered that first-generation CDC tools were not being used effectively in design flows. Based upon feedback from users, we have identified the following factors as the most important considerations for CDC deployment:

  • Coverage of error sources.
  • Design setup cost.
  • Debugging and sign-off cost.
  • Verification run-time cost.
  • Template recognition vs. report quality trade-off.
  • Top-level vs. block-level verification trade-off.
  • RTL vs. netlist verification trade-off.

There is consistent feedback from the users that the minimization of engineering cost for high-quality verification is critical for effective deployment of the CDC tools.

Coverage of error sources

CDC errors can creep into a design from multiple sources. The first is inadvertent clock-domain crossing where there is an assumption mismatch or oversight at block interfaces. The second is faulty block-level design. The designers, because of oversight or because of the pressure to design correct and high-performance interfaces, can make a design error. As an example, consider the protocol in Figure 12. Here, tapping Feedback Signal from an earlier flop stage can reduce the latency across the interface. But correct operation of this interface requires that the transmitting clock frequency be lower than the receiving clock frequency. Otherwise, it is possible to signal New Data before Load Data is completed.

Figure 12. Reduced latency protocol.

These two error sources are properly covered by RTL analysis. They can also be covered by netlist analysis. But not all CDC error sources are covered by RTL analysis. This is because CDC errors are dependent upon glitches and hazards. It is a well-known phenomenon that synthesis transformations can introduce hazards in the design. Hazards in CDC logic lead to CDC failures. Figure 13 shows an example of a design failure caused by synthesis. Here, the multiplexor implementation created a logic hazard that violated the multi-cycle path requirement on the data bus. We are aware of multiple design failures because of this phenomenon.

 

Figure 13. Logic hazard caused CDC failure.
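
To make this failure mode concrete, here is a small hypothetical Verilog fragment. The RTL multiplexor is hazard-free in zero-delay simulation, but one legal gate-level implementation of it can glitch:

    // RTL view: no hazard is visible in simulation.
    assign y = sel ? a : b;

    // One possible synthesized structure. When a == b == 1 and sel
    // toggles, n1 and n2 do not switch at the same instant, so y_gate
    // can glitch low for a moment. On a multi-cycle CDC data path,
    // that glitch violates the data-stability assumption.
    assign n1     = sel  & a;
    assign n2     = ~sel & b;
    assign y_gate = n1 | n2;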

With the increasing complexity of SOCs and the increasing number of CDC interfaces on the chip, the contribution of this risk factor is increasing. As a result, CDC verification must be run on both RTL and netlist views of the design.

Design setup cost

Design setup starts with importing the design. With the increasing complexity of SOCs, designs include RTL and netlist blocks in a Verilog and VHDL mixed-language environment. In addition, functional setup is required for good quality of verification. A typical SOC has multiple modes of operation characterized by clocking schemes, reset sequences and mode controls. Functional setup requires the design to be set up in functionally valid modes for verification, by proper identification of clocks, resets and mode select pins. Bad setup can lead to poor quality of verification results.

Given the management complexity for the multitude of design tasks, it is highly desirable that there be a large overlap between setup requirements for different flows. For example, design compilation can be accomplished by processing the existing simulation scripts. Also, there is a large overlap between the functional setup requirements for CDC and that for static timing analysis. Hence, STA setup, based upon Synopsys Design Constraints (SDCs), can be leveraged for cost-effective functional setup.

Design constraints are usually either requirements or properties in your design. You use constraints to ensure that your design meets its performance goals and pin assignment requirements. Traditionally these are timing constraints but can include power, synthesis, and clocking.

Timing constraints represent the performance goals for your designs. Designer software uses timing constraints to guide the timing-driven optimization tools (synthesis) in order to meet these goals. You can set timing constraints either globally or to a specific set of paths in your design. You can apply timing constraints to:

  • Specify the required minimum speed of a clock domain.
  • Set the input and output port timing information.
  • Define the maximum delay for a specific path.
  • Identify paths that are considered false and excluded from the analysis.
  • Identify paths that require more than one clock cycle to propagate the data.
  • Provide the external load at a specific port.

Correct functional setup of large designs may require setup of a very large number of signals. This cumbersome and time-consuming drudgery can be avoided with automatic setup generation. Also, setup has the first-order effect on the quality of verification. Hence, early feedback on setup quality can lead to easy and effective setup refinement for high quality of verification.

 

Figure 14. Design setup flow.

In the next posting we will discuss the costs associated with debugging and sign-off verification.



Jul 24, 2014 | Comments


Fundamentals of Clock Domain Crossing Verification: Part Two

Graham Bell, Vice President of Marketing at Real Intent

Last time we looked at how metastability is unavoidable and the nature of the clock domain crossing (CDC) problem.   This time we will look at design principles.

CDC design principles

Because metastability is unavoidable in CDC designs, robust CDC interfaces must follow some strict design principles.

Metastability can be contained with “synchronizers” that prevent metastability effects from propagating into the design. Figure 9 shows the configuration of a double-flop synchronizer which minimizes the load on the metastable flop. The single fan-out protects against loss of correlation because the metastable signal does not fan out to multiple flops. The probability that metastability will last longer than time t is governed by the following equation:

P(t) = e^{-t/\tau}

where tau is the resolution time constant dependent upon the latch characteristics and ambient noise. This configuration resolves metastability with a very high probability, leading to a very large mean time between failures as governed by the equation:

\mathrm{MTBF} = \frac{1}{P \cdot T_W \cdot f_{clk} \cdot f_{data}}

where P is the probability that metastability is not resolved within one clock cycle (by the first equation, P = e^{-T/\tau} for a receiving-clock period T), T_W is the metastability capture window of the flop, and f_{clk} and f_{data} are the receiving-clock and data-transition frequencies. Triple or higher flop configurations may be used for very fast designs.
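
As a quick worked example with assumed, purely illustrative values (\tau = 100 ps, one clock period of resolution time T = 5 ns, T_W = 100 ps, f_{clk} = 100 MHz, f_{data} = 10 MHz):

    P = e^{-5\,\mathrm{ns}/100\,\mathrm{ps}} = e^{-50} \approx 1.9 \times 10^{-22}

    \mathrm{MTBF} \approx \frac{1}{(1.9 \times 10^{-22})(100\,\mathrm{ps})(100\,\mathrm{MHz})(10\,\mathrm{MHz})} \approx 5 \times 10^{16}\ \mathrm{s}

That is roughly 1.7 billion years, which is why the double-flop synchronizer is trusted at ordinary clock rates.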

Figure 9. Double flop synchronizer contains metastability.
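
A minimal Verilog sketch of the Figure 9 structure (module and signal names hypothetical):

    module sync_2ff (
      input      rx_clk,    // receiving-domain clock
      input      rst_n,
      input      async_in,  // signal arriving from another clock domain
      output reg sync_out
    );
      reg meta;  // first stage: may go metastable

      always @(posedge rx_clk or negedge rst_n) begin
        if (!rst_n) begin
          meta     <= 1'b0;
          sync_out <= 1'b0;
        end else begin
          meta     <= async_in;  // capture: metastability possible here
          sync_out <= meta;      // single fan-out; one full cycle to resolve
        end
      end
    endmodule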

 

Designing CDC interfaces

A CDC interface is designed for reliable transfer of correlated data across the data bus. The reliable design of a CDC interface must follow a simple set of rules:

  • The CDC data bus must be designed for 2-cycle multi-cycle-path (MCP) operation. This means that data is captured in the CDC flops on the second clock edge or later, following the launch of data. This also gives one clock cycle of the receiving clock as the timing constraint on the path. Static timing analysis should ensure that the timing constraints are met on these paths. This rule eliminates metastability for these paths. As data-bus signals are correlated, their CDC flops cannot be allowed to become metastable.
  • The control signals implementing the MCP protocol can become metastable and hence must obey these rules:
    • The controls must be properly synchronized to prevent propagation of metastability in the design.
    • The MCP is enabled by one and only one control-signal transition to eliminate loss of correlation errors (gray coding).
    • The control signals should be free of hazards/glitches.
    • The control signals must be stable for more than one clock cycle of the receiving clock.

 

These principles can be implemented using handshake protocols or FIFO-based protocols. Figure 10 shows a simple handshake CDC protocol. This interface is transmitting data from the CLK1 domain to the CLK2 domain. While Data Ready is asserted, the data on the bus Data In is transmitted across the clock domain. The data availability is signaled by a transition on Control Signal. Transmit Data is launched on the same clock edge. Control Signal is synchronized in the CLK2 domain and the transition is detected to signal Load Data. Since synchronization requires at least one cycle of CLK2, Transmit Data is received at the second edge of CLK2 or later. This creates a multi-cycle path for Transmit Data across the interface. Feedback Signal completes the handshake.

 

Figure 10. Simple handshake CDC protocol.

 

A transition on Feedback Signal is detected to drive Next Data to the interface. Figure 11 shows the timing diagram for the protocol. It should be noted that this is a simplified concept of the interface. We have not incorporated the logic for initializing the interface, detecting transitions in Data Ready, or dealing with stall conditions. All these considerations, combined with latency minimization, add complexity to the design of the interface.

 

Figure 11. CDC protocol timing diagram.

Next time we will start the discussion on verifying CDC interfaces.



Jul 17, 2014 | Comments