Blog Archive
April 2015
4/17/2015: Analysis of Clock Intent Requires Smarter SoC Verification
4/09/2015: High-Level Synthesis: New Driver for RTL Verification
4/03/2015: Underdog Innovation: David and Goliath in Electronics
March 2015
3/27/2015: Taking Control of Constraints Verification
3/20/2015: Billion Dollar Unicorns
3/13/2015: My Impressions of DVCon USA 2015: Lies; Experts; Art or Science?
3/06/2015: Smarter Verification: Shift Mindset to Shift Left [Video]
February 2015
2/27/2015: New Ascent Lint, Cricket Video Interview and DVCon Roses
2/20/2015: Happy Lunar New Year: Year of the Ram (or is it Goat or Sheep?)
2/12/2015: Video: Clock-Domain Crossing Verification: Introduction; SoC challenges; and Keys to Success
2/06/2015: A Personal History of Transaction Interfaces to Hardware Emulation: Part 2
January 2015
1/30/2015: A Personal History of Transaction Interfaces to Hardware Emulation: Part 1
1/22/2015: Intel’s new SoC-based Broadwell CPUs: Less Filling, Taste Great!
1/19/2015: Reporting Happiness: Not as Easy as You Think
1/09/2015: 38th VLSI Design Conf. Keynote: Nilekani on IoT and Smartphones
December 2014
12/22/2014: December 2014 Holiday Party
12/17/2014: Happy Holidays from Real Intent!
12/12/2014: Best of “Real Talk”, Q4 Summary and Latest Videos
12/04/2014: P2415 – New IEEE Power Standard for Unified Hardware Abstraction
November 2014
11/27/2014: The Evolution of RTL Lint
11/20/2014: Parallelism in EDA Software – Blessing or a Curse?
11/13/2014: How Big is WWD – the Wide World of Design?
11/06/2014: CMOS Pioneer Remembered: John Haslet Hall
October 2014
10/31/2014: Is Platform-on-Chip The Next Frontier For IC Integration?
10/23/2014: DVClub Shanghai: Making Verification Debug More Efficient
10/16/2014: ARM TechCon Video: Beer, New Meridian CDC, and Arnold Schwarzenegger ?!
10/10/2014: New CDC Verification: Less Filling, Picture Perfect, and Tastes Great!
10/03/2014: ARM Fueling the SoC Revolution and Changing Verification Sign-off
September 2014
9/25/2014: Does Your Synthesis Code Play Well With Others?
9/19/2014: It’s Time to Embrace Objective-driven Verification
9/12/2014: Autoformal: The Automatic Vacuum for Your RTL Code
9/04/2014: How Bad is Your HDL Code? Be the First to Find out!
August 2014
8/29/2014: Fundamentals of Clock Domain Crossing: Conclusion
8/21/2014: Video Keynote: New Methodologies Drive EDA Revenue Growth
8/15/2014: SoCcer: Defending your Digital Design
8/08/2014: Executive Insight: On the Convergence of Design and Verification
July 2014
7/31/2014: Fundamentals of Clock Domain Crossing Verification: Part Four
7/24/2014: Fundamentals of Clock Domain Crossing Verification: Part Three
7/17/2014: Fundamentals of Clock Domain Crossing Verification: Part Two
7/10/2014: Fundamentals of Clock Domain Crossing Verification: Part One
7/03/2014: Static Verification Leads to New Age of SoC Design
June 2014
6/26/2014: Reset Optimization Pays Big Dividends Before Simulation
6/20/2014: SoC CDC Verification Needs a Smarter Hierarchical Approach
6/12/2014: Photo Booth Blackmail at DAC in San Francisco!
6/06/2014: Quick Reprise of DAC 2014
May 2014
5/01/2014: Determining Test Quality through Dynamic Runtime Monitoring of SystemVerilog Assertions
April 2014
4/24/2014: Complexity Drives Smart Reporting in RTL Verification
4/17/2014: Video Update: New Ascent XV Release for X-optimization, ChipEx show in Israel, DAC Preview
4/11/2014: Design Verification is Shifting Left: Earlier, Focused and Faster
4/03/2014: Redefining Chip Complexity in the SoC Era
March 2014
3/27/2014: X-Verification: A Critical Analysis for a Low-Power World (Video)
3/14/2014: Engineers Have Spoken: Design And Verification Survey Results
3/06/2014: New Ascent IIV Release Delivers Enhanced Automatic Verification of FSMs
February 2014
2/28/2014: DVCon Panel Drill Down: “Where Does Design End and Verification Begin?” – Part 3
2/20/2014: DVCon Panel Drill Down: “Where Does Design End and Verification Begin?” – Part 2
2/13/2014: DVCon Panel Drill Down: “Where Does Design End and Verification Begin?” – Part 1
2/07/2014: Video Tech Talk: Changes In Verification
January 2014
1/31/2014: Progressive Static Verification Leads to Earlier and Faster Timing Sign-off
1/30/2014: Verific’s Front-end Technology Leads to Success and a Giraffe!
1/23/2014: CDC Verification of Fast-to-Slow Clocks – Part Three: Metastability Aware Simulation
1/16/2014: CDC Verification of Fast-to-Slow Clocks – Part Two: Formal Checks
1/10/2014: CDC Verification of Fast-to-Slow Clocks – Part One: Structural Checks
1/02/2014: 2013 Highlights And Giga-scale Predictions For 2014
December 2013
12/13/2013: Q4 News, Year End Summary and New Videos
12/12/2013: Semi Design Technology & System Drivers Roadmap: Part 6 – DFM
12/06/2013: The Future is More than “More than Moore”
November 2013
11/27/2013: Robert Eichner’s presentation at the Verification Futures Conference
11/21/2013: The Race For Better Verification
11/18/2013: Experts at the Table: The Future of Verification – Part 2
11/14/2013: Experts At The Table: The Future Of Verification Part 1
11/08/2013: Video: Orange Roses, New Product Releases and Banner Business at ARM TechCon
October 2013
10/31/2013: Minimizing X-issues in Both Design and Verification
10/23/2013: Value of a Design Tool Needs More Sense Than Dollars
10/17/2013: Graham Bell at EDA Back to the Future
10/15/2013: The Secret Sauce for CDC Verification
10/01/2013: Clean SoC Initialization now Optimal and Verified with Ascent XV
September 2013
9/24/2013: EETimes: An Engineer’s Progress With Prakash Narain, Part 4
9/20/2013: EETimes: An Engineer’s Progress With Prakash Narain, Part 3
9/20/2013: CEO Viewpoint: Prakash Narain on Moving from RTL to SoC Sign-off
9/17/2013: Video: Ascent Lint – The Best Just Got Better
9/16/2013: EETimes: An Engineer’s Progress With Prakash Narain, Part 2
9/16/2013: EETimes: An Engineer’s Progress With Prakash Narain
9/10/2013: SoC Sign-off Needs Analysis and Optimization of Design Initialization in the Presence of Xs
August 2013
8/15/2013: Semiconductor Design Technology and System Drivers Roadmap: Process and Status – Part 4
8/08/2013: Semiconductor Design Technology and System Drivers Roadmap: Process and Status – Part 3
July 2013
7/25/2013: Semiconductor Design Technology and System Drivers Roadmap: Process and Status – Part 2
7/18/2013: Semiconductor Design Technology and System Drivers Roadmap: Process and Status – Part 1
7/16/2013: Executive Video Briefing: Prakash Narain on RTL and SoC Sign-off
7/05/2013: Lending a ‘Formal’ Hand to CDC Verification: A Case Study of Non-Intuitive Failure Signatures — Part 3
June 2013
6/27/2013: Bryon Moyer: Simpler CDC Exception Handling
6/21/2013: Lending a ‘Formal’ Hand to CDC Verification: A Case Study of Non-Intuitive Failure Signatures — Part 2
6/17/2013: Peggy Aycinena’s interview with Prakash Narain
6/14/2013: Lending a ‘Formal’ Hand to CDC Verification: A Case Study of Non-Intuitive Failure Signatures — Part 1
6/10/2013: Photo Booth Blackmail!
6/03/2013: Real Intent is on John Cooley’s “DAC’13 Cheesy List”
May 2013
5/30/2013: Does SoC Sign-off Mean More Than RTL?
5/24/2013: Ascent Lint Rule of the Month: DEFPARAM
5/23/2013: Video: Gary Smith Tells Us Who and What to See at DAC 2013
5/22/2013: Real Intent is on Gary Smith’s “What to see at DAC” List!
5/16/2013: Your Real Intent Invitation to Fun and Fast Verification at DAC
5/09/2013: DeepChip: “Real Intent’s not-so-secret DVcon’13 Report”
5/07/2013: TechDesignForum: Better analysis helps improve design quality
5/03/2013: Unknown Sign-off and Reset Analysis
April 2013
4/25/2013: Hear Alexander Graham Bell Speak from the 1880′s
4/19/2013: Ascent Lint rule of the month: NULL_RANGE
4/16/2013: May 2 Webinar: Automatic RTL Verification with Ascent IIV: Find Bugs Simulation Can Miss
4/05/2013: Conclusion: Clock and Reset Ubiquity – A CDC Perspective
March 2013
3/22/2013: Part Six: Clock and Reset Ubiquity – A CDC Perspective
3/21/2013: The BIG Change in SoC Verification You Don’t Know About
3/15/2013: Ascent Lint Rule of the Month: COMBO_NBA
3/15/2013: System-Level Design Experts At The Table: Verification Strategies – Part One
3/08/2013: Part Five: Clock and Reset Ubiquity – A CDC Perspective
3/01/2013: Quick DVCon Recap: Exhibit, Panel, Tutorial and Wally’s Keynote
3/01/2013: System-Level Design: Is This The Era Of Automatic Formal Checks For Verification?
February 2013
2/26/2013: Press Release: Real Intent Technologist Presents Power-related Paper and Tutorial at ISQED 2013 Symposium
2/25/2013: At DVCon: Pre-Simulation Verification for RTL Sign-Off includes Automating Power Optimization and DFT
2/25/2013: Press Release: Real Intent to Exhibit, Participate in Panel and Present Tutorial at DVCon 2013
2/22/2013: Part Four: Clock and Reset Ubiquity – A CDC Perspective
2/18/2013: Does Extreme Performance Mean Hard-to-Use?
2/15/2013: Part Three: Clock and Reset Ubiquity – A CDC Perspective
2/07/2013: Ascent Lint Rule of the Month: ARITH_CONTEXT
2/01/2013: “Where Does Design End and Verification Begin?” and DVCon Tutorial on Static Verification
January 2013
1/25/2013: Part Two: Clock and Reset Ubiquity – A CDC Perspective
1/18/2013: Part One: Clock and Reset Ubiquity – A CDC Perspective
1/07/2013: Ascent Lint Rule of the Month: MIN_ID_LEN
1/04/2013: Predictions for 2014, Hier. vs Flat, Clocks and Bugs
December 2012
12/14/2012: Real Intent Reports on DVClub Event at Microprocessor Test and Verification Workshop 2012
12/11/2012: Press Release: Real Intent Records Banner Year
12/07/2012: Press Release: Real Intent Rolls Out New Version of Ascent Lint for Early Functional Verification
12/04/2012: Ascent Lint Rule of the Month: OPEN_INPUT
November 2012
11/19/2012: Real Intent Has Excellent EDSFair 2012 Exhibition
11/16/2012: Peggy Aycinena: New Look, New Location, New Year
11/14/2012: Press Release: New Look and New Headquarters for Real Intent
11/05/2012: Ascent Lint HDL Rule of the Month: ZERO_REP
11/02/2012: Have you had CDC bugs slip through resulting in late ECOs or chip respins?
11/01/2012: DAC survey on CDC bugs, X propagation, constraints
October 2012
10/29/2012: Press Release: Real Intent to Exhibit at ARM TechCon 2012 – Chip Design Day
September 2012
9/24/2012: Photos of the space shuttle Endeavour from the Real Intent office
9/20/2012: Press Release: Real Intent Showcases Verification Solutions at Verify 2012 Japan
9/14/2012: A Bolt of Inspiration
9/11/2012: ARM blog: An Advanced Timing Sign-off Methodology for the SoC Design Ecosystem
9/05/2012: When to Retool the Front-End Design Flow?
August 2012
8/27/2012: X-Verification: What Happens When Unknowns Propagate Through Your Design
8/24/2012: Article: Verification challenges require surgical precision
8/21/2012: How To Article: Verifying complex clock and reset regimes in modern chips
8/20/2012: Press Release: Real Intent Supports Growth Worldwide by Partnering With EuropeLaunch
8/06/2012: SemiWiki: The Unknown in Your Design Can be Dangerous
8/03/2012: Video: “Issues and Struggles in SOC Design Verification”, Dr. Roger Hughes
July 2012
7/30/2012: Video: What is Driving Lint Usage in Complex SOCs?
7/25/2012: Press Release: Real Intent Adds to Japan Presence: Expands Office, Increases Staff to Meet Demand for Design Verification and Sign-Off Products
7/23/2012: How is Verification Complexity Changing, and What is the Impact on Sign-off?
7/20/2012: Real Intent in Brazil
7/16/2012: Foosball, Frosty Beverages and Accelerating Verification Sign-off
7/03/2012: A Good Design Tool Needs a Great Beginning
June 2012
6/14/2012: Real Intent at DAC 2012
6/01/2012: DeepChip: Cheesy List for DAC 2012
May 2012
5/31/2012: EDACafe: Your Real Intent Invitation to Fast Verification and Fun at DAC
5/30/2012: Real Intent Video: New Ascent Lint and Meridian CDC Releases and Fun at DAC 2012
5/29/2012: Press Release: Real Intent Leads in Speed, Capacity and Precision with New Releases of Ascent Lint and Meridian CDC Verification Tools
5/22/2012: Press Release: Over 35% Revenue Growth in First Half of 2012
5/21/2012: Thoughts on RTL Lint, and a Poem
5/21/2012: Real Intent is #8 on Gary Smith’s “What to see at DAC” List!
5/18/2012: EETimes: Gearing Up for DAC – Verification demos
5/08/2012: Gabe on EDA: Real Intent Helps Designers Verify Intent
5/07/2012: EDACafe: A Page is Turned
5/07/2012: Press Release: Graham Bell Joins Real Intent to Promote Early Functional Verification & Advanced Sign-Off Circuit Design Software
March 2012
3/21/2012: Press Release: Real Intent Demos EDA Solutions for Early Functional Verification & Advanced Sign-off at Synopsys Users Group (SNUG)
3/20/2012: Article: Blindsided by a glitch
3/16/2012: Gabe on EDA: Real Intent and the X Factor
3/10/2012: DVCon Video Interview: “Product Update and New High-capacity ‘X’ Verification Solution”
3/01/2012: Article: X-Propagation Woes: Masking Bugs at RTL and Unnecessary Debug at the Netlist
February 2012
2/28/2012: Press Release: Real Intent Joins Cadence Connections Program; Real Intent’s Advanced Sign-Off Verification Capabilities Added to Leading EDA Flow
2/15/2012: Real Intent Improves Lint Coverage and Usability
2/15/2012: Avoiding the Titanic-Sized Iceberg of Downton Abbey
2/08/2012: Gabe on EDA: Real Intent Meridian CDC
2/08/2012: Press Release: At DVCon, Real Intent Verification Experts Present on Resolving X-Propagation Bugs; Demos Focus on CDC and RTL Debugging Innovations
January 2012
1/24/2012: A Meaningful Present for the New Year
1/11/2012: Press Release: Real Intent Solidifies Leadership in Clock Domain Crossing
August 2011
8/02/2011: A Quick History of Clock Domain Crossing (CDC) Verification
July 2011
7/26/2011: Hardware-Assisted Verification and the Animal Kingdom
7/13/2011: Advanced Sign-off…It’s Trending!
May 2011
5/24/2011: Learn about Advanced Sign-off Verification at DAC 2011
5/16/2011: Getting A Jump On DAC
5/09/2011: Livin’ on a Prayer
5/02/2011: The Journey to CDC Sign-Off
April 2011
4/25/2011: Getting You Closer to Verification Closure
4/11/2011: X-verification: Conquering the “Unknown”
4/05/2011: Learn About the Latest Advances in Verification Sign-off!
March 2011
3/21/2011: Business Not as Usual
3/15/2011: The Evolution of Sign-off
3/07/2011: Real People, Real Discussion – Real Intent at DVCon
February 2011
2/28/2011: The Ascent of Ascent Lint (v1.4 is here!)
2/21/2011: Foundation for Success
2/08/2011: Fairs to Remember
January 2011
1/31/2011: EDA Innovation
1/24/2011: Top 3 Reasons Why Designers Switch to Meridian CDC from Real Intent
1/17/2011: Hot Topics, Hot Food, and Hot Prize
1/10/2011: Satisfaction EDA Style!
1/03/2011: The King is Dead. Long Live the King!
December 2010
12/20/2010: Hardware Emulation for Lowering Production Testing Costs
12/03/2010: What do you need to know for effective CDC Analysis?
November 2010
11/12/2010: The SoC Verification Gap
11/05/2010: Building Relationships Between EDA and Semiconductor Ventures
October 2010
10/29/2010: Thoughts on Assertion Based Verification (ABV)
10/25/2010: Who is the master who is the slave?
10/08/2010: Economics of Verification
10/01/2010: Hardware-Assisted Verification Tackles Verification Bottleneck
September 2010
9/24/2010: Excitement in Electronics
9/17/2010: Achieving Six Sigma Quality for IC Design
9/03/2010: A Look at Transaction-Based Modeling
August 2010
8/20/2010: The 10 Year Retooling Cycle
July 2010
7/30/2010: Hardware-Assisted Verification Usage Survey of DAC Attendees
7/23/2010: Leadership with Authenticity
7/16/2010: Clock Domain Verification Challenges: How Real Intent is Solving Them
7/09/2010: Building Strong Foundations
7/02/2010: Celebrating Freedom from Verification
June 2010
6/25/2010: My DAC Journey: Past, Present and Future
6/18/2010: Verifying Today’s Large Chips
6/11/2010: You Got Questions, We Got Answers
6/04/2010: Will 70 Remain the Verification Number?
May 2010
5/28/2010: A Model for Justifying More EDA Tools
5/21/2010: Mind the Verification Gap
5/14/2010: ChipEx 2010: a Hot Show under the Hot Sun
5/07/2010: We Sell Canaries
April 2010
4/30/2010: Celebrating 10 Years of Emulation Leadership
4/23/2010: Imagining Verification Success
4/16/2010: Do you have the next generation verification flow?
4/09/2010: A Bug’s Eye View under the Rug of SNUG
4/02/2010: Globetrotting 2010
March 2010
3/26/2010: Is Your CDC Tool of Sign-Off Quality?
3/19/2010: DATE 2010 – There Was a Chill in the Air
3/12/2010: Drowning in a Sea of Information
3/05/2010: DVCon 2010: Awesomely on Target for Verification
February 2010
2/26/2010: Verifying CDC Issues in the Presence of Clocks with Dynamically Changing Frequencies
2/19/2010: Fostering Innovation
2/12/2010: CDC (Clock Domain Crossing) Analysis – Is this a misnomer?
2/05/2010: EDSFair – A Successful Show to Start 2010
January 2010
1/29/2010: Ascent Is Much More Than a Bug Hunter
1/22/2010: Ascent Lint Steps up to Next Generation Challenges
1/15/2010: Google and Real Intent, 1st Degree LinkedIn
1/08/2010: Verification Challenges Require Surgical Precision
1/07/2010: Introducing Real Talk!

Analysis of Clock Intent Requires Smarter SoC Verification

Thanks to the widespread reuse of intellectual property (IP) blocks and the difficulty of distributing a system-wide clock across an entire device, today’s system-on-chip (SoC) designs use a large number of clock domains that run asynchronously to each other. A design involving hundreds of millions of transistors can easily incorporate 50 or more clock domains and hundreds of thousands of signals that cross between them.

Although the use of smaller individual clock domains helps improve verification of subsystems apart from the context of the full SoC, the checks required to ensure that the full SoC meets its timing constraints have become increasingly time consuming.

Signals involved in clock domain crossing (CDC), for example where a flip-flop driven by one clock signal feeds data to a flop driven by a different clock signal, raise the potential for metastability and data loss. Tools based on static verification technology exist to perform CDC checks and recommend the inclusion of more robust synchronizers or other changes to remove the risk of metastability and data loss.
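For a single-bit crossing, the structure such tools typically look for is the familiar two-flop synchronizer. A minimal SystemVerilog sketch (our own illustration; the signal names are hypothetical, not from the article):

    module sync2 (
      input  logic clk_dst,  // destination-domain clock
      input  logic rst_n,    // destination-domain async reset
      input  logic d_in,     // single-bit signal from another clock domain
      output logic q_out     // synchronized version, safe to use in clk_dst
    );
      logic meta;            // first stage: may go briefly metastable

      always_ff @(posedge clk_dst or negedge rst_n)
        if (!rst_n) begin
          meta  <= 1'b0;
          q_out <= 1'b0;
        end else begin
          meta  <= d_in;     // may sample near a d_in transition
          q_out <= meta;     // has a full cycle to settle first
        end
    endmodule

Multi-bit buses need different treatment (gray coding, handshakes or FIFOs), which is part of what a CDC checker verifies structurally.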

Runtime issues

Conventionally, the verification team would run CDC verification on the entire design database before tapeout, as this is the point at which it becomes possible to perform a holistic check of the clock-domain structure and ensure that every single domain-crossing path is verified. However, on designs that incorporate hundreds of millions of gates, this is becoming impractical, as the compute runtime alone can run into days at a point where every hour saved or spent is precious. And, if CDC verification waits for this point, the number of violations – some of which may be false positives – will potentially generate many weeks of remedial effort, after which another CDC verification cycle needs to be run. To cope with the complexity, CDC verification needs a smarter strategy.

By grouping modules into a hierarchy, the verification team can apply a divide-and-conquer strategy. Not only that, the design team can play a bigger role in ensuring that potential CDC issues are trapped early and checked automatically as the design progresses.

A hierarchical methodology makes it possible to perform CDC checks early and often to ensure design consistency such that, following SoC database assembly, the remaining checks can pass quickly and, most likely, result in a much more manageable collection of potential violations.

Hierarchical obstacles

Traditionally, teams have avoided hierarchical management of CDC issues because of the complexity of organizing the design and ensuring that paths are not missed. A potential problem is that all known CDC paths within a block may be deemed clean, leading the block to be declared ‘CDC clean’. But there may be paths that escape attention because they cross the hierarchy boundaries in ways that cannot be caught easily – largely because the tools do not have sufficient information about the logic on the unimplemented side of the interface and the designer has made incorrect clock-related assumptions about the incoming paths.

If those sneak paths were not present, it would be possible to present the already-verified modules as black boxes to higher levels of hierarchy such that only the outer interfaces need to be verified with the other modules at that level of hierarchy. For hierarchical CDC verification to work effectively, a white- or grey-box abstraction is required in which the verification process at higher levels of hierarchy is able to reach inside the model to ensure that all potential CDC issues are verified.

As the verification environment does not have complete information about the clocking structure before final SoC assembly, reporting will tend to err on the side of caution, flagging potential issues that may not be true errors. Traditionally, designers would provide waivers for flops on incoming paths that they believe not to be problematic, to avoid them causing repeated errors in later verification runs as the module changes. However, this is a risky strategy as it relies on assumptions about the overall SoC clocking structure that may not be borne out in reality.

Refinements to the model

The waiver model needs to be refined to fit a smart hierarchical CDC verification strategy. Rather than apply waivers, designers with a clear understanding of the internal structure of their blocks can mark flops and related logic to reflect their expectations. Paths that they believe not to be an issue, and therefore not to require a synchronizer, can be marked as such and treated as low priority, focusing attention on those paths that are more likely to reveal serious errors as the SoC design is assembled and verified.

However, unlike paths marked with waivers, these paths are still in the CDC verification environment database. Not only that, they have been categorized by the design engineer to reflect their assumptions. If the tool finds a discrepancy between that assumption and the actual signals feeding into that path, errors will be generated instead of being ignored. This database-driven approach provides a smart infrastructure for CDC verification and establishes a basis for smarter reporting as the project progresses.

Smart reporting

Reporting can then be organized around the specification, rather than presented as a long list of uncategorized errors that may or may not be false positives. This not only accelerates the review process but also allows the work to be distributed among engineers. As the specification is created and paths are marked and categorized, engineers establish what they expect to see in the CDC results, providing the basis for smart reporting from the verification tools.

When structural analysis finds a problematic path that was previously thought to be unaffected by CDC issues, the engineer can zoom in on the problem and deploy formal technologies to establish the root cause and potential solutions. Once fixed, the check can be repeated to ensure that the fix has worked.

The specification-led approach also allows additional attention to be paid to blocks that are likely to lead to verification complications, such as those that employ reconvergent logic. Whereas structural analysis will identify most problems on normal logic, these areas may need closer analysis using formal technology. Because the database-driven methodology allows these sections to be marked clearly, the right verification technology can be deployed at the right time.

Summary

By moving away from waivers and black-box models, the database-driven hierarchical CDC methodology encourages design groups to take SoC-oriented clocking issues into account earlier in the design cycle. Concerns about interfaces to modules designed by groups located elsewhere, or even by different companies, are carried forward to the critical SoC-level analysis without the overhead of repeatedly re-verifying each port on the module. Through earlier CDC analysis and verification, the team reduces the risk of encountering a large number of schedule-killing violations immediately prior to tapeout, and can be far more confident that design deadlines will be met.

This article was originally published on TechDesignForums and is reproduced here by permission.



Apr 17, 2015 | Comments


High-Level Synthesis: New Driver for RTL Verification

Graham Bell   Graham Bell
   Vice President of Marketing at Real Intent

In a recent blog, Does Your Synthesis Code Play Well With Others?, I explored some of the requirements for verifying the quality of the RTL code generated by high-level synthesis (HLS) tools.  At a minimum, a state-of-the-art lint tool should be used to ensure that there are no issues with the generated code.  Results can be achieved in minutes, if not seconds, for generated blocks.

What else can be done to ensure the quality of the generated RTL code?   For functional verification, an autoformal tool, like Real Intent’s Ascent IIV product, can be used to ensure that basic operation is correct.   The IIV tool will automatically generate sequences and detect whether incorrect or undesirable behavior can occur.   Here is a quick list of what IIV can catch in the generated code (a toy example of one such issue follows the list):

  • FSM deadlocks and unreachable states
  • Bus contention and floating busses
  • Full- and Parallel-case pragma violations
  • Array bounds
  • Constant RTL expressions, nets & state vector bits
  • Dead code
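As a hedged illustration of the unreachable-state and dead-code categories, consider this toy SystemVerilog block (our own sketch, not the output of any particular HLS tool). Autoformal analysis of the state graph flags these issues without any testbench:

    module fsm_demo (
      input  logic clk, rst_n, start,
      output logic busy
    );
      typedef enum logic [1:0] {IDLE, RUN, DONE, ERR} state_t;
      state_t state;

      always_ff @(posedge clk or negedge rst_n)
        if (!rst_n) state <= IDLE;
        else
          case (state)
            IDLE:    state <= start ? RUN : IDLE;
            RUN:     state <= IDLE;   // bug: skips DONE, leaving it unreachable
            default: state <= IDLE;   // DONE and ERR branches are dead code
          endcase

      assign busy = (state == RUN);
    endmodule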

Designers are also concerned about the resettability of their designs and whether they power up into a known good state.  We have seen some interesting results when Real Intent’s Ascent XV tool is applied to RTL blocks generated by HLS.  Besides analyzing X-optimism and X-pessimism, the Ascent XV tool can determine the minimum number of flops that need to have reset lines routed to them.  To save routing resources and reduce power requirements, a minimal set of flops should be used; running additional reset lines does not improve the design.

Here are the results for a block that was 130K gates in size:

Number of Flops: 17,495
Ascent XV Analysis Time (sec): 20
Uninitialized Flops Found: 646
Percent Initialized: 96%
Redundant Flop Initializations: 11,896
Reset Savings: 68%

In this example, the Ascent XV tool took 20 seconds to analyze all 17,495 flops and discover that 646 were uninitialized, and that of the roughly 16,800 other flops, most did not need to have reset signals routed to them.   The savings were 68% compared to the unimproved design.  We have seen similar savings on other blocks generated by HLS tools.
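Why are so many initializations redundant? A minimal sketch (our own example with hypothetical names, not taken from the measured block): a datapath register whose output is only ever consumed under a control qualifier does not need its own reset line, because its power-up X can never reach logic that trusts it:

    module pipe_stage (
      input  logic       clk, rst_n,
      input  logic       valid_in,
      input  logic [7:0] data_in,
      output logic       valid_q,
      output logic [7:0] data_q
    );
      // Control state: must be reset, since downstream logic trusts it
      // from the very first cycle after power-up.
      always_ff @(posedge clk or negedge rst_n)
        if (!rst_n) valid_q <= 1'b0;
        else        valid_q <= valid_in;

      // Datapath state: consumed only while valid_q is high, and written
      // in the same cycle valid_in arrives, so no reset needs routing.
      always_ff @(posedge clk)
        data_q <= data_in;
    endmodule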

HLS is now an important part of the hardware flow, and improves the productivity of designers.  With easy generation of RTL code, designers should expect to use quick static verification tools such as lint, autoformal, and reset analysis to confirm quality and correct operation.  This will save valuable time when designs are given to simulation and gate-level synthesis tools later in the flow.



Apr 9, 2015 | Comments


Underdog Innovation: David and Goliath in Electronics

Ramesh Dewangan   Ramesh Dewangan
   Vice President of Application Engineering at Real Intent

The story of “David and Goliath”, from the book of Samuel, has taken on a secular meaning, describing any underdog situation: a contest where a smaller, weaker opponent faces a much bigger, stronger adversary.  Companies across the technology industries, not just in EDA, deal with this struggle.

Organizations have moved from a “build once, last forever” to a “build fast and improve faster” approach to meet the dynamic requirements of their customers. In order to scale, evolve and respond, companies are choosing between two business philosophies: one focuses on building larger, process-driven yet efficient organizations; the other on smaller, more efficient teams.

The panel discussion “The paradox of leadership: Incremental approach to Big Ideas” at the recent Confluence 2015 conference addressed this question.  It explored the pros and cons of each of these philosophies and tried to gauge whether there is a preferred way to create success, as part of the conference theme: “Building the Technology Organizations of Tomorrow.”  In my previous blog, Billion Dollar Unicorns, I discussed which companies were leading innovators, but the question remains: how do companies get there?

Confluence 2015 Panelists from Facebook, Pactera Technology, Saama Capital, SAP, and Zinnov.

On the panel were representatives from Facebook, Pactera Technology, Saama Capital, SAP, and Zinnov.

Whereas industry startups (the Davids) have the inherent advantage of being nimble and focused, the necessary ingredients for significant innovation, large companies (the Goliaths) suffer from bureaucratic processes that can dampen or kill innovation.

Broadly, you can classify innovation into two categories (according to Peter Thiel’s book Zero to One):

  • ‘0-1’ denotes a major innovation, and may be a disruptive solution, a new product or technology
  • ‘1-n’ denotes an evolutionary or incremental innovation

Large companies are good at 1-n innovation. The panelists emphatically asserted that the only way to achieve 0-1 innovation in a large company is to form a separate group with the right skills to focus on the specific innovation. This team can be guided by a corporate sponsor (adult supervision) or could be an independent subsidiary.

On the other hand, startups are formed on the very basis of a major idea that leads them to 0-1 innovation. In many cases, a 0-1 idea that couldn’t see the light of day in a large company is the very reason a startup is formed. In the EDA context, think Silicon Perspective, Sierra Design Automation, Berkeley Design Automation, and numerous others.

This raises an interesting question: what about startups established for a decade or more that already have differentiated products? Let us call them “established Davids” (eDavids). Atoptech, Atrenta, Berkeley DA (recently acquired), Calypto, Forte (recently acquired), Jasper (recently acquired), and Real Intent come to mind. Whereas eDavids are still working on new products (0-1 innovation), 1-n innovations form the bulk of their focus, as the majority of their work is on improving products that have served a large customer base for a good number of years.

For example, Real Intent has had the leading Clock Domain Crossing (CDC) product in the market for several years. It competes with big and small players alike, and it continues to deliver 1-n innovations in CDC.

Does it mean eDavids are not differentiating against Goliaths?

Absolutely not!

First of all, what we consider a 1-n innovation by eDavids is sometimes a 0-1 innovation for a large company. For example, one of the large EDA companies is still working on a viable CDC product.  Another large company has ceased to innovate, and its current product is on life support. A third has had a CDC product in the market for years, but with a low rate of innovation, its customers tell us.

Then there is the question of how you distinguish between 0-1 and 1-n innovations in an established product. For example, Real Intent introduced a completely unique next-generation configurable CDC debug environment with a command-line interface. Real Intent also improved the data model that enables its CDC tool to run full-chip CDC analysis on a 1-billion-gate chip. Should we call these 0-1 innovations or 1-n?

The debate on how to do innovation, among Davids (established or not) and Goliaths, will not cease, not even in EDA! But one thing is clear: eDavids are having a field day with the success of their innovations, thanks to the immense value their customers realize!



Apr 3, 2015 | Comments


Taking Control of Constraints Verification

This article was originally published on TechDesignForums and is reproduced here by permission.

Constraints are a vital part of IC design, defining, among other things, the timing with which signals move through a chip’s logic and hence how fast the device should perform. Yet despite their key role, the management and verification of constraints’ quality, completeness, consistency and fidelity to the designer’s intent is an evolving art.

Why constraints management matters

Constraints management matters for a couple of reasons: as a way of ensuring that the intent of the original designers, be they SoC architects or third-party IP providers, is taken into account throughout the design process; and for their ability to enable better designs.

For example, it’s possible to use constraints to define ‘false paths’: routes through the logic that cannot affect its overall timing and so need not be optimized, giving the synthesis and physical implementation tools greater freedom to act.

Functional false paths are rare. But the ability to define a false path is often used to denote asynchronous paths or signals that timing engines don’t have to care about because they only transition once, for example in accessing configuration registers during boot sequences. Without effective constraints management it is easy to lose track of the rationale for particular constraints, and hence the opportunity for greater optimization.

It is also possible to define ‘multi-cycle paths’, through which signals are expected to propagate in more than a single clock cycle. Designers use multi-cycle path constraints in two ways: to denote paths that really are functionally multi-cycle paths; and as a way around corporate methodologies that ban the setting of false-path constraints. In this scenario, designers define a multi-cycle path with a large multiplier as another way to relax timing requirements.
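In SDC terms, the two relaxations discussed above look like the sketch below (the object names are hypothetical, not from any real design). Note the hold-check companion that conventionally accompanies a setup multicycle of two:

    # Boot-time configuration path: timing engines need not optimize it.
    set_false_path -from [get_ports cfg_load] -to [get_pins cfg_reg*/D]

    # Genuine two-cycle datapath: give setup checks two clock periods...
    set_multicycle_path 2 -setup -from [get_pins op_a_reg*/CK] \
                                 -to   [get_pins prod_reg*/D]
    # ...and pull the hold check back to the launch edge to match.
    set_multicycle_path 1 -hold  -from [get_pins op_a_reg*/CK] \
                                 -to   [get_pins prod_reg*/D]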

Multi-mode designs, for which different constraints may apply to particular paths in different operating modes, present another constraint-management challenge. It is easy to lose track of the rationale for each constraint in each mode, and to overlook potential conflicts between multiple constraints applied to the same path in different modes.

Constraints management challenges

Managing and verifying design constraints presents a number of challenges to methodology developers and verification engineers. The first is that of carrying forward a designer’s intent, expressed in the constraints that accompany the logic definition, throughout the design flow from abstract code through synthesis and related transformations (such as test insertion) to gates in silicon.

The second, in this age of increasing chip sizes and shrinking timescales, is ensuring that verification engineers aren’t overwhelmed with such large volumes of debug data that they are unable to analyze it effectively and act upon it quickly as they work to sign off the constraints.

These issues are not well addressed in today’s methodologies: designers often use custom scripts to check the properties of constraints, such as quality and consistency.

Formal approaches can be useful in this context, but because of their speed and capacity limitations, it makes sense to develop a process of stepwise constraints refinement, using a series of targeted analyses and interventions to address the simpler issues. This reduces the burden on formal tools when they are eventually pressed into service.

In this approach, likened by some to peeling an onion, verification engineers might start by checking that the existing constraints have been correctly applied to the design. The next step could be to define all the paths which can be safely ignored, using algorithmic approaches to find such paths and denote them by adding constraints to the design. For example, multi-cycle paths need a retention capability at their start and finish, so an algorithm can check for that. The algorithm needs smarts, though: a multi-cycle path may exploit retention capabilities from elsewhere in the design, such as a state machine that is driving it, so the analysis needs to consider the path’s context as well.
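The retention pattern such an algorithm looks for amounts to matching enables at both ends of the path. A minimal SystemVerilog sketch (our own example, with hypothetical names):

    module mcp_demo (
      input  logic        clk, en,   // en asserts once every two cycles
      input  logic [15:0] a, b,
      output logic [31:0] prod_q
    );
      logic [15:0] a_q, b_q;

      // Both ends hold their values while !en, so the multiplier result
      // launched at one en pulse has two full clock periods to settle
      // before it is captured at the next.
      always_ff @(posedge clk) if (en) begin
        a_q <= a;
        b_q <= b;
      end
      always_ff @(posedge clk) if (en) prod_q <= a_q * b_q;
    endmodule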

These analyses can be done quickly, before applying formal techniques that risk delivering such detailed reports that engineers get overwhelmed. Effective constraints verification tools need to be able to categorize exceptions based on predefined principles, to provide a prioritized view of what’s important.

Ensuring consistency between SoC and block-level constraints

As the use of IP increases, constraints files are providing a useful way to ensure that the same timing budgets are not being allocated twice, once at the block level and once at the SoC level.

Checking for this kind of consistency throws up subtle issues. For example, an IP block may include asynchronous paths that are recognized within a block-level constraint. At the SoC level, though, the IP block’s asynchronous paths may not matter and so can be safely ignored. There’s a twist, though – if other signals within the IP block depend on these paths, then the original constraints on those paths should be taken into account after all.

The key is to be able to assess block-level constraints within the SoC context, which may be easier said than done if the SoC constraints file doesn’t include placeholders for these issues. For example, how do we promote an internally generated clock, derived from a signal on the IP boundary, up to the SoC level?

It is also important to remember a second form of consistency that needs checking – between blocks. Depending on the context in which a block is being driven, it may be considered as synchronous or asynchronous. If a tool regards one of the instantiations of the block as correct, it may see other instantiations in different contexts as incorrect – creating a reporting issue.

Conclusions

Given the importance of constraints in defining how an IC is meant to work, it is increasingly important that their quality, completeness, and consistency are properly verified, and that they are correctly applied throughout the whole design elaboration process.

The best way to verify constraints is to develop a step-by-step approach, tackling particular classes of issue at a time, supported by tools that can sort and prioritize their error reports so that engineers can focus on the most important issues first. If these tools also help preserve the design intent expressed in the constraints all the way through the process, that is a bonus.



Mar 27, 2015 | Comments


Billion Dollar Unicorns

Ramesh Dewangan   Ramesh Dewangan
   Vice President of Application Engineering at Real Intent


The business magazine Fortune, in a Feb. 2015 article, proclaimed The Age of Unicorns: private companies valued at more than $1 billion by investors. Unicorns are the stuff of myth, but billion-dollar tech start-ups seem to be everywhere, backed by a bull market and a new generation of disruptive technology.  According to a recent New York Times article, there are over 50 unicorns in Silicon Valley right now.

Upcoming unicorns formed a popular discussion topic at the Confluence 2015 conference organized by Zinnov, on March 12th in Santa Clara, Calif. The conference theme was “Building the Technology Organizations of Tomorrow”.

Here is a sampling of six unicorns that have emerged as real winners using innovative strategies:

Airbnb

Airbnb (San Francisco) is a web marketplace for the rental of local lodging, with listings in 192 countries. It uses social media technology to conduct background checks for both providers and renters, to amplify stories, connect with travelers and ultimately drive business growth. And you thought Facebook was just for time-wasters!  Watch this YouTube video to see how Airbnb leverages social media.

Uber

Uber operates a mobile-phone-based transportation network using private cars and taxis. It employs just three people per city when it first launches operations in a new location. These teams get support from the San Francisco headquarters, mainly for IT operations, and leverage the network of operators in other cities. In contrast, rivals employ hundreds of employees to manage a driver network. This fat-free model is helping Uber roll out operations at a rapid pace.

Flipkart

Flipkart (Bangalore) is a web store and sellers marketplace in India. It was established in 2007, and is valued at $12 billion. One specific feature – “Cash on delivery” – introduced in 2013, accelerated their sales significantly. You hand over the required cash to the delivery staff, and get the product handed over to you in return, all with a human touch. They figured India is primarily a cash-driven economy where plastic card penetration is extremely low (<1%). Why couldn’t Amazon think of it?

GoPro

The high-definition personal camera company GoPro is based in San Mateo, Calif.  It raised $427 million when it went public in 2014, at a valuation of $2.96 billion.  It turned its customers into a stoked sales force by enabling users to flood the Internet with videos of their own adventures. In 2013 alone, GoPro customers uploaded 2.8 years’ worth of video featuring GoPro in the title. Each video not only serves as a customer testimonial, it is guerrilla advertising, giving potential customers millions of reasons why they should buy one of GoPro’s little cameras. To learn more, read the Wired article Why GoPro’s Success Isn’t Really About the Cameras.

Pivotal Labs

Pivotal Labs, based in San Francisco, offers a next-generation Platform-as-a-service (PaaS) for creating web applications in the cloud.  It has grown to over 400 consultants, with an office presence in nine major tech hubs in the US and now internationally in Toronto and London. They use pair programming (agile software development) with their clients, a technique in which two engineers work together at one computer, write code, and collaborate on solutions to problems. Pair programming with the clients is the most common reason they choose to work with Pivotal since it accelerates learning and expertise. Check out this video article on how Pair Programming is the secret sauce to Pivotal Labs’ growth and success.

ZOHO

Zoho University, in Chennai, India, started as a corporate social responsibility experiment a decade ago. The IT university has no exams, deadlines or assignments, but students are paid to attend and graduates receive a professional certificate. Zoho University is now among the largest contributors to the 2,600-strong workforce of the India-based IT company Zoho Corporation. Nearly 15%, or about 300, of the company’s employees are graduates of Zoho University. Learn more about this innovative educational institution in this video interview of Sridhar Vembu, CEO of Zoho.

So, what about the design automation industry?

First of all, startups will not have billion-dollar valuations, given that the market value of the whole industry is less than $20 billion. So, let’s define our one-horned wonders as the hot startups that are ready to deliver significantly superior products compared to the big 3 of EDA.

So, where are the EDA unicorns? Where will they come from?

I believe that unicorns will be the ones using innovative strategies to provide solutions that tackle highly difficult pain points in chip design and prevent chip-killer problems. As I mentioned in my blog Redefining Chip Complexity in the SoC Era, we are dealing with chip complexity that is orders of magnitude higher than in the past. The complexity comes not only from the sheer size (approaching 1 billion gates) or lower process nodes, but also the scale of IP integration, complex low-power requirements, asynchronous interfaces, x-propagation risks, verification bug escapes, and so on.

EDA unicorns will create high capacity and high performance methodologies to prevent chip failures and provide a reliable sign-off solution!



Mar 20, 2015 | Comments


My Impressions of DVCon USA 2015: Lies; Experts; Art or Science?

David Scott   David Scott
   Principal Architect at Real Intent

Last week I attended the Design and Verification Conference in San Jose.  It had been six years since my last visit to the conference.  Before then, I had attended five years in a row, so it was interesting to see what had changed in the industry.  I focused on test bench topics, so this blog records my impressions in that area.

First, my favorite paper was “Lies, Damned Lies, and Coverage” by Mark Litterick of Verilab, which won an Honorable Mention in the Best Paper category.  Mark explained common shortcomings of coverage models implemented as SystemVerilog covergroups.  For example, a covergroup has its own sampling event, which may or may not be appropriate for the design.  If you sample when a value change does not matter to the design, the covergroup counts a value as covered when in fact it really isn’t.  In the slides, Mark’s descriptions of common errors were pithy and, like any good observation, obvious only in retrospect.  More interestingly, he proposed correlating coverage events via the UCIS (Unified Coverage Interoperability Standard) to verify that they have the expected relationships.  For example, a particular covergroup bin count might be expected to be the same as the pass count of some cover property (in SystemVerilog Assertions) somewhere else, or to match some block count in code coverage.  It struck me that some aspects of this must be verifiable using formal analysis. You can read the entire paper here and see the presentation slides here.
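To make the sampling-event point concrete, here is a toy sketch of our own (not from Mark’s paper): the first covergroup counts data values on every clock, even in cycles the design ignores them, while the second is qualified by the cycles that actually matter:

    module cov_demo (
      input logic       clk,
      input logic       valid,   // design consumes data only when valid
      input logic [3:0] data
    );
      // Samples every clock: can report 100% coverage of values the
      // design never actually used.
      covergroup cg_naive @(posedge clk);
        coverpoint data;
      endgroup

      // Samples only in cycles where data matters to the design.
      covergroup cg_qualified @(posedge clk iff valid);
        coverpoint data;
      endgroup

      cg_naive     cn = new();
      cg_qualified cq = new();
    endmodule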

I was also impressed by the use of the C language in verification — not SystemC, but old-fashioned C itself.  Harry Foster of Mentor Graphics shared some results of his verification survey, and there were only two languages whose use had increased from year to year: SystemVerilog and C.  For example, there was a Cypress paper by David Crutchfield et al. where configuration files were processed in C.  Why is this, I wondered?  Perhaps because SystemVerilog makes it easy via the Direct Programming Interface (DPI): you can call SystemVerilog functions from C and vice versa.  Also, a lot of people know C.  I imagine if there were a Python DPI or Perl DPI, people would use those a lot as well!
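The DPI mechanics are indeed light, which may help explain the popularity. A minimal sketch with hypothetical names:

    // SystemVerilog side: declare the C function once, call it natively.
    import "DPI-C" function int parse_config(input string filename);

    module tb;
      initial
        if (parse_config("regress.cfg") != 0)
          $fatal(1, "config parse failed");
    endmodule

    /* Matching C side, compiled and linked with the simulator
       (a DPI string maps to const char*):
       int parse_config(const char* filename) { return 0; }
    */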

Of course, the Universal Verification Methodology (UVM) is becoming, well… almost universal.  I get the impression that verification architects are turning into software engineers.  They are having fun, if that is the word, creating abstractions so that they can re-use the same top-level verification code in different circumstances, with differing design blocks or versions of IP.  But as with creating classes in C++ software, as I do for Real Intent, there are many different ways of doing the same thing.  It seems to me UVM has made the verification problem less constrained rather than more constrained, in some sense, and that adds some risks, as well as making static analysis more difficult.

The crowd got a kick out of the fact that even the UVM experts can’t agree among themselves how much of it is minimally necessary; there were some lively discussions among the presenters in the UVM Session on Wednesday afternoon.  First, Stu Sutherland and Tom Fitzpatrick proposed a minimal subset.  The next two authors contradicted it.  One feature that Tom said never to use was then the subject of a paper by John Aynsley.  Last in the session, my friend Rich Edelman described his UVM template generator.  I think there could be as many template generators as authors!

Some presentations had the tinge of an advertisement.  There was an “e” paper where a user described reasons to miss aspect-oriented programming, which is not found in SystemVerilog.  For the first time, I got a good definition of aspect-oriented programming, which you will find on Wikipedia, as focused on cross-cutting concerns.  My paraphrase of cross-cutting concerns is a feature that usually requires implementation in multiple locations; an aspect-oriented language can put the cross-cutting concerns in one place.  But it also strikes me that an aspect-oriented language really allows the extension or re-definition of anything from anywhere.  This may in fact be aspect-oriented, or it may not; nothing guarantees that it is.  If not, you risk a giant mess where you need to read all the source code to understand anything.  At least, object-oriented languages like SystemVerilog have features that push people in an object-oriented direction.

Finally, for Real Intent, I was encouraged to hear from Harry Foster, during the “Art or Science?” panel,  that “formal apps” — or focused formal applications dedicated to analysis of a particular problem area — grew in usage year-to-year by over 60%, and this is the fastest-growing area for EDA tools.  I’m glad to be working for a company in such an interesting area.

P.S.  The answer, by the way, to the question of whether verification is “Art or Science” is easy.  Of course, it’s both!



Mar 13, 2015 | Comments


Smarter Verification: Shift Mindset to Shift Left [Video]

Graham Bell   Graham Bell
   Vice President of Marketing at Real Intent

The Design and Verification Conference Silicon Valley was held this week.  During Aart de Geus’ keynote, he shared how SoC verification is “shifting left”, so that debug starts earlier and results are delivered more quickly.   He identified a number of key technologies that have made this possible:

  • Static verification that uses a mix of specialized code analysis and formal technology, which are much faster and more focused than traditional simulation
  • New third generation of analysis engines
  • Advancements in debug

Real Intent has also been talking about this new suite of technologies that improve the whole process of SoC verification.  Pranav Ashar, CTO at Real Intent, wrote about these in a blog posted on the EETimes web-site.  Titled “Shifting Mindsets: Static Verification Transforms SoC Design at RT Level“, it introduces the idea of objective-driven verification:

We are at the dawn of a new age of digital verification for SoCs. A fundamental change is underway. We are moving away from a tool and technology approach — “I have a hammer, where are some nails?” — and toward a verification-objective mindset for design sign-off, such as “Does my design achieve reset in two cycles?”

Objective-driven verification at the RT level now is being accomplished using static-verification technologies. Static verification comprises deep semantic analysis (DSA) and formal methods. DSA is about understanding the purpose and intent of logic, flip-flops, state machines, etc. in a design, in the context of the verification objective being addressed. When this understanding is at the core of an EDA tool set, a major part of the sign-off process happens before the use or need of formal analysis.

The right mix of these two components — DSA and formal methods — significantly reduces the need for dynamic analysis (simulation). Although dynamic analysis continues to have a role, increasingly it is viewed as a backstop and not the main focus of the verification flow. Any simulation must be absolutely necessary and be tied to a companion static analysis step.

Click here to read the entire article.

Pranav also covered this topic in a recent interview with Warren Savage, President and CEO of IP Extreme, on his IP Watch YouTube channel. Pranav shares his background in the high-tech industry before the conversation turns to verification and how it has changed over the years.



Mar 6, 2015 | Comments


New Ascent Lint, Cricket Video Interview and DVCon Roses

Graham Bell   Graham Bell
   Vice President of Marketing at Real Intent

New Ascent Lint with DO-254 Compliance Testing

On February 25 we announced the 2015 release of Ascent Lint for comprehensive RTL analysis and rule checking. The new version delivers enhanced support for the SystemVerilog language, DO-254 policy files for compliance testing of complex electronic hardware in airborne systems, deeper rule coverage and easy configurability. We believe it is the industry’s fastest, highest-capacity and most precise lint solution.

Additional enhancements and new features for Ascent Lint include:

  • Enhanced VHDL finite state machine (FSM) handling for deeper analysis
  • 17 new VHDL and 12 new Verilog lint rules that ensure design code quality and consistency for a wide range of potential issues
  • Lower noise in reporting of design issues

To read further details about the announcement, click here. For additional insights and comments from Srinivas Vaidyanathan, staff technical engineer, including his take on the Cricket World Cup, please watch the video interview below.

Real Intent at DVCon 2015: Verification Solutions and Roses in Booth #602

We will exhibit our Ascent and Meridian products in Booth #602 at the 2015 Design & Verification Conference & Exhibition (DVCon 2015) next week. Visitors to our booth also will receive a rose from Real Intent – a sweet tradition for two years now. DVCon, which typically attracts more than 800 attendees, is the premier industry conference for design and verification engineers of all experience levels, and for engineering managers.

DVCon Expo Booth Crawl
Monday, Mar. 2, 5-7 p.m. – food and drink provided

DVCon Expo Exhibit
Tuesday, Mar. 3 and Wednesday, Mar. 4 from 2:30-6:30 p.m.
at the Doubletree Hotel, San Jose, Calif.

I look forward to seeing you there!



Feb 27, 2015 | Comments


Happy Lunar New Year: Year of the Ram (or is it Goat or Sheep?)

Graham Bell   Graham Bell
   Vice President of Marketing at Real Intent

Lunar New Year’s Day is on Thursday, February 19, 2015.  According to Chinese astrology, 2015 is the year of the Wooden Ram and is the 4,712th year in the traditional calendar.  The original Chinese word for this year’s animal is “yang,” a generic term for various horned ruminating mammals. During the translation process, people have interpreted the word differently, and communities pick the animal that represents the qualities they admire. For example, sheep are associated with mildness and moderation, which is seen as an ideal attitude by some Asian societies, so they will call 2015 the Year of the Sheep.

You can find an overwhelming amount of information on various web pages.  The following Wikipedia page is a good place to start:  Goat (zodiac). Let’s just say that the Year of the Ram will be an auspicious one and will bring a happy turnaround in fortunes in the coming months.

Happy New Year!

P.S.  I am reminded of the stories about early computer translation programs that converted “hydraulic ram” into the equivalent of “water goat,” which is not the same thing!



Feb 20, 2015 | Comments


Video: Clock-Domain Crossing Verification: Introduction; SoC challenges; and Keys to Success

Graham Bell   Graham Bell
   Vice President of Marketing at Real Intent

In the YouTube video interview below, Oren Katzir, vice-president of application engineering, introduces the topic of clock-domain crossing (CDC) verification.  He identifies the four key issues that must be addressed to achieve SoC sign-off, and the features that Real Intent’s Meridian CDC tool offers to handle the deluge of data that can arise in CDC analysis and to work effectively with different design methodologies.  I am sure you will learn something from Oren’s experience with many customers’ designs.



Feb 12, 2015 | Comments