Blog Archive
July 2015
7/01/2015: Richard Goering and Us: 30 Great Years
June 2015
6/12/2015: Quick 2015 DAC Recap and Racing Photo Album
6/05/2015: Advanced FPGA Sign-off Includes DO-254 and …Missing DAC?
May 2015
5/28/2015: #2 on GarySmithEDA What to See @ DAC List – Why?
5/14/2015: SoC Verification: There is a Stampede!
5/07/2015: Drilling Down on the Internet-of-Things (IoT)
April 2015
4/30/2015: Reflections on Accellera UCIS: Design by Architect and Committee
4/23/2015: DO-254 Without Tears
4/17/2015: Analysis of Clock Intent Requires Smarter SoC Verification
4/09/2015: High-Level Synthesis: New Driver for RTL Verification
4/03/2015: Underdog Innovation: David and Goliath in Electronics
March 2015
3/27/2015: Taking Control of Constraints Verification
3/20/2015: Billion Dollar Unicorns
3/13/2015: My Impressions of DVCon USA 2015: Lies; Experts; Art or Science?
3/06/2015: Smarter Verification: Shift Mindset to Shift Left [Video]
February 2015
2/27/2015: New Ascent Lint, Cricket Video Interview and DVCon Roses
2/20/2015: Happy Lunar New Year: Year of the Ram (or is it Goat or Sheep?)
2/12/2015: Video: Clock-Domain Crossing Verification: Introduction; SoC challenges; and Keys to Success
2/06/2015: A Personal History of Transaction Interfaces to Hardware Emulation: Part 2
January 2015
1/30/2015: A Personal History of Transaction Interfaces to Hardware Emulation: Part 1
1/22/2015: Intel’s new SoC-based Broadwell CPUs: Less Filling, Taste Great!
1/19/2015: Reporting Happiness: Not as Easy as You Think
1/09/2015: 38th VLSI Design Conf. Keynote: Nilekani on IoT and Smartphones
December 2014
12/22/2014: December 2014 Holiday Party
12/17/2014: Happy Holidays from Real Intent!
12/12/2014: Best of “Real Talk”, Q4 Summary and Latest Videos
12/04/2014: P2415 – New IEEE Power Standard for Unified Hardware Abstraction
November 2014
11/27/2014: The Evolution of RTL Lint
11/20/2014: Parallelism in EDA Software – Blessing or a Curse?
11/13/2014: How Big is WWD – the Wide World of Design?
11/06/2014: CMOS Pioneer Remembered: John Haslet Hall
October 2014
10/31/2014: Is Platform-on-Chip The Next Frontier For IC Integration?
10/23/2014: DVClub Shanghai: Making Verification Debug More Efficient
10/16/2014: ARM TechCon Video: Beer, New Meridian CDC, and Arnold Schwarzenegger ?!
10/10/2014: New CDC Verification: Less Filling, Picture Perfect, and Tastes Great!
10/03/2014: ARM Fueling the SoC Revolution and Changing Verification Sign-off
September 2014
9/25/2014: Does Your Synthesis Code Play Well With Others?
9/19/2014: It’s Time to Embrace Objective-driven Verification
9/12/2014: Autoformal: The Automatic Vacuum for Your RTL Code
9/04/2014: How Bad is Your HDL Code? Be the First to Find out!
August 2014
8/29/2014: Fundamentals of Clock Domain Crossing: Conclusion
8/21/2014: Video Keynote: New Methodologies Drive EDA Revenue Growth
8/15/2014: SoCcer: Defending your Digital Design
8/08/2014: Executive Insight: On the Convergence of Design and Verification
July 2014
7/31/2014: Fundamentals of Clock Domain Crossing Verification: Part Four
7/24/2014: Fundamentals of Clock Domain Crossing Verification: Part Three
7/17/2014: Fundamentals of Clock Domain Crossing Verification: Part Two
7/10/2014: Fundamentals of Clock Domain Crossing Verification: Part One
7/03/2014: Static Verification Leads to New Age of SoC Design
June 2014
6/26/2014: Reset Optimization Pays Big Dividends Before Simulation
6/20/2014: SoC CDC Verification Needs a Smarter Hierarchical Approach
6/12/2014: Photo Booth Blackmail at DAC in San Francisco!
6/06/2014: Quick Reprise of DAC 2014
May 2014
5/01/2014: Determining Test Quality through Dynamic Runtime Monitoring of SystemVerilog Assertions
April 2014
4/24/2014: Complexity Drives Smart Reporting in RTL Verification
4/17/2014: Video Update: New Ascent XV Release for X-optimization, ChipEx show in Israel, DAC Preview
4/11/2014: Design Verification is Shifting Left: Earlier, Focused and Faster
4/03/2014: Redefining Chip Complexity in the SoC Era
March 2014
3/27/2014: X-Verification: A Critical Analysis for a Low-Power World (Video)
3/14/2014: Engineers Have Spoken: Design And Verification Survey Results
3/06/2014: New Ascent IIV Release Delivers Enhanced Automatic Verification of FSMs
February 2014
2/28/2014: DVCon Panel Drill Down: “Where Does Design End and Verification Begin?” – Part 3
2/20/2014: DVCon Panel Drill Down: “Where Does Design End and Verification Begin?” – Part 2
2/13/2014: DVCon Panel Drill Down: “Where Does Design End and Verification Begin?” – Part 1
2/07/2014: Video Tech Talk: Changes In Verification
January 2014
1/31/2014: Progressive Static Verification Leads to Earlier and Faster Timing Sign-off
1/30/2014: Verific’s Front-end Technology Leads to Success and a Giraffe!
1/23/2014: CDC Verification of Fast-to-Slow Clocks – Part Three: Metastability Aware Simulation
1/16/2014: CDC Verification of Fast-to-Slow Clocks – Part Two: Formal Checks
1/10/2014: CDC Verification of Fast-to-Slow Clocks – Part One: Structural Checks
1/02/2014: 2013 Highlights And Giga-scale Predictions For 2014
December 2013
12/13/2013: Q4 News, Year End Summary and New Videos
12/12/2013: Semi Design Technology & System Drivers Roadmap: Part 6 – DFM
12/06/2013: The Future is More than “More than Moore”
November 2013
11/27/2013: Robert Eichner’s presentation at the Verification Futures Conference
11/21/2013: The Race For Better Verification
11/18/2013: Experts at the Table: The Future of Verification – Part 2
11/14/2013: Experts At The Table: The Future Of Verification Part 1
11/08/2013: Video: Orange Roses, New Product Releases and Banner Business at ARM TechCon
October 2013
10/31/2013: Minimizing X-issues in Both Design and Verification
10/23/2013: Value of a Design Tool Needs More Sense Than Dollars
10/17/2013: Graham Bell at EDA Back to the Future
10/15/2013: The Secret Sauce for CDC Verification
10/01/2013: Clean SoC Initialization now Optimal and Verified with Ascent XV
September 2013
9/24/2013: EETimes: An Engineer’s Progress With Prakash Narain, Part 4
9/20/2013: EETimes: An Engineer’s Progress With Prakash Narain, Part 3
9/20/2013: CEO Viewpoint: Prakash Narain on Moving from RTL to SoC Sign-off
9/17/2013: Video: Ascent Lint – The Best Just Got Better
9/16/2013: EETimes: An Engineer’s Progress With Prakash Narain, Part 2
9/16/2013: EETimes: An Engineer’s Progress With Prakash Narain
9/10/2013: SoC Sign-off Needs Analysis and Optimization of Design Initialization in the Presence of Xs
August 2013
8/15/2013: Semiconductor Design Technology and System Drivers Roadmap: Process and Status – Part 4
8/08/2013: Semiconductor Design Technology and System Drivers Roadmap: Process and Status – Part 3
July 2013
7/25/2013: Semiconductor Design Technology and System Drivers Roadmap: Process and Status – Part 2
7/18/2013: Semiconductor Design Technology and System Drivers Roadmap: Process and Status – Part 1
7/16/2013: Executive Video Briefing: Prakash Narain on RTL and SoC Sign-off
7/05/2013: Lending a ‘Formal’ Hand to CDC Verification: A Case Study of Non-Intuitive Failure Signatures — Part 3
June 2013
6/27/2013: Bryon Moyer: Simpler CDC Exception Handling
6/21/2013: Lending a ‘Formal’ Hand to CDC Verification: A Case Study of Non-Intuitive Failure Signatures — Part 2
6/17/2013: Peggy Aycinena’s interview with Prakash Narain
6/14/2013: Lending a ‘Formal’ Hand to CDC Verification: A Case Study of Non-Intuitive Failure Signatures — Part 1
6/10/2013: Photo Booth Blackmail!
6/03/2013: Real Intent is on John Cooley’s “DAC’13 Cheesy List”
May 2013
5/30/2013: Does SoC Sign-off Mean More Than RTL?
5/24/2013: Ascent Lint Rule of the Month: DEFPARAM
5/23/2013: Video: Gary Smith Tells Us Who and What to See at DAC 2013
5/22/2013: Real Intent is on Gary Smith’s “What to see at DAC” List!
5/16/2013: Your Real Intent Invitation to Fun and Fast Verification at DAC
5/09/2013: DeepChip: “Real Intent’s not-so-secret DVcon’13 Report”
5/07/2013: TechDesignForum: Better analysis helps improve design quality
5/03/2013: Unknown Sign-off and Reset Analysis
April 2013
4/25/2013: Hear Alexander Graham Bell Speak from the 1880′s
4/19/2013: Ascent Lint rule of the month: NULL_RANGE
4/16/2013: May 2 Webinar: Automatic RTL Verification with Ascent IIV: Find Bugs Simulation Can Miss
4/05/2013: Conclusion: Clock and Reset Ubiquity – A CDC Perspective
March 2013
3/22/2013: Part Six: Clock and Reset Ubiquity – A CDC Perspective
3/21/2013: The BIG Change in SoC Verification You Don’t Know About
3/15/2013: Ascent Lint Rule of the Month: COMBO_NBA
3/15/2013: System-Level Design Experts At The Table: Verification Strategies – Part One
3/08/2013: Part Five: Clock and Reset Ubiquity – A CDC Perspective
3/01/2013: Quick DVCon Recap: Exhibit, Panel, Tutorial and Wally’s Keynote
3/01/2013: System-Level Design: Is This The Era Of Automatic Formal Checks For Verification?
February 2013
2/26/2013: Press Release: Real Intent Technologist Presents Power-related Paper and Tutorial at ISQED 2013 Symposium
2/25/2013: At DVCon: Pre-Simulation Verification for RTL Sign-Off includes Automating Power Optimization and DFT
2/25/2013: Press Release: Real Intent to Exhibit, Participate in Panel and Present Tutorial at DVCon 2013
2/22/2013: Part Four: Clock and Reset Ubiquity – A CDC Perspective
2/18/2013: Does Extreme Performance Mean Hard-to-Use?
2/15/2013: Part Three: Clock and Reset Ubiquity – A CDC Perspective
2/07/2013: Ascent Lint Rule of the Month: ARITH_CONTEXT
2/01/2013: “Where Does Design End and Verification Begin?” and DVCon Tutorial on Static Verification
January 2013
1/25/2013: Part Two: Clock and Reset Ubiquity – A CDC Perspective
1/18/2013: Part One: Clock and Reset Ubiquity – A CDC Perspective
1/07/2013: Ascent Lint Rule of the Month: MIN_ID_LEN
1/04/2013: Predictions for 2014, Hier. vs Flat, Clocks and Bugs
December 2012
12/14/2012: Real Intent Reports on DVClub Event at Microprocessor Test and Verification Workshop 2012
12/11/2012: Press Release: Real Intent Records Banner Year
12/07/2012: Press Release: Real Intent Rolls Out New Version of Ascent Lint for Early Functional Verification
12/04/2012: Ascent Lint Rule of the Month: OPEN_INPUT
November 2012
11/19/2012: Real Intent Has Excellent EDSFair 2012 Exhibition
11/16/2012: Peggy Aycinena: New Look, New Location, New Year
11/14/2012: Press Release: New Look and New Headquarters for Real Intent
11/05/2012: Ascent Lint HDL Rule of the Month: ZERO_REP
11/02/2012: Have you had CDC bugs slip through resulting in late ECOs or chip respins?
11/01/2012: DAC survey on CDC bugs, X propagation, constraints
October 2012
10/29/2012: Press Release: Real Intent to Exhibit at ARM TechCon 2012 – Chip Design Day
September 2012
9/24/2012: Photos of the space shuttle Endeavour from the Real Intent office
9/20/2012: Press Release: Real Intent Showcases Verification Solutions at Verify 2012 Japan
9/14/2012: A Bolt of Inspiration
9/11/2012: ARM blog: An Advanced Timing Sign-off Methodology for the SoC Design Ecosystem
9/05/2012: When to Retool the Front-End Design Flow?
August 2012
8/27/2012: X-Verification: What Happens When Unknowns Propagate Through Your Design
8/24/2012: Article: Verification challenges require surgical precision
8/21/2012: How To Article: Verifying complex clock and reset regimes in modern chips
8/20/2012: Press Release: Real Intent Supports Growth Worldwide by Partnering With EuropeLaunch
8/06/2012: SemiWiki: The Unknown in Your Design Can be Dangerous
8/03/2012: Video: “Issues and Struggles in SOC Design Verification”, Dr. Roger Hughes
July 2012
7/30/2012: Video: What is Driving Lint Usage in Complex SOCs?
7/25/2012: Press Release: Real Intent Adds to Japan Presence: Expands Office, Increases Staff to Meet Demand for Design Verification and Sign-Off Products
7/23/2012: How is Verification Complexity Changing, and What is the Impact on Sign-off?
7/20/2012: Real Intent in Brazil
7/16/2012: Foosball, Frosty Beverages and Accelerating Verification Sign-off
7/03/2012: A Good Design Tool Needs a Great Beginning
June 2012
6/14/2012: Real Intent at DAC 2012
6/01/2012: DeepChip: Cheesy List for DAC 2012
May 2012
5/31/2012: EDACafe: Your Real Intent Invitation to Fast Verification and Fun at DAC
5/30/2012: Real Intent Video: New Ascent Lint and Meridian CDC Releases and Fun at DAC 2012
5/29/2012: Press Release: Real Intent Leads in Speed, Capacity and Precision with New Releases of Ascent Lint and Meridian CDC Verification Tools
5/22/2012: Press Release: Over 35% Revenue Growth in First Half of 2012
5/21/2012: Thoughts on RTL Lint, and a Poem
5/21/2012: Real Intent is #8 on Gary Smith’s “What to see at DAC” List!
5/18/2012: EETimes: Gearing Up for DAC – Verification demos
5/08/2012: Gabe on EDA: Real Intent Helps Designers Verify Intent
5/07/2012: EDACafe: A Page is Turned
5/07/2012: Press Release: Graham Bell Joins Real Intent to Promote Early Functional Verification & Advanced Sign-Off Circuit Design Software
March 2012
3/21/2012: Press Release: Real Intent Demos EDA Solutions for Early Functional Verification & Advanced Sign-off at Synopsys Users Group (SNUG)
3/20/2012: Article: Blindsided by a glitch
3/16/2012: Gabe on EDA: Real Intent and the X Factor
3/10/2012: DVCon Video Interview: “Product Update and New High-capacity ‘X’ Verification Solution”
3/01/2012: Article: X-Propagation Woes: Masking Bugs at RTL and Unnecessary Debug at the Netlist
February 2012
2/28/2012: Press Release: Real Intent Joins Cadence Connections Program; Real Intent’s Advanced Sign-Off Verification Capabilities Added to Leading EDA Flow
2/15/2012: Real Intent Improves Lint Coverage and Usability
2/15/2012: Avoiding the Titanic-Sized Iceberg of Downton Abbey
2/08/2012: Gabe on EDA: Real Intent Meridian CDC
2/08/2012: Press Release: At DVCon, Real Intent Verification Experts Present on Resolving X-Propagation Bugs; Demos Focus on CDC and RTL Debugging Innovations
January 2012
1/24/2012: A Meaningful Present for the New Year
1/11/2012: Press Release: Real Intent Solidifies Leadership in Clock Domain Crossing
August 2011
8/02/2011: A Quick History of Clock Domain Crossing (CDC) Verification
July 2011
7/26/2011: Hardware-Assisted Verification and the Animal Kingdom
7/13/2011: Advanced Sign-off…It’s Trending!
May 2011
5/24/2011: Learn about Advanced Sign-off Verification at DAC 2011
5/16/2011: Getting A Jump On DAC
5/09/2011: Livin’ on a Prayer
5/02/2011: The Journey to CDC Sign-Off
April 2011
4/25/2011: Getting You Closer to Verification Closure
4/11/2011: X-verification: Conquering the “Unknown”
4/05/2011: Learn About the Latest Advances in Verification Sign-off!
March 2011
3/21/2011: Business Not as Usual
3/15/2011: The Evolution of Sign-off
3/07/2011: Real People, Real Discussion – Real Intent at DVCon
February 2011
2/28/2011: The Ascent of Ascent Lint (v1.4 is here!)
2/21/2011: Foundation for Success
2/08/2011: Fairs to Remember
January 2011
1/31/2011: EDA Innovation
1/24/2011: Top 3 Reasons Why Designers Switch to Meridian CDC from Real Intent
1/17/2011: Hot Topics, Hot Food, and Hot Prize
1/10/2011: Satisfaction EDA Style!
1/03/2011: The King is Dead. Long Live the King!
December 2010
12/20/2010: Hardware Emulation for Lowering Production Testing Costs
12/03/2010: What do you need to know for effective CDC Analysis?
November 2010
11/12/2010: The SoC Verification Gap
11/05/2010: Building Relationships Between EDA and Semiconductor Ventures
October 2010
10/29/2010: Thoughts on Assertion Based Verification (ABV)
10/25/2010: Who is the master who is the slave?
10/08/2010: Economics of Verification
10/01/2010: Hardware-Assisted Verification Tackles Verification Bottleneck
September 2010
9/24/2010: Excitement in Electronics
9/17/2010: Achieving Six Sigma Quality for IC Design
9/03/2010: A Look at Transaction-Based Modeling
August 2010
8/20/2010: The 10 Year Retooling Cycle
July 2010
7/30/2010: Hardware-Assisted Verification Usage Survey of DAC Attendees
7/23/2010: Leadership with Authenticity
7/16/2010: Clock Domain Verification Challenges: How Real Intent is Solving Them
7/09/2010: Building Strong Foundations
7/02/2010: Celebrating Freedom from Verification
June 2010
6/25/2010: My DAC Journey: Past, Present and Future
6/18/2010: Verifying Today’s Large Chips
6/11/2010: You Got Questions, We Got Answers
6/04/2010: Will 70 Remain the Verification Number?
May 2010
5/28/2010: A Model for Justifying More EDA Tools
5/21/2010: Mind the Verification Gap
5/14/2010: ChipEx 2010: a Hot Show under the Hot Sun
5/07/2010: We Sell Canaries
April 2010
4/30/2010: Celebrating 10 Years of Emulation Leadership
4/23/2010: Imagining Verification Success
4/16/2010: Do you have the next generation verification flow?
4/09/2010: A Bug’s Eye View under the Rug of SNUG
4/02/2010: Globetrotting 2010
March 2010
3/26/2010: Is Your CDC Tool of Sign-Off Quality?
3/19/2010: DATE 2010 – There Was a Chill in the Air
3/12/2010: Drowning in a Sea of Information
3/05/2010: DVCon 2010: Awesomely on Target for Verification
February 2010
2/26/2010: Verifying CDC Issues in the Presence of Clocks with Dynamically Changing Frequencies
2/19/2010: Fostering Innovation
2/12/2010: CDC (Clock Domain Crossing) Analysis – Is this a misnomer?
2/05/2010: EDSFair – A Successful Show to Start 2010
January 2010
1/29/2010: Ascent Is Much More Than a Bug Hunter
1/22/2010: Ascent Lint Steps up to Next Generation Challenges
1/15/2010: Google and Real Intent, 1st Degree LinkedIn
1/08/2010: Verification Challenges Require Surgical Precision
1/07/2010: Introducing Real Talk!

Richard Goering and Us: 30 Great Years

Graham Bell
   Vice President of Marketing at Real Intent

Richard Goering at his 30th DAC, San Francisco in 2014


Richard Goering, the EDA industry’s distinguished reporter and most recently a Cadence blogger, is finally closing his notebook and retiring from the world of EDA writing after 30 years.  I can’t think of anyone who is more universally regarded and respected in our industry, even though all he did was report on and analyze industry news and developments.

Richard left Cadence Design Systems at the end of June (last month).  According to his last blog posting, EDA Retrospective: 30+ Years of Highlights and Lowlights, and What Comes Next, he will be pursuing a variety of interests other than EDA. He will “keep watching to see what happens next in this small but vital industry”.

When Richard left EETimes in 2007, there was universal hand-wringing and distress that we had lost a key part of our industry.  John Cooley did a Wiretap post on his DeepChip web-site with contributions from 20 different executives, analysts and other media heavyweights.  Here are just a few quotes that I picked out for this post:

Richard was a big supporter of start-ups and provided the best coverage that this industry could ever get.

– Rajeev Madhavan of Magma

Richard has been a cornerstone of the EDA industry since I was on the customer side. He was never influenced by hype; he looked for content. I have always appreciated his objectivity, recognizing that his analysis would go beyond the superficial aspects of an industry event or product announcement and search for the real impact.

– Wally Rhines of Mentor

Goering has been an icon for the EDA industry since I first became aware of what EDA was. EDA is an industry with somewhat loose definitions. Just as you can say that RTL is defined by what Design Compiler accepts, you can say that EDA is defined by what Richard Goering covers. If he stops covering it, will it stop being EDA?

– John Sanguinetti

Like Rajeev Madhavan, I also experienced the great support of my startup back in 1999.  A few of us had founded a formal verification company called HDAC (later Averant) and we were very surprised to end up on the front page of EETimes when we launched.  Richard was indeed THE reporter at the number-one industry publication.

You will want to read Richard’s last blog post.  His retrospective covers

1985, CAE/CAD, Daisy, Mentor, Valid, Orcad, Gates-to-RTL, function verification, ESL, high-level synthesis, lawsuits, standards wars, DFM, brain drain, and Where is EDA Headed?

For the last six years, Richard’s steady hand has covered industry trends and developments on behalf of Cadence. Never one for hyperbole or exaggeration, he was always a good read.

Goodbye Richard.  You will be very much missed.

Jul 1, 2015 | Comments

Quick 2015 DAC Recap and Racing Photo Album

Graham Bell
   Vice President of Marketing at Real Intent

This year’s Design Automation Conference in San Francisco was excellent!   You don’t have to take my word for it.  At the Industry Liaison Committee meeting for DAC exhibitors on Thursday, June 11, the various members were in agreement that show traffic was up and that the quality of the customer meetings exceeded expectations.  Why is that?  It is in large part due to the tremendous efforts of Anne Cerkel, senior director for technology marketing at Mentor Graphics, who was the general chair for the 52nd DAC.

One innovation at this year’s show was opening the exhibit floor at 10 a.m.  This made it more convenient to see the morning keynotes and gave attendees more flexibility in commuting to the show from around the Bay Area.  I think you can expect to see this again at the 53rd DAC in Austin, Texas.

Our two GRID racing car simulators were one reason the show was excellent for Real Intent.  We were able to draw a large crowd to our booth.  Budding race car drivers could challenge their friends and colleagues to a race and enjoy our license-to-speed verification solutions.  A special thank you to Shama Jawaid and the team at OpenText, our partner for the license-to-speed promotion.

Here are some quick photos from the show for you to enjoy.

Our booth hostesses Crisca and Costina with their mother Chau


Happy Booth Staff

Jun 12, 2015 | Comments

Advanced FPGA Sign-off Includes DO-254 and …Missing DAC?

Graham Bell
   Vice President of Marketing at Real Intent

One trend we’re seeing in Asia is the number of FPGA design starts — now counting in the thousands. Getting a functionally correct design is the first goal for designers. It is easy to think that once that is achieved, FPGAs can be shipped out in finished products. But that’s not a robust model. For example, we have had customers with failures in the field due to a subtle timing change between FPGA part lots. Larger FPGA designs have grown in complexity, resulting in an amalgamation of disparate IP that can lead to clock-domain challenges. A robust model for FPGA designs requires advanced sign-off tools, a design flow that works easily with Xilinx and Altera tools, and support for high-reliability standards like DO-254. This is where Real Intent’s Meridian and Ascent products excel. Our CDC and lint tools provide the confidence design teams need, with unsurpassed verification and sign-off support.

Come visit us in Booth #1422 at DAC in San Francisco, June 8-10, to see our latest technical presentations. To choose your technical presentation, click here.

Can’t attend DAC?  Check out some of our latest video interviews with Real Intent technologists or email us for a personal presentation to you or your team.


Jun 5, 2015 | Comments

#2 on GarySmithEDA What to See @ DAC List – Why?

Graham Bell
   Vice President of Marketing at Real Intent

The last two weeks before the Design Automation Conference in San Francisco are a busy time.  For us marketeers, it has been called “our Superbowl.”  We want to get the word out that we have something new and important to show visitors at our exhibit booth.  But there is more going on, which I will mention after I talk about our booth activities.

Real Intent is number two on the GarySmithEDA What to See @ DAC list.   I know why we are number two on the list.  But I don’t want to give the secret away. If you know the reason, then please let everyone know in the comments section at the end of the blog.

Here are the quick titles for our technical presentations in our demo suites.

  • Ascent Lint with 3rd Generation iDebug Platform and DO-254
  • Meridian CDC for RTL with New 3rd Generation iDebug Platform
  • Ascent XV with Advanced Gate-level Pessimism Analysis
  • Accelerate Your RTL Sign-off
  • Hierarchical CDC Analysis and Reporting for Giga-gate Designs
  • Next-Generation Meridian Constraints for SDC
  • Autoformal RTL Verification
  • FPGA Sign-off and Verification

Click on this appointment sign-up link to arrange a meeting with us.

Besides fast RTL sign-off, we are also having fun at our booth and giving away cool prizes.  Come and race against other drivers in our two GRID Racing Simulators and receive your License-to-Speed.   Get your license stamped at both the Real Intent and OpenText booths (just around the corner) and you will get a chance to win $$$ Amazon gift cards.  Fill out our verification survey and you will get a chance to win a Roku 3 streaming media player or a Kindle Paperwhite e-reader.  Here is a picture of the GRID simulators.


I hinted earlier that there was more going on than just activities at the Real Intent booth.  We are organizers for a Test and Verification panel:  Scalable Verification: Evolution or Revolution? on Wed., June 10 from 4:30-6 p.m. in Room #304. Moderated by Brian Bailey (technology and EDA editor of Semiconductor Engineering), it has a panel of experts from Freescale Semiconductor, NVIDIA, Qualcomm, Hewlett-Packard and ARM.

We are also sponsoring the Love IP DAC Party on Monday, June 8 at Jillians in the Metreon, just steps away from the Moscone Center. Doors open at 7 p.m. The party is organized by Heart of Technology (HOT), the philanthropic organization founded by EDA veteran Jim Hogan. This event brings the DAC and IP communities together to raise money for the San Jose State Guardian Scholars – a program to help underprivileged and homeless students at the university. The party’s theme is “Summer of Love,” so come in your best Jerry Garcia look-alike costume!

And don’t forget the Denali Party by Cadence on Tuesday night, June 9.  You will want to sign up online before the DAC show starts to get your ticket by Tuesday morning.  See you there!

May 28, 2015 | Comments

SoC Verification: There is a Stampede!

Graham Bell
   Vice President of Marketing at Real Intent

In the stories of the Wild West from the 1800s, the image of a cattle drive is often depicted. A small team of cowboys delivers thousands of head of cattle to market. The cowboys spend many days crossing open land until they reach their destination – one with stock yards to accept their precious herd, and a rail station to deliver it quickly to market. Along the way there are dangers, including losses to predators and mad stampedes by cattle rushing blindly when frightened or disturbed. The primary job of the cowboys is to keep the herd on track and settled as they move to ship-out.

I see immediate parallels between the cowboys of the Wild West and today’s system-on-chip (SoC) design and verification engineers. Cowhands struggle to control and move a big herd. Similarly, today’s design teams grapple with how to keep a project on target and converging to tape-out and success when the gate count of SoCs has become so large it can stretch and even overwhelm their ability to stay on track. How big are these new SoCs?

The Xbox One gaming console, for example, uses 5 billion transistors, which is equivalent to 1.25 billion digital gates. Its AMD-designed SoC produced at TSMC on a 28-nm process combines eight Jaguar CPU cores and Graphics Core Next (GCN)-class integrated graphics. (See Figure 1.)


Another example is Nvidia’s GK110 GPU (also made on TSMC’s 28-nm process), which has 7.1 billion transistors. This translates to nearly 2 billion digital gates. These are not just big chips but giant chips!
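As a sanity check on those numbers: the transistor-to-gate conversions above follow the common convention of counting four transistors per two-input NAND gate equivalent (the post does not state the convention, so treat the per-gate figure as an assumption):

```python
# Gate-equivalent counts, assuming the common convention of
# 4 transistors per 2-input NAND gate equivalent.
TRANSISTORS_PER_GATE = 4

chips = {"Xbox One SoC": 5_000_000_000, "Nvidia GK110": 7_100_000_000}
for name, transistors in chips.items():
    gates = transistors / TRANSISTORS_PER_GATE
    print(f"{name}: {gates / 1e9:.2f} billion gate equivalents")
```

The 5-billion-transistor Xbox One SoC works out to exactly the 1.25 billion gates quoted above, and the GK110 to about 1.78 billion, i.e. “nearly 2 billion.”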

With each smaller semiconductor node foundries provide, more gates can be squeezed into the same die size. In parallel, many different kinds of design blocks and intellectual property (IP) are employed, usually created by third-parties, to accelerate the implementation of the design objectives. The interaction of the various blocks across various power and timing conditions adds a new kind of complexity to the design. The result is a “herd” of interfaces with thousands of different crossings that must be checked and verified to ensure the design does not run off into a fatal operating condition.

It would be great to have the luxury of several hundred design and verification engineers to verify all possible failures in these giant SoCs, but that is not usually the case. Typically a small team relies on design automation software to manage the complexity of the verification challenge.

For each interface in the SoC, signals cross asynchronously between the various IPs and must be registered correctly to ensure the integrity of the digital signal path and eliminate metastability errors. For bus-level signals, circuitry such as a FIFO manages the data transfer and verification to ensure there is no data overflow or underflow that could compromise the design. This approach requires a full-chip clock domain crossing (CDC) analysis.
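One standard technique inside such cross-domain FIFOs (background knowledge, not spelled out in this article) is to pass the read and write pointers between clock domains in Gray code: successive pointer values differ in exactly one bit, so a pointer sampled mid-transition resolves to either the old or the new value, never to an unrelated third one. A minimal Python sketch of that property:

```python
def bin_to_gray(n: int) -> int:
    # Standard binary-to-Gray encoding commonly used for FIFO pointers.
    return n ^ (n >> 1)

def hamming(a: int, b: int) -> int:
    # Number of bit positions in which a and b differ.
    return bin(a ^ b).count("1")

# Successive Gray-coded pointer values differ by exactly one bit,
# even across binary rollover points such as 7 -> 8 (0111 -> 1000).
for i in range(16):
    assert hamming(bin_to_gray(i), bin_to_gray(i + 1)) == 1
print("all successive Gray codes differ by one bit")
```

Because only one bit ever changes per increment, a metastable sample can be off by at most one count, which is what keeps the FIFO’s full/empty comparisons safe across domains.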

Design teams need three elements to achieve overnight CDC analysis runs for functional sign-off – precision, throughput and ease of use. (See Figure 2.)

Precise analysis means the software must accurately capture all possible interfaces in the design, including buses; provide reset analysis, including glitches in both asynchronous and synchronous domains; and correctly handle crossings that may be blocked by environment definition. Once the analysis is done, it is essential to be able to verify the interfaces automatically, using formal technologies, so all possible failure conditions can be exhaustively covered.

Likewise, throughput has three important considerations:  runtime, capacity and methodology. Design analysis must be done in overnight runs to make the necessary progress to stay on schedule. In terms of capacity, a terabyte of computer memory no longer is needed to verify a 500-million gate design. Instead, teams can use more standard hardware. For giga-scale designs, a hierarchical methodology is needed to leverage block-level CDC signoff for chip-level CDC verification. This methodology is effective for sign-off only if the SoC verification makes no approximations or abstractions. Only then can it truly ensure no signal crossing errors are missed.

Ease of use is the third major aspect of CDC analysis for functional sign-off. The software setup must be easy and automated to ensure the quality of results. The various kinds of analysis including formal analysis must generate results without the user writing any tests. Finally and perhaps most important, the debug of analysis results must be hierarchical and fully customizable. This kind of flexibility is available typically only from a full database of analysis results. Graphical and command-line interfaces must be able to extract the necessary reports in a variety of formats and with the data organized as required for any specific verification flow requirements. Whether using HTML docs or custom spreadsheets, the design and verification team should be able to “rope-in” any interface issue.


SoC verification poses many challenges through the sheer size of designs and the various mix of design IP, each operating with its own clocking scheme. Successful SoC design teams will meet the challenge of clock domain crossing verification with a solution that provides the necessary precision, throughput and ease-of-use they need. This approach will avoid a stampede of errors and late debugging that will delay the ship-out of their designs.


This blog article was originally published on EETimes SoC Designlines.


May 14, 2015 | Comments

Drilling Down on the Internet-of-Things (IoT)

Ramesh Dewangan
   Vice President of Application Engineering at Real Intent

Did you know there will be 50 billion connected devices by 2020?

I am not making it up!

This was the future painted by Dr. Martin Scott, SVP and GM, Cryptography Research Division, Rambus, in a scintillating session on the Internet of Things (IoT) at the Silicon Summit 2015 event organized by Global Semiconductor Alliance in April.

What will the future look like when there are more than six devices for every person on the planet?

I’ll summarize the three key points I learned regarding IoT: the components, the scope and the challenges.

Components of an IoT System

Dr. Scott laid out the high-level components of an IoT system:

  • End points are the IoT devices with sensors, hardware and software that provide touch points to users or gather data.
  • The hub/edge devices are data gateways or aggregators; they could be mobile phones, routers, towers and so on.
  • A cloud system/data center stores and analyzes the data, with high-bandwidth wide-area and local-area connectivity moving data across these components.
  • Lastly, analytics apps provide meaningful data back to the providers and consumers.

Scope of IoT

The scope of IoT applications is vast. I was aware of its applications in the consumer segment from the media coverage I had seen so far. It turns out that in addition to the consumer segment, IoT is already playing major roles in the industrial and medical segments. According to Rahul Patel, SVP and GM, Wireless Connectivity, Broadcom, IoT has limitless possibilities:


Challenges to IoT success

James Stansberry, SVP and GM IoT Products, Silicon Labs, laid out the challenges succinctly: Energy, Functionality, Integration and Connectivity.

Energy: How many times have you been frustrated by your smartphone running out of juice in the middle of the day? While phones improve battery life with every generation, IoT devices need sustained operation for much longer: they must run on a coin-cell battery for five years. Unless that happens, the applications will be limited. The SoCs driving IoT devices have to be ultra-low power.
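That coin-cell figure implies a strikingly small current budget. A rough back-of-the-envelope check, assuming a typical CR2032 cell of about 225 mAh (an illustrative assumption; real budgets also derate for self-discharge and peak loads):

```python
# Back-of-the-envelope power budget for a coin-cell IoT node.
# Assumes a CR2032 cell (~225 mAh nominal capacity) -- a common,
# but not universal, choice for small IoT devices.

CAPACITY_MAH = 225.0            # assumed nominal CR2032 capacity
LIFETIME_HOURS = 5 * 365 * 24   # five years of continuous operation

# Average current the whole device may draw, in microamps
avg_current_ua = CAPACITY_MAH / LIFETIME_HOURS * 1000

print(f"Average current budget: {avg_current_ua:.1f} uA")
```

Roughly 5 microamps of average draw for the entire device, radio included, which is why these SoCs must spend almost all of their time in deep sleep.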

Connectivity: The bandwidth and flexibility of existing connectivity systems, be they WiFi, Bluetooth or LTE, are too limiting for IoT to become pervasive. Higher bandwidth and flexible switching among connectivity protocols are needed. New standards such as the latest WiFi variants, Bluetooth Smart, ZigBee, and Thread are emerging as viable solutions.

Integration: A typical IoT SoC will need to integrate highly complex IPs and interface with sensors, control, RF and battery. The process nodes and the SoC development methodology must enable such large scale integration.


Functionality: Dr. Scott pointed out that sensitive data in transit remains vulnerable going from end point to hub to cloud. The functionality must include security as a key component.

Personal Experience

Recently, my son realized that he had lost his car keys at his college campus one weekend. I thought he would be frantic, asking around for help to find them. Instead, he calmly opened an app on his smartphone and located his keys on a convenient map, thanks to a tiny tracking chip he had added to his key ring.

IoT is not a concept any more, it is real, and it is happening. It will become pervasive and ingrained in our lives as soon as the significant challenges in functionality, energy, connectivity and integration, are tackled!

May 7, 2015 | Comments

Reflections on Accellera UCIS: Design by Architect and Committee

David Scott
   Principal Architect

In late March, Brian Bailey of Semiconductor Engineering published an article on standards: “Design by Architect or Committee?”  This made me think of my own experience with the Accellera Unified Coverage Interoperability Standard (UCIS), something of which I am both proud and embarrassed.  Proud, because when I was at Mentor Graphics I was the architect of the winning donation, and that’s a rare thing in any career — to contribute the design and architecture for an industry standard.  However, I am embarrassed because I know I could have done better in a re-design.  Any software engineer will tell you this: the second design is always better, because you’ve learned from the first.  We did some re-design as part of the standardization effort, but not to the degree I wanted.

In retrospect, the politics of Accellera UCIS were bound to be difficult, because if you think about it, the standard allows users to easily switch simulators.  That’s what the “interoperable” part means.  With simulation a slowly growing market, a sort of zero-sum game, one company’s gain is another’s loss.  No one is going to be enthusiastic about a standard that helps them lose business.  This point was also made in Brian’s article.

I also participated in the SystemVerilog standard of the IEEE.  Say what you like about SystemVerilog, it is not just design by committee, it is design by multiple committees.  But those committees do really have a lot of common ground and work pretty well together.  The atmosphere in Accellera UCIS meetings was more polarized.

The inception of the standard was the realization inside Mentor Graphics that coverage analysis needed a public application programming interface (API).  We made the crucial decision to use the same API internally for coverage creation, reporting, and analysis, and to make it usable in a standalone fashion as well.  We tried to keep it simple, easy to grasp for verification engineers who were not software developers, without the complex data models and handles that would make it more like SystemVerilog VPI.  This wasn’t entirely possible, but when we were done, we had something that was complete and functional.

It remains my favorite project of my career.  In the early days of formulating the API, I had great fun brainstorming with Doug Warmke and Samiran Laha.  (Samiran presented a poster on the UCIS API just this past DVCon.)  We then gradually re-architected the coverage GUIs with my hands-on marketing counterpart Darron May and created a suite of brand new verification management features.  It culminated in the Questa Verification Management Tracker GUI, allowing test traceability analysis tying together all kinds of coverage.  I myself wrote the internal machinery of the GUI, and it was the ultimate validation of the API started a few years before.

There was quite a debate within Mentor about whether to try to make the API an industry standard.  This is the rarified domain of Mentor’s great tactician Dennis Brophy, so I don’t really know why we decided in favor of submitting it.  I had heard there was a customer telling us to participate.  I think we then expected backing from that customer, but it didn’t happen that way.  One interesting twist is in the behavior of the Big Three.  With three big gorillas in the room, you get a lot of two-versus-one alignments.  The push to SystemVerilog 2005 was initially a Synopsys and Mentor alliance versus Cadence.  Perhaps just for political balance, UCIS became Cadence and Mentor versus Synopsys.  We started meeting with Cadence well before the donation was approved, so the basis for the UCIS standard was really a combined effort of Cadence and Mentor.

The most vocal customers on the committee, however, were from Synopsys.  This made the negotiations in the meetings difficult for us.

How we won the committee vote to accept Mentor’s donation in June 2009 I cannot say.  This had much more to do with Dennis Brophy than with me, and certainly little to do with the merits of the competing donations.  I’ll tell you, though, the most stressful day of a 25-year career was having to defend my donation to the committee, because it had to be as perfect a performance as I could muster, and yet it didn’t really matter.  It was a political exercise, not a technical one.

The first meeting after acceptance of the donation, I produced a list of defects I wanted to correct or improve.  From my point of view, this was just standard software engineering post mortem; I’d lived with the design for years and could do better.  The immediate reaction, however, was not a happy one, and I had to shut up.

I wasn’t completely ignored; some of my and others’ suggested improvements were made during my remaining tenure on the committee, and more after I left Mentor and the committee.  The most serious criticism of the standard, which I agree with, is that the coverage models are not really interoperable.  The API is, but not the way coverage itself is stored by different simulators.  While I understand users would like this, you have to ask which vendors would like this.  None.  Vendors would have to change their current implementation to adhere to some new way of doing things, only to increase the risk of losing their customers to another vendor.  The worst problem is that coverage is rooted in particular language scopes, and language scopes aren’t even standardized.  Synthesizable scopes are, but not verification scopes like those created by parameterized classes in SystemVerilog.  Because this depends on a company’s proprietary elaboration algorithm, it is very unlikely this will ever be a standard.

So, bottom line, UCIS was not a “win-win, a benefit for the vendors and a benefit for the users,” as Arturo Salz said in Brian’s article.  I think Mentor initiated it to increase its profile and credibility as a verification vendor, and I suspect others were dragged along by the force of customers, but without a clear and universal win-win, its full promise remains unrealized.

I will always be grateful that it was something I could participate in, and it is a highlight of my professional career. But I do look back on it as a stressful experience.  I hope the UCIS will evolve and mature, and I pray it encourages an ecosystem of coverage analysis tools to develop along with it. I am interested to see some positive signs, like Mark Litterick’s DVCon paper I blogged about last time.  But now UCIS has a life of its own without me.  As one of its several parents, I will follow it with natural interest, and of course, some measure of pride.


Apr 30, 2015 | Comments

DO-254 Without Tears

Dr. Pranav Ashar
   Chief Technology Officer

This article was originally published on TechDesignForums and is reproduced here by permission.

At first glance the DO-254 aviation standard, ‘Design Assurance Guideline for Airborne Electronic Hardware’, seems daunting. It defines design and verification flows tightly with regard to both implementation and traceability.

Here’s an example of the granularity within the standard: a sizeable block addresses how you write state machines, the coding style you use and the conformity of those state machines to that style.

This kind of stylistic, lower-level semantic requirement – and there are many within DO-254 – makes design managers stop and think. So it should. The standard is focused on aviation’s safety-critical demands, assessing the hardware design’s execution and functionality in appropriate depth right up to the consequences of a catastrophic failure.

Nevertheless, one pervasive and understandable concern has been the degree to which such a tightly-drawn standard will impact on and be compatible with established flows. This particularly goes for new entrants in avionics and its related markets.

Your company has a certain way of doing things so you inevitably wonder how easily that can be adapted and extended to meet the requirements of DO-254… or will a painful and expensive rethink be necessary? Can we realistically do this?

Here’s the good news. The demands of the standard map closely to how EDA tools have developed and continue to evolve. Automation therefore takes a lot of pain out of the process.

DO-254 and EDA in harmony

At Real Intent, we have just placed DO-254 at the forefront of the new release of our Ascent Lint tool. It is a good illustration of what I mean.

First, what is a linter if not largely an accumulation of design knowledge that is applied to a new project in the light of what has been discovered on earlier ones? That’s where most of the rules come from. This has obvious and very beneficial implications for designs that observe predefined coding styles.
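As a toy illustration of a lint rule as accumulated design knowledge, here is a single style check in the spirit of, but far simpler than, a real linter. The rule name and message are invented for illustration; production linters work on a parsed design, not regexes:

```python
import re

# Toy lint rule: flag any Verilog `casex` statement, a construct many
# safety-oriented coding styles prohibit because its X-matching
# semantics can silently mask unknown values. Rule name and message
# are illustrative, not from any actual tool's rule set.

RULE = ("no_casex", re.compile(r"\bcasex\b"),
        "casex can silently match unknown (X) bits; prefer case")

def lint(source):
    """Return a finding for every line that violates the rule."""
    name, pattern, message = RULE
    findings = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if pattern.search(line):
            findings.append(f"line {lineno}: [{name}] {message}")
    return findings

rtl = """always @(posedge clk)
  casex (state)
    4'b1xxx: out <= 1;
  endcase"""
print(lint(rtl))
```

One regex and one message here; a real linter applies hundreds of such accumulated rules, each encoding a lesson from an earlier project.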

Our lint tool can guide you to the right places to look. When you have that information, it becomes a lot easier to adapt your flow and your design practices.

But let’s go further and look at the philosophy behind DO-254.

Consider the implications of ‘complexity’. It may be the most overused word in EDA but it’s still true that the increasing challenges faced by electronics system design have seen more intelligence fed into tools of all types.

To achieve DO-254 compliance specifically, I would argue that a linter is an important foundation, but you need to go further. You need a suite of tools, also packed with the same kind of semantic intelligence.

The kind of hierarchical RTL verification offered by our Ascent IIV tool and the depth of understanding of unknowns within our Ascent XV X-verification tool illustrate the extra checks and traces that are likely to be needed for a safety-critical design.

And there they are already in our tools – and yes, those of some of our competitors. These tools have evolved largely in parallel with the needs of this particular standard, but more importantly with the broader needs of all electronic system design.

Processes alone can only take you so far. Processes that highlight the need for an informed approach to design are what we need. That last quality strikes me as a key and very welcome aspect of DO-254.

DO-254 has its rewards

None of this means that DO-254 compliance is ‘easy’. No safety-first design should be. Attention to detail matters. But again, you already knew that even if you have never worked on an aviation project before. Today, nothing is easy.

In that context, today’s EDA tools include capabilities that greatly improve the efficiency with which existing players in aviation deliver projects and also lower the barriers to entry for new ones. That boosts competition and thereby quality.

Right now, aviation is an exciting field. The drone market alone – spurred by interest from the likes of Amazon and Google – is being awarded multi-billion dollar valuations. In the US, the FAA has this month finally described how it sees UAVs operating, albeit relatively small ones for now.

As UAVs become more commonplace, their DO-254 compliance will increasingly be required… even if the FAA is not itself making that mandatory. Yet.

A tremendous opportunity exists and EDA can help a great many of its customers take advantage of it. DO-254 does present challenges, but they are not so different from those we already face – with the right tools you can adapt without tears.

Apr 23, 2015 | Comments

Analysis of Clock Intent Requires Smarter SoC Verification

Thanks to the widespread reuse of intellectual property (IP) blocks and the difficulty of distributing a system-wide clock across an entire device, today’s system-on-chip (SoC) designs use a large number of clock domains that run asynchronously to each other. A design involving hundreds of millions of transistors can easily incorporate 50 or more clock domains and hundreds of thousands of signals that cross between them.

Although the use of smaller individual clock domains helps improve verification of subsystems apart from the context of the full SoC, the checks required to ensure that the full SoC meets its timing constraints have become increasingly time consuming.

Signals involved in clock domain crossing (CDC), for example where a flip-flop driven by one clock signal feeds data to a flop driven by a different clock signal, raise the potential issue of metastability and data loss. Tools based on static verification technology exist to perform CDC checks and recommend the inclusion of more robust synchronizers or other changes to remove the risk of metastability and data loss.
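The hazard and the usual structural fix can be pictured in a toy cycle-based model. Python stands in for RTL here, and the random resolution is only a crude illustration of metastability, not how CDC tools model it:

```python
import random

# Toy model of a two-flop synchronizer for a single-bit CDC signal.
# A change arriving "too close" to the capture edge makes the first
# flop resolve to a random value (a crude stand-in for metastability);
# the second flop gives that value a full cycle to settle, so logic in
# the receiving domain never samples the potentially unstable stage.

def two_flop_sync(samples, seed=0):
    """samples: (input_value, arrived_near_clock_edge) per cycle."""
    rng = random.Random(seed)
    ff1 = ff2 = 0
    seen = []
    for value, near_edge in samples:
        ff2_next = ff1                   # stage 2 captures settled stage 1
        ff1 = rng.choice([0, 1]) if near_edge else value
        ff2 = ff2_next
        seen.append(ff2)                 # only ff2 feeds the new domain
    return seen

# The transition at cycle 2 lands near the clock edge; ff1 may go either
# way that cycle, but the output settles to the new value soon after.
result = two_flop_sync([(0, False), (1, True), (1, False), (1, False)])
print(result)
```

The point of the structure is that downstream logic only ever sees values that have had a full cycle to settle, trading one or two cycles of latency for immunity to the unstable first stage.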

Runtime issues

Conventionally, the verification team would run CDC verification on the entire design database before tapeout, as this is the point at which it becomes possible to perform a holistic check of the clock-domain structure and ensure that every single domain-crossing path is verified. However, on designs that incorporate hundreds of millions of gates, this is becoming impractical as the compute runtime alone can run into days, at a point where every hour saved or spent is precious. And, if CDC verification waits for this point, the number of violations – some of which may be false positives – will potentially generate many weeks of remedial effort, after which another CDC verification cycle needs to be run. To cope with the complexity, CDC verification needs a smarter strategy.

By grouping modules into a hierarchy, the verification team can apply a divide-and-conquer strategy. Not only that, the design team can play a bigger role in ensuring that potential CDC issues are trapped early and checked automatically as the design progresses.

A hierarchical methodology makes it possible to perform CDC checks early and often to ensure design consistency such that, following SoC database assembly, the remaining checks can pass quickly and, most likely, result in a much more manageable collection of potential violations.

Hierarchical obstacles

Traditionally, teams have avoided hierarchical management of CDC issues because of the complexity of organizing the design and ensuring that paths are not missed. A potential problem is that all known CDC paths within a block may be deemed clean, so the block is considered ‘CDC clean’. But there may be paths that escape attention because they cross hierarchy boundaries in ways that cannot be caught easily – largely because the tools do not have sufficient information about the logic on the unimplemented side of the interface and the designer has made incorrect clock-related assumptions about the incoming paths.

If those sneak paths were not present, it would be possible to present the already-verified modules as black boxes to higher levels of hierarchy such that only the outer interfaces need to be verified with the other modules at that level of hierarchy. For hierarchical CDC verification to work effectively, a white- or grey-box abstraction is required in which the verification process at higher levels of hierarchy is able to reach inside the model to ensure that all potential CDC issues are verified.
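One way to picture the grey-box idea is a sketch in which each verified block exports the clock-domain assumptions on its boundary ports, so top-level analysis can still catch mismatches. The names and data model are illustrative, not any tool's actual format:

```python
from dataclasses import dataclass, field

# Sketch of hierarchical CDC checking with grey-box abstractions.
# A verified block is replaced not by an opaque black box but by a
# record of its boundary ports and the clock domains its designer
# assumed for them, so chip-level analysis can flag any connection
# that violates those assumptions. All names are illustrative.

@dataclass
class GreyBox:
    name: str
    # port name -> clock domain assumed for that port inside the block
    port_domains: dict = field(default_factory=dict)

def check_top_level(connections, blocks):
    """connections: (src_block, src_port, dst_block, dst_port) tuples."""
    violations = []
    for src, sport, dst, dport in connections:
        src_dom = blocks[src].port_domains[sport]
        dst_dom = blocks[dst].port_domains[dport]
        if src_dom != dst_dom:
            violations.append(
                f"{src}.{sport} ({src_dom}) -> {dst}.{dport} ({dst_dom}): "
                "crossing needs a synchronizer or a revised assumption")
    return violations

blocks = {
    "uart": GreyBox("uart", {"tx_done": "clk_periph"}),
    "cpu":  GreyBox("cpu",  {"irq_in":  "clk_core"}),
}
# uart.tx_done was verified inside its block assuming clk_periph, but at
# the top level it feeds a clk_core input: flagged, not silently missed.
print(check_top_level([("uart", "tx_done", "cpu", "irq_in")], blocks))
```

The key property is that the block-level sign-off result travels with the block, so the chip-level run only has to reconcile boundary assumptions rather than re-verify every internal path.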

As the verification environment does not have complete information about the clocking structure before final SoC assembly, reporting will tend to err on the side of caution, flagging up potential issues that may not be true errors. Traditionally, designers would provide waivers for flops on incoming paths that they believe not to be problematic to avoid them causing repeated errors in later verification runs as the module changes. However, this is a risky strategy as it relies on assumptions about the overall SoC clocking structure that may not be borne out in reality.

Refinements to the model

The waiver model needs to be refined to fit a smart hierarchical CDC verification strategy. Rather than apply waivers, designers with a clear understanding of the internal structure of their blocks can mark flops and related logic to reflect their expectations. Paths that they believe not to be an issue and therefore not require a synchronizer can be marked as such and treated as low priority, focusing attention on those paths that are more likely to reveal serious errors as the SoC design is assembled and verified.

However, unlike paths marked with waivers, these paths are still in the CDC verification environment database. Not only that, they have been categorized by the design engineer to reflect their assumptions. If the tool finds a discrepancy between that assumption and the actual signals feeding into that path, errors will be generated instead of being ignored. This database-driven approach provides a smart infrastructure for CDC verification and establishes a basis for smarter reporting as the project progresses.
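The difference between a waiver and a stored intent marking can be sketched as follows. The path names, categories, and schema are invented for illustration, not any tool's actual database format:

```python
# Sketch of intent-based path classification replacing waivers.
# A waiver silently hides a path from future runs; here the designer's
# expectation is stored with the path in the results database, and a
# later analysis that contradicts it raises an error instead of being
# suppressed. All names are illustrative.

path_intent = {
    "blk_a.data_out->blk_b.data_in": "quasi_static",  # no sync needed
    "blk_a.req->blk_b.ack": "synchronized",           # has a 2-flop sync
}

def reconcile(path, found):
    """Compare what structural analysis found against designer intent."""
    expected = path_intent.get(path, "unclassified")
    if expected == found:
        return f"OK: {path} matches designer intent ({expected})"
    return (f"ERROR: {path} marked '{expected}' but analysis found "
            f"'{found}': review the assumption, do not waive")

# The designer marked this path quasi-static, but analysis now sees it
# toggling across domains without a synchronizer: flagged as an error.
print(reconcile("blk_a.data_out->blk_b.data_in", "unsynchronized_toggle"))
```

A waiver would have suppressed this path forever; the stored intent instead turns a changed assumption into an actionable discrepancy.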

Smart reporting

Reporting can then be organized around the specification rather than presented as a long list of uncategorized errors that may or may not be false positives. This not only accelerates the review process but allows the work to be distributed among engineers. As the specification is created and paths are marked and categorized, engineers establish what they expect to see in the CDC results, providing the basis for smart reporting from the verification tools.

When structural analysis finds a problematic path that was previously thought to be unaffected by CDC issues, the engineer can zoom in on the problem and deploy formal technologies to establish the root cause and potential solutions. Once fixed, the check can be repeated to ensure that the fix has worked.

The specification-led approach also allows additional attention to be paid to blocks that are likely to lead to verification complications, such as those that employ reconvergent logic. Whereas structural analysis will identify most problems on normal logic, these areas may need closer analysis using formal technology. Because the database-driven methodology allows these sections to be marked clearly, the right verification technology can be deployed at the right time.


By moving away from waivers and black-box models, the database-driven hierarchical CDC methodology encourages design groups to take SoC-oriented clocking issues into account earlier in the design cycle. Concerns about interfaces to modules designed by groups located elsewhere, or even by different companies, are carried forward to the critical SoC-level analysis without the overhead of repeatedly re-verifying each port on the module. Through earlier CDC analysis and verification, the team reduces the risk of encountering a large number of schedule-killing violations immediately prior to tapeout, and can be far more confident that design deadlines will be met.

This article was originally published on TechDesignForums and is reproduced here by permission.

Apr 17, 2015 | Comments

High-Level Synthesis: New Driver for RTL Verification

Graham Bell
   Vice President of Marketing at Real Intent

In a recent blog, Does Your Synthesis Code Play Well With Others?, I explored some of the requirements for verifying the quality of the RTL code generated by high-level synthesis (HLS) tools. At a minimum, a state-of-the-art lint tool should be used to ensure that there are no issues with the generated code. Results can be achieved in minutes, if not seconds, for generated blocks.

What else can be done to ensure the quality of the generated RTL code? For functional verification, an autoformal tool, such as Real Intent’s Ascent IIV product, can be used to ensure that basic operation is correct. The IIV tool will automatically generate sequences and detect whether incorrect or undesirable behavior can occur. Here is a quick list of what IIV can catch in the generated code:

  • FSM deadlocks and unreachable states
  • Bus contention and floating busses
  • Full- and Parallel-case pragma violations
  • Array bounds
  • Constant RTL expressions, nets & state vector bits
  • Dead code
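The first item in the list can be sketched as a simple graph-reachability problem. This is a toy illustration of the idea, not how Ascent IIV is implemented, and the FSM is invented:

```python
# Minimal sketch of two FSM checks an autoformal tool automates:
# states unreachable from reset, and states with no outgoing
# transition (deadlocks). Toy example; real tools extract the state
# graph from RTL and also handle input-dependent transitions.

def analyze_fsm(transitions, reset_state):
    """transitions: dict mapping each state to its set of next states."""
    # Reachability: breadth-first search from the reset state.
    reachable, frontier = {reset_state}, [reset_state]
    while frontier:
        for nxt in transitions.get(frontier.pop(), ()):
            if nxt not in reachable:
                reachable.add(nxt)
                frontier.append(nxt)
    unreachable = set(transitions) - reachable
    deadlocks = {s for s in reachable if not transitions.get(s)}
    return unreachable, deadlocks

fsm = {
    "IDLE": {"RUN"},
    "RUN":  {"DONE"},
    "DONE": set(),        # no way out: a deadlock once entered
    "DEBUG": {"IDLE"},    # nothing transitions here: unreachable
}
print(analyze_fsm(fsm, "IDLE"))
```

The value of doing this automatically is that no testbench is needed: the tool derives the state graph and reports both defect classes directly from the code.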

Designers are also concerned about the resettability of their designs and whether they power up into a known good state. We have seen some interesting results when Real Intent’s Ascent XV tool is applied to RTL blocks generated by HLS. Besides analyzing X-optimism and X-pessimism, the Ascent XV tool can determine the minimum number of flops that need to have reset lines routed to them. To save routing resources and reduce power requirements, a minimal set of flops should be used; running additional reset lines does not improve the design.

Here are the results for a block that was 130K gates in size:

Number of Flops                   17,495
Ascent XV Analysis Time (sec)     20
Uninitialized Flops Found         646
Percent Initialized               96%
Redundant Flop Initializations    11,896
Reset Savings                     68%

In this example, the Ascent XV tool took 20 seconds to analyze all 17,495 flops and discover that 646 were uninitialized and that, of the roughly 16,800 other flops, most did not need reset signals routed to them. The savings were 68% compared to the unimproved design. We have seen similar savings on other blocks generated by HLS tools.
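The percentages follow directly from the counts in the table; a quick arithmetic check:

```python
# Checking the reported percentages against the raw flop counts
# from the 130K-gate example block.

total_flops = 17_495
uninitialized = 646
redundant_init = 11_896   # flops whose reset routing can be removed

percent_initialized = (total_flops - uninitialized) / total_flops
reset_savings = redundant_init / total_flops

print(f"Initialized: {percent_initialized:.0%}")   # ~96%
print(f"Reset savings: {reset_savings:.0%}")       # ~68%
```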

HLS is now an important part of the hardware design flow and improves the productivity of designers. With easy generation of RTL code, designers should expect to use quick static verification tools such as lint, autoformal, and reset analysis to confirm quality and correct operation. This will save valuable time when designs are given to simulation and gate-level synthesis tools later in the flow.

Apr 9, 2015 | Comments