Blog Archive
October 2015
10/30/2015: The Many Tentacled Monster Under My House (with Pictures)
10/23/2015: Is Silicon the New Fabric for Our Lives?
10/16/2015: DAC Verification Survey: What’s Hot and What’s Not
10/09/2015: On-the-Fly Hardware Accurate Simulation, New Meridian CDC, ASICON Tutorial
10/02/2015: Correcting Pessimism without Optimism – Part Two
September 2015
9/24/2015: Correcting Pessimism without Optimism – Part One
9/17/2015: Calypto Design Systems: A Changed Partner
August 2015
8/17/2015: A Verification Standard for Design Reliability
8/06/2015: New 3D XPoint Fast Memory a Big Deal for Big Data
July 2015
7/30/2015: Technology Errors Demand Netlist-level CDC Verification
7/23/2015: Video: SoC Requirements and “Big Data” are Driving CDC Verification
7/16/2015: 50th Anniversary of Moore’s Law: What If He Got it Wrong?
7/09/2015: The Interconnected Web of Work
7/06/2015: In Fond Memory of Gary Smith
7/01/2015: Richard Goering and Us: 30 Great Years
June 2015
6/12/2015: Quick 2015 DAC Recap and Racing Photo Album
6/05/2015: Advanced FPGA Sign-off Includes DO-254 and …Missing DAC?
May 2015
5/28/2015: #2 on GarySmithEDA What to See @ DAC List – Why?
5/14/2015: SoC Verification: There is a Stampede!
5/07/2015: Drilling Down on the Internet-of-Things (IoT)
April 2015
4/30/2015: Reflections on Accellera UCIS: Design by Architect and Committee
4/23/2015: DO-254 Without Tears
4/17/2015: Analysis of Clock Intent Requires Smarter SoC Verification
4/09/2015: High-Level Synthesis: New Driver for RTL Verification
4/03/2015: Underdog Innovation: David and Goliath in Electronics
March 2015
3/27/2015: Taking Control of Constraints Verification
3/20/2015: Billion Dollar Unicorns
3/13/2015: My Impressions of DVCon USA 2015: Lies; Experts; Art or Science?
3/06/2015: Smarter Verification: Shift Mindset to Shift Left [Video]
February 2015
2/27/2015: New Ascent Lint, Cricket Video Interview and DVCon Roses
2/20/2015: Happy Lunar New Year: Year of the Ram (or is it Goat or Sheep?)
2/12/2015: Video: Clock-Domain Crossing Verification: Introduction; SoC challenges; and Keys to Success
2/06/2015: A Personal History of Transaction Interfaces to Hardware Emulation: Part 2
January 2015
1/30/2015: A Personal History of Transaction Interfaces to Hardware Emulation: Part 1
1/22/2015: Intel’s new SoC-based Broadwell CPUs: Less Filling, Taste Great!
1/19/2015: Reporting Happiness: Not as Easy as You Think
1/09/2015: 38th VLSI Design Conf. Keynote: Nilekani on IoT and Smartphones
December 2014
12/22/2014: December 2014 Holiday Party
12/17/2014: Happy Holidays from Real Intent!
12/12/2014: Best of “Real Talk”, Q4 Summary and Latest Videos
12/04/2014: P2415 – New IEEE Power Standard for Unified Hardware Abstraction
November 2014
11/27/2014: The Evolution of RTL Lint
11/20/2014: Parallelism in EDA Software – Blessing or a Curse?
11/13/2014: How Big is WWD – the Wide World of Design?
11/06/2014: CMOS Pioneer Remembered: John Haslet Hall
October 2014
10/31/2014: Is Platform-on-Chip The Next Frontier For IC Integration?
10/23/2014: DVClub Shanghai: Making Verification Debug More Efficient
10/16/2014: ARM TechCon Video: Beer, New Meridian CDC, and Arnold Schwarzenegger ?!
10/10/2014: New CDC Verification: Less Filling, Picture Perfect, and Tastes Great!
10/03/2014: ARM Fueling the SoC Revolution and Changing Verification Sign-off
September 2014
9/25/2014: Does Your Synthesis Code Play Well With Others?
9/19/2014: It’s Time to Embrace Objective-driven Verification
9/12/2014: Autoformal: The Automatic Vacuum for Your RTL Code
9/04/2014: How Bad is Your HDL Code? Be the First to Find out!
August 2014
8/29/2014: Fundamentals of Clock Domain Crossing: Conclusion
8/21/2014: Video Keynote: New Methodologies Drive EDA Revenue Growth
8/15/2014: SoCcer: Defending your Digital Design
8/08/2014: Executive Insight: On the Convergence of Design and Verification
July 2014
7/31/2014: Fundamentals of Clock Domain Crossing Verification: Part Four
7/24/2014: Fundamentals of Clock Domain Crossing Verification: Part Three
7/17/2014: Fundamentals of Clock Domain Crossing Verification: Part Two
7/10/2014: Fundamentals of Clock Domain Crossing Verification: Part One
7/03/2014: Static Verification Leads to New Age of SoC Design
June 2014
6/26/2014: Reset Optimization Pays Big Dividends Before Simulation
6/20/2014: SoC CDC Verification Needs a Smarter Hierarchical Approach
6/12/2014: Photo Booth Blackmail at DAC in San Francisco!
6/06/2014: Quick Reprise of DAC 2014
May 2014
5/01/2014: Determining Test Quality through Dynamic Runtime Monitoring of SystemVerilog Assertions
April 2014
4/24/2014: Complexity Drives Smart Reporting in RTL Verification
4/17/2014: Video Update: New Ascent XV Release for X-optimization, ChipEx show in Israel, DAC Preview
4/11/2014: Design Verification is Shifting Left: Earlier, Focused and Faster
4/03/2014: Redefining Chip Complexity in the SoC Era
March 2014
3/27/2014: X-Verification: A Critical Analysis for a Low-Power World (Video)
3/14/2014: Engineers Have Spoken: Design And Verification Survey Results
3/06/2014: New Ascent IIV Release Delivers Enhanced Automatic Verification of FSMs
February 2014
2/28/2014: DVCon Panel Drill Down: “Where Does Design End and Verification Begin?” – Part 3
2/20/2014: DVCon Panel Drill Down: “Where Does Design End and Verification Begin?” – Part 2
2/13/2014: DVCon Panel Drill Down: “Where Does Design End and Verification Begin?” – Part 1
2/07/2014: Video Tech Talk: Changes In Verification
January 2014
1/31/2014: Progressive Static Verification Leads to Earlier and Faster Timing Sign-off
1/30/2014: Verific’s Front-end Technology Leads to Success and a Giraffe!
1/23/2014: CDC Verification of Fast-to-Slow Clocks – Part Three: Metastability Aware Simulation
1/16/2014: CDC Verification of Fast-to-Slow Clocks – Part Two: Formal Checks
1/10/2014: CDC Verification of Fast-to-Slow Clocks – Part One: Structural Checks
1/02/2014: 2013 Highlights And Giga-scale Predictions For 2014
December 2013
12/13/2013: Q4 News, Year End Summary and New Videos
12/12/2013: Semi Design Technology & System Drivers Roadmap: Part 6 – DFM
12/06/2013: The Future is More than “More than Moore”
November 2013
11/27/2013: Robert Eichner’s presentation at the Verification Futures Conference
11/21/2013: The Race For Better Verification
11/18/2013: Experts at the Table: The Future of Verification – Part 2
11/14/2013: Experts At The Table: The Future Of Verification Part 1
11/08/2013: Video: Orange Roses, New Product Releases and Banner Business at ARM TechCon
October 2013
10/31/2013: Minimizing X-issues in Both Design and Verification
10/23/2013: Value of a Design Tool Needs More Sense Than Dollars
10/17/2013: Graham Bell at EDA Back to the Future
10/15/2013: The Secret Sauce for CDC Verification
10/01/2013: Clean SoC Initialization now Optimal and Verified with Ascent XV
September 2013
9/24/2013: EETimes: An Engineer’s Progress With Prakash Narain, Part 4
9/20/2013: EETimes: An Engineer’s Progress With Prakash Narain, Part 3
9/20/2013: CEO Viewpoint: Prakash Narain on Moving from RTL to SoC Sign-off
9/17/2013: Video: Ascent Lint – The Best Just Got Better
9/16/2013: EETimes: An Engineer’s Progress With Prakash Narain, Part 2
9/16/2013: EETimes: An Engineer’s Progress With Prakash Narain
9/10/2013: SoC Sign-off Needs Analysis and Optimization of Design Initialization in the Presence of Xs
August 2013
8/15/2013: Semiconductor Design Technology and System Drivers Roadmap: Process and Status – Part 4
8/08/2013: Semiconductor Design Technology and System Drivers Roadmap: Process and Status – Part 3
July 2013
7/25/2013: Semiconductor Design Technology and System Drivers Roadmap: Process and Status – Part 2
7/18/2013: Semiconductor Design Technology and System Drivers Roadmap: Process and Status – Part 1
7/16/2013: Executive Video Briefing: Prakash Narain on RTL and SoC Sign-off
7/05/2013: Lending a ‘Formal’ Hand to CDC Verification: A Case Study of Non-Intuitive Failure Signatures — Part 3
June 2013
6/27/2013: Bryon Moyer: Simpler CDC Exception Handling
6/21/2013: Lending a ‘Formal’ Hand to CDC Verification: A Case Study of Non-Intuitive Failure Signatures — Part 2
6/17/2013: Peggy Aycinena’s interview with Prakash Narain
6/14/2013: Lending a ‘Formal’ Hand to CDC Verification: A Case Study of Non-Intuitive Failure Signatures — Part 1
6/10/2013: Photo Booth Blackmail!
6/03/2013: Real Intent is on John Cooley’s “DAC’13 Cheesy List”
May 2013
5/30/2013: Does SoC Sign-off Mean More Than RTL?
5/24/2013: Ascent Lint Rule of the Month: DEFPARAM
5/23/2013: Video: Gary Smith Tells Us Who and What to See at DAC 2013
5/22/2013: Real Intent is on Gary Smith’s “What to see at DAC” List!
5/16/2013: Your Real Intent Invitation to Fun and Fast Verification at DAC
5/09/2013: DeepChip: “Real Intent’s not-so-secret DVcon’13 Report”
5/07/2013: TechDesignForum: Better analysis helps improve design quality
5/03/2013: Unknown Sign-off and Reset Analysis
April 2013
4/25/2013: Hear Alexander Graham Bell Speak from the 1880′s
4/19/2013: Ascent Lint rule of the month: NULL_RANGE
4/16/2013: May 2 Webinar: Automatic RTL Verification with Ascent IIV: Find Bugs Simulation Can Miss
4/05/2013: Conclusion: Clock and Reset Ubiquity – A CDC Perspective
March 2013
3/22/2013: Part Six: Clock and Reset Ubiquity – A CDC Perspective
3/21/2013: The BIG Change in SoC Verification You Don’t Know About
3/15/2013: Ascent Lint Rule of the Month: COMBO_NBA
3/15/2013: System-Level Design Experts At The Table: Verification Strategies – Part One
3/08/2013: Part Five: Clock and Reset Ubiquity – A CDC Perspective
3/01/2013: Quick DVCon Recap: Exhibit, Panel, Tutorial and Wally’s Keynote
3/01/2013: System-Level Design: Is This The Era Of Automatic Formal Checks For Verification?
February 2013
2/26/2013: Press Release: Real Intent Technologist Presents Power-related Paper and Tutorial at ISQED 2013 Symposium
2/25/2013: At DVCon: Pre-Simulation Verification for RTL Sign-Off includes Automating Power Optimization and DFT
2/25/2013: Press Release: Real Intent to Exhibit, Participate in Panel and Present Tutorial at DVCon 2013
2/22/2013: Part Four: Clock and Reset Ubiquity – A CDC Perspective
2/18/2013: Does Extreme Performance Mean Hard-to-Use?
2/15/2013: Part Three: Clock and Reset Ubiquity – A CDC Perspective
2/07/2013: Ascent Lint Rule of the Month: ARITH_CONTEXT
2/01/2013: “Where Does Design End and Verification Begin?” and DVCon Tutorial on Static Verification
January 2013
1/25/2013: Part Two: Clock and Reset Ubiquity – A CDC Perspective
1/18/2013: Part One: Clock and Reset Ubiquity – A CDC Perspective
1/07/2013: Ascent Lint Rule of the Month: MIN_ID_LEN
1/04/2013: Predictions for 2014, Hier. vs Flat, Clocks and Bugs
December 2012
12/14/2012: Real Intent Reports on DVClub Event at Microprocessor Test and Verification Workshop 2012
12/11/2012: Press Release: Real Intent Records Banner Year
12/07/2012: Press Release: Real Intent Rolls Out New Version of Ascent Lint for Early Functional Verification
12/04/2012: Ascent Lint Rule of the Month: OPEN_INPUT
November 2012
11/19/2012: Real Intent Has Excellent EDSFair 2012 Exhibition
11/16/2012: Peggy Aycinena: New Look, New Location, New Year
11/14/2012: Press Release: New Look and New Headquarters for Real Intent
11/05/2012: Ascent Lint HDL Rule of the Month: ZERO_REP
11/02/2012: Have you had CDC bugs slip through resulting in late ECOs or chip respins?
11/01/2012: DAC survey on CDC bugs, X propagation, constraints
October 2012
10/29/2012: Press Release: Real Intent to Exhibit at ARM TechCon 2012 – Chip Design Day
September 2012
9/24/2012: Photos of the space shuttle Endeavour from the Real Intent office
9/20/2012: Press Release: Real Intent Showcases Verification Solutions at Verify 2012 Japan
9/14/2012: A Bolt of Inspiration
9/11/2012: ARM blog: An Advanced Timing Sign-off Methodology for the SoC Design Ecosystem
9/05/2012: When to Retool the Front-End Design Flow?
August 2012
8/27/2012: X-Verification: What Happens When Unknowns Propagate Through Your Design
8/24/2012: Article: Verification challenges require surgical precision
8/21/2012: How To Article: Verifying complex clock and reset regimes in modern chips
8/20/2012: Press Release: Real Intent Supports Growth Worldwide by Partnering With EuropeLaunch
8/06/2012: SemiWiki: The Unknown in Your Design Can be Dangerous
8/03/2012: Video: “Issues and Struggles in SOC Design Verification”, Dr. Roger Hughes
July 2012
7/30/2012: Video: What is Driving Lint Usage in Complex SOCs?
7/25/2012: Press Release: Real Intent Adds to Japan Presence: Expands Office, Increases Staff to Meet Demand for Design Verification and Sign-Off Products
7/23/2012: How is Verification Complexity Changing, and What is the Impact on Sign-off?
7/20/2012: Real Intent in Brazil
7/16/2012: Foosball, Frosty Beverages and Accelerating Verification Sign-off
7/03/2012: A Good Design Tool Needs a Great Beginning
June 2012
6/14/2012: Real Intent at DAC 2012
6/01/2012: DeepChip: Cheesy List for DAC 2012
May 2012
5/31/2012: EDACafe: Your Real Intent Invitation to Fast Verification and Fun at DAC
5/30/2012: Real Intent Video: New Ascent Lint and Meridian CDC Releases and Fun at DAC 2012
5/29/2012: Press Release: Real Intent Leads in Speed, Capacity and Precision with New Releases of Ascent Lint and Meridian CDC Verification Tools
5/22/2012: Press Release: Over 35% Revenue Growth in First Half of 2012
5/21/2012: Thoughts on RTL Lint, and a Poem
5/21/2012: Real Intent is #8 on Gary Smith’s “What to see at DAC” List!
5/18/2012: EETimes: Gearing Up for DAC – Verification demos
5/08/2012: Gabe on EDA: Real Intent Helps Designers Verify Intent
5/07/2012: EDACafe: A Page is Turned
5/07/2012: Press Release: Graham Bell Joins Real Intent to Promote Early Functional Verification & Advanced Sign-Off Circuit Design Software
March 2012
3/21/2012: Press Release: Real Intent Demos EDA Solutions for Early Functional Verification & Advanced Sign-off at Synopsys Users Group (SNUG)
3/20/2012: Article: Blindsided by a glitch
3/16/2012: Gabe on EDA: Real Intent and the X Factor
3/10/2012: DVCon Video Interview: “Product Update and New High-capacity ‘X’ Verification Solution”
3/01/2012: Article: X-Propagation Woes: Masking Bugs at RTL and Unnecessary Debug at the Netlist
February 2012
2/28/2012: Press Release: Real Intent Joins Cadence Connections Program; Real Intent’s Advanced Sign-Off Verification Capabilities Added to Leading EDA Flow
2/15/2012: Real Intent Improves Lint Coverage and Usability
2/15/2012: Avoiding the Titanic-Sized Iceberg of Downton Abbey
2/08/2012: Gabe on EDA: Real Intent Meridian CDC
2/08/2012: Press Release: At DVCon, Real Intent Verification Experts Present on Resolving X-Propagation Bugs; Demos Focus on CDC and RTL Debugging Innovations
January 2012
1/24/2012: A Meaningful Present for the New Year
1/11/2012: Press Release: Real Intent Solidifies Leadership in Clock Domain Crossing
August 2011
8/02/2011: A Quick History of Clock Domain Crossing (CDC) Verification
July 2011
7/26/2011: Hardware-Assisted Verification and the Animal Kingdom
7/13/2011: Advanced Sign-off…It’s Trending!
May 2011
5/24/2011: Learn about Advanced Sign-off Verification at DAC 2011
5/16/2011: Getting A Jump On DAC
5/09/2011: Livin’ on a Prayer
5/02/2011: The Journey to CDC Sign-Off
April 2011
4/25/2011: Getting You Closer to Verification Closure
4/11/2011: X-verification: Conquering the “Unknown”
4/05/2011: Learn About the Latest Advances in Verification Sign-off!
March 2011
3/21/2011: Business Not as Usual
3/15/2011: The Evolution of Sign-off
3/07/2011: Real People, Real Discussion – Real Intent at DVCon
February 2011
2/28/2011: The Ascent of Ascent Lint (v1.4 is here!)
2/21/2011: Foundation for Success
2/08/2011: Fairs to Remember
January 2011
1/31/2011: EDA Innovation
1/24/2011: Top 3 Reasons Why Designers Switch to Meridian CDC from Real Intent
1/17/2011: Hot Topics, Hot Food, and Hot Prize
1/10/2011: Satisfaction EDA Style!
1/03/2011: The King is Dead. Long Live the King!
December 2010
12/20/2010: Hardware Emulation for Lowering Production Testing Costs
12/03/2010: What do you need to know for effective CDC Analysis?
November 2010
11/12/2010: The SoC Verification Gap
11/05/2010: Building Relationships Between EDA and Semiconductor Ventures
October 2010
10/29/2010: Thoughts on Assertion Based Verification (ABV)
10/25/2010: Who is the master who is the slave?
10/08/2010: Economics of Verification
10/01/2010: Hardware-Assisted Verification Tackles Verification Bottleneck
September 2010
9/24/2010: Excitement in Electronics
9/17/2010: Achieving Six Sigma Quality for IC Design
9/03/2010: A Look at Transaction-Based Modeling
August 2010
8/20/2010: The 10 Year Retooling Cycle
July 2010
7/30/2010: Hardware-Assisted Verification Usage Survey of DAC Attendees
7/23/2010: Leadership with Authenticity
7/16/2010: Clock Domain Verification Challenges: How Real Intent is Solving Them
7/09/2010: Building Strong Foundations
7/02/2010: Celebrating Freedom from Verification
June 2010
6/25/2010: My DAC Journey: Past, Present and Future
6/18/2010: Verifying Today’s Large Chips
6/11/2010: You Got Questions, We Got Answers
6/04/2010: Will 70 Remain the Verification Number?
May 2010
5/28/2010: A Model for Justifying More EDA Tools
5/21/2010: Mind the Verification Gap
5/14/2010: ChipEx 2010: a Hot Show under the Hot Sun
5/07/2010: We Sell Canaries
April 2010
4/30/2010: Celebrating 10 Years of Emulation Leadership
4/23/2010: Imagining Verification Success
4/16/2010: Do you have the next generation verification flow?
4/09/2010: A Bug’s Eye View under the Rug of SNUG
4/02/2010: Globetrotting 2010
March 2010
3/26/2010: Is Your CDC Tool of Sign-Off Quality?
3/19/2010: DATE 2010 – There Was a Chill in the Air
3/12/2010: Drowning in a Sea of Information
3/05/2010: DVCon 2010: Awesomely on Target for Verification
February 2010
2/26/2010: Verifying CDC Issues in the Presence of Clocks with Dynamically Changing Frequencies
2/19/2010: Fostering Innovation
2/12/2010: CDC (Clock Domain Crossing) Analysis – Is this a misnomer?
2/05/2010: EDSFair – A Successful Show to Start 2010
January 2010
1/29/2010: Ascent Is Much More Than a Bug Hunter
1/22/2010: Ascent Lint Steps up to Next Generation Challenges
1/15/2010: Google and Real Intent, 1st Degree LinkedIn
1/08/2010: Verification Challenges Require Surgical Precision
1/07/2010: Introducing Real Talk!

The Many Tentacled Monster Under My House (with Pictures)

Jay Littlefield, Director, Product Strategy & Business Development


Many years ago, my wife and I bought our first home. At the time, a coworker said to me, “Congratulations! You now have a home project to do every weekend for the rest of your life!” How right he was, though his prediction only held for our first six years, after which we sold that house and moved into an apartment in San Francisco.

Fast forward to this past summer: our family decided to move back to the South Bay. We found a nice house near a good elementary school within our budget, and moved in over the Independence Day weekend. Part of the move involved pulling my old woodworking tools out of storage. (Back when we first moved to the City, I’d tried without success to convince my wife that a table saw sitting in the middle of the living room of our small apartment really wouldn’t be an inconvenience.) For me, setting up the tools once more was like seeing long-absent friends. As they took their new places in our garage, my thoughts turned to how to use them to make “improvements” in our new home.

After assessing the state of the house, I eventually settled on wiring the home for networking. As our family’s “Geek-In-Residence”, I’ve long preferred the additional speed and security of cabled networking over wireless. We were also nowhere close to maximizing the speed of our high-bandwidth internet connection, and our wireless network slowed to a crawl whenever our kids streamed movies to the TV. I’d run network cable in our old house with good results, but figured this time around I’d recruit some additional help. I floated the idea of running network cable by my two kids, ages 8 and 6. Both seemed mildly indifferent until I explained that the project involved crawling around under the house. Their enthusiasm meter immediately spiked. My wife’s response was more tempered, involving a well-practiced eye roll and head shake, but she signed off on the project with the utterance, “Whatever!” (As good as a signed contract in our house!) I ordered the supplies and made a wiring plan.

There are two keys to any successful home improvement project: good planning and a high spousal approval rating of the finished project. I wanted to run cabling to every room of the house and to hide the networking equipment in closets, since I knew my wife would prefer it remain unseen. As this plan would require a lot of drilling between walls and possibly under the house, I took the opportunity to upgrade my tool set with a cordless drill. I learned about selecting tools from my father. He was a machinist, and he had a simple philosophy regarding tool purchases: “Buy tools of sufficient quality that your offspring will inherit them from you.” Aside from the initial out-of-pocket expense, I’ve never been disappointed living by this rule. In that spirit, I took one of the potential future recipients with me to pick it out, along with a drill bit suitable to our purpose.

Future tool owner with inheritance.

We started by cutting holes in the walls where the eventual Ethernet wall jacks would go. Once a hole was cleared, I used the new drill and extra-long bit to make a 1” hole into the floor studs to run the wiring up from the crawl space. The tight in-wall access meant drilling the holes at a slight angle. This shouldn’t have been a problem, except for the one wall bordering a room with a 6-inch drop in floor height. Fortunately, some slight re-planning of the network jack locations and a rapid approval sign-off by management (my wife) resolved the issue, with a minimum of incidental profane-language training for my kids provided by me.

Oops! A vocabulary enhancement moment. The Fix: Making lemonade out of lemons.

Next came the cable runs under the house. The plan was to put the network patch panel in my home office closet and run all connections from there. My wife was already a pro at fishing cables through walls from various projects in our old house. This one started like any other we have done together: I recommended a technique for getting the cable easily down through the height of the wall, and she discovered a substantially better method within minutes of attempting the first run. I left her to manage the above-ground work while I prepared for the crawl-space cable runs. My son and daughter traded off “assisting” above and below the floor with Mom and Dad throughout the day.

In all honesty, I had expected my kids to have mixed interest under the house. While the concept was cool, the reality might be a bit scary for kids their age. Perhaps they would pass Dad some tools while he worked, or maybe feed a cable through a hole. Was I wrong! Not only did they like being under the house, they were actually much more effective than their old man at running cable. Because they were physically so much smaller, they could easily crawl on their hands and knees where I could only slide along on my stomach. That meant they could run a cable to the far end of the house in about a minute, where I might take two or three due to the tight space.

After a few practice runs, they were ready to go solo. I would hand them a freshly fished cable from the crew upstairs, and they would grab the end and say, “Where to, Dad?” They ran the lines to the pre-drilled holes while I kept the cable feeding down into the crawl space from snagging. By the end of the day they each had their own roll of electrical tape for tying the end of a cable to a converted coat hanger that would feed it back up through the wall to where the Ethernet jacks would be. We made such good progress that I spent much of my time using cable nails to clean up the runs while the kids ran other lines. They thoroughly enjoyed the experience. And let’s face it: how many elementary school students can claim they ran the networking cable that they use in their own rooms? They have serious playground cred now!

Would you want this crowd crawling under your house?

With our cables run, there was still the business of wiring up the jacks and testing the connections. A tip for parents who may consider this project on their own: small children find both punch tools and network testers to be fascinating equipment. This is advantageous in that they are more than willing to run from room to room, crawl under furniture and attach test fixtures to hard-to-reach Ethernet jacks while you sit “working” at the main patch panel with a network signal generator and an ice-cold beverage. The biggest issue you will face is ensuring everyone gets an equal number of jacks to test. (Hint: when planning your home network, make sure the total number of connections divides evenly by the number of children you have. You’ll be glad you did!) In all, fewer than three of the 20 connections run throughout the house needed rewiring.

Punch tools are fun! Blinking lights are cool!


So was the whole thing worth it? As far as my own goals for the project, I’d say yes. We’re now getting nearly the top advertised speed from our internet connection across all wired devices, which means my kids can watch Netflix shows while I am on a company internet meeting without interference. Our wireless connections rarely seem crowded now. And with the 8 wired network connections I installed in my home office, I have ports for all of my current devices, with room for more! But honestly, getting to do this project with my wife and kids was the real reward. I discovered, once again, that I married someone not only intelligent and beautiful, but also capable of fishing cable through walls like nobody’s business. And the excitement my kids felt in knowing they were working on a project with Mom and Dad was simply too much to express in words. I am looking forward to our next home improvement adventure.

The Result showing the Monoprice 24-port gigabit switch connected to the TrendNet 24-port CAT6 patch panel. You can see the cable ‘tentacles’ coming through the wall at the top and bottom of the patch panel.

Jay is a Product Strategist at Real Intent with over 20 years of design and EDA experience. His primary focus for the past decade has been support and marketing of static verification products. He has an MSEE from Stanford and an MBA from San Jose State.

Oct 30, 2015 | Comments

Is Silicon the New Fabric for Our Lives?

Prakash Narain, Ph.D.
President and CEO, Real Intent, Inc.

The following CEO Insight was published in the October 2015 issue of SiliconIndia.

This year we are celebrating the 50th anniversary of Moore’s Law. On April 19, 1965, Electronics magazine published an article that profoundly impacted the world. It was authored by Gordon Moore, then an R&D director at Fairchild Semiconductor, who forecast that transistors would decrease in cost and increase in performance at an exponential rate. The article predicted the availability of personal computers and mobile communications. Moore’s seminal observation became known as ‘Moore’s Law’, a prediction that established the path the semiconductor industry would take for the next 50 years or more and, in doing so, dramatically changed our lives. Three years later, Gordon Moore co-founded Intel, today the number-one semiconductor company in the world.

According to the analytics firm IHS, the pace of Moore’s Law has added $3 trillion in value to global Gross Domestic Product (GDP) over the last 20 years. We have seen advances across a wide range of business sectors including transportation, energy, life sciences, environment, communications, entertainment, finance, and manufacturing.

In the communications sector there are 6.8 billion mobile phone subscriptions worldwide, or about one per person. More than half of these, 3.6 billion, are so-called smartphones that enable a social media community of 2 billion users.

The power of Moore’s Law has enabled the creation of semiconductor Systems-on-Chip (SoCs) that provide the rich feature sets, brilliant graphics and wireless connectivity we have learned to take for granted in our smartphones. We tend to overlook the fact that today’s SoCs include hundreds of millions of digital logic gates.

The creators of these SoCs must use computer automation and off-the-shelf design components to assemble working designs in a reasonable amount of time; the complexity of today’s digital chips exceeds anyone’s capability to design and verify them manually.

We, of course, expect our consumer electronics products to work reliably and exhibit long battery life. For automotive, medical, aeronautical, and military applications, requirements for reliable operation are far stricter. Careful automated analysis is needed to confirm the correct behavior of designs.

And what is the cost of failure? Currently, the design and fabrication cost for a new SoC ranges from $35 million to $50 million.

Given the huge complexity and feature requirements of SoCs, design verification must start as early in the design process as possible. The digital design community checks basic functionality at the architectural level and then implements behavioral designs using hardware description languages (HDLs) such as SystemVerilog and VHDL. The HDL stage describes how signals move between the constituent digital components; at this stage designers can begin to code the tests that will confirm correct interoperation of the blocks.
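
To make that stage concrete, here is a toy sketch in SystemVerilog of a behavioral block and a self-checking test for it (the design and names are illustrative only, not from any production flow):

    // A behavioral description of a tiny block...
    module adder #(parameter W = 8) (
      input  logic [W-1:0] a, b,
      output logic [W-1:0] sum
    );
      assign sum = a + b;
    endmodule

    // ...and a simple directed test confirming its behavior.
    module adder_test;
      logic [7:0] a, b, sum;
      adder dut (.a(a), .b(b), .sum(sum));
      initial begin
        a = 8'd3; b = 8'd4;
        #1;
        if (sum !== 8'd7) $display("FAIL: sum = %0d", sum);
        else              $display("PASS: 3 + 4 = %0d", sum);
      end
    endmodule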

With the relentless march of Moore’s Law, the number of engineers required to verify the behavior of a design now exceeds the number needed to develop the design itself. According to Wally Rhines, Chairman and CEO of Mentor Graphics Corp., if this trend continues, the entire population of India will need to become verification engineers.¹

How Can We Contain this Verification Explosion?
Design teams now rely on new verification software technologies to manage this trend. The key to the new software approach is laser-like focus on specific verification problems, along with the use of so-called static analysis methods.

By constraining the verification challenge to a series of specific problem areas, such as signals crossing block boundaries, only the information necessary for each problem must be retained and processed, which keeps the problem size manageable. Static analysis methods fit this narrow focus very well: analyzing the intent of the design as it relates to each problem brings dramatic speed-ups, with validation results delivered in minutes rather than the days required by traditional simulation. And because the effort parallelizes across multiple problem domains, a whole suite of applications can be run concurrently.

After the HDL stage, designs go through what is called digital synthesis – one step closer to physical realization with actual logic gates. Other new verification technologies such as hardware emulation further confirm the correct behavior of an SoC before it is fabricated.

Worldwide demand for electronic devices continues to escalate. Recently semiconductor market research firm IC Insights raised its projection for wearable electronics revenues in 2015 to show much stronger growth in wearable systems after the launch of Apple’s first smartwatches in April 2015. The long-term fate of smartwatches continues to be debated. Whether these wearable systems evolve into a major end-use market category or simply become a niche with a short lifecycle remains to be seen. In the short-term, however, the launch of the Apple Watch — jam-packed with ICs, sensors, and other components — has provided a major boost to semiconductor unit shipments and sales in the wearable Internet-of-Things (IoT) category. IC Insights accordingly has estimated that the wearable IoT category will grow from $1 billion in 2014 to more than $5 billion in 2015.

With the addition of more interconnected digital electronics to the fabric of our lives, we will need to keep developing smarter methods to verify that everything works correctly. The way forward is clear: verify early, be targeted in your analysis, and parallelize your efforts. Moore’s Law continues to deliver exciting new opportunities in design, manufacturing and services. Let’s all keep enjoying the new vistas opening up to us.

¹ From his DVCon 2013 keynote presentation, Accelerating EDA Innovation through SoC Design Methodology Convergence, February 26, 2013.

Oct 23, 2015 | Comments

DAC Verification Survey: What’s Hot and What’s Not

Graham Bell
Vice President of Marketing at Real Intent

At the Design Automation Conference in San Francisco, Real Intent surveyed 201 visitors to our booth, focusing on RTL and gate-level verification issues. Below is a brief introduction; you can read the entire survey on our website.

DAC’15 “When is your next design start?”

0-3 months : ########################################### (52%)
3-6 months : ###################### (26%)
6-12 months: ################## (22%)

These numbers are very similar to what was reported in 2012 on DeepChip. With half of future design starts occurring in the next three months, design activity appears to be holding strong despite any EDA user consolidation from the big mergers of various chip companies and the slowing of the Chinese economy. However, Gartner’s latest IC forecast has 2015 growth falling from the 5.4% projected at the beginning of the year to 2.2% as of July.

DAC’15 “How many clock domains do you expect it will have?”

under 50 : ################################################ (58%)
  50-100   : ##################### (25%)
 100-500  : ######### (11%)
over 500 : ##### (6%)

We asked this to find out how many asynchronous clocks need to be analyzed by a CDC verification tool. When we see designs with over 100 clock domains, they are typically mobile devices with aggressive low-power goals that use multiple clock schemes to hit their power target. Designs with under 50 clock domains are often block-level designs such as hard IP, or designs for commodity consumer electronic products. Since 2012 we have seen a 10% increase in the number of designs with more than 50 clock domains, so system complexity continues to grow. What is interesting is that Harry Foster’s 2014 Wilson Group study reports much smaller numbers. We think our numbers are higher because respondents may be counting a mix of synchronous and asynchronous clocks in their designs.

“Have you seen CDC bugs that resulted in late ECOs?”

Read the entire survey on our website.

Oct 16, 2015 | Comments

On-the-Fly Hardware Accurate Simulation, New Meridian CDC, ASICON Tutorial

Graham Bell
Vice President of Marketing at Real Intent

In this blog post, we present the highlights from Real Intent’s Fall 2015 Verification Newsletter: first, some thoughts from Prakash Narain, CEO; then an introduction to the new 2015 release of Meridian CDC for clock-domain and reset-domain crossing sign-off; and finally a review of our fall events, including an ASICON tutorial.

Thoughts From Prakash Narain, President and CEO…

Most functional verification for SoC and FPGA designs is done prior to RTL hand-off to digital synthesis, since gate-level simulations take longer to complete and are significantly harder to debug. However, gate-level simulations are still needed to verify some circuit behavior. Unfortunately, X’s can cause differences between the RTL and gate-level simulation outputs. X’s exist in virtually all designs; preventing them entirely is impractical. Simulation results may differ because of X’s that are hidden in the RTL simulation by X-optimism, or because additional X’s arise from X-pessimism in gate-level simulation. Pessimism can be fixed by overriding the simulator, because you know that real hardware would always resolve to a deterministic value. The challenge is confirming that an X value is a result of X-pessimism and not simply X-propagation, and then forcing it to the right value at the right point in time so the simulation matches that of real hardware.

Real Intent’s Ascent XV product corrects X-pessimism on the fly so that simulation is hardware accurate. Ascent XV reduces the time required to get gate-level simulations started by an order of magnitude, and it has proven superior to alternative approaches in performance, memory, and accuracy. Its ease of use and capacity make it the only practical solution for large SoC designs, just like our other Ascent and Meridian products.

New Meridian CDC Release with Next-Generation Features

In September we delivered the latest 2015 release of Meridian CDC for comprehensive clock-domain crossing (CDC) and reset-domain crossing (RDC) analysis. This new software release adds enhanced speed, analysis and debug support, boosting productivity for SoC and FPGA design teams. With a brand-new way to debug CDC violations, it lets you achieve giga-gate-capacity verification without sacrificing precision. We believe it is the industry’s fastest, highest-capacity and most precise CDC solution.

Some of the features of the latest Meridian CDC include:

  • 30% faster performance and improved capacity for Giga-gate SoCs
  • iDebug: a state-of-the-art design intent debugger and analysis manager
  • Interface-based approach for low-noise CDC analysis
  • Qualitative improvements for several CDC checks
  • Tcl-based command-line interface lets users query the design and create custom scripts for debug and reporting
  • New formal analysis engine with up to 10X faster speed and greater coverage to find CDC problems
  • New HTML documentation improves usability

For additional insights and comments, please watch a video interview here.

Come Visit Us at Upcoming Industry Events

During October and November we will exhibit our Ascent and Meridian solutions at major industry events in Japan, China, Europe and Israel. We were most recently at Design Solution Forum in Yokohama, Japan, on Friday, Oct. 2. Please join us at the International Conference on ASIC (ASICON 2015), Nov. 3-6 in Chengdu, China, where Ramesh Dewangan, our VP of Product Strategy, will present a 90-minute tutorial, “New Challenges and Techniques for Clock Domain Crossing and Reset Sign-off.” You can also see our advanced sign-off solutions at the DVCon EU technical conference in Munich on Nov. 11-12, in area F4, and at SemIsrael in Airport City on Nov. 17. At SemIsrael, Oren Katzir, our VP of Applications Engineering, will present a talk, “New RTL sign-off challenges: Reset metastability, X-safe design, and CDC data glitches.”

Oct 9, 2015 | Comments

Correcting Pessimism without Optimism – Part Two

Lisa Piper
Technical Marketing Manager

Part one of this article focused on the issues of X-pessimism at the netlist level and why the current solutions are inadequate. In part two, we look at how the Ascent XV tool correctly addresses X-safe verification.

If a node is determined to be 1-pessimistic (or 0-pessimistic), its real circuit value is 1 (or 0) but simulation produces an X. A pessimistic simulation value can be corrected by forcing a 1 (or 0) on the node until the conditions for pessimism no longer hold, at which point the force is released. This does not mean that all X’s can be arbitrarily forced to known values: only X’s that result from pessimism should be forced, they must be forced to the deterministic value that real hardware would see, and they must be released immediately when the pessimism stops.
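
As a minimal conceptual sketch of this force-and-release principle (our own illustrative Verilog, not the SimPortal mechanism described below), consider an AND-OR mux whose output is known from static analysis to be 1-pessimistic when both data inputs are 1 and the select is X:

    // Conceptual pessimism correction by force/release.
    // The node `out` was statically identified as 1-pessimistic
    // when in1 == in2 == 1 and the select is X.
    module force_release_demo;
      reg sel, in1, in2;
      wire out = (sel & in1) | (~sel & in2);  // gate-level AND-OR mux

      // Corrector: force the hardware value only while the pessimism
      // condition holds; release the node the moment it stops.
      always @(in1, in2, sel) begin
        if (in1 === 1'b1 && in2 === 1'b1 && sel === 1'bx)
          force out = 1'b1;   // real hardware resolves to 1 for any sel
        else
          release out;        // let the netlist drive the node again
      end

      initial begin
        in1 = 1'b1; in2 = 1'b1; sel = 1'bx;
        #1 $display("out = %b", out);  // 1, instead of a pessimistic x
        sel = 1'b0; in2 = 1'b0;
        #1 $display("out = %b", out);  // 0, normal simulation resumed
      end
    endmodule

The point of the sketch is the discipline: the force is applied only while the statically identified pessimism condition holds, uses the value the hardware would produce, and is released the moment the condition goes away.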

Ascent XV-Netlist makes your simulation hardware accurate by appropriately correcting pessimism. Ascent XV statically identifies the potentially pessimistic nodes and then uses that information to create SimPortal files that augment gate-level simulation to correct X-pessimism on the fly. Doing the analysis statically, before simulation starts, significantly reduces the number of nodes that must be analyzed during simulation; when a potentially pessimistic node has an X value, the X-analysis during simulation reduces to a table look-up. The SimPortal files monitor the potentially pessimistic nodes in the design on the fly, independent of the testbench.

A bottom-up hierarchical static analysis can also be done at the block level; when all the blocks are integrated for full-chip simulation, a very scalable solution results. The SimPortal is designed for performance, and also minimizes compile time and memory overhead. You can control the verbosity at simulation time, and can choose to drop back to simple monitoring, or even turn off both correction and monitoring, at any point. The flow and methodology are shown below in Figure 3.



Ascent XV X-pessimism Flow and Methodology:

  1. Run static analysis to determine which data input values can cause monitored nodes to exhibit pessimism. Generate design-specific SimPortal data files.
  2. Run SimPortal simulation to find out which nodes experienced the input combinations that cause pessimism.

The Ascent XV solution is characterized as follows:

  • Performance
    • Gate-level simulation overhead is as low as 2x-2.5x
    • Memory overhead is 0.5x
    • Negligible overhead to compilation time
    • Simulation time configuration of verbosity from totally quiet to full details
    • Can choose to turn off correction at any point in time (such as after reset)
  • Capacity
    • Unique approach easily handles next-generation full-chip netlists (billion-gate SoCs)
  • Accuracy
    • Forces are applied only while pessimism is occurring.
    • The value forced is the value that will be seen in real hardware.
  • Ease of Use
    • No setup required
    • Testbench independent static analysis
    • No need to touch the existing design or testbench, only the simulation script

In RTL, X’s can hide functional bugs due to X-optimism; these bugs come to light in netlist gate simulations. Unfortunately, X’s also cause X-pessimism in netlist simulations, making it difficult to determine whether a functional mismatch is due to X-optimism, X-pessimism, or something else entirely. Ascent XV-Netlist removes the X’s caused by X-pessimism, eliminating the major source of differences between RTL and netlist simulations.

Related Considerations

Reliable correction of pessimism at the netlist level has become very feasible, thanks to Ascent XV. But additional analysis can be done early in RTL development to prevent potential X-issues. This benefits the post-RTL hand-off, whether to gate-level simulation or FPGA modeling of your design, so that you are not debugging X-optimism issues in hard-to-debug environments.

Ascent XV-Reset Optimization minimizes significant X’s in the design that result from incomplete initialization. It performs a hardware-accurate reset analysis that reports where additional resets are needed, as well as suggesting where resets can be removed. It ensures complete initialization while taking the propagation of known values into account, to avoid adding extraneous resets. The goal is to minimize X-issues during RTL design; in simulation, fewer occurrences of pessimism also mean faster runs.

Ascent XV-RTL Optimism analyzes where the X-sources of a design are and where they can cause X-optimism. This ensures hardware-accurate simulation at RTL, either by eliminating the X-source or by coding for X-accuracy. Hardware-accurate RTL simulations make the RTL and netlist simulation outputs easier to compare and, more significantly, make FPGA-based modeling easier to get up and running.


Once a design is synthesized, the immediate goal is to get gate-level simulations up and running fast. Unfortunately, X’s can cause differences between the RTL and gate-level simulation outputs. X’s exist in virtually all designs; preventing them entirely is impractical. Results may differ because of X’s that were hidden in RTL simulation by X-optimism, or because of additional X’s introduced by X-pessimism in gate-level simulation. Pessimism can be fixed by overriding the simulator, because you know that real hardware would always resolve to a deterministic value. The challenge is confirming that an X value is a result of X-pessimism and not simply X-propagation, and then forcing it to the right value at the right point in time so the simulation matches real hardware.

Ascent XV-Netlist Pessimism corrects X-pessimism on the fly so the simulation is hardware accurate. It reduces the time required to get gate-level simulations started by an order of magnitude, and has proven superior to alternative approaches in performance, memory, and accuracy. Its ease of use and capacity make it the only practical solution for large SoCs.

Oct 2, 2015 | Comments

Correcting Pessimism without Optimism – Part One

Graham Bell
Vice President of Marketing at Real Intent

Most functional verification for SoC and FPGA designs is done prior to RTL hand-off to digital synthesis, because gate-level simulations take longer to complete and are significantly harder to debug. However, gate-level simulations are still needed to verify some circuit behavior. Ideally, the output of RTL simulation will match the output of gate-level netlist simulation of the same design after synthesis. And why wouldn’t it? Beyond the behavior intentionally being verified in your gate-level simulations, there are also unknown values (X’s) that were not seen at RTL due to X-optimism, and additional X’s in the gate-level simulations due to X-pessimism. Part one of this article focuses on the issues of X-pessimism at the netlist level and why the current solutions are inadequate.

X-pessimism and X-optimism Defined

The presence of X’s can cause both X-optimism in RTL simulations and X-pessimism in netlist simulations. X-optimism can result in the failure to detect functional bugs at RTL. X-pessimism typically makes it hard to get netlist simulations up and running quickly.

X-pessimism occurs in gate-level designs when an X at the input of some digital logic causes the simulation output to be an X, even though in real hardware the value would be deterministic, i.e., a 1 or a 0. Figure 1 shows a very simple example. When the values of in1 and in2 are both 0, the simulated output is 0, as it would be in hardware. But when the “input” value is an X and the values of in1 and in2 are both 1, the output is X in simulation but 1 in real hardware. This behavior is called X-pessimism because a known value simulates as an unknown. More specifically, we say the node is 1-pessimistic because the output should have been a value of 1.

Fig. 1. X-pessimism example showing an X results when the values of signals in1 and in2 are both 1.
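
The figure’s behavior is easy to reproduce in a few lines of gate-level-style Verilog (our reconstruction; we assume the usual AND-OR mux structure, with a select signal standing in for the “input” of the text):

    module xpessimism_demo;
      reg sel, in1, in2;  // `sel` plays the role of the "input" select
      wire out = (sel & in1) | (~sel & in2);  // AND-OR mux, netlist style

      initial begin
        in1 = 1'b0; in2 = 1'b0; sel = 1'bx;
        #1 $display("out = %b", out);  // 0, matching hardware
        in1 = 1'b1; in2 = 1'b1;
        #1 $display("out = %b", out);  // x, though hardware gives 1
      end
    endmodule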

X-optimism is the opposite: an unknown value is simulated as if it were a known value in hardware. Consider the example shown in Figure 2 below. If the “input” signal is an X, then “input” could be either a 0 or a 1 in real hardware, because real hardware has no X value. So in real hardware, signal “D” might also be a 0 or a 1. In simulation, however, the output “D” always shows a 1. This is called “optimism” because the unknown is resolved to a known value. It can cause functional bugs to be missed in RTL simulations, even though in the netlist the X would be properly propagated.

Fig. 2. X-optimism example showing an input value of X produces a 1 result.
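
In RTL, this effect falls out of how simulators evaluate unknown conditions. A short sketch (the exact circuit of Figure 2 is assumed; any if/else whose condition goes to X behaves this way):

    module xoptimism_demo;
      logic in_sig = 1'bx;  // the "input" signal; `input` itself is a keyword
      logic D;

      // Simulation treats the unknown condition as false and silently
      // takes the else branch, so D shows a clean 1 (X-optimism).
      always_comb begin
        if (in_sig) D = 1'b0;
        else        D = 1'b1;
      end

      initial #1 $display("D = %b", D);  // 1, though hardware could be 0 or 1
    endmodule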

The examples above are simplistic. In real designs the logic cone of the “cloud” is often very complex, with the selected output driven by one logic expression and the “input” by another. In such cases, sometimes an X at the output is a result of pessimism and sometimes it is simply the propagation of an X that was optimistic at RTL. The former can be artificially corrected without harm; the latter could be a bug lurking to be discovered. Refer to “X-Propagation Woes – A Condensed Primer”¹ for more details.

Commonly Used Quick Fixes

There are three approaches we commonly hear being used to address X-pessimism, all with downsides: 1) eliminating all X’s in the design, 2) artificially randomizing the initialization of uninitialized flops, and 3) manual drudgery.

The first common approach, eliminating all X’s, means adding a reset to every memory element. This removes the most common source of X’s: uninitialized flops. However, synchronous resets can introduce pessimism issues of their own during synthesis. Other X sources also exist in a design, such as explicit X-assigns for flagging illegal states, bus contention, and out-of-range references, among others. A more significant issue is that extra resets eat into power, area, and routing budgets: resettable flops are larger, more power hungry, and require additional routing resources. Resetting all flops is practical only for smaller designs, and it does not address all sources of X.
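
The trade-off looks like this in code (a generic sketch, not tied to any particular cell library):

    module reset_styles (
      input  logic clk, rst_n, d,
      output logic q_reset, q_free
    );
      // Resettable flop: never starts at X, but synthesizes to a larger
      // cell and consumes reset routing resources.
      always_ff @(posedge clk or negedge rst_n)
        if (!rst_n) q_reset <= 1'b0;
        else        q_reset <= d;

      // Plain flop: smaller and cheaper, but simulates as X until its
      // first load - the most common X source described above.
      always_ff @(posedge clk)
        q_free <= d;
    endmodule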

The second common approach is to artificially randomize the initialization of uninitialized memory elements at time 0. The issue is that while this helps simulations run, it will not necessarily match real hardware, and it does not address all sources of X in a design. X-optimism bugs that were masked at RTL will likely be artificially masked again in netlist simulations, so a critical bug that was masked by optimism at RTL and artificially removed at netlist can escape. These days, with the high costs of manufacturing and – heaven forbid – recalls, the downside of a failure can be very expensive.
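
Sketched below is what this quick fix amounts to (our own illustration; simulators also offer built-in options for random register initialization):

    module rand_init_demo;
      reg clk = 0, d = 0;
      reg q;  // uninitialized: would otherwise simulate as X

      always #5 clk = ~clk;
      always @(posedge clk) q <= d;

      initial begin
        q = $urandom_range(0, 1);  // artificial time-0 initialization:
                                   // simulation now runs with a known value,
                                   // but it need not match real silicon
        #20 $finish;
      end
    endmodule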

The third approach is to manually analyze the root cause of all X-differences between the outputs of RTL and gate-level simulation, and then determine the correct value and time at which to force and release pessimistic nodes. This can be very difficult, because a gate-level design is a transformation of the RTL into something more complex, with a different structure and unfamiliar signal names. Chasing down those pessimistic X’s in the netlist simulation can be very time consuming, and the issue is exacerbated when there is a mixture of X differences from both optimism and pessimism, with pressure to fix everything as soon as possible without accidentally missing a real bug. For large designs, this effort can take months.

Existing approaches fall short because they do not address pessimism caused by X’s from all sources, they can mask issues in gate-level simulations that stem from X-optimism in RTL simulations, and they cannot handle the largest SoC designs.

In part two we will look at how the Ascent XV tool correctly addresses X-safe verification.

Sep 24, 2015 | Comments

Calypto Design Systems: A Changed Partner

Graham Bell
Vice President of Marketing at Real Intent

Calypto Design Systems was embraced by Mentor Graphics this week. Founded in 2002, the company was born out of discussions between founder Devadas Varma and Dado Banatao, a partner at Tallwood Venture Capital. By early 2005 it had raised $22 million in venture capital and had 42 employees, 18 of them with PhDs. It was tackling sequential logic equivalence checking (SLEC) between ESL and RTL design representations.

In 2011, Mentor bought a 51% interest in the company and sold Calypto its Catapult C synthesis technology, which seemed like a good match for the SLEC tool. With Calypto’s growing success, it was natural that Mentor would eventually pull the company fully into its fold.

For several years, Calypto Design Systems and Real Intent have co-operated in support of verification flows for mutual customers. Both companies even shared the same distributor in Korea.

One flow we had jointly announced combined our Ascent Lint with their Catapult synthesizer. Catapult lets designers use industry-standard ANSI C++ or SystemC to describe functional intent at the ESL level, and from these high-level descriptions it automatically generates production-quality RTL. Ascent Lint ensures the Catapult-generated RTL is lint-clean and error-free, for a safe and reliable implementation flow from RTL to GDSII layout.
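
As a flavor of what lint catches in such a flow, here is an illustrative example of ours (rule names and severities vary by tool): a nonblocking assignment inside combinational logic, the kind of construct behind checks like the COMBO_NBA rule featured in our “Rule of the Month” series.

    module lint_example (
      input  logic a, b,
      output logic y
    );
      always_comb begin
        y <= a & b;  // flagged: nonblocking '<=' in a combinational block;
                     // a blocking '=' is the safe, intended idiom here
      end
    endmodule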

Calypto also has the PowerPro tool, an automated RTL power optimization and analysis product that identifies and inserts sequential clock gating and memory-enable logic into synthesizable Verilog and VHDL designs. You can see it on the right side of the following flow diagram, from an illustration used at the 2014 Design Automation Conference.


In this flow, the Real Intent Meridian CDC tool checks that signals crossing asynchronous clock boundaries are correctly registered and synchronized in the RTL code. Here it verifies that any optimizations made to the RTL have not broken the assumptions for CDC synchronization.

Partnerships are not just about technology; they are also about people. I have had the pleasure of working with Mark Milligan, VP of Marketing, and his predecessor Shawn McCloud in promoting these flows. I also want to thank Mathilde Karsenti, Marketing Programs Manager, who brings a passion for excellence to her work and made our joint promotions a success.

Now that Calypto is part of Mentor Graphics, its co-operation with Real Intent will take a different form, but the drivers for that co-operation are still very much current: RTL generated by a high-level synthesis tool, or modified by power optimization, needs to be verified to ensure that inadvertent errors have not been introduced. Real Intent will continue to work with its industry partners to sign off their RTL prior to synthesis and implementation.

Sep 17, 2015 | Comments

A Verification Standard for Design Reliability


The great thing about a standard is that once you decide to use it, your life as a designer is suddenly easier. Using a standard reduces the long list of choices and decisions that need to be made to get a working product out the door. It also gives assurance to the customer that you are following best practices of the industry.

A standard for the world of aviation electronics (avionics) is RTCA/DO-254, Design Assurance Guidance for Airborne Electronic Hardware. It is a process assurance flow for civilian aerospace design of complex electronic hardware, typically implemented using ASICs or large FPGAs. In the USA, the Federal Aviation Administration (FAA) requires that the DO-254 process be followed; in Europe there is an equivalent standard called EUROCAE ED-80.

At first glance the standard seems daunting: it defines how design and verification flows must be strongly tied to both implementation and traceability. In DO-254 projects, HDL coding standards must be documented, and any project code must be reviewed to ensure it follows those standards.

Aug 17, 2015 | Comments

New 3D XPoint Fast Memory a Big Deal for Big Data

Graham Bell
Vice President of Marketing at Real Intent

After years of research, a new memory technology emerges that combines the best attributes of DRAM and NAND, promising to “completely evolve how it’s used in computing.”

Memory and storage technologies such as DRAM and NAND have been around for decades, with their original implementations able to perform only at a fraction of the level achieved by today’s latest products. But those performance gains, like most in computing, are typically evolutionary, with each generation incrementally faster and more cost effective than the one preceding it. Quantum leaps in performance often come from completely new or radically different ways of solving a particular problem. The 3D XPoint technology announced by Intel in partnership with Micron comes from the latter approach.

The initial technology stores 128Gb per die across two memory layers.

“This has no predecessor and there was nothing to base it on,” said Al Fazio, Intel senior fellow and director of Memory Technology Development.  “It’s new materials, new process architecture, new design, new testing. We’re going into some existing applications, but it’s really intended to completely evolve how it’s used in computing.”

Touted as the biggest memory breakthrough since the introduction of NAND in 1989, 3D XPoint is a new memory technology that is non-volatile like NAND but up to 1,000 times faster, approaching speeds previously attainable only by DRAM, and with endurance up to 1,000 times better than NAND.

3D XPoint owes its performance attributes and namesake to a transistor-less three-dimensional circuit of columnar memory cells and selectors connected by horizontal wires. This “cross point” checkerboard structure allows memory cells to be addressed individually. This structure enables data to be written and read in smaller blocks than NAND, resulting in faster and more efficient read/write processes.

Game-changing Technology

Removing bottlenecks in a system is a key method to increase overall performance. Memory in particular has been a growing barrier, primarily because consistent performance gains in processors in recent years have dramatically outpaced both the speed of hard disks and the cost and density of DRAM.

“What’s exciting about the technology is that it unleashes the microprocessor. It gets more data closer to the CPU. It has 10 times the density of DRAM at near levels of performance, and it allows people running applications to have much more data available to them,” said Rob Crooke, vice president and general manager of Intel’s  Non-Volatile Memory Solutions group. “Conversely in the storage, it’s up to 1000 times faster than NAND. To put that in perspective, most people have experienced an SSD versus a hard disk, where the SSD is about 1,000 times faster than a hard disk. This new technology is going to be that same level of pop, like everything’s in memory.”

Storage versus CPU Performance

“We’ve looked at gaming performance, and it has a phenomenal impact and gives the game creators much more freedom,” explained Crooke. “As opposed to constraining their game levels to how much they can fit in memory and then loading a new level, they now have total freedom to create a much richer game experience and one that’s seamless and continuous, and they can decide if they want to break it up. It’s at their artistic and creative discretion to do that, as opposed to some physical limit like memory size.”

“It’ll be a game-changing experience not only in the client platforms, but also in the data center where they’re trying to analyze remarkable amounts of big data,” Crooke continued. “More and more data needs to be driven to the CPU to analyze faster. Having much more data available to the CPU at a very short latency is pretty exciting.”


Micron and Intel have been jointly developing this technology since 2012. There was, however, basic research on various technologies at both companies for years prior to this partnership. The research team could have tried an easier route—committing to performance and density, or performance and cost, for example—but “if you want to change something, you’ve really got to go for that tougher problem and tie them all together,” Fazio explained.

In 2012, Micron and Intel agreed to jointly pursue the most promising technologies from the research findings. Hundreds of Intel and Micron engineers have been involved in developing the technology to its current state, spanning facilities in California, Idaho and around the world. Over the last three years, the process development for this technology occurred in Micron's state-of-the-art 300mm R&D facility in Boise, Idaho.

“Nobody has ever attempted productizing a stackable cross point architecture at these densities.  Learning the characteristics and developing the integration methods for this novel architecture was full of engineering challenges,” said Scott DeBoer, vice president of R&D at Micron. “3D XPoint technology required the development of a number of innovative films and materials, some of which have never before been used in semiconductor manufacturing. Understanding the characteristics and sensitivities of these new materials and how to enable them was daunting.”

3D XPoint, NAND and DRAM

While 3D XPoint may have capabilities that can displace DRAM and NAND, DeBoer noted that it’s an additive technology that will co-exist with current solutions while also enabling new innovations. “DRAM will still be the best technology for most demanding highest performance applications, where non-volatility, cost and capacity are less critical. 3D NAND will still be the best technology for absolute lowest cost, where performance metrics are less critical.”

What could be a significant factor in these different memory solutions co-existing is that they can all share manufacturing facilities. “This technology is fabricated using the same manufacturing lines and methods as conventional memory technologies,” said DeBoer. “With the cross point architecture and the materials systems required for the new cell technology, some unique tooling was developed, but these requirements are on par with standard technology node introductions for NAND or DRAM. This technology is fully compatible and not disruptive to current manufacturing lines.”

Scalable Into the Future

The future of this technology looks wide-open too. “The cross point memory cell should be the most scalable architecture,” said Crooke. “It should allow us to scale the memory technology to pretty good densities yet allow it to be byte-addressable or word-addressable like memory is, as opposed to NAND, which is accessed in blocks of data.”

“Because it does not require the overhead of additional access or select transistors, the stackable cross point architecture enables the most aggressive physical scaling of array densities available,” DeBoer added.

Potential Ahead

A technological solution that paves the way for new models of computing doesn't come along very often. It took teams of hundreds of experts, countless flights, and constant open lines of communication and cooperation to make 3D XPoint technology possible.

“Micron and Intel have a long working history inside our NAND JDP and our IMFT joint venture. This made enabling the team cooperation and performance that much easier as we have already strengthened and grown the partnership in that program,” said DeBoer. “Entirely new technologies don’t come around very often, and to be part of this team was truly a once-in-a-career opportunity.”

“One of the things that we should be proud of is the persistence we’ve had over a long period of time,” Crooke added. “Working on a technology problem that you don’t know is solvable, for a sustained period of time, requires a level of confidence and stick-to-it-iveness.”

3D XPoint Die

This article is provided courtesy of Intel Free Press.

Aug 6, 2015 | Comments

Technology Errors Demand Netlist-level CDC Verification

Dr. Roger B. Hughes
   Director of Strategic Accounts

Multiple asynchronous clocks are a fact of life on today's SoCs. Individual blocks have to run at different speeds so they can handle different functional and power payloads efficiently, and the ability to split clock domains across the SoC has become a key part of timing-closure processes, isolating clock domains to subsections of the device within which traditional skew control can still be used.

As a result, clock domain crossing (CDC) verification is required to ensure logic signals can pass between regions controlled by different clocks without being missed or causing metastability. Traditionally, CDC verification has been carried out on RTL descriptions on the basis that appropriate directives inserted in the RTL will ensure reliable data synchronizers are inserted into the netlist by synthesis. But a number of factors are coming together that demand a re-evaluation of this assumption.

A combination of process technology trends and increased intervention by synthesis tools in logic generation, both intended to improve power efficiency, is leading to a situation in which a design that is considered CDC-clean at RTL can fail in operation. Implementation tools can fail to take CDC into account and unwittingly increase the chances of metastability.

Various synthesis features and post-synthesis tools will insert logic cells that, if used in the path of a CDC, conflict with the assumptions made by formal analysis during RTL verification. Test synthesis will, for example, insert additional registers to enable inspection of logic paths through JTAG. Low-power design introduces further issues through the application of increasingly fine-grained clock gating. The registers and combinatorial cells these tools introduce can disrupt the proper operation of synchronization cells inserted into the RTL.

The key issue is that all clock-domain crossings involve, by their nature, asynchronous logic, and one of the hazards of asynchronous logic is metastability. Any flip-flop can be rendered metastable: if its data input toggles at the same time as the sampling edge of the clock, the register may fail to capture the correct input and instead become metastable. The state of the capturing flop may not settle by the end of the current clock period, presenting a high chance of feeding the wrong value to downstream logic (Fig 1).

FIGURE 1. When data is still changing as a clock changes, the output can become metastable
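To see how often this hazard actually arises, consider the raw event rate: an asynchronous data edge causes trouble only if it lands inside the small metastability window around a clock edge. The Monte Carlo sketch below estimates that rate; all parameter values are illustrative assumptions rather than characterized silicon, and the result is simply the T_W × f_clk fraction that appears in the denominator of Equation 1 below.

```cpp
#include <cstdio>
#include <random>

int main() {
    // Illustrative, assumed parameters (not characterized silicon).
    const double f_clk = 500e6;        // receive-domain clock (Hz)
    const double t_clk = 1.0 / f_clk;  // clock period (s)
    const double t_w   = 20e-12;       // metastability window (s)
    const long   edges = 50000000;     // asynchronous data edges simulated

    std::mt19937_64 rng(42);
    std::uniform_real_distribution<double> phase(0.0, t_clk);

    long hits = 0;
    for (long i = 0; i < edges; ++i) {
        // An asynchronous edge lands at a uniformly random point in the
        // clock period; it risks metastability if it falls in the window.
        if (phase(rng) < t_w) ++hits;
    }
    std::printf("hit fraction: %.3g (expected T_W * f_clk = %.3g)\n",
                static_cast<double>(hits) / edges, t_w * f_clk);
    return 0;
}
```

With these numbers, roughly one data edge in a hundred samples inside the window; it is the exponential settling term in Equation 1 that turns this steady stream of hazardous events into an acceptably rare failure.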

Metastability trends

The risk of metastability with asynchronous logic is always present. Designers can ensure that their designs are unlikely to experience a problem from metastability by increasing the mean time between failures (MTBF) of each synchronizer.

EQUATION 1. The governing equation of Mean Time Between Failures:

MTBF = e^(t_r / τ) / (T_W × f_clk × f_data)

The MTBF varies with the settling time available to the signal (t_r), the time window around the clock edge within which data must not change (T_W), the clock frequency (f_clk), the data-toggle frequency (f_data), and the resolution time constant of the synchronizer, written as τ (tau). The parameter τ depends primarily on the capacitance of the first flip-flop in the synchronizer divided by its transconductance. MTBF depends exponentially on τ, since it is proportional to e^(t_r/τ). The value of τ tends to vary with both process technology and operating temperature, because temperature affects drain current, which in turn affects transconductance. The MTBF can drop many orders of magnitude at temperature extremes, making a failure far more likely.

Technology evolution has generally improved τ, making it less significant as a parameter over the past decade or more, but the property is beginning to become significant again in more advanced nodes because of the failure of some device parameters to scale.
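To make the exponential sensitivity concrete, the sketch below evaluates Equation 1 for a hypothetical two-flop synchronizer. Every parameter value is an illustrative assumption rather than silicon data, but the orders-of-magnitude swing in MTBF as τ degrades, for example at a temperature extreme, is exactly the effect described above.

```cpp
#include <cmath>
#include <cstdio>
#include <initializer_list>

// MTBF of a synchronizer per Equation 1:
//   MTBF = e^(t_r / tau) / (T_W * f_clk * f_data)
// t_r   : settling time available before the next stage samples (s)
// tau   : resolution time constant of the first flop (s)
// t_w   : metastability window around the clock edge (s)
// f_clk : receive-domain clock frequency (Hz)
// f_data: data-toggle frequency (Hz)
double mtbf_seconds(double t_r, double tau, double t_w,
                    double f_clk, double f_data) {
    return std::exp(t_r / tau) / (t_w * f_clk * f_data);
}

int main() {
    const double f_clk  = 500e6;   // 500 MHz clock (assumed)
    const double f_data = 50e6;    // 50 MHz data-toggle rate (assumed)
    const double t_w    = 20e-12;  // 20 ps window (assumed)
    const double t_r    = 1.8e-9;  // ~one period minus clk-to-q and setup

    // Exponential dependence on tau: a modest degradation moves
    // MTBF by tens of orders of magnitude.
    for (double tau : {10e-12, 15e-12, 25e-12}) {
        const double years =
            mtbf_seconds(t_r, tau, t_w, f_clk, f_data) / (3600.0 * 24 * 365);
        std::printf("tau = %2.0f ps -> MTBF = %.3g years\n",
                    tau * 1e12, years);
    }

    // Each extra synchronizer stage adds roughly one clock period of
    // settling time, multiplying MTBF by e^(t_clk / tau).
    const double t_clk = 1.0 / f_clk;
    std::printf("one extra stage multiplies MTBF by ~%.3g (tau = 15 ps)\n",
                std::exp(t_clk / 15e-12));
    return 0;
}
```

The extra-stage factor at the end is why deeper synchronizers buy so much margin: each added flop contributes another clock period of settling time inside the exponential, which is also why the register depth and organization mentioned below matter so much.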

Designs that would probably not have experienced failure before are now at risk of suffering from metastability issues. Coupled with the need for higher performance, this means the MTBF of CDC synchronizers needs to be monitored carefully. Automatically inserted logic can introduce problems for the synchronizer, because register depth and organization affect MTBF. Tools need to take these effects into account if they are to insert cells that reduce the probability of metastability. Further, logic inserted ahead of the synchronizer can introduce glitches that are mistakenly captured as data by the receiver in the other clock domain. Information about the implementation is therefore vital if CDC checks are to guarantee reliable operation. The following examples show some of the situations that can arise from logic insertion by implementation tools.

Example implementation errors

Implementation tools can introduce a number of potential hazards by failing to take CDC into account. Additional registers inserted by test synthesis, for example, can produce glitches on clock lines, increasing the probability of mis-timing issues (Fig 2).

FIGURE 2. The addition of test logic post-synthesis can make mis-timing more likely

Clock-gating cells inserted by synthesis tools to reduce switching power may also be incompatible with a good CDC strategy. When a combinatorial cell such as an AND gate follows the register intended to pass a clock signal across the boundary and drives the receiving registers, the gated clock is more likely to experience glitches (Fig 3).

FIGURE 3. Clock-gating logic may be susceptible to glitches

Timing optimization can result in significant changes in logic organization. The optimizer may choose to clone flops so that the path following each flop has a lower capacitance to drive, which should improve performance. If the flops being cloned form part of a synchronizer, this can result in CDC problems. A better way of handling the situation is to synchronize the signal first, and then to duplicate the logic beyond the receiving synchronizer (Fig 4).

FIGURE 4. The introduction of additional flops in parallel to help meet timing can increase the probability of metastability and create correlation issues

The introduction of test logic may even result in the splitting of two flops intended for synchronization. In other situations, optimization of control logic or the use of non-monotonic multiplexer functions can result in the restructuring of CDC interfaces and introduce the potential for glitches (Fig 5).

FIGURE 5. Control logic optimizations may introduce glitches

Because of these possibilities, CDC verification needs to occur at both the RTL and the netlist level – any solution that does not perform netlist-level verification is incomplete. An effective strategy is to ensure that the design is CDC-clean at RTL, then use physical-level CDC checks on the netlist to ensure that any problems created by the various implementation tools are trapped and solved using a combination of structural and formal techniques. Tools such as Meridian Physical CDC take the full netlist into account, which in modern designs can often run to hundreds of millions of gates, ensuring that a design signed off for CDC at RTL remains consistent with its actual implementation.

This article was originally published on TechDesignForums and is reproduced here by permission.

Jul 30, 2015 | Comments