Blog Archive
August 2015
8/20/2015: A Verification Standard for Design Reliability
8/17/2015: A Verification Standard for Design Reliability
8/06/2015: New 3D XPoint Fast Memory a Big Deal for Big Data
July 2015
7/30/2015: Technology Errors Demand Netlist-level CDC Verification
7/23/2015: Video: SoC Requirements and “Big Data” are Driving CDC Verification
7/16/2015: 50th Anniversary of Moore’s Law: What If He Got it Wrong?
7/09/2015: The Interconnected Web of Work
7/06/2015: In Fond Memory of Gary Smith
7/01/2015: Richard Goering and Us: 30 Great Years
June 2015
6/12/2015: Quick 2015 DAC Recap and Racing Photo Album
6/05/2015: Advanced FPGA Sign-off Includes DO-254 and …Missing DAC?
May 2015
5/28/2015: #2 on GarySmithEDA What to See @ DAC List – Why?
5/14/2015: SoC Verification: There is a Stampede!
5/07/2015: Drilling Down on the Internet-of-Things (IoT)
April 2015
4/30/2015: Reflections on Accellera UCIS: Design by Architect and Committee
4/23/2015: DO-254 Without Tears
4/17/2015: Analysis of Clock Intent Requires Smarter SoC Verification
4/09/2015: High-Level Synthesis: New Driver for RTL Verification
4/03/2015: Underdog Innovation: David and Goliath in Electronics
March 2015
3/27/2015: Taking Control of Constraints Verification
3/20/2015: Billion Dollar Unicorns
3/13/2015: My Impressions of DVCon USA 2015: Lies; Experts; Art or Science?
3/06/2015: Smarter Verification: Shift Mindset to Shift Left [Video]
February 2015
2/27/2015: New Ascent Lint, Cricket Video Interview and DVCon Roses
2/20/2015: Happy Lunar New Year: Year of the Ram (or is it Goat or Sheep?)
2/12/2015: Video: Clock-Domain Crossing Verification: Introduction; SoC challenges; and Keys to Success
2/06/2015: A Personal History of Transaction Interfaces to Hardware Emulation: Part 2
January 2015
1/30/2015: A Personal History of Transaction Interfaces to Hardware Emulation: Part 1
1/22/2015: Intel’s new SoC-based Broadwell CPUs: Less Filling, Taste Great!
1/19/2015: Reporting Happiness: Not as Easy as You Think
1/09/2015: 38th VLSI Design Conf. Keynote: Nilekani on IoT and Smartphones
December 2014
12/22/2014: December 2014 Holiday Party
12/17/2014: Happy Holidays from Real Intent!
12/12/2014: Best of “Real Talk”, Q4 Summary and Latest Videos
12/04/2014: P2415 – New IEEE Power Standard for Unified Hardware Abstraction
November 2014
11/27/2014: The Evolution of RTL Lint
11/20/2014: Parallelism in EDA Software – Blessing or a Curse?
11/13/2014: How Big is WWD – the Wide World of Design?
11/06/2014: CMOS Pioneer Remembered: John Haslet Hall
October 2014
10/31/2014: Is Platform-on-Chip The Next Frontier For IC Integration?
10/23/2014: DVClub Shanghai: Making Verification Debug More Efficient
10/16/2014: ARM TechCon Video: Beer, New Meridian CDC, and Arnold Schwarzenegger ?!
10/10/2014: New CDC Verification: Less Filling, Picture Perfect, and Tastes Great!
10/03/2014: ARM Fueling the SoC Revolution and Changing Verification Sign-off
September 2014
9/25/2014: Does Your Synthesis Code Play Well With Others?
9/19/2014: It’s Time to Embrace Objective-driven Verification
9/12/2014: Autoformal: The Automatic Vacuum for Your RTL Code
9/04/2014: How Bad is Your HDL Code? Be the First to Find out!
August 2014
8/29/2014: Fundamentals of Clock Domain Crossing: Conclusion
8/21/2014: Video Keynote: New Methodologies Drive EDA Revenue Growth
8/15/2014: SoCcer: Defending your Digital Design
8/08/2014: Executive Insight: On the Convergence of Design and Verification
July 2014
7/31/2014: Fundamentals of Clock Domain Crossing Verification: Part Four
7/24/2014: Fundamentals of Clock Domain Crossing Verification: Part Three
7/17/2014: Fundamentals of Clock Domain Crossing Verification: Part Two
7/10/2014: Fundamentals of Clock Domain Crossing Verification: Part One
7/03/2014: Static Verification Leads to New Age of SoC Design
June 2014
6/26/2014: Reset Optimization Pays Big Dividends Before Simulation
6/20/2014: SoC CDC Verification Needs a Smarter Hierarchical Approach
6/12/2014: Photo Booth Blackmail at DAC in San Francisco!
6/06/2014: Quick Reprise of DAC 2014
May 2014
5/01/2014: Determining Test Quality through Dynamic Runtime Monitoring of SystemVerilog Assertions
April 2014
4/24/2014: Complexity Drives Smart Reporting in RTL Verification
4/17/2014: Video Update: New Ascent XV Release for X-optimization, ChipEx show in Israel, DAC Preview
4/11/2014: Design Verification is Shifting Left: Earlier, Focused and Faster
4/03/2014: Redefining Chip Complexity in the SoC Era
March 2014
3/27/2014: X-Verification: A Critical Analysis for a Low-Power World (Video)
3/14/2014: Engineers Have Spoken: Design And Verification Survey Results
3/06/2014: New Ascent IIV Release Delivers Enhanced Automatic Verification of FSMs
February 2014
2/28/2014: DVCon Panel Drill Down: “Where Does Design End and Verification Begin?” – Part 3
2/20/2014: DVCon Panel Drill Down: “Where Does Design End and Verification Begin?” – Part 2
2/13/2014: DVCon Panel Drill Down: “Where Does Design End and Verification Begin?” – Part 1
2/07/2014: Video Tech Talk: Changes In Verification
January 2014
1/31/2014: Progressive Static Verification Leads to Earlier and Faster Timing Sign-off
1/30/2014: Verific’s Front-end Technology Leads to Success and a Giraffe!
1/23/2014: CDC Verification of Fast-to-Slow Clocks – Part Three: Metastability Aware Simulation
1/16/2014: CDC Verification of Fast-to-Slow Clocks – Part Two: Formal Checks
1/10/2014: CDC Verification of Fast-to-Slow Clocks – Part One: Structural Checks
1/02/2014: 2013 Highlights And Giga-scale Predictions For 2014
December 2013
12/13/2013: Q4 News, Year End Summary and New Videos
12/12/2013: Semi Design Technology & System Drivers Roadmap: Part 6 – DFM
12/06/2013: The Future is More than “More than Moore”
November 2013
11/27/2013: Robert Eichner’s presentation at the Verification Futures Conference
11/21/2013: The Race For Better Verification
11/18/2013: Experts at the Table: The Future of Verification – Part 2
11/14/2013: Experts At The Table: The Future Of Verification Part 1
11/08/2013: Video: Orange Roses, New Product Releases and Banner Business at ARM TechCon
October 2013
10/31/2013: Minimizing X-issues in Both Design and Verification
10/23/2013: Value of a Design Tool Needs More Sense Than Dollars
10/17/2013: Graham Bell at EDA Back to the Future
10/15/2013: The Secret Sauce for CDC Verification
10/01/2013: Clean SoC Initialization now Optimal and Verified with Ascent XV
September 2013
9/24/2013: EETimes: An Engineer’s Progress With Prakash Narain, Part 4
9/20/2013: EETimes: An Engineer’s Progress With Prakash Narain, Part 3
9/20/2013: CEO Viewpoint: Prakash Narain on Moving from RTL to SoC Sign-off
9/17/2013: Video: Ascent Lint – The Best Just Got Better
9/16/2013: EETimes: An Engineer’s Progress With Prakash Narain, Part 2
9/16/2013: EETimes: An Engineer’s Progress With Prakash Narain
9/10/2013: SoC Sign-off Needs Analysis and Optimization of Design Initialization in the Presence of Xs
August 2013
8/15/2013: Semiconductor Design Technology and System Drivers Roadmap: Process and Status – Part 4
8/08/2013: Semiconductor Design Technology and System Drivers Roadmap: Process and Status – Part 3
July 2013
7/25/2013: Semiconductor Design Technology and System Drivers Roadmap: Process and Status – Part 2
7/18/2013: Semiconductor Design Technology and System Drivers Roadmap: Process and Status – Part 1
7/16/2013: Executive Video Briefing: Prakash Narain on RTL and SoC Sign-off
7/05/2013: Lending a ‘Formal’ Hand to CDC Verification: A Case Study of Non-Intuitive Failure Signatures — Part 3
June 2013
6/27/2013: Bryon Moyer: Simpler CDC Exception Handling
6/21/2013: Lending a ‘Formal’ Hand to CDC Verification: A Case Study of Non-Intuitive Failure Signatures — Part 2
6/17/2013: Peggy Aycinena’s interview with Prakash Narain
6/14/2013: Lending a ‘Formal’ Hand to CDC Verification: A Case Study of Non-Intuitive Failure Signatures — Part 1
6/10/2013: Photo Booth Blackmail!
6/03/2013: Real Intent is on John Cooley’s “DAC’13 Cheesy List”
May 2013
5/30/2013: Does SoC Sign-off Mean More Than RTL?
5/24/2013: Ascent Lint Rule of the Month: DEFPARAM
5/23/2013: Video: Gary Smith Tells Us Who and What to See at DAC 2013
5/22/2013: Real Intent is on Gary Smith’s “What to see at DAC” List!
5/16/2013: Your Real Intent Invitation to Fun and Fast Verification at DAC
5/09/2013: DeepChip: “Real Intent’s not-so-secret DVcon’13 Report”
5/07/2013: TechDesignForum: Better analysis helps improve design quality
5/03/2013: Unknown Sign-off and Reset Analysis
April 2013
4/25/2013: Hear Alexander Graham Bell Speak from the 1880′s
4/19/2013: Ascent Lint rule of the month: NULL_RANGE
4/16/2013: May 2 Webinar: Automatic RTL Verification with Ascent IIV: Find Bugs Simulation Can Miss
4/05/2013: Conclusion: Clock and Reset Ubiquity – A CDC Perspective
March 2013
3/22/2013: Part Six: Clock and Reset Ubiquity – A CDC Perspective
3/21/2013: The BIG Change in SoC Verification You Don’t Know About
3/15/2013: Ascent Lint Rule of the Month: COMBO_NBA
3/15/2013: System-Level Design Experts At The Table: Verification Strategies – Part One
3/08/2013: Part Five: Clock and Reset Ubiquity – A CDC Perspective
3/01/2013: Quick DVCon Recap: Exhibit, Panel, Tutorial and Wally’s Keynote
3/01/2013: System-Level Design: Is This The Era Of Automatic Formal Checks For Verification?
February 2013
2/26/2013: Press Release: Real Intent Technologist Presents Power-related Paper and Tutorial at ISQED 2013 Symposium
2/25/2013: At DVCon: Pre-Simulation Verification for RTL Sign-Off includes Automating Power Optimization and DFT
2/25/2013: Press Release: Real Intent to Exhibit, Participate in Panel and Present Tutorial at DVCon 2013
2/22/2013: Part Four: Clock and Reset Ubiquity – A CDC Perspective
2/18/2013: Does Extreme Performance Mean Hard-to-Use?
2/15/2013: Part Three: Clock and Reset Ubiquity – A CDC Perspective
2/07/2013: Ascent Lint Rule of the Month: ARITH_CONTEXT
2/01/2013: “Where Does Design End and Verification Begin?” and DVCon Tutorial on Static Verification
January 2013
1/25/2013: Part Two: Clock and Reset Ubiquity – A CDC Perspective
1/18/2013: Part One: Clock and Reset Ubiquity – A CDC Perspective
1/07/2013: Ascent Lint Rule of the Month: MIN_ID_LEN
1/04/2013: Predictions for 2014, Hier. vs Flat, Clocks and Bugs
December 2012
12/14/2012: Real Intent Reports on DVClub Event at Microprocessor Test and Verification Workshop 2012
12/11/2012: Press Release: Real Intent Records Banner Year
12/07/2012: Press Release: Real Intent Rolls Out New Version of Ascent Lint for Early Functional Verification
12/04/2012: Ascent Lint Rule of the Month: OPEN_INPUT
November 2012
11/19/2012: Real Intent Has Excellent EDSFair 2012 Exhibition
11/16/2012: Peggy Aycinena: New Look, New Location, New Year
11/14/2012: Press Release: New Look and New Headquarters for Real Intent
11/05/2012: Ascent Lint HDL Rule of the Month: ZERO_REP
11/02/2012: Have you had CDC bugs slip through resulting in late ECOs or chip respins?
11/01/2012: DAC survey on CDC bugs, X propagation, constraints
October 2012
10/29/2012: Press Release: Real Intent to Exhibit at ARM TechCon 2012 – Chip Design Day
September 2012
9/24/2012: Photos of the space shuttle Endeavour from the Real Intent office
9/20/2012: Press Release: Real Intent Showcases Verification Solutions at Verify 2012 Japan
9/14/2012: A Bolt of Inspiration
9/11/2012: ARM blog: An Advanced Timing Sign-off Methodology for the SoC Design Ecosystem
9/05/2012: When to Retool the Front-End Design Flow?
August 2012
8/27/2012: X-Verification: What Happens When Unknowns Propagate Through Your Design
8/24/2012: Article: Verification challenges require surgical precision
8/21/2012: How To Article: Verifying complex clock and reset regimes in modern chips
8/20/2012: Press Release: Real Intent Supports Growth Worldwide by Partnering With EuropeLaunch
8/06/2012: SemiWiki: The Unknown in Your Design Can be Dangerous
8/03/2012: Video: “Issues and Struggles in SOC Design Verification”, Dr. Roger Hughes
July 2012
7/30/2012: Video: What is Driving Lint Usage in Complex SOCs?
7/25/2012: Press Release: Real Intent Adds to Japan Presence: Expands Office, Increases Staff to Meet Demand for Design Verification and Sign-Off Products
7/23/2012: How is Verification Complexity Changing, and What is the Impact on Sign-off?
7/20/2012: Real Intent in Brazil
7/16/2012: Foosball, Frosty Beverages and Accelerating Verification Sign-off
7/03/2012: A Good Design Tool Needs a Great Beginning
June 2012
6/14/2012: Real Intent at DAC 2012
6/01/2012: DeepChip: Cheesy List for DAC 2012
May 2012
5/31/2012: EDACafe: Your Real Intent Invitation to Fast Verification and Fun at DAC
5/30/2012: Real Intent Video: New Ascent Lint and Meridian CDC Releases and Fun at DAC 2012
5/29/2012: Press Release: Real Intent Leads in Speed, Capacity and Precision with New Releases of Ascent Lint and Meridian CDC Verification Tools
5/22/2012: Press Release: Over 35% Revenue Growth in First Half of 2012
5/21/2012: Thoughts on RTL Lint, and a Poem
5/21/2012: Real Intent is #8 on Gary Smith’s “What to see at DAC” List!
5/18/2012: EETimes: Gearing Up for DAC – Verification demos
5/08/2012: Gabe on EDA: Real Intent Helps Designers Verify Intent
5/07/2012: EDACafe: A Page is Turned
5/07/2012: Press Release: Graham Bell Joins Real Intent to Promote Early Functional Verification & Advanced Sign-Off Circuit Design Software
March 2012
3/21/2012: Press Release: Real Intent Demos EDA Solutions for Early Functional Verification & Advanced Sign-off at Synopsys Users Group (SNUG)
3/20/2012: Article: Blindsided by a glitch
3/16/2012: Gabe on EDA: Real Intent and the X Factor
3/10/2012: DVCon Video Interview: “Product Update and New High-capacity ‘X’ Verification Solution”
3/01/2012: Article: X-Propagation Woes: Masking Bugs at RTL and Unnecessary Debug at the Netlist
February 2012
2/28/2012: Press Release: Real Intent Joins Cadence Connections Program; Real Intent’s Advanced Sign-Off Verification Capabilities Added to Leading EDA Flow
2/15/2012: Real Intent Improves Lint Coverage and Usability
2/15/2012: Avoiding the Titanic-Sized Iceberg of Downton Abbey
2/08/2012: Gabe on EDA: Real Intent Meridian CDC
2/08/2012: Press Release: At DVCon, Real Intent Verification Experts Present on Resolving X-Propagation Bugs; Demos Focus on CDC and RTL Debugging Innovations
January 2012
1/24/2012: A Meaningful Present for the New Year
1/11/2012: Press Release: Real Intent Solidifies Leadership in Clock Domain Crossing
August 2011
8/02/2011: A Quick History of Clock Domain Crossing (CDC) Verification
July 2011
7/26/2011: Hardware-Assisted Verification and the Animal Kingdom
7/13/2011: Advanced Sign-off…It’s Trending!
May 2011
5/24/2011: Learn about Advanced Sign-off Verification at DAC 2011
5/16/2011: Getting A Jump On DAC
5/09/2011: Livin’ on a Prayer
5/02/2011: The Journey to CDC Sign-Off
April 2011
4/25/2011: Getting You Closer to Verification Closure
4/11/2011: X-verification: Conquering the “Unknown”
4/05/2011: Learn About the Latest Advances in Verification Sign-off!
March 2011
3/21/2011: Business Not as Usual
3/15/2011: The Evolution of Sign-off
3/07/2011: Real People, Real Discussion – Real Intent at DVCon
February 2011
2/28/2011: The Ascent of Ascent Lint (v1.4 is here!)
2/21/2011: Foundation for Success
2/08/2011: Fairs to Remember
January 2011
1/31/2011: EDA Innovation
1/24/2011: Top 3 Reasons Why Designers Switch to Meridian CDC from Real Intent
1/17/2011: Hot Topics, Hot Food, and Hot Prize
1/10/2011: Satisfaction EDA Style!
1/03/2011: The King is Dead. Long Live the King!
December 2010
12/20/2010: Hardware Emulation for Lowering Production Testing Costs
12/03/2010: What do you need to know for effective CDC Analysis?
November 2010
11/12/2010: The SoC Verification Gap
11/05/2010: Building Relationships Between EDA and Semiconductor Ventures
October 2010
10/29/2010: Thoughts on Assertion Based Verification (ABV)
10/25/2010: Who is the master who is the slave?
10/08/2010: Economics of Verification
10/01/2010: Hardware-Assisted Verification Tackles Verification Bottleneck
September 2010
9/24/2010: Excitement in Electronics
9/17/2010: Achieving Six Sigma Quality for IC Design
9/03/2010: A Look at Transaction-Based Modeling
August 2010
8/20/2010: The 10 Year Retooling Cycle
July 2010
7/30/2010: Hardware-Assisted Verification Usage Survey of DAC Attendees
7/23/2010: Leadership with Authenticity
7/16/2010: Clock Domain Verification Challenges: How Real Intent is Solving Them
7/09/2010: Building Strong Foundations
7/02/2010: Celebrating Freedom from Verification
June 2010
6/25/2010: My DAC Journey: Past, Present and Future
6/18/2010: Verifying Today’s Large Chips
6/11/2010: You Got Questions, We Got Answers
6/04/2010: Will 70 Remain the Verification Number?
May 2010
5/28/2010: A Model for Justifying More EDA Tools
5/21/2010: Mind the Verification Gap
5/14/2010: ChipEx 2010: a Hot Show under the Hot Sun
5/07/2010: We Sell Canaries
April 2010
4/30/2010: Celebrating 10 Years of Emulation Leadership
4/23/2010: Imagining Verification Success
4/16/2010: Do you have the next generation verification flow?
4/09/2010: A Bug’s Eye View under the Rug of SNUG
4/02/2010: Globetrotting 2010
March 2010
3/26/2010: Is Your CDC Tool of Sign-Off Quality?
3/19/2010: DATE 2010 – There Was a Chill in the Air
3/12/2010: Drowning in a Sea of Information
3/05/2010: DVCon 2010: Awesomely on Target for Verification
February 2010
2/26/2010: Verifying CDC Issues in the Presence of Clocks with Dynamically Changing Frequencies
2/19/2010: Fostering Innovation
2/12/2010: CDC (Clock Domain Crossing) Analysis – Is this a misnomer?
2/05/2010: EDSFair – A Successful Show to Start 2010
January 2010
1/29/2010: Ascent Is Much More Than a Bug Hunter
1/22/2010: Ascent Lint Steps up to Next Generation Challenges
1/15/2010: Google and Real Intent, 1st Degree LinkedIn
1/08/2010: Verification Challenges Require Surgical Precision
1/07/2010: Introducing Real Talk!

A Verification Standard for Design Reliability

Graham Bell
   Vice President of Marketing at Real Intent

The great thing about a standard is that once you decide to use it, your life as a designer is suddenly easier.  Using a standard reduces the long list of choices and decisions that need to be made to get a working product out the door.  It also gives assurance to the customer that you are following best practices of the industry.

A standard for the world of aviation electronics (avionics) is the RTCA/DO-254, Design Assurance Guidance For Airborne Electronic Hardware.  It is a process assurance flow for civilian aerospace design of complex electronic hardware typically implemented using ASICs or big FPGAs.  In the USA, the Federal Aviation Administration (FAA) requires that the DO-254 process is followed.  In Europe there is an equivalent standard called EUROCAE ED-80.

At first glance the standard seems daunting. It defines how design and verification flows must be strongly tied to both implementation and traceability. In DO-254 projects, HDL coding standards must be documented, and any project code must be reviewed to ensure it follows these standards.  They address the following issues:

  • Catching potential design problems in HDL code that may not normally surface until later in the process and may not be caught by other verification activities.
  • Supporting error detection, containment and recovery mechanisms.
  • Enforcing style and readability practices to improve code comprehension, portability, and reviews.

The specific rules or guidelines can be grouped into the following categories:

  • Coding Practices:  Ensure that a safety-critical coding style and good digital design practices are used
  • Clock Domain Crossing:  Addresses potential hazards with designs containing multiple clock zones and asynchronous clock zone transitions
  • Safe Synthesis: Checks to ensure a proper netlist is created by the synthesis tool
  • Design Reviews: Checks to make the process of design reviews and code comprehension easier

A specific guideline, “Coding Practice 6” (CP6), which ensures safe finite-state-machine (FSM) transitions, declares:

  1. An FSM should have a defined reset state.
  2. All unused (illegal or undefined) states should transition to a defined state, whereupon this error condition can be processed accordingly.
  3. In an FSM there should be no unreachable states (i.e., those without any incoming transitions) and no dead-end states (i.e., those without any outgoing transitions).

Fig.1. A single event upset (SEU) due to environmental radiation can cause an FSM to perform invalid transitions, and guideline CP6 eliminates this behavior in DO-254 compliant designs.

Guideline CP6 is an example of the granularity within the standard.  It addresses how you write state machines, the coding style you use and the conformity of the state machines to that style. Figure 1 illustrates how environmental radiation can cause incorrect behavior, and the need to prevent that.
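
The CP6 rules lend themselves to automation. Below is a minimal sketch in Python (a hypothetical checker, not any shipping lint tool) of how items 1 and 3 could be verified mechanically from an FSM's transition table:

```python
# A minimal sketch (hypothetical helper, not any shipping lint tool) of
# how CP6 items 1 and 3 can be checked mechanically from an FSM's
# transition table: flag a missing reset state, unreachable states
# (no incoming transitions), and dead-end states (no outgoing ones).

def check_fsm(states, reset, transitions):
    """transitions maps each state to the set of its next states."""
    issues = []
    if reset not in states:
        issues.append("no defined reset state")
    incoming = {nxt for nexts in transitions.values() for nxt in nexts}
    for s in sorted(states):
        if s != reset and s not in incoming:
            issues.append(f"unreachable state: {s}")
        if not transitions.get(s):
            issues.append(f"dead-end state: {s}")
    return issues

# ERROR can be entered but never left; SPARE is never entered at all.
states = {"IDLE", "RUN", "ERROR", "SPARE"}
t = {"IDLE": {"RUN"}, "RUN": {"IDLE", "ERROR"}, "ERROR": set()}
print(check_fsm(states, "IDLE", t))
# -> ['dead-end state: ERROR', 'unreachable state: SPARE', 'dead-end state: SPARE']
```

A real lint tool works on the HDL itself, of course, but the underlying graph analysis is essentially this.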

While reviews can be done manually, an automated approach (when possible) guarantees a more consistent HDL-code quality assessment.  It takes a lot of pain out of the process and makes it less daunting. Automating the HDL code assessment process has the added benefit of promoting regular HDL design-checking steps throughout the design development process, as opposed to waiting for gating design reviews in which issues can be overwhelming and more costly to address.

DO-254 compliance for HDL code is now covered by lint tools, such as Ascent Lint from Real Intent.  The design knowledge accumulated in such tools helps ensure that safety-critical designs will be successful, and automation makes them easy to adopt into an existing team's design flow.

To achieve more robust DO-254 compliance, a linter is an important foundation, but not a standalone solution. You need a suite of tools, each packed with the same kind of design intelligence.

Verification that analyzes the sequential behavior and the deeper intent of RTL code provides an additional level of checking necessary for a safety-critical design.  An autoformal tool uses proof engines to find subtle corner conditions that cannot be seen by a lint tool and could easily be missed in simulation. An X-propagation tool ensures that designs come out of reset and low-power states correctly.

A suite of focused tools greatly improves the efficiency with which established players deliver projects and also lowers the barrier to entry for new ones. The resulting competition drives higher quality.

Right now, aviation is an exciting field enabled by advanced electronics. The drone market alone, spurred by interest from the likes of Amazon and Google, is attracting multi-billion-dollar valuations. In the US, the FAA has finally defined an operational role for unmanned aerial vehicles (UAVs), albeit relatively small ones for now.

As UAVs become more commonplace, DO-254 compliance will increasingly be required of them, even if the FAA is not yet making it mandatory. DO-254 is clearly a standard for high-reliability verification in avionics, and its importance will only soar.

Aug 20, 2015 | Comments


New 3D XPoint Fast Memory a Big Deal for Big Data

Graham Bell
   Vice President of Marketing at Real Intent

After years of research, a new memory technology emerges that combines the best attributes of DRAM and NAND, promising to “completely evolve how it’s used in computing.”

Memory and storage technologies such as DRAM and NAND have been around for decades, with their original implementations able to perform only at a fraction of the level achieved by today’s latest products. But those performance gains, like most in computing, are typically evolutionary, with each generation incrementally faster and more cost effective than the one preceding it. Quantum leaps in performance often come from completely new or radically different ways of solving a particular problem. The 3D XPoint technology announced by Intel in partnership with Micron comes from the latter approach.

The initial technology stores 128Gb per die across two memory layers.

“This has no predecessor and there was nothing to base it on,” said Al Fazio, Intel senior fellow and director of Memory Technology Development.  “It’s new materials, new process architecture, new design, new testing. We’re going into some existing applications, but it’s really intended to completely evolve how it’s used in computing.”

Touted as the biggest memory breakthrough since the introduction of NAND in 1989, 3D XPoint is a new memory technology that is non-volatile like NAND, but up to 1,000 times faster, approaching speeds previously attainable only by DRAM, and with endurance up to 1,000 times better than NAND.

3D XPoint owes both its performance attributes and its name to a transistor-less three-dimensional circuit of columnar memory cells and selectors connected by horizontal wires. This "cross point" checkerboard structure allows memory cells to be addressed individually, enabling data to be written and read in smaller blocks than NAND for faster, more efficient read/write processes.
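
The addressing idea can be illustrated with a toy model (purely illustrative; the real device's selector physics and densities are far beyond this sketch):

```python
# Toy model of the cross-point idea (illustrative only, not the actual
# Intel/Micron design): each cell sits at the intersection of one
# horizontal and one vertical wire, so selecting a (row, column) pair
# addresses exactly one cell -- no per-cell access transistor, and no
# need to read or write a whole NAND-style block.

class CrossPointLayer:
    def __init__(self, rows, cols):
        self.cells = [[0] * cols for _ in range(rows)]

    def write(self, row, col, value):
        # Driving one row wire and one column wire selects a single cell.
        self.cells[row][col] = value

    def read(self, row, col):
        return self.cells[row][col]

# Two stacked layers, echoing the two-layer, 128Gb-per-die first product.
layers = [CrossPointLayer(4, 4), CrossPointLayer(4, 4)]
layers[1].write(2, 3, 1)
print(layers[1].read(2, 3))  # -> 1
print(layers[0].read(2, 3))  # -> 0 (the other layer is untouched)
```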

Game-changing Technology

Removing bottlenecks in a system is a key method to increase overall performance. Memory in particular has been a growing barrier, primarily because consistent performance gains in processors in recent years have dramatically outpaced both the speed of hard disks and the cost and density of DRAM.

“What’s exciting about the technology is that it unleashes the microprocessor. It gets more data closer to the CPU. It has 10 times the density of DRAM at near levels of performance, and it allows people running applications to have much more data available to them,” said Rob Crooke, vice president and general manager of Intel’s  Non-Volatile Memory Solutions group. “Conversely in the storage, it’s up to 1000 times faster than NAND. To put that in perspective, most people have experienced an SSD versus a hard disk, where the SSD is about 1,000 times faster than a hard disk. This new technology is going to be that same level of pop, like everything’s in memory.”

Storage versus CPU Performance

“We’ve looked at gaming performance, and it has a phenomenal impact and gives the game creators much more freedom,” explained Crooke. “As opposed to constraining their game levels to how much they can fit in memory and then loading a new level, they now have total freedom to create a much richer game experience and one that’s seamless and continuous, and they can decide if they want to break it up. It’s at their artistic and creative discretion to do that, as opposed to some physical limit like memory size.”

“It’ll be a game-changing experience not only in the client platforms, but also in the data center where they’re trying to analyze remarkable amounts of big data,” Crooke continued. “More and more data needs to be driven to the CPU to analyze faster. Having much more data available to the CPU at a very short latency is pretty exciting.”

Micron and Intel have been jointly developing this technology since 2012. There was, however, basic research on various technologies at both companies for years prior to this partnership. The research team could have tried an easier route—committing to performance and density, or performance and cost, for example—but “if you want to change something, you’ve really got to go for that tougher problem and tie them all together,” Fazio explained.

In 2012, Micron and Intel agreed to jointly pursue the most promising technologies from the research findings. Hundreds of Intel and Micron engineers have been involved in developing the technology to its current state, spanning facilities in California, Idaho and around the world. Over the last three years, the process development for this technology occurred in Micron’s state of the art 300mm R&D facility in Boise, Idaho.

“Nobody has ever attempted productizing a stackable cross point architecture at these densities.  Learning the characteristics and developing the integration methods for this novel architecture was full of engineering challenges,” said Scott DeBoer, vice president of R&D at Micron. “3D XPoint technology required the development of a number of innovative films and materials, some of which have never before been used in semiconductor manufacturing. Understanding the characteristics and sensitivities of these new materials and how to enable them was daunting.”

3D XPoint, NAND and DRAM

While 3D XPoint may have capabilities that can displace DRAM and NAND, DeBoer noted that it’s an additive technology that will co-exist with current solutions while also enabling new innovations. “DRAM will still be the best technology for most demanding highest performance applications, where non-volatility, cost and capacity are less critical. 3D NAND will still be the best technology for absolute lowest cost, where performance metrics are less critical.”

What could be a significant factor in these different memory solutions co-existing is that they can all share manufacturing facilities. “This technology is fabricated using the same manufacturing lines and methods as conventional memory technologies,” said DeBoer. “With the cross point architecture and the materials systems required for the new cell technology, some unique tooling was developed, but these requirements are on par with standard technology node introductions for NAND or DRAM. This technology is fully compatible and not disruptive to current manufacturing lines.”

Scalable Into the Future

The future of this technology looks wide-open too. “The cross point memory cell should be the most scalable architecture,” said Crooke. “It should allow us to scale the memory technology to pretty good densities yet allow it to be byte-addressable or word-addressable like memory is, as opposed to NAND, which is accessed in blocks of data.”

“Because it does not require the overhead of additional access or select transistors, the stackable cross point architecture enables the most aggressive physical scaling of array densities available,” DeBoer added.

Potential Ahead

A technological solution that paves the way for new models of computing doesn't come along very often. It took teams of hundreds of experts, countless flights, and constant open lines of communication and cooperation to make 3D XPoint technology possible.

“Micron and Intel have a long working history inside our NAND JDP and our IMFT joint venture. This made enabling the team cooperation and performance that much easier as we have already strengthened and grown the partnership in that program,” said DeBoer. “Entirely new technologies don’t come around very often, and to be part of this team was truly a once-in-a-career opportunity.”

“One of the things that we should be proud of is the persistence we’ve had over a long period of time,” Crooke added. “Working on a technology problem that you don’t know is solvable, for a sustained period of time, requires a level of confidence and stick-to-it-iveness.”

3D XPoint Die

This article is provided courtesy of IntelFreePress.

Aug 6, 2015 | Comments

Technology Errors Demand Netlist-level CDC Verification

Dr. Roger B. Hughes
   Director of Strategic Accounts

Multiple asynchronous clocks are a fact of life on today’s SoC. Individual blocks have to run at different speeds so they can handle different functional and power payloads efficiently, and the ability to split clock domains across the SoC has become a key part of timing-closure processes, isolating clock domains to subsections of the device within which traditional skew-control can still be used.

As a result, clock domain crossing (CDC) verification is required to ensure logic signals can pass between regions controlled by different clocks without being missed or causing metastability. Traditionally, CDC verification has been carried out on RTL descriptions on the basis that appropriate directives inserted in the RTL will ensure reliable data synchronizers are inserted into the netlist by synthesis. But a number of factors are coming together that demand a re-evaluation of this assumption.

A combination of process technology trends and increased intervention by synthesis tools in logic generation, both intended to improve power efficiency, is leading to a situation in which a design that is considered CDC-clean at RTL can fail in operation. Implementation tools can fail to take CDC into account and unwittingly increase the chances of metastability.

Various synthesis features and post-synthesis tools will insert logic cells that, if used in the path of a CDC, conflict with the assumptions made by formal analysis during RTL verification. Test synthesis will, for example, insert additional registers to enable inspection of logic paths through JTAG. Low-power design introduces further issues through the application of increasingly fine-grained clock gating. The registers and combinatorial cells these tools introduce can disrupt the proper operation of synchronization cells inserted into the RTL.

The key issue is that all clock-domain crossings involve, by their nature, asynchronous logic, and one of the hazards of asynchronous logic is metastability. Any flip-flop can be rendered metastable: if its data input toggles at the same time as the sampling edge of the clock, the register may fail to capture the correct input and instead become metastable. The state of the capturing flop may not settle by the end of the current clock period, presenting a high chance of feeding the wrong value to downstream logic (Fig 1).

FIGURE 1. When data is still changing as a clock changes, the output can become metastable

Metastability trends

The risk of metastability with asynchronous logic is always present. Designers can ensure that their designs are unlikely to experience a problem from metastability by increasing the mean time between failures (MTBF) of each synchronizer.

EQUATION 1. The governing equation of Mean Time Between Failures

The MTBF varies with the settling time of the signal, the time window over which data is expected to settle to a known state, the clock frequency, the data frequency, and the resolution time-constant of the synchronizer, written as τ (tau). The parameter τ depends primarily on the capacitance of the first flip-flop in the synchronizer divided by its transconductance. MTBF exhibits an exponential dependence, being proportional to e^(t/τ), where t is the time allowed for the signal to settle. The value of τ tends to vary with both process technology and operating temperature, because temperature affects drain current, which in turn affects transconductance. The MTBF can drop by many orders of magnitude at temperature extremes, making a failure far more likely.
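To get a feel for these sensitivities, here is a minimal sketch of the standard synchronizer MTBF formula in Python. The parameter values are invented for illustration only and do not correspond to any particular process:

```python
import math

def synchronizer_mtbf(t_settle, tau, t_window, f_clk, f_data):
    """Standard synchronizer MTBF: e^(t/tau) / (T_w * f_clk * f_data).

    t_settle : time available for the first flop to resolve (s)
    tau      : resolution time-constant of the flop (s)
    t_window : metastability capture window (s)
    f_clk    : receiving clock frequency (Hz)
    f_data   : data toggle frequency (Hz)
    """
    return math.exp(t_settle / tau) / (t_window * f_clk * f_data)

# Hypothetical numbers: 500 MHz receive clock, 100 MHz data,
# 30 ps capture window, 20 ps tau, ~1.9 ns of settling time.
base = synchronizer_mtbf(1.9e-9, 20e-12, 30e-12, 500e6, 100e6)

# If tau degrades by 25% (e.g. at a temperature extreme),
# MTBF collapses by many orders of magnitude, not by 25%,
# because tau sits inside the exponent.
hot = synchronizer_mtbf(1.9e-9, 25e-12, 30e-12, 500e6, 100e6)
print(base / hot)  # ratio is roughly e^19, about 10^8
```

The same exponent explains why adding a second flop in series (doubling the available settling time) is so effective, and why removing one, as some implementation tools inadvertently do, is so dangerous.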

Technology evolution has generally improved τ, making it less significant as a parameter over the past decade or more, but the property is beginning to become significant again in more advanced nodes because of the failure of some device parameters to scale.

Designs that would probably not have experienced failure before are now at risk of suffering from metastability issues. Coupled with the need for higher performance, MTBF for CDC situations needs to be monitored carefully. Automatically inserted logic can introduce problems for the synchronizer, because register depth and organization affects MTBF. Tools need to be able to take these effects into account if they are to insert cells that reduce the probability of metastability. Further, logic inserted ahead of the synchronizer can introduce glitches that are mistakenly captured as data by the receiver in the other clock domain. Therefore information about the implementation is vital to guarantee performance during CDC checks. The following examples show some of the situations that can arise due to logic insertion by implementation tools.

Example implementation errors

Implementation tools can introduce a number of potential hazards by failing to take CDC into account. Additional registers inserted by test synthesis, for example, can result in glitches on clock lines that can lead to an increased probability of mis-timing issues (Fig 2).

FIGURE 2. The addition of test logic post-synthesis can make mis-timing more likely

Clock-gating cells inserted by synthesis tools to reduce switching power may also be incompatible with a good CDC strategy. A combinatorial cell such as an AND gate, placed after the register that passes a clock signal across the boundary to drive the receiving registers, makes the path more likely to experience glitches (Fig 3).

FIGURE 3. Clock-gating logic may be susceptible to glitches

Timing optimization can result in significant changes in logic organization. The optimizer may choose to clone flops so that the path following each flop has a lower capacitance to drive, which should improve performance. If the flops being cloned form part of a synchronizer, this can result in CDC problems. A better way of handling the situation is to synchronize the signal first, and then to duplicate the logic beyond the receiving synchronizer (Fig 4).

FIGURE 4. The introduction of additional flops in parallel to help meet timing can increase the probability of metastability and create correlation issues

The introduction of test logic may even result in the splitting of two flops intended for synchronization. In other situations, optimization of control logic or the use of non-monotonic multiplexer functions can result in the restructuring of CDC interfaces and introduce the potential for glitches (Fig 5).

FIGURE 5. Control logic optimizations may introduce glitches

Because of these possibilities, CDC verification needs to occur at both RTL and netlist level; any solution that does not perform netlist-level verification is incomplete. An effective strategy is to ensure that the design is CDC-clean at RTL and then to use physical-level CDC checks on the netlist, so that problems created by the various implementation tools are trapped and solved using a combination of structural and formal techniques. Tools such as Meridian Physical CDC take the full netlist into account, which in modern designs can run to hundreds of millions of gates, ensuring that a design signed off for CDC at RTL remains consistent with its actual implementation.

This article was originally published on TechDesignForums and is reproduced here by permission.

Jul 30, 2015 | Comments

Video: SoC Requirements and “Big Data” are Driving CDC Verification

Graham Bell
   Vice President of Marketing at Real Intent

Just before the Design Automation Conference in June, I interviewed Sarath Kirihennedige and asked him about the drivers for clock-domain crossing (CDC) verification of highly integrated SoC designs, and the requirements for handling the “big data” this analysis produces. He discusses these trends and how the 2015 release of Meridian CDC from Real Intent meets the challenge.

He does this in under 5 minutes!   You can see it right here…

Jul 23, 2015 | Comments

50th Anniversary of Moore’s Law: What If He Got it Wrong?

Graham Bell
   Vice President of Marketing at Real Intent

Electronics, April 16, 1965

On April 19, 1965, Electronics magazine published an article that would change the world. It was authored by Fairchild Semiconductor’s R&D director, who made the observation that transistors would decrease in cost and increase in performance at an exponential rate. The article predicted the personal computer and mobile communications. The author’s name was Gordon Moore, and the seminal observation was later dubbed “Moore’s Law.” Three years later he would co-found Intel. The law defines the trajectory of the semiconductor industry, with profound consequences that have touched every aspect of our lives.

The period is sometimes quoted as 18 months because of Intel executive David House, who in 1975 predicted that chip performance would double every 18 months, a combination of the effect of more transistors and their faster switching.

What if Gordon Moore had gotten his math wrong, and instead of the number of components on an integrated circuit doubling every couple of years, it doubled every three?

If we play out that scenario, we’d be back in 1998, the year Google was founded and Facebook’s Mark Zuckerberg was 14 years old. Apple discontinued development of the Newton computer, Synopsys had just acquired EPIC and Viewlogic, and Cadence was buying Quickturn. Intel had released the Pentium II microprocessor with around 8 million transistors in a 250nm process. Here are some more consequences of a slower growth rate:

  • No modern smartphones (3.6 billion in use today)
  • No social media as we know it (Twitter started nine years ago when the New Horizons Pluto mission was launched)
  • Lower fuel efficiency and higher CO2 emissions
  • The World Wide Web would still be a youngster, e.g. no YouTube
  • Lower agricultural output
  • Higher mortality rates
  • Renewable power would not be commercially viable, e.g. no solar panels on my house
  • China would not yet be the world’s manufacturing center
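The 1998 figure is simple back-of-envelope arithmetic: under a three-year cadence, 2015 would be about 16.7 doublings past 1965, and the real two-year cadence reached that same transistor count around 1998. A quick sketch (taking 1965 as the baseline and ideal doubling periods, both simplifications):

```python
start, now = 1965, 2015

# Doublings reached by 2015 under a hypothetical 3-year cadence
doublings = (now - start) / 3          # ~16.7 doublings

# Year at which a 2-year cadence hits the same transistor count
equiv_year = start + 2 * doublings
print(round(equiv_year, 1))            # 1998.3
```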

Fortunately(?), we live in 2015 and not 1998.  Moore’s Law has continued to hold up after 50 years.  But like all exponential growth curves in the real world, it will eventually saturate its ecosystem and grow no further.  Where are we with Moore’s Law?

One positive sign is the announcement of 7nm test chips by IBM researchers.   The transistors were silicon-germanium channel types and extreme ultraviolet (EUV) lithography was used to fabricate the chips.  Designs employing 20 billion transistors will be possible.  Commercial availability is at least two to three years away.

According to Wally Rhines of Mentor Graphics, Moore’s Law is a special case of the engineering learning curve.  He says that as long as we can shrink feature sizes and increase wafer diameters fast enough, we can stay on the learning curve. Sooner or later we will have to do other things, because shrinking feature sizes will become too expensive, and we will need other methods in addition to shrinking feature sizes to keep ahead.

And what will those other methods be?   That is a topic for another article.

Happy 50th Birthday to Moore’s Law!

Gordon Moore

Jul 16, 2015 | Comments

The Interconnected Web of Work

Ramesh Dewangan
   Vice President of Application Engineering at Real Intent

“Imagine stepping into a car that recognizes your facial features and begins playing your favorite music. A pair of gloves that knows the history of your vehicle from the time of its inception as a lone chassis on the factory floor. “ –Doug Davis on IoT@Intel

Trends in the Internet of Things (IoT) have been fascinating to follow.

In my last blog on the topic I mentioned the 4 challenges facing an IoT system as spelled out by James Stansberry, SVP and GM, IoT Products, Silicon Labs: functionality, energy, connectivity and integration.

Four elements make up successful IoT hardware

This had me thinking… Does this paradigm apply only to the hardware of IoT?

Let us look at a typical team in our workforce. The success of any work team depends on:

1. Skill set – This relates to functionality in the IoT diagram. Each team member brings unique skills (functionality) to the system. The team is successful only if you have the right mix of skills (is functionally complete).

In most EDA product development I have been involved in, we had an architect, a few software developers, some product engineers, a technical marketeer, a tech writer, a build/regression owner, and so on. Everyone on the team brought a unique set of skills to the table. Any time staff turnover cost us one specific skill, the team's output suffered.

2. Energy – This term is especially relevant to work teams. Energy denotes the drive, enthusiasm and motivation people bring to their work together. The lower the team's energy, the poorer its performance will be; likewise, poor energy efficiency and wasted power will not work for IoT. A high energy level (efficiency) is a key enabler for team success, and the same is true for IoT hardware.

In one company I worked for, we started as a highly energetic team excited about our engineering project, but as management changed and the company grew larger, the energy started dwindling and the product took longer to ship. Great leaders detect this negative spiral and take corrective action before it is too late.

3. Interaction – This relates to connectivity in the IoT diagram. The greater the synchronization among team members, the higher the team's output; likewise, better connectivity enables greater IoT bandwidth and better results. Just as infrequent and poor communication can bring team performance to a crawl, poor connectivity can kill an IoT system.

In one case, I had one of the smartest guys on the team, but he wouldn’t get along with anyone else. The conflict reached the point where the other team members began leaving the group. Our product delivery date slipped by six months.

4. Integration – The same terminology applies to both teams and IoT. The more team members integrate with each other, understand each other, and have mutual trust and respect, the better the team performs. If team members are on different wavelengths, the team will perform poorly. Likewise, poor integration of IoT components will likely lead to a failed product.

I joined a Silicon Valley company after an international move to the United States. I was new to the culture and surroundings. In a team meeting, our Vice President told us he wanted team members to perform like Jerry Rice. I was astounded. I looked Jerry Rice up on the internet and figured out who he was. That was a poor way to integrate a diverse team.

I have tried to make the analogy that work teams and the Internet of Things share the same components for success. If we understand why and how work teams function well, I think we can design better IoT systems! We had better do this quickly, because the IoT is here to stay.

“If you think that the internet has changed your life, think again. The IoT is about to change it all over again!” — Brendan O’Brien, Chief Architect & Co-Founder, Aria Systems

Jul 9, 2015 | Comments

In Fond Memory of Gary Smith

Graham Bell
   Vice President of Marketing at Real Intent

A long-time EDA industry analyst, Gary Smith, passed away on Friday, July 3, 2015 after a short bout of pneumonia in Flagstaff, Arizona.  He died peacefully surrounded by his family.

Gary Smith, USNA graduate, in 1963

Gary was from Stockton, CA and graduated from the United States Naval Academy in 1963 with a bachelor of science degree in engineering.  His class yearbook says: “He managed to maintain an average grade point despite the frantic efforts of the Foreign Language Department. Tuesday nights found Gary carrying his string bass to NA-10 practice.”  Gary continued to be a musician and played his electric bass for years, alongside other industry figures, with the Full Disclosure Blues band at the Design Automation Conference Denali party.  The band started out of a jam session in 2000 with Grant Pierce, who asked Gary to help put together a group for the following DAC.  Gary suggested Aart de Geus as lead guitar, and Aart ended up giving the band its name.

Gary got into the world of semiconductors in 1969. He had roles at the following companies:

LSI Logic, Design Methodologist (and RTL evangelist), 2 years
Plessey Semiconductor, ASIC Business Unit Manager, 3 years
Signetics, various positions, 7 years

In 1994 he retired from the semiconductor industry and joined Dataquest, a Gartner company, to become an Electronic Design Automation (EDA) analyst.  Gary described his retirement this way: “instead of having to worry about Tape Outs and Product Launches, I get to fly around the world and shoot off my big mouth (which I seem to be good at) generally playing the World’s Expert role. Obviously there isn’t much competition. Now if I could only get my ‘retirement’ under sixty hours a week I’d be happy.”

In 2003, Gary had a health scare, which was caught early and successfully treated.  Also around that time, he met his future wife, Lori Kate.  I won’t share all the cute details of their story, but Gary’s charm eventually won her over, and they were married in July 2004. They became the parents of a bouncing baby boy, Casey Carlisle Smith, in Sept. 2005.

In late 2006 Gartner shut down Gary’s analyst group, and he decided to reform under the name Gary Smith EDA. The only thing that changed, according to Gary, was that the corporate stuff no longer interfered with his real work, and he finally got his work week down to 40 hours.

He was most recently a member of the Design TWG for the International Technology Roadmap for Semiconductors (ITRS), editorial chair of the IEEE Design Automation Technical Committee (DATC), and a member of the DAC Strategic Committee. He was also past chair of the IEEE Electronic Design Processes conference. Gary was quoted and published numerous times in all the electronics publications, including EE Times, EDN and Electronic Business, in addition to the Wall Street Journal and Business Week. In 2007, he received an ACM SIGDA Distinguished Service award.

Over the last 15 years, I personally enjoyed getting together with Gary each spring, in anticipation of the upcoming DAC show, to give him an update on what was new in the EDA products I was marketing.  Gary always listened politely and then gave his opinion on what he thought was going to work or not.  If Gary liked what you were doing, your company was mentioned on his famous What to See @ DAC List.

I will miss Gary’s wit and insights and promotion of the electronic system level (ESL) for design.

To provide a loving memorial for his son Casey and granddaughters, Rachel and Shannon, his wife Lori Kate and the family kindly request that you share your favorite stories and/or pictures with

A memorial service is in planning for the late morning of Sunday, July 12th in San Jose, CA.  If you are able to attend please send an email with the size of your party so that the appropriate arrangements can be made.  All are welcome. To contact Lori Kate, please use

Jul 6, 2015 | Comments

Richard Goering and Us: 30 Great Years

Graham Bell
   Vice President of Marketing at Real Intent

Richard Goering at his 30th DAC, San Francisco in 2014

Richard Goering, the EDA industry’s distinguished reporter and most recently a Cadence blogger, is finally closing his notebook and retiring from the world of EDA writing after 30 years.  I can’t think of anyone who is more universally regarded and respected in our industry, even though all he did was report on and analyze industry news and developments.

Richard left Cadence Design Systems at the end of June (last month).  According to his last blog posting EDA Retrospective: 30+ Years of Highlights and Lowlights, and What Comes Next he will be pursuing a variety of interests other than EDA. He will “keep watching to see what happens next in this small but vital industry”.

When Richard left EETimes in 2007, there was universal hand-wringing and distress that we had lost a key part of our industry.  John Cooley did a Wiretap post on his DeepChip web-site with contributions from 20 different executives, analysts and other media heavyweights.  Here are a just few quotes that I picked out for this post:

Richard was a big supporter of start-ups and provided the best coverage that this industry could ever get.

– Rajeev Madhavan of Magma

Richard has been a cornerstone of the EDA industry since I was on the customer side. He was never influenced by hype; he looked for content. I have always appreciated his objectivity, recognizing that his analysis would go beyond the superficial aspects of an industry event or product announcement and search for the real impact.

– Wally Rhines of Mentor

Goering has been an icon for the EDA industry since I first became aware of what EDA was. EDA is an industry with somewhat loose definitions. Just as you can say that RTL is defined by what Design Compiler accepts, you can say that EDA is defined by what Richard Goering covers. If he stops covering it, will it stop being EDA?

– John Sanguinetti

Like Rajeev Madhavan, I also experienced Richard's great support for my startup, back in 1999.  A few of us had founded a formal verification company called HDAC (later Averant) and we were very surprised to end up on the front page of EETimes when we launched.  Richard was indeed THE reporter at the number 1 industry publication.

You will want to read Richard’s last blog post.  His retrospective covers

1985, CAE/CAD, Daisy, Mentor, Valid, Orcad, Gates-to-RTL, function verification, ESL, high-level synthesis, lawsuits, standards wars, DFM, brain drain, and Where is EDA Headed?

For the last six years, Richard’s steady hand has covered industry trends and developments on behalf of Cadence. Never one for hyperbole or exaggeration, he was always a good read.

Goodbye Richard.  You will be very much missed.

Jul 1, 2015 | Comments

Quick 2015 DAC Recap and Racing Photo Album

Graham Bell
   Vice President of Marketing at Real Intent

This year's Design Automation Conference in San Francisco was excellent!   You don’t have to take my word for it.  At the Industry Liaison Committee meeting for DAC exhibitors on Thursday, June 11, the various members agreed that show traffic was up and that the quality of the customer meetings exceeded expectations.  Why is that?  It is in large part due to the tremendous efforts of Anne Cerkel, senior director for technology marketing at Mentor Graphics, who was the general chair for the 52nd DAC.

One innovation at this year’s show was opening the exhibitor floor at 10 am.  This made it more convenient to see the morning keynotes and gave attendees more flexibility in commuting to the show from around the Bay Area.  I think you can expect to see this again at the 53rd DAC show in Austin, Texas.

Our two GRID racing car simulators were one reason the show was excellent for Real Intent.  We were able to draw a large crowd to our booth.  Budding race car drivers could challenge their friends and colleagues to a race and enjoy our license-to-speed verification solutions.  A special thank-you to Shama Jawaid and the team at OpenText, our partner for the license-to-speed promotion.

Here are some quick photos from the show for you to enjoy.

Our booth hostesses Crisca and Costina with their mother Chau


Happy Booth Staff

Jun 12, 2015 | Comments