Informal, Unformal, or Appformal? …and new FormalWorld.org
Graham Bell, Vice President of Marketing at Real Intent
In the run-up to the Design and Verification Conference in San Jose at the beginning of March, a lot of activity was happening in the online world in preparation for the big meetup of the verification community.
First, the DeepChip.com web-site published a set of five articles that surveyed the world of formal verification in EDA, written by Jim Hogan of Vista Ventures, a Silicon Valley investment firm. Jim is currently on the board of OneSpin, a formal tools company. Knowing Jim, he did his homework before getting involved with them. If you read the articles, which total 12,000 words, you will agree with me that there is a lot of great content here. If a technical writer charged for producing this, you would be looking at a bill close to $20,000.
These days you have two general ways to verify the functionality of your RTL with formal. You either write your own properties and then feed them and the RTL to a formal property verifier (FPV) tool, or you have an application-focused formal tool automatically read and apply properties to your design.
In the early days of Real Intent we had our own FPV tool. However, by the late 2000s we realized that designers needed specific solutions to verification problems. And so instead of attacking the general verification market, we focused on clock-domain crossing verification and automatic code checks for common RTL problems with FSMs, dead code, etc. This led to our existing suite of Meridian and Ascent products, which cover five different categories.
In his fourth article, “16 Formal Apps that make Formal easy for us non-Formal engineers,” Jim uses the “apps” moniker to identify this category. Our CTO, Pranav Ashar, takes exception to this term. In his view, “app” has been subverted to convey a pejorative sense, as in “It is just an app!” It has become something to download to your phone and use without a second thought, with a correspondingly low value.
Our Meridian clock-domain crossing sign-off tool fits into the CDC app category, but there is a substantial amount of software technology under the hood to make it effective for a wide range of designs. If CDC is an “app,” so is static timing analysis (STA). But nobody calls STA an “app”!
Perhaps this new category of problem-focused formal applications, which are easy to use, needs a different label. Since these new formal tools are targeted at ordinary engineers and not PhDs, perhaps we could call these solutions “informal,” “unformal” or even “appformal.”
What do you think?
Second, a new web-site called FormalWorld.org has just been launched. It is an open and free community, and anyone interested in Formal Verification is invited to post relevant and useful information, or write a blog. It is intended to make it easy to explore and learn about formal, as well as to share experience and research with others. It is sponsored by OneSpin but has pointers and content for all the formal verification suppliers, including Real Intent. OneSpin “will take a hands-off role in its management.”
The site is managed by Jan Kuster, an independent consultant. Contact Jan Kuster with content suggestions or ideas at ContactUs@FormalWorld.org.
I looked around the site and I guarantee you will see something that will interest you.
I did read the blog post by Elchanan Rappaport of Gila Logic. Titled “Formal Evaluations, or the Fine Art of Shooting Yourself in the Foot,” it gives a real-world account of introducing an FPV tool to a company and the trouble that can arise. A great balance to Jim’s articles; I recommend that everyone considering an FPV tool read it and learn from others’ experience.
Graham Bell, Vice President of Marketing at Real Intent
The Design and Verification Conference in Silicon Valley delivered the goods again this year. Here are some quick highlights from the show.
Graham talking to Koko and Pippa at the Oski Tech booth.
Wally Rhines, Chairman and CEO of Mentor Graphics gave another engaging, informative and amusing keynote presentation. Wally took us down through the years with his talk titled “Design Verification Challenges: Past, Present and Future.” Here is the summary slide of Wally’s presentation that captures the major ideas of his talk.
On Tuesday evening, Jim Hogan spoke with Ajoy Bose, former CEO of Atrenta, on his career building multiple businesses in the technology industry. He started with AT&T and then later on founded Interra which led to the inception of Atrenta and the SpyGlass product. He mentioned the importance of two mentors in this success. I know only some small pieces of Ajoy’s story and found his history to be in many ways the history of Silicon Valley. Ajoy’s one-on-one with Jim was video recorded, and will be available in the near future on the EDA Consortium web-site.
On Wednesday there were two conference panels and a sponsored lunch panel by Cadence. The first panel, on “Redefining ESL,” certainly exposed the reality that ESL (Electronic System Level) is not just 4 or 5 technology components but more like 15 or more. This makes the transition from gate-level to RTL design, which involved just a small set of technologies, much less daunting than the transition from RTL to ESL design. Panelists from the EDA companies said we are getting there in incremental steps. Those in the audience at the microphone pushed for an even wider definition of ESL. This panel was also video recorded and will be available at a future date.
The sponsored lunch panel was on the topic “Software Driven Verification with Portable Stimulus: The Next Productivity Leap Enabling the Continuum of Verification Engines.” Frank Schirrmeister of Cadence Design Systems was host and presented how the new portable stimulus standard will be useful in keeping different verification engines working with maximum efficiency by using a high-level stimulus definition. The panel was also video recorded and will be available at a future date.
In the afternoon on Wednesday was the panel sponsored by Real Intent and hosted by Jim Hogan: “Emulation + Static Verification Will Replace Simulation.” At the beginning it seemed all the panelists were in agreement that we were not going to replace simulation. Panelist Brian Hunter from Cavium was particularly adamant that this was the case, and that simulation is necessary for clearing the maximum number of bugs at the lowest cost. Lauro Rizzatti, a veteran emulation executive, made his pitch that we should be using more emulation and that it is the most cost-effective way to get to design sign-off. Pranav Ashar made the point that only static verification can handle some of the problems facing design teams, and used the example of CDC verification. This panel was also video recorded and will be available at a future date.
The next DVCon events are in India and Europe, in September and October respectively. I am sure they will be as engaging and informative as our Silicon Valley event was this week.
Free Panels and Keynote at DVCon in Silicon Valley
Graham Bell, Vice President of Marketing at Real Intent
You will be glad to know that the free Exhibits-Only registration for the Design and Verification Conference (DVCon) that is taking place Feb. 29 through Mar. 3, gives you access to the Tuesday keynote by Wally Rhines of Mentor, and the two panels on Wednesday. And don’t miss the Tuesday evening reception hosted by EDAC, which finishes with Jim Hogan speaking with Dr. Ajoy Bose (Atrenta) about his experiences building multiple successful companies. Your DVCon registration gives you free access to this event.
Here are more details on the panels, one of which is organized by Real Intent.
Emulation + Static Verification Will Replace Simulation — Wednesday March 02, 1:30pm – 2:30pm | Oak/Fir
Moderator: Jim Hogan – Vista Ventures
Organizer: Graham Bell – Real Intent, Inc.
Emulation and static verification have both been on a tear lately. With processor frequency at a plateau of a few GHz and the “processor + system architecture + software” combine still catching up to the parallelism imperative, emulation has stepped up to fill the void nicely. Almost all chips go through some combination of emulation or FPGA prototyping prior to product release. With a cloud-based pay-as-you-go model, emulation doesn’t even have to be expensive. Emulation is all about speed – the only way to push enough stimuli through a high-end SOC.
Likewise static verification is also on a steep upward spiral with almost universal adoption of targeted tools for sign-off verification problems like CDC as well as increasing adoption for problems like power management, reset analysis, X-verification, timing exceptions, security, SOC integration etc. System-level functional formal verification has been on a slower but also positive adoption trajectory. On verification problems where they work well, static methods have come to deliver enhanced productivity and sign-off level confidence. Static tools ensure that design quality is already extremely high before simulation or emulation is started.
Maybe the verification paradigm of the future is to invest in high-end targeted static verification tools to get the design to a very high quality level, followed by very high-speed emulation or FPGA prototyping for system-level functional verification. Where does that leave RTL simulation? Between a rock and a hard place! Gate-level simulation is already marginalized to doing basic sanity checks. Maybe RTL simulation will follow. Or will it?
Ashish Darbari – Imagination Technologies Ltd.
Richard Ho – Google, Inc.
Lauro Rizzatti – Consultant
Brian Hunter – Cavium, Inc.
Steven Holloway – Dialog Semiconductor
Pranav Ashar – Real Intent, Inc.
Moderator: Brian Bailey – Semiconductor Engineering
Organizers: Dave Kelf – OneSpin Solutions GmbH, Nanette Collins – Nanette V. Collins Marketing and Public Relations
Brian Bailey of Semiconductor Engineering recently wrote an article titled “What ESL Is Really About.” ESL is not a design flow, he noted; it is a verification flow, and it will not take off until the industry recognizes that. With as many views as there are fragmented pieces of ESL, panelists will have plenty of angles to consider as they discuss raising the abstraction from the register transfer level (RTL) for both design and verification. For example, HLS raises design abstraction, but only works well for certain parts of the design. Portable stimulus raises the abstraction of test specification. Formal works at this level because assertions can be written similarly to parts of the specification. It is a fragmented mess, with different inputs to the main design effort that starts at RTL, and it will make for a lively debate. DVCon attendees are invited to join Brian and a panel of distinguished experts who will attempt to define ESL verification, from tools to flows. They will attempt to answer: how, or when, can all the disparate pieces be brought together, or is that even necessary?
Adnan Hamid – Breker Verification Systems, Inc.
Dave Pursley – Cadence Design Systems, Inc.
Bryan Bowyer – Mentor Graphics Corp.
Simon Davidmann – Imperas Software Ltd.
Raik Brinkmann – OneSpin Solutions GmbH
Patrick Sheridan – Synopsys, Inc.
The Times They are a-Changin’: Gravity Waves, Moore’s Law, and Record Basketball
Graham Bell, Vice President of Marketing at Real Intent
Big changes happened this week.
First, gravitational waves have been detected for the first time, as announced on Feb. 11. This confirms a major prediction of Albert Einstein’s 1915 general theory of relativity and opens an unprecedented new window onto the cosmos. Physicists have concluded that the detected gravitational waves were produced during the final fraction of a second of the merger of two black holes to produce a single, more massive spinning black hole. This collision of two black holes had been predicted but never observed.
The gravitational waves were detected by both of the twin Laser Interferometer Gravitational-wave Observatory (LIGO) detectors, located in Livingston, Louisiana, and Hanford, Washington, USA. The LIGO Observatories were conceived, built, and are operated by Caltech and MIT.
Last March, researchers completed major upgrades to the interferometers, known as Advanced LIGO, increasing the instruments’ sensitivity and enabling them to detect a change in the length of each arm smaller than one-ten-thousandth the diameter of a proton(!). By September, they were ready to start new observations and then saw the black hole merger.
Physicists are understandably excited:
‘We are really witnessing the opening of a new tool for doing astronomy,’ MIT astrophysicist Nergis Mavalvala said in an interview. ‘We have turned on a new sense. We have been able to see and now we will be able to hear as well.’
‘This is the holy grail of science,’ said Rochester Institute of Technology astrophysicist Carlos Lousto. ‘The last time anything like this happened was in 1888 when Heinrich Hertz detected the radio waves that had been predicted by James Clerk Maxwell’s field-equations of electromagnetism in 1865,’ added Durham University physicist Tom McLeish.
Second, the current issue of Nature discusses the demise of Moore’s law. Next month, the worldwide semiconductor industry will formally acknowledge what has become increasingly obvious to everyone involved: Moore’s law, the principle that has powered the information-technology revolution since the 1960s, is nearing its end.
Chip development is complicated. If you don’t have all of the technology, equipment and processes in place you cannot move to a smaller node. The US semiconductor industry launched a roadmap effort in 1991, with hundreds of engineers from various companies working on the first report and its subsequent iterations. In 1998, the effort became the International Technology Roadmap for Semiconductors, with participation from industry associations in Europe, Japan, Taiwan and South Korea.
Every time the scale is halved, manufacturers need a whole new generation of ever more precise photolithography machines. Building a new fab line today requires an investment typically measured in many billions of dollars — something only a handful of companies can afford. And the fragmentation of the market triggered by mobile devices is making it harder to recoup that money. “As soon as the cost per transistor at the next node exceeds the existing cost, the scaling stops,” says Bill Bottoms, president of Third Millennium Test Solutions, an equipment manufacturer in Santa Clara. Many observers think that the industry is perilously close to that point already. “My bet is that we run out of money before we run out of physics,” says Daniel Reed, a computer scientist and vice-president for research at the University of Iowa.
With this sea-change, this year’s report, in keeping with its new approach, will be called the International Roadmap for Devices and Systems.
Third, the Golden State Warriors NBA basketball team became the first ever to reach the mid-season All-Star break with a record of 48 wins and only 4 losses. If they keep up this pace, they will set a new record for the most wins by any team from any era in the NBA. Go Warriors!
We give the final word to Bob Dylan and his song The Times They are a-Changin’. It was recorded in 1964, one year before the inception of Moore’s law.
Super Bowl 50, and Semiconductor and Design Predictions for 2016
Graham Bell, Vice President of Marketing at Real Intent
The game will be played in the Silicon Valley city of Santa Clara.
Super Bowl 50 is being played this Sunday, Feb. 7 to determine the champion of the National Football League (NFL) for the 2015 season. It will be held at Levi’s Stadium in Santa Clara, California, between the National Football Conference (NFC) champion Carolina Panthers and the American Football Conference (AFC) champion Denver Broncos. The game will be played about 6 miles from my house in the East Bay, but I will be enjoying the event on television. There will be the usual collection of interesting commercials and the half-time entertainment will include the band Coldplay and feature Beyoncé and Bruno Mars. My prediction is that the Panthers will beat the spread and defeat the Broncos by more than 6 points. I have not made any cash bets. Yet.
The Global Semiconductor Industry Survey from KPMG LLP that was published in December, 2015 identifies a number of important trends for the year 2016 based on the responses of semi industry CEOs. I have highlighted the ones I think are most important for the EDA and Design communities and put my comments in parentheses ():
93% of semiconductor leaders expect the rapid M&A pace to continue in 2016, compared to only 83% in 2015. (In a conversation I had with one semiconductor CEO recently, he said there is $1,000 billion in capital available for supporting M&A, and that we should see much more activity.)
Most of the Semi consolidation will happen in the Americas.
The expectation of short-term revenue growth is down, with only 71% forecasting an increase; 81% of last year’s CEOs saw an increase. (Gartner, in their analysis of the industry as a whole, predicts a slightly positive outlook for 2016. Revenue for 2015 is estimated to have declined 1.9%, but semiconductor revenue will increase 1.9% to $344 billion in 2016. However, Gartner forecasts an oversupply of DRAM in 2016. DRAM revenue is expected to decline 12% in 2016 due to weak pricing.)
China will be the most important region for revenue growth. (This is not surprising with China’s program for the Development of the IC Industry which plans to accelerate China’s efforts in several areas, such as 14nm finFETs, advanced packaging, MEMS and memory. As part of the plan, China created a $19.3 billion fund which will be used to invest in its domestic IC firms.)
Microprocessors will be the sector with the highest growth opportunities, followed by sensors and memory. The end markets with the highest growth opportunities are forecast to be networking and communications, computing, and automotive.
R&D spend is expected to increase for 62% of respondents, which is down from 83% the previous year. (This is a source of concern for EDA companies, since their revenue growth typically lags R&D spending growth by 6-9 months.)
With the expected consolidation, there will be fewer companies purchasing EDA tools. This will benefit the Big Three companies, and will put additional pressure on Tier 2 and startup companies to differentiate their offerings and maintain product pricing.
The NAND process roadmap from TechInsight below gives us a preview of what will happen in 2016 for technology development. Samsung and Toshiba/SanDisk will introduce their 12(10) nm technology node to the memory marketplace. Micron/Intel and SK Hynix will be playing catch-up to reach Samsung and Toshiba/SanDisk in the 3D memory space. The introduction of their 32-level offerings will provide 192Gb parts in the marketplace.
Timeline for smaller process nodes and increasing layers of 3D ICs.
We have talked about consolidation in the semiconductor industry, and we should consider what the trend will be for EDA companies. The Big Three will continue to expand their portfolios by acquiring companies that are part of the electronic system design ecosystem, rather than focusing on traditional EDA companies. Automotive, embedded software, software design, and design IP will be target areas that receive attention in 2016, so we should see acquisitions there.
Verification companies like OneSpin and Real Intent will become more visible in 2016 as alternatives to the Big Three. Their turn-on-a-dime support will distinguish them from the “take a ticket and wait” model of the larger companies. In addition, they will expand the kinds of failures they can quickly identify.
What about the broad adoption of formal technologies in verification? The Wilson Research and Mentor Graphics 2014 Functional Verification Study gives us a strong trend as shown in the following graphic.
Automatic formal applications like CDC are in the fastest growing EDA segment.
The adoption of static verification for design projects has jumped 62% in two years from 2012 to 2014. This growth rate makes it one of the fastest growing segments in functional verification. I predict this trend will continue and in 2016 automatic formal will exceed the adoption of formal property checking (where designers write their own properties).
What is in this automatic formal category? Harry Foster of Mentor Graphics identifies a list of these applications including SoC integration connectivity checking, deadlock detection, X semantic safety checks and coverage reachability analysis. I would add clock-domain crossing, reset-domain crossing, X-optimism and X-pessimism analysis, timing constraint validation, and lint checks to the list as well.
The continuing growth in the size of SoCs, and in adoption of heterogeneous IPs is clearly driving this trend. To complete functional verification of these large designs, teams are turning to emulation and FPGA prototyping. We can see the adoption by design size in the following graphic. It is expected that FPGA prototyping will grow in 2016 for those designs over 80M gates as easier to use solutions are introduced to the marketplace.
CDC Verification of Fast-to-Slow Clocks – Part 3: Metastability Aware Simulation
Dr. Roger B. Hughes, Director of Strategic Accounts
We continue the short blog series that addresses the issue of doing clock domain crossing analysis where the clocks differ in frequency. In Part 1 and Part 2, we discussed the use of structural and formal checks when there is a fast-to-slow transition in a clock domain crossing. In this blog, we will present the third and final step using a design’s testbench.
The next step in the verification process of fast-to-slow clock domain crossings is to do metastability-aware simulation on the whole design. When running a regular simulation test bench, there is no concept of what could happen to the design if there was metastability present in the data or control paths within the design. One of the key reasons for doing CDC checks is to ensure that metastability does not affect a design. After structural analysis ensures that all crossings do contain synchronizers, and formal analysis ensures that the pulse width and data are stable, a whole-chip metastability-aware simulation is needed to see if the design is still sensitive to metastability. Functional monitors and metastability checkers are shown in Figure 7. No changes are made to the design, and the necessary monitors and checkers are written in an auxiliary Verilog simulation test bench file. This auxiliary file is simply referred to by the original simulation test bench file to invoke the metastability checking. As a prerequisite, this step requires that the design have a detailed simulation test bench.
Figure 7 – Metastability aware simulation checks the tolerance of downstream logic to the presence of jitter in the data path through the use of functional monitors and CDC checkers.
Meridian CDC enables metastability simulation sign-off by offering two capabilities. First, it randomly inserts cycle jitter onto the control or data crossings to mimic the metastability effect. Second, it writes simulation checkers to catch violations during simulation. Together, these capabilities enable metastability simulation sign-off using Meridian CDC.
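As a rough illustration (a conceptual sketch, not Meridian CDC's actual mechanism; all names here are hypothetical), the effect of injected cycle jitter can be modeled in a few lines of Python: each captured value is randomly held for one extra receive-domain cycle, and a checker confirms that downstream behavior depends only on the order of values, not their exact cycle timing.

```python
import random

def inject_cycle_jitter(samples, rng):
    """Randomly hold each captured sample for one extra RX cycle,
    mimicking a synchronizer that resolves metastability late.
    `samples` is the ideal per-cycle value sequence in the RX domain."""
    out = []
    for value in samples:
        out.append(value)
        if rng.random() < 0.5:   # 50% chance of one cycle of jitter
            out.append(value)    # value held one cycle longer
    return out

def sequence_checker(ideal, jittered):
    """A simple metastability checker: downstream logic is tolerant if
    the order of distinct values survives, even though timing moves."""
    def dedup(seq):
        return [v for i, v in enumerate(seq) if i == 0 or seq[i - 1] != v]
    return dedup(ideal) == dedup(jittered)

rng = random.Random(7)
ideal = ["D0", "D1", "D2", "D3"]
jittered = inject_cycle_jitter(ideal, rng)
print(sequence_checker(ideal, jittered))  # True: order preserved despite jitter
```

A design that latches a value on the exact cycle it expects, rather than on a qualified enable, would fail such a checker once jitter is injected, which is exactly the class of bug metastability-aware simulation is meant to expose.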
FAST-TO-SLOW CLOCK METHODOLOGY SUMMARY
To provide rigorous clock domain crossing checks on a design, especially one containing transitions of a fast-to-slow nature, three steps must be done.
1. Use structural checking to ensure all crossings – including fast-to-slow clock and slow-to-fast clock – are CDC safe. This means that all crossings have been checked to have the data switched via a control signal that has the appropriate levels of synchronization.
2. Use formal verification on all CDC crossings to ensure control signals have a sufficient pulse width to enable the receiving domain clock to capture the transmit domain’s control pulse. This step is especially important for all fast-to-slow clock domain crossings. Use formal analysis to examine the data transitions for data stability. These checks are PULSE_WIDTH and DATA_STABILITY, respectively.
3. Use metastability-aware simulation on the entire design with an existing simulation test bench to insert random metastability into data and control crossings. Run the simulation against specialized checkers that are automatically generated to prove that the design is not sensitive to metastability.
After these three steps are carried out, designers will have full confidence that the fast-to-slow clock domain crossings are analyzed correctly.
Modern CDC tools, such as Meridian from Real Intent, provide a mix of approaches for sign-off of clock domain crossing analysis. Three techniques were progressively discussed for designs where fast-to-slow clocks are used. Structural checks can be run quickly on the whole design, even for large designs. Once a design has passed structural analysis, formal checks of certain crossings can be done locally. This is required for all fast-to-slow clock transitions of control signals, either on the feed-forward circuit or on the feedback circuit, depending on which circuit has a fast-to-slow transition. Finally, simulation test benches can be augmented with random metastability injection and checkers to verify that the design is tolerant of metastability.
CDC Verification of Fast-to-Slow Clocks – Part 2: Formal Checks
Dr. Roger B. Hughes, Director of Strategic Accounts
We continue the short blog series that addresses the issue of doing clock domain crossing analysis where the clocks differ in frequency. In Part 1, we ended the discussion noting that when there is a fast-to-slow transition, there is a possibility that a short duration control pulse may be completely missed by the receive domain and a formal analysis is required to discover if this is a potential problem. We will look at how formal analysis can verify this kind of transition.
A formal check is also required on a slow-to-fast data crossing with feedback. In such a circuit, as shown in Figure 4, an acknowledge signal coming from the receiving fast-clock domain to the transmitting slow-clock domain also requires a formal pulse-width check. Although the control pulse (request) is going from slow to fast and does not need a formal pulse-width check, the acknowledge pulse-width check is necessary because the acknowledge signal (the feedback circuit) is going from a fast to a slow clock. For the acknowledge to be properly captured, the acknowledge pulse (transmitted from the receiving side) must be sufficiently wide to be captured by the slower clock domain of the transmitting side flops. Failure to check for this condition is the reason behind many a request/acknowledge circuit not working as expected. Note that feedback circuits in a fast-to-slow crossing are operating in a slow-to-fast mode, and the acknowledge signal in such a circuit does not need to be pulse-width checked. In short, all fast-to-slow control signal transitions, whether connected in a feed-forward or a feedback manner, need to be formally pulse-width checked to ensure the integrity of the control aspect of the clock domain crossing.
Figure 4 – Slow-to-Fast Clock Crossing with Feedback (red flops are slow clock, blue flops are fast clock).
To check if the fast-to-slow clock domain crossings have control signals that can be captured by the receive domain clock, Meridian CDC offers formal analysis capability targeted at the asynchronous clock domain crossings of interest. There is a requirement that the control pulse in the fast transmit domain must be of a certain minimum width in order to be captured by the slower receive domain clock. Figure 5 shows that TX CNTL must be held high for several clock periods of Clk1 for the TX domain flop value to be captured by the RX domain Sync1 flop and then passed into RX domain Sync2 flop. A formal check called PULSE_WIDTH verifies that the transmit domain’s control pulse has sufficient length to be captured by the receive domain’s clock in all circumstances. This check examines all the pulse generation logic and takes into consideration the clock frequency ratio during detailed formal analysis to determine pass or fail. If there is a case in which the pulse length is insufficient, a counter example is generated to show the circumstances in which this would occur. If PULSE_WIDTH passes, the crossing shown always has correct control pulse duration to ensure there will not be a missed control pulse.
Figure 5 – Fast-to-slow clock domain crossing with sufficient pulse length. Here the TX CNTL pulse is held high for a sufficient number of TX Clk1 periods so that an edge of RX Clk2 is able to sample the value on the TX CNTL flop into the RX CNTL Sync1 flop, which then can pass the value to RX CNTL Sync2. This can be formally proven using the PULSE_WIDTH check of Meridian CDC.
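The requirement can be illustrated with a toy check in Python (a conservative sketch, not the PULSE_WIDTH algorithm itself, which formally analyzes the pulse-generation logic and clock ratios; the 1.5-period margin used here is a common rule of thumb, assumed for illustration).

```python
def pulse_width_ok(pulse_width_ns, rx_period_ns, margin_cycles=1.5):
    """A TX control pulse is guaranteed to be sampled by the slower RX
    clock when it spans at least `margin_cycles` RX periods, regardless
    of the phase relationship between the two clocks."""
    return pulse_width_ns >= margin_cycles * rx_period_ns

# TX clock at 500 MHz (2 ns period), RX clock at 100 MHz (10 ns period):
print(pulse_width_ok(2, 10))   # False: a 1-TX-cycle pulse can fall between RX edges
print(pulse_width_ok(16, 10))  # True: a pulse stretched to 8 TX cycles is always seen
```

The value of the formal check over a rule of thumb like this is the counterexample: when the pulse-generation logic cannot guarantee the minimum width, the tool shows the exact circumstances under which the pulse is missed.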
There also needs to be a check on the data path. There is a possibility that if the data is not held stable for a long enough period, it might get missed in the receive domain. For example, suppose the transmit domain has the data sequence <D0><D1><D2><D3>, etc. This sequence of data changes is shown in Figure 6. If the data changes before the control signal has been passed to the receive domain, it is possible that the receive domain might miss some data and end up with <D0><D1><D3>… if <D2> was not correctly transmitted to the receive domain. An additional formal check in Meridian CDC called DATA_STABILITY ensures that the data transitions at a slow enough rate to be captured in the transmit domain and then transferred to the receive domain. Only a formal check using full sequential analysis of behavior can do this correctly.
Figure 6 – Fast-to-slow clock domain crossing with data instability. It is important that the data is held stable long enough to be captured by the receive clock RX Clk2. If the data changes too quickly, an element of the data will be missing in the receive clock domain, as shown with D2 missing from the data stream in the receiving clock domain. This can be formally proven using the DATA_STABILITY check in Meridian CDC as part of the formal analysis.
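A small Python model makes the failure mode concrete (a sketch under the simplifying assumption that the slow RX clock sees every `ratio`-th TX-cycle value at a fixed phase; the real DATA_STABILITY check performs full sequential formal analysis rather than a single-phase simulation like this).

```python
def rx_capture(tx_data, ratio):
    """Sample a fast-domain data stream with a clock `ratio` times
    slower: only every `ratio`-th TX-cycle value reaches the RX domain."""
    return tx_data[::ratio]

tx = ["D0", "D1", "D2", "D3"]
# Data changing every TX cycle across a 2:1 fast-to-slow crossing loses values:
print(rx_capture(tx, 2))  # ['D0', 'D2']
# Holding each value stable for `ratio` TX cycles guarantees capture:
held = [d for d in tx for _ in range(2)]
print(rx_capture(held, 2))  # ['D0', 'D1', 'D2', 'D3']
```

Which values are lost depends on the clock phase; the formal check must therefore prove stability across all phases, not just the one a given simulation happens to exercise.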
For all the formal checks, additional information is required beyond just using the structural checks. In structural checking, the environment can be captured automatically with very little user input required for resets and clocks. In contrast, formal checks require that all the reset signals be very accurately associated with their corresponding clocks. All clock frequencies must be specified for the appropriate formal checks to be made on the fast-to-slow clock domain crossings. So setup for formal is necessarily more complex and requires detailed design knowledge about all the frequencies of clocks and combinations of those frequencies. For example, multi-mode designs also need the selection signals for mode selection to be specified by the user. Where appropriate, automatic multi-frequency analysis also can be addressed by the formal engines within Meridian CDC.
In Part Three, the conclusion for this series, we will discuss doing metastability-aware simulation on the whole design.
CDC Verification of Fast-to-Slow Clocks – Part 1: Structural Checks
Dr. Roger B. Hughes, Director of Strategic Accounts
This is a reprise of a short blog series that addresses the issue of doing clock domain crossing analysis where the clocks differ in frequency, and the use of three different techniques for a complete analysis.
CDC checking of any asynchronous clock domain crossing requires that the data path and the control path be identified and that the receive clock domain data flow is controlled by a multiplexer with a select line that is fed by a correctly synchronized control line. Meridian CDC will always identify all the data and associated control paths in a design and will ensure that the control signals passing from a transmit clock domain to an asynchronous receive clock domain are correctly synchronized. There are three separate techniques that are used within Meridian CDC: structural checking, formal checks and simulation-based injected metastability checks.
The structural checking approach does not care whether the asynchronous transitions are slow-to-fast or fast-to-slow. It ensures that all the transitions are correctly synchronized, in the sense of having the appropriate synchronizer flops. From a structural perspective, the entire design can be checked in one run, with every clock domain transition checked for correctness. Let’s look at an example CDC in Figure 1, with the transmit clock Clk1 on the left (orange flops) and the receive clock Clk2 on the right (blue flops).
Fig. 1 – A Typical Synchronized Control and Data Clock Domain Crossing.
It can be seen in Figure 2, where positive-going clock edges are shown by the vertical lines, that all will work well for a slow-to-fast clock transition. Because Clk2 is faster than the transmit domain clock Clk1, any change of a control signal in the slow domain will always be captured by one of the edges of the receive domain clock, Clk2, before Clk1 releases the control signal. Likewise, for slow-to-fast transitions the data will typically be stable long enough to be captured and transferred into the receive domain. In Figure 2, the possible Clk2 edges that could capture the TX CNTL signal into the RX CNTL Sync2 flop are shown with dashed vertical lines.
Figure 2 – Slow-to-Fast Clock Crossing. There are many possible clock edges of RX Clk2, shown dashed, that can sample the value held in the TX CNTL flop. The dot-dash edge is the first possible transition into Sync1; the dashed edges are transitions into Sync2. There is no issue with the TX CNTL pulse width, since any value held for one clock period of TX Clk1 is guaranteed to be sampled by an RX Clk2 edge.
However, the situation is different for a fast-to-slow clock domain crossing. In a fast-to-slow transition, there is a possibility that a short-duration control pulse will be completely missed by the receive domain. To address this concern, and others that cannot be addressed by a purely structural CDC check, formal analysis is required.
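The hazard can be demonstrated with a short sampling model. In this illustrative Python sketch (the clock ratio and times are made up for the example), a control pulse one fast TX cycle wide falls entirely between two slow RX edges and is never observed:

```python
# Fast-to-slow hazard: a control pulse narrower than one RX clock period
# can fall entirely between two RX edges and never be observed.

def pulse_seen(pulse_start, pulse_width, rx_period, sim_time=100.0):
    """True if any RX clock edge falls inside the pulse window."""
    t = 0.0
    while t < sim_time:
        if pulse_start <= t < pulse_start + pulse_width:
            return True
        t += rx_period
    return False

# RX period is 5 time units. A 1-unit pulse starting at t=7 sits between
# the RX edges at t=5 and t=10, so it is missed; stretching the pulse to
# cover a full RX period guarantees capture.
print(pulse_seen(pulse_start=7.0, pulse_width=1.0, rx_period=5.0))  # -> False
print(pulse_seen(pulse_start=7.0, pulse_width=5.0, rx_period=5.0))  # -> True
```

The usual remedies, such as stretching the pulse for a full slow-clock period or closing a request/acknowledge handshake, are exactly the conditions a formal check can prove hold for every reachable state, which a structural check cannot.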
Next time we will look at how formal analysis can verify this kind of transition.
With acquisitions, customers get nervous, and for good reason: the support and responsiveness they receive changes. Five respondents said they were considering replacing SpyGlass with Real Intent. One user reported the following conversation:
“Your SpyGlass customer support won’t change as a result of the SNPS acquisition.” They actually said that to me with a straight face.
The article also reported a customer evaluation of our Ascent Lint and Meridian CDC (clock-domain crossing) tools. Here is a quick snippet:
Wanted to check whether we could identify design bugs with Real Intent that were being missed before.
Started with Ascent Lint on one of our small a8051 microcontroller IP blocks, a pure Verilog design 17K gates in size.
From the start, we noticed that Ascent Lint was very easy to run.
Within minutes of getting it installed, we were able to run the tool on our a8051 with practically no time spent on setup.
We caught a few issues even on this small block. One FSM was missing a “default” statement; the designer had missed this because of an involved mix of pragmas and `defines in the RTL code. Ascent Lint ran in under a minute to find the needed RTL fix. A few other minor FSM-related issues were identified, and we passed those on to the a8051 design team.
Next we ran their Meridian CDC clock-domain analysis tool on our Ethernet MAC IP, a 40K-gate Verilog design with 4 different clock domains.
…When running Meridian to catch CDC problems, it was correctly able to identify our CNTL synchronizer and our other synchronizer structures automatically.
…The Meridian reports provided appropriate details of the violations without bombarding us with too much information.
At the end of our eval, we decided to start using both Real Intent tools on our next IP development project. We also plan to recommend the Ascent Lint and Meridian CDC tools to our consulting clients.