Richard Goering and Us: 30 Great Years
| Graham Bell|
Vice President of Marketing at Real Intent
Richard Goering, the EDA industry’s distinguished reporter and, most recently, Cadence blogger, is finally closing his notebook and retiring from the world of EDA writing after 30 years. I can’t think of anyone who is more universally regarded and respected in our industry, even though all he did was report and analyze industry news and developments.
Richard left Cadence Design Systems at the end of June (last month). According to his final blog post, “EDA Retrospective: 30+ Years of Highlights and Lowlights, and What Comes Next,” he will be pursuing a variety of interests other than EDA. He will “keep watching to see what happens next in this small but vital industry.”
When Richard left EETimes in 2007, there was universal hand-wringing and distress that we had lost a key part of our industry. John Cooley did a Wiretap post on his DeepChip website with contributions from 20 different executives, analysts and other media heavyweights. Here are just a few quotes that I picked out for this post:
Richard was a big supporter of start-ups and provided the best coverage that this industry could ever get.
– Rajeev Madhavan of Magma
Richard has been a cornerstone of the EDA industry since I was on the customer side. He was never influenced by hype; he looked for content. I have always appreciated his objectivity, recognizing that his analysis would go beyond the superficial aspects of an industry event or product announcement and search for the real impact.
– Wally Rhines of Mentor
Goering has been an icon for the EDA industry since I first became aware of what EDA was. EDA is an industry with somewhat loose definitions. Just as you can say that RTL is defined by what Design Compiler accepts, you can say that EDA is defined by what Richard Goering covers. If he stops covering it, will it stop being EDA?
– John Sanguinetti
Like Rajeev Madhavan, I also experienced Richard’s great support for my startup, back in 1999. A few of us had founded a formal verification company called HDAC (later Averant), and we were very surprised to end up on the front page of EETimes when we launched. Richard was indeed THE reporter at the number-one industry publication.
You will want to read Richard’s last blog post. His retrospective covers more than 30 years of industry highlights and lowlights.
For the last six years, Richard’s steady hand has covered industry trends and developments on behalf of Cadence. Never one for hyperbole or exaggeration, he was always a good read.
Goodbye Richard. You will be very much missed.
Quick 2015 DAC Recap and Racing Photo Album
| Graham Bell|
Vice President of Marketing at Real Intent
This year’s Design Automation Conference in San Francisco was excellent! You don’t have to take my word for it. At the Industry Liaison Committee meeting for DAC exhibitors on Thursday, June 11, the various members were in agreement that show traffic was up and the quality of the customer meetings exceeded expectations. Why is that? It is in large part due to the tremendous efforts of Anne Cerkel, senior director for technology marketing at Mentor Graphics, who was the general chair for the 52nd DAC.
One innovation at this year’s show was opening the exhibit floor at 10 a.m. This made it more convenient to see the morning keynotes and gave attendees more flexibility in commuting to the show from around the Bay Area. I think you can expect to see this again at the 53rd DAC show in Austin, Texas.
Our two GRID racing car simulators were one reason the show was excellent for Real Intent. We were able to draw a large crowd to our booth. Budding race car drivers could challenge their friends and colleagues to a race and enjoy our license-to-speed verification solutions. A special thank-you to Shama Jawaid and the team at OpenText, our partner for the license-to-speed promotion.
Here are some quick photos from the show for you to enjoy.
Advanced FPGA Sign-off Includes DO-254 and …Missing DAC?
| Graham Bell|
Vice President of Marketing at Real Intent
One trend we’re seeing in Asia is the number of FPGA design starts — now counting in the thousands. Getting a functionally correct design is the first goal for designers. It is easy to think that once that is achieved, FPGAs can be shipped out in finished products. But that’s not a robust model. For example, we have had customers with failures in the field due to a subtle timing change between FPGA part lots. Larger FPGA designs have grown in complexity, resulting in an amalgamation of disparate IP that can lead to clock domain challenges. A robust model for FPGA designs requires advanced sign-off tools, a design flow that works easily with Xilinx and Altera tools, and support for high-reliability standards like DO-254. This is where Real Intent’s Meridian and Ascent products excel. For high performance, our CDC and Lint tools provide the confidence design teams need, with unsurpassed verification and sign-off support.
Come visit us in Booth #1422 at DAC in San Francisco, June 8-10 to see our latest technical presentations. To choose your technical presentation click here.
Can’t attend DAC? Check out some of our latest video interviews with Real Intent technologists or email us for a personal presentation to you or your team.
#2 on GarySmithEDA What to See @ DAC List – Why?
| Graham Bell|
Vice President of Marketing at Real Intent
The last two weeks before the Design Automation Conference in San Francisco are a busy time. For us marketeers, it has been called “our Super Bowl.” We want to get the word out that we have something new and important to show visitors at our exhibit booth. But there is more going on, which I will mention after I talk about our booth activities.
Real Intent is number two on the GarySmithEDA What to See @ DAC list. I know why we are number two on the list. But I don’t want to give the secret away. If you know the reason, then please let everyone know in the comments section at the end of the blog.
Here are the quick titles for our technical presentations in our demo suites:
- Ascent Lint with 3rd Generation iDebug Platform and DO-254
- Meridian CDC for RTL with New 3rd Generation iDebug Platform
- Ascent XV with Advanced Gate-level Pessimism Analysis
- Accelerate Your RTL Sign-off
- Hierarchical CDC Analysis and Reporting for Giga-gate Designs
- Next-Generation Meridian Constraints for SDC
- Autoformal RTL Verification
- FPGA Sign-off and Verification
Click on this appointment sign-up link to arrange a meeting with us.
Besides fast RTL sign-off, we are also having fun at our booth and giving away cool prizes. Come and race against other drivers in our two GRID Racing Simulators and receive your License-to-Speed. Get your license stamped at both the Real Intent and OpenText booths (just around the corner) and you will get a chance to win $$$ Amazon gift cards. Fill out our verification survey and you will get a chance to win a Roku 3 streaming media player or a Kindle Paperwhite e-reader. Here is a picture of the GRID simulators.
I hinted earlier that there was more going on than just activities at the Real Intent booth. We are organizers for a Test and Verification panel: Scalable Verification: Evolution or Revolution? on Wed., June 10 from 4:30-6 p.m. in Room #304. Moderated by Brian Bailey (technology and EDA editor of Semiconductor Engineering), it has a panel of experts from Freescale Semiconductor, NVIDIA, Qualcomm, Hewlett-Packard and ARM.
We are also sponsoring the Love IP DAC Party on Monday, June 8 at Jillian’s in the Metreon, just steps away from the Moscone Center. Doors open at 7 p.m. The party is organized by Heart of Technology (HOT), the philanthropic organization founded by EDA veteran Jim Hogan. This event brings the DAC and IP communities together to raise money for the San Jose State Guardian Scholars – a program to help underprivileged and homeless students at the university. The party’s theme is “Summer of Love,” so come in your best Jerry Garcia look-alike costume!
And don’t forget the Denali Party by Cadence on Tuesday night, June 9. You will want to sign up online before the DAC show starts so you can get your ticket by Tuesday morning. See you there!
SoC Verification: There is a Stampede!
| Graham Bell|
Vice President of Marketing at Real Intent
In stories of the Wild West from the 1800s, the image of a cattle drive is often depicted. A small team of cowboys delivers thousands of head of cattle to market. The cowboys spend many days crossing open land until they reach their destination – one with stockyards to accept their precious herd, and a rail station to deliver it quickly to market. Along the way there are dangers, including losses to predators and mad stampedes by cattle rushing blindly when frightened or disturbed. The primary job of the cowboys is to keep the herd on track and settled as they move to ship-out.
I see immediate parallels between the cowboys of the Wild West and today’s system-on-chip (SoC) design and verification engineers. Cowhands struggle to control and move a big herd. Similarly, today’s design teams grapple with how to keep a project on target and converging to tape-out and success when the gate count of SoCs has become so large it can stretch and even overwhelm their ability to stay on track. How big are these new SoCs?
The Xbox One gaming console, for example, uses 5 billion transistors, which is equivalent to 1.25 billion digital gates. Its AMD-designed SoC produced at TSMC on a 28-nm process combines eight Jaguar CPU cores and Graphics Core Next (GCN)-class integrated graphics. (See Figure 1.)
Another example, pictured on the left, is Nvidia’s GK110 GPU (also made on TSMC’s 28-nm process), which has 7.1 billion transistors. This translates to nearly 2 billion digital gates. These are not just big chips but giant chips!
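The transistor-to-gate conversions quoted above follow the common rule of thumb of roughly four transistors per two-input NAND-equivalent gate. A quick sketch of the arithmetic (the 4:1 factor is an industry convention for comparing designs, not an exact figure; real gate-equivalent ratios vary by cell library):

```python
# Rule of thumb: one 2-input NAND-equivalent gate is about 4 transistors.
# The 4:1 factor is a convention, not an exact ratio for any real library.
TRANSISTORS_PER_GATE = 4

def gate_equivalents(transistors: int) -> float:
    """Estimate digital gate count from a raw transistor count."""
    return transistors / TRANSISTORS_PER_GATE

# Xbox One SoC: 5 billion transistors -> 1.25 billion gates
print(gate_equivalents(5_000_000_000) / 1e9)   # 1.25
# Nvidia GK110: 7.1 billion transistors -> ~1.8 billion ("nearly 2 billion")
print(gate_equivalents(7_100_000_000) / 1e9)
```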
With each smaller semiconductor node foundries provide, more gates can be squeezed into the same die size. In parallel, many different kinds of design blocks and intellectual property (IP) are employed, usually created by third-parties, to accelerate the implementation of the design objectives. The interaction of the various blocks across various power and timing conditions adds a new kind of complexity to the design. The result is a “herd” of interfaces with thousands of different crossings that must be checked and verified to ensure the design does not run off into a fatal operating condition.
It would be great to have the luxury of several hundred design and verification engineers to verify all possible failures in these giant SoCs, but that is not usually the case. Typically a small team relies on design automation software to manage the complexity of the verification challenge.
For each interface in the SoC, signals cross asynchronously between the various IPs and must be registered correctly to ensure the integrity of the digital signal path and eliminate metastability errors. For bus-level signals, circuitry such as a FIFO manages the data transfer and verification to ensure there is no data overflow or underflow that could compromise the design. This approach requires a full-chip clock domain crossing (CDC) analysis.
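To make the overflow and underflow conditions concrete, here is a minimal behavioral model of such a FIFO's occupancy. This is an illustrative Python sketch rather than RTL, and the class and method names are invented for this post; a real asynchronous FIFO would use Gray-coded read/write pointers synchronized across the two clock domains.

```python
class AsyncFifoModel:
    """Toy occupancy model of a FIFO guarding a bus-level clock crossing."""

    def __init__(self, depth: int):
        self.depth = depth
        self.items = []
        self.overflow = False    # write attempted while full: data lost
        self.underflow = False   # read attempted while empty: invalid data

    def push(self, data):
        """Called from the write-clock domain."""
        if len(self.items) >= self.depth:
            self.overflow = True
        else:
            self.items.append(data)

    def pop(self):
        """Called from the read-clock domain."""
        if not self.items:
            self.underflow = True
            return None
        return self.items.pop(0)
```

A CDC sign-off flow must show, structurally or formally, that neither flag can ever be raised given the actual producer and consumer rates in the design.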
Design teams need three elements to achieve overnight CDC analysis runs for functional sign-off – precision, throughput and ease of use. (See Figure 2.)
Precise analysis means the software must accurately capture all possible interfaces in the design, including buses; provide reset analysis, including glitches in both asynchronous and synchronous domains; and correctly handle crossings that may be blocked by environment definition. Once the analysis is done, it is essential to be able to verify the interfaces automatically, using formal technologies, so all possible failure conditions can be exhaustively covered.
Likewise, throughput has three important considerations: runtime, capacity and methodology. Design analysis must be done in overnight runs to make the necessary progress to stay on schedule. In terms of capacity, a terabyte of computer memory is no longer needed to verify a 500-million-gate design; teams can instead use more standard hardware. For giga-scale designs, a hierarchical methodology is needed to leverage block-level CDC sign-off for chip-level CDC verification. This methodology is effective for sign-off only if the SoC verification makes no approximations or abstractions. Only then can it truly ensure no signal-crossing errors are missed.
Ease of use is the third major aspect of CDC analysis for functional sign-off. The software setup must be easy and automated to ensure the quality of results. The various kinds of analysis, including formal analysis, must generate results without the user writing any tests. Finally, and perhaps most importantly, the debug of analysis results must be hierarchical and fully customizable. This kind of flexibility is typically available only from a full database of analysis results. Graphical and command-line interfaces must be able to extract the necessary reports in a variety of formats, with the data organized as required for any specific verification flow. Whether using HTML docs or custom spreadsheets, the design and verification team should be able to “rope in” any interface issue.
SoC verification poses many challenges through the sheer size of designs and the varied mix of design IP, each block operating with its own clocking scheme. Successful SoC design teams will meet the challenge of clock domain crossing verification with a solution that provides the precision, throughput and ease of use they need. This approach will avoid a stampede of errors and late debugging that would delay the ship-out of their designs.
This blog article was originally published on EETimes SoC Designlines.
Drilling Down on the Internet-of-Things (IoT)
| Ramesh Dewangan|
Vice President of Application Engineering at Real Intent
Did you know there will be 50 billion connected devices by 2020?
I am not making it up!
This was the future painted by Dr. Martin Scott, SVP and GM, Cryptography Research Division, Rambus, in a scintillating session on the Internet of Things (IoT) at the Silicon Summit 2015 event organized by Global Semiconductor Alliance in April.
What will the future look like when there are over six connected devices for every person on the planet?
I’ll summarize the three key points I learned regarding IoT: the components, the scope and the challenges.
Components of an IoT System
Dr. Scott laid out the high-level components of an IoT system:
- End points are the IoT devices, with sensors, hardware and software that provide touch points to users or gather data.
- The hub/edge layer consists of data gateways or aggregators. These could be mobile phones, routers, towers and so on.
- A cloud system/data center stores and analyzes the data, with high-bandwidth wide-area and local-area connectivity moving data among these components.
- Lastly, analytics apps provide meaningful data back to providers and consumers.
Scope of IoT
The scope of IoT applications is vast. I was aware of its applications in the consumer segment, based on the media coverage I had been exposed to so far. It turns out that, in addition to the consumer segment, IoT is already playing major roles in the industrial and medical segments. As per Rahul Patel, SVP and GM, Wireless Connectivity, Broadcom, IoT has limitless possibilities:
Challenges to IoT success
James Stansberry, SVP and GM of IoT Products at Silicon Labs, laid out the challenges succinctly: energy, functionality, integration and connectivity.
Energy: How many times have you been frustrated with your smart phone running out of juice in the middle of the day? While devices are improving battery life with every generation, IoT devices need sustained battery life for a much longer period. IoT devices must operate on a coin cell battery for 5 years. Unless that happens, the applications will be limited. The SoCs driving the IoT devices have to be ultra-low power.
Connectivity: The bandwidth and flexibility of existing connectivity systems, be they WiFi, Bluetooth or LTE, are too limiting for IoT to become pervasive. There needs to be higher bandwidth and flexible switching among the connectivity protocols. New standards such as next-generation WiFi, Bluetooth Smart, ZigBee and Thread are emerging as viable solutions.
Integration: A typical IoT SoC will need to integrate highly complex IPs and interface with sensors, control, RF and battery. The process nodes and the SoC development methodology must enable such large scale integration.
Functionality: Dr. Scott pointed out that sensitive data in transit remains vulnerable going from end point to hub to cloud. The functionality must include security as a key component.
Recently, my son realized that he had lost his car keys at his college campus one weekend. I thought he would be frantic, asking around for help to find them. Instead, he calmly opened an app on his smartphone and located his keys on a convenient map, thanks to a tiny tracking chip he had added to his key ring.
IoT is not a concept any more; it is real, and it is happening. It will become pervasive and ingrained in our lives as soon as the significant challenges in functionality, energy, connectivity and integration are tackled!
Reflections on Accellera UCIS: Design by Architect and Committee
| David Scott|
In late March, Brian Bailey of Semiconductor Engineering published an article on standards: “Design by Architect or Committee?” This made me think of my own experience with the Accellera Unified Coverage Interoperability Standard (UCIS), something of which I am both proud and embarrassed. Proud, because when I was at Mentor Graphics I was the architect of the winning donation, and that’s a rare thing in any career — to contribute the design and architecture for an industry standard. However, I am embarrassed because I know I could have done better in a re-design. Any software engineer will tell you this: the second design is always better, because you’ve learned from the first. We did some re-design as part of the standardization effort, but not to the degree I wanted.
In retrospect, the politics of Accellera UCIS were bound to be difficult, because if you think about it, the standard allows users to easily switch simulators. That’s what the “interoperable” part means. With simulation a slowly growing market, a sort of zero-sum game, one company’s gain is another’s loss. No one is going to be enthusiastic about a standard that helps them lose business. This point was also made in Brian’s article.
I also participated in the SystemVerilog standard of the IEEE. Say what you like about SystemVerilog, it is not just design by committee, it is design by multiple committees. But those committees do really have a lot of common ground and work pretty well together. The atmosphere in Accellera UCIS meetings was more polarized.
The inception for the standard was the realization inside Mentor Graphics that coverage analysis needed a public application programming interface (API). We made the crucial decision to use the same API internally for coverage creation, reporting, and analysis, and to make it usable in a standalone fashion as well. We tried to keep it simple, easy to grasp for verification engineers who were not software developers, without the complex data models and handles that would make it more like SystemVerilog VPI. This wasn’t entirely possible, but when we were done, we had something that was complete and functional.
It remains my favorite project of my career. In the early days of formulating the API, I had great fun brainstorming with Doug Warmke and Samiran Laha. (Samiran presented a poster on the UCIS API just this past DVCon.) We then gradually re-architected the coverage GUIs with my hands-on marketing counterpart Darron May and created a suite of brand new verification management features. It culminated in the Questa Verification Management Tracker GUI, allowing test traceability analysis tying together all kinds of coverage. I myself wrote the internal machinery of the GUI, and it was the ultimate validation of the API started a few years before.
There was quite a debate within Mentor about whether to try to make the API an industry standard. This is the rarified domain of Mentor’s great tactician Dennis Brophy, so I don’t really know why we decided in favor of submitting it. I had heard there was a customer telling us to participate. I think we then expected backing from that customer, but it didn’t happen that way. One interesting twist is in the behavior of the Big Three. With three big gorillas in the room, you get a lot of two-versus-one alignments. The push to SystemVerilog 2005 was initially a Synopsys and Mentor alliance versus Cadence. Perhaps just for political balance, UCIS became Cadence and Mentor versus Synopsys. We started meeting with Cadence well before the donation was approved, so the basis for the UCIS standard was really a combined effort of Cadence and Mentor.
The most vocal customers on the committee, however, were from Synopsys. This made the negotiations in the meetings difficult for us.
How we won the committee vote to accept Mentor’s donation in June 2009, I cannot say. That had much more to do with Dennis Brophy than with me, and certainly little to do with the merits of the competing donations. I’ll tell you, though, that the most stressful day of a 25-year career was having to defend my donation to the committee, because it had to be as perfect a performance as I could muster, and it didn’t really matter. It was a political exercise, not a technical one.
The first meeting after acceptance of the donation, I produced a list of defects I wanted to correct or improve. From my point of view, this was just standard software engineering post mortem; I’d lived with the design for years and could do better. The immediate reaction, however, was not a happy one, and I had to shut up.
I wasn’t completely ignored; some of my and others’ suggested improvements were made during my remaining tenure on the committee, and more after I left Mentor and the committee. The most serious criticism of the standard, which I agree with, is that the coverage models are not really interoperable. The API is, but not the way coverage itself is stored by different simulators. While I understand users would like this, you have to ask which vendors would like this. None. Vendors would have to change their current implementation to adhere to some new way of doing things, only to increase the risk of losing their customers to another vendor. The worst problem is that coverage is rooted in particular language scopes, and language scopes aren’t even standardized. Synthesizable scopes are, but not verification scopes like those created by parameterized classes in SystemVerilog. Because this depends on a company’s proprietary elaboration algorithm, it is very unlikely this will ever be a standard.
So, bottom line, UCIS was not a “win-win, a benefit for the vendors and a benefit for the users,” as Arturo Salz said in Brian’s article. I think Mentor initiated it to increase its profile and credibility as a verification vendor, and I suspect others were dragged along by the force of customers, but without a clear and universal win-win, its full promise remains unrealized.
I will always be grateful that it was something I could participate in, and it is a highlight of my professional career. But I do look back on it as a stressful experience. I hope the UCIS will evolve and mature, and I pray it encourages an ecosystem of coverage analysis tools to develop along with it. I am interested to see some positive signs, like Mark Litterick’s DVCon paper I blogged about last time. But now UCIS has a life of its own without me. As one of its several parents, I will follow it with natural interest, and of course, some measure of pride.
DO-254 Without Tears
| Dr. Pranav Ashar|
Chief Technology Officer
This article was originally published on TechDesignForums and is reproduced here by permission.
At first glance the DO-254 aviation standard, ‘Design Assurance Guideline for Airborne Electronic Hardware’, seems daunting. It defines design and verification flows tightly with regard to both implementation and traceability.
Here’s an example of the granularity within the standard: a sizeable block addresses how you write state machines, the coding style you use and the conformity of those state machines to that style.
This kind of stylistic, lower-level semantic requirement – and there are many within DO-254 – makes design managers stop and think. So it should. The standard is focused on aviation’s safety-critical demands, assessing the hardware design’s execution and functionality in appropriate depth right up to the consequences of a catastrophic failure.
Nevertheless, one pervasive and understandable concern has been the degree to which such a tightly-drawn standard will impact on and be compatible with established flows. This particularly goes for new entrants in avionics and its related markets.
Your company has a certain way of doing things so you inevitably wonder how easily that can be adapted and extended to meet the requirements of DO-254… or will a painful and expensive rethink be necessary? Can we realistically do this?
Here’s the good news. The demands of the standard map closely to how EDA tools have developed and continue to evolve. Automation therefore takes a lot of pain out of the process.
DO-254 and EDA in harmony
First, what is a linter if not largely an accumulation of design knowledge that is applied to a new project in the light of what has been discovered on earlier ones? That’s where most of the rules come from. This has obvious and very beneficial implications for designs that observe predefined coding styles.
Our lint tool can guide you to the right places to look. When you have that information, it becomes a lot easier to adapt your flow and your design practices.
But let’s go further and look at the philosophy behind DO-254.
Consider the implications of ‘complexity’. It may be the most overused word in EDA but it’s still true that the increasing challenges faced by electronics system design have seen more intelligence fed into tools of all types.
To achieve DO-254 compliance specifically, I would argue that a linter is an important foundation, but you need to go further. You need a suite of tools, also packed with the same kind of semantic intelligence.
The kind of hierarchical RTL verification offered by our Ascent IIV tool and the depth of understanding of unknowns within our Ascent XV X-verification tool illustrate the extra checks and traces that are likely to be needed for a safety-critical design.
And there they are already in our tools – and yes, those of some of our competitors. These tools have evolved largely in parallel with the needs of this particular standard, but more importantly with the broader needs of all electronic system design.
Processes alone can only take you so far. Processes that highlight the need for an informed approach to design are what we need. That last quality strikes me as a key and very welcome aspect of DO-254.
DO-254 has its rewards
None of this means that DO-254 compliance is ‘easy’. No safety-first design should be. Attention to detail matters. But again, you already knew that even if you have never worked on an aviation project before. Today, nothing is easy.
In that context, today’s EDA tools include capabilities that greatly improve the efficiency with which existing players in aviation deliver projects and also lower the barriers to entry for new ones. That boosts competition and thereby quality.
Right now, aviation is an exciting field. The drone market alone – spurred by interest from the likes of Amazon and Google – is being awarded multi-billion dollar valuations. In the US, the FAA has this month finally described how it sees UAVs operating, albeit relatively small ones for now.
As UAVs become more commonplace, their DO-254-compliance will increasingly be required… even if the FAA is not itself making that mandatory. Yet.
A tremendous opportunity exists and EDA can help a great many of its customers take advantage of it. DO-254 does present challenges, but they are not so different from those we already face – with the right tools you can adapt without tears.
Analysis of Clock Intent Requires Smarter SoC Verification
Thanks to the widespread reuse of intellectual property (IP) blocks and the difficulty of distributing a system-wide clock across an entire device, today’s system-on-chip (SoC) designs use a large number of clock domains that run asynchronously to each other. A design involving hundreds of millions of transistors can easily incorporate 50 or more clock domains and hundreds of thousands of signals that cross between them.
Although the use of smaller individual clock domains helps improve verification of subsystems apart from the context of the full SoC, the checks required to ensure that the full SoC meets its timing constraints have become increasingly time consuming.
Signals involved in clock domain crossing (CDC), for example where a flip-flop driven by one clock signal feeds data to a flop driven by a different clock signal, raise the potential issue of metastability and data loss. Tools based on static verification technology exist to perform CDC checks and recommend the inclusion of more robust synchronizers or other changes to remove the risk of metastability and data loss.
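As a toy illustration of what such a static check looks for, the sketch below walks a deliberately simplified netlist representation and flags flops fed from a different clock domain without a marked synchronizer stage. The data model here is invented for this post; real CDC tools work on elaborated RTL with far richer structural recognition.

```python
def find_unsynchronized_crossings(flops: dict) -> list:
    """Flag flop-to-flop paths that cross clock domains with no synchronizer.

    `flops` maps a flop name to a dict with its "clock", its "driver"
    (another flop name, or None for a primary input), and an optional
    "is_sync_stage" marker. This representation is illustrative only.
    """
    violations = []
    for name, flop in flops.items():
        driver = flop.get("driver")
        if driver is None:
            continue
        source = flops[driver]
        crosses_domain = source["clock"] != flop["clock"]
        if crosses_domain and not flop.get("is_sync_stage"):
            violations.append((driver, name))   # metastability risk
    return violations
```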
Conventionally, the verification team would run CDC verification on the entire design database just before tapeout, as this is the point at which it becomes possible to perform a holistic check of the clock-domain structure and ensure that every single domain-crossing path is verified. However, on designs that incorporate hundreds of millions of gates, this is becoming impractical: the compute runtime alone can run into days, at a point where every hour saved or spent is precious. And if CDC verification waits until then, the number of violations – some of which may be false positives – will potentially generate many weeks of remedial effort, after which another CDC verification cycle needs to be run. To cope with the complexity, CDC verification needs a smarter strategy.
By grouping modules into a hierarchy, the verification team can apply a divide-and-conquer strategy. Not only that, the design team can play a bigger role in ensuring that potential CDC issues are trapped early and checked automatically as the design progresses.
A hierarchical methodology makes it possible to perform CDC checks early and often to ensure design consistency such that, following SoC database assembly, the remaining checks can pass quickly and, most likely, result in a much more manageable collection of potential violations.
Traditionally, teams have avoided hierarchical management of CDC issues because of the complexity of organizing the design and of ensuring that paths are not missed. A potential problem is that all known CDC paths within a block may be deemed clean, so the block is considered ‘CDC clean’. But some paths may escape attention because they cross hierarchy boundaries in ways that cannot be caught easily, largely because the tools do not have sufficient information about the logic on the unimplemented side of the interface and the designer has made incorrect clock-related assumptions about the incoming paths.
If those sneak paths were not present, it would be possible to present the already-verified modules as black boxes to higher levels of hierarchy such that only the outer interfaces need to be verified with the other modules at that level of hierarchy. For hierarchical CDC verification to work effectively, a white- or grey-box abstraction is required in which the verification process at higher levels of hierarchy is able to reach inside the model to ensure that all potential CDC issues are verified.
As the verification environment does not have complete information about the clocking structure before final SoC assembly, reporting will tend to err on the side of caution, flagging potential issues that may not be true errors. Traditionally, designers would provide waivers for flops on incoming paths that they believe are not problematic, to avoid those flops causing repeated errors in later verification runs as the module changes. However, this is a risky strategy, as it relies on assumptions about the overall SoC clocking structure that may not be borne out in reality.
Refinements to the model
The waiver model needs to be refined to fit a smart hierarchical CDC verification strategy. Rather than apply waivers, designers with a clear understanding of the internal structure of their blocks can mark flops and related logic to reflect their expectations. Paths that they believe pose no issue, and therefore require no synchronizer, can be marked as such and treated as low priority, focusing attention on the paths that are more likely to reveal serious errors as the SoC design is assembled and verified.
However, unlike paths marked with waivers, these paths are still in the CDC verification environment database. Not only that, they have been categorized by the design engineers to reflect their assumptions. If the tool finds a discrepancy between such an assumption and the actual signals feeding into that path, errors will be generated instead of being ignored. This database-driven approach provides a smart infrastructure for CDC verification and establishes a basis for smarter reporting as the project progresses.
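As a toy illustration of the difference (Python, with invented path and clock names): a waiver would silently drop a path, whereas a marked assumption stays in the database and is re-checked against the actual clocking once the SoC is assembled:

```python
# Designer assumptions recorded in the CDC database: each incoming
# path is marked with the clock domain it is assumed to come from.
assumptions = {
    "u_rx/data_sync": "clk_core",  # believed synchronous, no synchronizer
    "u_tx/status": "clk_core",
}

def recheck(assumptions, actual_domains):
    """Return paths whose assumed source domain disagrees with the
    clocking seen at SoC assembly; unlike waivers, nothing is ignored."""
    return [(path, assumed, actual_domains[path])
            for path, assumed in assumptions.items()
            if actual_domains.get(path, assumed) != assumed]

# At SoC assembly the real clock domains become known; one assumption
# turns out to be wrong and is flagged as an error, not waived away.
actual = {"u_rx/data_sync": "clk_io", "u_tx/status": "clk_core"}
print(recheck(assumptions, actual))  # -> [('u_rx/data_sync', 'clk_core', 'clk_io')]
```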
Reporting is then organized around the specification, rather than presented as a long list of uncategorized errors that may or may not be false positives. This not only accelerates reviews but also allows the work to be distributed among engineers. As the specification is created and paths are marked and categorized, engineers establish what they expect to see in the CDC results, providing the basis for smart reporting from the verification tools.
When structural analysis finds a problematic path that was previously thought to be unaffected by CDC issues, the engineer can zoom in on the problem and deploy formal technologies to establish the root cause and potential solutions. Once fixed, the check can be repeated to ensure that the fix has worked.
The specification-led approach also allows additional attention to be paid to blocks that are likely to lead to verification complications, such as those that employ reconvergent logic. Whereas structural analysis will identify most problems on normal logic, these areas may need closer analysis using formal technology. Because the database-driven methodology allows these sections to be marked clearly, the right verification technology can be deployed at the right time.
By moving away from waivers and black-box models, the database-driven hierarchical CDC methodology encourages design groups to take SoC-oriented clocking issues into account earlier in the design cycle. Concerns about interfaces – including those on modules designed by groups located elsewhere, or even by different companies – are carried forward to the critical SoC-level analysis without the overhead of repeatedly re-verifying each port on the module. Through earlier CDC analysis and verification, the team reduces the risk of encountering a large number of schedule-killing violations immediately prior to tapeout, and can be far more confident that design deadlines will be met.
This article was originally published on TechDesignForums and is reproduced here by permission.
High-Level Synthesis: New Driver for RTL Verification
| Graham Bell|
Vice President of Marketing at Real Intent
In a recent blog, Does Your Synthesis Code Play Well With Others?, I explored some of the requirements for verifying the quality of the RTL code generated by high-level synthesis (HLS) tools. At a minimum, a state-of-the-art lint tool should be used to ensure that there are no issues with the generated code. Results can be achieved in minutes, if not seconds, for generated blocks.
What else can be done to ensure the quality of the generated RTL code? For functional verification, an autoformal tool, such as Real Intent’s Ascent IIV product, can be used to ensure that basic operation is correct. The IIV tool will automatically generate sequences and detect whether incorrect or undesirable behavior can occur. Here is a quick list of what IIV can catch in the generated code:
- FSM deadlocks and unreachable states
- Bus contention and floating busses
- Full- and Parallel-case pragma violations
- Array bounds
- Constant RTL expressions, nets & state vector bits
- Dead code
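To give a flavor of the first check in this list, here is a minimal sketch (plain Python on an invented FSM; the real tool analyzes the RTL state machine directly) of finding unreachable states and deadlocks by graph reachability over the transition relation:

```python
def analyze_fsm(transitions, reset_state):
    """Find unreachable states and deadlocks (reachable states with no
    outgoing transition) in an FSM given as {state: set(next_states)}."""
    # Breadth-first reachability from the reset state.
    reachable, frontier = {reset_state}, [reset_state]
    while frontier:
        state = frontier.pop()
        for nxt in transitions.get(state, set()):
            if nxt not in reachable:
                reachable.add(nxt)
                frontier.append(nxt)
    all_states = set(transitions) | {s for nxts in transitions.values() for s in nxts}
    unreachable = all_states - reachable
    deadlocks = {s for s in reachable if not transitions.get(s)}
    return unreachable, deadlocks

# Invented example: DONE has no exit (deadlock), DEBUG is never entered.
fsm = {"IDLE": {"RUN"}, "RUN": {"DONE", "IDLE"}, "DONE": set(), "DEBUG": {"IDLE"}}
print(analyze_fsm(fsm, "IDLE"))  # -> ({'DEBUG'}, {'DONE'})
```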
Designers are also concerned about the resettability of their designs and whether they power up into a known-good state. We have seen some interesting results when Real Intent’s Ascent XV tool is applied to RTL blocks generated by HLS. Besides analyzing X-optimism and X-pessimism, the Ascent XV tool can determine the minimum number of flops that need to have reset lines routed to them. To save routing resources and reduce power requirements, only this minimal set of flops should be reset; running additional reset lines does not improve the design.
Here are the results for a block that was 130K gates in size:
| Metric | Result |
|---|---|
| Number of Flops | 17,495 |
| Ascent XV Analysis Time (sec) | 20 |
| Uninitialized Flops Found | 646 |
| Redundant Flop Initializations | 11,896 |
In this example, the Ascent XV tool took 20 seconds to analyze all 17,495 flops and discover that 646 were uninitialized and that most of the roughly 16,800 other flops did not need to have reset signals routed to them. The savings were 68% compared to the unimproved design. We have seen similar savings on other blocks generated by HLS tools.
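The savings figure quoted above can be reproduced directly from the numbers in the table (a quick back-of-the-envelope check in Python):

```python
total_flops = 17_495
uninitialized = 646          # flops that must keep their reset
redundant_resets = 11_896    # flops whose reset routing can be removed

other_flops = total_flops - uninitialized  # the "roughly 16,800" above
savings = redundant_resets / total_flops   # fraction of all reset lines saved

print(other_flops)           # -> 16849
print(f"{savings:.0%}")      # -> 68%
```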
HLS is now an important part of the hardware flow and improves the productivity of designers. With easy generation of RTL code, designers should expect to use quick static verification tools such as lint, autoformal, and reset analysis to confirm quality and correct operation. This will save valuable time when designs are handed off to simulation and gate-level synthesis tools later in the flow.