Graham Bell Vice President of Marketing at Real Intent
It's a fact of life that semiconductor design is a worldwide activity, and that EDA companies are helping customers 24 hours a day. How international is this world? According to the latest statistics from the EDA Consortium, over 50% of the business activity is outside North America. The total of EDA, semiconductor IP, and design services revenue was $6.9 billion in 2013. Compared to a world population of 7.1 billion, that means roughly one dollar per person was spent on the wide world of design.
While Real Intent is not a large company in the world of EDA, we do have representation for our verification products throughout the world, and have just added two new distributors in Asia.
In India, we added a second distributor, Claytronic Solutions. This move extends Real Intent’s customer care efforts worldwide and addresses growing demand among design teams throughout India for local expertise and support of Real Intent’s advanced verification solutions.
Incorporated in 2010, Bangalore-based Claytronics provides specialized services and expertise at each stage of a product’s life cycle; it accelerates and simplifies the journey from concept to market with one-stop shopping for solutions and products in consumer electronics, multimedia, automotive and infotainment, and defense and aerospace. Claysol’s management team has deep design expertise and holds patents in multi-core parallel processing, multi-core architecture and video processing. In our view, Claysol offers the uncompromising level of service and support we were looking for to meet the surging interest in the Indian market.
Umesh Tallam, Engineering Director of Claytronics, is looking forward to the new collaboration since “partnering with leading international vendors like Real Intent provides our customers unparalleled tools to help them build globally competitive products quickly and cost-effectively.”
In Taiwan, we have added Kaviaz Technology as our distributor. The move strengthens Real Intent’s local sales and support for handling Taiwan’s fast-growing interest in Real Intent’s solutions for early functional verification and advanced sign-off verification. It also complements Real Intent’s existing sales and support teams in the rest of Asia.
Kaviaz, which means “best friend,” was established in 2008 to provide the best possible service and support for EDA customers in Taiwan. Its broad expertise spans logic design and verification tools, analog/mixed-signal tools, physical implementation and analysis tools, mask tools and IP distribution.
We picked Kaviaz since it is renowned for its long-standing technical expertise in the EDA arena, and for its uncompromising dedication to addressing customer needs. We think this synergistic partnership brings proven ways to accelerate design sign-off to electronic engineering teams throughout Taiwan.
Timing Huang, president of Kaviaz Technology, has told me he is always seeking out leading-edge verification solutions for innovative companies in the Taiwan market looking for efficiency gains. He thinks Real Intent’s Ascent and Meridian products will boost the productivity of his clients.
Real Intent is looking to expand further into Asia in support of our growing customer list. Stay tuned for further announcements.
Graham Bell Vice President of Marketing at Real Intent
I still get the daily newspaper, the San Jose Mercury News, delivered to my house. Recently, I came across the obituary for John Haslet Hall, one of the leading innovators at the birth of CMOS technology in Silicon Valley. I had not heard of Hall, and thought that you might also want to learn of his many wide-ranging contributions to the world of semiconductors.
John Haslet Hall, son of the late William McLaurine Hall, Jr. and Mary Helen (Ent) Hall, was born July 11, 1932 and died October 30, 2014.
Hall was an early and prolific Silicon Valley inventor. In a career that spanned over 60 years, Hall developed technology included in over 20 fundamental patents, including pioneering work in low-power CMOS integrated circuit technology. A 1992 San Francisco Chronicle article referred to Hall as, “one of Silicon Valley’s unsung innovators.”
Hall served in the U.S. Navy in the late 1950s, working with aircraft electronics development and testing, often riding in planes that were pulling target drones to collect data. He graduated from the University of Cincinnati in 1961 and sought to apply his chemical engineering education in the nascent semiconductor industry.
In 1962, Hall met semiconductor pioneer Dr. Jean Hoerni, a cofounder of Fairchild and inventor of the planar process, the basis of today’s electronics industry. Hoerni and Hall worked on several consulting projects together, which led to Hoerni asking Hall to work for him at Union Carbide. Hall’s work there included the development of the first on-board aircraft computer made entirely of integrated circuits (ICs), used for the SR-71 Blackbird.
Hall worked as Union Carbide’s director of IC Development from 1962 to 1967, and his years there included innovations that would lead the semiconductor industry. These include the first application of thin-film technology; the first dual transistor on a single chip; and the invention of dielectric isolation technology. The Union Carbide semiconductor plant Hall built in Mountain View later became Intel’s first production facility.
In 1968, Hoerni asked Hall to join him in the formation of a new company, Intersil. As co-founder of Intersil, Hall headed R&D and achieved a breakthrough in coating silicon-oxide gates with phosphorus glass, resulting in the first practical metal oxide semiconductor (MOS) processes. Hall’s Intersil team also developed the first N-channel memory chip at a time when most companies overlooked its potential.
Hall’s work in thin film resistors and CMOS technology formed the basis of Intersil’s electronic watch development for Seiko, which was chosen over a competing bid from RCA. Hall’s watch was the first successful quartz crystal watch, running on a one-volt battery that would last over a year.
Following the sale of Intersil, Hall declined Hoerni’s offer to join him in a new venture and opted to go out on his own with the backing of Seiko. Hall went to Japan to be the principal architect of Seiko’s (and Japan’s) first CMOS fabrication facility in 1970.
In 1971, Hall founded Micro Power Systems and for the next 15 years produced a string of commercial and technical successes. In one example, Hall competed with Motorola and Texas Instruments in a bid to Medtronic to create a computerized heart pacemaker. Hall’s design allowed the pacemaker to operate for 10 years without a battery replacement and enabled doctors to change its settings via remote control rather than invasive surgery.
In 1986, after a highly publicized technology-transfer dispute with the new management of Seiko, which was still a key investor in his company at the time, Hall was forced to leave Micro Power Systems. He initiated a lawsuit against Seiko that was settled out of court in 1990. In 1987, Hall founded Linear Integrated Systems, Inc. and continued to develop new IC and specialized discrete device technology. At the time of his death, Hall was continuing to lead the company as chairman of the board and chief executive officer, and was conducting research into further noise reduction in junction field-effect transistors.
In a departure from his semiconductor endeavors, Hall in 1992 founded Integrated Wave Technologies, Inc. (IWT), a speech recognition company that employed former Soviet scientists and engineers. As a sister company to Linear Systems, IWT developed body-worn speech recognition devices for DARPA, the Air Force, the Navy and the Department of Justice. These devices were highly successful in Iraq and Afghanistan operations, and IWT’s work was recognized as a significant accomplishment in the book DARPA: 50 Years of Bridging the Gap.
Hall was preceded in death by his parents, his brother William, his sister Jean Anderson and his son Richard Hall. He is survived by his children John Michael Hall (Sondra), Jennifer Hall, Jasmine Hall and Mary Helen Hall, and five grandchildren: Michael, William, Ozzalyn, Isabella and Sloan. Services to be held Monday, November 10, 2014, at 10:00 a.m. at Calvary Cemetery, 2650 Madden Avenue, San Jose, CA 95116. Reception to follow at 12:00 p.m. at Mariani’s, 2500 El Camino Real, Santa Clara, CA 95051.
Is Platform-on-Chip The Next Frontier For IC Integration?
Srinivas Vaidyanathan Staff Technical Engineer
I was musing the other day about the completeness of SoCs: they include a mix of embedded processors for programmable functionality, hardware engines that accelerate specific features such as graphics, and multiple interfaces for memory, buses, and peripherals. And this remarkably complete solution is delivered on a single die. We have the perfect building block for creating high-value, low-cost systems. But, even with Moore’s law allowing us to build more complex silicon, is new feature integration a scalable future for SoCs?
My conclusion is that we are approaching a steady state. From what I see, SoC design is still a custom solution in many ways, tailored to fit a generation of parts that meet some specific requirements. While complete in itself, the set of features cast in silicon offers only coarse control of functionality. This leaves the end-user having to provide additional software and hardware to fill in any feature gaps, at additional cost and time. And while the intended and configured functions of the SoC may have been implemented well, any feature extensions may bring compromises in performance.
When choosing between speed and configurability, designers make the choice of using either software running on a processor or custom dedicated hardware. Software is ever-forgiving, allowing multiple iterations toward the desired goal. Hardware’s rigidity, on the other hand, offers quick and reliable execution. Ideally, any desired feature enhancement would sit somewhere in this speed-configurability spectrum. Including this option in the SoC arsenal would provide the perfect platform to unlock additional potential from hardware.
If we borrow an idea from the world of FPGAs, SoCs can gain significant versatility by providing the reconfigurability of software in dedicated hardware. Traditionally, FPGAs are used for hardware designs that are continually evolving, giving design teams the flexibility to keep pace with change without the capital expenditure necessary for fixed silicon. For SoC customers, however, the constraints on area and power are just as critical as cost. Hence, a viable solution could be to include Programmable Gate Arrays (PGAs) in SoCs. In doing so, the choice of enhancing hardware can be weighed in the context of other requirements. Importantly, this pushes the hardware-software partitioning decision out to much later in the product cycle than is possible today.
While the hybrid concept of mixing fixed silicon with FPGAs has been explored before, the key difference today is the growing software developer community that has been built around SoCs. To put this in context, consider the organization of the Android software stack on an SoC, using the example of the TI OMAP in the figure below. You can see there are fixed associations between the underlying hardware and the upper layers of software abstraction. Introducing a PGA in hardware, however, adds a completely new dimension to the software stack. Software libraries that were previously routed through the processor, for lack of dedicated hardware, can now be offloaded to custom hardware created on the fly. Even libraries that have dedicated hardware accelerators, like graphics, can be augmented to cater to customized product requirements. With some imagination, we can envision self-evolving hardware, morphing to suit the dynamic demands that applications place on it.
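To make the idea concrete, here is a hypothetical sketch of how a driver layer might route a library call either to a kernel configured into the PGA fabric or to a CPU fallback. Everything here is an illustrative assumption: `PgaFabric`, `make_dispatcher`, and the LUT costs are invented names, not a real SoC API.

```python
# Hypothetical sketch: dispatching a library call to PGA hardware when the
# fabric has room for the kernel, otherwise falling back to software.

class PgaFabric:
    """Stand-in for the programmable gate array region of a PoC."""
    def __init__(self, capacity_luts):
        self.capacity_luts = capacity_luts
        self.loaded = {}            # kernel name -> LUTs consumed

    def try_load(self, kernel_name, cost_luts):
        # Configure the kernel into the fabric only if it fits.
        if cost_luts <= self.capacity_luts:
            self.loaded[kernel_name] = cost_luts
            self.capacity_luts -= cost_luts
            return True
        return False

def make_dispatcher(fabric, kernel_name, cost_luts, hw_impl, sw_impl):
    """Return a callable that prefers the hardware path when available."""
    use_hw = fabric.try_load(kernel_name, cost_luts)
    return hw_impl if use_hw else sw_impl

# Example: a "scale" kernel with identical hardware/software semantics.
fabric = PgaFabric(capacity_luts=1000)
scale = make_dispatcher(fabric, "scale", 400,
                        hw_impl=lambda xs: [2 * x for x in xs],   # offloaded
                        sw_impl=lambda xs: [2 * x for x in xs])   # CPU fallback
print(scale([1, 2, 3]))   # [2, 4, 6] either way; only latency would differ
```

The point of the sketch is the late binding: the hardware-software partitioning decision is made when the dispatcher is built, not when the SoC was taped out.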
Obviously, there is more work required in the software stack to ensure that the generated hardware does not violate system parameters. With that said, an architecture that bundles an SoC and PGAs on a single die has the potential to be the ideal platform for endless product possibilities. I call this new innovation Platform-on-Chip (PoC).
While an interesting idea, is there a market for PoC? Consider Google’s Project Ara, a modular smartphone that is designed to swap out modules to suit the end-user’s needs. Among its many goals, this device is aimed at reducing e-waste by allowing the user to “upgrade individual modules as innovations emerge”. For an SoC, however, the possibilities for adding customized features are restricted to the breadth of options the underlying SoC provides. With a PoC and its PGA component, emerging innovations are allowed more room to grow, further extending the platform’s lifetime. Extrapolating the idea even further to the impending growth of the Internet of Things (IoT), there would be new ways for developers to re-purpose silicon to meet non-standard requirements at faster hardware speeds. A parallel area of application could be security. In an era where personal information is increasingly finding its way onto Internet-enabled devices, real-time reconfigurable hardware can offer stronger means of achieving identity protection.
If we have reached the end of innovation for SoC, do you think PoC could be the answer?
DVClub Shanghai: Making Verification Debug More Efficient
Ramesh Dewangan Vice President of Application Engineering at Real Intent
DVClub Shanghai took place on Sept. 26, 2014 with presentations by Real Intent, Solvertec, Mentor Graphics, Cadence, Synopsys and ARM. The theme of the meeting was “Making Verification Debug More Efficient.” Before I talk about two of the presentations that were recorded, here is some quick background on DVClub Shanghai, which started at the end of 2013.
The principal goal of DVClub is to have fun while helping build the verification community through quarterly educational and networking events. The DVClub events are targeted at the semiconductor industry in China, with a focus on design verification. Membership is free and is open to all non-service-provider semiconductor professionals. Most members work in verification, but there are also plenty of entrepreneurs, students, managers, investors, and even design engineers who attend. There are at least four events every year: March, June, September and December.
Mike Bartley opened the event with a talk titled “Improving Debug – Our Biggest Challenge?” If you follow the link, you can see the recording of his presentation, where he talks about the six things that we need for improved debug.
Even with a high degree of design reuse, verification continues to be the long pole in design development. This has put huge stress on current functional verification methodologies, which rely primarily on dynamic verification. Design complexity has made debug cycle times longer and more unpredictable.
Static verification is a perfect complement to the predominantly dynamic verification in use today. Not only is static verification exhaustive, it needs minimal setup and offers faster debug cycles. Both structural and formal techniques have made dramatic advances in recent years by analyzing the designer’s intent. Structural static verification has expanded its effectiveness to several critical application domains. And formal techniques have progressed from an expert-user model to a mainstream-user model.
Static verification techniques have been successfully used in targeted problem domains like clock domain crossing, reset optimization, X-optimism/pessimism, FSM integrity and so on. My presentation provides specific design examples and shows how static techniques solve them more efficiently.
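To give a flavor of what a structural static check does, here is a toy sketch that flags clock-domain crossings lacking a two-flop synchronizer. The netlist model (a flat table of flops, their clock domains, and their fanin) is an invented illustration; real tools work on a fully elaborated design, not a Python dict.

```python
# Toy structural CDC check: find crossings into another clock domain that
# are not followed by a second back-to-back flop in the destination domain.

flops = {
    # name: (clock_domain, fanin_flop or None)
    "tx_data": ("clkA", None),
    "sync1":   ("clkB", "tx_data"),
    "sync2":   ("clkB", "sync1"),
    "rx_bad":  ("clkB", "tx_data"),   # crossing with no synchronizer chain
}

def is_synchronized(name):
    """A destination flop is treated as safe if some flop in the same
    domain takes it as fanin (i.e. a 2-flop synchronizer exists)."""
    domain = flops[name][0]
    return any(src == name and dom == domain
               for dom, src in flops.values())

def find_unsynchronized_crossings(flops):
    issues = []
    for name, (domain, fanin) in flops.items():
        if fanin and flops[fanin][0] != domain and not is_synchronized(name):
            issues.append((fanin, name))
    return issues

print(find_unsynchronized_crossings(flops))   # [('tx_data', 'rx_bad')]
```

Note that the check is exhaustive over the structure and needs no stimulus at all, which is exactly why the static approach complements simulation.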
Not every verification problem is a nail that needs a big hammer. Simulation is too expensive and time-consuming for the majority of verification problems. Why not ease the pain by using faster, targeted, and exhaustive static verification techniques to shorten your verification debug cycle?
ARM TechCon Video: Beer, New Meridian CDC, and Arnold Schwarzenegger?!
Graham Bell Vice President of Marketing at Real Intent
At ARM Tech Con 2014, I discussed beer, the new release of our Real Intent clock-domain crossing software Meridian CDC, and a new spokesperson for our company, with Sean O’Kane of ChipEstimate.TV. Enjoy!
New CDC Verification: Less Filling, Picture Perfect, and Tastes Great!
Graham Bell Vice President of Marketing at Real Intent
Real Intent will release our greatly extended Meridian CDC clock domain crossing software in November with new capabilities headlined by more hierarchical firepower and the launch of a user-configurable debugger.
The 2014.A edition, announced last week (on my wife’s birthday), will have 30% higher performance than the existing tool and a 40% smaller memory footprint. The formal analysis engine within Meridian has also been given a 10X boost in throughput.
In the YouTube video interview below, Ramesh Dewangan, vice-president of application engineering, points out that the bottom-up hierarchical flow is key to Meridian CDC’s giga-scale capacity (though the tool is equally capable of handling designs ‘flat’).
The hierarchical approach means that the complete design view of the SoC is available for CDC analysis at any time. There is no abstraction or approximation used that has the potential to miss bugs. To be more specific, there is neither abstract modeling nor waivers.
New iDebug for CDC debugging
Our new debugger specifically leverages this hierarchical approach. Named iDebug (the ‘i’ standing for ‘intent’), it draws upon a Meridian CDC database that captures all phases of clock domain crossing verification for a hierarchical analysis of the design’s intent.
The iDebug software identifies root causes and then presents issues to users in an easy-to-assess and easy-to-debug environment. We think it is a next-generation debug environment. It has an integrated GUI, and offers user-configurability and programmability through a command-line interface (CLI). All the CDC analysis data is stored in a database that can be accessed through the CLI. So you are not stuck with the one debug methodology that the tool provides. Instead, you can create your own debug methodologies, custom to your own design flows, which may include spreadsheet reports, graphical reports, scripting, and so on.
Run-time for CDC analysis
With recent envelope-pushing designs from AMD and NVIDIA both exceeding 5 billion gates, the tool has been designed to allow CDC checks to be undertaken at speed. The 40% decrease in memory and the other performance improvements should mean that most projects can be run overnight on a reasonably sized machine with a few hundred gigabytes of RAM.
ARM Fueling the SoC Revolution and Changing Verification Sign-off
Dr. Pranav Ashar Chief Technology Officer
ARM TechCon was in Santa Clara this week and Real Intent was exhibiting at the event. TechCon was enjoying its 10th anniversary and ARM was celebrating the fact that it is at the center of the System-on-Chip (SoC) revolution.
The SoC ecosystem spans the gamut of designs from high-end servers to low-power mobile consumer segments. A large and heterogeneous set of players (foundries, IP vendors, SoC integrators, etc.) has a stake in fostering the success of the ecosystem model. While the integrated device manufacturer (IDM) model has undeniable value in terms of bringing to bear large resources in tackling technology barriers, one could argue that the rapid-fire smartphone revolution we have experienced in the last five years owes in large part to the broad-based innovation enabled by the SoC ecosystem model. How are the changing dynamics of SoCs driving changes in verification requirements, tools and flows and thereby changing the timing sign-off paradigm?
ARM should be applauded for the significant role it has played in bootstrapping and further enabling the SoC ecosystem model. By licensing its processor and digital IP instead of manufacturing its own chips, ARM has freed its partners to aggressively build and refine their products without being reliant on general-purpose devices or rigid form factors. And it did not hurt that ARM’s advantage in the mobile space was a low-power architecture and instruction set designed to sip power rather than glug it. Going forward, the optimization and perpetuation of this model will depend on a deep commitment by the EDA industry to recognize and fill the specific needs inherent to this model. ARM has made a significant commitment on its side and works closely with EDA companies to develop reference methodologies (RMs) for implementation flows that enable ARM licensees to customize, implement, verify and characterize ARM processors for their chosen process technologies.
A case in point is that an SoC today is really a sea of interfaces. This is a consequence of the building-block design style used to create them. Since system timing is dominated by the delays found in long interconnect wiring paths, large monolithic ICs have given way to designs that use a number of small blocks with signal crossing interfaces. Besides the timing issue, each block can be optimized for low-power operation using an independent supply voltage and clock frequency control.
As a consequence, much of the performance optimization is being targeted at these interfaces in the form of aggressive design of protocols and their implementation. A second consequence is that most of these interfaces are asynchronous or need to be modeled that way. In other words, it would be more correct to say that an SoC today is really a sea of aggressively designed asynchronous interfaces!
This prominence of interfaces in the modern SoC has big implications on verification requirements, tools and flows and is changing the sign-off paradigm.
For starters, Clock-Domain-Crossing (CDC) verification is now a first-order sign-off requirement. CDC bugs are insidious in that they can remain unnoticed until after tape out and deployment. Their difficulty lies in that they are at the intersection of functionality and timing and neither functional simulation nor static timing analysis meets their challenge. It used to be the case that the number of clock crossings in a chip was small enough that manual review sufficed. Not any more. With more than fifty clock domains per SoC, and SoC sizes in the hundreds of millions of gates, it is absolutely essential that CDC sign-off be automated by means of a specialized tool that has deep and first-principles domain expertise in asynchronous interface design techniques and the typical implementation idioms therein.
An important point to register is that correct clock-crossing interface design is predicated not just on correct circuit implementation, but also on correct protocol design. As a result, the first CDC sign-off must happen at the pre-synthesis RTL abstraction level to intercept any protocol design bugs.
A further truism is that CDC sign-off is only as good as the environment setup feeding into it. Improper clock grouping, clock propagation, mode setup, reset propagation, etc. can lead to incorrect CDC analysis and a bad sign-off.
There are two implications of the above statements on timing-constraints closure in the modern SoC.
First, until recently, timing-constraints setup fed into the quality-of-results (QoR) steps of synthesis, physical design and static timing analysis. Going forward, timing-constraints closure also feeds into a black-and-white verification sign-off step. The timing-constraints specification exercise is, therefore, no longer just a question of dealing with over-designed paths and logic, or of compromising on the QoR spec. It is now part of a verification sign-off step, with the implication of possible very-expensive-to-fix field bugs if done incorrectly.
Second, CDC sign-off now starts at the pre-synthesis RT level. That is possible only if SoC-level timing-constraints are available at that stage. Basically, the obligation to plan, create and manage timing constraints has moved up an abstraction level.
All of the above points to the need for tools for precise CDC analysis and for full-featured timing-constraints creation and management starting at the RT level. Real Intent is very much in the business of filling this need with its Meridian family of CDC and Constraints tools.
There are other changes in the SoC sign-off paradigm. What have you seen? And what is your number one concern today? I am very much interested in hearing your comments.
Graham Bell Vice President of Marketing at Real Intent
Recently, Real Intent put out a new release of our Ascent Lint tool, which checks your RTL to make sure it meets the standards for good coding practice. Linting has the advantage of delivering very quick feedback on troublesome and even dangerous coding styles that cause problems which show up in simulation but would otherwise take much longer to uncover. With the right lint tool, you can catch the “low-hanging fruit” before tackling functional errors. In a recent blog, we discussed how a staged analysis, starting with Initial checks followed by Mature and Handoff checks, can very efficiently get you to ‘hardened’ RTL code that is ready to be integrated with the rest of the design.
Our latest release of Ascent Lint supports this staged analysis through a series of different policy files, and this is very effective for hand-coded RTL that designers create.
Another important aspect of linting is the requirement to verify that RTL code automatically generated by high-level synthesis (HLS) tools is ‘clean’ as well. You might think that code that is automatically generated does not need linting. In fact, the freedom given to designers in their HLS environments permits them to generate RTL code that should be checked.
One example flow is MathWorks’ HDL Coder™. It generates portable, synthesizable Verilog® and VHDL code from MATLAB functions, Simulink models, and Stateflow® charts. We announced in May that Ascent Lint is integrated with the HDL Coder user interface, which automates the setup of files and commands for Ascent Lint. This tight integration enables users to verify that the RTL code generated using HDL Coder is compliant with users’ coding conventions and industry standards, for a safe and reliable implementation flow for the digital synthesis tools used by ASIC and FPGA designers.
Another flow is the integration of Calypto’s Catapult high-level synthesis tool with Ascent™ Lint, which we announced last year. Catapult synthesizes ANSI C++ and SystemC to RTL code. The advantage of starting with C code is that verification can be done over 100X faster than at RTL, and designers get early feedback on their design architecture in terms of performance, area, and power. By integrating with Calypto’s synthesis tool, Ascent Lint enables designers to quickly go from the system level to gates, secure in the knowledge that their RTL code meets all of the industry quality standards in their implementation flow.
Besides our new integration with MathWorks, a number of other features and enhancements were introduced to Ascent Lint to keep it the highest-performance linting tool in the marketplace. For a perspective on these new capabilities, please view the video below by Srinivas Vaidyanathan, staff technical engineer at Real Intent.
It’s Time to Embrace Objective-driven Verification
Dr. Pranav Ashar Chief Technology Officer
This article was originally published on TechDesignForums and is reproduced here by permission.
Consider the Wall Street controversy over High Frequency Trading (HFT). Set aside its ethical (and legal) aspects. Concentrate on the technology. HFT exploits customized IT systems that allow certain banks to place ‘buy’ or ‘sell’ stock orders just before rivals, sometimes just milliseconds before. That tiny advantage can make enough difference to the share price paid that HFT users are said to profit on more than 90% of trades.
Now look back to the early days of electronic trading. Competitive advantage then came down to how quickly you adopted an off-the-shelf, one-size-fits-all e-trading package.
Banking has long been at computing’s cutting edge. What HFT illustrates today is a progressive shift in the strategy it uses to develop systems from tool-based (‘We have bought an e-trading system’) to objective-driven (‘Make our e-trades the fastest and most profitable’).
As I said, I want to set aside the fair/unfair debate around HFT, and take it simply as a high-profile illustration of how Wall Street’s approach to IT is evolving. Banks are continuously developing other systems based on objective-driven thinking. My point is that we can draw important lessons for SoC design from this overall shift, because we are moving – and need to move – in the same direction toward objective-driven verification. Less controversially (thankfully), but we should still follow the trend more aggressively.
Wall Street’s riches point the way for objective-driven verification
‘Objective-driven verification’ defined
What do we mean by ‘objective-driven’? At a high level, the mindset of the system architect has changed: He has gone from identifying useful tools and deploying them in isolation to starting with a pre-defined goal that is achieved through a customized synthesis of available tools and methods.
Going deeper, one can identify two triggers:
A recognition that systemic tasks have become so complex it is very unlikely that you can fully realize them using a single raw tool, or even a few. Multiple tools and techniques must be combined and used in a fuller context.
A deeper understanding of the inner workings of complex systems that allows architects to isolate the processes and cause-effect relationships relevant to their objectives.
These triggers describe IT trends in logic verification as well as in banking.
The ‘system’ in verification is the SoC. The raw tools are, first, simulation, but also static-timing analysis and formal analysis. After a healthy run of around 25 years, SoC complexity has caught up with and overtaken this coarse-grain raw-tool model.
Objective-driven verification begins with that deeper understanding of the SoC architecture and the processes involved in putting it together. The objectives themselves emerge from today’s greater knowledge of failure modes and hard-to-achieve verification goals.
The model moves away from treating logic verification as monolithic. It focuses instead on specific goals. For each, we now know that custom solutions are more effective. Objective-driven verification rewards us with a much deeper, much cheaper process.
Raw tools play a role but have become interchangeable and commoditized. The productivity of an SoC design group is no longer determined by the use of a particular simulator. Rather, productivity and the viability of the design depend on how well the group adopts objective-driven solutions.
The value today therefore resides in a layer that sits on top of commoditized raw tools which contains a deep knowledge of different failure-modes within a structured workflow. This is where your big verification dollars need to be spent.
It is a disruption of a logic verification business model long based on selling raw tools. Nevertheless, the assertion that future growth will come from objective-driven verification is already well illustrated in two specific instances.
Objective-driven verification is already here
Take verification for failures caused by asynchronous clock-domain crossing (CDC). Until recently, it entailed manual design review and the use of specialized synchronizer library cells in simulation. You bought a fast simulator and then pounded stimuli onto the special cell-equipped model. This worked for crossings up to, say, the dozens. But as they grew in number and complexity, the approach broke down. Asynchronous-crossing failures increased alarmingly.
In response SoC designers, aided by vendors like Real Intent, have carved out asynchronous-CDC as a distinct objective-driven verification task. They have adopted dedicated solutions and workflows that address the problem to sign-off. Objective: “There will be no failures caused by asynchronous crossings.”
Real Intent’s asynchronous CDC solution stack illustrates an objective-driven verification process. It starts with a first-principles understanding of the failure modes. Around that is built a synergy of structural analysis methods, formal analysis methods and simulation hooks. A workflow then guides the user through an iterative chip-environment setup and the progressive refinement of verification results until full-chip sign-off is achieved.
This workflow component shows that objective-driven verification goes beyond a simple rediscovery of the ‘point tool’. Context, relationships with other ‘objectives’ and their solutions, relevance to the overall goal, and even the UI play subtle but important roles that they did not play in the point-tool era.
Every SoC taped out today goes through an explicit asynchronous CDC sign-off based on a dedicated static solution of this type. However, I would note that the workflows associated with different solutions are materially different and lead to measurably different levels of productivity and quality of final results.
Objective-driven verification is also becoming the norm in X propagation. Logic simulation has long been an imperfect tool here: It can still incorrectly turn a deterministic value into an X, or an X into a deterministic value. The second effect is worse because it can mask bugs, giving false confidence in the chip’s correctness.
These insidious failures make it imperative that SoC design teams deploy objective-driven verification to catch them early. The same template applies as for asynchronous CDC: Synergistic structural and formal analysis with simulation hooks are joined to an intuitive and iterative workflow. This delivers progressively better results.
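The X-optimism problem described above can be made concrete with a small sketch that contrasts Verilog-style simulation of an `if` with an X-accurate evaluation. The three-valued evaluator here is a deliberately simplified illustration (the function names are invented), not the semantics of any particular simulator or tool.

```python
# Toy three-valued ("0/1/X") evaluation of a 2-to-1 mux,
# contrasting simulation semantics with X-accurate semantics.

X = "x"   # the unknown value

def simulate_mux(sel, d0, d1):
    """Verilog-style 'if (sel) q = d1; else q = d0;'.
    Simulation treats an X select as false -- X-optimism: the unknown
    silently resolves to the d0 branch and can mask a bug."""
    return d1 if sel == 1 else d0

def xprop_mux(sel, d0, d1):
    """X-accurate view: an X select yields X unless both inputs agree."""
    if sel == X:
        return d0 if d0 == d1 else X
    return d1 if sel == 1 else d0

# An uninitialized select line:
print(simulate_mux(X, 0, 1))   # prints 0 -- looks deterministic, bug masked
print(xprop_mux(X, 0, 1))      # prints x -- the uncertainty is preserved
```

The gap between the two results is exactly the class of bug that gives false confidence in simulation and that a dedicated X-propagation solution is built to expose.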
The list of high-value goals to which we can apply objective-driven verification is getting longer. The broader concept is spreading quickly out from Wall Street’s deep-pocketed IT pioneers. And importantly for SoC design, objective-driven verification techniques for asynchronous CDC and X-effects already demonstrate a value you can – well – take to the bank.
Autoformal: The Automatic Vacuum for Your RTL Code
Graham Bell Vice President of Marketing at Real Intent
The Roomba automatic vacuum cleaner may be the most popular home robot in the world. It wakes up, wanders around your house collecting ‘dust bunnies’ and other dirt and then parks itself, where it can recharge and be ready for the next cleaning cycle.
Real Intent also offers an automatic tool that cleans up your RTL code. Ascent IIV is an autoformal tool that automatically analyzes the implied intent of your RTL code. It verifies different kinds of sequences and reports back on those that are suspicious. Because the analysis is smart and hierarchical, it reports primary errors that, when corrected, can remove a cascade of secondary errors.
Here is a quick list of checks that Ascent IIV automatically performs:
FSM deadlocks and unreachable states
Bus contention and floating busses
Full- and Parallel-case pragma violations
Constant RTL expressions, nets & state vector bits
SystemVerilog ‘unique’, ‘unique0’, and ‘priority’ checks for if and case constructs
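As an illustration of the first check in the list, here is a toy sketch of FSM deadlock and unreachable-state detection over an invented transition table. A real autoformal tool of course extracts this from elaborated RTL rather than a hand-written Python dict.

```python
# Toy FSM lint: find states unreachable from reset, and deadlock states
# (states with no transition to any other state).

transitions = {
    "IDLE":  ["RUN"],
    "RUN":   ["DONE", "RUN"],
    "DONE":  ["DONE"],          # deadlock: once entered, never left
    "DEBUG": ["IDLE"],          # unreachable from reset
}

def fsm_lint(transitions, reset_state="IDLE"):
    # Reachability from reset by breadth-first traversal.
    seen, frontier = {reset_state}, [reset_state]
    while frontier:
        nxt = [t for s in frontier for t in transitions[s] if t not in seen]
        seen.update(nxt)
        frontier = nxt
    unreachable = sorted(set(transitions) - seen)
    deadlocks = sorted(s for s, outs in transitions.items()
                       if set(outs) <= {s})
    return unreachable, deadlocks

print(fsm_lint(transitions))   # (['DEBUG'], ['DONE'])
```

Because both properties follow from the structure of the state graph alone, no testbench or stimulus is needed to report them, which is what makes these checks "automatic".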
In July, Real Intent announced a new release of Ascent IIV. Here is a video interview with Lisa Piper, senior technical marketing manager, discussing how IIV makes debug even easier with new features such as causation trees and focused custom reports.