Analysis of Clock Intent Requires Smarter SoC Verification
Thanks to the widespread reuse of intellectual property (IP) blocks and the difficulty of distributing a system-wide clock across an entire device, today’s system-on-chip (SoC) designs use a large number of clock domains that run asynchronously to each other. A design involving hundreds of millions of transistors can easily incorporate 50 or more clock domains and hundreds of thousands of signals that cross between them.
Although the use of smaller individual clock domains helps improve verification of subsystems apart from the context of the full SoC, the checks required to ensure that the full SoC meets its timing constraints have become increasingly time consuming.
Signals involved in clock domain crossing (CDC), for example where a flip-flop driven by one clock signal feeds data to a flop driven by a different clock signal, raise the potential issue of metastability and data loss. Tools based on static verification technology exist to perform CDC checks and recommend the inclusion of more robust synchronizers or other changes to remove the risk of metastability and data loss.
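To make the hazard and its usual remedy concrete, the sketch below shows the classic two-flop synchronizer that CDC tools typically expect on single-bit control crossings. It is a minimal, hand-written illustration with invented module and signal names, not the output of any particular tool.

    // A minimal sketch of the standard remedy: a two-flop synchronizer
    // for a single-bit control signal entering the destination domain.
    // Module and signal names are illustrative only.
    module sync_2ff (
      input  logic dst_clk,   // destination-domain clock
      input  logic rst_n,     // destination-domain reset
      input  logic async_in,  // signal arriving from another clock domain
      output logic sync_out   // synchronized copy, safe to use in dst_clk
    );
      logic meta;  // first stage: may go metastable, never use directly

      always_ff @(posedge dst_clk or negedge rst_n) begin
        if (!rst_n) begin
          meta     <= 1'b0;
          sync_out <= 1'b0;
        end else begin
          meta     <= async_in;  // may sample mid-transition
          sync_out <= meta;      // extra cycle lets metastability resolve
        end
      end
    endmodule

Note that a two-flop synchronizer is only safe for single-bit signals; multi-bit buses need a handshake, a gray-coded counter, or an asynchronous FIFO, which is exactly the kind of structural knowledge CDC tools encode.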
Conventionally, the verification team would run CDC verification on the entire design database before tapeout, as this is the point at which it becomes possible to perform a holistic check of the clock-domain structure and ensure that every domain-crossing path is verified. However, on designs that incorporate hundreds of millions of gates, this is becoming impractical: the compute runtime alone can run into days, at a stage where every hour saved or spent is precious. And if CDC verification waits until this point, the violations, some of which may be false positives, can generate many weeks of remedial effort, after which another CDC verification cycle needs to be run. To cope with the complexity, CDC verification needs a smarter strategy.
By grouping modules into a hierarchy, the verification team can apply a divide-and-conquer strategy. Not only that, the design team can play a bigger role in ensuring that potential CDC issues are trapped early and checked automatically as the design progresses.
A hierarchical methodology makes it possible to perform CDC checks early and often to ensure design consistency such that, following SoC database assembly, the remaining checks can pass quickly and, most likely, result in a much more manageable collection of potential violations.
Traditionally, teams have avoided hierarchical management of CDC issues because of the complexity of organizing the design and ensuring that paths are not missed. A potential problem is that all known CDC paths within a block may be deemed clean, so the block is declared ‘CDC clean’. But some paths may escape attention because they cross hierarchy boundaries in ways that cannot be caught easily, largely because the tools do not have sufficient information about the logic on the unimplemented side of the interface and the designer has made incorrect clock-related assumptions about the incoming paths.
If those sneak paths were not present, it would be possible to present the already-verified modules as black boxes to higher levels of hierarchy such that only the outer interfaces need to be verified with the other modules at that level of hierarchy. For hierarchical CDC verification to work effectively, a white- or grey-box abstraction is required in which the verification process at higher levels of hierarchy is able to reach inside the model to ensure that all potential CDC issues are verified.
As the verification environment does not have complete information about the clocking structure before final SoC assembly, reporting will tend to err on the side of caution, flagging potential issues that may not be true errors. Traditionally, designers would provide waivers for flops on incoming paths that they believe not to be problematic, to avoid them causing repeated errors in later verification runs as the module changes. However, this is a risky strategy as it relies on assumptions about the overall SoC clocking structure that may not be borne out in reality.
Refinements to the model
The waiver model needs to be refined to fit a smart hierarchical CDC verification strategy. Rather than apply waivers, designers with a clear understanding of the internal structure of their blocks can mark flops and related logic to reflect their expectations. Paths that they believe are not an issue, and therefore do not require a synchronizer, can be marked as such and treated as low priority, focusing attention on those paths that are more likely to reveal serious errors as the SoC design is assembled and verified.
However, unlike paths marked with waivers, these paths are still in the CDC verification environment database. Not only that, they have been categorized by the design engineer to reflect their assumptions. If the tool finds a discrepancy between that assumption and the actual signals feeding into that path, errors will be generated instead of being ignored. This database-driven approach provides a smart infrastructure for CDC verification and establishes a basis for smarter reporting as the project progresses.
Reporting is then organized around the specification rather than delivered as a long list of uncategorized errors that may or may not be false positives. This not only accelerates the review process but also allows the work to be distributed among engineers. As the specification is created and paths are marked and categorized, engineers establish what they expect to see in the CDC results, providing the basis for smart reporting from the verification tools.
When structural analysis finds a problematic path that was previously thought to be unaffected by CDC issues, the engineer can zoom in on the problem and deploy formal technologies to establish the root cause and potential solutions. Once fixed, the check can be repeated to ensure that the fix has worked.
The specification-led approach also allows additional attention to be paid to blocks that are likely to lead to verification complications, such as those that employ reconvergent logic. Whereas structural analysis will identify most problems on normal logic, these areas may need closer analysis using formal technology. Because the database-driven methodology allows these sections to be marked clearly, the right verification technology can be deployed at the right time.
By moving away from waivers and black-box models, the database-driven hierarchical CDC methodology encourages design groups to take SoC-oriented clocking issues into account earlier in the design cycle. Concerns about interfaces, including those on modules designed by groups located elsewhere or even by different companies, are carried forward to the critical SoC-level analysis without the overhead of repeatedly re-verifying each port on the module. Through earlier CDC analysis and verification, the team reduces the risk of encountering a large number of schedule-killing violations immediately prior to tapeout, and can be far more confident that design deadlines will be met.
This article was originally published on TechDesignForums and is reproduced here by permission.
High-Level Synthesis: New Driver for RTL Verification
Graham Bell Vice President of Marketing at Real Intent
In a recent blog, Does Your Synthesis Code Play Well With Others?, I explored some of the requirements for verifying the quality of the RTL code generated by high-level synthesis (HLS) tools. At a minimum, a state-of-the-art lint tool should be used to ensure that there are no issues with the generated code. Results can be achieved in minutes, if not seconds, for generated blocks.
What else can be done to ensure the quality of the generated RTL code? For functional verification, an autoformal tool, such as Real Intent’s Ascent IIV product, can be used to ensure that basic operation is correct. The IIV tool will automatically generate sequences and detect whether incorrect or undesirable behavior can occur. Here is a quick list of what IIV can catch in the generated code (a small illustration follows the list):
FSM deadlocks and unreachable states
Bus contention and floating busses
Full- and Parallel-case pragma violations
Constant RTL expressions, nets & state vector bits
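As a small illustration, consider the kind of latent bug hiding behind a full-case pragma. The sketch below is hand-written for illustration, not HLS output:

    // Sketch of a full-case pragma violation: the pragma promises the
    // synthesizer that every reachable case is listed, but 2'b11 is not.
    module grant_decode (
      input  logic [1:0] state,
      output logic       grant
    );
      always_comb begin
        grant = 1'b0;               // simulation falls back to this...
        case (state)                // synopsys full_case
          2'b00: grant = 1'b0;
          2'b01: grant = 1'b1;
          2'b10: grant = 1'b1;
          // 2'b11 unlisted: synthesis may treat it as a don't-care
        endcase
      end
    endmodule

Simulation takes the default assignment for the unlisted state, while the pragma lets synthesis treat it as a don't-care, so the two can disagree in silicon. An autoformal tool can determine whether the unlisted state is actually reachable.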
Designers are also concerned about the resettability of their designs and whether they power up into a known good state. We have seen some interesting results when Real Intent’s Ascent XV tool is applied to RTL blocks generated by HLS. Besides analyzing X-optimism and X-pessimism, the Ascent XV tool can determine the minimum number of flops that need to have reset lines routed to them. To save routing resources and reduce power requirements, a minimal set of flops should be used. Routing additional reset lines does not improve the design.
Here are the results for a block that was 130K gates in size:
Number of flops: 17,495
Ascent XV analysis time: 20 seconds
Uninitialized flops found: 646
Redundant flop initializations: approximately 68% of all flops
In this example, the Ascent XV tool took 20 seconds to analyze all 17,495 flops and discover that 646 were uninitialized and that, of the roughly 16,800 other flops, most did not need to have reset signals routed to them. The savings were 68% compared to the unimproved design. We have seen similar savings on other blocks generated by HLS tools.
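A typical example of a flop that needs no reset line is a datapath register whose contents are qualified by a control-path valid flag. The sketch below is hand-written with invented names; only the flag must be reset:

    // Hand-written sketch: only the control flop needs a reset line.
    module pipe_stage (
      input  logic        clk,
      input  logic        rst_n,
      input  logic        in_valid,
      input  logic [31:0] in_data,
      output logic        out_valid,
      output logic [31:0] out_data
    );
      // Control flop: must reset, or downstream logic could see
      // power-up garbage flagged as valid data.
      always_ff @(posedge clk or negedge rst_n)
        if (!rst_n) out_valid <= 1'b0;
        else        out_valid <= in_valid;

      // Datapath flops: no reset needed. Their power-up value is never
      // consumed, because out_valid gates every use of out_data.
      always_ff @(posedge clk)
        out_data <= in_data;
    endmodule

Reset analysis automates exactly this kind of reasoning across tens of thousands of flops.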
HLS is now an important part of the hardware flow, and improves the productivity of designers. With easy generation of RTL code, designers should expect to use quick static verification tools such as lint, autoformal, and reset analysis to confirm quality and correct operation. This will save valuable time when designs are given to simulation and gate-level synthesis tools later in the flow.
Underdog Innovation: David and Goliath in Electronics
Ramesh Dewangan Vice President of Application Engineering at Real Intent
The story of “David and Goliath” from the Book of Samuel has taken on a secular meaning, describing any underdog situation: a contest where a smaller, weaker opponent faces a much bigger, stronger adversary. Not just in EDA, but companies across the technology industries deal with this struggle.
Organizations have moved from a “build once, last forever” approach to one of “build fast and improve faster” to meet the dynamic requirements of their customers. In order to scale, evolve and respond, companies are choosing between two business philosophies: one focuses on building larger, process-driven yet efficient organizations; the other on smaller, more efficient teams.
The panel discussion “The Paradox of Leadership: Incremental Approach to Big Ideas” at the recent Confluence 2015 conference addressed this question. It explored the pros and cons of each of these philosophies and tried to gauge whether there is a preferred way to create success, as part of the conference theme: “Building the Technology Organizations of Tomorrow.” In my previous blog, Billion Dollar Unicorns, I discussed which companies were leading innovators, but the question remains: how do companies get there?
Confluence 2015 Panelists from Facebook, Pactera Technology, Saama Capital, SAP, and Zinnov.
Whereas industry startups (the Davids) have the inherent advantage of being nimble and focused, the necessary ingredients for significant innovation, large companies (the Goliaths) suffer from bureaucratic processes that can dampen or kill innovation.
‘0-1’ denotes a major innovation, and may be a disruptive solution, a new product or technology
‘1-n’ denotes an evolutionary or incremental innovation
Large companies are good at 1-n innovation. The panelists emphatically asserted that the only way to achieve 0-1 innovation in the large companies is to form a separate group with the right skills to focus on the specific innovation. This team can be guided by a corporate sponsor (adult supervision) or could be an independent subsidiary.
On the other hand, startups are formed on the very basis of a major idea that leads them to 0-1 innovation. In many cases, a 0-1 idea that couldn’t see the light of day in a large company is the very reason a startup is formed. In the EDA context, think Silicon Perspective, Sierra Design Automation, Berkeley Design Automation, and numerous others.
This raises an interesting question: what about the startups established for a decade or more, which already have differentiated products? Let us call them “established Davids” (eDavids). Atoptech, Atrenta, Berkeley DA (recently acquired), Calypto, Forte (recently acquired), Jasper (recently acquired), and Real Intent come to mind. Whereas eDavids are still working on new products (0-1 innovation), 1-n innovations form the bulk of their focus, as the majority of their work is on improving products that have had a large customer base for a good number of years.
For example, Real Intent has had the leading Clock Domain Crossing (CDC) product in the market for several years. It competes with big and small players alike, and continues to deliver 1-n innovations in CDC.
Does it mean eDavids are not differentiating against Goliaths?
First of all, what we consider a 1-n innovation by an eDavid is sometimes a 0-1 innovation for a large company. For example, one of the large EDA companies is still working on a viable CDC product. Another large company has ceased to innovate, and its current product is on life support. A third large company has had a CDC product in the market for years, but, their customers tell us, with a low rate of innovation.
Then there is the question of how you distinguish between 0-1 and 1-n innovations in an established product. For example, Real Intent introduced a completely unique next-generation configurable CDC debug environment with a command-line interface. Real Intent also improved the data model that enables its CDC tool to run full-chip CDC analysis on a one-billion-gate chip. Should we call these 0-1 innovations or 1-n?
The debate on how to do innovation among the Davids (established or not) and the Goliaths will not cease, not even in EDA! But one thing is clear: the eDavids are having a field day with the success of their innovations, thanks to the immense value their customers realize!
This article was originally published on TechDesignForums and is reproduced here by permission.
Constraints are a vital part of IC design, defining, among other things, the timing with which signals move through a chip’s logic and hence how fast the device should perform. Yet despite their key role, the management and verification of constraints’ quality, completeness, consistency and fidelity to the designer’s intent is an evolving art.
Why constraints management matters
Constraints management matters for a couple of reasons: as a way of ensuring that the intent of the original designers, be they SoC architects or third-party IP providers, is taken into account throughout the design process; and for their ability to enable better designs.
For example, it’s possible to use constraints to define ‘false paths’: routes through the logic that cannot affect its overall timing and so need not be optimized, giving the synthesis and physical implementation tools greater freedom to act.
Functional false paths are rare. But the ability to define a false path is often used to denote asynchronous paths or signals that timing engines don’t have to care about because they only transition once, for example in accessing configuration registers during boot sequences. Without effective constraints management it is easy to lose track of the rationale for particular constraints, and hence the opportunity for greater optimization.
It is also possible to define ‘multi-cycle paths’, through which signals are expected to propagate in more than a single clock cycle. Designers use multi-cycle path constraints in two ways: to denote paths that really are functionally multi-cycle paths; and as a way around corporate methodologies that ban the setting of false-path constraints. In this scenario, designers define a multi-cycle path with a large multiplier as another way to relax timing requirements.
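In SDC terms, these two kinds of exception look like the following minimal sketch; the pin and instance names are invented for illustration:

    # A path that cannot affect functional timing: exclude it from
    # timing optimization and analysis.
    set_false_path -from [get_pins u_cfg/config_reg*/Q] \
                   -to   [get_pins u_core/mode_sel*/D]

    # A genuinely multi-cycle transfer: allow two clock cycles for setup,
    # and move the hold check back accordingly, as SDC requires.
    set_multicycle_path 2 -setup -from [get_pins u_alu/acc_reg*/Q] \
                                 -to   [get_pins u_alu/res_reg*/D]
    set_multicycle_path 1 -hold  -from [get_pins u_alu/acc_reg*/Q] \
                                 -to   [get_pins u_alu/res_reg*/D]

Because each command records so little of the designer's reasoning, the rationale behind every exception is exactly what a constraints-management methodology has to capture.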
Multi-mode designs, for which different constraints may apply to particular paths in different operating modes, present another constraint-management challenge. It is easy to lose track of the rationale for each constraint in each mode, and to overlook potential conflicts between multiple constraints applied to the same path in different modes.
Constraints management challenges
Managing and verifying design constraints presents a number of challenges to methodology developers and verification engineers. The first is that of carrying forward a designer’s intent, expressed in the constraints that accompany the logic definition, throughout the design flow from abstract code through synthesis and related transformations (such as test insertion) to gates in silicon.
The second, in this age of increasing chip sizes and shrinking timescales, is ensuring that verification engineers aren’t overwhelmed with such large volumes of debug data that they are unable to analyze it effectively and act upon it quickly as they work to sign off the constraints.
These issues are not well addressed in today’s methodologies: designers often use custom scripts to check the properties of constraints, such as quality and consistency.
Formal approaches can be useful in this context, but because of their speed and capacity limitations, it makes sense to develop a process of stepwise constraints refinement, using a series of targeted analyses and interventions to address the simpler issues. This reduces the burden on formal tools when they are eventually pressed into service.
In this approach, likened by some to peeling an onion, verification engineers might start by checking that the existing constraints have been correctly applied to the design. The next step could be to define all the paths that can be safely ignored, using algorithmic approaches to find such paths and denote them by adding constraints to the design. For example, multi-cycle paths need a retention capability at their start and finish, so an algorithm can check for that. The algorithm needs smarts, though: a multi-cycle path may exploit retention capabilities from elsewhere in the design, such as a state machine that is driving it, so the analysis needs to consider the path’s context as well.
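For instance, a structurally safe two-cycle path typically has enable-qualified (retention-capable) registers at both ends, so the data is held stable while the slow logic settles. The following hand-written SystemVerilog sketch, with invented names, shows the pattern such an algorithm looks for:

    // Both registers load only when en_half_rate is high (every other
    // cycle), so the multiply between them legitimately has two full
    // clock cycles to settle.
    module mcp_stage (
      input  logic        clk,
      input  logic        en_half_rate,  // asserted every other cycle
      input  logic [15:0] a, b,
      output logic [15:0] result
    );
      logic [15:0] src_reg;

      // Launch end: holds (retains) its value for two cycles.
      always_ff @(posedge clk)
        if (en_half_rate) src_reg <= a + b;

      // Capture end: samples on the same enable, two cycles later.
      always_ff @(posedge clk)
        if (en_half_rate) result <= src_reg * 16'd3;
    endmodule

The matching exception would be a set_multicycle_path of 2 between the two registers; the structural check confirms that the enables justify it.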
These analyses can be done quickly, before applying formal techniques that risk delivering such detailed reports that engineers get overwhelmed. Effective constraints verification tools need to be able to categorize exceptions based on predefined principles, to provide a prioritized view of what’s important.
Ensuring consistency between SoC and block-level constraints
As the use of IP increases, constraints files are providing a useful way to ensure that the same timing budgets are not being allocated twice, once at the block level and once at the SoC level.
Checking for this kind of consistency throws up subtle issues. For example, an IP block may include asynchronous paths that are recognized within a block-level constraint. At the SoC level, though, the IP block’s asynchronous paths may not matter and so can be safely ignored. There’s a twist, though – if other signals within the IP block depend on these paths, then the original constraints on those paths should be taken into account after all.
The key is to be able to assess block-level constraints within the SoC context, which may be easier said than done if the SoC constraints file doesn’t include placeholders for these issues. For example, how do we promote an internally generated clock, derived from a signal on the IP boundary, up to the SoC level?
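A common concrete case is a divided clock created inside the IP: at the SoC level, the same relationship has to be re-declared against the chip's real clock source. A minimal SDC sketch, with invented names:

    # Inside the IP, a divide-by-2 clock is derived from a boundary pin.
    # Promoted to the SoC level, it must be declared relative to the
    # chip's actual clock so that paths it launches are timed correctly.
    create_generated_clock -name ip_clk_div2 \
        -source [get_ports soc_clk] \
        -divide_by 2 \
        [get_pins u_ip/u_clkdiv/q_reg/Q]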
It is also important to remember a second form of consistency that needs checking – between blocks. Depending on the context in which a block is being driven, it may be considered as synchronous or asynchronous. If a tool regards one of the instantiations of the block as correct, it may see other instantiations in different contexts as incorrect – creating a reporting issue.
Given the importance of constraints in defining how an IC is meant to work, it is increasingly important that their quality, completeness, and consistency is properly verified, and that they are correctly applied throughout the whole design elaboration process.
The best way to verify constraints is to develop a step-by-step approach, tackling particular classes of issue at a time, supported by tools that can sort and prioritize their error reports so that engineers can focus on the most important issues first. If these tools also help preserve the design intent expressed in the constraints all the way through the process, that is a bonus.
Ramesh Dewangan Vice President of Application Engineering at Real Intent
The business magazine, Fortune, in a Feb. 2015 article proclaimed The Age of Unicorns — private companies valued at more than $1 billion by investors. Unicorns are the stuff of myth, but billion-dollar tech start-ups seem to be everywhere, backed by a bull market and a new generation of disruptive technology. According to a recent New York Times article, there are over 50 unicorns in Silicon Valley right now.
Upcoming unicorns formed a popular discussion topic at the Confluence 2015 conference organized by Zinnov, on March 12th in Santa Clara, Calif. The conference theme was “Building the Technology Organizations of Tomorrow”.
Here is a sampling of six unicorns that have emerged as real winners using innovative strategies:
Airbnb (San Francisco) is a web marketplace for the rental of local lodging, with listings in 192 countries. It uses social media technology to conduct background checks for both providers and renters, to amplify stories, connect with travelers and ultimately drive business growth. And you thought Facebook was just for time-wasters! Watch this YouTube video to see how Airbnb leverages social media.
Uber operates a mobile-phone based transportation network using private cars and taxis. It employs just 3 people per city when it first launches operations in a new location. The teams get support from the San Francisco headquarters mainly for IT operations. The team also leverages the network of operators in other cities. In contrast, rivals employ hundreds of employees to manage a driver network. This fat-free model is helping Uber to roll out operations at a rapid pace.
Flipkart (Bangalore) is a web store and sellers marketplace in India. It was established in 2007, and is valued at $12 billion. One specific feature, “Cash on delivery”, introduced in 2013, accelerated their sales significantly. You hand over the required cash to the delivery staff, and get the product handed over to you in return, all with a human touch. They figured that India is primarily a cash-driven economy where plastic card penetration is extremely low (<1%). Why couldn’t Amazon think of it?
The high-definition personal camera company GoPro is based in San Mateo, Calif. It raised $427 million in its 2014 IPO, at a valuation of $2.96 billion. It turned its customers into a stoked sales force by enabling users to flood the Internet with videos of their own adventures. In 2013 alone, GoPro customers uploaded 2.8 years’ worth of video featuring GoPro in the title. Each video not only serves as a customer testimonial, it is guerrilla advertising, giving potential customers millions of reasons why they should buy one of GoPro’s little cameras. To learn more, read the Wired article Why GoPro’s Success Isn’t Really About the Cameras.
Pivotal Labs, based in San Francisco, offers a next-generation Platform-as-a-service (PaaS) for creating web applications in the cloud. It has grown to over 400 consultants, with an office presence in nine major tech hubs in the US and now internationally in Toronto and London. They use pair programming (agile software development) with their clients, a technique in which two engineers work together at one computer, write code, and collaborate on solutions to problems. Pair programming with the clients is the most common reason they choose to work with Pivotal since it accelerates learning and expertise. Check out this video article on how Pair Programming is the secret sauce to Pivotal Labs’ growth and success.
Zoho University, in Chennai India, started as a corporate social responsibility experiment a decade ago. The IT university has no exams, deadlines or assignments, but students are paid to attend and graduates receive a professional certificate. Zoho University is now among the largest contributors to the 2,600-strong workforce of the India-based IT company Zoho Corporation. Nearly 15%, or about 300, of the company’s employees are graduates of Zoho University. Learn more about this innovative educational institution in this video interview of Sridhar Vembu, CEO of Zoho.
So, what about the design automation industry?
First of all, EDA startups will not have billion-dollar valuations, given that the market value of the whole industry is less than $20 billion. So, let’s define our one-horned wonders as the hot startups that are ready to deliver significantly superior products compared to the big three of EDA.
So, where are the EDA unicorns? Where will they come from?
I believe that the unicorns will be the ones using innovative strategies to provide solutions that tackle highly difficult pain points in chip design and prevent chip-killer problems. As I mentioned in my blog Redefining Chip Complexity in the SoC Era, we are dealing with chip complexity that is orders of magnitude higher than in the past. The complexity comes not only from sheer size (approaching 1 billion gates) or lower process nodes, but also from the scale of IP integration, complex low-power requirements, asynchronous interfaces, X-propagation risks, verification bug escapes, and so on.
EDA unicorns will create high capacity and high performance methodologies to prevent chip failures and provide a reliable sign-off solution!
My Impressions of DVCon USA 2015: Lies; Experts; Art or Science?
David Scott Principal Architect
Last week I attended the Design and Verification Conference in San Jose. It had been six years since my last visit to the conference. Before then, I had attended five years in a row, so it was interesting to see what had changed in the industry. I focused on test bench topics, so this blog records my impressions in that area.
First, my favorite paper was “Lies, Damned Lies, and Coverage” by Mark Litterick of Verilab, which won an Honorable Mention in the Best Paper category. Mark explained common shortcomings of coverage models implemented as SystemVerilog covergroups. For example, a covergroup has its own sampling event, which may or may not be appropriate for the design. If you sample when a value change does not matter to the design, the covergroup counts a value as covered when in fact it really isn’t. In the slides, Mark’s descriptions of common errors were pithy and, like any good observation, obvious only in retrospect. More interestingly, he proposed correlating coverage events via the UCIS (Unified Coverage Interoperability Standard) to verify that they have the expected relationships. For example, a particular covergroup bin count might be expected to be the same as the pass count of some cover property (in SystemVerilog Assertions) somewhere else, or perhaps to match some block count in code coverage. It struck me that some aspects of this must be verifiable using formal analysis. You can read the entire paper here and see the presentation slides here.
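To illustrate the sampling pitfall, here is a hand-written sketch (not from the paper), with invented signal names:

    module cov_sketch (
      input logic       clk,
      input logic       valid,
      input logic       ready,
      input logic [7:0] addr
    );
      // Pitfall: samples on every clock, so idle-cycle garbage on addr
      // is counted as coverage.
      covergroup addr_cg @(posedge clk);
        coverpoint addr { bins low = {[0:127]}; bins high = {[128:255]}; }
      endgroup

      // Safer: sample only when a transfer actually completes.
      covergroup addr_valid_cg @(posedge clk iff (valid && ready));
        coverpoint addr { bins low = {[0:127]}; bins high = {[128:255]}; }
      endgroup

      addr_cg       cg1 = new();
      addr_valid_cg cg2 = new();
    endmodule

The first covergroup happily marks addr values as covered on idle cycles; the second counts a bin only when the design actually consumed the value.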
I was also impressed by the use of the C language in verification: not SystemC, but old-fashioned C itself. Harry Foster of Mentor Graphics shared some results of his verification survey, and there were only two languages whose use had increased year over year: SystemVerilog and C. For example, there was a Cypress paper by David Crutchfield et al. in which configuration files were processed in C. Why is this, I wondered? Perhaps because SystemVerilog makes it easy via the Direct Programming Interface (DPI): you can call SystemVerilog functions from C and vice versa. Also, a lot of people know C. I imagine if there were a Python DPI or Perl DPI, people would use those a lot as well!
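The DPI makes the connection almost trivial. A minimal sketch with invented names, C on one side and SystemVerilog on the other:

    /* config.c: plain C, compiled and linked into the simulation. */
    #include <stdio.h>

    int load_config(const char *filename) {
        FILE *f = fopen(filename, "r");
        if (!f) return -1;      /* let the testbench decide what to do */
        /* ... parse configuration settings here ... */
        fclose(f);
        return 0;
    }

    // tb.sv: import the C function and call it like a native one.
    import "DPI-C" function int load_config(string filename);

    module tb;
      initial begin
        if (load_config("test_cfg.txt") != 0)
          $fatal(1, "could not read configuration file");
      end
    endmodule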
Of course, the Universal Verification Methodology (UVM) is becoming, well… almost universal. I get the impression that verification architects are turning into software engineers. They are having fun, if that is the word, creating abstractions so that they can re-use the same top-level verification code in different circumstances, with differing design blocks or versions of IP. But like creating classes in C++ software, as I do for Real Intent, there are many different ways of doing the same thing. It seems to me UVM has made the verification problem less constrained rather than more constrained, in some sense, and that does add some risks, as well as make static analysis more difficult.
The crowd got a kick out of the fact that even the UVM experts can’t agree among themselves how much of it is minimally necessary; there were some lively discussions among the presenters in the UVM Session on Wednesday afternoon. First, Stu Sutherland and Tom Fitzpatrick proposed a minimal subset. The next two authors contradicted it. One feature that Tom said never to use was then the subject of a paper by John Aynsley. Last in the session, my friend Rich Edelman described his UVM template generator. I think there could be as many template generators as authors!
Some presentations had the tinge of an advertisement. There was an “e” paper where a user described reasons to miss aspect-oriented programming, which is not found in SystemVerilog. For the first time, I got a good definition of aspect-oriented programming, which you will find on Wikipedia, as focused on cross-cutting concerns. My paraphrase of cross-cutting concerns is a feature that usually requires implementation in multiple locations; an aspect-oriented language can put the cross-cutting concerns in one place. But it also strikes me that an aspect-oriented language really allows the extension or re-definition of anything from anywhere. This may in fact be aspect-oriented, or it may not; nothing guarantees that it is. If not, you risk a giant mess where you need to read all the source code to understand anything. At least, object-oriented languages like SystemVerilog have features that push people in an object-oriented direction.
Finally, for Real Intent, I was encouraged to hear from Harry Foster, during the “Art or Science?” panel, that “formal apps” — or focused formal applications dedicated to analysis of a particular problem area — grew in usage year-to-year by over 60%, and this is the fastest-growing area for EDA tools. I’m glad to be working for a company in such an interesting area.
P.S. The answer, by the way, to the question of whether verification is “Art or Science” is easy. Of course, it’s both!
We are at the dawn of a new age of digital verification for SoCs. A fundamental change is underway. We are moving away from a tool and technology approach — “I have a hammer, where are some nails?” — and toward a verification-objective mindset for design sign-off, such as “Does my design achieve reset in two cycles?”
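Such an objective can be written down directly as a property. A minimal sketch in SystemVerilog Assertions, assuming the enclosing module defines clk, rst_n, and a state register with an IDLE encoding:

    // Objective: within two cycles of reset deasserting, the design
    // must reach its known-good idle state.
    property reset_in_two_cycles;
      @(posedge clk) $rose(rst_n) |-> ##[1:2] (state == IDLE);
    endproperty

    assert property (reset_in_two_cycles)
      else $error("design did not reach IDLE within two cycles of reset");

A static tool can prove such a property exhaustively, where simulation could only sample it.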
Objective-driven verification at the RT level now is being accomplished using static-verification technologies. Static verification comprises deep semantic analysis (DSA) and formal methods. DSA is about understanding the purpose and intent of logic, flip-flops, state machines, etc. in a design, in the context of the verification objective being addressed. When this understanding is at the core of an EDA tool set, a major part of the sign-off process happens before the use or need of formal analysis.
The right mix of these two components — DSA and formal methods — significantly reduces the need for dynamic analysis (simulation). Although dynamic analysis continues to have a role, increasingly it is viewed as a backstop and not the main focus of the verification flow. Any simulation must be absolutely necessary and be tied to a companion static analysis step.
Pranav also covered this topic in a recent interview with Warren Savage, President and CEO of IP Extreme, on his IP Watch YouTube channel. Pranav shares his background in the high-tech industry before the conversation turns to verification and how it has changed over the years.
New Ascent Lint, Cricket Video Interview and DVCon Roses
Graham Bell Vice President of Marketing at Real Intent
New Ascent Lint with DO-254 Compliance Testing
On February 25 we announced the 2015 release of Ascent Lint for comprehensive RTL analysis and rule checking. The new version for 2015 delivers enhanced support for the SystemVerilog language, DO-254 policy files for compliance testing of complex electronic hardware in airborne systems, deeper rule coverage and easy configurability. We believe it is the fastest, highest-capacity and most precise lint solution in the market.
Additional enhancements and new features for Ascent Lint include:
Enhanced VHDL finite state machine (FSM) handling for deeper analysis
17 new VHDL and 12 new Verilog lint rules that ensure design code quality and consistency for a wide range of potential issues
Lower noise in reporting of design issues
To read further details about the announcement, click here. For additional insights and comments from Srinivas Vaidyanathan, staff technical engineer, including his take on the Cricket World Cup, please watch the video interview below.
Real Intent at DVCon 2015: Verification Solutions and Roses in Booth #602
We will exhibit our Ascent and Meridian products in Booth #602 at the 2015 Design & Verification Conference & Exhibition (DVCon 2015) next week. Visitors to our booth also will receive a rose from Real Intent, a sweet tradition for two years now. DVCon, which typically attracts more than 800 attendees, is the premier industry conference for design and verification engineers of all experience levels, and for engineering managers.
DVCon Expo Booth Crawl
Monday, Mar. 2, 5-7 p.m. – food and drink provided
DVCon Expo Exhibit
Tuesday, Mar. 3 and Wednesday, Mar. 4 from 2:30-6:30 p.m.
at the Doubletree Hotel, San Jose, Calif.
Happy Lunar New Year: Year of the Ram (or is it Goat or Sheep?)
Graham Bell Vice President of Marketing at Real Intent
Lunar New Year’s Day falls on Thursday, February 19, 2015. According to Chinese astrology, 2015 is the year of the Wooden Ram and is the 4,712th year in the traditional calendar. The original Chinese word for this year’s animal is “yang,” a generic term for various horned ruminating mammals. In translation, people have interpreted the word differently, and communities pick the animal that represents the qualities they admire. For example, sheep are associated with mildness and moderation, which is seen as an ideal attitude by some Asian societies, so they will call 2015 the Year of the Sheep.
You can learn an overwhelming amount of information at various web pages. The following Wikipedia page is a good place to start: Goat (zodiac). Let’s just say that the Year of the Ram will be an auspicious one and will bring a happy turnaround in fortunes in the coming months.
Happy New Year!
P.S. I am reminded of the stories about early computer translation programs that converted “hydraulic ram” into the equivalent of “water goat,” which is not the same thing!
Video: Clock-Domain Crossing Verification: Introduction; SoC challenges; and Keys to Success
Graham Bell Vice President of Marketing at Real Intent
In the YouTube video interview below, Oren Katzir, vice-president of application engineering, introduces the topic of clock-domain crossing (CDC) verification. He identifies the four key issues that must be addressed to achieve SoC sign-off, and the features Real Intent’s Meridian CDC tool offers to handle the deluge of data that can arise in CDC analysis and to work effectively with different design methodologies. I am sure you will learn something from Oren’s experience with many customers’ designs.