CDC Verification of Fast-to-Slow Clocks – Part 3: Metastability-Aware Simulation
Dr. Roger B. Hughes, Director of Strategic Accounts
We continue the short blog series that addresses the issue of doing clock domain crossing analysis where the clocks differ in frequency. In Part 1 and Part 2, we discussed the use of structural and formal checks when there is a fast-to-slow transition in a clock domain crossing. In this blog, we will present the third and final step using a design’s testbench.
The next step in verifying fast-to-slow clock domain crossings is to run metastability-aware simulation on the whole design. A regular simulation testbench has no notion of what would happen if metastability were present on the data or control paths within the design, yet one of the key reasons for doing CDC checks is to ensure that metastability does not affect a design. After structural analysis ensures that all crossings contain synchronizers, and formal analysis ensures that pulse widths are sufficient and data is stable, a whole-chip metastability-aware simulation is needed to determine whether the design is still sensitive to metastability. Functional monitors and metastability checkers are shown in Figure 7. No changes are made to the design; the necessary monitors and checkers are written in an auxiliary Verilog simulation testbench file, which the original simulation testbench simply references to invoke the metastability checking. As a prerequisite, this step requires that the design have a detailed simulation testbench.
Figure 7 – Metastability-aware simulation checks the tolerance of downstream logic to the presence of jitter in the data path through the use of functional monitors and CDC checkers.
Meridian CDC enables metastability simulation sign-off by offering two capabilities: it randomly inserts cycle jitter onto control and data crossings to mimic the metastability effect, and it writes simulation checkers to catch violations during simulation.
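To make the jitter-injection idea concrete, here is a hand-written sketch of how one cycle of random jitter could be applied to a crossing signal. This is illustrative only — it is not the code Meridian CDC generates, and the signal names are invented:

```systemverilog
// Illustrative cycle-jitter injector -- NOT the checkers Meridian CDC
// writes. Each receive-clock cycle it randomly presents either this
// cycle's or last cycle's value of a crossing signal, mimicking a
// first-stage synchronizer flop that resolved its metastability "late".
module jitter_injector (
  input  logic clk_rx,    // receive-domain clock
  input  logic sync_in,   // value arriving at the first synchronizer flop
  output logic sync_out   // possibly-jittered value seen by downstream logic
);
  logic delayed;
  always_ff @(posedge clk_rx) begin
    delayed  <= sync_in;
    // coin flip each cycle: on-time capture, or one cycle of jitter
    sync_out <= $urandom_range(0, 1) ? sync_in : delayed;
  end
endmodule
```

In a real flow a module like this would be bound onto each crossing from the auxiliary testbench file, with functional checkers confirming that the injected jitter never changes observable behavior.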
FAST-TO-SLOW CLOCK METHODOLOGY SUMMARY
To provide rigorous clock domain crossing checks on a design, especially one containing transitions of a fast-to-slow nature, three steps must be done:
1. Use structural checking to ensure all crossings – including fast-to-slow and slow-to-fast clock crossings – are CDC safe. This means that every crossing has been checked to have its data switched via a control signal with the appropriate levels of synchronization.
2. Use formal verification on all CDC crossings to ensure control signals have a sufficient pulse width for the receiving domain clock to capture the transmit domain’s control pulse. This step is especially important for fast-to-slow clock domain crossings. Use formal analysis to examine the data transitions for data stability. These checks are PULSE_WIDTH and DATA_STABILITY, respectively.
3. Use metastability-aware simulation on the entire design with an existing simulation testbench to insert random metastability into data and control crossings. Run the simulation against automatically generated checkers to prove that the design is not sensitive to metastability.
After these three steps are carried out, designers can be fully confident that the fast-to-slow clock domain crossings have been analyzed correctly.
Modern CDC tools, such as Meridian from Real Intent, provide a mix of approaches for sign-off of clock domain crossing analysis. Three techniques were discussed progressively for designs where fast-to-slow clocks are used. Structural checks can be run quickly on the whole design, even for large designs. Once a design has passed structural analysis, formal checks of specific crossings can be done locally. This is required for all fast-to-slow clock transitions of control signals, whether on the feed-forward circuit or on the feedback circuit, depending on which has the fast-to-slow transition. Finally, simulation testbenches can be augmented with random metastability injection and checkers to verify that the design is tolerant of metastability.
CDC Verification of Fast-to-Slow Clocks – Part 2: Formal Checks
Dr. Roger B. Hughes, Director of Strategic Accounts
We continue the short blog series that addresses the issue of doing clock domain crossing analysis where the clocks differ in frequency. In Part 1, we ended the discussion noting that when there is a fast-to-slow transition, there is a possibility that a short duration control pulse may be completely missed by the receive domain and a formal analysis is required to discover if this is a potential problem. We will look at how formal analysis can verify this kind of transition.
A formal check is also required on a slow-to-fast data crossing with feedback. In such a circuit, shown in Figure 4, the acknowledge signal coming from the receiving fast-clock domain back to the transmitting slow-clock domain also requires a formal Pulse Width check. The control pulse (request) goes from slow to fast and does not need a pulse-width check, but the acknowledge (the feedback circuit) goes from a fast clock to a slow clock. For the acknowledge to be properly captured, its pulse, transmitted from the receiving side, must be sufficiently wide to be sampled by the slower clock of the transmitting side’s flops. Failure to check this condition is the reason many request/acknowledge circuits do not work as expected. Note that the feedback path of a fast-to-slow crossing operates in slow-to-fast mode, so the acknowledge signal in that circuit does not need a pulse-width check. In short, all fast-to-slow control signal transitions, whether connected in a feed-forward or a feedback manner, must be formally pulse-width checked to ensure the integrity of the control aspect of the clock domain crossing.
Figure 4 – Slow-to-Fast Clock Crossing with Feedback (red flops are slow clock, blue flops are fast clock).
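As a concrete reference point, a bare-bones version of the Figure 4 handshake might look like the sketch below. Signal names, reset style, and the single-stage acknowledge are assumptions for illustration, not details taken from the figure:

```systemverilog
// Minimal req/ack crossing sketch (names hypothetical). req goes
// slow->fast, so it needs no pulse-width check; ack returns fast->slow,
// so ack must be held wide enough for clk_slow to sample it -- the very
// condition the formal Pulse Width check proves.
module req_ack_crossing (
  input  logic clk_slow, clk_fast, rst_n,
  input  logic req_slow,         // request, launched in the slow (TX) domain
  output logic ack_seen_slow     // acknowledge as observed back in the slow domain
);
  // two-flop synchronizer: req into the fast domain
  logic [1:0] req_sync;
  always_ff @(posedge clk_fast or negedge rst_n)
    if (!rst_n) req_sync <= '0;
    else        req_sync <= {req_sync[0], req_slow};

  // fast domain acknowledges; a real design must hold/stretch ack,
  // otherwise a narrow ack can fall entirely between clk_slow edges
  logic ack_fast;
  always_ff @(posedge clk_fast or negedge rst_n)
    if (!rst_n) ack_fast <= 1'b0;
    else        ack_fast <= req_sync[1];

  // two-flop synchronizer: ack back into the slow domain
  logic [1:0] ack_sync;
  always_ff @(posedge clk_slow or negedge rst_n)
    if (!rst_n) ack_sync <= '0;
    else        ack_sync <= {ack_sync[0], ack_fast};
  assign ack_seen_slow = ack_sync[1];
endmodule
```

Here the acknowledge happens to follow the request level, so it stays wide; the failure mode described above appears when ack_fast is a short pulse instead of a level.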
To check whether the fast-to-slow clock domain crossings have control signals that can be captured by the receive domain clock, Meridian CDC offers formal analysis targeted at the asynchronous clock domain crossings of interest. The control pulse in the fast transmit domain must be of a certain minimum width in order to be captured by the slower receive domain clock. Figure 5 shows that TX CNTL must be held high for several clock periods of Clk1 for the TX domain flop value to be captured by the RX domain Sync1 flop and then passed into the RX domain Sync2 flop. A formal check called PULSE_WIDTH verifies that the transmit domain’s control pulse is long enough to be captured by the receive domain’s clock in all circumstances. This check examines all the pulse-generation logic and takes the clock frequency ratio into account during detailed formal analysis to determine pass or fail. If there is a case in which the pulse is too short, a counterexample is generated to show the circumstances under which this would occur. If PULSE_WIDTH passes, the crossing always has sufficient control pulse duration, ensuring there will not be a missed control pulse.
Figure 5 – Fast-to-slow clock domain crossing with sufficient pulse length. Here the TX CNTL pulse is held high for a sufficient number of TX Clk1 periods so that an edge of RX Clk2 is able to sample the value on the TX CNTL flop into the RX CNTL Sync1 flop, which then can pass the value to RX CNTL Sync2. This can be formally proven using the PULSE_WIDTH check of Meridian CDC.
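The intent of the PULSE_WIDTH check can be approximated in hand-written SystemVerilog Assertions. This is not Meridian’s implementation — the tool derives the required width formally from the clock ratio and pulse-generation logic, whereas here N_MIN is simply assumed:

```systemverilog
// Hand-written approximation of a minimum-pulse-width requirement
// (illustrative only). N_MIN is an assumed value; a real check computes
// it from the Clk1/Clk2 frequency ratio and the synchronizer depth.
localparam int N_MIN = 3;

property p_tx_cntl_min_width;
  // once TX CNTL rises, it must stay high for at least N_MIN Clk1 cycles
  @(posedge clk1) $rose(tx_cntl) |-> tx_cntl [*N_MIN];
endproperty

assert property (p_tx_cntl_min_width)
  else $error("TX CNTL pulse narrower than %0d Clk1 cycles", N_MIN);
```

A formal tool proving this property exhaustively is what turns the timing-diagram argument of Figure 5 into a guarantee.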
There also needs to be a check on the data path. If the data is not held stable for a long enough period, it might be missed in the receive domain. For example, suppose the transmit domain has the data sequence <D0><D1><D2><D3>, and so on. This sequence of data changes is shown in Figure 6. If the data changes before the control signal has been passed to the receive domain, the receive domain might miss some data and end up with <D0><D1><D3>… because <D2> was never correctly transmitted. An additional formal check in Meridian CDC called DATA_STABILITY ensures that the data transitions at a slow enough rate to be captured in the receive domain. Only a formal check using full sequential analysis of behavior can do this correctly.
Figure 6 – Fast-to-slow clock domain crossing with data instability. It is important that the data is held stable long enough to be captured by the receive clock RX Clk2. If the data changes too quickly, an element of the data will be missing in the receive clock domain, as shown with D2 missing from the data stream in the receiving clock domain. This can be formally proven using the DATA_STABILITY check in Meridian CDC as part of the formal analysis.
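A hand-written stability assertion conveys the same intent in miniature. This is not Meridian’s DATA_STABILITY check, which performs full sequential analysis rather than using a fixed window; N_HOLD is an assumed hold time for illustration:

```systemverilog
// Illustrative data-stability assertion (not the DATA_STABILITY check).
// N_HOLD is an assumed number of Clk1 cycles for which the data must
// remain unchanged after any transition, long enough for the control
// handshake to reach the receive domain.
localparam int N_HOLD = 4;

property p_tx_data_stable;
  // after tx_data changes, it must hold that value for N_HOLD cycles
  @(posedge clk1) !$stable(tx_data) |=> $stable(tx_data) [*N_HOLD];
endproperty

assert property (p_tx_data_stable)
  else $error("tx_data changed again within %0d Clk1 cycles", N_HOLD);
```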
For all the formal checks, additional information is required beyond what structural checks need. In structural checking, the environment can be captured automatically, with very little user input required for resets and clocks. In contrast, formal checks require that all reset signals be accurately associated with their corresponding clocks, and all clock frequencies must be specified for the appropriate checks to be made on the fast-to-slow clock domain crossings. Setup for formal analysis is therefore more involved and requires detailed design knowledge of all clock frequencies and their combinations. For example, multi-mode designs also need the mode-selection signals to be specified by the user. Where appropriate, automatic multi-frequency analysis can also be handled by the formal engines within Meridian CDC.
In Part Three, the conclusion for this series, we will discuss doing metastability-aware simulation on the whole design.
CDC Verification of Fast-to-Slow Clocks – Part 1: Structural Checks
Dr. Roger B. Hughes, Director of Strategic Accounts
This is a reprise of a short blog series that addresses the issue of doing clock domain crossing analysis where the clocks differ in frequency, and the use of three different techniques for a complete analysis.
CDC checking of any asynchronous clock domain crossing requires that the data path and the control path be identified, and that the data flow into the receive clock domain be controlled by a multiplexer whose select line is fed by a correctly synchronized control signal. Meridian CDC will always identify all the data and associated control paths in a design and will ensure that the control signals passing from a transmit clock domain to an asynchronous receive clock domain are correctly synchronized. There are three separate techniques used within Meridian CDC: structural checking, formal checks, and simulation-based injected-metastability checks.
The structural checking approach does not care whether the asynchronous transitions are slow-to-fast or fast-to-slow. It ensures that all the transitions are correctly synchronized in terms of having the appropriate synchronizer flops. From a structural perspective, the entire design can be checked in one run and all the clock domain transitions checked for correctness. Let’s look at an example CDC in Figure 1, with transmit clock Clk1 on the left (orange flops) and the receiving clock Clk2 on the right (blue flops).
Figure 1 – A Typical Synchronized Control and Data Clock Domain Crossing.
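A minimal RTL rendering of the Figure 1 structure — a two-flop control synchronizer gating a data mux — might look like the following sketch. The module and signal names are illustrative, not taken from the figure:

```systemverilog
// Sketch of a mux-recirculation synchronizer: the control signal is
// passed through two receive-clock flops, and the synchronized control
// gates the (quasi-static) data into the receive domain.
module cdc_mux_sync #(parameter W = 8) (
  input  logic         clk2,      // receive-domain clock
  input  logic         tx_cntl,   // control from the Clk1 (TX) domain
  input  logic [W-1:0] tx_data,   // data held stable in the TX domain
  output logic [W-1:0] rx_data
);
  logic sync1, sync2;
  always_ff @(posedge clk2) begin
    sync1 <= tx_cntl;   // first stage: may go metastable
    sync2 <= sync1;     // second stage: gives it a cycle to settle
  end
  // data is sampled only when the synchronized control enables it;
  // otherwise rx_data recirculates its current value
  always_ff @(posedge clk2)
    if (sync2) rx_data <= tx_data;
endmodule
```

Structural checking verifies that every crossing in the design has this kind of shape; whether the control pulse and data are actually held long enough is the job of the formal checks discussed in Part 2.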
It can be seen in Figure 2, where positive-going clock edges are shown by the vertical lines, that all will work well for a slow-to-fast clock transition. Because Clk2 is faster than the transmit domain clock Clk1, any change of a control signal in the slow domain will always be captured by one of the edges of the receive domain clock, Clk2, before Clk1 causes the control signal to be released. Also, for slow-to-fast transitions the data will typically be stable long enough to be captured and transmitted through to the receive domain. In Figure 2, the possible Clk2 edges that could capture the TX CNTL signal in the RX CNTL Sync2 flop are shown with dashed vertical lines.
Figure 2 – Slow-to-Fast Clock Crossing. There are many possible clock edges, shown dashed, of RX Clk2 that can sample the value held in the TX CNTL flop. The dot-dash edge is the first possible transition into Sync1, dashed are transitions into Sync2. There is no issue with TX CNTL period as long as the signal is sampled by one clock edge of TX Clk1.
However, the situation is different in a fast-to-slow clock domain crossing. When there is a fast-to-slow transition, there is a possibility that a short duration control pulse may be completely missed by the receive domain. To address this concern, and others that cannot be addressed by a purely structural CDC check, formal analysis is required.
Next time we will look at how formal analysis can verify this kind of transition.
With acquisitions, customers get nervous, and for good reason: the support and responsiveness they get changes. Five respondents said they were considering replacing SpyGlass with Real Intent. One user reported the following conversation:
“Your SpyGlass customer support won’t change as a result of the SNPS acquisition.” They actually said that to me with a straight face.
The article also reported a customer evaluation of our Ascent Lint and Meridian CDC (clock-domain crossing) tools. Here is a quick snippet:
Wanted to check whether we could identify design bugs with Real Intent that were being missed before.
Started with Ascent Lint on one of our small a8051 microcontroller IP blocks, a pure Verilog design 17K gates in size.
From the start, we noticed that Ascent Lint was very easy to run.
Within minutes of getting it installed, we were able to run the tool on our a8051 with practically no time spent on setup.
We caught a few issues even on it. One FSM was missing a “default” statement. The designer missed this because of an involved mix of pragmas and `defines in the RTL code. Ascent Lint ran in under a minute to identify the needed RTL fix. There were a few other minor FSM-related issues identified; we passed those on to the a8051 design team.
Next we ran their Meridian CDC clock-domain analysis tool on our Ethernet MAC IP, a 40K gate Verilog design which had 4 different clock domains.
…When running Meridian to catch CDC problems, it was correctly able to identify our CNTL synchronizer and our other synchronizer structures automatically.
…The Meridian reports provided appropriate details of the violations without bombarding us with too much information.
After the end of our eval, we decided to start using both Real Intent tools on our next IP development project. We also plan to recommend Ascent Lint and Meridian CDC tools to our consulting clients.
X-optimism occurs when an unknown X value is incorrectly resolved to a known value in RTL simulation. Optimism issues can be difficult to detect and debug because the X is no longer visible once the optimism occurs, and the resulting functional issue may not show up at an output until many clock cycles later. X-optimism issues also show up in gate-level netlists and FPGA-based prototypes, but debug there is harder: FPGAs offer limited visibility, making the search for an X-optimism bug like looking for a needle in a haystack, and post-synthesis netlists are less familiar to the designer. In netlist simulations the design hierarchy is flattened and signal names are changed, and there is a danger that the X under consideration will be mistaken for a pessimistic node and forced to a known value that hides a functional issue.
Real Intent’s Ascent XV uses static analysis to identify potential X-optimism issues at RTL so they can be fixed prior to simulation, ensuring efficient and accurate simulations. Fixing optimism issues in RTL gets netlist simulations and FPGA-based prototypes up and running faster and reduces costly iterations.
X’s can cause X-optimism in RTL simulations. X-optimism occurs when an unknown value is simulated as though it were a known value in hardware. Consider the example shown in Figure 1 below. If the “input” signal is an X value, “input” could be either a 0 or a 1 in real hardware (because real hardware cannot have an X value), so in real hardware signal “D” might also be a 0 or a 1. In simulation, however, the output “D” would always show as a 1 value. It is called “optimism” because the unknown was resolved to a known value. This can cause functional bugs to be missed in RTL simulations, even though in the netlist the X would be properly propagated.
Figure 1. X-optimism Example.
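In RTL, the classic construct behind this behavior is an if whose condition goes to X. The generic fragment below does not reproduce the exact circuit of Figure 1; the signal names are invented:

```systemverilog
// X-optimism in a nutshell: Verilog's "if" treats an X condition as
// false, so simulation deterministically takes the else branch even
// though real hardware could resolve the select either way.
always_comb begin
  if (sel)        // sel === 1'bx behaves exactly like sel == 0 here
    y = a;
  else
    y = b;        // simulated result whenever sel is 0 *or* X
end
```

By contrast, the ternary form `y = sel ? a : b;` is slightly less optimistic: with an X select it produces X on any bit where a and b disagree, keeping the unknown visible.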
Ascent XV-RTL Optimism Static Analysis
Ascent XV provides a fully static solution that is used prior to RTL simulation to ensure that simulation is free of X-optimism issues that cause inaccurate results. It can also be used to monitor or model potential optimism points in simulation should you choose not to fix the issues. The advantage of the Ascent XV approach is that static analysis identifies the select few constructs that need to be monitored or modeled for X-accuracy, rather than all constructs in the design. Figure 2 below shows the flow, with the tasks done by Ascent XV on the left, outputs in the middle, and user actions on the right.
Figure 2. Ascent XV Optimism Flow.
The first step of the flow is to read in the design and specify the clocks and resets. Resets and clocks are used to determine X-sources arising from uninitialized flops and latches, and Ascent XV will automatically generate this design-specific specification. Ascent XV supports complex reset scenarios, such as phased resets, and clocks can also be configured for a delayed start. Support of complex reset scenarios is key to an accurate reset analysis.
To analyze X-optimism, the first step is to identify all X-sources. Next, trace where those X-sources can propagate and cause X-optimism issues. The third step is reporting the results completely and concisely so the issues can be fixed. Once the analysis is complete, you can also choose to use SimPortal.
Minimizing X-sources is key to eliminating X-optimism issues. Structural analysis ensures that all potential sources of X’s are included. The built-in formal analysis minimizes noise and ensures accuracy. X-sources identified by structural analysis include such things as unconnected module input ports, bus contention, operations with an unknown result like reading from a non-existent RAM address, and out of range bit selects and array indices. Formal analysis is then used to identify uninitialized flops and latches in the design, taking into account the design resets and the propagation of known values. Reset analysis minimizes noise by accurately modeling what happens in real hardware, thereby minimizing uninitialized flops and latches as X-sources.
X-propagation analysis traces where the X-sources can propagate. Ascent XV will trace the propagation of each X-source through the design, noting where it can cause X-optimism and what signals might exhibit optimistic values.
Ascent XV’s reporting of X-optimism is designed to be concise and to guide the user through understanding first where the X-sources in the design are, and secondly where they can propagate. This enables first eliminating as many X-sources as possible, and then managing the remaining X-propagation issues. Also, Ascent XV has automatic waivers that reduce noise.
The first section of the report shows the results of the X-source analysis, and for each X-source a summary of the propagation analysis. The X-source section identifies the type of X-sources. X-sources are prioritized and sometimes automatically waived based on the propagation analysis.
A VCD waveform trace can be viewed that shows the initialization process. This can be very useful for determining why initialization did not occur.
Once as many X-sources are eliminated as is practical, review of the X-propagation analysis section is needed. This section shows where X-optimism can potentially occur so it can be addressed either via X-accurate coding or SimPortal monitoring and modeling. Constructs with X-accurate coding can be automatically waived to minimize unnecessary analysis. The reporting of X-propagation groups both the signals that might have X-optimistic values together with the control signals to which an X can propagate and cause the optimism. Only those constructs to which an X can propagate are analyzed, and only the control signals to which an X can propagate are listed. The presentation of information helps to determine whether it is best to code for X-accuracy or simply monitor.
A third section of the report provides reset analysis information, indicating how initialized flops became initialized, either from an asynchronous reset, a synchronous reset, or via data propagation of known values. The reset section also shows the initialized value and time of initialization based on the clock specifications.
Once the X-optimism analysis is complete, Ascent XV can generate SimPortal files to address unresolved issues. SimPortal files are side files that are integrated into an existing simulation environment. Ascent XV’s SimPortal simulation add-ons allow for selective dynamic monitoring and/or automatic X-accurate correction during RTL simulation. The most common example is to monitor whether inputs to the device have X’s, and also to report when an output has an X. Another type of checker will check that X’s do not occur on clocks and resets, as this can also cause optimism.
Ascent XV-RTL Optimism uses static analysis to eliminate X-optimism issues before you get to simulation, so simulations can run faster with X-accuracy. Its hardware-accurate reset analysis uncovers where X’s exist after initialization and ensures accurate analysis of potential X-propagation. Noise-reduction techniques refined over five years of usage result in precise, compact, and non-redundant reporting of potential X-optimism. The prioritization of X’s that need to be eliminated, reset optimization to ensure that no uninitialized flops can drive control logic, and root-cause analysis of each optimism point together streamline debug, eliminating potential X-optimism issues before the RTL is handed off.
Ascent XV-RTL Optimism has proven to catch missed X-optimism issues in real designs, and is much more efficient than a later debug of issues in hardware or in netlist simulations. Ascent XV can be used to eliminate X-sources in the design, identify where those X’s create an optimism risk, and can correct pessimism at the netlist. Ascent XV is a total X-Verification solution that leverages static analysis to ensure efficient and accurate simulations.
Graham Bell, Vice President of Marketing at Real Intent
Just in case you have never read a Presidential proclamation, here is the text for Thanksgiving Day, 2015. I learned something when I read it. Following this are two political cartoons for your amusement. Happy Thanksgiving to All!
THANKSGIVING DAY, 2015
– – – – – – –
BY THE PRESIDENT OF THE UNITED STATES OF AMERICA A PROCLAMATION
Rooted in a story of generosity and partnership, Thanksgiving offers an opportunity for us to express our gratitude for the gifts we have and to show our appreciation for all we hold dear. Today, as we give of ourselves in service to others and spend cherished time with family and friends, we give thanks for the many blessings bestowed upon us. We also honor the men and women in uniform who fight to safeguard our country and our freedoms so we can share occasions like this with loved ones, and we thank our selfless military families who stand beside and support them each and every day.
Our modern celebration of Thanksgiving can be traced back to the early 17th century. Upon arriving in Plymouth, at the culmination of months of testing travel that resulted in death and disease, the Pilgrims continued to face great challenges. An indigenous people, the Wampanoag, helped them adjust to their new home, teaching them critical survival techniques and important crop cultivation methods. After securing a bountiful harvest, the settlers and Wampanoag joined in fellowship for a shared dinner to celebrate powerful traditions that are still observed at Thanksgiving today: lifting one another up, enjoying time with those around us, and appreciating all that we have.
Carrying us through trial and triumph, this sense of decency and compassion has defined our Nation. President George Washington proclaimed the first Thanksgiving in our country’s nascence, calling on the citizens of our fledgling democracy to place their faith in “the providence of Almighty God,” and to be thankful for what is bequeathed to us. In the midst of bitter division at a critical juncture for America, President Abraham Lincoln acknowledged the plight of the most vulnerable, declaring a “day of thanksgiving,” on which all citizens would “commend to [God’s] tender care” those most affected by the violence of the time — widows, orphans, mourners, and sufferers of the Civil War. A tradition of giving continues to inspire this holiday, and at shelters and food centers, on battlefields and city streets, and through generous donations and silent prayers, the inherent selflessness and common goodness of the American people endures.
In the same spirit of togetherness and thanksgiving that inspired the Pilgrims and the Wampanoag, we pay tribute to people of every background and belief who contribute in their own unique ways to our country’s story. Each of us brings our own traditions, cultures, and recipes to this quintessential American holiday — whether around dinner tables, in soup kitchens, or at home cheering on our favorite sports teams — but we are all united in appreciation of the bounty of our Nation. Let us express our gratitude by welcoming others to our celebrations and recognize those who volunteer today to ensure a dinner is possible for those who might have gone without. Together, we can secure our founding ideals as the birthright of all future generations of Americans.
NOW, THEREFORE, I, BARACK OBAMA, President of the United States of America, by virtue of the authority vested in me by the Constitution and the laws of the United States, do hereby proclaim November 26, 2015, as a National Day of Thanksgiving. I encourage the people of the United States to join together — whether in our homes, places of worship, community centers, or any place of fellowship for friends and neighbors — and give thanks for all we have received in the past year, express appreciation to those whose lives enrich our own, and share our bounty with others.
IN WITNESS WHEREOF, I have hereunto set my hand this twentieth day of November, in the year of our Lord two thousand fifteen, and of the Independence of the United States of America the two hundred and fortieth.
Video: “Why A New Gate-level CDC Verification Solution?”
Graham Bell, Vice President of Marketing at Real Intent
Recently, I interviewed Vikas Sachdeva, Sr. Technical Marketing Manager at Real Intent, in which we discuss why gate-level CDC verification is necessary, what some of the possible failure modes are, and why Meridian Physical CDC is the right tool for gate-level sign-off. You can see the video below. You can also find more information about Meridian Physical CDC here.
A few weeks ago I attended the “10 Years of IEEE 1800™ SystemVerilog Celebration” lunch at an IEEE Standard Association symposium. One of the Verilog/SystemVerilog world’s luminaries sat next to me, and he started talking to other luminaries about how his son, as part of a general engineering degree, was using SystemVerilog.
I had to ask: “With more of a software background, what’s his reaction to SystemVerilog? It must seem like a godawful mess.”
He said, “He used those same words.”
Several months ago, I wondered whether SystemVerilog was the most complex computer language yet invented, and I found this page on StackOverflow. The number of keywords may not be the best metric of language complexity, but it is simple and easy to calculate. According to this answer, COBOL (the Common Business-Oriented Language invented in 1959) has 357. SystemVerilog has 323. C#, Microsoft’s answer to C++ and Java, is a distant third with 102. If this answer is complete, nothing competes with COBOL and SystemVerilog.
I didn’t even realize COBOL is still very much used, but in financial software, it is. Learn a little bit about it from Wikipedia, and you see that it is crammed full of miscellaneous features, such as report writing features. In the Criticism section of the Wikipedia page for COBOL, there is this line: “No academic computer scientists participated in the design of COBOL; all of those on the committee came from commerce or government.” All this sounds familiar, doesn’t it?
I understand how SystemVerilog came to be, as I was working on Mentor ModelSim/Questa at the time when we beat Synopsys VCS to the first commercial SystemVerilog release. It was Synopsys’ bid to replace “e”, and ModelSim’s bid to re-brand itself as a verification tool. Cadence followed. SystemVerilog is really a number of completely different languages in one.
Now working on the front-end of the toolset of a mid-tier EDA company, one of the few left these days, I have a different take on SystemVerilog. First, we mostly care about the synthesizable or design subset of the language. It’s not even clear what that is; it keeps changing as the synthesis tools get updated, and I think it may even be significantly different between ASICs and FPGAs (though I don’t as yet have any solid evidence for that observation). While we are eternally grateful for the existence of Verific parser platforms as the bedrock of our front-end, we do our own netlist elaboration or construction later in the flow. That means we have to contend with the combinatoric complexities of the language: the many different ways in which features may be combined.
Having been previously aware of the test bench part of SystemVerilog, the design side of the language continues to surprise me. Do we really need elaborate regular-expression matching in case statements? Did you know that a use model of storing pre-compiled library elements on disk, which everyone knows is the traditional ModelSim compile model, is actually enshrined in the standard “for clarification purposes”? And what about interfaces? They don’t allow you to do anything you fundamentally couldn’t do before, but they do make the code more confusing by guaranteeing that you can no longer look at a module by itself and know what its I/O is. Now you have to look in four places: the module definition, the module instantiation, the interface definition, and the interface instantiation.
I have to spend a few moments on interfaces, as they came up during a hallway discussion at the IEEE symposium. An interface is a so-called “wire bundle”, an encapsulation of interconnect. Optionally, it may have modports, which can make some of the wires or variables inaccessible, or restrict them to read or write access. You can put modports in a generate statement, creating a sort of macro expansion of them. I defy anyone to point to anything in the LRM that describes how to use modports inside generates. Declare them, yes; use them, not really. In versions we had from last year, one simulator just generated a lot of errors, another said modports inside generates were not supported (the right answer, I think), and another bravely tried to implement them — except that I never could figure out how to connect them. Verific has perhaps the best answer, which is using a so-called generic interface reference, in other words, an interface connection that is bizarrely completely un-typed, allowing any kind of interface to be connected, as long as the names elaborate correctly. As language design, this is weird.
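For readers who have not met the construct, here is a minimal interface with modports (the names are invented for illustration). Note that, as argued above, the module headers alone no longer reveal the I/O:

```systemverilog
// A small interface: the bus wires live in one place, and the modport
// views restrict direction for each side.
interface simple_bus_if;
  logic       valid;
  logic [7:0] data;
  modport producer (output valid, output data);
  modport consumer (input  valid, input  data);
endinterface

module sender   (simple_bus_if.producer bus);  // drives valid/data
endmodule
module receiver (simple_bus_if.consumer bus);  // samples valid/data
endmodule

// To know sender's actual I/O you must read four places: both module
// definitions above, plus the interface definition and these
// instantiations.
module top;
  simple_bus_if bus();
  sender   u_tx (.bus(bus.producer));
  receiver u_rx (.bus(bus.consumer));
endmodule
```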
When I brought up interfaces, the hallway discussion covered more-or-less the following points. Isn’t an interface just a struct? They aren’t really useful. Virtual interfaces are useful. Interfaces would be useful if you could derive from them like a class. Does anyone really use interfaces? Bear in mind these are people who have served on the committee. I had to point out that the most popular synthesis tool partially supports them. We have at least two customers using them extensively. I was gratified that one of the SystemVerilog luminaries agreed that the LRM was “specification by example,” and he told of a very recent customer phone call discussing an error in the interfaces chapter.
You as a user can choose what to use in SystemVerilog; we as a vendor cannot. In some cases, for example the unique/priority keywords on ifs and cases, for which I implemented Ascent IIV formal checks last year, the language helps by standardizing something previously proprietary. In other cases, where the language offers new features for creating structure, it merely adds to the combinatoric complexity of building a design correctly. If there is innovation to be had in EDA, I have the feeling that won’t be where it lies. It will lie, perhaps, in something like the X-optimism and X-pessimism features of Real Intent Ascent XV — but the complexities of the front-end act as a tax on that effort.
Google Designing Its Own Next-Generation Smartphone SoCs?
Graham Bell, Vice President of Marketing at Real Intent
Courtesy Ron Amadeo and Intel
Google is starting to push to have more say in the design and architecture of the chips that run the Android system in smartphones. It is also apparently making major investments in virtual reality, where some of the chip design effort is expected, and is hiring staff from major SoC companies.
Ron Amadeo from the tech publication Ars Technica has published the following online report: According to a pair of reports from The Information (subscription required), Google has big ambitions for the inside of Android phones. The report says the search giant has sent a long list of requests to chip manufacturers for future SoC designs and that Google is even planning to build its own processors.
The report says that during discussions that happened this fall, “Google representatives put forward designs of chips it was interested in co-developing, including a phone’s main processor.” The new chips are reportedly needed for future Android features that Google hopes to release “in the next few years.” By designing its own chips, Google can make sure the right amount of horsepower gets assigned to all the right places and remove bottlenecks that would slow down these new features.
The report specifically calls out “virtual and augmented reality” as use cases for the new chips. Publicly, only Google Cardboard has surfaced from Google’s VR initiative, but internally, it seems like the company is gearing up for a huge VR push. Some of Google’s biggest names have left their posts on flagship products to go work on the virtual reality team: Jon Wiley, the lead designer of Google Search, and Alex Faaborg, the former lead designer for Firefox, Google Now, and Android Wear. An earlier report from The Wall Street Journal claimed Google was building a version of Android that would become a virtual reality operating system.
Read the rest of Ron Amadeo’s article here and learn who Google is hiring.