No Room for Error: Creating Highly Reliable, High-Availability FPGA Designs


White Paper
April 2012
Author: Angela Sutton, Staff Product Marketing Manager, Synopsys, Inc.

It comes as no surprise that the designers of FPGAs for military and aerospace applications are interested in increasing the reliability and availability of their designs. This is, of course, particularly true in the case of mission-critical and safety-critical electronic systems. But the need for high-reliability and high-availability electronic systems has expanded beyond traditional military and aerospace applications. Today, this growing list includes communications infrastructure systems, medical intensive care and life-support systems (such as heart-lung machines, mechanical ventilators, infusion pumps, radiation therapy machines and robotic surgery machines), nuclear reactor and other power station control systems, transportation signaling and control systems, amusement ride control systems, and so on.

How can designers maintain high standards and ensure success for these types of demanding designs? In this paper we review the definitions of key concepts: mission critical, safety critical, high reliability and high availability. We then consider the various elements associated with the creation of high-reliability and high-availability FPGA designs.

Key Concepts

Mission-Critical: A mission-critical design refers to those portions of a system that are absolutely necessary. The concept originates from NASA, where mission-critical elements were those items that had to work or a billion-dollar space mission would be lost. Mission-critical systems must be able to handle peak loads, scale on demand and always maintain sufficient functionality to complete the mission.

Safety-Critical: A safety-critical or life-critical system is one whose failure or malfunction may result in death or serious injury to people, loss of or severe damage to equipment, or damage to the environment. The main objective of safety-critical design is to prevent the system from responding to a fault with wrong conclusions or wrong outputs. If a fault is severe enough to cause a system failure, the system must fail gracefully, without generating bad data or inappropriate outputs. For many safety-critical systems, such as medical infusion pumps and cancer irradiation systems, the safe state upon detection of a failure is to immediately stop and turn the system off. A safety-critical system is one that has been designed to lose less than one life per billion hours of operation.

High-Reliability: In the context of an electronic system, the term reliability refers to the ability of a system or component to perform its required function(s) under stated conditions for a specified period of time. This is often expressed as a probability. A high-reliability system is one that will remain functional for a longer period of time, even under adverse conditions. Some reliability regimes for mission-critical and safety-critical systems are as follows:

- Fail-Operational systems continue to operate when their control systems fail; for example, electronically controlled car doors that can be unlocked even if the locking control mechanism fails.
- Fail-Safe systems automatically become safe when they can no longer operate. Many medical systems fall into this category, such as x-ray machines, which switch off when an error is detected.
- Fail-Secure systems maintain maximum security when they can no longer operate; while fail-safe electronic doors unlock during power failures, their fail-secure counterparts lock. For example, a bank's safe will automatically go into lockdown when the power goes out.
- Fail-Passive systems continue to operate in the event of a system failure. In the case of a failure in an aircraft's autopilot, for example, the aircraft should remain in a state that can be controlled by the pilot.
- Fault-Tolerant systems avoid service failure when faults are introduced into the system. The normal method of tolerating faults is to continually self-test the parts of a system and to switch in duplicate redundant backup circuitry, called hot spares, for failing subsystems.

High-Availability: Users want their electronic systems to be ready to serve them at all times. The term availability refers to the ability of the user community to access the system; if a user cannot access the system, it is said to be unavailable. The term downtime refers to periods when a system is unavailable for use. Availability is usually expressed as a percentage of uptime over some specified duration. Table 1 translates a given availability percentage into the corresponding amount of time a system would be unavailable per week, month or year.

Availability (%)        Downtime per week   Downtime per month*   Downtime per year
90% (one nine)          16.8 hours          72 hours              36.5 days
99% (two nines)         1.68 hours          7.2 hours             3.65 days
99.9% (three nines)     10.1 minutes        43.2 minutes          8.76 hours
99.99% (four nines)     1.01 minutes        4.32 minutes          52.56 minutes
99.999% (five nines)    6.05 seconds        25.9 seconds          5.26 minutes
99.9999% (six nines)    0.605 seconds       2.59 seconds          31.5 seconds

*A 30-day month is assumed for monthly calculations.
Table 1: Availability (as a percentage) versus downtime
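The downtime figures in Table 1 follow directly from the relationship downtime = (1 - availability) x period. A minimal sketch of that arithmetic, assuming the same 7-day week, 30-day month and 365-day year used in the table (output in hours for simplicity):

```cpp
#include <cstdio>

int main() {
    // Availability percentages from Table 1 (one nine through six nines).
    const double availabilities[] = {90.0, 99.0, 99.9, 99.99, 99.999, 99.9999};

    // Period lengths in hours; a 30-day month and a 365-day year are assumed,
    // matching the note under Table 1.
    const double week_h  = 7.0   * 24.0;
    const double month_h = 30.0  * 24.0;
    const double year_h  = 365.0 * 24.0;

    std::printf("%-12s %16s %16s %16s\n",
                "Avail (%)", "down/week (h)", "down/month (h)", "down/year (h)");
    for (double a : availabilities) {
        const double down_fraction = 1.0 - a / 100.0;  // fraction of time unavailable
        std::printf("%-12.4f %16.4f %16.4f %16.4f\n",
                    a, down_fraction * week_h, down_fraction * month_h,
                    down_fraction * year_h);
    }
    return 0;
}
```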

Key Elements of an FPGA Design and Verification Flow

In this section we briefly consider the various elements associated with an FPGA design specification, creation and verification flow in the context of creating high-reliability and high-availability designs. These elements are depicted in Figure 1 and are explored in more detail throughout this paper, with particular emphasis on designs intended for mission-critical and safety-critical applications.

Figure 1: Elements of an FPGA design and verification flow, spanning requirements specification, engineering/architectural specification, virtual prototyping, algorithmic exploration, high-level synthesis, RTL capture, IP selection, state machines, synthesis/optimization, simulation and formal verification, underpinned by methodologies, processes and standards; low-power design; distributed design; and traceability, repeatability and design management.

Methodologies, Processes and Standards

A key element in creating high-reliability and high-availability designs is to adopt standards such as the ISO 9001 quality management standard. It is also vital to define internal methodologies and processes that meet DO-254 (and other safety-critical) certification needs. The DO-254 standard was originally intended to provide a way to deliver safe and reliable designs for airborne systems; it was subsequently adopted by the creators of a variety of other high-reliability and high-availability electronic systems. In Europe, industrial automation equipment manufacturers are required to develop their safety-critical designs according to the applicable ISO and IEC safety standards. These standards are based upon the generic IEC 61508 standard, which defines requirements for the development of safety products using FPGAs. In order to meet these standards, designers of safety-critical systems must validate the software, every component and all of the development tools used in the design.

Requirements Specification

The first step in the process of developing a new design is to capture the requirements for that design. This may be thought of as the "what" (what we want) rather than the "how" (how we are going to achieve it). At the time of writing, a requirements specification is typically captured and presented only in a human-readable form such as a written document. In some cases, this document is created by an external body in the form of a request for proposal (RFP).

In conventional design environments, the requirements specification is largely divorced from the remainder of the process. This can lead to problems such as the final product not fully addressing all of the requirements. In the case of high-reliability and high-availability designs, it is necessary to provide some mechanism for the requirements to be captured in a machine-readable form, perhaps as line items in a database, and for downstream specification and implementation details to be tied back to their associated requirements. This helps to ensure that each requirement has been fully addressed and that no requirement falls through the cracks.
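To make the idea of a machine-readable requirement concrete, here is a minimal sketch of a requirement record tied to downstream specification items, with a check that flags any requirement nothing has yet been tied back to. The record fields and IDs are hypothetical, not the schema of any particular requirements-management tool.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Hypothetical machine-readable requirement record; the field names are
// illustrative only, not a specific requirements-management schema.
struct Requirement {
    std::string id;                       // e.g. "REQ-042"
    std::string text;                     // the "what", in plain language
    std::vector<std::string> spec_items;  // downstream eng/arch spec items addressing it
};

// Flag any requirement that no specification item has been tied back to.
void report_uncovered(const std::vector<Requirement>& reqs) {
    for (const auto& r : reqs)
        if (r.spec_items.empty())
            std::cout << r.id << " has no linked specification item\n";
}

int main() {
    std::vector<Requirement> reqs = {
        {"REQ-001", "Channels A and B shall have no single point of failure", {"SPEC-7"}},
        {"REQ-002", "The system shall enter a safe state within 10 ms of fault detection", {}},
    };
    report_uncovered(reqs);  // prints REQ-002, the requirement that fell through the cracks
}
```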

Engineering and Architectural Specification

The next step in the process is to define the architecture of the system along with the detailed engineering specification for the design. This step includes decisions on how to partition the system into its hardware and software components. It also includes specifying the desired failure modes (fail-operational, fail-safe, fail-secure, fail-passive) and considering any special test logic that may be required to detect and diagnose failures once the system has been deployed in the field.

In some cases it may involve defining the architecture of the system in such a way as to avoid a single point of failure. If a system requires two data channels, for example, implementing both channels in a single FPGA makes that FPGA a single point of failure for both channels. By comparison, splitting the functionality across multiple FPGAs means that at least one channel will remain alive if a device fails.

The creation and capture of the engineering and architectural specification is the result of expert designers and system architects making educated guesses. The process typically involves using whiteboards and spreadsheets and may be assisted by transaction-level system simulation, which is described in the Architecture Exploration and Performance Analysis section below. Today, the engineering and architectural specification is typically captured and presented only in a human-readable form such as Word documents and Excel spreadsheets. In conventional design environments, this specification is not necessarily directly tied to the original requirements specification or the downstream implementation. In the case of high-reliability and high-availability designs, it is necessary to provide some mechanism for the engineering and architectural specification to be captured in a machine-readable form such that it can be tied to the original upstream requirements and also to the downstream implementation.

Architecture Exploration and Performance Analysis

There is currently tremendous growth in the development of systems that involve multiple processors and multiple hardware accelerators operating in closely coupled or networked topologies. In addition to tiered memory structures and multilayer bus structures, these systems, which may be executing hundreds of millions to tens of billions of instructions per second, feature extremely complex software components, and the software content is increasing almost exponentially.

One aid to the development of the most appropriate system architecture is to use a transaction-level simulation model, or virtual prototype, of the system to explore, analyze and optimize the behavior and performance of the proposed hardware architecture. To enable this, available models of the global interconnect and shared memory subsystem are typically combined with traffic generators that represent the performance workload of each application subsystem. Simulating the system and collecting analysis data enables users to estimate performance before software is available and to optimize architecture and algorithmic parameters for best results. Hardware-software performance validation can follow by replacing the traffic generators with processor models running the actual system software.
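As a toy illustration of this kind of pre-software exploration, the sketch below uses traffic generators as stand-ins for application subsystems and checks whether their combined workload fits within an assumed interconnect bandwidth. All names, rates and the bandwidth figure are invented for illustration; a real virtual prototype models transactions, arbitration and memory behavior in far more detail.

```cpp
#include <cstdio>

// Toy transaction-level sketch: traffic generators stand in for application
// subsystems so interconnect utilization can be estimated before any software
// exists. Every name and number below is an illustrative assumption.
struct TrafficGenerator {
    const char* name;
    double transactions_per_s;  // workload the subsystem places on the interconnect
    double bytes_per_txn;
};

int main() {
    const double bus_bandwidth = 800e6;  // bytes/s, assumed interconnect capacity
    const TrafficGenerator gens[] = {
        {"video_in",  30e3,  4096},
        {"dsp_accel", 120e3, 512},
        {"cpu",       500e3, 64},
    };

    double demand = 0.0;
    for (const auto& g : gens)
        demand += g.transactions_per_s * g.bytes_per_txn;

    std::printf("Aggregate demand: %.1f MB/s, utilization: %.1f%%\n",
                demand / 1e6, 100.0 * demand / bus_bandwidth);
    // Utilization near or above 100% signals that the architecture needs rework
    // before committing to RTL.
}
```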

Accurate measurement based on transaction traffic and software workloads that model real-world system behavior (performance, power consumption, etc.) allows system architects to ensure that the resulting design is reliable and meets the performance goals of the architecture specification without overdesign. It allows the architects to make well-founded hardware/software tradeoffs early in the design process, so that changes and recommendations can be made early, reducing project risk. Once an optimal architecture has been determined, the transaction-level performance model of the system can become a golden reference model against which the hardware design teams later verify the actual functionality of the hardware portions of the design.

Distributed Design

The creation of complex FPGAs may involve multiple system architects, system engineers, hardware design engineers, software developers and verification engineers. These engineers could be split into multiple teams, which may span multiple companies and/or be geographically dispersed around the world. Aside from anything else, considerations about how different portions of the design are to be partitioned across different teams may influence the engineering and architectural specification.

A key consideration is that the entire design and verification environment should be architected to facilitate highly distributed design and parallel design creation and verification, all while allowing requirements and modifications to be tracked and traced. This means, for example, ensuring that no one can modify an interface without all relevant or impacted people being informed that such a change has taken place, and recording the fact that a change has been made, who made the change and the reason the change was made. Part of this includes the ability to relate implementation decisions and details to specific items in the engineering and architectural specification. Also required is the ability to track progress and report the ongoing status of the project.

Distributed design also requires very sophisticated configuration management, including the ability to take snapshots of all portions of the design (that is, the current state of all of the hardware and software files associated with the design) and hierarchical design methodologies, along with support for revisions, versions and archiving of the design and the entire environment used to create it. This allows the process by which the design was created to become fully repeatable.

Algorithmic Exploration

With regard to design blocks that perform digital signal processing (DSP), it may be necessary to explore a variety of algorithmic approaches to determine the optimal solution that satisfies the performance and power consumption requirements defined by the overall engineering and architectural specification.

In this case, it is common to capture these portions of the design at a very high level of abstraction. This can be done using model-based design concepts or by creating plain functional C/C++/SystemC models. These high-level representations are also used to explore the effects of fixed-point quantization (a small sketch appears at the end of this section). The design environment should allow testbenches that are created to verify any high-level representations of the design to also be used throughout the remainder of the flow. This ensures that the RTL created during algorithmic exploration fully matches its algorithmic counterpart.
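As a small example of the fixed-point exploration mentioned above, the following sketch quantizes a few floating-point coefficients to a Q1.15 format and reports the resulting error. The coefficient values and the choice of format are assumptions made purely for illustration.

```cpp
#include <cmath>
#include <cstdint>
#include <cstdio>

// Minimal sketch of fixed-point quantization exploration: compare floating-
// point coefficients against a Q1.15 fixed-point version to see the error
// introduced by a chosen word length. Coefficient values are illustrative only.
static int16_t to_q15(double x)    { return static_cast<int16_t>(std::lround(x * 32768.0)); }
static double  from_q15(int16_t x) { return x / 32768.0; }

int main() {
    const double coeffs[] = {0.70710678, -0.3535, 0.1767};
    for (double c : coeffs) {
        const double quantized = from_q15(to_q15(c));
        std::printf("coeff %+.8f -> Q1.15 %+.8f (error %.2e)\n",
                    c, quantized, std::fabs(c - quantized));
    }
    // If the accumulated error violates the specification, a wider format
    // (e.g. Q2.30) can be evaluated before committing to an RTL implementation.
}
```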

High-Level Synthesis (HLS)

As discussed in the previous topic, some portions of the design may commence with representations created at a high level of abstraction. These representations are initially used to validate and fine-tune the desired behavior of the design. The next step is to select the optimal micro-architectures for these portions of the design and then progress these micro-architectures into actual implementations.

Until recently, the transition from an original high-level representation to the corresponding micro-architecture and implementation was performed by hand, which was time-consuming and prone to error. Also, due to tight development schedules, designers rarely had the luxury of experimenting with alternative micro-architecture and implementation scenarios. Instead, it was common to opt for a micro-architecture and implementation that were guaranteed to work, even if the results were less than optimal in terms of power consumption, performance and silicon area.

High-Level Synthesis (HLS) refers to the ability to take the original high-level representation and automatically synthesize it into an equivalent RTL implementation, thereby eliminating the human-induced errors associated with manual translation. The use of HLS also allows system architects and designers to experiment with a variety of alternative implementation scenarios so as to select the optimal implementation for a particular application. Furthermore, HLS allows the same original representation to be re-targeted to different implementations for different deployments. (A conceptual sketch of the kind of high-level representation an HLS flow starts from appears at the end of this section.)

Selection and Verification of Intellectual Property

Today's high-end FPGA designs can contain the equivalent of hundreds of thousands or even millions of logic gates. Creating each new design from the ground up would be extremely resource-intensive, time-consuming and error-prone. Thus, in order to manage this complexity, around 75% of a modern design may consist of intellectual property (IP) blocks. Some of these blocks may be internally generated from previous designs; others may come from third-party vendors. In fact, it is not unusual for an FPGA design to include third-party IP blocks from multiple vendors.

In some cases the IP may be delivered as human-readable RTL; in other cases it may be encrypted or obfuscated. Sometimes the IP vendor may deliver two different models: one at a high level of abstraction for use with software and one at the gate level for implementation into the design. To create high-reliability and high-availability FPGA designs, the design environment must allow selection and integration of these IP blocks. The IP blocks should also be testable and be delivered with testbenches. Even if the IP is encrypted or obfuscated, there should be visibility into key internal registers to facilitate verification and debug in the context of the entire design.
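The sketch below, referenced from the HLS discussion above, shows the kind of loop-and-array C++ description an HLS flow could start from: a simple 4-tap FIR filter. The coefficients and bit widths are assumptions, no vendor-specific pragmas are shown, and micro-architecture decisions such as pipelining or unrolling are left to the tool.

```cpp
#include <cstdint>
#include <cstdio>

// Illustrative high-level representation of a 4-tap FIR filter: plain loops
// and arrays that an HLS flow could map to RTL. Coefficients and widths are
// assumptions made for the example.
int32_t fir4(int16_t sample, int16_t shift_reg[4]) {
    static const int16_t coeff[4] = {5, -3, 7, 2};

    for (int i = 3; i > 0; --i)          // shift in the new sample
        shift_reg[i] = shift_reg[i - 1];
    shift_reg[0] = sample;

    int32_t acc = 0;                     // multiply-accumulate across the taps
    for (int i = 0; i < 4; ++i)
        acc += static_cast<int32_t>(coeff[i]) * shift_reg[i];
    return acc;
}

int main() {
    int16_t delay_line[4] = {0, 0, 0, 0};
    const int16_t input[6] = {100, 0, 0, 0, 0, 0};  // an impulse exposes the coefficients
    for (int16_t x : input)
        std::printf("%d ", fir4(x, delay_line));
    std::printf("\n");                   // prints 500 -300 700 200 0 0
}
```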

State Machines

FPGA designs often include one or more state machines. In fact, as opposed to a single large state machine, it is common to employ a large number of smaller machines that interact with each other, often in extremely complicated ways.

In order to create high-reliability and high-availability FPGA designs, it is necessary to create the control logic associated with these multiple state machines in such a way as to ensure that they don't step on each other's toes. For example, it would be easy to create two state machines, each of which can write data into the same first-in, first-out memory (FIFO). When this portion of the testbench is created, its designer will ensure that both of the state machines can indeed write into the FIFO. However, the testbench designer may neglect to test the case in which both state machines attempt to access the FIFO simultaneously. This type of scenario can be exceedingly complicated when only a few state machines are involved, and it becomes overwhelmingly complex as the number of state machines increases.

To address this problem, special tools and techniques are available to ensure that whenever there is the potential for such a problem to occur, the design engineer is informed and is also required to make a decision. In the case of multiple state machines writing to the same FIFO, for example, the designer may decide to specify a priority order (state machine A has priority over state machine B, which in turn has priority over state machine C, and so forth). Alternatively, the designer may decide to use a round-robin approach in which each of the state machines takes its turn (a behavioral sketch of such an arbiter appears at the end of this section). The key point is that the control logic for the state machines should be designed from the ground up in such a way that the machines cannot interfere with each other in an undefined manner.

Another consideration with state machines is how to design them in such a way that they cannot power up into an undefined or illegal state, and that nothing can occur to cause them to transition into an undefined or illegal state. Once again, there are tools and techniques that can aid designers in creating high-reliability and high-availability state machines of this nature. That said, irrespective of the quality of the design, radiation events can potentially cause a state machine to enter an undefined or illegal state. In order to address this, additional logic must be included in the design to detect and mitigate such an occurrence. This topic is explored in more detail in the Creating Radiation-Tolerant FPGA Designs section later in this paper.
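The following behavioral sketch (plain C++, not RTL) illustrates the round-robin arbitration idea referenced above: when several state machines request the same FIFO in the same cycle, exactly one is granted access, and the grant rotates so that no requester is starved.

```cpp
#include <cstdio>

// Behavioral sketch (not RTL) of the arbitration decision discussed above:
// when several state machines request the same FIFO in one cycle, a
// round-robin arbiter grants exactly one of them, so simultaneous access is
// resolved by design rather than being left undefined.
constexpr int kNumMachines = 3;

// Returns the index of the machine granted access this cycle, or -1 if idle.
int round_robin_grant(const bool request[kNumMachines], int last_grant) {
    for (int offset = 1; offset <= kNumMachines; ++offset) {
        const int candidate = (last_grant + offset) % kNumMachines;
        if (request[candidate])
            return candidate;            // next requester after the previous winner
    }
    return -1;
}

int main() {
    bool request[kNumMachines] = {true, false, true};  // machines 0 and 2 both want the FIFO
    int last_grant = kNumMachines - 1;                  // so machine 0 is first in line
    for (int cycle = 0; cycle < 4; ++cycle) {
        last_grant = round_robin_grant(request, last_grant);
        std::printf("cycle %d: grant to machine %d\n", cycle, last_grant);
    }
    // Grants alternate between machines 0 and 2; a fixed-priority arbiter
    // would instead always grant machine 0 while it keeps requesting.
}
```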
RTL Synthesis and Optimization

To facilitate verification and debug, all aspects of the design must be traceable to ensure that the implementation correctly reflects the intended design functionality. During the process of synthesizing an RTL representation into its gate-level equivalent, for example, it is necessary to keep track of the relationship between designer-specified signal names in the RTL and automatically generated signal names in the gate-level representation. This eases the task of instrumenting the design, as described further in the Verification and Debug section below, and supports cross-probing between the gate and RTL levels. It also means that even when working with the physical device operating on the board, signal values are automatically presented to the users in the context of the RTL source code with which they are most familiar, dramatically increasing the ease and speed of debugging.

Today's logic and physical RTL synthesis and optimization tools are incredibly powerful and sophisticated. Countless hours have been devoted to developing algorithms that result in optimal designs that use the lowest possible power, consume the smallest possible amount of FPGA resources (which translates to silicon area in ASIC terms) and extract the maximum level of performance from the device.

However, in order to create high-reliability and high-availability FPGA designs, it may not be desirable for the synthesis tool to perform all of the optimizations of which it is capable. For example, it may be desirable to preserve certain nodes all the way through the design process; that is, to identify specific nodes in the RTL representation and maintain those nodes in the gate-level representation and also in the physical device following the mapping of the logic into the FPGA's look-up tables (LUTs). Furthermore, it would be undesirable for the synthesis tool to inadvertently remove any logic that it regarded as being unnecessary, but that the designers had specifically included in the design to support downstream verification, debug and test. Similarly, in the case of radiation-tolerant designs that employ triple modular redundancy (TMR), in which logic is triplicated and voting circuits are used to select the majority view from the three circuits, it would be unfortunate, to say the least, if the synthesis tool determined that this redundant logic was unnecessary and decided to remove it.

The end result is that it must be possible for the users of the synthesis technology to control the tool and to instruct it about which portions of the design can be rigorously optimized and which portions serve a debug or redundant-circuitry purpose and must therefore be preserved unchanged. Furthermore, it must be possible to tie these decisions back to specific elements in the engineering and architectural specification, which are themselves associated with specific items in the original requirements specification.

Verification and Debug

There are many aspects of verification and debug that affect the creation of high-reliability and high-availability FPGA designs. For example, it is necessary to be able to perform formal equivalence checking between the various representations of the design, such as the RTL and gate-level descriptions, to ensure that any transformations performed by synthesis and optimization have not impacted the desired functionality of the design (a conceptual sketch of this equivalence question appears at the end of this section). Another consideration is that the design environment should allow any testbenches that were created to verify the high-level representations during the architecture exploration and algorithmic exploration portions of the design flow to be reused throughout the remainder of the flow. This ensures that the RTL and gate-level implementations fully match their algorithmic counterparts.

One very important consideration is the ability to instrument the RTL with special debug logic in the form of virtual logic analyzers. This allows the designer to specify which signals internal to the device are to be monitored, along with any trigger conditions that will turn the monitoring on and off. These logic analyzers are subsequently synthesized into the design and loaded into the physical FPGA.
In addition to the fact that this technology should be quick and easy to use, the environment must keep track of the relationship between designer-specified signal names in the RTL and automatically generated signal names in the gate-level representation. This means that even when working with the physical device, signal values are automatically presented to the users in the context of the RTL source code with which they are most familiar, which dramatically increases the ease and speed of debugging.
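As a purely conceptual illustration of the question an equivalence checker answers, the following sketch exhaustively compares a reference description of a tiny combinational block against a rewritten form. Both functions are invented for illustration, and real formal tools prove equivalence symbolically rather than by enumerating inputs.

```cpp
#include <cstdint>
#include <cstdio>

// Conceptual sketch of equivalence checking on a tiny combinational block:
// exhaustively compare a "reference" expression against an "optimized" one
// over all input values. This only illustrates the question being answered;
// formal tools do not enumerate inputs.
static bool reference(uint8_t a, uint8_t b, uint8_t sel) {
    return sel ? (a & b) : (a | b);
}
static bool optimized(uint8_t a, uint8_t b, uint8_t sel) {
    return (a & b) | (!sel & (a | b));   // a rewritten form produced by "synthesis"
}

int main() {
    for (int a = 0; a <= 1; ++a)
        for (int b = 0; b <= 1; ++b)
            for (int sel = 0; sel <= 1; ++sel)
                if (reference(a, b, sel) != optimized(a, b, sel)) {
                    std::printf("Mismatch at a=%d b=%d sel=%d\n", a, b, sel);
                    return 1;
                }
    std::printf("Designs are equivalent over all inputs\n");
}
```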

Low-Power Design

Over the past few years, power consumption has moved to the forefront of FPGA design and verification concerns. Power consumption has a direct impact on various aspects of the design, including its cost and reliability. For example, consider a multi-FPGA design that consumes so much power that it is necessary to employ a fan for cooling purposes. In addition to increasing the cost of the system (and the fact that the fan itself consumes more power), the use of the fan impacts the reliability and availability of the system, because a failure of the fan, which is a very common occurrence, can cause the system to overheat and fail or shut down.

In the not-so-distant past, power considerations were relegated to the later stages of the FPGA development flow. By comparison, in the case of today's extremely complex FPGA designs, low power isn't something that can simply be bolted on at the end of the development process. System architects and design engineers need to be able to estimate power early on and to measure power later on, because the consequences of running too hot may necessitate time-consuming design re-spins. In order to meet aggressive design schedules, it is no longer sufficient to consider power only in the implementation phase of the design. The size and complexity of today's FPGAs make it imperative to consider power throughout the entire development process, from the engineering and architectural specification phase, through the virtual prototyping and algorithmic evaluation portions of the flow, all the way to implementation with power-aware synthesis and optimization.

Creating Radiation-Tolerant FPGA Designs

It is well known that the designers of equipment intended for deployment in hostile environments, such as nuclear power stations and aerospace applications, have to expend time and effort to ensure that the electronic components chosen are physically resistant to the effects of radiation, that is, radiation-hardened (rad-hard). In addition to the rad-hard components themselves, it is also necessary to create the designs to be radiation tolerant (rad-tolerant), which means that the designs are created in such a way as to mitigate the effects of any radiation events. Such rad-tolerant designs may contain, for example, built-in error-correcting memory architectures and built-in redundant circuit elements.

In reality, radiation from one source or another is all around us all the time. In addition to cosmic rays raining down from above, radioactive elements are found in the ground we walk on, the air we breathe and the food we eat. Even the materials used to create the packages for electronic components such as silicon chips can spontaneously emit radioactive particles. This was not a significant problem until recently, because the structures created in the silicon were relatively large and were not typically affected by the types and strengths of radioactive sources found close to the Earth's surface. However, in our efforts to increase silicon capacity, increase performance, reduce power consumption and lower costs, each new generation of integrated circuit features smaller and smaller transistors. Work has already commenced on rolling out devices at the 28-nm node, with the 22-/20-nm node not far behind. These structures are so small that they can be affected by the levels of radiation found on Earth.
Radiation-induced errors can result in a telecom router shutting down, a control system failing to respond to a command or an implantable medical device incorrectly interpreting a patient's condition and responding inappropriately. These are just a few examples of the many high-reliability or mission-critical systems that require designers to understand and account for radiation-induced effects. A radiation event may flip the state of a sequential element in the design, such as a register or a memory cell; this is known as a single-event upset (SEU). Alternatively, a radiation event may cause an unwanted transient in the combinatorial logic; this is referred to as a single-event transient (SET). If an SET is clocked into a register or stored in a memory element, it becomes an SEU.

Insertion of error detection and mitigation strategies is key to alleviating SEUs. Some techniques are listed in Table 2.

Table 2: SEU error detection and mitigation approaches
- TMR: Triplicate the logic and compare the outputs; report any mismatch (detection) and create mitigation logic to mask the fault (mitigation)
- Distributed TMR: Triplicate the submodules prone to SEUs/SETs and vote on the outputs
- Fault-tolerant FSMs using Hamming-3 encoding for immunity against single-bit errors
- ECC RAMs (with TMR) for single-bit error detection and correction in memories
- Safe FSMs and safe sequential circuitry: Create and preserve the custom error-detection circuitry specified in your RTL
- Periodically scrub the device; reprogram the device on the fly

In order to create radiation-tolerant high-reliability and high-availability FPGA designs, design tools need to be able to take the original RTL specified by the designers and automatically replicate parts of the circuit, for example, to implement TMR. Distributed TMR inserts redundancy automatically into the design by triplicating all or part of the logic in a circuit and then adding majority voting logic to determine the best two out of three results in case a signal is changed due to an SEU (a behavioral sketch of the voting principle appears at the end of this section). TMR is, by its very nature, expensive in resources, so it is usual to apply TMR to just those parts of the design that the designer considers to be the most critical parts of the circuit. The synthesis tools can typically help you to specify where you want redundancy, and the tool will then automatically apply it during synthesis. TMR may be required at the register level, the individual memory level, the block level or the entire system level.

In the case of state machines, it is no longer sufficient to just create a design that cannot clock the state machine into an illegal state. Today, that state machine could be forced into an illegal state by a radiation event that flips a state register. Thus, the design tools must be capable of taking the original state machine representation defined by the designer and augmenting it with the ability to detect and mitigate radiation-induced errors. Safe FSM and safe sequential circuitry implementations involve using error-detection circuitry to force a state machine or sequential logic into a reset state or into a user-defined error state, so that the error can be handled in a custom manner as specified by the user in their RTL. The user can, for example, specify the mitigation circuitry in the others clause of an RTL case statement. The synthesis software will then automatically implement this circuitry so that, should an error occur during operation of the design, the FSM or sequential logic will return operation to a safe state such as a reset or default state.

Fault-tolerant FSMs with Hamming-3 encoding, for example, can be used to detect and correct single-bit errors: with a Hamming distance of 3, a state register erroneously reaching an adjacent state will be detected, and correct operation of the FSM continues automatically. Prior to synthesis, the designer need only tell the synthesis tool that they wish to use a Hamming-3 encoding strategy for designated FSMs. The synthesis tool will automatically create all circuitry for error detection and mitigation, and the design will automatically continue to run in the event of an error.
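The following behavioral sketch (plain C++, not RTL) illustrates the majority-voting principle behind TMR that was referenced above: three replicas of the same value feed a bitwise two-out-of-three vote, so an SEU in any single replica is masked.

```cpp
#include <cstdint>
#include <cstdio>

// Behavioral sketch (not RTL) of the TMR principle from Table 2: three copies
// of the same logic feed a majority voter, so a single upset copy is outvoted.
static uint8_t majority(uint8_t a, uint8_t b, uint8_t c) {
    return (a & b) | (a & c) | (b & c);   // bitwise best-two-out-of-three
}

int main() {
    uint8_t copy_a = 0x5A, copy_b = 0x5A, copy_c = 0x5A;
    copy_b ^= 0x10;  // simulate an SEU flipping one bit in one replica

    const uint8_t voted = majority(copy_a, copy_b, copy_c);
    std::printf("voted output 0x%02X (fault in copy_b masked: %s)\n",
                static_cast<unsigned>(voted), voted == 0x5A ? "yes" : "no");
    // Error detection can additionally report the mismatch between replicas,
    // so the system knows an upset occurred even though it was masked.
}
```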
Error-correcting code (ECC) memories may be used to detect and correct single-bit errors. ECC memories combined with TMR prevent false data from being captured by the memory and from being propagated to the parts of the circuitry that the RAM output controls.

Once you specify in the RTL or constraints file which memory functions are safety critical for your design, the synthesis software knows to automatically infer the ECC memories offered by many FPGA vendors, automatically makes the proper circuit connections and, if requested, deploys additional TMR.

Furthermore, FPGA-based designs have an additional consideration with regard to their configuration cells. Thus far, the majority of FPGAs used in high-radiation environments have been based on antifuse configuration cells. These have the advantage of being immune to radiation events, but they have the disadvantage of being only one-time programmable. Also, antifuse-based FPGAs are typically one or two technology nodes behind the highest-performance, highest-capacity, state-of-the-art SRAM-based devices. While users are aware of the advantages offered by SRAM-based FPGAs, they realize that their design (and design tools) must offer some way to mitigate radiation-induced errors in the configuration cells. In non-antifuse FPGA technologies, automated TMR, the ability of the software to select ECC memories, and the generation of safe or fault-tolerant FSMs as described above are all ways to alleviate SEUs. Decisions about where and which techniques to deploy involve both risk and tradeoffs between cost and performance. Ultimately, during synthesis, it is important for the software to allow the user to select and control the specific error detection and mitigation strategies to use and where in the design to deploy each of them.

Software Considerations

The task of creating high-reliability and high-availability FPGA designs involves all aspects of the system, including both the hardware and software components. Software has become an increasingly critical part of nearly all present-day systems. As with hardware, creating high-reliability, high-availability software depends on good requirements, design and implementation. In turn, this relies heavily on a disciplined software engineering process that will anticipate and design against unintended consequences.

Traceability, Repeatability and Design Management

The concepts of traceability, repeatability and design management permeate the entire development flow when it comes to creating high-reliability and high-availability FPGA designs. Right from the origination of a new development project, it is necessary to build project plans, to track project deliverables against milestones and to constantly monitor the status of the project to ensure that the schedule will be met. As has been noted throughout this paper, this requires some way to capture the original requirements in a machine-readable form and to associate individual elements in the engineering and architectural specification with corresponding items in the requirements specification. Similarly, as the design proceeds through architecture exploration, algorithmic evaluation, high-level synthesis, RTL capture, and logic and physically aware synthesis, every aspect of the implementation should be associated with corresponding items in the engineering and architectural specification.

The development environment also needs to support design and configuration management, including the ability to take snapshots of a distributed design (that is, the current state of all of the hardware and software files associated with the design), along with support for revisions, versions and archiving.
This is important for every design, especially those involving hardware, software and verification engineers who are split into multiple teams, which may span multiple companies and/or be geographically dispersed around the world.
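As an illustration of what such a snapshot might capture, here is a minimal, hypothetical sketch of a snapshot record listing each design file and its exact revision. The structure and field names are invented and do not represent any particular configuration-management tool.

```cpp
#include <iostream>
#include <string>
#include <vector>

// Hypothetical sketch of what a design "snapshot" could record: the design
// files (hardware, software, constraints, scripts) and the exact revision of
// each at the moment the snapshot is taken. Fields are illustrative only.
struct FileVersion {
    std::string path;      // e.g. "rtl/fifo_ctrl.v"
    std::string revision;  // revision identifier from the version-control system
};

struct Snapshot {
    std::string label;               // e.g. "milestone_B"
    std::string timestamp;           // when the snapshot was taken
    std::vector<FileVersion> files;  // complete file set, hardware and software
};

int main() {
    Snapshot snap{"milestone_B", "2012-04-02T10:15:00Z",
                  {{"rtl/fifo_ctrl.v", "r1482"},
                   {"sw/monitor/main.c", "r1479"},
                   {"constraints/timing.sdc", "r1475"}}};

    std::cout << "Snapshot " << snap.label << " (" << snap.timestamp << ")\n";
    for (const auto& f : snap.files)
        std::cout << "  " << f.path << " @ " << f.revision << "\n";
    // Rebuilding from exactly these revisions is what makes the flow repeatable.
}
```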

Summary

For FPGAs designed at the 28-nm node and below, high reliability and high availability of the resulting systems are of great concern for a wide variety of target application areas. Fortunately, techniques are now available within Synopsys EDA tools to automate aspects of developing both mission-critical and safety-critical FPGA-based systems. These tools and techniques span engineering and architectural specification and exploration, the ability to incorporate pre-verified IP within your design, and techniques to trace, track and document project requirements every step of the way to ensure compliance with industry practices and standards such as DO-254.

Using Synopsys tools, engineers can now create radiation-tolerant FPGA designs by incorporating deliberate redundancy within their design and by developing safe state machines with custom error mitigation logic that returns the design to a known safe state of operation should an error occur due to radiation effects. This logic can ensure high system availability in the field and provide reliable system operation. Synopsys tools also enable you to verify reliable and correct operation of your design by allowing you to create an implementation and then monitor, probe and debug its operation on the board to ensure correct system behavior. Specifically, you can probe, monitor and debug your design operation at the RTL level while running the design on the board. During the design creation process, design engineers may additionally choose to use Synopsys formal verification (equivalence checking), virtual prototyping and software simulation to validate functional correctness and to ensure that performance and power needs are being met.

For more details on solutions that help you develop highly reliable, high-availability designs, please contact Synopsys.

Synopsys, Inc., 700 East Middlefield Road, Mountain View, CA. Synopsys is a trademark of Synopsys, Inc. in the United States and other countries. All other names mentioned herein are trademarks or registered trademarks of their respective owners. 04/12.RP.CS1598


More information

Deploying Exchange Server 2007 SP1 on Windows Server 2008

Deploying Exchange Server 2007 SP1 on Windows Server 2008 Deploying Exchange Server 2007 SP1 on Windows Server 2008 Product Group - Enterprise Dell White Paper By Ananda Sankaran Andrew Bachler April 2008 Contents Introduction... 3 Deployment Considerations...

More information

Understanding DO-254 Compliance for the Verification of Airborne Digital Hardware

Understanding DO-254 Compliance for the Verification of Airborne Digital Hardware White Paper Understanding DO-254 Compliance for the of Airborne Digital Hardware October 2009 Authors Dr. Paul Marriott XtremeEDA Corporation Anthony D. Stone Synopsys, Inc Abstract This whitepaper is

More information

Smarter Balanced Assessment Consortium. Recommendation

Smarter Balanced Assessment Consortium. Recommendation Smarter Balanced Assessment Consortium Recommendation Smarter Balanced Quality Assurance Approach Recommendation for the Smarter Balanced Assessment Consortium 20 July 2012 Summary When this document was

More information

IBM Software Information Management Creating an Integrated, Optimized, and Secure Enterprise Data Platform:

IBM Software Information Management Creating an Integrated, Optimized, and Secure Enterprise Data Platform: Creating an Integrated, Optimized, and Secure Enterprise Data Platform: IBM PureData System for Transactions with SafeNet s ProtectDB and DataSecure Table of contents 1. Data, Data, Everywhere... 3 2.

More information

The role of integrated requirements management in software delivery.

The role of integrated requirements management in software delivery. Software development White paper October 2007 The role of integrated requirements Jim Heumann, requirements evangelist, IBM Rational 2 Contents 2 Introduction 2 What is integrated requirements management?

More information

Satellite REPRINTED FROM. John D. Prentice, Stratos Global Corp., USA, www.oilfieldtechnology.com

Satellite REPRINTED FROM. John D. Prentice, Stratos Global Corp., USA, www.oilfieldtechnology.com Satellite solutions John D. Prentice, Stratos Global Corp., USA, discusses how new satellite solutions impact offshore and land based exploration and production. REPRINTED FROM www.oilfieldtechnology.com

More information

www.dotnetsparkles.wordpress.com

www.dotnetsparkles.wordpress.com Database Design Considerations Designing a database requires an understanding of both the business functions you want to model and the database concepts and features used to represent those business functions.

More information

Design Verification The Case for Verification, Not Validation

Design Verification The Case for Verification, Not Validation Overview: The FDA requires medical device companies to verify that all the design outputs meet the design inputs. The FDA also requires that the final medical device must be validated to the user needs.

More information

Optimizing the Data Center for Today s State & Local Government

Optimizing the Data Center for Today s State & Local Government WHITE PAPER: OPTIMIZING THE DATA CENTER FOR TODAY S STATE...... &.. LOCAL...... GOVERNMENT.......................... Optimizing the Data Center for Today s State & Local Government Who should read this

More information

How To Optimize Data Center Performance

How To Optimize Data Center Performance Data Center Optimization WHITE PAPER PARC, 3333 Coyote Hill Road, Palo Alto, California 94304 USA +1 650 812 4000 engage@parc.com www.parc.com Abstract Recent trends in data center technology have created

More information

Certification Authorities Software Team (CAST) Position Paper CAST-9

Certification Authorities Software Team (CAST) Position Paper CAST-9 Certification Authorities Software Team (CAST) Position Paper CAST-9 Considerations for Evaluating Safety Engineering Approaches to Software Assurance Completed January, 2002 NOTE: This position paper

More information

DO-254 Requirements Traceability

DO-254 Requirements Traceability DO-254 Requirements Traceability Louie De Luna, Aldec - June 04, 2013 DO-254 enforces a strict requirements-driven process for the development of commercial airborne electronic hardware. For DO-254, requirements

More information

Testing of Digital System-on- Chip (SoC)

Testing of Digital System-on- Chip (SoC) Testing of Digital System-on- Chip (SoC) 1 Outline of the Talk Introduction to system-on-chip (SoC) design Approaches to SoC design SoC test requirements and challenges Core test wrapper P1500 core test

More information

Microsoft SQL Server on Stratus ftserver Systems

Microsoft SQL Server on Stratus ftserver Systems W H I T E P A P E R Microsoft SQL Server on Stratus ftserver Systems Security, scalability and reliability at its best Uptime that approaches six nines Significant cost savings for your business Only from

More information

Application Release Automation with Zero Touch Deployment

Application Release Automation with Zero Touch Deployment WHITE PAPER JUNE 2013 Application Release Automation with Zero Touch Deployment Daneil Kushner and Eran Sher Application Delivery 2 WHITE PAPER: APPLICATION RELEASE AUTOMATION WITH ZERO TOUCH DEPLOYMENT

More information

Rapid System Prototyping with FPGAs

Rapid System Prototyping with FPGAs Rapid System Prototyping with FPGAs By R.C. Coferand Benjamin F. Harding AMSTERDAM BOSTON HEIDELBERG LONDON NEW YORK OXFORD PARIS SAN DIEGO SAN FRANCISCO SINGAPORE SYDNEY TOKYO Newnes is an imprint of

More information

High Availability and Disaster Recovery Solutions for Perforce

High Availability and Disaster Recovery Solutions for Perforce High Availability and Disaster Recovery Solutions for Perforce This paper provides strategies for achieving high Perforce server availability and minimizing data loss in the event of a disaster. Perforce

More information

How To Fix A 3 Bit Error In Data From A Data Point To A Bit Code (Data Point) With A Power Source (Data Source) And A Power Cell (Power Source)

How To Fix A 3 Bit Error In Data From A Data Point To A Bit Code (Data Point) With A Power Source (Data Source) And A Power Cell (Power Source) FPGA IMPLEMENTATION OF 4D-PARITY BASED DATA CODING TECHNIQUE Vijay Tawar 1, Rajani Gupta 2 1 Student, KNPCST, Hoshangabad Road, Misrod, Bhopal, Pin no.462047 2 Head of Department (EC), KNPCST, Hoshangabad

More information

Developments in Point of Load Regulation

Developments in Point of Load Regulation Developments in Point of Load Regulation By Paul Greenland VP of Marketing, Power Management Group, Point of load regulation has been used in electronic systems for many years especially when the load

More information

High Availability for Citrix XenServer

High Availability for Citrix XenServer WHITE PAPER Citrix XenServer High Availability for Citrix XenServer Enhancing XenServer Fault Tolerance with High Availability www.citrix.com Contents Contents... 2 Heartbeating for availability... 4 Planning

More information

Performance Optimization Guide

Performance Optimization Guide Performance Optimization Guide Publication Date: July 06, 2016 Copyright Metalogix International GmbH, 2001-2016. All Rights Reserved. This software is protected by copyright law and international treaties.

More information

Building Remote Access VPNs

Building Remote Access VPNs Building Remote Access VPNs 124 Grove Street, Suite 309 Franklin, MA 02038 877-4-ALTIGA www.altiga.com Building Remote Access VPNs: Harnessing the Power of the Internet to Reduce Costs and Boost Performance

More information

Data center virtualization

Data center virtualization Data center virtualization A Dell Technical White Paper August 2011 Lay the foundation for impressive disk utilization and unmatched data center flexibility Executive summary Many enterprise IT departments

More information

solution brief September 2011 Can You Effectively Plan For The Migration And Management of Systems And Applications on Vblock Platforms?

solution brief September 2011 Can You Effectively Plan For The Migration And Management of Systems And Applications on Vblock Platforms? solution brief September 2011 Can You Effectively Plan For The Migration And Management of Systems And Applications on Vblock Platforms? CA Capacity Management and Reporting Suite for Vblock Platforms

More information

ESA s Data Management System for the Russian Segment of the International Space Station

ESA s Data Management System for the Russian Segment of the International Space Station iss data management system ESA s Data Management System for the Russian Segment of the International Space Station J. Graf, C. Reimers & A. Errington ESA Directorate of Manned Spaceflight and Microgravity,

More information

NVM memory: A Critical Design Consideration for IoT Applications

NVM memory: A Critical Design Consideration for IoT Applications NVM memory: A Critical Design Consideration for IoT Applications Jim Lipman Sidense Corp. Introduction The Internet of Things (IoT), sometimes called the Internet of Everything (IoE), refers to an evolving

More information

Cisco Change Management: Best Practices White Paper

Cisco Change Management: Best Practices White Paper Table of Contents Change Management: Best Practices White Paper...1 Introduction...1 Critical Steps for Creating a Change Management Process...1 Planning for Change...1 Managing Change...1 High Level Process

More information

The Role of Automation Systems in Management of Change

The Role of Automation Systems in Management of Change The Role of Automation Systems in Management of Change Similar to changing lanes in an automobile in a winter storm, with change enters risk. Everyone has most likely experienced that feeling of changing

More information

Advanced Core Operating System (ACOS): Experience the Performance

Advanced Core Operating System (ACOS): Experience the Performance WHITE PAPER Advanced Core Operating System (ACOS): Experience the Performance Table of Contents Trends Affecting Application Networking...3 The Era of Multicore...3 Multicore System Design Challenges...3

More information

Powering Converged Infrastructures

Powering Converged Infrastructures Powering Converged Infrastructures By Mike Jackson Product Manager Eaton Executive summary Converged infrastructures utilize virtualization and automation to achieve high levels of availability in a costeffective

More information

ACHIEVING 100% UPTIME WITH A CLOUD-BASED CONTACT CENTER

ACHIEVING 100% UPTIME WITH A CLOUD-BASED CONTACT CENTER ACHIEVING 100% UPTIME WITH A CLOUD-BASED CONTACT CENTER Content: Introduction What is Redundancy? Defining a Hosted Contact Center V-TAG Distribution Levels of Redundancy Conclusion Fault Tolerance Scalability

More information

Take value-add on a test drive. Explore smarter ways to evaluate phone data providers.

Take value-add on a test drive. Explore smarter ways to evaluate phone data providers. White Paper Take value-add on a test drive. Explore smarter ways to evaluate phone data providers. Employing an effective debt-collection strategy with the right information solutions provider helps increase

More information

RAID technology and IBM TotalStorage NAS products

RAID technology and IBM TotalStorage NAS products IBM TotalStorage Network Attached Storage October 2001 RAID technology and IBM TotalStorage NAS products By Janet Anglin and Chris Durham Storage Networking Architecture, SSG Page No.1 Contents 2 RAID

More information

The Shortcut Guide to Balancing Storage Costs and Performance with Hybrid Storage

The Shortcut Guide to Balancing Storage Costs and Performance with Hybrid Storage The Shortcut Guide to Balancing Storage Costs and Performance with Hybrid Storage sponsored by Dan Sullivan Chapter 1: Advantages of Hybrid Storage... 1 Overview of Flash Deployment in Hybrid Storage Systems...

More information

Use of Reprogrammable FPGA on EUCLID mission

Use of Reprogrammable FPGA on EUCLID mission 19/05/2016 Workshop su Applicazioni FPGA in ambito Astrofisico Raoul Grimoldi Use of Reprogrammable FPGA on EUCLID mission Euclid mission overview EUCLID is a cosmology mission part of Cosmic Vision 2015-2025

More information

SHARPCLOUD SECURITY STATEMENT

SHARPCLOUD SECURITY STATEMENT SHARPCLOUD SECURITY STATEMENT Summary Provides details of the SharpCloud Security Architecture Authors: Russell Johnson and Andrew Sinclair v1.8 (December 2014) Contents Overview... 2 1. The SharpCloud

More information

Increasing Data Center Resilience While Lowering PUE

Increasing Data Center Resilience While Lowering PUE Increasing Data Center Resilience While Lowering PUE Nandini Mouli, Ph.D. President/Founder esai LLC mouli.nandini@gmail.com www.esai.technology Introduction esai LLC esai LLC: Is a Disadvantaged woman-owned

More information

Network-Wide Change Management Visibility with Route Analytics

Network-Wide Change Management Visibility with Route Analytics Network-Wide Change Management Visibility with Route Analytics Executive Summary Change management is a hot topic, and rightly so. Studies routinely report that a significant percentage of application

More information

ARM Ltd 110 Fulbourn Road, Cambridge, CB1 9NJ, UK. *peter.harrod@arm.com

ARM Ltd 110 Fulbourn Road, Cambridge, CB1 9NJ, UK. *peter.harrod@arm.com Serial Wire Debug and the CoreSight TM Debug and Trace Architecture Eddie Ashfield, Ian Field, Peter Harrod *, Sean Houlihane, William Orme and Sheldon Woodhouse ARM Ltd 110 Fulbourn Road, Cambridge, CB1

More information

The Total Cost of Ownership (TCO) of migrating to SUSE Linux Enterprise Server for System z

The Total Cost of Ownership (TCO) of migrating to SUSE Linux Enterprise Server for System z The Total Cost of Ownership (TCO) of migrating to SUSE Linux Enterprise Server for System z This White Paper explores the financial benefits and cost savings of moving workloads from distributed to mainframe

More information

Traffic Engineering Management Concepts

Traffic Engineering Management Concepts 3 CHAPTER This chapter includes an overview of Cisco Prime Fulfillment and of some of the concepts used in this guide. This chapter includes the following sections: Prime Fulfillment TEM Overview, page

More information

Disaster Recovery for Oracle Database

Disaster Recovery for Oracle Database Disaster Recovery for Oracle Database Zero Data Loss Recovery Appliance, Active Data Guard and Oracle GoldenGate ORACLE WHITE PAPER APRIL 2015 Overview Oracle Database provides three different approaches

More information