PCI Express Carriers and OpenVPX


Introduction

Mezzanine carrier cards have been used in system designs for many years and will continue to play an integral part, especially as the performance of processor cards continues to increase. This white paper discusses the many uses of carrier cards designed specifically for VPX (1) and OpenVPX (2) systems.

First, let's define what we mean by a mezzanine carrier card. In some system designers' vernacular, a mezzanine carrier card is any card that can host a mezzanine. Although one can take this simple approach, we will be a bit more restrictive in our definition. In this white paper, we will refer to a mezzanine carrier card as a card that can host one or more mezzanines but does not provide any processing function. The processing function, or control of the mezzanine, is provided by another card in the system. Stated another way, the mezzanine carrier hosts a mezzanine that is then controlled and used by a processor card in the system. Hence, we could refer to the mezzanine carrier as being "dumb."

Why Use a Mezzanine Carrier

Mezzanine carriers are useful in system designs where:

1. Additional mezzanines are required and the host card does not have a mezzanine site available. Using another processor would increase cost and power, especially where the existing processor in the system has plenty of processing headroom.

2. Mezzanines are required but the host card's mezzanine site cannot support the mezzanine due to:

a. Seating - Many host cards have large memory capacity (x GB), and these devices are located in the mezzanine area, making it difficult to seat a mezzanine in an air-cooled environment.

b. Current draw - Some mezzanines, such as high-end DSP and FPGA cards, require input currents that exceed what a base card can provide.

c. Power dissipation - Cards similar to those referenced in (b) have very high power dissipation, which increases the overall power dissipation of the host card beyond its limits.

d. Mezzanine cooling - Heat from the base card may prevent the mezzanine from being cooled at the required card-edge temperature, or the mezzanine may add heat into the base card such that the base card cannot be cooled. Either way, moving the mezzanine to a cooler carrier card may be the solution.

e. Access - Some mezzanine cards (e.g., optical) require front panel access, which may require modifications to a densely populated base card. Such modifications, if possible at all, can be expensive. Modifying a less dense carrier can be an easier and cheaper solution.

f. Space limitation - Many densely populated host cards do not have the space to provide the Pn1, Pn2, Pn3, and Pn4 PMC connectors along with the Pn5 and Pn6 connectors needed to support XMC (VITA 42 (4)). Many of these cards provide only one of the additional connectors for the XMC, typically Pn5 for the PCI Express (PCIe) fabric, while using Pn4 for the XMC I/O. A less dense mezzanine carrier card can provide all six connectors.

g. Pinout limitation - The number of pins (I/O) from the mezzanine sites is typically limited by the amount of base card I/O (i.e., serial ports, USB, SATA, Ethernet, DIO, etc.) supported on general purpose Single Board Computers (SBCs) or similar cards designed for generic rather than specific applications. This limits what can be pinned out per VITA 46.9 (3). A card designed specifically to support mezzanines is not I/O limited by typical SBC I/O and can therefore support the maximum I/O from a PMC plus the XMC site.

Mezzanine Carriers

Mezzanine carriers are generally designed to host two types of mezzanines:

1. PMC cards (Pn1-4) (VITA 20 (5))
2. XMC cards with either Pn5 and Pn6, or Pn5 and Pn4 (VITA 42)

A PCIe mezzanine carrier comes in two basic flavors:

- A switchless (direct connect) mezzanine carrier
- A switched mezzanine carrier

Figures 1a and 1b illustrate two examples of a switchless mezzanine carrier, which does not have a PCIe switch. In Figure 1a, the PCIe lanes coming from the backplane are routed to a PCIe-PCI bridge to support the PCI format conversion. In Figure 1b, the PCIe lanes are routed directly to the Pn5 connector of the XMC. This provides a lower power carrier with the added benefit that the carrier can also be used with a Serial RapidIO mezzanine card, as the series caps reside on the mezzanine.

Figure 1a    Figure 1b
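Whether a direct-connect carrier's backplane lanes can keep a given mezzanine fed is largely a bandwidth question. The back-of-the-envelope arithmetic below is an illustrative sketch (not from this white paper): it uses the published per-lane transfer rates and line encodings for PCIe Gen 1-3 and ignores packet and protocol overhead.

```python
# Rough per-direction PCIe link throughput, for comparing carrier link options.
# Illustrative only: real throughput also depends on TLP overhead, payload
# size, and switch latency.

GT_PER_LANE = {1: 2.5, 2: 5.0, 3: 8.0}           # GT/s per lane, per generation
ENCODING = {1: 8 / 10, 2: 8 / 10, 3: 128 / 130}  # line-code efficiency

def link_gbytes_per_s(gen: int, lanes: int) -> float:
    """Approximate usable GB/s in one direction for a PCIe link."""
    gbits = GT_PER_LANE[gen] * ENCODING[gen] * lanes
    return gbits / 8  # 8 bits per byte

# A x8 Gen 1 link and a x4 Gen 2 link carry the same usable bandwidth
# (about 2 GB/s each), which is why a switch can rate-convert between them:
print(link_gbytes_per_s(1, 8))  # 2.0
print(link_gbytes_per_s(2, 4))  # 2.0
```

The equal result for x8 Gen 1 and x4 Gen 2 is the arithmetic behind the rate-conversion role a carrier switch can play between a mezzanine and a narrower backplane port.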

Figure 2 illustrates a switched carrier where a PCIe switch is provided. There are advantages to using an onboard switch:

a. More flexibility, as the mezzanine carrier can be used to support both a PMC and an XMC mezzanine (as shown in the figure).

b. If the mezzanines are low performance (not requiring much bandwidth), a single port (perhaps x4 lanes) on a 6U mezzanine carrier can be used to control two mezzanines rather than dedicating a port per mezzanine site.

c. The switch can provide a rate conversion from an XMC mezzanine that only supports a x8 PCIe Gen 1 link to a backplane with x4 PCIe Gen 2 ports (as many profiles have data plane widths of x4 lane ports).

d. The switch can be used to chain additional carriers together, allowing one processor to control and use many mezzanines. In the case of a backplane profile with an expansion plane, carriers that have their backplane ports on the expansion plane can easily be chained together and run off a processor at the end or center of the chain.

Figure 2

Mezzanine Carrier Connectivity

When the VITA 46 specification was released, it did not really address the handling of carrier cards, nor how they would or should be connected together to create system solutions. So system architects and module providers defined their own ways of connecting the various VPX cards together to create their system solutions. The OpenVPX System Specification (2) was the next step in the evolution of VPX, providing guidance for system architectural solutions using VPX. It is based on a flexible VPX family of standards, using the standard module mechanical, connector, thermal, communications protocol, utility, and power definitions provided by specific VPX standards to define a series of standard profiles for slots, backplanes, modules, and standard development chassis.

These slot profiles, module profiles, payload profiles, and backplane profiles provide system architects with predefined building blocks to use in a consistent manner for interconnecting cards, provided that the cards meet the various slot/module and backplane formats. OpenVPX also introduced the concepts of the Data Plane and the Expansion Plane. The Data Plane is defined as a plane used for application and external data traffic. The Expansion Plane is defined as a plane dedicated to communication between a logical controlling system element and a separate, but logically adjunct, system resource. In other words, the data plane is the primary plane that cards use to exchange data with each other, and the expansion plane is used by a controlling element to control an attached system resource, which can be a carrier card. OpenVPX also defines payload profiles that specify where expansion planes are provided, should they be part of that payload profile. There are many different payload profiles specified for 3U and 6U cards, many with the expansion plane defined but many without, mainly because the profile does not have the pins to provide it.

OpenVPX also defines backplane profiles for 3U and 6U systems, again some with and some without the expansion plane defined. Many systems developed prior to OpenVPX used the data plane to provide access to carrier cards, and such systems are still being designed today, as not all payload profiles define expansion planes. New OpenVPX systems where the expansion plane is available can make use of it.

Let us now explore some system topologies. Figure 3 shows the simple case of a two-card system where the host card (in this case a 6U processor) requires additional low speed mezzanines. The host card has an OpenVPX 6U VME Bridge Slot Profile, SLT6-BRG-4F1V-10.5.2. As there is no expansion plane on a VME bridge card, the mezzanine carrier must be connected using one of the Data Plane ports, in this case illustrated as Port A.

Figure 3

Figure 4 shows an example where mezzanines are used in a large centralized switch system, such as an OpenVPX BKP6-CEN16-11.2.2-n backplane. In this case the host card(s) are connected to the centralized switch through the data plane, so they cannot connect to the carriers via the data plane.

Figure 4: OpenVPX BKP6-CEN16-11.2.2-n (2) backplane

However, the cards used in the system do have an expansion plane that can be used to connect to mezzanine carriers. Figure 5 illustrates the case where one host card is connected to two carriers over the expansion plane. It can do this because most cards in this backplane profile have access to two cards over the expansion plane (one on each side of the card).

Figure 5: Mezzanine carriers connected over the expansion plane

In some cases, a host processor may only need one extra mezzanine site. For a 3U carrier, this is not a problem. However, in the case of a 6U carrier card, using only one site would seem like a waste. Additionally, in a system where two host processors each require one mezzanine, providing two extra mezzanine carriers, each with one site unused, means adding cost and power and using an additional slot that could be reserved for future growth. One solution, as illustrated in Figure 5a, is to use a switchless carrier where the PCIe lanes are connected directly to each mezzanine site. Another solution is to use a carrier with Gen 2 PCIe switches, which provide the ability to create two virtual switches in a single device. Ports are assigned to partitions (virtual switches) and owned by the host on the upstream port of each virtual switch. This is illustrated in Figure 5b, where a single carrier's switch is partitioned such that each processor owns and controls its assigned mezzanine.

Figure 5a: Shared mezzanines on a switchless carrier
Figure 5b: Shared mezzanines on a partitioned switched carrier
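The partitioning idea above can be made concrete with a toy model. The sketch below is purely illustrative (the names and structure are invented for this example, not any vendor's switch configuration API): each virtual switch gets one upstream port facing its owning host and one or more downstream ports facing mezzanine sites.

```python
# Toy model of a partitionable Gen 2 PCIe switch on a shared carrier
# (in the spirit of Figure 5b). Hypothetical names, not a real device API.

from dataclasses import dataclass, field

@dataclass
class VirtualSwitch:
    upstream: str                     # port facing the owning host processor
    downstream: list = field(default_factory=list)  # ports facing mezzanines

def partition(assignments: dict) -> dict:
    """Build one virtual switch per host from a
    {host: (upstream_port, [downstream_ports])} map."""
    return {host: VirtualSwitch(up, list(down))
            for host, (up, down) in assignments.items()}

# One physical carrier switch split so each processor owns one mezzanine site:
vswitches = partition({
    "proc_A": ("port0", ["port2"]),   # proc_A controls the mezzanine on port2
    "proc_B": ("port1", ["port3"]),   # proc_B controls the mezzanine on port3
})
print(vswitches["proc_A"].downstream)  # ['port2']
```

The point of the model is the ownership boundary: each host enumerates only the downstream ports in its own partition, so two processors can share one carrier without seeing each other's mezzanines.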

Let us now explore some actual use cases of mezzanine carriers.

Use Case 1

In this use case, the system designer has a requirement for a 3U VPX system with an Intel SBC, 1553, 256 GB of solid state storage, and 3 GigE ports. The Intel SBC has only one mezzanine site available, and in a 3U system it is not possible to provide all the I/O on the SBC. The solution is to use a mezzanine carrier card to host the additional I/O.

Figure 6: 3U system with 2 mezzanine carriers

Figure 6 shows an example solution using a Curtiss-Wright Intel SBC (the VPX3-1252), the XMC-603 1553 mezzanine, the VPX3-215 mezzanine carrier card, an XMC-552 storage mezzanine card, and a third-party PMC GigE mezzanine card. The VPX3-215 is an example of a switched mezzanine carrier card with one XMC/PMC site. It has four x4 PCIe ports, and two ports can be combined into a single x8 PCIe port.

Use Case 2

In this use case, the system designer has a requirement for a 3U VPX system that needs to measure fast-occurring events, with the algorithms implemented in an FPGA.

Figure 7: 3U system with a single mezzanine carrier

The SBC has a mezzanine site available; however, the power dissipation of this XMC FPGA card makes it impossible to host on the SBC. A mezzanine carrier card provides the best solution. Figure 7 shows the solution using the Curtiss-Wright PPC SBC (the VPX3-127), a VPX3-215, and a high power FPGA mezzanine such as the Curtiss-Wright ADX000.

Use Case 3

In this use case, a 3U system designer has a requirement for 2 low performance, low speed mezzanines. The system processor card has no mezzanine sites but has enough processing power to control two carrier cards used to provide the mezzanines. It also supports only a single PCIe port.

Figure 8: 3U system with chained switched carriers

The solution is to provide two mezzanine carriers, both of which are switched carriers (that is, they have a PCIe-to-PCIe switch). The carriers are chained together such that only a single PCIe port off the host processor card is needed. By using two identical carriers, the cost of spares is reduced, as there are only two different base cards. The first carrier hosts a low speed XMC serial port card providing 8 synchronous 422 ports. The second carrier hosts an XMC storage mezzanine with 128 GB of solid state storage. Figure 8 shows a representative implementation with a Curtiss-Wright FPE320 with an embedded processor and two VPX3-215s, one with an XMC-552 storage mezzanine and the other with the 8-port serial PMC.
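One consequence of chaining, worth checking early, is that every mezzanine behind the chain shares the single upstream link to the host. A quick feasibility sketch (illustrative only; the demand figures are hypothetical, and usable link bandwidth is approximated as in the earlier encoding arithmetic):

```python
# When switched carriers are daisy-chained off one host PCIe port, the summed
# mezzanine traffic must fit in that one upstream link.

def chain_fits(upstream_gbytes_s: float, demands_gbytes_s: list) -> bool:
    """True if total sustained mezzanine demand fits the upstream link."""
    return sum(demands_gbytes_s) <= upstream_gbytes_s

# A x4 Gen 2 upstream port offers roughly 2 GB/s usable per direction.
# A slow serial card (~10 MB/s) plus a storage mezzanine (~500 MB/s)
# fit comfortably, which is why chaining suits low speed mezzanines:
print(chain_fits(2.0, [0.01, 0.5]))  # True
```

For high bandwidth mezzanines the same check fails quickly, which is one reason chaining is presented here for low performance, low speed cards.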

Use Case 4

In this last use case, the system designer had a requirement for:

- Two 6U single board computers for processing, with VME bridge functionality
- A legacy VME card
- 2 solid state storage cards
- 2 FPGA mezzanine cards for algorithm implementation
- 1 graphics mezzanine

The final implementation is shown in Figure 9. The two processors are implemented with Curtiss-Wright's VPX6-185, which meets the Bridge Slot Profile SLT6-BRG-4F1V-10.5.2 payload profile. The processors communicate with, and control, the legacy VME card over the VME bus. One processor hosts the storage mezzanines, implemented with Curtiss-Wright's XMC-552 storage XMC. The second processor hosts one of the FPGA XMC mezzanines (Curtiss-Wright's ADX000). The carrier, a VPX6-215, hosts the graphics card (Curtiss-Wright's XMC-710) and the second ADX000.

Figure 9: 6U system with mezzanines and legacy VME support

Conclusion

Mezzanine carriers provide a valuable tool for system designers and integrators. They are an ideal solution for hosting mezzanines when the system designer/integrator:

a. Needs to support additional PMC and/or XMC mezzanines
b. Needs to host mezzanines for a controlling base card that, for density, space, or power reasons, cannot host the required mezzanine
c. Needs to cool high performance, high power mezzanine cards that cannot be cooled on a controlling base card
d. Needs to provide front panel access for mezzanine cards
e. Needs to provide maximum pinout support for a PMC or XMC

Because of the varying requirements of system design, VPX mezzanine carriers are available in different flavors (switched or direct connect) depending on the requirements of the host system and whether the system is OpenVPX compliant. For more information on the cards discussed in this white paper, please refer to the respective product pages.

Contact Information

To find your appropriate sales representative:
Website: www./sales
Email: defensesales@curtisswright.com

Technical Support

For technical support:
Website: www./support
Email: support@curtisswright.com

References:

1. VITA 46 VPX: Base Specification
2. VITA 65 OpenVPX Architectural Framework for VPX
3. VITA 46.9 VPX: PMC/XMC Rear I/O Fabric Signal Mapping on 3U and 6U VPX Modules
4. VITA 42 XMC: Switched Mezzanine Card Base Specification
5. VITA 20 Conduction Cooled PMC

Copyright 2012, Curtiss-Wright Controls Defense Solutions. All Rights Reserved. MKT-PCIe Carriers and OpenVPX-062912v1

The information in this document is subject to change without notice and should not be construed as a commitment by Curtiss-Wright Controls Defense Solutions. While reasonable precautions have been taken, Curtiss-Wright assumes no responsibility for any errors that may appear in this document. All products shown or mentioned are trademarks or registered trademarks of their respective owners.

*Other names and brands may be claimed as the property of others.