
NARC: Network-Attached Reconfigurable Computing for High-performance, Network-based Applications
Chris Conger, Ian Troxel, Daniel Espinosa, Vikas Aggarwal, and Alan D. George
High-performance Computing and Simulation (HCS) Research Lab
Department of Electrical and Computer Engineering, University of Florida

Outline
- Introduction
- NARC Board Architecture, Protocols
- Case Study Applications
- Experimental Setup
- Results and Analysis
- Pitfalls and Lessons Learned
- Conclusions
- Future Work

Introduction
- Network-Attached Reconfigurable Computer (NARC) project
  - Inspiration: network-attached storage (NAS) devices
  - Core concept: investigate challenges and alternatives for enabling direct network access to, and control over, reconfigurable (RC) devices
  - Method: prototype a hardware interface and software infrastructure; demonstrate proof of concept for the benefits of network-attached RC resources
- Motivations for the NARC project include (but are not limited to) applications such as:
  - Network-accessible processing resources
    - Generic networked RC resource, a viable alternative to server and supercomputer solutions
    - Power and cost savings over server-based FPGA cards are key benefits: no server is needed to host the RC device
    - Infrastructure provided for robust operation and interfacing with users
    - Performance increase over existing RC solutions is not a primary goal of this approach
  - Network monitoring and packet analysis
    - Easy attachment; unobtrusive, fast traffic gathering and processing
    - Network intrusion and attack detection, performance monitoring, active traffic injection
    - Direct network connection of the FPGA can enable wire-speed processing of network traffic
  - Aircraft and advanced munitions systems
    - Standard Ethernet interface eases addition and integration of RC devices in aircraft and munitions systems
    - Low weight and power are also attractive characteristics of the NARC device for such applications

Envisioned Applications
- Aerospace and military applications
  - Modular, low-power design lends itself well to deployment in military craft and munitions
  - FPGAs providing high-performance radar, sonar, and other computational capabilities
- Scientific field operations
  - Quickly provide first-level estimations in the field for geologists, biologists, etc.
  - Cost-effective intelligent sensor networks
  - Use FPGAs in close conjunction with sensors to provide pre-processing functions before network transmission
- Field-deployable covert operations
  - Completely wireless device enabled through battery and WLAN
  - Passive network monitoring applications
  - Active network traffic injection
- Distributed computing
  - Cost-effective, RC-enabled clusters or cluster resources
  - Cluster NARC devices at a fraction of the cost, power, and cooling
- High-performance network technologies
  - Fast Ethernet may be replaced by any network technology: Gig-E, InfiniBand, RapidIO, proprietary communication protocols

NARC Board Architecture: Hardware
- ARM9 network control with FPGA processing power (see Figure 1)
- Prototype design consists of two boards, connected via cable:
  - Network interface board (ARM9 processor + peripherals)
  - Xilinx development board(s) (FPGA)
- Network interface peripherals include:
  - Layer-2 network connection (hardware PHY + MAC)
  - External memory: SDRAM and Flash
  - Serial port (debug communication link)
  - FPGA control and data lines
- NARC hardware specifications:
  - ARM-core microcontroller, 1.8V core, 3.3V peripherals
    - 32-bit RISC, 5-stage pipeline, in-order execution
    - 16KB data cache, 16KB instruction cache
    - Core clock speed 180MHz, peripheral clock 60MHz
    - On-chip Ethernet MAC layer with DMA
  - External memory, 3.3V
    - 32MB SDRAM, 32-bit data bus
    - 2MB Flash, 16-bit data bus
    - Port available for additional 16-bit SRAM devices
  - Ethernet transceiver, 3.3V
    - DM9161 PHY-layer transceiver
    - 100Mbps, full-duplex capable
    - RMII interface to MAC
[Figure 1: Block diagram of NARC device]

NARC Board Architecture: Software
- ARM processor runs Linux kernel 2.4.19
  - Provides TCP(UDP)/IP stack, resource management, threaded execution, and a Berkeley Sockets interface for applications
  - Configured and compiled with drivers specifically for our board
- Applications written in C, compiled using the GCC cross-compiler for ARM (see Figure 2)
- NARC API: low-level driver function library for basic services
  - Initialize and configure on-chip peripherals of the ARM-core processor
  - Configure the FPGA (SelectMAP protocol)
  - Transfer data to/from the FPGA, manipulate control lines
  - Monitor and initiate network traffic
- NARC protocol for job exchange (from a remote workstation)
  - NARC board application and client application must follow standard rules and procedures for responding to requests from a user
  - User appends a small header onto the data (if any) containing information about the request before sending it over the network (see Figure 3)
- Bootstrap software in on-board Flash automatically loads and executes on power-up
  - Configures clocks, memory controllers, I/O pins, etc.
  - Contacts a TFTP server running on the network, downloads Linux and ramdisk
  - Boots Linux, automatically executes the NARC board software contained in the ramdisk
- Optional serial interface through HyperTerminal for debugging/development
[Figure 2: Software development process (client.c, main.c, util.c, narc.h, Makefile built with arm-linux-gcc/gcc into the NARC board application and ramdisk, and with gcc into the client application)]
[Figure 3: Request header field definitions, NARC board and user workstation]
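To make the job-exchange protocol concrete, the sketch below shows how a client might frame a request in C. The field names and widths are assumptions; the slides only state that a small header carrying a request type (RTYPE) precedes any payload.

```c
/* Hypothetical sketch of NARC request framing. Field names/widths are
 * assumptions; the slides only specify a small header with an RTYPE
 * request-type field preceding any payload. */
#include <stdint.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <arpa/inet.h>

#define RTYPE_CONFIGURE 0x01   /* FPGA configuration request (RTYPE = 01) */

struct narc_request {
    uint8_t  rtype;            /* request type: configure, query, compute... */
    uint32_t payload_len;      /* bytes of payload following this header */
} __attribute__((packed));

/* Send a request (header plus optional payload) to the NARC board. */
int narc_send_request(int sock, uint8_t rtype,
                      const void *payload, uint32_t len)
{
    struct narc_request hdr = { rtype, htonl(len) };
    if (send(sock, &hdr, sizeof hdr, 0) != sizeof hdr) return -1;
    if (len && send(sock, payload, len, 0) != (ssize_t)len) return -1;
    return 0;
}
```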

NARC Board Architecture: FPGA Interface
- Data communicated to/from the FPGA by means of unidirectional data paths
  - 8-bit input port, 8-bit output port, 8 control lines (see Figure 4)
  - Control lines manage data transfer and also drive configuration signals
  - Data transferred one byte at a time; full-duplex communication possible
- Control lines include the following signals:
  - Clock: software-generated signal to clock data on the data ports
  - Reset: reset signal for interface logic in the FPGA
  - Ready: signal indicating the device is ready to accept another byte of data
  - Valid: signal indicating the device has placed valid data on the port
  - SelectMAP: all signals necessary to drive SelectMAP configuration (PROG, INIT, CS, WRITE, DONE)
- FPGA configuration through the SelectMAP protocol
  - Fastest configuration option for Xilinx FPGAs; protocol emulated using GPIO pins of the ARM
  - NARC board enables remote configuration and management of the FPGA
  - User submits a configuration request (RTYPE = 01) along with the bitfile and a function descriptor
    - Function descriptor is an ASCII string: a formatted list of functions with associated RTYPE definitions
  - ARM halts and configures the FPGA, stores the descriptor in a dedicated RAM buffer for user queries
- All FPGA designs must restrict use of the SelectMAP pins after configuration
  - Some signals are shared between the SelectMAP port and the FPGA-ARM link
  - Once configured, SelectMAP pins must remain tri-stated and unused
[Figure 4: FPGA interface signal diagram (ARM-FPGA link: In[0:7], Out[0:7], clock, reset, f_ready/a_valid, f_valid/a_ready; SelectMAP port: PROG, INIT, CS, WRITE, DONE, D[0:7])]
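The ready/valid handshake over the byte-wide port can be sketched as follows. The GPIO helpers and pin names are hypothetical placeholders for the board's actual memory-mapped register accesses; only the handshake sequence itself follows the slides.

```c
/* Sketch of the software-driven ready/valid byte transfer to the FPGA.
 * gpio_set/gpio_clear/gpio_read/gpio_write_bus and the pin names are
 * placeholders for the board's memory-mapped GPIO accesses. */
#include <stdint.h>

extern void gpio_set(int pin);
extern void gpio_clear(int pin);
extern int  gpio_read(int pin);
extern void gpio_write_bus(const int pins[8], uint8_t value);

enum { PIN_CLOCK, PIN_A_VALID, PIN_F_READY /* , SelectMAP pins ... */ };
extern const int OUT_BUS[8];   /* Out[0:7], ARM -> FPGA data port */

/* Send one byte to the FPGA: wait for f_ready, drive the data bus and
 * a_valid, then toggle the software-generated clock to latch the byte. */
void fpga_send_byte(uint8_t byte)
{
    while (!gpio_read(PIN_F_READY))
        ;                          /* FPGA not ready for another byte */
    gpio_write_bus(OUT_BUS, byte); /* place data on Out[0:7] */
    gpio_set(PIN_A_VALID);         /* mark data valid */
    gpio_set(PIN_CLOCK);           /* rising edge clocks data into FPGA */
    gpio_clear(PIN_CLOCK);
    gpio_clear(PIN_A_VALID);
}
```

The software-generated clock is why the link tops out near 6 Mb/s in Table 1: every byte costs several GPIO writes from the ARM.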

Results and Analysis: Raw Performance
- FPGA interface I/O throughput (Table 1)
  - 1KB of data transferred over the link and timed
  - Measured using hardware methods: a logic analyzer captures the raw link data rate; divide data sent by the time from first clock to last clock (see Figure 9)
  - Performance lower than desired for the prototype
    - Handshake protocol may add unnecessary overhead
    - Widening the data paths and optimizing the software routine will significantly improve FPGA I/O performance
- Network throughput (Table 2)
  - Measured using the Linux network benchmark Iperf
  - NARC board located on an arbitrary switch within the network; application partner is a user workstation
  - Transfers as much data as possible in 10 seconds; throughput calculated as data sent divided by 10 seconds
  - Two experiments performed, with the NARC board serving as client in one run and server in the other
  - Both local and remote Iperf partners (remote location ~400 miles away, at Florida State University)
  - Network interface achieves reasonably good bandwidth efficiency
- External memory throughput (Table 3)
  - 4KB transferred to external SDRAM, both read and write
  - Measurements again taken using the logic analyzer
  - Memory throughput sufficient to provide wire-speed buffering of network traffic
    - On-chip Ethernet MAC has DMA to this SDRAM
    - Should help alleviate the I/O bottleneck between ARM and FPGA

Table 1: FPGA interface I/O performance
  (Mb/s)            Input   Output
  Logic analyzer    6.08    6.12

Table 2: Network throughput
  (Mb/s)            Local Network   Remote Network (WAN)
  NARC-Server       75.4            4.9
  Server-Server     78.9            5.3

Table 3: External SDRAM throughput
  (Mb/s)            Read    Write
  Logic analyzer    183.2   160

[Figure 9: Logic analyzer timing]
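For clarity, the logic-analyzer measurement computes (a worked example consistent with Table 1, not stated on the slides):

\[
\text{throughput} = \frac{\text{bits transferred}}{t_{\text{last clock}} - t_{\text{first clock}}},
\qquad
\frac{1024 \times 8\ \text{bits}}{6.08\ \text{Mb/s}} \approx 1.35\ \text{ms per 1KB block}.
\]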

Results and Analysis: Raw Performance (continued)
- Reconfiguration speed
  - Includes time to transfer the bitfile over the network, plus time to configure the device (transfer the bitfile from ARM to FPGA), plus time to receive an acknowledgement
  - Our design currently completes a user-initiated reconfiguration request with a 1.2MB bitfile in 2.35 sec
- Area/resource usage of a minimal wrapper for the Virtex-II Pro FPGA
  - Resource requirements for a minimal design providing the required link control and data transfer in an application wrapper are presented below
  - Design implemented on an older Virtex-II Pro FPGA
  - Numbers indicate requirements for the wrapper only; unused resources remain available for user applications
  - Extremely small footprint! Footprint will be even smaller on a larger FPGA

Device utilization summary:
--------------------------------------------------------
Selected Device: 2vp20ff1152-5
Number of Slices:            143 out of  9280    1%
Number of Slice Flip Flops:  120 out of 18560    0%
Number of 4 input LUTs:      238 out of 18560    1%
Number of bonded IOBs:        24 out of   564    4%
Number of BRAMs:               8 out of    88    9%
Number of GCLKs:               1 out of    16    6%
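As a rough consistency check (an estimate, not from the slides; it assumes the bitfile moves at the Table 1 and Table 2 rates), a 1.2MB bitfile is about 9.6 Mb, so:

\[
\underbrace{\frac{9.6\ \text{Mb}}{75.4\ \text{Mb/s}}}_{\approx\,0.13\ \text{s, network}}
\;+\;
\underbrace{\frac{9.6\ \text{Mb}}{6.12\ \text{Mb/s}}}_{\approx\,1.57\ \text{s, ARM}\to\text{FPGA}}
\;\approx\; 1.7\ \text{s},
\]

leaving the remainder of the measured 2.35 s for request handling and the acknowledgement round trip. The ARM-to-FPGA link clearly dominates reconfiguration time.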

Case Study Applications: Clustered RC Devices (N-Queens)
- HPC application demonstrating the NARC board's role as a generic compute resource
- Application characterized by minimal communication and heavy computation within the FPGA
- NARC version of N-Queens adapted from a previously implemented application for the PCI-based Celoxica RC1000 board housed in a conventional server
- N-Queens algorithm is part of the DoD high-performance computing benchmark suite and is representative of select military and intelligence processing algorithms
- Exercises the functionality of various developed mechanisms and protocols for job submission, data transfer, etc. on NARC
- Purpose of the algorithm is to determine how many possible arrangements of N queens there are on an NxN chess board such that no queen may attack another (see Figure 5)
  - User specifies a single parameter N; upon completion the algorithm returns the total number of possible solutions
- Results are presented from both NARC-based execution and RC1000-based execution for comparison
[Figure 5: Possible 8x8 solution (figure c/o Jeff Somers)]
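For reference, a compact software version of the solution count the FPGA kernel computes (a standard bitmask backtracking formulation; the slides do not describe the hardware implementation at this level):

```c
/* Reference software N-Queens counter (standard bitmask backtracking),
 * shown only to illustrate the computation the FPGA design performs. */
#include <stdio.h>
#include <stdint.h>

static uint64_t count(unsigned cols, unsigned ldiag, unsigned rdiag,
                      unsigned all)          /* all = (1u << N) - 1 */
{
    if (cols == all)
        return 1;                            /* every row holds one queen */
    uint64_t n = 0;
    unsigned avail = all & ~(cols | ldiag | rdiag);
    while (avail) {
        unsigned bit = avail & (0u - avail); /* lowest available column */
        avail ^= bit;
        n += count(cols | bit, ((ldiag | bit) << 1) & all,
                   (rdiag | bit) >> 1, all);
    }
    return n;
}

int main(void)
{
    for (unsigned N = 5; N <= 14; N++)       /* same range as Figure 10 */
        printf("N=%u: %llu solutions\n", N,
               (unsigned long long)count(0, 0, 0, (1u << N) - 1));
    return 0;
}
```

The FPGA parallelizes this search; the host side only ships N to the board and reads back the count, which is why communication is minimal.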

Case Study Applications: Network Processing (Bloom Filter)
- This application performs passive packet analysis using a classification algorithm known as a Bloom filter
- Application characterized by constant, bursty communication patterns
  - Most communication is Rx over the network and transmission to the FPGA
- Filter may be programmed or queried
  - NARC device copies all received network frames to memory; the ARM parses the TCP/IP header and sends it to the Bloom filter for classification
  - User can send programming requests, which include a header and a string to be programmed into the filter
  - User can also send result-collection requests, which cause a formatted results packet to be sent back to the user
  - Otherwise, the application runs continuously, querying each header against the current Bloom filter and recording match/header pair information
- Bloom filter works by applying multiple hash functions to a given bit string, each hash function producing an index into a separate bit vector (see Figure 6)
  - To program: hash the input string and set the resulting bit positions to 1
  - To query: hash the input string; if all resulting bit positions are 1, the string matches
- Implemented on a Virtex-II Pro FPGA
  - Uses a slightly larger, but ultimately more effective, application wrapper (see Figure 7)
  - Larger FPGA selected to demonstrate interoperability with any FPGA
[Figure 6: Bloom filter algorithmic architecture]
[Figure 7: Bloom filter implementation architecture]
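A minimal software sketch of the program/query behavior described above follows. The hash function, the number of hash functions, and the vector sizes are assumptions for illustration; the slides do not specify them, and the real implementation is hardware.

```c
/* Minimal Bloom-filter sketch: k hash functions, each indexing its own
 * bit vector, matching the program/query description on the slide.
 * Hash choice and sizes are illustrative assumptions. */
#include <stdint.h>
#include <stddef.h>

#define K_HASHES 4
#define VEC_BITS 4096                         /* bits per vector (assumed) */

static uint8_t vec[K_HASHES][VEC_BITS / 8];   /* one bit vector per hash */

/* FNV-1a variant, salted per hash function (illustrative only). */
static uint32_t hash(const uint8_t *s, size_t len, uint32_t salt)
{
    uint32_t h = 2166136261u ^ salt;
    for (size_t i = 0; i < len; i++) { h ^= s[i]; h *= 16777619u; }
    return h % VEC_BITS;
}

void bloom_program(const uint8_t *s, size_t len)
{
    for (uint32_t k = 0; k < K_HASHES; k++) {
        uint32_t b = hash(s, len, k);
        vec[k][b / 8] |= (uint8_t)(1u << (b % 8));  /* set bit positions to 1 */
    }
}

int bloom_query(const uint8_t *s, size_t len)       /* 1 = match */
{
    for (uint32_t k = 0; k < K_HASHES; k++) {
        uint32_t b = hash(s, len, k);
        if (!(vec[k][b / 8] & (1u << (b % 8))))
            return 0;                   /* any clear bit means no match */
    }
    return 1;                           /* all bits set (false positives possible) */
}
```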

Experimental Setup
- N-Queens: clustered RC devices
  - NARC device located on an arbitrary switch in the network
  - User interfaces through a client application on a workstation and requests the N-Queens procedure
  - Figure 8 illustrates the experimental environment
  - Client application records the time required to satisfy the request
  - Power supply measures current draw of the active NARC device
  - N-Queens also implemented on an RC-enabled server equipped with a Celoxica RC1000 board
    - Client-side function call to the NARC board replaced with a function call to the RC1000 board in a local workstation, with the same timing measurement
  - Comparison offered in terms of performance, power, and cost
- Bloom Filter: network processing
  - Same experimental setup as the N-Queens case study
  - Software on the ARM co-processor captures all Ethernet frames
  - Only packet headers (TCP/IP) are passed to the FPGA, continuously, as packets arrive over the network
  - By attaching the NARC device to a switch, only limited packets can be captured
    - Only broadcast packets and packets destined for the NARC device can be seen
    - A dual-port device could be inserted in-line with a network link to monitor all flow-through traffic
[Figure 8: Experimental environment (user workstation, Ethernet network, NARC devices, RC-enabled servers)]
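The ARM-side parsing step might look like the sketch below: extract the IPv4/TCP header from a captured Ethernet frame and forward it to the FPGA over the byte-wide link. Offsets follow standard Ethernet/IPv4 framing; the capture path and fpga_send_byte() (sketched earlier) are placeholders.

```c
/* Sketch of the ARM-side parsing step: pull the IPv4/TCP header out of a
 * captured Ethernet frame and hand it to the FPGA byte by byte.
 * fpga_send_byte() is the ready/valid routine sketched earlier. */
#include <stdint.h>
#include <stddef.h>

void fpga_send_byte(uint8_t byte);

/* Returns 0 if the frame's TCP/IP header was forwarded, -1 otherwise. */
int forward_tcpip_header(const uint8_t *frame, size_t len)
{
    if (len < 14 + 20) return -1;              /* runt frame */
    uint16_t ethertype = (uint16_t)(frame[12] << 8 | frame[13]);
    if (ethertype != 0x0800) return -1;        /* not IPv4 */

    const uint8_t *ip = frame + 14;            /* skip Ethernet header */
    size_t ihl = (size_t)(ip[0] & 0x0F) * 4;   /* IPv4 header length */
    if (ip[9] != 6) return -1;                 /* IP protocol 6 = TCP */
    if (len < 14 + ihl + 20) return -1;

    /* Forward the IPv4 header plus the 20-byte base TCP header. */
    for (size_t i = 0; i < ihl + 20; i++)
        fpga_send_byte(ip[i]);
    return 0;
}
```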

Results and Analysis: N-Queens Case Study
- First, consider an execution-time comparison between our NARC board and a PCI-based RC card (see Figures 10a and 10b)
  - Both FPGA designs clocked at 50MHz
  - Performance difference is minimal between devices
  - Being able to match the performance of a PCI-based card is a resounding success!
- Power consumption and cost of NARC devices are drastically lower than those of server + RC-card combinations
  - Multiple users may share a NARC device; PCI-based cards are somewhat fixed in an individual server
- Power consumption calculated using the following method:
  - Three regulated power supplies exist in the complete NARC device (network interface + FPGA board): 5V, 3.3V, 2.5V
  - Current draw from each supply was measured
  - Power consumption is calculated as the sum of the V x I products of all three supplies
[Figure 10: N-Queens execution time comparison between NARC board and PCI-based RC card (RC-1000) on a server; panels for small board sizes (N = 5-10) and large board sizes (N = 11-14), y-axis execution time in seconds]

Results and Analysis: N-Queens Case Study (continued)
- Figure 11 summarizes the performance ratio of N-Queens between the NARC and RC-1000 platforms
- Consider Table 4 for a summary of cost and power statistics
  - Unit price shown excludes the cost of the FPGA
    - FPGA cost is offset when compared against another FPGA-equipped device
    - Price shown includes PCB fabrication and component costs
  - Approximate power consumption is drastically less than a server + RC-card combination
    - Power consumption of a server varies depending on particular hardware; typical servers operate off of 200-400W power supplies
  - See Figure 12 for the approximate power consumption calculation

Table 4: Price and power figures for NARC device
  Cost per unit (prototype):    $175.00
  Approx. power consumption:    3.28 W

[Figure 11: NARC / RC-1000 performance ratio vs. algorithm parameter N (5-14)]

Figure 12: Power consumption calculation
  P = (5V)(I_5) + (3.3V)(I_3.3) + (2.5V)(I_2.5)
  I_5 ≈ 0.2A; I_3.3 ≈ 0.49A; I_2.5 ≈ 0.27A
  P = (5)(0.2) + (3.3)(0.49) + (2.5)(0.27) ≈ 3.28W

Results and Analysis: Bloom Filter Case Study
- Passive, continuous network traffic analysis
- Wrapper design slightly larger than the minimal wrapper used with N-Queens
  - Still a small footprint on chip; the majority of the FPGA remains available for the application
  - Maximum wrapper clock frequency of 183MHz should not limit the application clock, even in the same clock domain
- Packets received over the network link are parsed by the ARM, with the TCP/IP header saved in a buffer
  - Headers sent one at a time as query requests to the Bloom filter (FPGA); when a query finishes, another header is de-queued if available
  - User may query the NARC device at any time for a results update or to program a new pattern
- Figure 13 shows resource usage for the Virtex-II Pro FPGA
  - Maximum clock frequency of 113MHz, not affected by the wrapper constraint
  - Computation is significantly faster than FPGA-ARM link communication
    - FPGA-side buffer will not fill up; headers are processed before the next header is transmitted to the FPGA
    - ARM-side buffer may fill up under heavy traffic loads; the 32MB ARM-side RAM gives a large buffer

Figure 13: Device utilization statistics for Bloom Filter design
  Selected Device: 2vp20ff1152-5
  Number of Slices:            1174 out of  9280   13%
  Number of Slice Flip Flops:  1706 out of 18560    9%
  Number of 4 input LUTs:      2032 out of 18560   11%
  Number of bonded IOBs:         24 out of   564    4%
  Number of BRAMs:                9 out of    88   10%
  Number of GCLKs:                1 out of    16    6%

Pitfalls and Lessons Learned
- FPGA I/O throughput capacity remains a persistent problem
  - One motivation for designing custom hardware is to remove the typical PCI bottleneck and provide wire-speed network connectivity for the FPGA
  - The under-provisioned data path between FPGA and network interface restricts the performance benefits of our prototype design
  - Fortunately, this problem may be solved through a variety of approaches:
    - Wider data paths (16-bit, 32-bit) double or quadruple throughput, at the expense of higher pin count
    - Use of a higher-performance co-processor capable of faster I/O switching frequencies
    - An optimized data-transfer protocol
- Having a co-processor in addition to the FPGA to handle the network interface is vital to the success of our approach
  - Required to permit initial remote configuration of the FPGA, as well as additional reconfigurations upon user request
  - Offloading the network stack, basic request handling, and other maintenance-type tasks from the FPGA saves a significant number of valuable slices for user designs
  - Drastically eases interfacing with the user application on a networked workstation
  - Serves as an active co-processor for FPGA applications, e.g. parsing network packets as in the Bloom filter application

Conclusions
- A novel approach to providing FPGAs with standalone network connectivity has been prototyped and successfully demonstrated
  - Investigated issues critical to providing remote management of standalone NARC resources
  - Proposed and demonstrated solutions to discovered challenges
  - Performed a pair of case studies with two distinct, representative applications for a NARC device
- Network-attached RC devices offer potential benefits for a variety of applications
  - Impressive cost and power savings over server-based RC processing
  - Independent NARC devices may be shared by multiple users without being moved
  - Tightly coupled network interface enables the FPGA to be used directly in the path of network traffic for real-time analysis and monitoring
- Two issues that are proving to be a challenge to our approach:
  - Data latency in FPGA communication
  - Software infrastructure required to achieve a robust standalone RC unit
- While the prototype design achieves relatively good performance in some areas and limited performance in others, this is acceptable for a concept demonstration
  - Fairly complex board design; architecture and software enhancements are in development
  - As proof of the NARC concept, an important goal of the project was achieved in demonstrating an effective and efficient infrastructure for managing NARC devices

Future Work
- Expansion of network processing capabilities
  - Further development of the packet-filtering application
    - More specific and practical activity or behavior sought from network traffic
    - Analyze streaming packets at or near wire-speed rates
  - Expansion of the Ethernet link to a 2-port hub
    - Permits transparent insertion of the device into a network path
    - Provides easier access to all packets in a switched IP network
- Merging the FPGA with the ARM co-processor and network interface into one device
  - Ultimate vision for the NARC device
  - Will restrict the number of different FPGAs that may be supported, according to the chosen FPGA socket/footprint for the board
  - Increased difficulty in PCB design
- Expansion to Gig-E and other network technologies
  - Fast Ethernet was targeted for the prototyping effort and concept demonstration
  - A true high-performance device should support Gigabit Ethernet
  - Other potential technologies include (but are not limited to) InfiniBand and RapidIO
- Further development of the management infrastructure
  - Need for more robust control/decision-making middleware
  - Automatic device discovery, concurrent job execution, fault-tolerant operation