Network Traffic Monitoring and Analysis with GPUs




Wenji Wu, Phil DeMar (wenji@fnal.gov, demar@fnal.gov)
GPU Technology Conference 2013, March 18-21, 2013, San Jose, California

Background

Main uses for network traffic monitoring & analysis tools:
- Operations & management
- Capacity planning
- Performance troubleshooting

Levels of network traffic monitoring & analysis:
- Device counter level (SNMP data)
- Traffic flow level (flow data)
- Packet inspection level (the focus of this work):
  - Security analysis
  - Application performance analysis
  - Traffic characterization studies

Background (cont.)

Characteristics of packet-based network monitoring & analysis applications:
- Time constraints on packet processing
- Compute- and I/O-throughput-intensive
- High levels of data parallelism: each packet can be processed independently
- Extremely poor temporal locality for data: typically, data is processed once in sequence and rarely reused

The Problem

Packet-based traffic monitoring & analysis tools face performance & scalability challenges within high-performance networks:
- 40GE/100GE link technologies
- Servers are 10GE-connected by default
- 400GE backbone links & 40GE host connections loom on the horizon
- Millions of packets generated & transmitted per second

Monitoring & Analysis Tool Platforms (I)

Requirements on a computing platform for high-performance network monitoring & analysis applications:
- High compute power
- Ample memory bandwidth
- Capability of handling the data parallelism inherent in network data
- Easy programmability

Monitoring & Analysis Tool Platforms (II)

Three types of computing platforms: NPU/ASIC, CPU, and GPU.

Architecture comparison: the platforms are weighed along four criteria: high compute power, high memory bandwidth, easy programmability, and a data-parallel execution model; compute power and memory bandwidth vary across platforms.

Our Solution

Use GPU-based traffic monitoring & analysis tools. Highlights of our work:
- Demonstrated that GPUs can significantly accelerate network traffic monitoring & analysis: 11+ million pkts/s without drops (single Nvidia M2070)
- Designed/implemented a generic I/O architecture to move network traffic from the wire into the GPU domain
- Implemented a GPU-accelerated library for network traffic capturing, monitoring, and analysis: dozens of CUDA kernels, which can be combined in a variety of ways to perform monitoring and analysis tasks

Key Technical Issues

- The GPU's relatively small memory size (the Nvidia M2070 has 6 GB). Workarounds:
  - Mapping host memory into the GPU with the zero-copy technique (sketched below)
  - A partial packet capture approach
- The need to capture & move packets from the wire into the GPU domain without packet loss: a new packet I/O engine
- The need to design data structures that are efficient for both CPU and GPU
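The zero-copy workaround can be illustrated with a few CUDA runtime calls. The following is a minimal sketch, assuming a fixed 256 MB buffer; the names, sizes, and setup here are ours for illustration, not the authors' code:

    // Pin a large host packet buffer and map it into the GPU's address
    // space, so kernels can read packets over PCIe without an explicit
    // cudaMemcpy of each chunk.
    #include <cuda_runtime.h>

    int main(void) {
        cudaSetDeviceFlags(cudaDeviceMapHost);   // must precede context creation

        const size_t BUF_BYTES = 256 << 20;      // example: a 256 MB packet buffer
        unsigned char *h_buf, *d_buf;

        // Pinned, mapped allocation: h_buf is the CPU view, d_buf the GPU view.
        cudaHostAlloc((void **)&h_buf, BUF_BYTES, cudaHostAllocMapped);
        cudaHostGetDevicePointer((void **)&d_buf, h_buf, 0);

        // ... capture engine fills h_buf with packet chunks ...
        // ... analysis kernels launched with d_buf read the same memory ...

        cudaFreeHost(h_buf);
        return 0;
    }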

System Architecture

Four types of logical entities:
1. Traffic Capture
2. Preprocessing
3. Monitoring & Analysis (in the GPU domain)
4. Output Display

[Diagram: network packets arrive at the NICs and are captured into packet-buffer chunks in user space; captured data is preprocessed into packet buffers, handed to the monitoring & analysis kernels in the GPU domain, and the results flow to the output display.]

Packet I/O Engine

[Diagram: incoming packets flow from the NIC's receive descriptor ring into pre-allocated packet-buffer chunks; filled chunks pass from the OS kernel to user space for processing, and recycled chunks return to the free pool, where their descriptor segments are re-attached to the ring.]

Key techniques:
- Pre-allocated large packet buffers
- Packet-level batch processing
- Memory-mapping-based zero-copy

Key operations: open, capture, recycle, and close (see the interface sketch below).
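To make the four operations concrete, here is a hypothetical C-style interface for such an engine. Every name and signature below is our illustration of the design described above, not the authors' actual API:

    #include <stddef.h>

    // A pre-allocated chunk holding one batch of captured packets.
    struct pkt_chunk {
        unsigned char *base;     // start of the memory-mapped packet buffer
        unsigned int   n_pkts;   // number of packets captured into this chunk
        unsigned int  *offsets;  // per-packet offsets into base
    };

    // Open: attach to a NIC, pre-allocate n_chunks large packet-buffer
    // chunks, and memory-map them into user space for zero-copy access.
    int pkt_engine_open(const char *ifname, int n_chunks, size_t chunk_bytes);

    // Capture: block until the NIC has filled a chunk with a batch of
    // packets, then hand the whole chunk to the caller (batch processing).
    struct pkt_chunk *pkt_engine_capture(void);

    // Recycle: return a processed chunk to the free pool so its descriptor
    // segments can be re-attached to the receive descriptor ring.
    void pkt_engine_recycle(struct pkt_chunk *c);

    // Close: detach from the NIC and release all pre-allocated chunks.
    void pkt_engine_close(void);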

GPU-based Network Traffic Monitoring & Analysis Algorithms

A GPU-accelerated library for network traffic capturing, monitoring, and analysis applications:
- Dozens of CUDA kernels
- Can be combined in a variety of ways to perform the intended monitoring & analysis operations

Packet-Filtering Kernel

We use the Berkeley Packet Filter (BPF) as the packet filter. Advanced packet-filtering capability at wire speed is necessary so that we analyze only the packets of interest. The kernel is composed of a few basic GPU operations, such as sort, prefix-sum, and compact:

1. Filtering: each packet in raw_pkts[] (p1..p8) is tested against the filter, writing 1 (match) or 0 into filtering_buf[] (here 1 0 1 1 0 0 1 0).
2. Scan: an exclusive prefix sum over filtering_buf[] produces scan_buf[] (0 1 1 2 3 3 3 4), giving each matching packet its output position.
3. Compact: matching packets are scattered into filtered_pkts[]; in the example, p1, p3, p4, and p7 survive.

A CUDA sketch of the three stages follows.
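The three stages map naturally onto CUDA. Below is a minimal sketch using Thrust for the scan, with a hard-coded "protocol == TCP" test standing in for a compiled BPF program; the packet descriptor and all names are our assumptions:

    #include <thrust/device_vector.h>
    #include <thrust/scan.h>

    struct PktMeta { unsigned int offset; unsigned char proto; };  // toy descriptor

    // Stage 1 (filter): flag each packet that matches the predicate.
    __global__ void filter_kernel(const PktMeta *pkts, int *flags, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) flags[i] = (pkts[i].proto == 6);        // IPPROTO_TCP == 6
    }

    // Stage 3 (compact): scatter surviving packets to their scanned positions.
    __global__ void compact_kernel(const PktMeta *pkts, const int *flags,
                                   const int *pos, PktMeta *out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n && flags[i]) out[pos[i]] = pkts[i];
    }

    void filter_packets(thrust::device_vector<PktMeta> &raw,
                        thrust::device_vector<PktMeta> &filtered) {
        int n = raw.size();
        thrust::device_vector<int> flags(n), pos(n);
        int threads = 256, blocks = (n + threads - 1) / threads;

        filter_kernel<<<blocks, threads>>>(thrust::raw_pointer_cast(raw.data()),
                                           thrust::raw_pointer_cast(flags.data()), n);

        // Stage 2 (scan): exclusive prefix sum turns 0/1 flags into output slots.
        thrust::exclusive_scan(flags.begin(), flags.end(), pos.begin());

        int n_out = (int)pos.back() + (int)flags.back();   // total survivors
        filtered.resize(n_out);
        compact_kernel<<<blocks, threads>>>(thrust::raw_pointer_cast(raw.data()),
                                            thrust::raw_pointer_cast(flags.data()),
                                            thrust::raw_pointer_cast(pos.data()),
                                            thrust::raw_pointer_cast(filtered.data()), n);
        cudaDeviceSynchronize();
    }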

Traffic-Aggregation Kernel

Reads an array of n packets at pkts[] and aggregates traffic between identical src & dst IP address pairs. Exports a list of entries; each entry records a src & dst IP address pair, with associated traffic statistics such as packets and bytes sent.

1. Multikey-value sort: packets are sorted on the (src, dst) key pair so that packets of the same conversation become adjacent (e.g., the three (src1, dst1) packets group together).
2. Diff + inclusive scan: comparing each key with its predecessor marks conversation boundaries in diff_buf[] (1 0 0 1 1 1 0 0); an inclusive prefix sum yields inc_scan_buf[] (1 1 1 2 3 4 4 4), assigning each conversation its output index.
3. Output: per-conversation statistics are written into IP_Traffic[]; in the example, 8 packets collapse into 4 entries: (src1, dst1), (src1, dst2), (src1, dst3), (src2, dst4).

Used to build IP conversations. An equivalent Thrust formulation follows.
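In Thrust, the same sort/diff/scan pipeline can be expressed compactly as sort_by_key followed by reduce_by_key, which performs the boundary detection and per-segment accumulation internally. A sketch, assuming each (src, dst) pair is packed into a 64-bit key and bytes are the only statistic kept:

    #include <thrust/device_vector.h>
    #include <thrust/sort.h>
    #include <thrust/reduce.h>

    // Pack src and dst IPv4 addresses into one sortable 64-bit key
    // (used when building keys[] from the packet headers).
    __host__ __device__ inline unsigned long long
    make_key(unsigned int src, unsigned int dst) {
        return ((unsigned long long)src << 32) | dst;
    }

    // keys[i]/bytes[i] describe packet i; the outputs hold one entry per
    // conversation after the reduction.
    void aggregate_traffic(thrust::device_vector<unsigned long long> &keys,
                           thrust::device_vector<unsigned int>       &bytes,
                           thrust::device_vector<unsigned long long> &conv_keys,
                           thrust::device_vector<unsigned int>       &conv_bytes) {
        size_t n = keys.size();
        conv_keys.resize(n);                  // worst case: every packet unique
        conv_bytes.resize(n);

        // Multikey-value sort: packets of the same conversation become adjacent.
        thrust::sort_by_key(keys.begin(), keys.end(), bytes.begin());

        // Boundary detection + per-conversation byte sums in one call.
        auto ends = thrust::reduce_by_key(keys.begin(), keys.end(), bytes.begin(),
                                          conv_keys.begin(), conv_bytes.begin());
        conv_keys.resize(ends.first - conv_keys.begin());
        conv_bytes.resize(ends.second - conv_bytes.begin());
    }

Packing the two addresses into a single integer keeps the sort a single-key radix sort, which is typically faster on GPUs than a comparison-based multikey sort.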

Unique-IP-Addresses Kernel

Reads an array of n packets at pkts[] and outputs a list of the unique src or dst IP addresses seen in the packets.

1. for each i ∈ [0, n-1] in parallel do
       IPs[i] = src or dst addr of pkts[i];
   end for
2. perform sort on IPs[] to obtain sorted_ips[];
3. diff_buf[0] = 1;
   for each i ∈ [1, n-1] in parallel do
       if (sorted_ips[i] ≠ sorted_ips[i-1]) diff_buf[i] = 1;
       else diff_buf[i] = 0;
   end for
4. perform exclusive prefix sum on diff_buf[] to obtain scan_buf[];
5. for each i ∈ [0, n-1] in parallel do
       if (diff_buf[i] == 1) Output[scan_buf[i]] = sorted_ips[i];
   end for
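The five steps map almost one-to-one onto standard parallel primitives; in Thrust, steps 3-5 collapse into a single unique() call. A minimal sketch (the function name is ours):

    #include <thrust/device_vector.h>
    #include <thrust/sort.h>
    #include <thrust/unique.h>

    // ips[] holds one src (or dst) IPv4 address per captured packet (step 1).
    void unique_ips(const thrust::device_vector<unsigned int> &ips,
                    thrust::device_vector<unsigned int> &out) {
        out = ips;                                 // work on a copy
        thrust::sort(out.begin(), out.end());      // step 2: sort
        // Steps 3-5 (diff, prefix sum, scatter) are what unique() does internally.
        out.erase(thrust::unique(out.begin(), out.end()), out.end());
    }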

A Sample Use Case

Using our GPU-accelerated library, we developed a sample use case to monitor network status:
- Monitor networks at different levels, from an aggregate of the entire network down to a single node
- Monitor network traffic by protocol
- Monitor network traffic information per node: determine who is sending/receiving the most traffic, for both local and remote addresses
- Monitor IP conversations, characterizing them by volume or other traits

A Sample Use Case: Data Structures

Three key data structures are created on the GPU:
- protocol_stat[]: an array storing protocol statistics for network traffic, with each entry associated with a specific protocol
- ip_snd[] and ip_rcv[]: arrays storing traffic statistics for IP conversations in the send and receive directions, respectively
- ip_table: a hash table tracking the network traffic information of each IP address node

These data structures reference themselves and each other with relative offsets, such as array indexes, rather than raw pointers, so they remain valid in both CPU and GPU address spaces. A struct sketch follows.
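A sketch of what such index-linked structures might look like; every field name and layout below is our assumption for illustration:

    // One entry per protocol (e.g., indexed by IP protocol number).
    struct proto_stat { unsigned long long pkts, bytes; };

    // One entry per IP conversation, in either ip_snd[] or ip_rcv[].
    struct conv_stat {
        unsigned int       peer_ip;    // the other end of the conversation
        unsigned long long pkts, bytes;
        int                next;       // index of this node's next entry; -1 = end
    };

    // One hash-table slot per IP node, chained by array index rather than
    // pointer, so the table is equally valid in CPU and GPU address spaces.
    struct ip_entry {
        unsigned int       ip;
        unsigned long long pkts_snd, bytes_snd, pkts_rcv, bytes_rcv;
        int                snd_head;   // first entry in ip_snd[]; -1 = none
        int                rcv_head;   // first entry in ip_rcv[]; -1 = none
        int                next;       // hash-chain link; -1 = end of chain
    };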

A Sample Use Case: Algorithm

1. Call the packet-filtering kernel to select the packets of interest.
2. Call the unique-IP-addresses kernel to obtain the IP addresses.
3. Build the ip_table with a parallel hashing algorithm (sketched below).
4. Collect traffic statistics for each protocol and each IP node.
5. Call the traffic-aggregation kernel to build IP conversations.
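Step 3 requires many threads to insert into ip_table concurrently. A common GPU pattern for this is open addressing with atomicCAS; the following minimal sketch (not necessarily the authors' hashing algorithm) stores only the key for brevity:

    #include <cuda_runtime.h>

    #define TABLE_SIZE 1048576u            // power-of-two slot count (example)
    #define EMPTY      0u                  // 0.0.0.0 marks a free slot here

    __device__ unsigned int hash_ip(unsigned int ip) {
        ip ^= ip >> 16; ip *= 0x85ebca6bu; ip ^= ip >> 13;   // simple mixer
        return ip & (TABLE_SIZE - 1);
    }

    // Each thread inserts one packet's address; atomicCAS claims a slot
    // atomically so concurrent threads never overwrite each other.
    __global__ void build_ip_table(const unsigned int *ips, int n,
                                   unsigned int *table) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        unsigned int ip   = ips[i];
        unsigned int slot = hash_ip(ip);
        for (;;) {
            unsigned int prev = atomicCAS(&table[slot], EMPTY, ip);
            if (prev == EMPTY || prev == ip) break;   // claimed, or already present
            slot = (slot + 1) & (TABLE_SIZE - 1);     // linear probing
        }
    }

With EMPTY reserved as the sentinel, address 0.0.0.0 cannot be stored; a fuller implementation would use a separate presence flag per slot.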

Prototyped System

[Diagram: a two-node NUMA system; each node pairs a processor (P0/P1) and its memory with an IOH over QPI, and the PCI-E slots host the two 10G NICs and the M2070 GPU.]

- Our application is developed on Linux with the CUDA 4.2 programming environment.
- The packet I/O engine is implemented on Intel 82599-based 10GigE NICs.
- Hardware: a two-node NUMA system with two six-core 2.67 GHz Intel X5650 processors, two Intel 82599-based 10GigE NICs, and one Nvidia M2070 GPU.

Performance Evaluation: Packet I/O Engine

[Charts: packet capture rate and CPU usage at 1.6, 2.0, and 2.4 GHz CPU frequencies for PacketShader, Netmap, and our GPU-I/O engine.]

- Our packet I/O engine (GPU-I/O) and PacketShader achieve nearly 100% packet capture rates; Netmap suffers significant packet drops.
- Our packet I/O engine requires the least CPU usage.

Performance Evaluation: GPU-based Packet Filtering Algorithm

[Chart: execution time in milliseconds on four experiment data sets for standard-gpu-exp, mmap-gpu-exp, cpu-exp-1.6g, cpu-exp-2.0g, and cpu-exp-2.4g.]

Performance Evaluation: GPU-based Sample Use Case

[Chart: execution time in milliseconds for gpu-exp and cpu-exp at 1.6, 2.0, and 2.4 GHz, across the BPF filters IP, TCP, NET 131.225.107, and NET 129.15.30.]

Performance Evaluation: Overall System Performance

[Charts: packet ratio versus packet size (50-300 bytes) for one-NIC and two-NIC experiments, at 1.6, 2.0, and 2.4 GHz CPU frequencies and with the OnDemand setting.]

In the experiments:
- Generators transmitted packets at wire rate.
- Packet sizes were varied across the experiments.
- The prototype system ran in full operation with the sample use case.

Conclusion

- We demonstrated that GPUs can significantly accelerate network traffic monitoring & analysis in high-performance networks: 11+ million pkts/s without drops (single Nvidia M2070).
- We designed/implemented a generic I/O architecture to move network traffic from the wire into the GPU domain.
- We implemented a GPU-accelerated library for network traffic capturing, monitoring, and analysis: dozens of CUDA kernels, which can be combined in various ways to perform monitoring and analysis tasks.

Questions? Thank you!
Email: wenji@fnal.gov and demar@fnal.gov