Performance test of Voyage on Alix Board




Author:
Final revision:
Date:

Table of Contents

Summary
1. Performance test
2. Disk Test
   2.1 CPU test
   2.2 File IO test
   2.3 Disk analysis
3. Network performance test
   3.1 TCP_STREAM
   3.2 UDP_STREAM
   3.3 TCP_CRR
   3.4 Network performance analysis
4. Web Server Performance
   4.1 Motivation and Goals
   4.2 Benchmark Tool
   4.3 Setup Test Environment
   4.4 Test Procedures
   4.5 Web server performance analysis
Reference

Summary

This document describes the capacity of the gateway to read and write data on disk and to send and receive data over the network. The tests therefore fall into two categories: a disk test and a network performance test. For the disk test, the performance is also compared with the results measured on an Ubuntu laptop.

1. Performance test

Performance testing plays an important role in the test phase of a system, because later test results depend heavily on the system's performance. With clear knowledge of the basic capacity of a system, we can evaluate results better and adapt the parameters used in order to obtain higher throughput, lower power consumption, and so on.

2. Disk Test

The disk test is divided into two steps [1]:
a) CPU test
b) File IO test
The CPU test measures the computing power of the CPU by calculating prime numbers up to a maximum value, while the File IO test measures the speed of writing and reading data on disk. The disk speed limits how fast we can save data to, or read data from, the database.

2.1 CPU test

Command:
sysbench --test=cpu --cpu-max-prime=20000 run

                       CPU frequency (Hz)   RAM (bits)   Total execution time (s)
Voyage/Alix board      498M                 8G           592
Ubuntu/Guojun laptop   2.1G                 20G          35.8

As the table shows, the CPU frequency of the Alix board is about four times lower than that of Guojun's laptop, but its computation is even slower than the frequency ratio suggests: about 16 times.

2.2 File IO test

Commands:
$ sysbench --num-threads=16 --test=fileio --file-total-size=3G --file-test-mode=rndrw prepare
$ sysbench --num-threads=16 --test=fileio --file-total-size=3G --file-test-mode=rndrw run
$ sysbench --num-threads=16 --test=fileio --file-total-size=3G --file-test-mode=rndrw cleanup

          sequential write    sequential read     random read         random write
Threads   Alix    Ubuntu      Alix    Ubuntu      Alix    Ubuntu      Alix          Ubuntu
1         6.82    38.1        26.96   50.13       12.34   12.53       130 Kb/s      12.47
2         6.21    33.8        26.37   50.69       12.38   29.74       133.5 Kb/s    12.18
4         5.4     37.46       24.98   58          12.53   43.06       133 Kb/s      11.94
8         5.47    33.08       26.5    49.39       12.69   50.62       134.02 Kb/s   12.91
16        5.56    38.41       26.3    44.84       12.58   57.88       134.5 Kb/s    12.64

Note: all values are in Mb/s unless another unit is given.

2.3 Disk analysis

Disk access in our system consists mainly of sequential writes and reads, because of the large amount of data collected from the base station. The data above show that the write and read speeds reach their maximum when only one thread is running: 6.82 Mb/s for sequential write and 26.96 Mb/s for sequential read. The low CPU frequency and file IO speed also show that the Alix board is not suitable for running large software tools or multiple applications at the same time.

3. Network performance test

As all the sensor data is sent over the DTN network, it is necessary to measure the speed of network communication [2]. This test is divided into three cases:
a) TCP_STREAM
b) UDP_STREAM
c) TCP_CRR
TCP_STREAM exercises a single TCP connection carrying a large amount of data, while UDP_STREAM does the same for UDP, which does not require a connection to be established between client and server. TCP_CRR (connect/request/response) measures request-response transactions where a new TCP connection is opened for each transaction.

3.1 TCP_STREAM

Basic command:
netperf -H 192.168.23.1 -l 60 -- -m 16k
(reference: http://www.ibm.com/developerworks/cn/linux/l-netperf/)

      Recv socket    Send socket    Send message    Elapsed     Throughput
      size (bytes)   size (bytes)   size (bytes)    time (s)    (10^6 bits/s)
1     87380          16K            50M             60.00       94.01
2     87380          16K            10M             60.00       94.01
3     87380          16K            1M              60.00       94
4     87380          16K            32K             60.00       93.41
5     87380          16K            16K             60.00       93.17
6     87380          16K            8K              60.00       93.17
7     87380          16K            4K              60.00       93.13
8     87380          16K            2K              60.00       92.77
9     87380          16K            1K              60.00       92.59
10    87380          16K            512             60.00       92.66

Figure 1: Throughput for TCP_STREAM

3.2 UDP_STREAM

Basic command:
netperf -t UDP_STREAM -H 192.168.23.1 -l 60 -- -m 16k (-s 8k,8k)
(reference: http://www.ibm.com/developerworks/cn/linux/l-netperf/)

      Socket size    Send message    Elapsed     Messages    Errors    Throughput
      (bytes)        size (bytes)    time (s)    okay                  (10^6 bits/s)
1     112640         50M             60.00       (message too long)
2     112640         10M             60.00       (message too long)
3     112640         1M              60.00       (message too long)
4     112640         32K             60.00       22504       0         96.01
5     112640         16K             60.00       44996       0         95.99
6     112640         8K              60.00       89623       0         95.60
7     112640         4K              60.00       179072      0         95.50
8     112640         2K              60.00       352380      0         93.97

The message size must be smaller than the send socket size; if it is not, the send fails with a "message too long" error, as in the first three rows.
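The throughput column in the tables above follows directly from the other columns: netperf reports the bytes moved over the elapsed time, expressed in 10^6 bits per second. A minimal sketch of that conversion (the figures in the example call are made up for illustration, not taken from the tables):

```python
def throughput_mbps(messages_ok: int, message_bytes: int, elapsed_s: float) -> float:
    """Netperf-style throughput: bytes moved over elapsed time, in 10^6 bits/s."""
    return messages_ok * message_bytes * 8 / elapsed_s / 1e6

# Hypothetical example: 750000 datagrams of 1000 bytes sent in 60 seconds.
print(throughput_mbps(750000, 1000, 60.0))  # -> 100.0
```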

3.3 TCP_CRR

Command:
netperf -t TCP_CRR -H 192.168.23.1 -- -r 1024,32

The transaction rate decreases as the total size of the request and response increases.

3.4 Network performance analysis

For transferring large amounts of data inside the DTN network, TCP_STREAM is the case to focus on, because it models a single TCP connection carrying a large amount of data. The data above show that network throughput increases as the message size per transmission is enlarged. However, the growth becomes smaller and smaller as the message size grows, and the throughput finally converges to a constant value of 94.01 Mb/s.

4. Web Server Performance

4.1 Motivation and Goals

Since we use the ALIX board as a web server, it is necessary to evaluate its web server performance. Through the measurement we want to find out the request and reply throughput the web server can sustain.

4.2 Benchmark Tool

Httperf
Httperf [3] is a tool for measuring web server performance. It provides a flexible facility for generating various HTTP workloads and for measuring server performance. The focus of httperf is not on implementing one particular benchmark but on providing a robust, high-performance tool that facilitates the construction of both micro- and macro-level benchmarks. The three distinguishing characteristics of httperf are its robustness, which includes the ability to generate and sustain server overload, its support for the HTTP/1.1 and SSL protocols, and its extensibility to new workload generators and performance measurements.

Autobench
Autobench [4] is a simple Perl script for automating the process of benchmarking a web server (or for running a comparative test of two different web servers). The script is a wrapper around httperf. Autobench runs httperf a number of times against each host, increasing the number of requested connections per second on each iteration, and extracts the significant data from the httperf output, delivering a CSV or TSV file that can be imported directly into a spreadsheet for analysis/graphing.

4.3 Setup Test Environment

Install httperf and autobench on an Ubuntu machine with the scripts below.

Install httperf:
cd /usr/local/
wget http://httperf.googlecode.com/files/httperf-0.9.0.tar.gz
tar xvzf httperf-0.9.0.tar.gz
cd httperf-0.9.0
./configure
make
make install
ln -s /usr/local/bin/httperf /usr/bin/httperf

Install autobench:
cd /usr/local/src/
wget http://www.xenoclast.org/autobench/downloads/autobench-2.1.2.tar.gz
tar zxvf autobench-2.1.2.tar.gz
cd autobench-2.1.2
make
make install

4.4 Test Procedures

Test command:
autobench --single_host --host1 192.168.4.1 --uri1 /index.php --quiet \
    --low_rate 50 --high_rate 100 --rate_step 10 \
    --num_conn 7500 --timeout 5 --file results.tsv

This command makes httperf exercise the web server on the host 192.168.4.1, running at port 80. The page retrieved is "/index.php" and, in this simple test, the same page is retrieved repeatedly. The rate at which requests are issued starts at 50 requests per second and rises to 100 requests per second, increasing by 10 requests per second on each iteration. Each iteration initiates a total of 7500 TCP connections, and on each connection one HTTP call is performed (a call consists of sending a request and receiving a reply). The timeout option sets the number of seconds the client is willing to wait to hear back from the server; if the timeout expires, the tool counts the corresponding call as failed. Note that with a total of 7500 connections and a rate of 50 per second, the test duration will be approximately 150 seconds, independent of the load the server can actually sustain.

Results analysis (the columns of the autobench output):

dem_req_rate: the demanded request rate, stepping from 50 to 100 in increments of 10 requests per second per round.

req_rate: the rate at which HTTP requests were actually issued, and the period that this rate corresponds to. In the picture above, the request rate was 100.0 requests per second, which corresponds to 10.0 milliseconds per request. As long as no persistent connections are employed, the results in this section are very similar or identical to the results in the connection section.
However, when persistent connections are used, several calls can be performed on a single connection, in which case the results would differ.

con_rate: the rate at which new connections were initiated, here 100.0 connections per second, corresponding to a period of 10.0 milliseconds per connection. The last number on this line shows that at most 14 connections were open at any given time.

min_rep_rate, avg_rep_rate, max_rep_rate, stddev_rep_rate: the minimum, average, maximum and standard deviation of the reply rate.

resp_time: how long it took for the server to respond and how long it took to receive the reply.

net_io: the average network throughput in kilobytes per second (where a kilobyte is 1024 bytes) and in megabits per second (where a megabit is 10^6 bits).

errors: statistics on the errors encountered during the test, including client timeouts, socket timeouts, connections refused, connections reset and file descriptors unavailable.

4.5 Web server performance analysis

We plot throughput and response time for this test. When the request rate goes from 50 to 80 requests/second, the web server performs well. Above 80 requests/second the performance degrades: the throughput fluctuates around 75 requests/second, and the response time increases dramatically from about 200 ms to more than 1000 ms. The web serving capacity of the ALIX board is therefore around 70 requests/second, considerably lower than that of most desktop computers. The implication is that the ALIX board should not be used as a heavily loaded web server, and that applications should be designed to issue fewer HTTP requests per second.

Figure 2: Request throughput
Figure 3: Response time

Reference

[1] Sysbench: http://sysbench.sourceforge.net/docs/
[2] Netperf: http://www.ibm.com/developerworks/cn/linux/l-netperf/
[3] Httperf: http://www.hpl.hp.com/research/linux/httperf/
[4] Autobench: http://www.xenoclast.org/autobench/