FileBench's Multi-Client feature




Filebench now includes facilities to synchronize workload execution on a set of clients, allowing higher offered loads on the server. While primarily intended for network file system measurements, it could also be used with shared disk storage over Fibre Channel networks; how to do the latter is left as an exercise for the reader. The examples in this document are for NFS, though CIFS clients may also be used.

Overview

The typical setup consists of multiple client machines producing file system access requests which are handled by a server (NFS or CIFS) or multiple servers (pNFS). The client machines each run a local copy of go_filebench and the relevant workloads under the control of a single copy of filebench.pl running on one of them, on the server, or on a separate machine. The clients synchronize their workload-generation and results-reporting phases using TCP sockets and place their results files in a shared storage region accessible by the controlling filebench.pl script. That script also starts the go_filebench instances using the ssh facility. Figure 1 illustrates a three-client system where the filebench.pl program runs on a fourth (master) machine and the target for all machines is a shared file server.

Figure 1: Example three-client system networked to a file server

How it works

As mentioned, the filebench.pl script plays a key role in controlling the various client instances of go_filebench. The approach used is an extension of the existing system by which filebench.pl controls the execution of a single copy of go_filebench to run workloads specified in a filebench.prof file. In the single-client case, filebench.pl is supplied with a .prof workload profile, which contains a DEFAULT section, specifying parameters common to all runs, and one or more CONFIG sections, which specify the workloads to run and any special parameters they need. Filebench.pl reads in the file, saving the default information, then builds a shell script named thisrun.f for each CONFIG encountered and spawns a process to run it. Only one CONFIG is run at a time, of course, with control returning to filebench.pl after each run. Each script invokes an instance of go_filebench, then passes it command lines which are used to load the appropriate personality file, modify local variables, control the execution of the workload, and finally save the results. When finished, the script creates a summary output in HTML or XML, suitable for inspection by a browser.

A similar approach is used for multi-client operation; however, filebench.pl has been modified to create a custom thisrun.f file for each client, and to start an instance of go_filebench on each client using ssh. The controlling copy of filebench.pl also spawns a synchronization process which acts as a synchronization server for the clients. Figure 2 illustrates the ssh sessions (in red), each of which is run from a separate filebench.pl process, and the synchronization sockets (in blue). Each go_filebench instance creates a socket to the synchronization server and registers with it, then creates files and filesets on the file server as specified by its copy of thisrun.f, then signals the synchronization server that it has finished the creation phase. When all registered clients have signaled the synchronization server, it responds with a proceed message to each. The clients then run their workloads for the desired amount of time, shut down their workloads, and then signal the synchronization server that they are finished with the run phase. The synchronization server waits for all clients to finish, then sends them messages to continue to the results-dumping phase. Once a client has written its results to the shared results storage, it terminates, which terminates its ssh session. When all ssh sessions have terminated, the controlling filebench.pl program aggregates the individual client results into a summary file and continues on to the next CONFIG section of the .prof script.
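The barrier behavior described above (hold every client at a phase boundary until all registered clients have checked in, then release them together) can be sketched in a few lines of Python. This is purely illustrative: the real synchronization server lives inside filebench.pl and speaks its own message format over TCP sockets, and the class and method names here are invented for the example.

```python
import threading

class SyncServer:
    """Minimal sketch of the phase-barrier logic: each client registers,
    then issues phase-numbered sync requests; a request returns only once
    every registered client has checked in for that phase."""

    def __init__(self, expected_clients):
        self.expected = set(expected_clients)
        self.cond = threading.Condition()
        self.arrived = {}  # phase number -> set of client names seen so far

    def register(self, client):
        # The real server builds its client list from registration messages;
        # here we just check the name against the expected set.
        assert client in self.expected, "unknown client"

    def do_sync(self, client, phase):
        """Block until all registered clients have requested this phase."""
        with self.cond:
            self.arrived.setdefault(phase, set()).add(client)
            while self.arrived[phase] != self.expected:
                self.cond.wait()
            self.cond.notify_all()  # last arrival releases all waiters

# Demo: three simulated clients reach the phase-1 barrier together.
clients = ["client1", "client2", "client3"]
server = SyncServer(clients)
for c in clients:
    server.register(c)
threads = [threading.Thread(target=server.do_sync, args=(c, 1)) for c in clients]
for t in threads:
    t.start()
for t in threads:
    t.join()
print("all clients passed phase 1")
```

In Filebench the same barrier is hit twice per run: once between fileset creation and workload generation, and once between workload generation and results reporting.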
Figure 2: Synchronization sockets (blue) and remote process (red) relationships

Two commands, along with code to send synchronization messages over TCP sockets, have been added to go_filebench to make this work. The commands are inserted into the thisrun.f file and processed by go_filebench at the appropriate times. The first command is:

enable multi master=<synchronization master name>, client=<this client's name>

which is used to establish a socket with the synchronization server and to register the client with it. This is followed by multiple synchronization commands of the form:

domultisync value=<integer>

which instructs go_filebench to send a synchronization request message to the synchronization server and wait for a response from it. In the current scheme, the domultisync command is used twice: first with a value of 1 to synchronize the transition from fileset creation to workload generation, and second with a value of 2 to synchronize the transition from workload generation to results reporting.

The synchronization server keeps an internal list of registered clients and the synchronization phase each is in. It verifies that each synchronization request is for the correct phase (has the correct value assigned) and responds with a matching response message once all registered clients have checked in. This ensures that all the clients start their workload generation at approximately the same time, and that the results-reporting traffic is sent only after all clients have finished, preventing it from interfering with the workload generation of any late clients.

As illustrated in Figure 3, each client creates its files and filesets in a separate subdirectory on the shared file server. A preamble section in the config (.prof) file, the MULTICLIENT section, identifies the path to the shared server as it is viewed from each client using the targetpath attribute. In the example, each client has the shared file system mounted locally as . The other entry in the MULTICLIENT section, clients, tells filebench.pl the host names of all the clients involved. Filebench.pl will create a tree of subdirectories on the file server as part of the initialization process, which includes a subdirectory for each client.
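Putting the two commands together, the relevant portion of a generated thisrun.f might look roughly like the following. This is a sketch, not a verbatim generated script; the master and client names are the example hosts from this document, and the intervening workload commands depend on the personality being run:

```
enable multi master=Master, client=client1
(create files and filesets on the file server)
domultisync value=1
(run the workload for the configured time, then shut it down)
domultisync value=2
(dump statistics to the shared results directory)
```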
There will be two top-level directories, specified by the dir and stats attributes in the DEFAULT section, or as overridden by dir or stats in any CONFIG section. Thus, in the example in Appendix 1, client1 will store its files and filesets in the directory /export/fbench/tmp/client1 on the file server. However, as the targetpath attribute indicates, /export/fbench is mounted on each of the clients as , so client1 will access its files and filesets using the path /tmp/client1. Client2 and client3 access their files and filesets similarly. The instance of filebench.pl running on the machine named Master will generate a custom thisrun.f script for each client, storing them in the appropriate subdirectory of /export/fbench/stats_tmp, or, as the client views it, /stats_tmp. For example, client1 will run the script that it finds at /stats_tmp/client1/thisrun.f. When it finishes, it will put its results file, stats.<configname>.out (where <configname> is the name of the particular personality being run), in that same subdirectory. A log of its console output is also appended to runs.out, to aid in debugging. Once all the runs are finished, filebench.pl will scan through all the stats.<configname>.out files of all the clients and aggregate them into a results directory identical to what single-client filebench creates, and store it in <targetpath>/<stats>/.
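The per-client naming scheme (<targetpath><dir>/<clientname> for working files, and likewise for stats) is simple string concatenation, sketched below using the Appendix 1 values. The function name and the assumption of an empty targetpath (the document elides the actual mount point) are illustrative only:

```python
def build_client_paths(targetpath, dir_attr, stats_attr, clients):
    """Return, per client, the files and stats directories as the client
    sees them: <targetpath><dir>/<clientname> and <targetpath><stats>/<clientname>.
    (Hypothetical helper; filebench.pl does this internally.)"""
    return {
        client: {
            "files": f"{targetpath}{dir_attr}/{client}",
            "stats": f"{targetpath}{stats_attr}/{client}",
        }
        for client in clients
    }

# Using the Appendix 1 values: dir=/tmp, stats=/stats_tmp, and an empty
# targetpath standing in for the elided per-client mount point.
paths = build_client_paths("", "/tmp", "/stats_tmp",
                           ["client1", "client2", "client3"])
print(paths["client1"]["files"])   # /tmp/client1
print(paths["client1"]["stats"])   # /stats_tmp/client1
```

On the server these same directories appear under the exported path, e.g. /export/fbench/tmp/client1, because /export/fbench is what the clients have mounted.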

Figure 3: File system path names as viewed from the server and its clients (server: theserver:/export/fbench/stats_tmp/client1, /client2, /client3; clients: /tmp/client1, /client2, /client3)

Running Multi-Client workloads

Once all the networking issues are resolved, running is just like running a single-client FileBench workload. The user just has to invoke filebench with the name of the .prof script to parse, for example:

$ filebench multi_fileserver

However, a new block, the MULTICLIENT block, must be added to the .prof file; the file server's shared directory needs to be mounted on all clients; and, of course, all the clients, the server, and the machine running filebench must be able to communicate with each other over TCP/IP. Once that communication is established, the file server's exported directory must be mounted on all clients. The mount point does not have to have the same path on each client, but it simplifies the final setup step if it does.

Probably the most tedious part is setting things up so ssh can be used to log into each client without requiring a password. This is done by creating a public/private key pair on the controlling machine and copying the public key to the home directory of the filebench user on each client. See the ssh man page for details. At least this is only a one-time exercise, assuming the network configuration doesn't change.

Filebench needs to be installed on all client machines, preferably the same version. This includes go_filebench and all the .f workload files that will be used. The .prof file that drives the multi-client operation needs to be on whichever machine will be used to control the runs (i.e. the machine on which the user types filebench multi_fileserver). Individual go_filebench instances will be invoked on each client, and they will access their local workloads directories for the requested .f files. Of course, the workloads directories could refer to an NFS or CIFS directory on another file server.

The final step before running is to add a MULTICLIENT section to the front of the intended .prof file. This is a fairly simple section that indicates the path to the server from each client and the network-visible name of each client. Filebench.pl uses this information in preparing the custom thisrun.f scripts, in creating the subdirectories on the file server where working files, thisrun.f files, and results files will be stored, and finally in starting up go_filebench instances on all clients.

That's it. Just run your modified .prof as you would for a single-client filebench run. The go_filebench instances on each client will create and use files and filesets in directories on the file server named using a combination of the targetpath, dir, and client name: <targetpath><dir>/<clientname>. The results for individual clients will appear in the declared stats directory under each of their subdirectories, while the aggregated results and .html index will appear directly under <targetpath><stats>.

One additional point: there is no way for the fs_flush.pl script to affect the server directly, so the type of file system declared in the DEFAULT section with the filesystem attribute should be the one the client is using to access the server, normally nfs or cifs, as appropriate.

Appendix 1: Example configuration file

Here is the example .prof file which is referenced in this discussion and which uses the fileserver.f workload in a multi-client setting:

CDDL HEADER START

The contents of this file are subject to the terms of the Common Development and Distribution License (the "License"). You may not use this file except in compliance with the License.
You can obtain a copy of the license at usr/src/opensolaris.license or http://www.opensolaris.org/os/licensing. See the License for the specific language governing permissions and limitations under the License. When distributing Covered Code, include this CDDL HEADER in each file and include the License file at usr/src/opensolaris.license. If applicable, add the following below this CDDL HEADER, with the fields enclosed by brackets "[]" replaced with your own identifying information: Portions Copyright [yyyy] [name of copyright owner]

CDDL HEADER END

Copyright 2008 Sun Microsystems, Inc. All rights reserved. Use is subject to license terms.

Example multi-client fileserver workload. Three clients named "client1", "client2" and "client3" access one file server whose shared directory is mounted on each client under the pathname "". This will run the fileserver workload on each of the clients, using separate filesets for each client.

MULTICLIENT {
        targetpath = ;
        clients = client1, client2, client3;
}

DEFAULTS {
        runtime = 60;
        dir = /tmp;
        stats = /stats_tmp;
        filesystem = nfs;
        description = "fileserver nfs";
}

CONFIG fileserver {
        function = generic;
        personality = fileserver;
        nfiles = 1000;
        meandirwidth = 20;
        filesize = 16k;
        nthreads = 1;
        meaniosize = 2k;
}