Stingray Services Controller User's Guide


Stingray Services Controller User's Guide, Version 2.0, December 2014

2 2014 Riverbed Technology, Inc. All rights reserved. Riverbed, SteelApp, SteelCentral, SteelFusion, SteelHead, SteelScript, SteelStore, Steelhead, Cloud Steelhead, Virtual Steelhead, Granite, Interceptor, Stingray, Whitewater, WWOS, RiOS, Think Fast, AirPcap, BlockStream, FlyScript, SkipWare, TrafficScript, TurboCap, WinPcap, Mazu, OPNET, and Cascade are all trademarks or registered trademarks of Riverbed Technology, Inc. (Riverbed) in the United States and other countries. Riverbed and any Riverbed product or service name or logo used herein are trademarks of Riverbed. All other trademarks used herein belong to their respective owners. The trademarks and logos displayed herein cannot be used without the prior written consent of Riverbed or their respective owners. Akamai and the Akamai wave logo are registered trademarks of Akamai Technologies, Inc. SureRoute is a service mark of Akamai. Apple and Mac are registered trademarks of Apple, Incorporated in the United States and in other countries. Cisco is a registered trademark of Cisco Systems, Inc. and its affiliates in the United States and in other countries. EMC, Symmetrix, and SRDF are registered trademarks of EMC Corporation and its affiliates in the United States and in other countries. IBM, iseries, and AS/400 are registered trademarks of IBM Corporation and its affiliates in the United States and in other countries. Juniper Networks and Junos are registered trademarks of Juniper Networks, Incorporated in the United States and other countries. Linux is a trademark of Linus Torvalds in the United States and in other countries. Microsoft, Windows, Vista, Outlook, and Internet Explorer are trademarks or registered trademarks of Microsoft Corporation in the United States and in other countries. Oracle and JInitiator are trademarks or registered trademarks of Oracle Corporation in the United States and in other countries. UNIX is a registered trademark in the United States and in other countries, exclusively licensed through X/Open Company, Ltd. VMware, ESX, ESXi are trademarks or registered trademarks of VMware, Inc. in the United States and in other countries. This product includes Windows Azure Linux Agent developed by the Microsoft Corporation ( Copyright 2012 Microsoft Corporation. This product includes software developed by the University of California, Berkeley (and its contributors), EMC, and Comtech AHA Corporation. This product is derived from the RSA Data Security, Inc. MD5 Message-Digest Algorithm. The Virtual Steelhead Mobile Controller includes VMware Tools. Portions Copyright VMware, Inc. All Rights Reserved. NetApp Manageability Software Development Kit (NM SDK), including any third-party software available for review with such SDK which can be found at and are included in a NOTICES file included within the downloaded files. For a list of open source software (including libraries) used in the development of this software along with associated copyright and license agreements, see the Riverbed Support site at https//support.riverbed.com. This documentation is furnished AS IS and is subject to change without notice and should not be construed as a commitment by Riverbed. This documentation may not be copied, modified or distributed without the express authorization of Riverbed and may be used only in connection with Riverbed products and services. 
Use, duplication, reproduction, release, modification, disclosure or transfer of this documentation is restricted in accordance with the Federal Acquisition Regulations as applied to civilian agencies and the Defense Federal Acquisition Regulation Supplement as applied to military agencies. This documentation qualifies as commercial computer software documentation and any use by the government shall be governed solely by these terms. All other use is prohibited. Riverbed assumes no responsibility or liability for any errors or inaccuracies that may appear in this documentation. Riverbed Technology 680 Folsom Street San Francisco, CA Phone: Fax: Web: Part Number

Contents

Preface...1
About This Guide...1
Audience...1
Document Conventions...2
Documentation and Release Notes...2
Contacting Riverbed...3

Chapter 1 - Understanding the Services Controller...5
Overview of the Services Controller...5
Functionality that is Specific to the Services Controller or the Services Controller VA...6
Traffic Manager Instance Hosts...7
Verifying the Instance Host UTF-8 Locale...7
Nonmanaged Mode for Instances...8
Network Isolation for Multiple Traffic Manager Instances...8
Network Isolation Using LXC...9
Network Isolation Using Port Offsets...10
Usage Metering and Activity Metrics...11
Creating Metering Logs...13
Health and Performance Monitoring...14
Monitoring Settings...14
Retrieving Monitoring Data...15
Database-Only Updates...16
Using INSTALLROOT in This Guide...16

Chapter 2 - Managing Services Controller Licenses...17
Overview of Services Controller Licenses...17
Services Controller Software Licenses...18
Bandwidth Pack Licenses...19
Add-On Licenses...21
Installing an Add-On License...21
Licensing on Services Controller Clusters...21
Traffic Manager FLA Licenses...22
Generating a Self-Signed SSL Server Certificate...22
Installing FLA Licenses...23
Checking the Health of an FLA License...23

Chapter 3 - Installing and Configuring the Services Controller Software...27
Prerequisites...27
Required Linux Packages...28
Hardware Requirements...28
Software and License Requirements...28
Required Services Controller Files...29
Required Traffic Manager Files...29
Services Controller User Types...29
Required Installation Parameters...30
Configuring the MySQL Database for the Services Controller...32
Configuring the MySQL Database for Remote Availability...32
Installing and Configuring the Services Controller on Ubuntu...33
Installing the Services Controller on CentOS...34
Configuring CentOS Instance Hosts...34
Automating Configuration for the Services Controller...35
Installing the Services Controller Software License...35
Creating Licensing Reports...36
Starting and Stopping the Services Controller...36
Upgrading the Services Controller on Ubuntu...37
Upgrading the Services Controller on CentOS...38
Upgrading a Services Controller Cluster...39
Downgrading a Services Controller Cluster...39
Downgrading the Services Controller on Ubuntu...39
Downgrading the Services Controller on CentOS...40

Chapter 4 - Configuring Traffic Manager Instances...41
Overview of Instances...41
Services Controller High Availability Deployments...41
Traffic Manager Instance Clusters...42
Enabling Services Controller Communication with Instance Hosts...42
Adding Supporting Resources...43
Creating Version Resources...43
Creating License Resources...43
Creating Feature Pack Resources...44
Creating Instance Hosts...45
Deploying and Configuring Traffic Manager Instances...46
Preparing to Deploy a Managed Instance in a Container with Network Isolation...47
Preparing to Deploy a Managed Instance in a Container without Network Isolation...50
Preparing to Deploy a Managed Instance Outside of a Container...52
Configuring a Nonmanaged Instance...54
Deploying a Managed Instance...56

Chapter 5 - Configuring Load Balancing as a Service...59
Overview of LBaaS...59
Enterprise Licensing for LBaaS...61
LBaaS Service Life Cycle...61
Service Failover Mechanism...62
Prerequisites for Using LBaaS...63
Prerequisite Examples...64
Creating Feature Packs...64
Creating an FLA License Resource...64
Creating a Traffic Manager Version...65
Creating Instance Hosts...65
Creating and Starting an LBaaS Service...66
Configuration Options for an LBaaS Service...66
Creating an LBaaS Service...67
Starting and Stopping an LBaaS Service...68
Viewing Service Instances...69
Changing the LBaaS Service Cluster Size...71
Deleting an LBaaS Service...72
LBaaS REST API Reference...73
Authentication...73
API Root...73
template Resource...74
service Resource...81
task Resource...86

Chapter 6 - Configuring Elastic Load Balancing as a Service...89
Overview of ELBaaS...89
Enterprise Licensing for ELBaaS...91
ELBaaS Service Life Cycle...91
Service Failover Mechanism...92
Prerequisites for Using ELBaaS...93
Prerequisite Examples...94
Creating Feature Packs...94
Creating an FLA License Resource...94
Creating a Traffic Manager Version...95
Creating Instance Hosts...95
Creating and Starting an ELBaaS Service...97
Configuration Options for an ELBaaS Service...97
Creating an ELBaaS Service...98
Understanding Scaling Thresholds
Starting and Stopping an ELBaaS Service
Viewing Service Instances
Changing the Potential ELBaaS Service Cluster Size
Deleting an ELBaaS Service
ELBaaS REST API Reference
Authentication
API Root
template Resource
service Resource
task Resource

Chapter 7 - Using the REST API with the Services Controller
Introducing REST
Authentication
URI Root Parts
Inventory Resources
Resource Reference
action Resource
add_on_pack_license_key Resource
add_on_sku Resource
bandwidth_pack_license_key Resource
cluster Resource
controller_license Resource
controller_license_key Resource
feature_pack Resource
host Resource
instance Resource
license Resource
manager Resource
monitoring Resource
service Resource
sku Resource
user Resource
version Resource
Using the REST API to Check Status
Understanding REST Request Errors

Chapter 8 - Installing the Services Controller Virtual Appliance
Overview of the Services Controller VA
Prerequisites
Resource Requirements
Required Configuration Information
Obtaining the Services Controller VA Software
Installing, Configuring, and Administering the Services Controller VA
Creating the VM in vSphere
Running the Services Controller VA Setup Wizard
Changing the Password for the Admin User
Upgrading the Services Controller VA
Downgrading the Services Controller VA
Upgrading Instance Host VAs
Migrating Instance Host VAs

Chapter 9 - Configuring the Stingray Services Controller Virtual Appliance
Using the Administration UI to Manage the Services Controller
Connecting to the Administration UI
Using the Administration UI
Overview of Services Controller Tasks
Configuring Network Settings
Configuring General Network Settings
Configuring Base Interfaces
Configuring System Settings
Configuring Announcements
Configuring Settings
Configuring Resources for the Services Controller
Managing REST API User Credentials
Uploading the Traffic Manager Image
Creating a Traffic Manager Version Resource
Managing the Services Controller Licenses
Managing Bandwidth Pack Licenses
Managing Add-On Licenses
Managing Flexible Licenses
Managing Services Controller Certificates
Managing Feature Packs
Managing the Services Controller Service
Configuring Database Settings
Importing and Exporting the Local Database
Configuring Services Controller Mode Settings
Viewing and Modifying Services Controller Settings
Managing Service Controller Resources
Creating and Managing Instance Hosts
Creating and Managing Traffic Manager Instances
Creating and Managing Clusters
Creating and Managing LBaaS Services
Creating a Service with the LBaaS Wizard
Creating an LBaaS Service
Changing the Status of an LBaaS Service
Changing the Properties of an LBaaS Service
Starting and Stopping an LBaaS Service
Changing the Error State of an LBaaS Service Host
Deleting an LBaaS Service
Viewing Reports and Diagnostics
Instance Report
Bandwidth Allocation Report
CPU Utilization Report
Throughput Utilization Report
Viewing Logs and Generating System Dumps
Viewing Logs
Generating System Dumps
Generating Metering Logs
Getting Help
Using the CLI to Manage the Services Controller
Importing the SSL Certificate, Key, and Licenses
Importing an Instance Host OVA into the Services Controller
Enabling Passwordless SSH Communication
Creating a Feature Pack for Instances in the Services Controller
Provisioning an Instance Host
Creating a Traffic Manager Instance Without a Container
Configuring a Traffic Manager Instance with a Container
Working with LBaaS Services
ESXi vSphere Host Port Mapping
Exporting a Database
Generating Metering Logs

Chapter 10 - Configuring for High Availability
High Availability Configuration Prerequisites
Shared Access to the Database
High Availability Mode Settings
FLA-based Load Balancing and Mode Settings
API Prerequisites for an HA Licensing Environment
Redeeming Licensing Tokens
Suggested Mode Settings for Enterprise Setups
Suggested Mode Settings for CSP Setups
How Failures are Detected
Recovering from a Failure
Enabling GUI Access over HTTPS
Making the SSC Database Highly Available

Chapter 11 - Troubleshooting
Resolving Configuration Errors When Starting the Services Controller VA
Tracking the Progress of Actions
Maintaining Service During Services Controller Failure
Generating a Technical Support Report
Identifying Instances from a Support Entitlement Key
Performing Advanced Settings

Preface

Welcome to the Stingray Services Controller User's Guide. Read this preface for an overview of the information provided in this guide. This preface includes the following sections:

About This Guide on page 1
Documentation and Release Notes on page 2
Contacting Riverbed on page 3

About This Guide

The Stingray Services Controller User's Guide describes how to configure and manage the Riverbed Stingray Services Controller and the Riverbed Stingray Services Controller Virtual Appliance. This guide includes information relevant to the following products:

Riverbed Stingray Services Controller (Services Controller)
Riverbed Stingray Services Controller Virtual Appliance (Services Controller VA)
Riverbed Stingray Traffic Manager (Traffic Manager)

Audience

This guide is written for system administrators familiar with administering and managing large-scale hosting environments. This guide assumes that you are familiar with Riverbed Stingray Traffic Manager.

Document Conventions

This guide uses the following standard set of typographical conventions.

italics - Within text, new terms and emphasized words appear in italic typeface.
boldface - Within text, CLI commands, CLI parameters, and REST API properties appear in bold typeface.
Courier - Code examples appear in Courier font:
amnesiac > enable
amnesiac # configure terminal
< > - Values that you specify appear in angle brackets: interface <ip-address>
[ ] - Optional keywords or variables appear in brackets: ntp peer <ip-address> [version <number>]
{ } - Elements that are part of a required choice appear in braces: {<interface-name> | ascii <string> | hex <string>}
| - The pipe symbol represents a choice to select one keyword or variable to the left or right of the symbol. The keyword or variable can be either optional or required: {delete <filename> | upload <filename>}

Documentation and Release Notes

To obtain the most current version of all Riverbed documentation, go to the Riverbed Support site at https://support.riverbed.com.

If you need more information, see the Riverbed Knowledge Base for any known issues, how-to documents, system requirements, and common error messages. You can browse titles or search for keywords and strings. To access the Riverbed Knowledge Base, log in to the Riverbed Support site at https://support.riverbed.com.

Each software release includes release notes. The release notes identify new features in the software as well as known and fixed problems. To obtain the most current version of the release notes, go to the Software and Documentation section of the Riverbed Support site at https://support.riverbed.com. Examine the release notes before you begin the installation and configuration process.

Contacting Riverbed

This section describes how to contact departments within Riverbed.

Technical support - If you have problems installing, using, or replacing Riverbed products, contact Riverbed Support or your channel partner who provides support. To contact Riverbed Support, open a trouble ticket by calling RVBD-TAC in the United States and Canada, or the international support number outside the United States. You can also go to the Riverbed Support site at https://support.riverbed.com.

Professional services - Riverbed has a staff of professionals who can help you with installation, provisioning, network redesign, project management, custom designs, consolidation project design, and custom coded solutions. To contact Riverbed Professional Services, email [email protected] or go to the Riverbed website.

Documentation - The Riverbed Technical Publications team continually strives to improve the quality and usability of Riverbed documentation. Riverbed appreciates any suggestions you might have about its online documentation or printed materials. Send documentation comments to [email protected].


CHAPTER 1 - Understanding the Services Controller

This chapter provides an overview of the Services Controller and describes the product features. It includes the following sections:

Overview of the Services Controller on page 5
Traffic Manager Instance Hosts on page 7
Nonmanaged Mode for Instances on page 8
Network Isolation for Multiple Traffic Manager Instances on page 8
Usage Metering and Activity Metrics on page 11
Health and Performance Monitoring on page 14
Database-Only Updates on page 16
Using INSTALLROOT in This Guide on page 16

Overview of the Services Controller

The Services Controller enables you to manage multiple instances of the Traffic Manager through a Representational State Transfer (REST) API. Alternatively, you can use the Services Controller Virtual Appliance (Services Controller VA) to manage multiple instances of the Traffic Manager. For detailed information about the Services Controller VA, see Chapter 8, Installing the Services Controller Virtual Appliance.

The Services Controller stores information about deployed Traffic Manager instances, including the resources that it needs to manage, in an inventory database. The Services Controller supports the MySQL database running against the default InnoDB back end.

The Services Controller can perform a range of life-cycle actions on Traffic Manager instances. The Services Controller can:

deploy an instance of the Traffic Manager with specified parameters.
start and stop an instance of the Traffic Manager.
uninstall an instance of the Traffic Manager.
upgrade an instance of the Traffic Manager to a newer version.

These actions are triggered through the Services Controller REST API, by means of the REST API instance resource. You issue a REST API request for the Services Controller server to update the inventory database, which in turn queues an action to implement the requested operation before responding to the request. To avoid time-out issues, the response is returned before the action completes. After the action completes, the inventory database is updated. There is no progress callback from the Services Controller; you must poll the Services Controller to check the status of the action (a polling sketch appears at the end of this overview). For detailed information about REST API resources, see Chapter 7, Using the REST API with the Services Controller.

The Services Controller supports two licensing models:

Cloud Service Providers (CSP), with billing dependent on instance metering.
Enterprise licensing, which is prepaid and is used for managing Services Controller bandwidth allocations to internal customers.

For detailed information about Services Controller licenses, see Chapter 2, Managing Services Controller Licenses.

Riverbed recommends that you install the Services Controller in a high availability deployment (that is, with multiple Services Controllers) to ensure redundancy in case of a server failure. For detailed information about high availability deployments, see Chapter 10, Configuring for High Availability.

The following sections summarize Services Controller features and concepts that will help you install, configure, and deploy the system.

Functionality that is Specific to the Services Controller or the Services Controller VA

The following functionality is only present in the Services Controller, and not the Services Controller VA:

Instance Tagging. See Understanding the Tag Property on page 49.
FLA checker. See Checking the Health of an FLA License on page 23.
Metering log phone home. See Creating Metering Logs on page 13.
Monitoring data retrieval. See Retrieving Monitoring Data on page 15.

The following functionality is only present in the Services Controller VA, and not the Services Controller:

Can be used via a CLI or an Administration UI. See Using the Administration UI to Manage the Services Controller on page 179 and Using the CLI to Manage the Services Controller on page 238.
Instance host deployment. See Creating and Managing Instance Hosts on page 205.
Linux container management on instance hosts. See To create a new instance host resource on page 206.
Visualized reporting, including instance status, bandwidth allocation, and instance CPU and throughput utilization. See Viewing Reports and Diagnostics.
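As a sketch of the polling pattern described above, a client might repeatedly read the instance resource until its status settles. Everything location-specific here is a placeholder: the controller address, port, API root path, credentials, and the shape of the response are assumptions rather than values documented in this guide (Chapter 7 describes the real URI layout and resources).

# Hypothetical polling loop; adjust the URL, credentials, and parsing to your deployment.
SSC="https://ssc.example.com:8000/api"        # assumed controller address and API root
AUTH="admin:password"                          # assumed REST credentials
INSTANCE="stm1.example.com"                    # example instance name

while true; do
    # Read the instance record; assuming a JSON response, pull out the status field.
    status=$(curl -sk -u "$AUTH" "$SSC/instance/$INSTANCE" | grep -o '"status"[^,}]*')
    echo "current state: $status"
    case "$status" in
        *Active*|*Inactive*|*Deleted*|*Failed*) break ;;   # settled states; keep polling otherwise
    esac
    sleep 10
done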

Traffic Manager Instance Hosts

The Services Controller enables you to use one or more external host machines to run instances of the Traffic Manager. These instances are referred to as Traffic Manager instance hosts or instance hosts. Instance hosts are physical servers or virtual machines.

The Services Controller does not directly provision these instance hosts. You must set up each instance host manually and provide the settings required to access them through the Services Controller REST API.

The Services Controller uses passwordless SSH to communicate with all instance hosts. You must configure the appropriate SSH login credentials and provide these settings to the Services Controller through the REST API. For example, consider that the Services Controller is installed on a host called sschost and runs as a username of sscuser. If the Services Controller is expected to manage instances of the Traffic Manager on a separate instance host server called IHserver, you must enable passwordless SSH from sscuser@sschost to root@ihserver. For detailed information about setting up passwordless SSH, see To generate a new SSH key on page 42.

You use the Services Controller REST API to create logical host resources for each instance host you want to deploy. The REST API does not immediately verify that the Services Controller can access these hosts. This is for the following reasons:

It enables a user to prepopulate the inventory database, even though the instance hosts might not be currently available.
In a scenario involving multiple Services Controller installations, it ensures high availability. For example, verifying that one instance of the software has the correct passwordless SSH credentials available does not automatically mean that the other instances are in the same state.

You can make the Services Controller check instance host accessibility by using the status_check parameter when accessing a host resource. Riverbed recommends that you perform this check on all Services Controller installations for all available instance hosts.

To facilitate manageability, the Services Controller assumes that all Traffic Manager instances on one instance host are installed under a single root directory. This directory is referred to as the installation root directory, and a separate subdirectory is automatically created for each Traffic Manager instance when it is deployed.

For detailed information about configuring instances, see Chapter 4, Configuring Traffic Manager Instances.

Verifying the Instance Host UTF-8 Locale

You must make sure that instance hosts are set to the UTF-8 locale for correct behavior during deployment operations.

To verify the instance host UTF-8 locale

To check that the locale of an instance host is set correctly, run the locale command on that instance host. The output of this command should include the following line:

LANG=<language code>.utf-8

If the locale is set correctly, the LANG variable ends with the .utf-8 suffix (the language code might vary).
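For example, the passwordless SSH relationship and the locale check described above can be prepared from the Services Controller host as follows. This is a sketch only; the host names and key type are illustrative, and the guide's own procedure (To generate a new SSH key on page 42) remains the reference.

# Run as the user the Services Controller runs as (sscuser in the example above).
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa      # create a key pair with no passphrase
ssh-copy-id root@ihserver                     # authorize the public key for root on the instance host

# Confirm that no password prompt appears:
ssh root@ihserver true

# Verify the instance host locale; LANG should end in .utf-8 (or .UTF-8):
ssh root@ihserver locale | grep '^LANG='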

Nonmanaged Mode for Instances

You might want to deploy Traffic Manager instances outside the Services Controller in nonmanaged mode. When the Services Controller accesses a nonmanaged instance, it is only able to manage licenses and provide metering capabilities for billing; it does not support Services Controller life-cycle operations on nonmanaged instances.

Reasons for deploying instances outside the Services Controller include:

Your Traffic Manager instance hosts are running a currently unsupported operating system, so the Services Controller cannot carry out deployment and other tasks.
Your Traffic Manager instances are deployed as virtual appliances, which the Services Controller cannot manage.

You can register nonmanaged instances with the Services Controller with a database-only update (see Database-Only Updates on page 16) or a nonmanaged update. If you issue a database-only REST API PUT request (either when initially creating an instance record or when modifying one), you can set a number of properties that are otherwise managed directly by the Services Controller. These properties are:

rest_address
admin_username
admin_password
snmp_address
ui_address

Note: Setting any of these properties through a database-only REST API PUT request does not result in changes being passed to the Traffic Manager instance; only the Services Controller database is affected.

The Services Controller relies on the admin_username, admin_password, and rest_address properties to carry out REST proxy requests. The rest_address property must be unique and accurate to identify a Traffic Manager instance for licensing purposes. The Services Controller uses the snmp_address property to meter a Traffic Manager instance. It must be set correctly for metering to occur.
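As an illustration, a nonmanaged instance could be registered with a single database-only PUT that supplies the properties listed above. The controller address, API root, credentials, instance name, and port values are placeholders; the deploy=false parameter is the database-only mechanism described in Database-Only Updates on page 16.

# Hypothetical registration of a nonmanaged instance; all three addresses must be FQDNs.
SSC="https://ssc.example.com:8000/api"         # assumed controller address and API root
curl -sk -u admin:password -X PUT \
     -H "Content-Type: application/json" \
     -d '{
           "rest_address":   "stm1.example.com:9070",
           "admin_username": "admin",
           "admin_password": "secret",
           "snmp_address":   "stm1.example.com:161",
           "ui_address":     "stm1.example.com:9090"
         }' \
     "$SSC/instance/stm1.example.com?deploy=false"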

Network Isolation for Multiple Traffic Manager Instances

A Traffic Manager instance uses TCP and UDP ports for internal and external communication. Each of these ports has its own default value that a Traffic Manager uses in a typical deployment; for example, the Administration UI listens on its own well-known default port. However, multiple Traffic Manager instances running on the same instance host cannot share the same default port numbers.

The Traffic Manager instances can cohabit on an instance host using these methods:

Linux Containers - Run the Traffic Manager instances on the instance host inside Linux containers (LXC) to provide network isolation.
Port Offsets - Run the Traffic Manager instances directly on the instance host without using LXC, and specify a port offset. Port offset settings calculate a unique set of default port numbers that differs for each Traffic Manager instance.

Network Isolation Using LXC

You can use LXC to provide network isolation and a degree of resource isolation. The Services Controller expects prepared container configuration files in advance of instance deployment. You must put these files in the installation root directory of the instance host. Riverbed recommends that you plan your container deployment strategy before installing the Services Controller to ensure adequate system resource availability.

For more information about LXC, see the configuration and management instructions supplied with your Linux distribution.

Note: For CentOS, Riverbed recommends a specific LXC version.

To install LXC on an instance host for Ubuntu

As root user, at the system prompt, enter:

apt-get install lxc

Note: To install LXC for CentOS, you should consult the CentOS documentation.

Within the context of LXC, the Services Controller does not assume that you have created the containers. The Services Controller creates the requisite container, based upon the container configuration you set up, when it starts a Traffic Manager instance. The Services Controller destroys the container when it stops the instance.

These items must match for the Services Controller to correctly identify an instance for license validation:

The container_name property of the instance resource
The container configuration filename (with a .conf extension)
The lxc.utsname parameter in the container configuration file (if you set up your containers to use virtual networking)

For example, a Traffic Manager instance deployed with a container_name property set to stm1.example.com requires a container configuration file called stm1.example.com.conf, which is placed in the instance host installation root directory and contains this setting:

lxc.utsname=stm1.example.com

You must use a fully qualified domain name (FQDN) for the container name. Failure to set these values results in an unlicensed instance.

Traffic Manager instances running inside containers do not automatically raise a default network gateway, so your instances cannot contact hosts outside of the local network.
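A minimal container configuration file for the stm1.example.com example might be created on the instance host as shown below. Only the file name, the .conf extension, and the lxc.utsname value are requirements stated in this guide; the installation root path, the networking keys, and the CPU pinning line are illustrative and depend on your LXC setup (the default gateway itself is supplied separately through the container_configuration property, described next).

INSTALLROOT_DIR=/space/ssc_instances            # assumed installation root directory on the instance host
cat > "$INSTALLROOT_DIR/stm1.example.com.conf" <<'EOF'
# File name (minus .conf) and lxc.utsname must both equal the instance's container_name (an FQDN).
lxc.utsname = stm1.example.com
# Illustrative virtual-networking settings for LXC 1.x:
lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up
# Pin the instance to CPU core 0, as described later in this section:
lxc.cgroup.cpuset.cpus = 0
EOF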

The Services Controller uses the container_configuration property in the instance resource to set the default gateway for that instance. This property value is a string representation of a JavaScript Object Notation (JSON) structure. Special JSON characters (such as double quotation marks) must be escaped correctly. Use the following JSON format to set the container_configuration property in a REST API PUT request of the instance resource:

"container_configuration" : "{\"gateway\":\"<gatewayip>\"}"

For detailed information about configuring instances and their properties, see Chapter 4, Configuring Traffic Manager Instances.

In a typical installation, the Traffic Manager uses as many CPU cores as possible to achieve maximum performance. If you want to deploy a large number of Traffic Manager instances on one Services Controller instance host, you must associate each instance with specific CPU cores to ensure balanced resource allocation. In this way, each instance has a lower maximum performance, but you can run multiple instances on one instance host. Instance CPU usage is controlled by the container configuration file, which must contain the following setting:

lxc.cgroup.cpuset.cpus=0

If you are running Traffic Manager instances without using LXC, you must manually set CPU usage with the cpu_usage property of the instance resource. For detailed information about configuring instances and their properties, see Chapter 4, Configuring Traffic Manager Instances.

Network Isolation Using Port Offsets

If you choose not to use LXC for network isolation, you can use port offsets to create a unique set of management and control port numbers for each instance. For port offsets, you must create each instance with a port_offset defined in the config_options property of the instance resource. You must provide each instance on a particular instance host with a unique port_offset value.

Note: For ports that must be consistent across all Traffic Manager instances in a cluster, set the cluster_port_offset setting in the cluster resource. The port_offset option in the config_options for the instance must also be set. Any change to the config_options settings on a managed instance causes a restart of the instance. Nonmanaged instances are not affected.

Specifying the port_offset setting automatically creates a new predefined base port number from which to apply the offset. This number differs from the standard Traffic Manager default port number to account for the potentially large number of instances on one instance host machine. If you apply the offset to the standard default port (for example, the REST API uses port 9070), there is a significant potential for port clashes.

When using port offsets, consider the following rules:

If you do not define port_offset for an instance, the Traffic Manager uses default port 9070 for the REST API.
If you do define port_offset for an instance, the REST API for that instance moves to a different, predefined base port, and the offset is added to that base. Instances with port_offset values of 1 and 2, for example, listen for REST API requests on consecutive ports above the new base.
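As a sketch of the port-offset approach, the documented port_offset option might be supplied through config_options when an instance is created or modified. The controller address, API root, credentials, and the exact serialization of config_options shown here are assumptions.

SSC="https://ssc.example.com:8000/api"          # assumed controller address and API root
# Give the second instance on this host a unique offset (this restarts a managed instance).
curl -sk -u admin:password -X PUT \
     -H "Content-Type: application/json" \
     -d '{ "config_options": "port_offset=2" }' \
     "$SSC/instance/stm2.example.com"

# Read the instance back to discover the management ports actually in use:
curl -sk -u admin:password "$SSC/instance/stm2.example.com" \
    | grep -oE '"(rest|ui|snmp)_address"[^,}]*'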

A similar calculation is also made for other internal and external ports on which the Traffic Manager listens.

You can determine the ports in use by a particular instance by performing a REST API GET or PUT request for the instance resource. The REST API response includes several properties corresponding to the key externally facing services for that instance:

rest_address - The location of the configuration REST API for that instance. This property must be an FQDN.
ui_address - The location of the Administration UI for that instance, if enabled. This property must be an FQDN.
snmp_address - The location of the SNMP service for that instance, if enabled. This property must be an FQDN.

Each property contains a value in the form management_address:<port>, where <port> is the port number that has been set using these rules.

Note: The rest_address and ui_address ports are TCP, while the snmp_address port is UDP.

For detailed information about configuring instances and their properties, see Chapter 4, Configuring Traffic Manager Instances.

Usage Metering and Activity Metrics

The Services Controller automatically meters usage on a regular basis, and it optionally sends this information to Riverbed for billing purposes. By default, it records this information once per hour. If a Traffic Manager instance is active, the Services Controller polls it to obtain total throughput and peak activity metrics.

The Services Controller creates a metrics log file with one line of metrics data for each Traffic Manager instance. Each line of metrics data records the name of the instance, the time elapsed since the resource was created, and the polled metrics. If an instance is not active, only the elapsed time is recorded.

If you want to generate usage or billing information, typically you process all metering log files and aggregate the results. You should use caution when aggregating data results for billing since metering records include failed deployments.

Note: Generating log files has a cumulative impact on disk space.

The Services Controller records the most recent metrics information for each instance in the inventory database. You can obtain this data using the REST API. The REST API does not supply bulk metrics data. The set of metrics is recorded across one line of the log file.

The Services Controller writes the metering log file in comma-separated value (CSV) format. The Services Controller appends an extra line to each metering log file containing an MD5 hash of the previous lines. Ignore this line when aggregating data for billing.

The metering log file contains these fields:

Timestamp - The date and time, in UTC format, that the line was written.
Instance ID - The unique instance ID for the Traffic Manager instance.
Owner - Optionally, the owner of the Traffic Manager instance.
Management IP - The management IP address of the Traffic Manager instance.
Instance SKU - The SKU assigned to the Traffic Manager instance (at the time of writing to the log). The SKU might vary between readings, and variations are not recorded in the metrics log file. This property includes a hash of features applicable to the SKU. Ignore these features for billing purposes.
Instance Bandwidth - The bandwidth (in Mbps) allocated to the Traffic Manager instance.
Feature Pack - The feature pack assigned to the Traffic Manager instance (at the time of writing to the log).
Deploy Time - The length of time (in days, hours, and minutes) since the instance was deployed.
Throughput - The number of bytes sent by the Traffic Manager instance, as recorded in the SNMP counter. This number is cumulative and is reset whenever the Traffic Manager instance is restarted. It is not the throughput since the latest metering action. To generate usage or billing information based on throughput, you should set your aggregating script to detect a drop in throughput and designate this as a restart. This property is applicable to active Traffic Manager instances only. For Idle or Inactive instances, it contains a value of 0 (zero) in the log. For uncontactable instances, it contains a value of -1 in the log.
Peak Throughput - The highest number of bytes sent by the Traffic Manager instance in any second of the previous hour. This property is applicable to active Traffic Manager instances only. For Idle or Inactive instances, it contains a value of 0 (zero) in the log. For uncontactable instances, it contains a value of -1 in the log.
Peak Requests - The highest number of requests received by the Traffic Manager instance in any second of the previous hour. This property is applicable to active Traffic Manager instances only. For Idle or Inactive instances, it contains a value of 0 (zero) in the log. For uncontactable instances, it contains a value of -1 in the log.
Peak SSL Requests - The highest number of Secure Socket Layer (SSL) requests received by the Traffic Manager instance in any second of the previous hour. This property is applicable to active Traffic Manager instances only. For Idle or Inactive instances, it contains a value of 0 (zero) in the log. For uncontactable instances, it contains a value of -1 in the log.
Record Hash - An MD5 or similar hash of the record from the Services Controller license file for tamper detection. Ignore this for billing purposes.
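The restart caveat in the Throughput field matters when you aggregate metering logs. The sketch below sums per-instance throughput across hourly records, treating any drop in the cumulative counter as a restart and skipping the trailing hash line; it assumes the columns appear in the order listed above (Instance ID second, Throughput ninth), that no field contains embedded commas, and that the log files carry a .log extension in the default directory.

# Rough aggregation sketch, not a billing-grade tool.
awk -F',' '
    NF < 13 { next }                        # skip the trailing record-hash line and malformed rows
    {
        id  = $2                            # Instance ID (assumed column position)
        thr = $9 + 0                        # cumulative Throughput counter (assumed column position)
        if (thr < 0) next                   # -1 marks an uncontactable instance
        if (id in last) {
            if (thr >= last[id]) total[id] += thr - last[id]   # normal case: add the delta
            else                 total[id] += thr              # counter dropped: restart, count from zero
        }
        last[id] = thr
    }
    END { for (id in total) printf "%s %.0f bytes\n", id, total[id] }
' /var/log/ssc/metering/*.log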

If metrics are not collected for a period of time, peaks for the missing time are not recorded. If you reduce the metering interval, the peak values are still relative to the previous hour rather than the time since metrics were last collected.

Creating Metering Logs

By default, the Services Controller retains and archives metering log files in the /var/log/ssc/metering directory. You define the log output location when you first configure the Services Controller.

To allow the Services Controller to send metering logs to Riverbed, enable the phone home feature through the REST API. Perform a PUT request on the settings/phone_home resource, setting the phone_home_enabled property to true. This property is set to false by default.

If the phone home feature is disabled, you must manually extract the metering log files and send them to Riverbed. Employing phone home on your system ensures that the process is entirely automatic, secure, and without staffing overhead.

The Services Controller employs a cron job, created during installation, to run a script that prepares a metering data archive file in this format:

<ssc_hostname><controller_license_name>.zip

The script sends this file to a secure SFTP server at Riverbed. The cron job runs once a month, at a randomly selected date and time.

The Services Controller authenticates the destination SFTP server against a preconfigured, locally stored host key. You can change this key by placing a new value in:

~/.ssh/known_hosts

If the host key is not found, or does not match the SFTP server, the phone home process stops.

The Services Controller makes two attempts to send the log file. If the first attempt fails, the Services Controller waits a random period of up to 12 hours and then tries again. If the second attempt fails, the Services Controller sends an alert to the notification list. Equally, if the file is successfully sent, the Services Controller sends an alert to the notification list announcing the success.

Errors and warnings pertaining to the phone home mechanism are all logged to the file metering_phone_home.log. This file provides a source of debugging information for problem resolution.

Note: You can also extract metering logs using the Services Controller Virtual Appliance. See Generating System Dumps on page 235.

Creating Metering Records Manually

The Services Controller includes a command line utility, prepare_metering_records, to manually prepare metering records for processing. You can use this script to create the metering archive file ready for transmission to Riverbed.

To create metering records

At the system prompt, enter:

prepare_metering_records [--help] [--force]

--force - Runs the script without prompting for user input.
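Enabling the phone home feature described above is a single PUT on the settings/phone_home resource. The controller address, API root, and credentials below are placeholders, while the resource path and property name are as documented.

SSC="https://ssc.example.com:8000/api"          # assumed controller address and API root
curl -sk -u admin:password -X PUT \
     -H "Content-Type: application/json" \
     -d '{ "phone_home_enabled": true }' \
     "$SSC/settings/phone_home"

# To confirm the current setting:
curl -sk -u admin:password "$SSC/settings/phone_home"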

Creating Metering Records Using the Phone Home Script

You can run the metering_phone_home script to manually perform a phone home operation. This script is stored in the bin directory of the Services Controller installation directory. For example, in:

/opt/riverbed_ssc_1.4/bin

To run the metering phone home script

At the system prompt, enter:

metering_phone_home [-v/--verbose] [-n/--noretry]

-v/--verbose - Redirects logging to stdout instead of a file.
-n/--noretry - Prevents further attempts at sending.

Health and Performance Monitoring

Each Services Controller in your deployment monitors the health of all other peer Services Controllers and deployed Traffic Manager instances recorded in the inventory database. If a failure occurs, the Services Controller records a warning entry in the event log and sends an email notification to any system administrators declared in the database.

Note: You can configure the "From" address of alert emails. This address can be set in INSTALLROOT/conf/email_config.txt, in the common section, as from_address. The symbol "$fqdn" will be replaced by the fully qualified domain name of the SSC host. The other sections in this file should not normally be modified. For SSC installs on AWS, it is likely that you will need to change this setting to an address that is resolvable to the instance's public IP.

The Services Controller also monitors supported versions of deployed Traffic Manager instances for a number of key performance metrics. You can obtain these metrics through the REST API monitoring resource.

A series of inter-controller REST API requests is used to identify the status of each Services Controller and Traffic Manager instance in your deployment. You must make sure that each Services Controller has network access to all other Services Controllers and instance hosts. You must also enable the REST service on your running Traffic Manager instances and record the REST access details in the inventory database.

Note: This process is performed automatically for Traffic Manager instances that are deployed by the Services Controller. However, you must enable and record REST API credentials for nonmanaged instances manually to allow monitoring to take place for these instances.

Your Services Controllers in an Active state are monitored.

Monitoring Settings

By default, all active Services Controllers share the task of monitoring. You can alter this behavior by modifying the Services Controller's monitoring mode settings in the REST API manager resource. These settings do not normally need to be modified from their default values. For details, see Chapter 7, Using the REST API with the Services Controller.

You select whether each Services Controller individually monitors all other Services Controllers and Traffic Manager instances in the deployment, shares the responsibility of monitoring a proportion of Traffic Manager instances with other Services Controllers, or performs no monitoring actions at all.

You can view and modify various monitoring interval settings for the Services Controller in the REST API monitoring resource. These settings do not normally need to be modified from their default values. For details, see Chapter 7, Using the REST API with the Services Controller.

You set distinct interval values for monitoring Services Controllers and Traffic Manager instances for each of the following two categories:

Monitoring Interval - The period of time, in seconds, that must elapse between health checks. The default value is 60. A setting of 0 forces the Services Controller to use a predefined short interval suitable for deployments that require very frequent monitoring.
Failure Identification Interval - The period of time, in seconds, that must elapse between continuous health check failures before your Services Controller determines that a service failure has occurred. This setting helps to prevent transient network errors from being incorrectly identified as service outages. The default value is 180.

You can also set the following failure identification settings:

Overdue Monitoring Warning Interval - The period of time, in seconds, that must elapse before any pending monitoring actions are considered overdue. This might occur during periods of unusually heavy load. If defined, a breach of this interval causes the Services Controller to issue an alert. The default value is 300.
Warning Interval - The period of time, in seconds, that must elapse before subsequent alerts are sent. You can use this setting to avoid large numbers of emails being sent, one for each occasion a warning is triggered. A new email is sent only after this interval has passed. This email contains all monitoring events since the previous email was sent.

Retrieving Monitoring Data

You can retrieve stored monitoring state data from the Services Controller by using the REST API monitoring resource. This resource is read-only and supports only the REST API GET request method. For details, see Chapter 7, Using the REST API with the Services Controller.

You can access the following child elements through the REST API monitoring resource:

/monitoring/manager - This element contains monitoring state data for all of your Services Controllers, whether or not they have failed.
/monitoring/host - This element contains monitoring state data for all of your service hosts, whether or not they have failed.
/monitoring/instance - This element contains monitoring state data and key performance metrics for your Traffic Manager instances, whether or not they have failed.
/monitoring/failures - This element contains a pair of arrays for failed Services Controllers and Traffic Manager instances. You can use this element to retrieve a list of currently failed devices without needing to check the status of each one individually.
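For example, a quick health sweep can read the failures element instead of walking every device. Apart from the documented /monitoring child elements, the address, API root, and credentials here are placeholders.

SSC="https://ssc.example.com:8000/api"          # assumed controller address and API root
# The monitoring resource is read-only, so GET is the only method used.
curl -sk -u admin:password "$SSC/monitoring/failures"

# Per-device detail is available from the other child elements:
#   $SSC/monitoring/manager   $SSC/monitoring/host   $SSC/monitoring/instance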

Database-Only Updates

The Services Controller uses the inventory database to store and maintain information about the state of each Traffic Manager instance it is aware of. This information includes the current status of each instance: for example, Inactive or Active. However, the Services Controller does not actively monitor this state. If you start or stop a Traffic Manager instance outside of the Services Controller, it is unaware of this change of state.

You can use these techniques to resolve this monitoring issue:

Issue a GET REST API request for the Traffic Manager instance resource, including the URL parameter status_check=true. The Services Controller actively checks the state of the Traffic Manager instance and updates the stored status accordingly.
Issue a PUT REST API request to modify the Traffic Manager instance resource and set the status property to the known correct state, with the URL parameter deploy=false. The Services Controller updates the status of the Traffic Manager instance in the inventory database but does not attempt to start or stop the instance itself.

You can also use a database-only update if you need to update the recorded admin user password. This action is useful if the password has been set or changed on the Traffic Manager instance directly.
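The two techniques above map onto two simple requests. The status_check and deploy parameters and the status values are as documented in this section; the controller address, API root, credentials, and instance name are placeholders.

SSC="https://ssc.example.com:8000/api"          # assumed controller address and API root

# 1. Ask the Services Controller to actively check the instance and refresh the stored status:
curl -sk -u admin:password "$SSC/instance/stm1.example.com?status_check=true"

# 2. Correct the stored status without touching the instance itself (database-only update):
curl -sk -u admin:password -X PUT \
     -H "Content-Type: application/json" \
     -d '{ "status": "Active" }' \
     "$SSC/instance/stm1.example.com?deploy=false"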

Using INSTALLROOT in This Guide

This guide uses the term INSTALLROOT to refer to the location of the Services Controller software installation directory. It is not an environment variable and is used in this guide for consistency only.

Previous versions of the Services Controller used a deprecated environment variable, $SSCHOME, for this purpose. If this is still set in your environment, you must explicitly unset it prior to installing or upgrading the Services Controller.

The default installation location for the Services Controller software package is:

/opt/riverbed_ssc_<version>

CHAPTER 2 - Managing Services Controller Licenses

This chapter provides an overview and installation instructions for the Services Controller licenses. It includes the following sections:

Overview of Services Controller Licenses on page 17
Services Controller Software Licenses on page 18
Traffic Manager FLA Licenses on page 22

Overview of Services Controller Licenses

The Services Controller requires the following licenses, depending on your type of deployment and features:

Services Controller Software Licenses - The following Services Controller licenses differ in how they license Traffic Manager instances.

Cloud Services Provider (CSP) - A CSP license allows the deployment and licensing of any number of Traffic Manager instances, and places no limits on the features or bandwidth they can use. Using a CSP license, the Services Controller implements a metering scheme to obtain throughput and other metrics from each Traffic Manager instance on a regular basis (typically, hourly) and records the data in central log files. Service providers and hosting organizations use the metrics data to bill end users accordingly. Riverbed uses the same metrics data to charge the Services Controller customer.

Enterprise - An Enterprise license enables prepayment for features and bandwidth. It also manages bandwidth and feature allocation to internal customers within licensed limits. An Enterprise license does not provide a billing option and must be used in conjunction with Bandwidth Pack licenses and Add-On licenses to enable deployment and licensing.

Bandwidth Pack - A Bandwidth Pack license is a secondary type of license for Enterprise customers. Each Bandwidth Pack license associates a specific amount of bandwidth (typically, 5 Gbps) with a Traffic Manager SKU. Each Bandwidth Pack is tied to a specific FLA.

Add-On - An Add-On license is a secondary type of license that provides a mechanism for adding feature-specific licenses: for example, Federal Information Processing Standards (FIPS), Stingray Application Firewall (WAF), Stingray Aptimizer Web Accelerator, and Load Balancing as a Service (LBaaS). Each Add-On license is tied to a specific Enterprise License Key.

Traffic Manager Flexible Licensing Architecture (FLA) License - A Traffic Manager FLA license is intended for the Traffic Manager instances rather than the Services Controller itself. With the FLA license you do not have to obtain licenses for individual Traffic Manager instances. Instead, the Services Controller applies a site-specific license to each instance and dynamically sets the feature set (SKU) and bandwidth desired for each instance. The FLA license requires a self-signed (or equivalent) certificate to be generated prior to use.

To retrieve Stingray and Services Controller product licenses

1. License tokens are automatically emailed to you when you order your product. If you have not received your tokens, contact [email protected].
2. Redeem license tokens on the Riverbed Support site. To redeem tokens you must have a support site login and password. You can register for a new account at Riverbed Support.
3. Licenses are emailed to you as attachments.

Services Controller Software Licenses

This section describes the different licensing schemes for the Services Controller. For detailed information about installing Services Controller licenses, see Installing the Services Controller Software License on page 35.

The Services Controller supports the following licensing models:

Cloud Service Providers (CSP), with billing dependent on instance metering.
Enterprise licensing, for managing Services Controller bandwidth allocation to internal customers.

Before starting the first Services Controller in a cluster, you must place a valid license key file in the INSTALLROOT/licenses directory. New licenses in this location are automatically added to the database as they are discovered. The Services Controller license is automatically copied to the inventory database after communication is established. A newly installed Services Controller instance that is configured to use an existing inventory database containing valid license keys uses those keys to operate after it has started.

Riverbed recommends that you install all subsequent Services Controller licenses via the REST API's controller_license_key resource. For details, see controller_license_key Resource on page 133.

A Services Controller license can be free or it can be tied to an IP or MAC address:

Free - A free Services Controller license can be used on any host and is shared, so that a single Services Controller license is used for multiple Services Controllers operating as a cluster and sharing one database.

Tied - A tied Services Controller license is associated with a single host by either the IP or MAC address. A tied license can be installed from any Services Controller in the same cluster as the Services Controller to be licensed. As a result, this Services Controller may not match the IP address and MAC address defined in the license. If the license is installed by the intended Services Controller, the license is written to the database and then verified. Once verified, the database is updated. If the license is installed by another Services Controller in the cluster, the license is written to the database, but it cannot be verified until the intended Services Controller is active. Once active, the Services Controller then verifies the license and updates the database.

If a Bandwidth Pack or Add-On license is associated with a tied Services Controller license, then that Services Controller license must be active to verify the license. Once active, the license is available to all Services Controllers in the cluster.
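Both installation routes can be scripted along these lines; the file name, upload format, controller address, API root, and credentials are illustrative rather than documented values, so treat this purely as a sketch.

# Option 1: before the first Services Controller in the cluster starts,
# drop the license key file into the licenses directory.
cp ssc_license.txt "$INSTALLROOT/licenses/"

# Option 2 (recommended for subsequent licenses): add the key through the REST API's
# controller_license_key resource on a running controller.
SSC="https://ssc.example.com:8000/api"          # assumed controller address and API root
curl -sk -u admin:password -X PUT \
     -H "Content-Type: application/octet-stream" \
     --data-binary @ssc_license.txt \
     "$SSC/controller_license_key/ssc_license"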

Bandwidth Pack Licenses

A Bandwidth Pack is a secondary type of license that is tied to a specific Enterprise Services Controller license. Riverbed recommends that you add the Bandwidth Pack license to the shared database from a running SSC using the REST API's bandwidth_pack_license_key resource. For details, see bandwidth_pack_license_key Resource on page 128. However, you can also place the Bandwidth Pack license in the INSTALLROOT/licenses directory and restart the SSC. New licenses are automatically added to the database as they are discovered.

Each Bandwidth Pack enables a specific amount of bandwidth (typically, 5 Gbps) for a Traffic Manager SKU. The Bandwidth Pack license allows you to deploy and license Traffic Manager instances with an aggregate bandwidth allowance equal to that of the Bandwidth Pack. Each Bandwidth Pack is associated with one Services Controller license and cannot be used unless that Services Controller license has been loaded and found to be valid.

A Bandwidth Pack only allows the deployment and licensing of Traffic Manager instances with one SKU. If you want to deploy Traffic Manager instances with different SKUs, they require multiple Bandwidth Packs. Services Controller licenses and Bandwidth Packs are perpetual, or they can have start and end dates. Multiple Bandwidth Packs can license bandwidth for a SKU; their allowances are added to determine the total.

Bandwidth Packs can be upgraded from one SKU to another. Where an existing Bandwidth Pack license is upgraded, a new license is issued with the same serial number as the existing one, but licensing a different SKU. Only one of a set of Bandwidth Pack licenses with a shared serial number is used at any one time.

Where licensed capacity is exceeded for a given SKU, all licensing requests for instances using that SKU are rejected. This behavior is also true of instances using an Add-On license SKU with insufficient licensed bandwidth.

If you are using a Bandwidth Pack license, the Services Controller does not allow you to exceed the licensed bandwidth with your deployed Traffic Manager instances. This behavior includes failed instances (with a status of Failed to deploy), but not instances that have been deleted. Only instances that have been deleted are excluded; instances scheduled to be deleted still count toward deployed totals. Any instance whose status is not Deleted continues to consume licensing bandwidth. The same rules apply to the consumption of Add-On license SKU bandwidth by instances using Add-On SKUs.

Installing the Bandwidth Pack License

To install the Bandwidth Pack license

1. Place the license in an accessible location in your infrastructure. For details about obtaining your license keys, see To retrieve Stingray and Services Controller product licenses on page 18.
2. Install and configure the Services Controller. For details, see Chapter 3, Installing and Configuring the Services Controller Software.

3. Riverbed recommends that you add the Bandwidth Pack license to the shared database from a running SSC using the REST API's bandwidth_pack_license_key resource. For details, see bandwidth_pack_license_key Resource on page 128. Alternatively, copy a Services Controller license file containing the license key to the INSTALLROOT/licenses directory of an SSC and restart the SSC. New licenses are automatically added to the database as they are discovered.

Upgrading Bandwidth Pack Licenses (Enterprise Licensing Model Only)

When your Services Controller is using the Enterprise Licensing model, you can upgrade a Bandwidth Pack to support a different STM SKU. This supports the replacement of existing purchased licensing with the same quantity of a more feature-rich STM SKU. For example, a deployment of Traffic Manager instances is using the STM-300 SKU, and an upgrade to the STM-400 SKU is required.

For each existing Bandwidth Pack license key being upgraded, the administrator is provided with two new Bandwidth Pack license keys:

The first license contains the same serial number as the existing Bandwidth Pack license key. Only one of these licenses can contribute licensed bandwidth in your Services Controller deployment at any time.

The second Bandwidth Pack license key is a time-limited key which provides extra bandwidth used during the switchover. This provides a workaround for the Services Controller's protection against licensing compliance breaches.

Upgrading bandwidth pack licenses

1. Obtain the replacement (upgrade) license key and the supplementary temporary Bandwidth Pack license keys from Riverbed.

2. Install the replacement and supplementary Bandwidth Pack license keys on the Services Controller. It may be necessary to set the upgrade Bandwidth Pack license key(s) to an Active status after installation. Once complete, the controller_license_key resource's cluster_bandwidth property should show sufficient unused STM-400 bandwidth for the instances that are to be switched to this STM SKU.

3. Create a feature_pack resource using the STM-400 SKU if one does not already exist.

4. Set the feature pack of each affected Traffic Manager instance resource to the STM-400 feature_pack resource created in step 3, as shown in the sketch after this procedure.

5. Remove the supplementary Bandwidth Pack license keys. If the SSC does not allow removal of the supplementary license keys, it may indicate a licensing shortage. This situation may result in unlicensed Traffic Managers after these keys expire.
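The following sketch illustrates step 4, switching an instance to an STM-400 feature pack through the REST API. The stm_feature_pack property name is taken from the instance properties described in Chapter 4; the instance name, feature pack name, hostname, port, credentials, and instance resource path are illustrative assumptions, and whether other instance properties must be re-sent in the same request depends on the API's update semantics.

# Illustrative values; treat as a sketch, not an exact command from this guide.
curl -k -u admin:password -X PUT \
     -H "Content-Type: application/json" \
     -d '{"stm_feature_pack": "STM-400-FP"}' \
     https://ssc1.example.com:8000/api/tmcm/1.4/instance/stm1.example.com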

Add-On Licenses

An Add-On license is a secondary type of license that is tied to a specific Enterprise License Key. Each Add-On license contributes licensed bandwidth for a single specific feature, known as an Add-On SKU. These Add-On SKUs can be combined with base SKUs when the user creates a Feature Pack. For an instance set to use such a Feature Pack, the feature capabilities of the base SKU are augmented by those of the Add-On SKU. Add-On SKUs can also be used with the CSP licensing model, in which case they do not require the use of an Add-On license.

The Services Controller supports the following Add-On licenses:

Federal Information Processing Standards (STM-B-ADD-FIPS, STM-CSP-U-ADD-FIPS)

Stingray Application Firewall license (STM-B-ADD-WAF, STM-CSP-U-ADD-WAF)

Stingray Aptimizer Web Accelerator (STM-B-ADD-WEBACCEL, STM-CSP-U-ADD-WEBACCEL)

Add-On licenses have unique serial numbers and do not support upgrades.

Installing an Add-On License

To retrieve an Add-On license, you are sent a token via email to redeem at Riverbed Support at support.riverbed.com.

To install an Add-On license

1. Place the licenses in an accessible location in your infrastructure. For details about obtaining your license keys, see To retrieve Stingray and Services Controller product licenses on page 18.

2. Install and configure the Services Controller. For details, see Chapter 3, Installing and Configuring the Services Controller Software.

3. Riverbed recommends that you add the Add-On license to the shared database from a running SSC using the REST API's add_on_pack_license_key resource. For details, see add_on_pack_license_key Resource on page 127. Alternatively, copy a Services Controller license file containing the license key to the INSTALLROOT/licenses directory of an SSC and restart the SSC. New licenses are automatically added to the database as they are discovered.

Licensing on Services Controller Clusters

A cluster of Services Controller instances that share a single inventory database also shares license restrictions:

If you make licensing changes through the REST API of one cluster member, the licensing changes are immediately accessible to the other cluster members.

If you put a license in the INSTALLROOT/licenses directory of one cluster member, the license is added to the database when that Services Controller reboots. These changes are then accessible to the other cluster members.

Where each Services Controller in a cluster is licensed with a separate address-restricted key, each Services Controller must raise the required IP address or have the appropriate MAC address present to run. If you are using the Enterprise Licensing model, Bandwidth Packs might be tied to any of the Services Controller licenses, so it is important that all licenses are validated by their respective Services Controller instances.

For detailed information about installing Services Controller licenses, see Installing the Services Controller Software License on page 35.

Traffic Manager FLA Licenses

A Traffic Manager FLA license is a license file intended for the Traffic Manager instances rather than the Services Controller itself. The FLA license allows a Traffic Manager to contact one or more Services Controllers and to set Traffic Manager features (SKU) on a dynamic basis.

The following information is required to generate a Traffic Manager FLA license:

A list of the fully qualified host names, along with port numbers, that are used for the Services Controllers acting as license servers.

The SSL server certificate that is used by all of the Services Controllers (different controllers are not permitted to use different certificates).

A Traffic Manager using an FLA license attempts to contact each of the listed license servers in turn, until it makes a successful connection or has attempted and failed to contact each one. The SSL server certificate is verified by the FLA license. If an SSL server certificate does not match what is required by the FLA license, that FLA license will not connect to the Services Controller license servers. If this failure occurs, you may need to generate a new FLA license or correct the key/certificate used by the Services Controller.

Generating a Self-Signed SSL Server Certificate

The Services Controller is commonly deployed using self-signed certificate/key pairs, using the self-signed server certificate in the FLA license.

To generate a self-signed SSL server certificate

1. At the Linux prompt, enter:

openssl req -x509 -nodes -newkey rsa:1024 -keyout key.pem -out cert.pem -days 365

The parameters are as follows:

req - Specifies X.509 certificate signing request (CSR) management.

-x509 - Specifies a self-signed certificate rather than a certificate request.

-nodes - Specifies that the private key will not be encrypted (otherwise, the server needs a password to start).

-newkey rsa:1024 - Generates a new certificate request and sets the key size.

-keyout key.pem - Sets the target for the new private key.

-out cert.pem - Sets the target for the certificate.

-days 365 - Specifies the duration of the certificate (the default is 30 days). A longer period may be desirable, because a fresh FLA license must be generated and then deployed to all STM instances when the certificate expires.
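As an additional check that is not part of the procedure above, you can confirm that the generated private key matches the certificate by comparing their moduli. These are standard OpenSSL commands, using the file names from the example above; the two digests should be identical.

openssl x509 -noout -modulus -in cert.pem | openssl md5
openssl rsa -noout -modulus -in key.pem | openssl md5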

The FLA license does not accept composite certificates that include a server certificate along with other information, or certificates created by ssh-keygen.

To verify the SSL certificate

1. At the prompt, enter:

openssl x509 -in certificate.crt -noout

This command succeeds silently for a valid certificate or reports errors.

2. To verify signed certificates, at the system prompt, enter:

openssl verify <certificate name>

Installing FLA Licenses

The Traffic Manager FLA license is installed by placing the FLA license file in the configured sources directory (the location for the Traffic Manager image and FLA license files), and then creating a license resource via the Services Controller REST API.

To install the FLA license

1. Choose a source location for FLA licenses. This location is used during the installation process for the Services Controller. For detailed information, see Required Installation Parameters on page 30.

2. Place the license file in the location chosen in step 1. For details about obtaining your license keys, see To retrieve Stingray and Services Controller product licenses on page 18.

3. Install and configure the Services Controller. For details, see Chapter 3, Installing and Configuring the Services Controller Software.

4. Create a license resource in the Services Controller REST API. For details, see Creating Version Resources on page 43.

Checking the Health of an FLA License

The Services Controller supports an FLA Health Checker. This tool enables you to manually test the licensing of all your resources against an FLA license, so that you can identify any licensing problems with the FLA before any instances start using it. You will typically run the FLA Health Checker immediately after creating the dependent resources for instance deployment: that is, host, license, feature pack, and version.

You start the FLA Health Checker using the REST API. To do this, issue a GET request for the license resource, including the URL parameter status_check=true. The response from the GET request depends on the success of this operation. Initially, the health_check_status property indicates that the FLA health check has started in the background and is in progress.
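Such a request might look like the following sketch. The license resource path matches the flexlic1 example used elsewhere in this guide; the hostname, port, and credentials are illustrative assumptions. The initial response resembles the example shown next.

curl -k -u admin:password \
     "https://ssc1.example.com:8000/api/tmcm/1.4/license/flexlic1?status_check=true"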

For example:

{
  "generic_errors" : null,
  "health_check_results" : [ ],
  "health_check_status" : "In Progress",
  "info" : "",
  "last_health_check_time" : "<timestamp>",
  "status" : "Active"
}

You can then poll the URI with a normal GET request until the health check completes.

Note: Only one FLA health check can be running at any time.

When a health check completes successfully:

the health_check_status is set to Completed.
the health_check_result is Passed.
the details property is empty.

For example:

{
  "generic_errors" : null,
  "health_check_results" : [
    {
      "details" : { },
      "health_check_result" : "Passed",
      "instance_host" : "<instance_host>",
      "ssc_host" : "<ssc_host>:<port>",
      "ssc_port" : <port>
    }
  ],
  "health_check_status" : "Completed",
  "info" : "",
  "last_health_check_time" : "<timestamp>",
  "status" : "Active"
}

When the health check completes with SSL connection errors:

the health_check_status is set to Completed.
the health_check_result is Failed.
the details property includes the SSL errors.

For example:

{
  "generic_errors" : null,
  "health_check_results" : [
    {
      "details" : {
        "ssl_errors" : {
          "errors" : [{
            "err_code" : 18,
            "err_text" : "SSC sent a self-signed certificate which cannot be trusted"
          }],
          "fla_certs" : [{
            "common_name" : "<common_name>",
            "issuer_common_name" : "<issuer_common_name>",
            "not_after" : "<timestamp>",
            "not_before" : "<timestamp>"
          }],
          "ssc_certs" : [{
            "common_name" : "<common_name>",
            "issuer_common_name" : "<issuer_common_name>",
            "not_after" : "<timestamp>",
            "not_before" : "<timestamp>"
          }]
        }
      },
      "health_check_result" : "Failed",
      "instance_host" : "<instance_host>",
      "ssc_host" : "<ssc_host>:<port>",
      "ssc_port" : <port>
    },
    {
      "details" : {
        "ssl_errors" : {
          "errors" : [{
            "err_code" : 18,
            "err_text" : "SSC sent a self-signed certificate which cannot be trusted"
          }],
          "fla_certs" : [{
            "common_name" : "",
            "issuer_common_name" : "",
            "not_after" : "<timestamp>",
            "not_before" : "<timestamp>"
          }],
          "ssc_certs" : [{
            "common_name" : "<common_name>",
            "issuer_common_name" : "<issuer_common_name>",
            "not_after" : "<timestamp>",
            "not_before" : "<timestamp>"
          }]
        }
      },
      "health_check_result" : "Failed",
      "instance_host" : "<instance_host>",
      "ssc_host" : "<ssc_host>:<port>",
      "ssc_port" : <port>
    }
  ],
  "health_check_status" : "Completed",
  "info" : "test",
  "last_health_check_time" : "<timestamp>",
  "status" : "Active"
}

In the above response, the ssl_errors property includes:

the details of the errors (errors).
the certificate embedded in the FLA (fla_certs).
the details of the certificate sent by the SSC (ssc_certs).

When the health check completes with network-related errors:

the health_check_status is set to Completed.
the health_check_result is Failed.
the details property includes the network errors.

For example:

{
  "generic_errors" : null,
  "health_check_results" : [
    {
      "details" : {
        "network_errors" : "Failed to resolve SSC host '<ssc_host>': Name or service not known"
      },
      "health_check_result" : "Failed",
      "instance_host" : "<instance_host>",
      "ssc_host" : "<ssc_host>:<port>",
      "ssc_port" : <port>
    },
    {
      "details" : {
        "network_errors" : "Failed to resolve SSC host '<ssc_host>': Name or service not known"
      },
      "health_check_result" : "Failed",
      "instance_host" : "<instance_host>",
      "ssc_host" : "<ssc_host>:<port>",
      "ssc_port" : <port>
    }
  ],
  "health_check_status" : "Completed",
  "info" : "test",
  "last_health_check_time" : "<timestamp>",
  "status" : "Active"
}

The generic_errors top-level property specifies errors that may occur while carrying out FLA health checks but which are not related to the actual FLA health check, for instance, if there are no active hosts to carry out the checks.
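As a minimal polling sketch, assuming the flexlic1 license resource and illustrative host, port, and credentials, the following loop issues normal GET requests until the health check leaves the In Progress state. It uses the system Python to extract the health_check_status property from the JSON response.

# Poll the license resource until the FLA health check completes (illustrative values).
while true; do
  status=$(curl -sk -u admin:password \
    "https://ssc1.example.com:8000/api/tmcm/1.4/license/flexlic1" \
    | python -c 'import json,sys; print(json.load(sys.stdin)["health_check_status"])')
  [ "$status" != "In Progress" ] && break
  sleep 5
done
echo "FLA health check finished with status: $status"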

CHAPTER 3 Installing and Configuring the Services Controller Software

This chapter describes how to install and configure the Services Controller. It includes the following sections:

Prerequisites on page 27
Configuring the MySQL Database for the Services Controller on page 32
Installing and Configuring the Services Controller on Ubuntu on page 33
Installing the Services Controller on CentOS on page 34
Automating Configuration for the Services Controller on page 35
Installing the Services Controller Software License on page 35
Starting and Stopping the Services Controller on page 36
Upgrading the Services Controller on Ubuntu on page 37
Downgrading the Services Controller on Ubuntu on page 39

These instructions assume you have already installed the MySQL database. For detailed instructions on installing the MySQL database, see the MySQL database documentation.

These instructions also assume that you have retrieved the Services Controller and Traffic Manager licenses. For detailed information, see To retrieve Stingray and Services Controller product licenses on page 18.

Prerequisites

The Services Controller supports the following:

Operating system: Ubuntu (x86_64), CentOS 6.5 (x86_64)
Database: MySQL 5.5

Required Linux Packages

Make sure the following Linux packages are installed before you start the installation process. The packages differ according to the operating system you are using; example install commands appear after the Software and License Requirements list below.

Ubuntu (64-bit):
mysql-common
libmysqlclient18
mysql-client-5.5 and mysql-client-core-5.5

CentOS (64-bit):
mysql
glibc-devel (v2.12 or greater)
Python Argparse (required only for MADC instance hosts)

Note: You must install and configure all software, including the Services Controller, as superuser or as a user account with administrator or equivalent access permissions.

Hardware Requirements

The Services Controller requires the following hardware:

CPU: Intel Xeon / AMD Opteron
Minimum Memory: 2 GB
Minimum Disk Space: 10 GB (plus additional disk space for metering logs, depending on the number of instances metered)

Software and License Requirements

To install the Services Controller, you need the following software and licenses:

MySQL Database Server - The Services Controller uses a MySQL database to store inventory data and other items relating to the deployment of your Traffic Manager instances.

Mail Server - The Services Controller uses email to alert you of certain error conditions (for example, running low on log disk space). You must set up and provide SMTP mail server connection details to the Services Controller for correct operation. The Services Controller does not support SMTP connections that require authentication.

Services Controller Licenses - Before starting the first Services Controller in a cluster, you must place a valid license key file in the INSTALLROOT/licenses directory. New licenses are automatically added to the database as they are discovered. The Services Controller license is automatically copied to the inventory database after communication is established. A newly installed Services Controller instance that is configured to use an existing inventory database containing valid license keys uses those keys to operate after it has started. Riverbed recommends that you install all subsequent Services Controller licenses via the REST API's controller_license_key resource. For details, see controller_license_key Resource on page 133.
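The Linux packages listed at the start of these prerequisites can be installed with the standard package managers. The package names below are those listed above; root (or sudo) access is assumed, and the Python Argparse module for CentOS instance hosts is covered separately in Configuring CentOS Instance Hosts.

On Ubuntu:
apt-get install mysql-common libmysqlclient18 mysql-client-5.5 mysql-client-core-5.5

On CentOS:
yum install mysql glibc-devel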

Required Services Controller Files

To install the Services Controller, you need the following files:

SSL Certificate/Key - The SSL certificate and key are used by the SSC REST API to enable authentication of the Services Controller when it is being accessed by remote clients. An important client is the Traffic Manager FLA license, which requires that you generate a self-signed SSL certificate and key (or an equivalent compatible certificate and key). See FLA Usage Certificate Criteria on page 254 for details of the criteria for the key and certificate. You must place these files in an accessible location in your infrastructure. For details, see Generating a Self-Signed SSL Server Certificate on page 22.

Required Traffic Manager Files

The Services Controller requires a set of external Traffic Manager files to deploy Traffic Manager instances:

Traffic Manager Image - The Traffic Manager image (that is, its tarball file) is required to create Traffic Manager instances in the Services Controller. You must place this in the Source File Location that you provide when you install the SSC.

Traffic Manager FLA License - The Traffic Manager FLA license is required to create instances in the Services Controller. You must place this in the Source File Location that you provide when you install the SSC.

These external files are not part of the initial Services Controller installation. You might require different versions of the Traffic Manager, and the license files are customized for each Services Controller deployment in conjunction with your support provider. You use the Services Controller REST API to create an inventory of resources for the Traffic Manager image and its license files. The REST API does not import the actual files and does not, by default, verify that the files are present. For detailed information about checking the status of REST API resources, see Using the REST API to Check Status on page 153.

Services Controller User Types

The Services Controller involves the following types of users:

MySQL Database User - When you create the inventory database required by the Services Controller, you create a MySQL user that has access to the database. Riverbed recommends that you create a specific Services Controller user rather than using the database root user. You supply the name and password of this user when you install the Services Controller so that it can access the database that you have pre-created. These credentials are recorded in the Services Controller configuration file. See Configuring the MySQL Database for the Services Controller on page 32.

The Services Controller Linux User - The Services Controller software runs under a Linux user account. By default, this is the root user.

The Services Controller REST API User - To make Services Controller REST API requests you must specify credentials for an SSC admin user. This user is stored within the Services Controller inventory database. You can create additional Services Controller users or change the password using the REST API.

Traffic Manager Instance Host User - When the Services Controller carries out actions on its designated instance hosts, it does so by means of passwordless SSH access. You must create these users and provide credentials so that the Services Controller can communicate with each instance host. Typically, you create this user as root, although you can specify the name of the user for each host in the REST API resource.

Traffic Manager Admin and Service Users - See Users in Managed Traffic Manager Instances on page 30 and Users in Nonmanaged Traffic Manager Instances on page 30.

Users in Managed Traffic Manager Instances

When you deploy a managed Traffic Manager instance, the Services Controller creates two default user accounts within the instance:

admin - The administrative user for this instance, with full access to all features and capabilities.

service - The service user, with privileges restricted to those required for service creation and management; that is, no system-wide privileges.

You can change the password for each of these user accounts by updating the relevant password property in the instance resource. Riverbed strongly recommends that you do not provide admin user credentials to end-tenant users. You can safely provide the service user credentials, provided that the privileges for this account have not been altered from their default settings.

Users in Nonmanaged Traffic Manager Instances

When you register a nonmanaged Traffic Manager instance with the Services Controller, it is not mandatory to supply service user credentials. Riverbed recommends that you specify a username and password for an existing admin user when deploying a nonmanaged instance. This admin user (and a specified rest_address) for the instance resource enables monitoring of the nonmanaged instance and provides access to its REST API proxy. Riverbed strongly recommends that you do not provide admin user credentials to end-tenant users.

Required Installation Parameters

The Services Controller installation prompts you to provide values for parameters used in setting up the software. Make sure you have these parameter values before you start the installation process. The following parameters are required to install the Services Controller.

Database Host - The hostname or IP address of the server running the MySQL database server.

Database Port - The port number of the server running the MySQL database server.

Database Name - The name of the MySQL database used as the inventory database.

Database User - The name of a MySQL user authorized to use the inventory database. This username, along with the corresponding password, is used internally by the Services Controller to access the MySQL database.

Database User Password - The password of the MySQL user. This password is used internally by the Services Controller to access the MySQL database.

API Server Port - The number of the port on which the Services Controller listens for REST API requests.

SSL Certificate File - The full path and filename of the certificate file to use for HTTPS connections to the REST API.

SSL Private Key File - The full path and filename of the private key file to use for HTTPS connections to the REST API.

Client Request Thread Pool Size - The maximum number of threads used for Services Controller REST requests. This parameter limits the number of possible simultaneous HTTP requests.

Action Thread Pool Size - The maximum number of threads used for actions such as deploying, starting, and stopping Traffic Manager instances. Riverbed recommends that you set Action Thread Pool Size and Monitor Thread Pool Size such that the sum of both is no greater than the MySQL maximum connections limit. If you have other applications that query the same MySQL database, you must make allowances for these additional connection requirements in your calculations.

Monitor Thread Pool Size - The maximum number of threads used to monitor Traffic Manager instances and other Services Controllers. If you experience warnings about overdue monitoring actions, increase this value. Riverbed recommends that you set Action Thread Pool Size and Monitor Thread Pool Size such that the sum of both is no greater than the MySQL maximum connections limit. If you have other applications that query the same MySQL database, you must make allowances for these additional connection requirements in your calculations.

Source File Location - The name of a directory accessible by the Services Controller in which the Traffic Manager image and license files are placed.

Log Output Location - The name of a directory accessible by the Services Controller server in which log files and metering log files are placed.

Alert Message Addresses - A list of one or more email addresses to which warning emails are sent in case of problems. Note: You can also configure the "From" address of alert emails. This address can be set in INSTALLROOT/conf/email_config.txt, in the common section, as from_address. The symbol "$fqdn" is replaced by the fully qualified domain name of the SSC host. The other sections in this file should not normally be modified. For SSC installs on AWS, it is likely that you will need to change this setting to an address that is resolvable to the instance's public IP.

SMTP Relay Host - The hostname or IP address of the SMTP server used to send warning emails.

SMTP Relay Port - The port number of the SMTP server used to send warning emails.

Note: You must configure the hostname of each Services Controller server in your local DNS settings so that all Traffic Manager instances can correctly resolve the addresses of all Services Controller servers.

Configuring the MySQL Database for the Services Controller

These instructions assume you have already installed the MySQL database. For detailed instructions on installing the MySQL database, see the MySQL database documentation.

You must create an empty MySQL database with an associated user account for use with the Services Controller. The Services Controller requires information about the MySQL database during the installation process. Riverbed recommends that you note the database name, the username, and the password before you start the installation process. For detailed information about required installation parameters, see Required Installation Parameters on page 30.

You can install the database on the same host as the Services Controller. You must configure the privilege settings for the MySQL user account to allow access to the database from all IP addresses (or, at a minimum, the IP addresses of the machines on which you install the Services Controller).

To create a MySQL database with access to all IP addresses

1. Log in to the MySQL monitor client program on the database host machine:

mysql -u root -p

2. To create a MySQL database named ssc with a user named ssc, execute the following SQL commands:

CREATE DATABASE ssc;
CREATE USER 'ssc'@'localhost' IDENTIFIED BY '<YOUR PW>';
GRANT ALL ON ssc.* TO 'ssc'@'%' IDENTIFIED BY '<YOUR PW>' \
WITH GRANT OPTION;
FLUSH PRIVILEGES;

3. Type quit to exit the MySQL monitor program.

Configuring the MySQL Database for Remote Availability

To allow multiple Services Controllers to be placed in a cluster, the MySQL database must be set up to allow remote access. An example outlining one approach is below.

To allow database access from a remote IP address on Ubuntu

1. Open the my.cnf configuration file (for example, /etc/mysql/my.cnf) in any text editor.

2. Set the option bind-address to the public IP address of the MySQL server.

3. Save and close the file.

4. Restart the MySQL daemon. At the system prompt, enter:

service mysql restart
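The bind-address change in step 2 corresponds to a my.cnf entry along the following lines; the IP address shown is an example only and should be replaced with the public IP address of your MySQL server.

[mysqld]
bind-address = 203.0.113.10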

To allow database access from a remote IP address on CentOS

1. Open the my.cnf configuration file (for example, /etc/my.cnf) in any text editor.

2. Set the option bind-address to the public IP address of the MySQL server.

3. Save and close the file.

4. Restart the MySQL daemon. At the system prompt, enter:

service mysqld restart

Installing and Configuring the Services Controller on Ubuntu

For Ubuntu hosts, the Services Controller software is provided in the form of a Debian package.

To install the Services Controller on Ubuntu

1. As superuser, use the standard dpkg command to install the Services Controller. At the system prompt, enter:

dpkg -i riverbed-ssc_2.0_amd64.deb

You are prompted for several configuration parameters. For detailed information, see Required Installation Parameters on page 30.

2. To initialize the software, execute the configuration script. At the system prompt, enter:

/opt/riverbed_ssc_2.0/bin/configure_ssc --liveconfigonly

You are prompted for the REST API administrative username and password. This login is the administrative user for the Services Controller REST API using HTTP authorization. The configuration script validates whether you have configured all the required directories. The script also populates the contents of the Services Controller database.

The initial installation is complete. You can change the configuration parameters after the initial installation by re-running the configuration script.

To change the configuration parameters

1. At the system prompt, enter:

configure_ssc

2. For your changes to take effect, restart the Services Controller. At the system prompt, enter:

service ssc stop
service ssc start

To reenter configuration parameters

At the system prompt, enter:

dpkg-reconfigure riverbed-ssc

Installing the Services Controller on CentOS

For a CentOS host, the Services Controller software is provided as an RPM package. For CentOS installations using Linux Containers (LXC), Riverbed recommends LXC v

Note: By default, CentOS has more restrictive iptables rules than Ubuntu. These iptables rules must be amended to enable access to Services Controllers and instance hosts, either on all ports or on selected ports (for example, to open an SSH port for an instance host; a sketch appears at the end of this section).

To install the Services Controller on CentOS

1. If it is not already installed, install the GNU C Library. At the system prompt, enter:

yum install glibc-devel

2. At the system prompt, enter:

rpm -i riverbed-ssc x86_64.rpm

3. After installation, configure the Services Controller software using the configuration script. At the system prompt, enter:

/opt/riverbed-ssc/bin/configure_ssc

You are prompted for several configuration parameters. For detailed information, see Required Installation Parameters on page 30.

To change the configuration parameters

1. To change configuration parameters after the initial installation, at the system prompt, enter:

/opt/riverbed-ssc/bin/configure_ssc

2. For your changes to take effect, restart the Services Controller. At the system prompt, enter:

stop ssc
start ssc

Configuring CentOS Instance Hosts

For CentOS instance hosts, because the default Python version is 2.6, you must install the Python Argparse module.

To install the Python Argparse module

On a Services Controller instance host, enter:

yum install python-setuptools
easy_install argparse
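Relating to the iptables note above, opening a single port on a CentOS 6 host might look like the following sketch; the port (22, for SSH access to an instance host) and the rule placement are examples only, and your security policy may require more restrictive rules.

# Allow inbound TCP on port 22 and persist the rule set (CentOS 6 iptables service).
iptables -I INPUT -p tcp --dport 22 -j ACCEPT
service iptables save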

Automating Configuration for the Services Controller

You can use a replay file to automate the configuration of the Services Controller. A replay file is a previously created text file containing the Services Controller parameters and settings. To use a replay file, you must specify the full set of parameters required by the Services Controller configure script. For example:

--database_host=localhost
--database_port=0
--database_user=ssc_user
--database_password=ssc_password
--database_name=ssc
--server_port=
--server_certificate_file=/space/cert.pem
--server_private_key_file=/space/key.pem
--server_threads=10
--action_threads=5
--monitor_threads=20
--source_file_loc=/var/lib/ssc/files
--log_file_loc=/var/log/ssc
[email protected]
--smtp_host=localhost
--smtp_port=25
--admin_username=admin
--admin_password=myadminpassword

To automate configuration of the Services Controller using a replay file

1. Create and save the replay file in any text editor.

2. At the system prompt, enter:

If any of these parameters are incorrect or invalid, the configuration script will fail. Riverbed recommends capturing the command output to a file. You can inspect this file for errors or warnings.

Installing the Services Controller Software License

Your license file must be readable by the Services Controller Linux user. For detailed information about Services Controller licenses, see Chapter 2, Managing Services Controller Licenses.

To install the Services Controller license for the first SSC in a cluster

1. Place the licenses in an accessible location in your infrastructure. For details about obtaining your license keys, see To retrieve Stingray and Services Controller product licenses on page 18.

2. Copy a Services Controller license file containing the license key to the INSTALLROOT/licenses directory. The Services Controller scans the directory for changes at startup and at daily intervals. New licenses are automatically added to the database as they are discovered.

To confirm that you have correctly installed your license, run the Services Controller software manually. Any problems with the license result in error messages displayed on the command line. For detailed information about running the Services Controller manually, see Starting and Stopping the Services Controller on page 36.

If the Services Controller shuts down due to a licensing failure, the start and end dates of any decoded licenses are added as a warning to the event log.

To install Services Controller licenses once the first SSC in a cluster is running

After you have installed your licenses and started the Services Controller for the first time, you can use the REST API to update existing licenses and install new licenses. For example:

POST /api/tmcm/1.4/controller_license_key HTTP/1.1

XX1-RSSC E33-3E3A AB

Creating Licensing Reports

The licensing decode tool (license_decode) produces a detailed report for all your license keys, including invalid ones. The license decode tool is part of the Services Controller installation package; it is stored under the /opt/riverbed_ssc_2.0/bin/ directory.

Invalid license keys might be malformed or might have a dependency on a missing Services Controller license key. For license keys listed as invalid, verify that the associated Services Controller license key is present.

To create a license report

At the prompt, enter:

license_decode <KEY1> <KEY2> <KEY3>

You can decode multiple license keys by specifying each one in a space-separated argument list.

To create a license report for licenses tied to IP or MAC addresses

At the prompt, enter:

license_decode <KEY1> <KEY2> <KEYN> [-a IP-addresses]

You can decode multiple IP/MAC addresses by specifying each in a space-separated argument list.

Starting and Stopping the Services Controller

You must select the appropriate method of running the Services Controller depending on the status of your Services Controller installation:

Manual - If you are running the Services Controller for the first time, or if you are having problems with your Services Controller at startup, run the Services Controller manually to see additional diagnostic messages on the command line.

Upstart - For an established and normally operating Services Controller installation, run the Services Controller as an Upstart service.

To manually start the Services Controller software

At the prompt, enter:

run_ssc_server [-v/--version]

-v or --version - Displays the current version and build number of the Services Controller software, but does not start the server.

Manually starting the Services Controller provides output and startup messages that you can use to identify and resolve Services Controller system configuration issues.

Note: To perform a controlled shutdown of the Services Controller from this state, you must press Ctrl+C.

To start the Services Controller as an Upstart service (normal operation)

At the prompt, enter:

start ssc

To stop the Services Controller running as an Upstart service

At the prompt, enter:

stop ssc

Upgrading the Services Controller on Ubuntu

For Ubuntu hosts, the Services Controller software is provided in the form of a Debian package. You can upgrade the Services Controller software using the following procedure. This procedure causes a loss of service from your Services Controller deployment, so you should schedule the upgrade for a time of least interruption to your end users.

Note: Earlier versions of the Services Controller used an environment variable called $SSCHOME to refer to the software installation directory. This variable is now deprecated. Before upgrading the Services Controller to the latest version, you must make sure that this variable is unset.

To upgrade a single Services Controller

1. At the prompt, enter:

dpkg -i riverbed-ssc_2.0_amd64.deb

This stops the Services Controller software but does not restart it.

2. Make sure the INSTALLROOT/licenses directory contains the correct license files, including any licenses for Bandwidth Packs.

3. At the prompt, enter:

/opt/riverbed_ssc_2.0/bin/configure_ssc --liveconfigonly

This command makes backward-compatible changes to the Services Controller database.

4. At the prompt, enter:

start ssc

5. Confirm that the Services Controller has not shut down due to licensing issues and that the software has started by issuing a GET request via the Services Controller REST API.

Upgrading the Services Controller on CentOS

For a CentOS host, the Services Controller software is provided as an RPM package. For CentOS installations using Linux Containers (LXC), Riverbed recommends LXC v

You can upgrade the Services Controller software using the following procedure. This procedure causes a loss of service from your Services Controller deployment, so you should schedule the upgrade for a time of least interruption to your end users.

Note: Earlier versions of the Services Controller used an environment variable called $SSCHOME to refer to the software installation directory. This variable is now deprecated. Before upgrading the Services Controller to the latest version, you must make sure that this variable is unset.

To upgrade a single Services Controller

1. At the prompt, enter:

rpm -U riverbed-ssc x86_64.rpm

This stops the Services Controller software but does not restart it.

2. Make sure the INSTALLROOT/licenses directory contains the correct license files, including any licenses for Bandwidth Packs.

3. At the prompt, enter:

/opt/riverbed-ssc/bin/configure_ssc --liveconfigonly

This command makes backward-compatible changes to the Services Controller database.

4. At the prompt, enter:

start ssc

5. Confirm that the Services Controller has not shut down due to licensing issues and that the software has started by issuing a GET request via the Services Controller REST API.
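For step 5, any authenticated GET against the REST API is sufficient to confirm that the software has started. As a sketch, the host, port, credentials, and queried resource below are illustrative assumptions; any resource you know exists (such as a license resource named flexlic1) can be used, and an authenticated response confirms the REST API is up.

curl -k -u admin:password https://ssc1.example.com:8000/api/tmcm/1.4/license/flexlic1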

Upgrading a Services Controller Cluster

Part of the upgrade process for a Services Controller is to apply changes to the inventory database schema, in order to support additional features in newer versions. Because clustered Services Controllers share the same inventory database, upgrades of these clusters must be carefully sequenced to ensure that:

inventory records are not locked during the upgrade of the database schema, as this will cause the upgrade to fail.

once the database schema has been upgraded, changes made to the inventory by remaining pre-upgrade Services Controllers are avoided.

To support these goals, Riverbed recommends the following sequence:

1. Before starting the upgrade, select one Services Controller in your cluster as the upgrade master. This Services Controller is upgraded first and is responsible for applying changes to the database schema.

2. Stop the other Services Controllers in the cluster.

3. Upgrade the upgrade master as instructed in To upgrade a single Services Controller on page 37, and restart this Services Controller once the upgrade is complete.

4. Upgrade the next Services Controller in the cluster as instructed in To upgrade a single Services Controller on page 37, and restart this Services Controller once the upgrade is complete.

5. Repeat step 4 until all Services Controllers in the cluster are upgraded.

Note: Deactivation of the upgrade master Services Controller during step 3 may cause instances to briefly enter the licensing grace period, but they return to normal operation once the upgrade master is restarted.

Downgrading a Services Controller Cluster

You can downgrade a Services Controller cluster using the same principles and procedure as an upgrade to the Services Controller cluster. See Upgrading a Services Controller Cluster on page 39.

Downgrading the Services Controller on Ubuntu

For Ubuntu hosts, the Services Controller software is provided in the form of a Debian package. To revert to the previous software version in a Services Controller cluster, follow this procedure on each cluster member in turn.

To revert to the previous software version

1. At the prompt, enter:

stop ssc

2. At the prompt, enter:

dpkg --force-downgrade -i riverbed_ssc_1.3_amd64.deb

--force-downgrade - Downgrades to the desired software version, in this example, version 1.3. This includes a database schema downgrade.

3. Make sure that the INSTALLROOT/licenses directory in your downgraded software installation contains the required license keys compatible with the downgraded version of the Services Controller.

4. At the prompt, enter:

start ssc

Downgrading the Services Controller on CentOS

For a CentOS host, the Services Controller software is provided as an RPM package. For CentOS installations using Linux Containers (LXC), Riverbed recommends LXC v

To revert to the previous software version in a Services Controller cluster, follow this procedure on each cluster member in turn.

To revert to the previous software version

1. At the prompt, enter:

stop ssc

2. At the prompt, enter:

/opt/riverbed-ssc/bin/configure_ssc --liveconfigonly --to_version=1.3

--to_version=1.3 - Specifies the downgrade version, in this example, version 1.3. This includes a database schema downgrade.

You are asked if you want to add another REST API user. You can add a user, but this is not needed for the downgrade process.

3. At the prompt, enter:

rpm -U --oldpackage riverbed-ssc x86_64.rpm

--oldpackage - Downgrades to the desired software version, in this example, version 1.3.

4. Make sure that the INSTALLROOT/licenses directory in your downgraded software installation contains the required license keys compatible with the downgraded version of the Services Controller.

5. At the prompt, enter:

start ssc

CHAPTER 4 Configuring Traffic Manager Instances

This chapter describes how to configure the Services Controller and Traffic Manager instance host, and how to deploy and configure Traffic Manager instances. It includes the following sections:

Overview of Instances on page 41
Enabling Services Controller Communication with Instance Hosts on page 42
Adding Supporting Resources on page 43
Creating Instance Hosts on page 45
Deploying and Configuring Traffic Manager Instances on page 46

Note: The procedures in this chapter are appropriate for all supported operating systems.

Overview of Instances

The information in this chapter assumes you have already installed the Services Controller and its related components, the Services Controller software is running, and you can access the REST API. For details, see Chapter 3, Installing and Configuring the Services Controller Software.

The instructions in this chapter describe how to configure a simple deployment, that is, a single Services Controller server and a single Traffic Manager instance host. More complex deployments are supported and are described below.

Services Controller High Availability Deployments

Typically, high availability (HA) production environments deploy multiple copies of the Services Controller, each with multiple Traffic Manager instance hosts. Each instance host runs multiple Traffic Manager instances. For more information about HA configurations, see Chapter 10, Configuring for High Availability.

Traffic Manager Instance Clusters

You can deploy Traffic Manager instances in clusters to ensure HA at the Traffic Manager level. By configuring Traffic Manager instance clusters using Traffic IP Groups, you can ensure that if one Traffic Manager instance in a cluster goes down, the remaining active Traffic Manager instances in the cluster share the IP addresses, and therefore the traffic load, of the failed Traffic Manager instance. (This type of HA requires that you configure Traffic IP Groups in the Traffic Manager.)

While the Services Controller enables multiple instances to run on a host, it allows only one instance from a given cluster to be installed on an instance host. Adding another instance to the cluster requires another available instance host. This additional host enables the Services Controller to spread the traffic load, and thus promotes HA. However, if the instance hosts are virtual machines residing on the same physical host, HA is diminished because the entire cluster might be vulnerable to the failure of a single piece of hardware. As a best practice, virtualized instance hosts should be distributed over multiple physical hosts.

Enabling Services Controller Communication with Instance Hosts

The Services Controller uses passwordless SSH to communicate with your instance hosts. Passwordless SSH enables the Services Controller to copy files and remotely run commands to deploy, start, stop, upgrade, and delete Traffic Manager instances. When you add a new instance host to the Services Controller, you must set up the credentials to allow this.

To generate a new SSH key

1. As root user, generate a new SSH authentication key by running the following command on the Services Controller server (you must be logged in as the user used to run the Services Controller software):

ssh-keygen -t rsa

2. When prompted, enter a blank passphrase, and accept the default key location. This operation generates these key files:

~/.ssh/id_rsa (private key)
~/.ssh/id_rsa.pub (public key)

3. Use the ssh-copy-id command (included in the openssh-client package) to install your public key in the instance host's authorized_keys file. For example:

ssh-copy-id root@<instancehost>

4. Return to the command line of the Services Controller and, as root user, connect to the new instance host by SSH. If you are successful, the credentials for passwordless SSH are correctly set up.

After these credentials have been set up, you can add the host to the Services Controller system by means of a REST request.
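To confirm step 4 without an interactive session, a quick check along the following lines can be used; the instance host name is illustrative and should be replaced with your own.

ssh root@ih1.mydomain.com 'hostname'
# If the remote hostname is printed without a password prompt, passwordless SSH is set up correctly.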

Adding Supporting Resources

Before you perform any of the following procedures, you must first obtain your Traffic Manager installation package (that is, a .tgz file) and an FLA-style license key. For details about retrieving your Traffic Manager licenses, see To retrieve Stingray and Services Controller product licenses on page 18. You copy these files to the location specified by the Source File Location parameter during the installation process for the Services Controller. For detailed information, see Required Installation Parameters on page 30.

Creating Version Resources

Perform the following procedure to create a REST API version resource. For detailed information about the properties for this resource, see version Resource on page 152.

To create a version resource

1. To create a version resource, perform a PUT request. Use the following JSON structure as the body of your request:

{
  "version_filename" : "ZeusTM_97_Linux-x86_64.tgz",
  "info" : "Version 9.7"
}

Creating License Resources

Perform the following procedure to create a REST API license resource. For detailed information about the properties for this resource, see license Resource on page 146.

To create a license resource

1. To create a license resource for an FLA-style license key file (for example, flexlic1), perform a PUT request. Include the following JSON structure as the body of your request:

{
  "info" : "This is the resource for the flexlic1 license"
}

Note: You can pass an empty JSON structure ({}) to the license resource because there are no mandatory properties for this resource type.

Verifying a Resource

You can use the Services Controller REST API to verify that all the required files are present and accessible. Perform a file status check GET request to verify that the Services Controller can find and access the files that are referred to in any of the defined resources.

To verify a resource

1. To verify that the Services Controller can find and access the Traffic Manager package and license key files, perform a file status check GET request. The following output is displayed:

{
  "licenses": [{
    "href": "/api/tmcm/1.4/license/flexlic1",
    "name": "flexlic1",
    "present": true,
    "filename": "/var/lib/ssc/files/flexlic1"
  }],
  "versions": [{
    "href": "/api/tmcm/1.4/version/9.7",
    "name": "9.7",
    "present": true,
    "filename": "/var/lib/ssc/files/zeustm_97_linux-x86_64.tgz"
  }]
}

Creating Feature Pack Resources

To complete the set of supporting resources required by a Traffic Manager instance, you must specify one or more feature packs with the required feature sets. The stm_sku value must be the name of one of the available sku resources created when the Services Controller was installed. The choice of SKU depends on a combination of:

the features you want to enable on the instances that will use the feature pack.
the available licensed bandwidth for the SKU (for Enterprise Licensed customers).
the metered cost of the SKU (for Cloud Service Provider licensed customers).

To create a feature pack

1. To create a feature_pack resource (for example, STM-U-CSP-200-FP), perform a PUT request. Use the following JSON structure as the body of your request to define the SKU:

{
  "stm_sku" : "STM-U-CSP-200",
  "excluded" : ""
}

In this example, STM-U-CSP-200 represents an available sku resource in the Services Controller.

2. When complete, the following response is displayed:

{
  "info": "",
  "status": "Active",
  "stm_sku": "STM-200",
  "add_on_skus": [],
  "excluded": ""
}
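The supporting resources in this section (version, license, and feature pack) are all created with PUT requests to their resource URIs. As a sketch only, using curl for the feature pack example above: the /api/tmcm/1.4/<resource>/<name> pattern is inferred from the hrefs in the verification output, and the hostname, port, and credentials are illustrative assumptions.

curl -k -u admin:password -X PUT \
     -H "Content-Type: application/json" \
     -d '{"stm_sku": "STM-U-CSP-200", "excluded": ""}' \
     https://ssc1.example.com:8000/api/tmcm/1.4/feature_pack/STM-U-CSP-200-FP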

Creating Instance Hosts

After you create your Traffic Manager instance host, and all supporting resources are in place, you can add the instance host to the Services Controller configuration. This is the server on which you have the Services Controller deploy your Traffic Manager instances.

Note: An Instance Host resource is not required when you are only configuring one or more nonmanaged instances.

To create an instance host

1. Prepare a Traffic Manager instance host server:

This server must conform to the minimum requirements as specified in the release notes provided with your Services Controller software.

Set up the required credentials for passwordless SSH access between the Services Controller and your instance host server.

Create a working directory and an installation root directory. These directories are set as work_location and install_root during creation of the corresponding host resource in the next step. Verify that these directories have read, write, and execute permissions (770 in octal) set for both the user and group.

Verify that the instance host has an appropriate UTF-8 locale.

2. To create a new host resource for the Traffic Manager instance host server, perform a PUT request. Use the following JSON structure as the body of your request (the directories used here are examples; substitute your own values):

{
  "work_location" : "/var/lib/ssc/files",
  "install_root" : "/root/install",
  "username" : "root"
}

In this example, the resource name ih1.mydomain.com corresponds to the fully resolvable, qualified hostname of your instance host server. This correspondence is required by the Services Controller FLA license scheme.

3. To verify that the Services Controller server has access to the Traffic Manager instance host, perform a GET request.

If the Services Controller detects any problems, they are reported under the status_check property of the returned resource. In this case, fix any reported problems and perform this procedure again.

If there are no problems, the request returns an empty status_check property. For example:

{
  "info": "",
  "install_root": "/root/install",
  "work_location": "/var/lib/ssc/files",
  "status_check": {},
  "username": "root",
  "status": "Active",
  "usage_info": ""
}

Deploying and Configuring Traffic Manager Instances

After you have created and verified the host resource, you can deploy and configure your Traffic Manager instances. You may require that Linux Containers (LXC) are used during deployment. The following summarizes the supported options for Traffic Manager instances.

Managed Instance in an LXC Container with Network Isolation
Managed Instance? Yes. Container? Yes. Network Isolation? Yes.
See Preparing to Deploy a Managed Instance in a Container with Network Isolation on page 47.

Managed Instance in an LXC Container without Network Isolation
Managed Instance? Yes. Container? Yes. Network Isolation? No.
See Preparing to Deploy a Managed Instance in a Container without Network Isolation on page 50.

Managed Instance without LXC Containers
Managed Instance? Yes. Container? No. Network Isolation? No.
See Preparing to Deploy a Managed Instance Outside of a Container on page 52.

Nonmanaged Instance
Managed Instance? No. Container? No. Network Isolation? No.
See Configuring a Nonmanaged Instance on page 54.

Once preparations are complete for a managed instance, you can deploy the instance. See Deploying a Managed Instance on page 56. Nonmanaged instances are not deployed from the Services Controller.

Preparing to Deploy a Managed Instance in a Container with Network Isolation

Before deploying instances in a Linux container with network isolation, you must perform the following tasks:

Configure a virtual network on the instance host for the Traffic Manager instance. Typically, this configuration requires setting up a virtual bridge (using, for example, Linux bridge or Open vSwitch), and then configuring the Linux container to attach to the bridge when running the Traffic Manager.

Create a Linux container configuration file for the instance in which you specify the virtual network, the CPU usage, and the hostname of the instance (that is, the lxc.utsname setting).

The name and location of the container configuration file are important:

You must name the container configuration file <fqdn>.conf, where <fqdn> is the fully qualified domain name of the host. This hostname is the value of the lxc.utsname setting in the container configuration file.

You must place the container configuration file in the installation root directory on the instance host. For example: /root/install

Once these preparations are complete, you can deploy the instance. See Deploying a Managed Instance on page 56.

Container Configuration File Example

The container configuration file must include networking entries for your Traffic Manager instance IP address. You can use either Linux virtual networking or Open vSwitch to set up network isolation for your Traffic Manager instance. For example, the following container configuration file, stm1.example.com.conf, uses Linux virtual networking:

lxc.utsname = stm1.example.com
lxc.network.type = veth
lxc.network.link = br0
lxc.network.flags = up
lxc.network.ipv4 = XX.XX.XX.XX/SS
lxc.network.ipv4.gateway = YY.YY.YY.YY

In this example:

XX.XX.XX.XX/SS is the IP address and subnet mask of the LXC container network.
YY.YY.YY.YY is the IP address of the subnetwork gateway inside the LXC container.

In addition, the configuration file can define any other characteristics (for example, CPU and memory usage) that apply to the LXC container.

Properties for a Managed Instance in a Container with Network Isolation

The table below describes the REST API properties for a managed instance in a container with network isolation. When you create the instance, make sure you assign the same name to both the instance and the container, because the instance uses the container name as its hostname.

For details of all instance properties, see instance Resource on page 137.

host_name - The name of the Traffic Manager instance host on which you deploy the instance. This name must match the fully qualified domain name of the instance host that was created. You must create a host entry before you create an instance.

container_name - The name of the LXC container for the Traffic Manager instance. This name must match the name of the required container configuration (.conf) file in the installation directory. (Do not include the .conf file extension.) You must create an appropriate container configuration file of the form <containername>.conf in the install_root directory of the container host. For example, a container_name of stm1.example.com requires a container configuration file named stm1.example.com.conf. The container configuration file must set lxc.utsname to the container name for the licensing server to operate correctly.

container_configuration - A JSON-formatted string used to set up the default network gateway inside the LXC container. Use this format: "{\"gateway\":\"<ip_address>\"}" For LXC deployments, this is the IP address raised on the bridge interface this container is connected to.

owner - Specify who owns the instance.

stm_version - The name of the Traffic Manager version resource for the instance.

stm_feature_pack - The name of the feature_pack resource associated with the Traffic Manager instance. This feature pack represents the set of features that are available for the instance.

license_name - The name of the FLA license resource you want to use for this instance. When you modify this property, the Services Controller updates the license on the Traffic Manager instance.

config_options - If you specify the cluster_id property, then you must also set the config_options property to include admin_ui=yes and start_flipper=yes. Note: Whenever the config_options property is set, all currently modified options must be specified again in the REST call. Any options that are not specified will lose their current value and be reset to their default value. Note: Any change to the config_options settings on a managed instance will cause a restart of the instance. Nonmanaged instances are not affected.

bandwidth - The maximum allowed bandwidth for the Traffic Manager instance (in Mbps).

tag - A text property which provides an alternative way of referring to an instance. Unlike the unique ID for an instance, the tag value can be changed or reused, subject to some restrictions. See Understanding the Tag Property on page 49. Note: You cannot set the tag property on a service instance.

cpu_usage - A string that describes which CPUs are used for this Traffic Manager instance. For container deployments, the CPU affinity is defined using the lxc.cgroup.cpuset.cpus setting in the container configuration file, in which case cpu_usage is set to an empty string. Note: Any change to the cpu_usage settings of a managed instance will cause a restart of the instance. Nonmanaged instances are not affected.

cluster_id - Optionally, the name of a cluster resource to which the instance belongs. If you specify an entry for this property, it must refer to a cluster resource. The cluster_id property cannot be changed after you create an instance. Instances must be added to a cluster when you create them. (This requirement also applies to the first instance in a cluster.) If you specify the cluster_id property, then you must also set the config_options property to include admin_ui=yes and start_flipper=yes.

management_address - In an LXC-isolated network, the management_address property is typically the same as the lxc.utsname and the container configuration filename (minus the .conf file extension). This property is a fully qualified domain name. For example, if the container configuration filename is stm1.example.com.conf and the lxc.utsname is defined as follows:

lxc.utsname = stm1.example.com

then the management_address is defined as follows:

management_address = stm1.example.com

Understanding the Tag Property

Each instance has a tag property which provides an alternative way of identifying the instance. A tag is a user-friendly name which, unlike the unique ID for an instance, can be changed or reused (subject to some restrictions).

A tag is useful when you want to identify an instance by a consistent name. If an error occurs that requires an instance to be deleted, its unique ID is no longer available, because the instance persists with a Deleted status. The tag, however, can be reused on a new instance, enabling consistent naming.

Restrictions on tag values are as follows:

- A tag cannot be the same as the unique ID of any instance (except itself). This restriction includes Deleted instances.
- A tag must be unique among the tags of all instances. This restriction does not include Deleted instances.

The tag property is available in the REST API only; it is not available in the Services Controller VA or Services Controller CLI.

Note: The tag property on a service instance is always an empty string. You cannot change this.
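Returning to the container_configuration property described in the table above: its value is a JSON document encoded as a string inside the surrounding JSON request body, which makes the escaping easy to get wrong. The short sketch below, written in Python purely for illustration, shows one way to build the value programmatically; the gateway address and the abbreviated request body are placeholders, not values from your deployment.

import json

# Illustrative gateway address raised on the bridge interface the
# container is attached to (substitute your own).
gateway_ip = "192.0.2.1"

# Build the inner JSON document as a string: '{"gateway": "192.0.2.1"}'
container_configuration = json.dumps({"gateway": gateway_ip})

# When the surrounding instance body is itself serialized, the inner
# document appears in the escaped form shown in the table above.
instance_body = {
    "container_name": "stm1.example.com",
    "container_configuration": container_configuration,
}
print(json.dumps(instance_body, indent=2))

Building the string with a JSON library, rather than by hand, avoids mismatched quotes and backslashes in the PUT request body.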

Preparing to Deploy a Managed Instance in a Container without Network Isolation

You can use LXC containers for purposes other than network isolation. For example, they can provide CPU and memory isolation. Before deploying instances in a Linux container without network isolation, you must perform the following tasks:

- Create a Linux container configuration file for the instance in which you specify the virtual network, the CPU usage, and the hostname of the instance (that is, the lxc.utsname setting). The name and location of the container configuration file is important:
  - You must name the container configuration file <fqdn>.conf, where <fqdn> is the fully qualified domain name of the host. This hostname is the value of the lxc.utsname setting in the container configuration file.
  - You must place the container configuration file in the installation root directory on the instance host. For example: /root/install
- If you plan to deploy multiple Traffic Manager instances on the same physical or virtual host, the instances will share ports on the same IP address:
  - For nonclustered instances, you must specify a value for port_offset in the config_options property of the instance resource.
  - For clustered instances, you must specify a value for both the cluster_port_offset property of the cluster resource and the port_offset property in the config_options property of the instance resource.

For detailed information about cluster and instance resource properties, see cluster Resource on page 131 and instance Resource on page 137.

Note: Whenever the config_options property is set, all currently modified options must be specified again in the REST call. Any options that are not specified will lose their current value and be reset to their default value. Any change to the config_options settings on a managed instance will cause a restart of the instance. Nonmanaged instances are not affected.

Once these preparations are complete, you can deploy the instance. See Deploying a Managed Instance on page 56.

Container Configuration File Example

The container configuration file will typically include the utsname for your Traffic Manager instance. In addition, because network isolation is not being used, the configuration file defines any other characteristics (for example, CPU and memory usage) that will apply to the LXC container. For example:

lxc.utsname = stm1.example.com
lxc.cgroup.cpuset.cpus = 0,1
lxc.cgroup.cpu.shares =
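As noted above, instances that share the instance host's IP address must each be given a distinct port offset through the config_options property. The Python sketch below is only an illustration of how such config_options strings might be generated for several nonclustered instances on one host; the instance names are hypothetical, and the 0-499 range check reflects the limit described in the properties table.

# Assign a distinct port_offset to each nonclustered instance that will
# share the host's IP address. Offsets must fall in the range 0-499.
instances = ["stm1", "stm2", "stm3"]   # hypothetical instance names

config_options = {}
for offset, name in enumerate(instances):
    if not 0 <= offset <= 499:
        raise ValueError("port_offset must be between 0 and 499")
    # Any other options already in use (for example admin_ui=yes) must be
    # repeated here, because setting config_options replaces all
    # previously set options.
    config_options[name] = "port_offset=%d" % offset

for name, opts in config_options.items():
    print(name, "->", opts)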

Properties for a Managed Instance in a Container without Network Isolation

The table below describes the REST API properties for a managed instance in a container without network isolation. For details of all instance properties, see instance Resource on page 137.

host_name - The name of the Traffic Manager instance host on which you deploy the instance. This name must match the FQDN of the instance host that was created. You must create a host entry before you create an instance.

container_name - The name of the LXC container for the Traffic Manager instance. This name must match the name of the required container configuration (.conf) file in the installation directory. (Do not include the .conf suffix.) You must create an appropriate container configuration file of the form <containername>.conf in the install_root directory of the container host. For example, a container_name of stm1.example.com requires a container configuration file named stm1.example.com.conf.

owner - Specify who owns the instance.

stm_version - The name of the Traffic Manager version resource for the instance.

stm_feature_pack - The name of the feature_pack resource associated with the Traffic Manager instance. This value represents the set of features that are available for the instance.

license_name - The name of the FLA license resource you want to use for this instance. When you modify this property, the Services Controller updates the license on the Traffic Manager instance.

config_options - Instances running within containers without network isolation share an IP address with the instance host. To avoid port conflicts for some functions (like the REST or Admin interfaces between instances), specify a port offset. For nonclustered instances, you must specify a value for port_offset, in the form port_offset=<number>. This offset shifts port values by a fixed amount; the offset range is from 0 to 499. For clustered instances, you must specify a value for both the cluster_port_offset property of the cluster resource and the port_offset property in the config_options property of the instance resource. Note: Whenever the config_options property is set, all currently modified options must be specified again in the REST call. Any options that are not specified will lose their current value and be reset to their default value. Note: Any change to the config_options settings on a managed instance will cause a restart of the instance. Nonmanaged instances are not affected.

bandwidth - The maximum allowed bandwidth for the Traffic Manager instance (in Mbps).

tag - A text property which provides an alternative way of referring to an instance. Unlike the unique ID for an instance, the tag value can be changed or reused, subject to some restrictions. See Understanding the Tag Property on page 49. Note: You cannot set the tag property on a service instance.

cpu_usage - A string that describes which CPUs are used for this Traffic Manager instance. For container deployments, the CPU affinity is defined using the lxc.cgroup.cpuset.cpus setting in the container configuration file, in which case cpu_usage is set to an empty string. Note: Any change to the cpu_usage settings of a managed instance will cause a restart of the instance. Nonmanaged instances are not affected.

cluster_id - Optionally, the name of a cluster resource to which the instance belongs. If you specify an entry for this property, it must refer to a cluster resource. The cluster_id property cannot be changed after you create an instance. Instances must be added to a cluster when you create them. (This requirement also applies to the first instance in a cluster.) If you specify the cluster_id property, then you must also set the config_options property to include admin_ui=yes and start_flipper=yes.

management_address - For instances without network isolation, the management_address is the fully qualified domain name of the instance host (therefore, it is the same value as the host_name property).

Preparing to Deploy a Managed Instance Outside of a Container

In this scenario, the managed Traffic Manager instance does not use a Linux container. That is, the Traffic Manager instance is configured and deployed directly to the host. No container configuration file is required.

Note: LXC containers are present on the host, but they are not used for this procedure.

Once these preparations are complete, you can deploy the instance. See Deploying a Managed Instance on page 56.

Properties for a Managed Instance Outside of a Container

The table below describes the REST API properties for a managed instance outside a container. For details of all instance properties, see instance Resource on page 137.

host_name - The name of the Traffic Manager instance host on which you deploy the instance. This name must match the FQDN of the instance host that was created. You must create a host entry before you create an instance.

owner - Specify who owns the instance.

stm_version - The name of the Traffic Manager version resource for the instance.

stm_feature_pack - The name of the feature_pack resource associated with the Traffic Manager instance. This represents the set of features that are available for the instance.

license_name - The name of the FLA license resource you want to use for this instance. When you modify this property, the Services Controller updates the license on the Traffic Manager instance.

config_options - Instances running outside containers share an IP address with the instance host. To avoid port conflicts for Stingray functions (like the REST or Admin interfaces between instances), specify a port offset. For nonclustered instances, you must specify a value for port_offset, in the form port_offset=<number>. This offset shifts port values by a fixed amount; the offset range is from 0 to 499. For clustered instances, you must specify a value for both the cluster_port_offset property of the cluster resource and the port_offset property in the config_options property of the instance resource. Note: Whenever the config_options property is set, all currently modified options must be specified again in the REST call. Any options that are not specified will lose their current value and be reset to their default value. Note: Any change to the config_options settings on a managed instance will cause a restart of the instance. Nonmanaged instances are not affected.

cluster_id - Optionally, the name of a cluster resource to which the instance belongs. If you specify an entry for this property, it must refer to a cluster resource. If you specify the cluster_id property, then you must also set the config_options property to include admin_ui=yes and start_flipper=yes.

bandwidth - The maximum allowed bandwidth for the Traffic Manager instance (in Mbps).

tag - A text property which provides an alternative way of referring to an instance. Unlike the unique ID for an instance, the tag value can be changed or reused, subject to some restrictions. See Understanding the Tag Property on page 49. Note: You cannot set the tag property on a service instance.

cpu_usage - A string that describes which CPUs are used for this Traffic Manager instance. If used, you must either specify a value in the form used by the taskset command (for example, "0,3,5-7"), or set this property to an empty string. An empty string indicates that the host is not limited in its use of CPU cores (unless the instance is deployed within an LXC container), and is the default setting for the property if you do not specify a string. Note: Any change to the cpu_usage settings will cause a restart of the instance.

management_address - The hostname used to address the Traffic Manager instance. The hostname must be an FQDN. You can modify this property only for a nonmanaged Traffic Manager instance (or in a database-only request). If you update this property, the host component of the rest_address, ui_address, and snmp_address properties is also updated. These values must be FQDNs.
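The cpu_usage property uses the same list syntax as the Linux taskset command, for example "0,3,5-7". The following Python sketch, provided only as an illustration, expands such a string into the individual CPU numbers it selects, which can be a convenient sanity check before setting the property.

def expand_taskset(spec):
    """Expand a taskset-style CPU list such as "0,3,5-7" into a sorted list."""
    cpus = set()
    for part in spec.split(","):
        part = part.strip()
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        elif part:
            cpus.add(int(part))
    return sorted(cpus)

print(expand_taskset("0,3,5-7"))   # [0, 3, 5, 6, 7]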

Configuring a Nonmanaged Instance

The Services Controller enables you to license and meter Traffic Manager instances that have not been deployed or started by the Services Controller. These are known as nonmanaged instances.

Enabling Monitoring and the REST API Proxy for a Nonmanaged Instance

To enable both monitoring and the REST API proxy for a nonmanaged Traffic Manager instance, you must configure:

- The instance resource in the Services Controller. Provide correct values for the following properties: admin_username, admin_password, and rest_address. See Properties for a Nonmanaged Instance on page 54.
- The nonmanaged Traffic Manager instance: enable the REST API.

Enabling Metering for a Nonmanaged Instance

To enable metering for a nonmanaged Traffic Manager instance, you must configure:

- The instance resource in the Services Controller. Provide a correct value for the snmp_address property. The snmp!community setting in the config_options on the instance resource must match the snmp!community setting on the instance itself. See Properties for a Nonmanaged Instance on page 54.
- The nonmanaged Traffic Manager instance: enable SNMP. The snmp!community setting on the instance must match the snmp!community setting in the config_options on the instance resource.

Note: When you are configuring a nonmanaged instance, an Instance Host resource is not required.

Properties for a Nonmanaged Instance

The table below describes the REST API properties for a nonmanaged instance. When you create a nonmanaged instance using the REST API, the request supports a URL parameter, ?managed=false. Include this parameter to indicate that the new instance is a nonmanaged instance.

For details of all instance properties, see instance Resource on page 137.

owner - Specify who owns the instance.

stm_feature_pack - The name of the feature_pack resource associated with the Traffic Manager instance. This represents the set of features that are available for the instance.

license_name - The name of the FLA license resource you want to use for this instance. For a nonmanaged instance, this property does not update the licenses on the Traffic Manager instance.

config_options - A single configuration option is supported: snmp!community, the SNMP v2 community setting for this nonmanaged instance. This must be set to the same value as the equivalent snmp!community setting on the Traffic Manager instance itself (default: "public"). Note: Whenever the config_options property is set, all currently modified options must be specified again in the REST call. Any options that are not specified will lose their current value and be reset to their default value. Note: Unlike managed instances, nonmanaged instances do not restart when config_options are changed.

bandwidth - The maximum allowed bandwidth for the Traffic Manager instance (in Mbps).

tag - A text property which provides an alternative way of referring to an instance. Unlike the unique ID for an instance, the tag value can be changed or reused, subject to some restrictions. See Understanding the Tag Property on page 49.

management_address - The hostname used to address the Traffic Manager instance. The hostname must be a fully qualified domain name. You can modify this property only for a nonmanaged Traffic Manager instance (or in a database-only request). If you modify this property, the host component of the rest_address, ui_address, and snmp_address properties is also updated. These values must be fully qualified domain names.

ui_address - The address (host or IP address plus port number) of the Traffic Manager instance Administration UI. If you do not enter a value, the UI address defaults to :9090. If you use a hostname instead of an IP address, you must use a fully qualified domain name. You can modify this property only for a nonmanaged Traffic Manager instance (or in a database-only request).

admin_username - The user name for the admin account for the nonmanaged instance.

admin_password - The password for the admin account for the nonmanaged instance.

rest_address - The hostname or IP address and port number of the Traffic Manager instance configuration REST API. If left blank, it defaults to :9070. The rest_address property must match the instance hostname. If you use a hostname instead of an IP address, you must use a fully qualified domain name. You can modify this property only for a nonmanaged Traffic Manager instance (or in a database-only request).

snmp_address - The hostname or IP address and port number of the Traffic Manager instance SNMP responder. This setting enables you to set the SNMP address used for metering. If you use a hostname instead of an IP address, you must use a fully qualified domain name. You can modify this property only for a nonmanaged Traffic Manager instance (or in a database-only request).
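The sketch below shows what a request to register a nonmanaged instance might look like using Python's requests library. The property names and the ?managed=false parameter come from this section; the Services Controller address, credentials, API path, and all instance details are placeholders, so treat this as an illustration rather than a copy-and-paste recipe.

import requests

BASE = "https://myssccontroller.mydomain.com:8000"   # placeholder host and REST port
AUTH = ("admin", "password")                         # placeholder SSC credentials

body = {
    "owner": "ACME Corp",
    "stm_feature_pack": "STM-U-CSP-200-FP",
    "license_name": "flexlic1",
    "config_options": "snmp!community=public",
    "bandwidth": 100,
    "management_address": "stm-ext1.mydomain.com",    # must be an FQDN
    "admin_username": "admin",
    "admin_password": "instance-admin-password",
    "rest_address": "stm-ext1.mydomain.com:9070",
    "snmp_address": "stm-ext1.mydomain.com:161",
}

# ?managed=false marks the new instance as nonmanaged. The instance path
# shown here is an assumption based on the nonservice API's instance resources.
resp = requests.put(
    BASE + "/api/tmcm/1.4/instance/stm-ext1",
    params={"managed": "false"},
    json=body,
    auth=AUTH,
    verify=False,   # adjust certificate verification to your HTTPS setup
)
resp.raise_for_status()
print(resp.json())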

Deploying a Managed Instance

After you have completed preparations to deploy a managed Traffic Manager instance, you can deploy it to the instance host.

To deploy a managed Traffic Manager instance

1. Choose a name for the instance. You must use a valid alphanumeric name suitable for use as a directory name and as part of a URI. For example, stm1.

2. Choose a Traffic Manager instance host to which you want to deploy the instance. If you are deploying to a container, this is the fully qualified domain name of your LXC container.

3. To create an instance resource, perform a PUT request. For example, for an instance called stm1:

This request supports a URL parameter, ?managed=true, to indicate that this is a managed instance. However, this is the default setting, and it can be omitted for managed instances. Use the following JSON structure as the body of your request:

{
    "owner": "ACME Corp",
    "stm_version": "9.7",
    "host_name": "1h1.mydomain.com",
    "container_name": "1c1.mydomain.com",
    "container_configuration": "{\"gateway\": \"XX.XX.XX.XX\" }",
    "config_options": "admin_ui=yes port_offset=1",
    "cpu_usage": "0",
    "stm_feature_pack": "STM-U-CSP-200-FP",
    "bandwidth": 100,
    "tag": "",
    "license_name": "flexlic1",
    "management_address": "1m1.mydomain.com"
}

In this example:

- XX.XX.XX.XX is the IP address of your gateway.
- The container_name and container_configuration properties are not required when deploying an instance that is not in a container.
- 1c1.mydomain.com is the fully qualified domain name of an LXC container.
- 1m1.mydomain.com is the fully qualified domain name of the management host. If you are using an LXC container, this is the same as 1c1.mydomain.com. In all other cases, this is the same as 1h1.mydomain.com.
- tag is a text property which provides an alternative way of referring to an instance. Unlike the unique ID for an instance, the tag value can be changed or reused, subject to some restrictions. See Understanding the Tag Property on page 49. You cannot set the tag property on a service instance.

The REST API response indicates that the instance is scheduled to deploy.

4. To poll the Services Controller until the instance is successfully deployed, perform a GET request for the URI of the instance that you created.

The response to this request contains a JSON structure representing the instance resource. This contains additional properties, one of which gives the status of the instance. For example:

{
    "owner": "ACME Corp",
    "stm_version": "9.7",
    "status": "Idle"
}

The status property changes to reflect the current state of the instance deployment. After deployment, the Services Controller sets the status to Idle. That is, the instance does not start automatically.

5. To start the instance, perform a PUT request to the instance URI. Use the following JSON structure as the body of your request:

{
    "status": "Active"
}

6. To poll the Services Controller until the instance is successfully activated, repeat the GET request. During this process, the Services Controller sets the status to Starting. After completion, the status is set to Active.
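A minimal end-to-end sketch of steps 3 through 6, again using Python's requests library purely for illustration, is shown below. The JSON bodies mirror the examples above (here for an instance deployed outside a container); the Services Controller address, credentials, and exact instance URI are assumptions, and error handling is reduced to a simple polling loop.

import time
import requests

BASE = "https://myssccontroller.mydomain.com:8000"   # placeholder host and REST port
AUTH = ("admin", "password")                         # placeholder SSC credentials
INSTANCE = BASE + "/api/tmcm/1.4/instance/stm1"      # assumed instance resource URI

def wait_for_status(target, timeout=600, interval=10):
    """Poll the instance resource until its status reaches the target value."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = requests.get(INSTANCE, auth=AUTH, verify=False).json()["status"]
        if status == target:
            return status
        time.sleep(interval)
    raise TimeoutError("instance did not reach status %r" % target)

# Step 3: create the instance; deployment is scheduled by the controller.
deploy_body = {
    "owner": "ACME Corp",
    "stm_version": "9.7",
    "host_name": "1h1.mydomain.com",
    "stm_feature_pack": "STM-U-CSP-200-FP",
    "license_name": "flexlic1",
    "bandwidth": 100,
    "management_address": "1h1.mydomain.com",
}
requests.put(INSTANCE, json=deploy_body, auth=AUTH, verify=False).raise_for_status()

# Step 4: poll until deployment finishes; the instance is left Idle.
wait_for_status("Idle")

# Steps 5 and 6: start the instance and poll until it is Active.
requests.put(INSTANCE, json={"status": "Active"}, auth=AUTH, verify=False).raise_for_status()
wait_for_status("Active")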


CHAPTER 5 Configuring Load Balancing as a Service

This chapter describes how to configure Load Balancing as a Service (LBaaS). It includes the following sections:

- Overview of LBaaS on page 59
- Prerequisites for Using LBaaS on page 63
- Creating and Starting an LBaaS Service on page 66
- LBaaS REST API Reference on page 73

Overview of LBaaS

LBaaS is a service template provided by the Services Controller (SSC) REST API. LBaaS enables you to create and configure generic load-balancing services, with multiple front-end IP addresses shared across a high-availability cluster of automatically deployed Traffic Manager service instances, and load balancing across a group of back-end nodes. A Traffic Manager instance that is deployed for a service (such as LBaaS) is called a service instance. When an LBaaS service is created and activated, a defined number of service instances are created.

The Services Controller performs health monitoring of service instances. It automatically ejects failed instances from the service cluster and, where resources allow, deploys additional instances in their place. The deployment strength of a service is a percentage measure of the number of deployed instances against the required number of instances. When a service instance fails, this failure is reflected by a reduction in the deployment strength of its service. Additionally, an email is sent, and potentially an administrator task is raised.
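As a rough illustration of the deployment strength measure described above, the short calculation below expresses it as the percentage of required instances that are currently deployed. The exact rounding used by the Services Controller is not documented here, so treat this only as an approximation.

def deployment_strength(deployed_instances, required_instances):
    """Approximate deployment strength as a whole-number percentage."""
    if required_instances == 0:
        return 0
    return round(100 * deployed_instances / required_instances)

print(deployment_strength(2, 3))   # 67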

You can configure the following LBaaS elements.

Figure 5-1. LBaaS Configurable Elements

LBaaS provides Traffic Manager instance co-existence, such that multiple services can place instances on the same hosts.

Figure 5-2. LBaaS Provides Traffic Manager Co-Existence

Enterprise Licensing for LBaaS

Enterprise licensing enables you to prepay for blocks of available bandwidth for a Traffic Manager SKU (Stock Keeping Unit) and for a specified time period. The licensed bandwidth for a SKU is calculated as the total bandwidth of all the valid installed Bandwidth Packs for that SKU.

With Enterprise Licensing, the Services Controller rejects REST API requests that would result in a Services Controller state where instances and services using a particular SKU consume more bandwidth than has been licensed for that SKU. For example, the following requests might be rejected due to insufficient bandwidth:

- Increasing the value of the bandwidth property in an instance or service resource.
- Deploying an additional instance or increasing the requested number of instances to be deployed for a service.
- Altering the stm_feature_pack property value of an instance or service to a feature pack based on a different SKU.
- Deleting, deactivating, or upgrading an existing Bandwidth Pack.

The total bandwidth that a service consumes is calculated as the product of two values: num_instances x bandwidth. The bandwidth used by the service is reserved from the Bandwidth Pack for the SKU that is linked by the stm_feature_pack property. As with Services Controller nonservice instances, you cannot make changes to these property values that exceed your licensed bandwidth.

LBaaS Service Life Cycle

Each service has a status property that identifies and modifies the service in terms of where it is in the service life cycle. Some of these statuses are stable states, whereas others are transient states that indicate the service is undergoing changes. The possible values of the status property are listed below.

Creating (Transient) - The service is being created.

Inactive (Stable) - The service has just been created but has not been activated yet, so it has no service instances. Alternatively, the service has been stopped.

Starting (Transient) - The service is in the process of starting by deploying instances into a service cluster before they are configured.

Active (Stable) - The service is operational. Following a successful start, the service transitions to the Active state.

Stopping (Transient) - The service is in the process of stopping, with all the instances in the service cluster being deactivated and deleted. Following a successful stop, the service transitions to the Inactive state.

Deleting (Transient) - The service is being permanently decommissioned and all instances in the service cluster are being deleted. A successful deletion results in the service transitioning to the Deleted state.

Deleted (Stable) - The service and its service cluster instances have been permanently deleted.

Changing (Transient) - The service is currently processing a configuration change: either an amendment to the number of instances required for the service cluster, or a change to the configuration of the ELBaaS service itself. No further configuration changes are accepted until this configuration change has been completely processed. A successful change results in the service transitioning to the Active state. This state can also arise as the result of a failover following an instance failure.

Config Failed (Stable) - A failure state has been encountered where the configuration objects required to configure the LBaaS service could not be applied to the Traffic Manager cluster.

For transitions that are directly under your control (that is, Inactive > Active, Active > Inactive, and Inactive > Deleted), you make a REST API PUT request to the service resource, setting the status property to the target status. The service does not move directly to the target status, but goes into a transitional state, such as Starting, Stopping, Deleting, or Changing.

Service Failover Mechanism

In the Services Controller, a service is an automatically deployed and configured cluster of Traffic Manager instances (that is, a service cluster). When an LBaaS service is created and activated, the Services Controller deploys and maintains a configured service cluster with a given number of instances.

The Services Controller service manager relies on the monitoring feature of the Services Controller to periodically check the health of each service instance and make sure it is still accessible to the Services Controller. If a service instance remains inaccessible for longer than the number of seconds specified in the instance_health_fail_trigger service property, then a failover is triggered for the service. With failover, the Services Controller automatically performs the following actions for each failed instance:

1. Stops the instance.
2. Removes the instance from the service cluster.
3. Deletes the instance.

If any of these actions are unsuccessful, the instance is ejected (that is, disassociated from the rest of the service cluster), an administrator task is automatically generated, and an email notification of the outstanding task is sent.

If there is available capacity on one of the service hosts, a new instance is automatically deployed and added to the service cluster. If there is no available capacity on any service host, the service continues to run with fewer instances in the service cluster: the service resource property deployment_strength is automatically adjusted downwards to represent this reduction in the size of the service cluster. As with instance failures detected by Services Controller monitoring for nonservice instances, the Services Controller sends an email notification of the detected failure.

If the Services Controller is unsuccessful in stopping, removing, and deleting the instance, an administrator task is automatically generated and an email notification of the outstanding task is sent. In this case, you must manually resolve the problem. The Services Controller also sends a daily summary of remaining outstanding administrator tasks.

Configuration of a Service Failover Mechanism

The Services Controller automatically monitors the health of service cluster instances, using any Services Controller in a Services Controller cluster that has monitoring enabled. This feature is enabled by default, but can be deactivated on a given Services Controller using the monitoring mode settings in the REST API manager resource. Monitoring in the Services Controller has the following configurable values:

- Monitor Interval - The number of seconds between each individual health check for a given resource.
- Failure Period - The number of seconds that must elapse during which no successful health checks are made before the monitored resource is considered failed.

Services have instance_health_monitor_interval and instance_health_failure_trigger properties corresponding to these two configurable values, so that monitoring for instances in a service cluster is configured on a per-service basis. If you do not specify values for these service resource properties, the default values from the /api/tmcm/1.4/settings/monitoring resource are used for all instances in the service cluster.

Prerequisites for Using LBaaS

Before creating an LBaaS service, you must set up the Services Controller with these supporting resources:

- Feature Pack - A resource based on one of the standard preconfigured Traffic Manager SKUs, determining the features available for instances that use that feature pack. Currently, only the STM-400 SKU supports LBaaS.
- FLA License - Each service instance must have an appropriate Traffic Manager Flexible Licensing Architecture (FLA) license installed to ensure that it is licensed by the Services Controller. The FLA license used by service instances is the same as the one used by nonservice instances.
- Stingray Traffic Manager (STM) Version - A resource that represents the Traffic Manager tarball from which service instances are installed. The Services Controller validates any Traffic Manager version that you select for an LBaaS service to ensure it is an acceptable version.
- Instance Host - A resource representing a host on which service instances are deployed. In most respects, an instance host used for service instances is the same as one used for nonservice instances, but there are some differences. Instance hosts can be used either exclusively for nonservice instances or exclusively for service instances, but not for both. Hosts that are intended for use exclusively for service instances are referred to as service hosts. When allocated, service host resources must be set up with the following additional properties:
  - usage_info - Specify servicemanager. This indicates that the host is earmarked for service instances only; the request fails if the host already has normal instances.
  - size - Specify an integer for the number of LBaaS instances that can fit on the service host. The Services Controller does not deploy more instances beyond this value. For LBaaS, each deployed service instance counts as having a volume of 1, so a service host with a size of 5 can accommodate up to five LBaaS instances. With LBaaS, size is a required property for a host resource.
  - cpu_cores - Specify the set of cores available on the host (in taskset format). The Services Controller uses this property to balance CPU affinity of Traffic Manager instance processes belonging to multiple services.
If you do not set this property, the Linux kernel determines process scheduling, which can result in reduced achievable densities.

  - retained_info_dir - Specify a directory to store support information about instances that are culled. This directory will contain STM TSRs, if it is possible to generate them, or a copy of the STM logs if it was not possible to create a TSR.

Typically, a Services Controller requires multiple service hosts to manage LBaaS services.

Prerequisite Examples

For these examples, you deploy one or more LBaaS services, using:

- a Services Controller installation running on host myssccontroller.mydomain.com, with its REST API running on port
- an STM-400 SKU based feature pack.
- instances based on Traffic Manager v9.7 or later.
- a Riverbed-generated Traffic Manager FLA license, fla-ssl.
- a group of three service hosts, myhost-01.mydomain.com, myhost-02.mydomain.com, and myhost-03.mydomain.com, to host your service instances.

This example assumes that passwordless SSH between myssccontroller.mydomain.com and these three service hosts has been set up, and that the working location directories and installation root directories have been set up in advance (as would be the case with any other instance host).

Creating Feature Packs

This procedure is the same as for any feature_pack resource. Currently, only the STM-400 SKU supports LBaaS.

Creating an FLA License Resource

This procedure is the same as for any Traffic Manager FLA license resource. It creates an FLA license resource that can be used for normal or service instances.

To create an FLA license resource

1. Perform a PUT request. Include the following JSON structure as the body of your request:

{
    "info": "This is the resource for the fla-ssl license"
}

Creating a Traffic Manager Version

This procedure is the same as for any Traffic Manager version resource. It creates a Traffic Manager version resource that can be used for normal or service instances.

To create a Traffic Manager version resource

1. Perform a PUT request. Include the following JSON structure as the body of your request:

{
    "version_filename": "zeustm_97_linux-x86_64.tgz",
    "info": "version 9.7"
}

Creating Instance Hosts

This procedure is similar to creating any other instance host resource, but it creates a resource that is dedicated to hosting only service instances.

To create instance hosts

1. To create the first instance host, perform a PUT request. Include the following JSON structure as the body of your request:

{
    "cpu_cores": "0-3",
    "work_location": "/space/workspace",
    "install_root": "/space/install",
    "username": "root",
    "usage_info": "servicemanager",
    "retained_info_dir": "/space/retain",
    "size": 5
}

2. To create the second instance host, perform a PUT request. Include the following JSON structure as the body of your request:

{
    "cpu_cores": "0-3",
    "work_location": "/space/workspace",
    "install_root": "/space/install",
    "username": "root",
    "usage_info": "servicemanager",
    "retained_info_dir": "/space/retain",
    "size": 5
}

3. To create the third instance host, perform a PUT request. Include the following JSON structure as the body of your request:

{
    "cpu_cores": "0-3",
    "work_location": "/space/workspace",
    "install_root": "/space/install",
    "username": "root",
    "usage_info": "servicemanager",
    "retained_info_dir": "/space/retain",
    "size": 10
}

The setting of usage_info to servicemanager indicates that these hosts are intended for exclusive use for service deployments. The size property indicates how many LBaaS service instances can be accommodated on each host. In these examples, myhost-03 is twice as large as myhost-01 and myhost-02.

Creating and Starting an LBaaS Service

To create and modify LBaaS services, you interact only with the Services Controller REST API. The Services Controller deploys and configures a Traffic Manager cluster based on your specifications, without you needing to worry about the details of the Traffic Manager configuration.

Configuration Options for an LBaaS Service

To create an LBaaS service, you define your requirements for the service through the following set of configurable values:

- Number of Traffic Manager instances to deploy into a service cluster
- Allocated maximum bandwidth per Traffic Manager instance
- Traffic Manager feature pack
- Traffic Manager version
- Front-end IP addresses and ports
- Back-end IP addresses and ports
- The protocol to be balanced (HTTP, HTTPS, TCP, SSL)
- Load-balancing algorithm to use (such as random, round robin, or perceptive)
- Secure Socket Layer (SSL) off-load and associated certificate/key pairs (optional)
- Back-end node health monitoring (optional)
- Session persistence options (optional)

Note: If you omit the Traffic Manager version (that is, the stm_version property) from the JSON structure, the version is automatically selected by the Services Controller; the latest of all the available version resources is selected. If you omit a feature pack, a feature pack is chosen that uses a SKU that satisfies all of the licensing requirements of the service (as defined by the service configuration) and that has the maximum available bandwidth.

Creating an LBaaS Service

To create an LBaaS service, perform this task.

To create an LBaaS service

To create the service, perform a PUT request. Include the following JSON structure as the body of your request:

{
    "service_type": "lbaas",
    "status": "inactive",
    "stm_feature_pack": "stm400fp",
    "num_instances": 1,
    "license_name": "fla-ssl",
    "instance_bandwidth": 1000,
    "protocol": "http",
    "back_end_nodes": ["XX.XX.XX.XX:80", "XX.XX.XX.XX:80"],
    "front_end_port": 80,
    "front_end_ips": ["YY.YY.YY.YY", "YY.YY.YY.YY", "YY.YY.YY.YY", "YY.YY.YY.YY", "YY.YY.YY.YY"]
}

The following table summarizes the individual properties in this service resource.

"service_type":"lbaas" - Specifies the type of service to create based on a template. The template determines the other properties that must be provided when creating the service resource. Different templates have different requirements. For example, an LBaaS service requires you to specify the protocol to be load balanced; a different template for a more specific service type might not require this property. Currently, only the LBaaS service template is available, which is used for both LBaaS and ELBaaS services.

"status":"inactive" - All services start in the Inactive state. For details, see LBaaS Service Life Cycle on page 61.

"stm_feature_pack":"stm400fp" - Specifies the name of the feature pack to be applied to all instances in the service cluster. You can provide a value for this property when creating a service or allow the Services Controller to auto-select a suitable resource for the template in use. The Services Controller automatically determines an eligible feature pack if this property is omitted. Riverbed recommends that you select the feature pack carefully, as it can have cost or capacity implications, depending on the licensing model in use. The Services Controller validates any feature pack that you specify for an LBaaS service to ensure that it provides sufficient features for the LBaaS service configuration.

"num_instances":1 - Specifies the number of instances that the Services Controller can deploy and maintain in a healthy state in this service cluster. When you initially create a service, no instances are deployed until the service is set to the Active state.

"license_name":"fla-ssl" - Specifies the FLA license resource used for instances in this service cluster.

"instance_bandwidth":1000 - Specifies the bandwidth allocation in Mbps given to each service instance. For Enterprise licensed Services Controller deployments, the service's footprint in the available pool of licensed bandwidth is num_instances x instance_bandwidth. For details, see Enterprise Licensing for LBaaS on page 61.

"protocol":"http" - Specifies the protocol to be balanced by this LBaaS service.

"back_end_nodes":["XX.XX.XX.XX:80", "XX.XX.XX.XX:80"] - Specifies the set of back-end nodes in the server pool. These nodes are used when balancing incoming requests across the ELBaaS service. Replace XX.XX.XX.XX with the actual IP addresses.

"front_end_port":80 - Specifies the port number used with the front-end IP addresses.

"front_end_ips":["YY.YY.YY.YY", "YY.YY.YY.YY", "YY.YY.YY.YY", "YY.YY.YY.YY", "YY.YY.YY.YY"] - Specifies the set of front-end IP addresses to be raised by the LBaaS service instances. These are raised using one of the High Availability features of Traffic Manager, Traffic IP groups, so that these IP addresses and ports are automatically shared between instances in a service cluster, and passed between instances in the case of an instance failure. Replace YY.YY.YY.YY with the actual IP addresses.

This example represents only a partial example of the properties you can set for the LBaaS service. For details, see LBaaS REST API Reference on page 73.

Starting and Stopping an LBaaS Service

You can start or stop an LBaaS service in a stable state by setting its status property.

To start the LBaaS service

Perform a PUT request. Include the following JSON structure as the body of your request:

{
    "status": "active"
}

The service switches to the transitional state Starting while it is deploying, clustering, and activating service instances, before finally switching to the state Active.

To stop the LBaaS service

Perform a PUT request. Include the following JSON structure as the body of your request:

{
    "status": "inactive"
}

The service switches to the transitional state Stopping while it is deactivating and deleting service instances, before switching to the state Inactive.

For more information about the service life cycle, see LBaaS Service Life Cycle on page 61.
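Putting the two requests in this section together, the following Python sketch creates the service in the Inactive state and then starts it. The JSON body is the one shown above; the Services Controller address, credentials, service URI, and IP addresses are placeholders and should be adapted to your deployment.

import requests

BASE = "https://myssccontroller.mydomain.com:8000"     # placeholder host and REST port
AUTH = ("admin", "password")                           # placeholder SSC credentials
SERVICE = BASE + "/api/tmsm/1.0/service/myservice"     # assumed service resource URI

service_body = {
    "service_type": "lbaas",
    "status": "inactive",
    "stm_feature_pack": "stm400fp",
    "num_instances": 1,
    "license_name": "fla-ssl",
    "instance_bandwidth": 1000,
    "protocol": "http",
    "back_end_nodes": ["192.0.2.10:80", "192.0.2.11:80"],   # illustrative addresses
    "front_end_port": 80,
    "front_end_ips": ["198.51.100.1", "198.51.100.2"],      # illustrative addresses
}

# Create the service; it starts in the Inactive state.
requests.put(SERVICE, json=service_body, auth=AUTH, verify=False).raise_for_status()

# Activate it; the service passes through Starting before becoming Active.
requests.put(SERVICE, json={"status": "active"}, auth=AUTH, verify=False).raise_for_status()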

Viewing Service Instances

Once services are created, you can use the REST API to view all Traffic Manager service instances and (separately) their individual details. Two parameter switches are required for these REST API calls:

- ?show_service=true lists Traffic Manager service instances instead of Traffic Manager instances.
- ?override_service_protection=true includes a full list of properties for a Traffic Manager service instance.

Examples of the use of these switches are shown in the processes that follow.

To view all service instances

To view all Traffic Manager service instances on an SSC, perform a GET request, including the ?show_service=true parameter switch in the URI.

The response to this request contains a JSON structure representing all defined Traffic Manager service instances. For example:

{
    "children": [{
        "href": "/api/tmcm/1.3/instance/lbaas_service1_1",
        "name": "LBaaS_service1_1"
    }, {
        "href": "/api/tmcm/1.3/instance/lbaas_service1_2",
        "name": "LBaaS_service1_2"
    }, {
        "href": "/api/tmcm/1.3/instance/lbaas_service1_3",
        "name": "LBaaS_service1_3"
    }, {
        "href": "/api/tmcm/1.3/instance/lbaas_service1_4",
        "name": "LBaaS_service1_4"
    }, {
        "href": "/api/tmcm/1.3/instance/lbaas_service1_5",
        "name": "LBaaS_service1_5"
    }, {
        "href": "/api/tmcm/1.3/instance/lbaas_service1_6",
        "name": "LBaaS_service1_6"
    }]
}

To view service instance properties

1. To view basic properties for a Traffic Manager service instance, use the REST API as you would for a Traffic Manager instance. For example, for a service instance called LBaaS_service1_6, an example response is:

{
    "status": "Active",
    "config_options": "port_offset=0 num_children=1",
    "metrics_peak_rps": 0,
    "cpu_usage": "",
    "license_name": "fla-ssl",
    "creation_date": " :41:20",
    "bandwidth": 1000,
    "tag": "",
    "stm_feature_pack": "FP1",
    "cluster_id": "cluster_service1",
    "container_name": "",
    "owner": "Service:service1",
    "metrics_throughput": 0,
    "metrics_date": " :00:00",
    "metrics_peak_throughput": 0,
    "stm_version": "9.7",
    "host_name": "myhost-01.mydomain.com",
    "container_configuration": "",
    "metrics_peak_ssl_tps": 0
}

Note: You cannot set the tag property on a service instance. It is always an empty string.

2. To view an expanded set of properties for the same Traffic Manager service instance, perform a GET request, including the ?override_service_protection=true parameter switch in the URI. For example:

LBaaS_service1_6?override_service_protection=true

An example response:

{
    "status": "Active",
    "config_options": "port_offset=0 num_children=1",
    "metrics_peak_rps": 0,
    "cpu_usage": "",
    "license_name": "fla-ssl",
    "rest_address": "myssccontroller:50000",
    "snmp_address": "myssccontroller:50500",
    "creation_date": " :41:20",
    "bandwidth": 1000,
    "tag": "",
    "stm_feature_pack": "FP1",
    "cluster_id": "cluster_service1",
    "container_name": "",
    "owner": "Service:service1",
    "service_username": "servuser",
    "metrics_throughput": 0,
    "service_password": "fpvgnqvnr7",
    "metrics_date": " :00:00",
    "admin_username": "admin",
    "metrics_peak_throughput": 0,
    "stm_version": "9.7",
    "host_name": "myhost-01.mydomain.com",
    "management_address": "myssccontroller",
    "container_configuration": "",
    "ui_address": "myssccontroller:51500",
    "metrics_peak_ssl_tps": 0,
    "admin_password": "fpvgnqbvr7"
}

Note: You cannot set the tag property on a service instance. It is always an empty string.
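The two parameter switches described above can be supplied as ordinary query parameters. The sketch below, written in Python with placeholder connection details, lists the service instances and then fetches the expanded property set for one of them; the instance paths follow the hrefs shown in the example responses, but should be checked against your own API version.

import requests

BASE = "https://myssccontroller.mydomain.com:8000"   # placeholder host and REST port
AUTH = ("admin", "password")                         # placeholder SSC credentials

# List Traffic Manager service instances rather than ordinary instances.
listing = requests.get(
    BASE + "/api/tmcm/1.3/instance/",
    params={"show_service": "true"},
    auth=AUTH,
    verify=False,
).json()

for child in listing["children"]:
    print(child["name"], child["href"])

# Fetch the expanded property set for one service instance.
detail = requests.get(
    BASE + "/api/tmcm/1.3/instance/LBaaS_service1_6",
    params={"override_service_protection": "true"},
    auth=AUTH,
    verify=False,
).json()
print(detail["status"], detail["management_address"])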

Changing the LBaaS Service Cluster Size

The Services Controller allows you to easily scale up and scale down the number of Traffic Manager instances for an LBaaS service. When you define an LBaaS service, you specify how many Traffic Manager instances you require in the service cluster (that is, the num_instances property for the service resource). Assuming sufficient resources have been made available, the Services Controller deploys the requested number of Traffic Manager instances into a cluster and configures the cluster instances to provide the service. You can choose to increase or decrease the number of required instances after you create the service; the Services Controller adds or removes cluster members as necessary.

While it is possible for multiple Traffic Manager instances to coexist on the same service host, only one Traffic Manager instance from a given cluster is placed on any one host. Thus, the size of a service cluster is limited by the total number of service hosts that you have made available to the Services Controller.

In some cases, the Services Controller has enough service hosts available, but it is unable to deploy the full number of instances that you have requested for an LBaaS service cluster. For example, a service host might be temporarily down or inaccessible via the network. In this case, to determine what percentage of the desired deployment is in place, a service resource provides a property called deployment_strength. The deployment_strength property has a range from 0 to 100, representing the percentage of the desired cluster size (that is, num_instances) that has been successfully deployed. If, following a service change, the value of deployment_strength is less than 100, this indicates that you must manually increase the availability of service hosts. For example, if the deployment_strength is 0, the service has no deployed instances, traffic is not being transferred, and immediate action must be taken. You should monitor the deployment_strength for a service, particularly after service life cycle changes or alterations to the value of the num_instances property.

Occasionally, when a service has a deployment strength of less than 100 and new host resources become available (for example, through the addition of a new service host to the inventory or the cleanup of a failed deletion), you can prompt the under-deployed service to make use of these newly available resources. To do this, you PUT an empty JSON object to the service resource.

To prompt an under-deployed service to use newly available host resources

Perform a PUT request: /api/tmsm/1.0/service/myservice

Include an empty JSON structure as the body of your request: {}

This action sets the service status to Changing and prompts the service to check whether it can deploy further service instances using the new resources.

To change the number of service instances in a cluster

To change the number of service instances in a cluster to 3, perform a PUT request. Include the following JSON structure as the body of your request:

{
    "num_instances": 3
}

For an active service currently with one instance, this example sets the service status to Changing, deploys two more instances into the service cluster, and then reconfigures the service cluster to allow the front-end IP addresses to be hosted by those instances (which are deployed on separate hosts), before setting the service to Active status once more.

To decrease the number of requested service instances in a cluster

1. To decrease the number of requested service instances in a cluster to 2, perform a PUT request. Include the following JSON structure as the body of your request:

{
    "num_instances": 2
}

For an active service currently with three instances, this example sets the service status to Changing, removes one instance from the service cluster, and then reconfigures the service cluster accordingly before setting the service to Active status once more.

Deleting an LBaaS Service

If a service is no longer required, you can delete it. This process removes all the deployed cluster instances and permanently marks the service resource as Deleted.

To delete an LBaaS service

1. To stop the service, perform a PUT request. Include the following JSON structure as the body of your request:

{
    "status": "inactive"
}

2. To set the service status to Deleted, perform a PUT request. Include the following JSON structure as the body of your request:

{
    "status": "deleted"
}

The service enters the Deleting state while the instances in the service cluster are deleted; once this task is complete, it enters the Deleted state.
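The two steps above can also be scripted. The following Python sketch stops the service and then marks it Deleted, polling between the two requests so that the delete is not issued while the service is still Stopping. The connection details and service URI are placeholders, and the simple polling loop stands in for whatever error handling your environment needs.

import time
import requests

BASE = "https://myssccontroller.mydomain.com:8000"     # placeholder host and REST port
AUTH = ("admin", "password")                           # placeholder SSC credentials
SERVICE = BASE + "/api/tmsm/1.0/service/myservice"     # assumed service resource URI

def wait_for(target, interval=10, attempts=60):
    """Poll the service resource until its status matches the target value."""
    for _ in range(attempts):
        status = requests.get(SERVICE, auth=AUTH, verify=False).json()["status"]
        if status == target:
            return
        time.sleep(interval)
    raise TimeoutError("service did not reach status %r" % target)

# Step 1: stop the service (Stopping -> Inactive).
requests.put(SERVICE, json={"status": "inactive"}, auth=AUTH, verify=False).raise_for_status()
wait_for("Inactive")

# Step 2: delete the service (Deleting -> Deleted).
requests.put(SERVICE, json={"status": "deleted"}, auth=AUTH, verify=False).raise_for_status()
wait_for("Deleted")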

LBaaS REST API Reference

This section describes the LBaaS REST API resources and properties.

Note: The LBaaS REST API is a service-specific API. It is not part of the Services Controller nonservice REST API used for resources such as instances, instance hosts, and feature packs. The Services Controller nonservice REST API is accessed through /api/tmcm/1.4/, whereas the ELBaaS REST API is accessed through /api/tmsm/1.0/.

Authentication

All requests to the Services Controller REST API must be authenticated by means of HTTP Basic Authentication. You must create an initial Services Controller user outside of the REST API, but you can create and manage other users using the REST API. You must access the Services Controller REST API through HTTPS. Client certificates are not checked for validity, and HTTPS is used only for encryption and to allow the FLA license to verify the server identity.

All LBaaS inventory database resources are provided through a common base URI that identifies the root of the resource model:

https://<host>:<port>/api/tmsm/1.0/

where <host> is the hostname of the server containing the inventory database, and <port> is the port that the REST API is published on. The following sections describe the inventory resources at this URI.

API Root

A GET on this resource returns an object which contains a property children. This property is a list of names and hrefs for child entries beneath /api/tmsm/1.0/. Each of these child resources contains collections of related resources.

Example

Request:

GET /api/tmsm/1.0/

Response:

{
    "children": [
        {
            "href": "/api/tmsm/1.0/template",
            "name": "template"
        },
        {
            "href": "/api/tmsm/1.0/service",
            "name": "service"
        },
        {
            "href": "/api/tmsm/1.0/task",
            "name": "task"
        }
    ]
}
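A short Python sketch of an authenticated request against the API root, matching the example request and response above, is shown below. The hostname, port, and credentials are placeholders; certificate verification is disabled here only to keep the illustration short and should be handled according to your own HTTPS setup.

import requests

BASE = "https://myssccontroller.mydomain.com:8000"   # placeholder host and REST port
AUTH = ("admin", "password")                         # placeholder SSC credentials

# HTTP Basic Authentication over HTTPS, as required by the REST API.
root = requests.get(BASE + "/api/tmsm/1.0/", auth=AUTH, verify=False).json()

for child in root["children"]:
    print(child["name"], "->", child["href"])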

The following table summarizes the child resources.

template - Contains a property children that is a list of the names and hrefs of all available service templates. Currently there is only an LBaaS service template available. This template is used for both LBaaS and ELBaaS services.
service - Contains a property children that is a list of the names and hrefs of all user created services.
task - Contains a property children that is a list of the names and hrefs of administrator tasks. The Services Controller attempts to automatically maintain the service cluster for the service user by deploying, deleting, and failing over individual instances within the cluster automatically; when an instance cannot be cleaned up automatically, a task is created for the cleanup operation that must be performed manually.

template Resource

The /api/tmsm/1.0/template resource contains the children property, which is a list of the names and hrefs of all available service templates. Currently the only service template is the LBaaS template. This template is used for both LBaaS and ELBaaS services. The properties in the template resource cannot be updated; its purpose is to provide information.

Example Request

GET /api/tmsm/1.0/template/

Response

{ "children" : [ { "href" : "/api/tmsm/1.0/template/lbaas", "name" : "LBaaS" } ] }

Templates enable you to configure Traffic Manager clusters to provide useful services without knowing the details of Traffic Manager configuration objects and without knowing how to deploy individual Traffic Manager instances through the Services Controller. The Services Controller combines template-specific parameter values with the service template they correspond to, and converts them into appropriate Traffic Manager configuration objects. It then applies these objects to an automatically deployed Traffic Manager cluster. For example, an LBaaS service requires you to provide a set of IP addresses on the front-end to accept incoming requests, a set of IP addresses and ports for back-end servers, and the protocol that is being load balanced.

Properties

Every service resource is composed of a combination of common properties, which are provided for a service resource regardless of the service type, and template-specific properties. The common properties include a mandatory service_type property that names the template resource on which the service is based. The value of service_type determines the template-specific properties that are available to you when you create the service.

85 LBaaS REST API Reference Configuring Load Balancing as a Service Each service template resource supports this by describing parameters that correspond to template-specific properties that must be provided when you create a service based on that template. Different services require different parameters depending on their purpose. The purpose of template resources is to enable you to determine what a valid set of property values might be for a given service resource. The parameter descriptions, along with some other template information (such as the minimum version of Traffic Manager with which the service is compatible) are encoded in a JSON object in the following table. Property description long_description min_stm_version parameters Description Brief description of the service type supported by the template. More verbose description of the service type supported by the template. The version number of the earliest Traffic Manager version compatible with this service template. A list of JSON objects, each representing a template-specific parameter for this template type. Each object contains a number of repairer entries specifying metadata for the parameter in question. Parameter metadata objects can contain some or all of the following keys. Property name description required type default depends choices children Description The parameter name. This name maps directly to a service resource property name in service resources based on this template. A brief description. Whether you must supply a value for the corresponding resource property when creating a service resource based on this template. What kind of value is expected for the corresponding resource property. If present, the default value that is applied for the corresponding resource property if you do not supply one when you create a service resource based on this template. A JSON dictionary with keys as the possible values of the property and for those given values, which are the list of values that another resource property must have. For example, { "children" : { }, "default" : false, "depends" : { "true" : { "protocol" : [ "http","tcp"] } }, "description" : "Enable SSL encrypt to the backend nodes for service", "name" : "ssl_encrypt", "required" : false, "structure" : null, "type" : "boolean" } Thus if ssl_encrypt is true, then the protocol must be either http or tcp. Where the type is enum, this key contains a list of the allowed values for the corresponding resource property when creating a service resource based on this template. If present, this property describes dependencies between parameters for this template. An example from the LBaaS template is: { "cookie" : ["session_persistence_cookie"]} If the service resource property corresponding to this parameter is set to cookie, then provide a value for the session_persistence_cookie property as well. Stingray Services Controller User s Guide 75

86 Configuring Load Balancing as a Service LBaaS REST API Reference Property regex structure Description If present, this property holds a regex that is used to validate any value provided for the corresponding service resource property. If set to null, this property indicates that the resource property is a single value; if set to list. This indicates that the resource property is a list of elements. Example Request GET /api/tmsm/1.0/template/lbaas Response An example response is shown below: { "description": "Load Balancing as a Service", "long_description": "This template allows the generation of a Load Balancing Service.", "min_stm_version": "9.6", "parameters": [ { "children": {}, "choices": [ "fastest_response_time", "least_connections", "perceptive", "random", "round_robin" ], "default": "round_robin", "depends": {}, "description": "Algorithm used to decide where to distribute traffic to", "name": "lb_algorithm", "required": false, "structure": null, "type": "enum" }, { "children": {}, "choices": [ "http", "https", "tcp", "ssl" ], "depends": {}, "description": "The protocol being balanced, either HTTP, HTTPS, TCP, or SSL.", "name": "protocol", "required": true, "structure": null, "type": "enum" }, { "children": {}, "default": "/", "depends": {}, "description": "The URI path to use in the HTTP monitor test", "name": "monitor_path", "regex": "^/[^ ]*$", "required": false, "structure": null, "type": "regex" }, 76 Stingray Services Controller User s Guide

87 LBaaS REST API Reference Configuring Load Balancing as a Service { "children": {}, "default": 2048, "depends": {}, "description": "Maximum amount of data to read back from a server for a monitoring request", "name": "monitor_resp_length", "required": false, "structure": null, "type": "int" }, { "children": {}, "default": 5, "depends": {}, "description": "The amount of time (in seconds) to wait for a response before marking a health probe as failed", "name": "monitor_timeout", "range": " ", "required": false, "structure": null, "type": "int" }, { "children": {}, "default": 2, "depends": {}, "description": "The period between two consecutive health checks", "name": "monitor_interval", "range": " ", "required": false, "structure": null, "type": "int" }, { "children": {}, "default": false, "depends": { "true": { "protocol": [ "http", "tcp" ] } }, "description": "Enable SSL encryption to the backend nodes for the service", "name": "ssl_encrypt", "required": false, "structure": null, "type": "boolean" }, { "children": {}, "default": "^[234][0-9][0-9]$", "depends": {}, "description": "The regex value that the HTTP monitor test response code should match against", "name": "monitor_status_regex", "required": false, "structure": null, "type": "string" }, { "children": {}, Stingray Services Controller User s Guide 77

88 Configuring Load Balancing as a Service LBaaS REST API Reference "depends": {}, "description": "Cookie name to use for persistence (if session_persistence_type is 'cookie')", "name": "session_persistence_cookie", "regex": "[A-Za-z0-9_.-]+", "required": false, "structure": null, "type": "regex" }, { "children": {}, "default": 3, "depends": {}, "description": "The number of failures before a node is marked as failed", "name": "monitor_failure_threshold", "range": " ", "required": false, "structure": null, "type": "int" }, { "children": { "cookie": [ "session_persistence_cookie" ] }, "choices": [ "asp", "cookie", "ip", "j2ee", "named", "ssl", "transparent", "universal", "x-zeus" ], "depends": {}, "description": "The type of session persistence used for the service", "name": "session_persistence_type", "required": false, "structure": null, "type": "enum" }, { "children": {}, "depends": {}, "description": "The port number on which we listen for incoming connections on the front-end IPs", "name": "front_end_port", "range": " ", "required": true, "structure": null, "type": "int" }, { "children": {}, "depends": {}, "description": "The host header value to use in the HTTP monitor test", "name": "monitor_host_header", "required": false, "structure": null, "type": "string" }, 78 Stingray Services Controller User s Guide

89 LBaaS REST API Reference Configuring Load Balancing as a Service { "children": { "url": [ "session_persistence_redirect_url" ] }, "choices": [ "close", "new_node", "url" ], "default": "new_node", "depends": {}, "description": "Determines what happens in the case of a persistence failure", "name": "session_persistence_failure_mode", "required": false, "structure": null, "type": "enum" }, { "children": {}, "depends": {}, "description": "A set of IP addresses to be raised by the LBaaS service, on which we'll listen for incoming connections", "name": "front_end_ips", "required": true, "structure": "list", "type": "ipaddress" }, { "children": {}, "depends": {}, "description": "Private key, must be set if ssl_offload is in use.", "name": "private_key", "required": false, "structure": null, "type": "string" }, { "children": {}, "default": false, "depends": {}, "description": "Whether or not the monitor should connect using SSL", "name": "monitor_use_ssl", "required": false, "structure": null, "type": "boolean" }, { "children": {}, "depends": {}, "description": "URL to redirect connection to in the case of a persistence failure", "name": "session_persistence_redirect_url", "regex": "^((((https?) (rtsp))://) (sip:))\\s+(:\\d+)?(/\\s*)?", "required": false, "structure": null, "type": "regex" }, { "children": {}, "depends": {}, "description": "A set of IP address/port pairs to which traffic should be balanced", Stingray Services Controller User s Guide 79

90 Configuring Load Balancing as a Service LBaaS REST API Reference "name": "back_end_nodes", "required": true, "structure": "list", "type": "node" }, { "children": { "true": [ "public_certificate", "private_key" ] }, "default": false, "depends": {}, "description": "Enable SSL decryption for the service", "name": "ssl_offload", "required": false, "structure": null, "type": "boolean" }, { "children": { "true": [ "session_persistence_type" ] }, "default": false, "depends": {}, "description": "Enable session persistence for the service", "name": "session_persistence", "required": false, "structure": null, "type": "boolean" }, { "children": {}, "default": true, "depends": {}, "description": "Boolean value, determines whether the connection is closed in the case of a session persistence failure", "name": "session_persistence_failure_delete", "required": false, "structure": null, "type": "boolean" }, { "children": {}, "depends": {}, "description": "The regex value that the HTTP monitor test response body should match against", "name": "monitor_body_regex", "required": false, "structure": null, "type": "string" }, { "children": {}, "depends": {}, "description": "Public certificate, must be set if ssl_offload is in use.", "name": "public_certificate", "required": false, "structure": null, "type": "string" }, 80 Stingray Services Controller User s Guide

91 LBaaS REST API Reference Configuring Load Balancing as a Service { "children": {}, "depends": {}, "description": "The value of the basic auth header to use in a HTTP request, format <username>:<password>'", "name": "monitor_auth", "regex": "^.*?:.*?$", "required": false, "structure": null, "type": "regex" }, { "children": {}, "choices": [ "connect", "http" ], "default": "connect", "depends": {}, "description": "The scheme used (if any) to monitor back end node health", "name": "monitor_type", "required": false, "structure": null, "type": "enum" }, { "children": { "true": [ "monitor_type" ] }, "default": false, "depends": {}, "description": "Enable service health monitoring", "name": "health_monitoring", "required": false, "structure": null, "type": "boolean" } ] } service Resource The /api/tmsm/1.0/service resource contains the children property which is a list of the names and hrefs of all user created services. Example Request GET /api/tmsm/1.0/service/ Response { "children" : [ { "href" : "/api/tmsm/1.0/service/mylbaas1", "name" : "mylbaas1" } ] } Stingray Services Controller User s Guide 81
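A client can walk this collection and fetch each service resource in turn. The sketch below is illustrative only, using the same placeholder port and credentials as earlier examples; status and deployment_strength are common service properties described in the following section.

import requests

CONTROLLER = "https://myssccontroller.mydomain.com:8100"   # port is an assumed placeholder
AUTH = ("admin", "password")
BASE = CONTROLLER + "/api/tmsm/1.0"

# Walk the service collection and report the state of each user-created service.
services = requests.get(BASE + "/service/", auth=AUTH, verify=False).json()["children"]
for entry in services:
    # Each child href is relative to the controller, for example /api/tmsm/1.0/service/mylbaas1.
    svc = requests.get(CONTROLLER + entry["href"], auth=AUTH, verify=False).json()
    print(entry["name"], svc["status"], svc.get("deployment_strength"))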

92 Configuring Load Balancing as a Service LBaaS REST API Reference Properties A service resource represents a user-created service based on a service template. Each service resource is made up of a combination of common properties that are provided for a service resource regardless of the service type and template-specific properties. The common properties include a mandatory service_type property that names the template resource that the service are based on. The value of the service_type property determines template-specific properties that are available to you when you create the service. Common service properties are summarized in the following table. Property Description Actions service_type status num_instances deployment_strength instance_health_monitor_interval instance_health_fail_trigger stm_rest_version The name of the service template used for this service. Currently, the only available template is LBaaS. A service is in one of the following states at any given time: Unknown, Creating, Inactive, Starting, Stopping, Active, Changing, Deleting, Deleted, Config Failed. The default value is Inactive. For details, see LBaaS Service Life Cycle on page 61. The number of Traffic Manager instances that the Services Controller maintains in the automatically deployed service cluster when the service status is not Inactive, Deleting, or Deleted. The definition of num_instances is different between LBaaS and ELBaaS services. See Properties on page 116. An integer in the range from 0 to 100 that represents the number of instances deployed and working in the service cluster as a percentage of num_instances. This value is calculated by the Services Controller; you cannot update this value. The interval in seconds between monitoring checks on the instances in the service cluster. The time in seconds that must elapse during which no successful health checks have occurred on a service instance before it is considered unhealthy and removed from the service cluster. Determines which version of the Traffic Manager REST API should be used by LBaaS for communicating with service instances when setting and checking the configuration. The default value is 3.0 (that is, the latest REST API version available in the Traffic Manager v9.7 release). This property should not be changed if you are using Traffic Manager v9.7 for creating the service cluster. Create Create/Update Create/Update Value set only by the Services Controller Create/Update Create/Update Create/Update 82 Stingray Services Controller User s Guide

93 LBaaS REST API Reference Configuring Load Balancing as a Service Property Description Actions stm_version stm_feature_pack license_name instance_bandwidth The name of the version resource used when deploying service instances. You can provide a value for this property when creating a service or allow the Services Controller to auto-select a suitable resource for the template in use. If you omit the Traffic Manager version, the latest version of all the available version resources is selected. The name of the feature_pack resource applied to the instances in the service cluster. You can provide a value for this property when creating a service or allow the Services Controller to auto-select a suitable resource for the template in use. The name of the license resource that must be installed in instances in the service cluster. The maximum bandwidth allowed for each instance in the service cluster. In the Enterprise licensing model, the number of instances that are deployed for a service depends on a combination of instance_bandwidth, num_instances, and the stm_feature_pack in use. Create/Update Create/Update Create/Update Create/Update The additional service properties allowed for a service where service_type is set to LBaaS are summarized in the following table. Property Description Actions front_end_ips front_end_port back_end_nodes lb_algorithm A set of IP addresses to be raised by the LBaaS service on which the service will listen for incoming connections. Presented as a JSON array of strings. For example: [" "," "] The port number in the range from 0 to on which the Services Controller listens for incoming connections on the front-end IP addresses. A set of IP address/port pairs to which traffic should be balanced. Presented as a JSON array of strings. For example: [" :80", " :80"] The algorithm that decides where to distribute traffic. It is one of the following: fastest_response_time least_connections perceptive random round_robin The default value is round_robin. Setting the lb_algorithm property to null reverts to the default value of round_robin. Create/Update Create/Update Create/Update Create/Update Stingray Services Controller User s Guide 83

94 Configuring Load Balancing as a Service LBaaS REST API Reference Property Description Actions protocol ssl_offload ssl_encrypt session_persistence health_monitoring public_certificate private_key monitor_type monitor_use_ssl monitor_timeout monitor_interval The protocol being balanced. It is one of the following values: http https tcp ssl Whether to enable Secure Socket Layer (SSL) decryption for the service (that is, true or false). Properties public_certificate and private_key must be set correctly if this property is true. Note: To be a valid JSON object, the "\n" newlines in the public certificate and private key must be escaped. That is, "\\n" instead of "\n". Whether to enable SSL encryption the back-end nodes for the service. The property protocol must be set to http or tcp if this property is true. Whether to enable session persistence for the service. The property session_persistence_type must be set if this property is true. Whether to enable health monitoring for the backend nodes (that is, true or false). The property monitor_type must be set if this property is true. A PEM encoded public certificate encoded as a JSON string. This property must be set if the ssl_offload property is set to true. Note: To be a valid JSON object, the "\n" newlines in the public certificate must be escaped. That is, "\\n" instead of "\n". A PEM encoded private key encoded as a JSON string. This property must be set if the ssl_offload property is set to true. Note: To be a valid JSON object, the "\n" newlines in the private key must be escaped. That is, "\\n" instead of "\n". The scheme used (if any) to monitor back-end node health, either connect or http. Whether or not the monitor should connect using SSL, either true or false. The period of time, in seconds, to wait for a response before marking a health probe as failed. The period of time, in seconds, between two consecutive health checks. Create/Update Create/Update Create/Update Create/Update Create/Update Create/Update Create/Update Create/Update Create/Update Create/Update Create/Update 84 Stingray Services Controller User s Guide

95 LBaaS REST API Reference Configuring Load Balancing as a Service Property Description Actions monitor_failure_threshold monitor_host_header The number of monitoring failures before a back-end node is marked as failed. The value to use for the host header in the HTTP monitor test. Create/Update Create/Update monitor_path The URI path to use in the HTTP monitor test. Create/Update monitor_auth monitor_status_regex monitor_body_regex monitor_resp_length session_persistence_type session_persistence_cookie session_persistence_failure_mode session_persistence_failure_delete session_persistence_redirect_url The value of the basic auth header to use in a HTTP request; a string in the following format: <username>:<password> The regex value that the HTTP monitor test response code should match against. The regex value that the HTTP monitor test response body should match against. Maximum amount of data to read back from a server for a monitoring request. The type of session persistence used for the service. It is one of the following values: asp cookie ip j2ee named ssl transparent universal x_zeus If the session_persistence_type is set to cookie, the cookie name to use for persistence. Determines what happens in the case of a persistence failure. It is of the following values: close new_node url This property must be set if property session_persistence_type is set. Determines whether the connection is closed in the case of a session persistence failure, either true or false. URL to redirect connection to in the case of a persistence failure. Create/Update Create/Update Create/Update Create/Update Create/Update Create/Update Create/Update Create/Update Create/Update Example Request GET /api/tmsm/1.0/service/mylbaas1 Stingray Services Controller User s Guide 85

Response

{ "back_end_nodes" : [ " :80", " :80" ], "deployment_strength" : 100, "front_end_ips" : [ " ", " ", " ", " ", " " ], "front_end_port" : 80, "instance_bandwidth" : 1000, "instance_health_fail_trigger" : 90, "instance_health_monitor_interval" : 30, "license_name" : "fla-ssl", "num_instances" : 1, "protocol" : "HTTP", "service_type" : "LBaaS", "status" : "Active", "stm_feature_pack" : "STM400FP", "stm_version" : "ZeusTM_97_Linux-x86_64" }

task Resource

The /api/tmsm/1.0/task resource contains a property children that is a list of the names and hrefs of all administrator tasks.

Example Request

GET /api/tmsm/1.0/task/

Response

{ "children" : [ { "href" : "/api/tmsm/1.0/task/1", "name" : "1" } ] }

Properties

The Services Controller maintains the service cluster for you by deploying, deleting, and failing-over individual instances within the cluster automatically. Occasionally, the Services Controller is unable to automatically clean up instances (for example, due to network connectivity or host failures). In these cases, the Services Controller ejects the instance from the service cluster and creates a task resource for the cleanup operation that you (that is, the service administrator) must perform manually.

The set of tasks under /api/tmsm/1.0/task/ is a to-do list for the administrator, to make sure service hosts are not left with ejected instances. After the cleanup operation has been completed, you can mark the task as done by performing a REST API PUT operation setting the property status to Complete. You can also delete completed tasks.

failure_reason - A digest of the error messages that were received during the automatic cleanup operation.
host_name - Name of the instance host affected by the problem.
info - Description of the failure (including the instance name).
status - Either Required, when the task is still outstanding, or Complete, when it has been performed by the administrator.
task_type - Currently, Cleanup is the only task type, requiring you to manually stop (if appropriate) and then manually remove the installed instances from service hosts.

Example Request

GET /api/tmsm/1.0/task/1

Response

{ "failure_reason" : "Failed to copy file /opt/riverbed_ssc_1.1/scripts/configure_clean.replay to [email protected]:/space/workspace/configure_clean.replay with exit code 255\nssh: connect to host myhost-02.mydomain.com port 22: Connection refused\r\n", "host_name" : "myhost-02.mydomain.com", "info" : "Failed to delete instance LBaaS_myLBaaS1_1", "status" : "Required", "task_type" : "Cleanup" }
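A small script can surface these tasks for the administrator. The sketch below is illustrative (placeholder port and credentials as before): it lists the tasks, and, once the manual cleanup has been carried out, marks a task Complete. The final call assumes an HTTP DELETE on the task resource, which is one reasonable reading of the note above that completed tasks can be deleted.

import requests

BASE = "https://myssccontroller.mydomain.com:8100/api/tmsm/1.0"  # port is an assumed placeholder
AUTH = ("admin", "password")

# List every task and show which ones still require manual cleanup.
for entry in requests.get(f"{BASE}/task/", auth=AUTH, verify=False).json()["children"]:
    task = requests.get(f"{BASE}/task/{entry['name']}", auth=AUTH, verify=False).json()
    print(entry["name"], task["task_type"], task["status"], "-", task["info"])

# After cleaning up the ejected instance by hand, mark the task as done...
requests.put(f"{BASE}/task/1", json={"status": "Complete"}, auth=AUTH, verify=False).raise_for_status()
# ...and optionally remove the completed task (DELETE on the task resource is assumed here).
requests.delete(f"{BASE}/task/1", auth=AUTH, verify=False).raise_for_status()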


CHAPTER 6 Configuring Elastic Load Balancing as a Service

This chapter describes how to configure Elastic Load Balancing as a Service (ELBaaS). It includes the following sections:

Overview of ELBaaS on page 89
Prerequisites for Using ELBaaS on page 93
Creating and Starting an ELBaaS Service on page 97
ELBaaS REST API Reference on page 107

Overview of ELBaaS

An ELBaaS service enables you to create and configure a generic load balancing service across an automatically scaled set of Traffic Manager service instances. Each service has multiple front-end IP addresses that are shared in a high-availability cluster of automatically deployed service instances. The service also provides load balancing across a group of back-end nodes. A Traffic Manager instance that is deployed for a service (such as ELBaaS) is a service instance.

When an ELBaaS service is created and activated, a defined minimum number of service instances are created. The overall size of the service cluster varies between configured limits. The number of required service instances is determined by a scaling metric mechanism. Currently, the average CPU usage of all current service instances is the only metric supported. This metric works as follows:

Where average CPU usage is high, additional Traffic Manager service instances are created automatically, scaling the cluster size up incrementally to a defined maximum.
Where average CPU usage is low, the number of Traffic Manager service instances is scaled down incrementally, and each affected service instance is automatically deleted.

The Services Controller performs health monitoring of service instances. The deployment strength of a service is a percentage measure of the number of deployed instances against the required number of instances. When a service instance fails, or a service instance deployment fails after an upscaling event, this failure is reflected by a reduction in the deployment strength of its service. Additionally, an email notification is sent, and potentially an administrator task is raised. If there is available capacity on one of the service hosts, a new instance is automatically deployed and added to the service cluster.

The regularity of CPU usage checks, the cool-off period between scaling events, and the threshold CPU usage percentages are all configurable.
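The scaling rule described above can be pictured with a short, simplified sketch. This is not the Services Controller's implementation, only an illustration of the decision rule, expressed in terms of the configurable values introduced later in this chapter (the scale-up and scale-down thresholds and monitoring_cycles_before_scaling).

def scaling_decision(samples, scale_up, scale_down, cycles):
    """Return 'up', 'down', or None given the most recent average-CPU samples (percent)."""
    recent = samples[-cycles:]
    if len(recent) < cycles:
        return None                      # not enough monitoring cycles yet
    if all(s > scale_up for s in recent):
        return "up"                      # sustained high CPU: add an instance (up to the maximum)
    if all(s < scale_down for s in recent):
        return "down"                    # sustained low CPU: remove an instance (down to the minimum)
    return None

# Three consecutive samples above a 50% scale-up threshold trigger a scale-up.
print(scaling_decision([62, 58, 71], scale_up=50, scale_down=10, cycles=3))   # -> "up"
# Mixed samples trigger nothing; a refractory period would also delay back-to-back events.
print(scaling_decision([20, 15, 8], scale_up=50, scale_down=10, cycles=3))    # -> None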

100 Configuring Elastic Load Balancing as a Service Overview of ELBaaS You can configure the following ELBaaS elements. Figure 6-1. ELBaaS Configurable Elements ELBaaS provides Traffic Manager instance co-existence such that multiple services can place instances on the same hosts. Figure 6-2. ELBaaS Provides Traffic Manager Co-Existence 90 Stingray Services Controller User s Guide

Enterprise Licensing for ELBaaS

Enterprise licensing enables you to prepay for blocks of available bandwidth for a Traffic Manager SKU (Stock Keeping Unit) and for a specified time period. The licensed bandwidth for a SKU is calculated as the total bandwidth of all the valid installed Bandwidth Packs for that SKU. With Enterprise Licensing, the Services Controller rejects REST API requests that result in a Services Controller state where instances and services using a particular SKU consume more bandwidth than has been licensed for that SKU. For example, the following requests might be rejected due to insufficient bandwidth:

Increasing the value of the bandwidth property in an instance or service resource.
Deploying an additional instance or increasing the requested number of instances to be deployed for a service.
Altering the stm_feature_pack property value of an instance or service to a feature pack based on a different SKU.
Deleting, deactivating, or upgrading an existing Bandwidth Pack.

The total bandwidth that a service consumes is calculated as the product of the values:

num_instances x instance_bandwidth

The bandwidth used by this service is reserved from the Bandwidth Pack for the SKU that is linked by the stm_feature_pack property. As with Services Controller nonservice instances, you cannot make changes to these property values that exceed your licensed bandwidth. Before a scale-up event, a check is performed against the license to ensure that a new instance is allowed to be deployed:

(num_instances + 1) x instance_bandwidth

For example, a service with three deployed instances and an instance_bandwidth of 1000 Mbps reserves 3000 Mbps of licensed bandwidth, and a scale-up to a fourth instance is permitted only if 4000 Mbps is available for the SKU. If the scale-up is not permitted, it does not happen, and an email notification is sent.

ELBaaS Service Life Cycle

Each service has a status property that both identifies where the service is in the service life cycle and is used to move it between states. Some of these statuses are stable states, whereas others are transient states that indicate the service is undergoing changes. The possible values of the status property are listed below.

Status Type Description Creating Transient The service is being created. Inactive Stable The service has just been created but has not been activated yet, thus it has no service instances. Alternatively, the service has been stopped. Starting Transient The service is in the process of starting by deploying instances into a service cluster before they are configured. Active Stable The service is operational. Following a successful start, the service transitions to the Active state. Stopping Transient The service is in the process of stopping, with all the instances in the service cluster being deactivated and deleted. Following a successful stop, the service transitions to the Inactive state.

102 Configuring Elastic Load Balancing as a Service Overview of ELBaaS Status Type Description Deleting Transient The service is being permanently decommissioned and all instances in the service cluster are being deleted. A successful deletion results in the service transitioning to the Deleted state. Deleted Stable The service and its service cluster instances have been permanently deleted. Changing Transient The service is currently processing a configuration change: A scale-up or scale-down event has occurred. An automatic amendment to the number of instances required for the service cluster has occurred. A change to the configuration of the ELBaaS service itself. No further configuration changes are accepted until this configuration change has been completely processed. A successful change results in the service transitioning to the Active state. This state can also arise as the result of a failover following an instance failure. Config Failed Stable A failure state has been encountered where the configuration objects required to configure the ELBaaS service could not be applied to the Traffic Manager cluster. For transitions that are directly under your control (that is, Inactive > Active, Active > Inactive, and Inactive > Deleted), you make a REST API PUT request to the service resource, by setting the status property to the target status. The service does not move directly to the target status, but goes into the transitional state, such as Starting, Stopping, Deleting, and Changing. Service Failover Mechanism In the Services Controller, a service is an automatically deployed and configured cluster of Traffic Manager instances (that is, a service cluster). When an ELBaaS service is created and activated, the Services Controller deploys and maintains a configured service cluster with a number of instances that varies between a defined minimum and a defined maximum. The Services Controller service manager relies on the monitoring feature of the Services Controller to periodically check the health of each service instance to make sure it is still accessible to the Services Controller. If a service instance remains inaccessible for a period of time longer than the number of seconds specified in the instance_health_fail_trigger service property, then a failover is triggered for the service. With failover, the Services Controller automatically performs the following actions for each failed instance: 1. Stops the instance. 2. Removes the instance from the service cluster. 3. Deletes the instance. If any of these actions are unsuccessful, the instance is ejected (that is, disassociated from the rest of the service cluster), an administrator task is automatically generated, and an notification of the outstanding task is sent. If there is available capacity on one of the service hosts, a new instance is automatically deployed and added to the service cluster. 92 Stingray Services Controller User s Guide

If there is no available capacity on any service host, the service continues to run with fewer instances in the service cluster: the service resource property deployment_strength is automatically adjusted downwards to represent this reduction in the size of the service cluster. As with instance failures detected by Services Controller monitoring for nonservice instances, the Services Controller sends an email notification of the detected failure. If the Services Controller is unsuccessful in stopping, removing, and deleting the instance, an administrator task is automatically generated and an email notification of the outstanding task is sent. In this case, you must manually resolve the problem. The Services Controller also sends a daily summary of remaining outstanding administrator tasks.

Configuration of a Service Failover Mechanism

The Services Controller automatically monitors the health of service cluster instances, using any Services Controller in a Services Controller cluster that has monitoring enabled. This feature is enabled by default, but it can be deactivated on a given Services Controller using the monitoring mode settings in the REST API manager resource. Monitoring in the Services Controller has the following configurable values:

Monitor Interval - The number of seconds between individual health checks for a given resource.
Failure Period - The number of seconds that must elapse during which no successful health checks are made before the monitored resource is considered failed.

Services have instance_health_monitor_interval and instance_health_fail_trigger properties corresponding to these two configurable values, so that monitoring for instances in a service cluster is configured on a per-service basis. If you do not specify values for these service resource properties, the default values from the /api/tmcm/1.4/settings/monitoring resource are used for all instances in the service cluster.

Prerequisites for Using ELBaaS

Before creating an ELBaaS service, you must set up the Services Controller with these supporting resources:

Feature Pack - A resource based on one of the standard preconfigured Traffic Manager SKUs, determining the features available for instances that use that feature pack. Currently, only the STM-400 SKU supports LBaaS services such as ELBaaS.
FLA License - Each service instance must have an appropriate Traffic Manager Flexible Licensing Architecture (FLA) license installed to ensure that it is licensed by the Services Controller. The FLA license used by service instances is the same as the one used by nonservice instances.
Stingray Traffic Manager (STM) Version - A resource that represents the Traffic Manager installation tarball from which service instances are installed. The Services Controller validates any Traffic Manager version that you select for an ELBaaS service to ensure it is an acceptable version.
Instance Host - A resource representing a host on which service instances are deployed. In most respects, an instance host used for service instances is the same as one used for nonservice instances, but there are some differences. Instance hosts can either be used exclusively for service instances or exclusively for nonservice instances, but not for both. Hosts that are intended for use exclusively for service instances are referred to as service hosts.

When allocated, service host resources must be set up with the following additional properties:

usage_info - Specify servicemanager.
Indicates the host is earmarked for service instances only, and fails if the host has normal instances already. Stingray Services Controller User s Guide 93

104 Configuring Elastic Load Balancing as a Service Prerequisites for Using ELBaaS size - Specify an integer for the number of ELBaaS instances that can fit on the source host. The Services Controller will not deploy more instances beyond this value. For ELBaaS, each deployed service instance counts as having a volume of 1. A service host with a size of 5 can accommodate up to five ELBaaS instances. With ELBaaS, size is a required property for a host resource cpu_cores - Specify the set of cores available on the host (in taskset format). The Services Controller uses this property to balance CPU affinity of Traffic Manager instance processes belonging to multiple services. If you do not set this property, the Linux kernel determines process scheduling that results in reduced achievable densities. retained_info_dir - Specify a directory to store support information about instances that are culled. This is used to store STM TSRs if it is possible to generate them, or a copy of the STM logs if it was not possible to create a TSR. Typically, a Services Controller requires multiple service hosts to manage ELBaaS services. Prerequisite Examples For these examples, you deploy one or more ELBaaS services, using: a Services Controller installation running on host myssccontroller.mydomain.com, with its REST API running on port an STM-400 SKU based feature pack. instances based on Traffic Manager v9.7 or later. Riverbed-generated Traffic Manager FLA license fla-ssl. a group of three service hosts, myhost-01.mydomain.com, myhost-02.mydomain.com and myhost- 03.mydomain.com, to host your service instances. This example assumes that passwordless SSH between myssccontroller.mydomain.com and these three service hosts has been set up, and that the working location directories and installation root directories have been set up in advance (as would be the case with any other instance host). Creating Feature Packs This procedure is the same as any feature_pack resource. Currently, only the STM-400 SKU supports LBaaS. Creating an FLA License Resource This procedure is the same as any Traffic Manager FLA license resource. It creates an FLA license resource that is used for normal or service instances. To create an FLA license resource Perform a PUT request: Include the following JSON structure as the body of your request: { } "info" : "This is the resource for the fla-ssl license" 94 Stingray Services Controller User s Guide

Creating a Traffic Manager Version

This procedure is the same as for any Traffic Manager version resource. It creates a Traffic Manager version resource that can be used for normal or service instances.

To create a Traffic Manager version resource

Perform a PUT request. Include the following JSON structure as the body of your request:

{
"version_filename":"ZeusTM_97_Linux-x86_64.tgz",
"info":"version 9.7"
}

Creating Instance Hosts

This procedure is similar to that for any instance host resource, but it creates a resource that is dedicated to hosting only service instances.

To create instance hosts

1. To create the first instance host, perform a PUT request. Include the following JSON structure as the body of your request:

{
"cpu_cores":"0-3",
"work_location":"/space/workspace",
"install_root":"/space/install",
"username":"root",
"usage_info":"servicemanager",
"retained_info_dir":"/space/retain",
"size":5
}

2. To create the second instance host, perform a PUT request. Include the following JSON structure as the body of your request:

{
"cpu_cores":"0-3",
"work_location":"/space/workspace",
"install_root":"/space/install",
"username":"root",
"usage_info":"servicemanager",
"retained_info_dir":"/space/retain",
"size":5
}

3. To create the third instance host, perform a PUT request.

Include the following JSON structure as the body of your request:

{
"cpu_cores":"0-3",
"work_location":"/space/workspace",
"install_root":"/space/install",
"username":"root",
"usage_info":"servicemanager",
"retained_info_dir":"/space/retain",
"size":10
}

The setting of usage_info to servicemanager indicates that these hosts are intended exclusively for service deployments. The size property indicates how many ELBaaS service instances can be accommodated on each host. In these examples, myhost-03 is twice as large as myhost-01 and myhost-02.
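If you prefer to script the host setup, the three PUT requests above can be issued in a loop. The sketch below is only an illustration: the instance host resource path belongs to the Services Controller nonservice REST API and is shown here as an assumed placeholder (check the instance host documentation for the exact URI in your installation), and the controller port and credentials are placeholders as well.

import requests

CONTROLLER = "https://myssccontroller.mydomain.com:8100"   # port is an assumed placeholder
AUTH = ("admin", "password")

# Assumed path for instance host resources under the nonservice REST API.
HOST_RESOURCE = CONTROLLER + "/api/tmcm/1.4/host/{name}"

common = {
    "cpu_cores": "0-3",
    "work_location": "/space/workspace",
    "install_root": "/space/install",
    "username": "root",
    "usage_info": "servicemanager",       # mark the host for service instances only
    "retained_info_dir": "/space/retain",
}

sizes = {"myhost-01.mydomain.com": 5, "myhost-02.mydomain.com": 5, "myhost-03.mydomain.com": 10}
for name, size in sizes.items():
    body = dict(common, size=size)        # size = number of ELBaaS instances the host can hold
    resp = requests.put(HOST_RESOURCE.format(name=name), json=body, auth=AUTH, verify=False)
    resp.raise_for_status()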

107 Creating and Starting an ELBaaS Service Configuring Elastic Load Balancing as a Service Creating and Starting an ELBaaS Service To create and modify ELBaaS services, you interact only with the Services Controller REST API. The Services Controller deploys and configures a Traffic Manager cluster based on your specifications without you needing to worry about the details of the Traffic Manager configuration. Note: As more services are added or removed from a host, the scaling percentages for all services on the host may need to be revisited. See Understanding Scaling Thresholds on page 101. Configuration Options for an ELBaaS Service To create an ELBaaS service, you must define your requirements for the service through the following set of configurable values: The minimum and maximum numbers of Traffic Manager instances to deploy into a service cluster The average CPU thresholds that will trigger scale-up and scale-down events The poll interval for monitoring of average CPU usage for deployed Traffic Manager instances The number of CPU monitoring cycles that must pass before a scaling event is triggered The amount of time that must pass before another scaling event is possible (refractory period) Allocated maximum bandwidth per Traffic Manager instance Traffic Manager feature pack Traffic Manager version Front-end IP addresses and ports Back-end IP addresses and ports The protocol to be balanced (HTTP, HTTPS, TCP, SSL) Load balancing algorithm to use (such as, random, round robin, or perceptive, and so forth) SSL off-load and associated certificate/key pairs (this functionality is optional) Back-end node health monitoring (this functionality is optional) Session persistence options (this functionality is optional) Note: If you omit the Traffic Manager version (that is, the stm_version property) from the JSON structure, the version is automatically selected by the Services Controller. The latest version of all the available version resources is selected. If you omit a feature pack, a feature pack that uses a SKU that satisfies all of the licensing requirements of the service (as defined by the service configuration) and with maximum available bandwidth is chosen. Stingray Services Controller User s Guide 97

Creating an ELBaaS Service

To create an ELBaaS service, perform the following steps. As additional services are added to a host, the scaling percentages for all services on the host may need to be revisited. See Understanding Scaling Thresholds on page 101.

To create an ELBaaS service

To create the service, perform a PUT request to the service resource. Include the following JSON structure as the body of your request:

{
"service_type":"lbaas",
"status":"inactive",
"stm_feature_pack":"stm400fp",
"elastic": true,
"elastic_configuration": {
"min_instances": 1,
"max_instances": 3,
"monitoring_cycles_before_scaling": 3,
"poll_interval": 60,
"refractory_period": 180,
"scaling_metric": {
"average_cpu": {
"enabled": true,
"scale_up_threshold": 50,
"scale_down_threshold": 10
}
}
},
"license_name":"fla-ssl",
"instance_bandwidth":1000,
"protocol":"http",
"back_end_nodes":["xx.xx.xx.xx:80","xx.xx.xx.xx:80"],
"front_end_port":80,
"front_end_ips":["yy.yy.yy.yy","yy.yy.yy.yy","yy.yy.yy.yy","YY.YY.YY.YY","YY.YY.YY.YY"]
}

The following table summarizes the individual properties in this service resource.

Property "service_type":"lbaas" "status":"inactive" Description Specifies the type of service to create based on a template. The template determines the other properties that must be provided when creating the service resource. You might require different templates that will have different requirements. For example, an ELBaaS service requires you to specify the protocol to be load balanced. A different template for a more specific service type might not require this property. Currently, only the LBaaS service template is available, which is used for both LBaaS and ELBaaS services. All services start in the Inactive state. For details, see ELBaaS Service Life Cycle on page 91.

109 Creating and Starting an ELBaaS Service Configuring Elastic Load Balancing as a Service Property "stm_feature_pack":"stm400fp" "elastic_configuration": "elastic":true "min_instances":1 "max_instances":3 "poll_interval":60 "monitoring_cycles_before_scaling":3 "refractory_period":180 "scaling_metric": "average_cpu": "enabled": true Description Specifies the name of the feature pack to be applied to all instances in the service cluster. You can provide a value for this property when creating a service or allow the Services Controller to auto-select a suitable resource for the template in use. The Services Controller automatically determines an eligible feature pack if this property is omitted. Riverbed recommends that you select the feature pack carefully as it can have cost or capacity implications, depending on the licensing model in use. The Services Controller validates any feature pack that you specify for an ELBaaS service to ensure that it provides sufficient features for the ELBaaS service configuration. The elastic-specific properties in the load balancing service definition. This includes elastic, min_instances, max_instances, poll_interval, monitoring_cycles_before_scaling, refractory_period, scaling_metric, average_cpu, enabled, scale_down_threshold and scale_up_threshold. Defines the LBaaS service as elastic. The elastic configuration properties are then required. When elastic is true, you must specify elastic_configuration properties. The minimum number of instances that are maintained for the ELBaaS service. In this example, 1. The service requires this field to be specified. The service should never go below these number of instances from a scaling point of view. For an ELBaaS service, this should be less than or equal to max_instances. The maximum number of instances that the service will scale to. In this example, 3. This has no default value for an ELBaaS service; the service requires this field to be specified. For an ELBaaS service, this should be greater than or equal to min_instances. Note that this is also the minimum number of front end IP addresses. The frequency, in seconds, of CPU usage collection for service instances. In this example, 60. The default is 60. Specifies the number of polling cycles where the usage is above or below the thresholds to trigger a scale up or scale down event. In this example, 3. The default is 5. The minimum amount of time between two scaling events. This setting is essentially the stabilization period after a change in instance cluster size. In this example, 180. The default is 180 seconds. The scaling mechanism for the ELBaaS service. Currently, only average_cpu is supported. This has enabled, scale_up_threshold and scale_down_threshold properties. Enables the average_cpu scaling mechanism. This is currently the only scaling metric mechanism that is supported, and must be enabled for scaling of the ELBaaS service to occur using scale_up_threshold and scale_down_threshold. Stingray Services Controller User s Guide 99

110 Configuring Elastic Load Balancing as a Service Creating and Starting an ELBaaS Service Property "scale_down_threshold":10 "scale_up_threshold":50 "license_name":"fla-ssl" "instance_bandwidth":1000 "protocol":"http" "back_end_nodes":["xx.xx.xx.xx:80", "XX.XX.XX.XX:80"] "front_end_port":80 "front_end_ips":["yy.yy.yy.yy", "YY.YY.YY.YY","YY.YY.YY.YY", "YY.YY.YY.YY","YY.YY.YY.YY"] num_instances Description The percentage of average CPU usage below which a scale down will occur. In this example, 10. This threshold must persist for monitoring_cycles_before_scaling for a scale down to occur. Choosing a value for this threshold is described in Understanding Scaling Thresholds on page 101. Note: as more services are added or removed from a host, the scale_down_threshold for all services on the host may need to be revisited. The percentage of average CPU usage above which a scale up will occur. In this example, 50. The default is 90, which is suitable for a single service running on an instance host. This threshold must persist for monitoring_cycles_before_scaling for a scale up to occur. Choosing a value for this threshold is described in Understanding Scaling Thresholds on page 101. Note: as more services are added or removed from a host, the scale_up_threshold for all services on the host may need to be revisited. Specifies the FLA license resource used for instances in this service cluster. Specifies the bandwidth allocation in Mbps given to each service instance. For Enterprise licensed Services Controller deployments, the service's footprint in the available pool of licensed bandwidth is: num_instances x instance_bandwidth For details, see Enterprise Licensing for ELBaaS on page 91. Specifies the protocol to be balanced by this ELBaaS service. Specifies the set of back-end nodes in the server pool to incoming requests that are balanced between by the ELBaaS service. Replace XX.XX.XX.XX with the actual IP addresses. Specifies the front-end IP address port number. Specifies the set of front-end IP addresses to be raised by the ELBaaS service instances. These are raised using one of the High Availability features of Traffic Manager, Traffic IP groups, so that these IP addresses and ports are automatically shared between instances in a service cluster, and passed around between instances in the case of an instance failure. Replace YY.YY.YY.YY with the actual IP addresses. Note: For an ELBaaS service, this value is set only by the Services Controller, and not by users. This example represents only a partial example of the properties you can set for the ELBaaS service. For details, see ELBaaS REST API Reference on page Stingray Services Controller User s Guide
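The creation request above can also be issued from a simple client script. The sketch below is illustrative only: the REST API port, credentials, service name (myelbaas1), and the node and front-end IP addresses are placeholders, and the JSON body mirrors the example shown earlier in this section.

import requests

BASE = "https://myssccontroller.mydomain.com:8100/api/tmsm/1.0"  # port is an assumed placeholder
AUTH = ("admin", "password")

elbaas_service = {
    "service_type": "lbaas",
    "status": "inactive",               # all services start Inactive
    "stm_feature_pack": "stm400fp",
    "elastic": True,
    "elastic_configuration": {
        "min_instances": 1,
        "max_instances": 3,
        "monitoring_cycles_before_scaling": 3,
        "poll_interval": 60,
        "refractory_period": 180,
        "scaling_metric": {
            "average_cpu": {"enabled": True, "scale_up_threshold": 50, "scale_down_threshold": 10}
        },
    },
    "license_name": "fla-ssl",
    "instance_bandwidth": 1000,
    "protocol": "http",
    "back_end_nodes": ["192.0.2.10:80", "192.0.2.11:80"],          # placeholder back-end nodes
    "front_end_port": 80,
    "front_end_ips": ["192.0.2.20", "192.0.2.21", "192.0.2.22"],   # placeholders; at least max_instances addresses
}

# Create the service resource; it can then be started by setting status to active.
resp = requests.put(f"{BASE}/service/myelbaas1", json=elbaas_service, auth=AUTH, verify=False)
resp.raise_for_status()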

Understanding Scaling Thresholds

Elastic load-balancing services require a pair of scaling thresholds:

scale_up_threshold is the percentage of average CPU usage above which a scale-up event will occur.
scale_down_threshold is the percentage of average CPU usage below which a scale-down event will occur.

For both properties, the threshold must persist for monitoring_cycles_before_scaling consecutive polling cycles before a scaling event occurs. Scaling thresholds are set on a per-service basis. Choosing thresholds involves a number of factors:

Contention for the cores on the service hosts:
Where service instances are allocated to separate cores on each service host, the scaling percentages can be set independently for each service. For example, a scale_down_threshold of 10 and a scale_up_threshold of 90.
Where a number of service instances share a single core on each service host, the sum of the scale-up percentages for their services should not exceed 100%. For example, where three service instances share a single core in an instance host, the scale_up_threshold for each service should not exceed 33%.
Both of these examples assume a uniform number and allocation of cores across the service hosts. Where contention varies, other factors must be considered.

Loading on the busiest service host. Contention may vary when, for example:
the number of cores on each service host is not consistent, and services are sharing them.
the number of services allocated to a core changes.
Where contention varies, the chosen percentages should reflect the characteristics of the busiest service host. If this is not done, scaling events across the service may not trigger at the required time.

Monitoring of all elastic services to determine scaling behavior is recommended. Adjustment of the scaling percentages after consideration of the above factors may be required to achieve ideal scaling behavior. Scaling thresholds should also be revisited whenever services are added to or removed from a service host, or the potential maximum size of an elastic service is changed.
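The core-sharing arithmetic described above is simple enough to capture in a small helper. This is an illustrative sketch, not a product tool; it just divides the available 100% of a shared core between the services placed on it, as in the three-services-per-core example.

def max_scale_up_threshold(services_sharing_core):
    """Upper bound for scale_up_threshold when several services share one core."""
    # The sum of scale-up thresholds for services sharing a core should not exceed 100%.
    return 100 // services_sharing_core

# One service per core: thresholds can be set independently (for example, up=90, down=10).
print(max_scale_up_threshold(1))   # 100
# Three services sharing a core: each scale_up_threshold should not exceed about 33%.
print(max_scale_up_threshold(3))   # 33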

112 Configuring Elastic Load Balancing as a Service Creating and Starting an ELBaaS Service Starting and Stopping an ELBaaS Service You can start or stop an LBaaS service in a stable state by setting its status property. To start the ELBaaS service Perform a PUT request: Include the following JSON structure as the body of your request: { } "status":"active" The service switches to the transitional state Starting while it is deploying, clustering, and activating service instances, before finally switching to the state Active. To stop the ELBaaS service Perform a PUT request: Include the following JSON structure as the body of your request: { } "status":"inactive" The service switches to the transitional state Stopping while it is deactivating and deleting service instances, before switching to state Inactive. For more information about the service life cycle, see ELBaaS Service Life Cycle on page 91. Viewing Service Instances Once services are created, you can use the REST API to view all Traffic Manager service instances and (separately) their individual details. Two parameter switches are required for these REST API calls:?show_service=true lists Traffic Manager service instances instead of Traffic Manager instances.?override_service_protection=true includes a full list of properties for a Traffic Manager service instance. Examples of the use of these switches are shown in the processes that follow. To view all service instances To view all Traffic Manager service instances on an SSC, perform a GET request, including the?show_service=true parameter switch in the URI. For example: Stingray Services Controller User s Guide

113 Creating and Starting an ELBaaS Service Configuring Elastic Load Balancing as a Service The response to this request contains a JSON structure representing all defined Traffic Manager service instances. For example: { } "children": [{ "href": "/api/tmcm/1.3/instance/elbaas_service1_1", "name": "ELBaaS_service1_1" }, { "href": "/api/tmcm/1.3/instance/elbaas_service1_2", "name": "ELBaaS_service1_2" }, { "href": "/api/tmcm/1.3/instance/elbaas_service1_3", "name": "ELBaaS_service1_3" }, { "href": "/api/tmcm/1.3/instance/elbaas_service1_4", "name": "ELBaaS_service1_4" }, { "href": "/api/tmcm/1.3/instance/elbaas_service1_5", "name": "ELBaaS_service1_5" }, { "href": "/api/tmcm/1.3/instance/elbaas_service1_6", "name": "ELBaaS_service1_6" }] To view service instance properties 1. To view basic properties for a Traffic Manager service instance, use the REST API as you would for a Traffic Manager instance. For example, for a service instance called ELBaaS_service1_6: An example response is shown below: { } "status": "Active", "config_options": "port_offset=0 num_children=1", "metrics_peak_rps": 0, "cpu_usage": "", "license_name": "fla-ssl", "creation_date": " :41:20", "bandwidth": 1000, "stm_feature_pack": "FP1", "tag": "" "cluster_id": "cluster_service1", "container_name": "", "owner": "Service:service1", "metrics_throughput": 0, "metrics_date": " :00:00", "metrics_peak_throughput": 0, "stm_version": "9.7", "host_name": "myhost-01.mydomain.com", "container_configuration": "", "metrics_peak_ssl_tps": 0 Note: You cannot set the tag property on a service instance. It is always an empty string. Stingray Services Controller User s Guide 103

2. To view an expanded set of properties for the same Traffic Manager service instance, perform a GET request, including the ?override_service_protection=true parameter switch in the URI. For example:

ELBaaS_service1_6?override_service_protection=true

An example response is shown below:

{
  "status": "Active",
  "config_options": "port_offset=0 num_children=1",
  "metrics_peak_rps": 0,
  "cpu_usage": "",
  "license_name": "fla-ssl",
  "rest_address": "myssccontroller:50000",
  "snmp_address": "myssccontroller:50500",
  "creation_date": " :41:20",
  "bandwidth": 1000,
  "stm_feature_pack": "FP1",
  "tag": "",
  "cluster_id": "cluster_service1",
  "container_name": "",
  "owner": "Service:service1",
  "service_username": "servuser",
  "metrics_throughput": 0,
  "service_password": "fpvgnqvnr7",
  "metrics_date": " :00:00",
  "admin_username": "admin",
  "metrics_peak_throughput": 0,
  "stm_version": "9.7",
  "host_name": "myhost-01.mydomain.com",
  "management_address": "myssccontroller",
  "container_configuration": "",
  "ui_address": "myssccontroller:51500",
  "metrics_peak_ssl_tps": 0,
  "admin_password": "fpvgnqbvr7"
}

Note: You cannot set the tag property on a service instance. It is always an empty string.

Changing the Potential ELBaaS Service Cluster Size

The potential size of a service cluster is defined by min_instances and max_instances. You can change either of these after the ELBaaS service is deployed, provided that:

the maximum is equal to or greater than the minimum.

the number of front-end IP addresses is equal to or greater than the maximum.

Note: You cannot change the num_instances property, because it is calculated automatically and dynamically by the Services Controller.

When you redefine the cluster size for an ELBaaS service, the Services Controller subsequently adds or removes cluster members as necessary.

While it is possible for multiple Traffic Manager instances to coexist on the same service host, only one Traffic Manager from a given cluster is placed on any one host. Thus, the size of a service cluster is limited by the total number of service hosts that you have made available to the Services Controller.

In some cases, the Services Controller has enough service hosts available, but it is unable to deploy the full number of instances that you have requested for an ELBaaS service cluster. For example, a service host might be temporarily down or inaccessible over the network. In this case, to determine what percentage of the desired deployment is in place, a service resource provides a property called deployment_strength.

The deployment_strength property has a range from 1 to 100, representing the percentage of the calculated cluster size (that is, num_instances) that has been successfully deployed. If the value of deployment_strength is less than 100, you must manually increase the availability of service hosts. For example, if the deployment_strength is 84% because one or more service hosts is unavailable or an up-scaling deployment has failed, the service has insufficient deployed instances. You can correct this by adding new service hosts.

You should monitor the deployment_strength for a service, particularly after service life cycle changes or alterations to the value of the min_instances or max_instances properties.

To change the minimum number of service instances in a cluster

To set the minimum number of service instances to 2, perform a PUT request on the service resource, including the following JSON structure as the body of your request:

{
"min_instances":2
}

To increase the maximum number of service instances in a cluster

To set the maximum number of service instances to 6, perform a PUT request on the service resource, including the following JSON structure as the body of your request:

{
"max_instances":6
}

To decrease the maximum number of service instances in a cluster

To decrease the maximum number of service instances to 4, perform a PUT request on the service resource, including the following JSON structure as the body of your request:

{
"max_instances":4
}

For an active service that currently has six deployed instances, this example:

sets the service status to Changing.

removes two instances from the service cluster.

reconfigures the service cluster.

sets the service back to Active status.

Deleting an ELBaaS Service

If a service is no longer required, you can delete it. This process removes all the deployed cluster instances and permanently marks the service resource as Deleted.

To delete an ELBaaS service

1. To stop the service, perform a PUT request on the service resource, including the following JSON structure as the body of your request:

{
"status":"inactive"
}

2. To set the service status to Deleted, perform a PUT request on the service resource, including the following JSON structure as the body of your request:

{
"status":"deleted"
}

The service enters the Deleting state while its service cluster is removed; once this task is complete, it enters the Deleted state.
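As a sketch, the two requests above might be issued as follows for a service named service1; the name is illustrative, and authentication headers are omitted:

PUT /api/tmsm/1.0/service/service1 HTTP/1.1
Content-Type: application/json

{
"status":"inactive"
}

PUT /api/tmsm/1.0/service/service1 HTTP/1.1
Content-Type: application/json

{
"status":"deleted"
}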

ELBaaS REST API Reference

This section describes the ELBaaS REST API resources and properties.

Note: The ELBaaS REST API is a service-specific API. It is not part of the Services Controller nonservice REST API used for resources such as instances, instance hosts, and feature packs. The Services Controller nonservice REST API is accessed through /api/tmcm/1.4/, whereas the ELBaaS REST API is accessed through /api/tmsm/1.0/.

Authentication

All requests to the Services Controller REST API must be authenticated by means of HTTP Basic Authentication. You must create an initial Services Controller user outside of the REST API, but you can create and manage other users using the REST API.

You must access the Services Controller REST API through HTTPS. Client certificates are not checked for validity, and HTTPS is used only for encryption and to allow the FLA license to verify the server identity.

All ELBaaS inventory database resources are provided through a common base URI that identifies the root of the resource model:

https://<host>:<port>/api/tmsm/1.0/

where <host> is the hostname of the server containing the inventory database, and <port> is the port that the REST API is published on. The following sections describe the inventory resources at this URI.

API Root

A GET on this resource returns an object resource which contains the children property. This property is a list of names and hrefs for child entries beneath /api/tmsm/1.0/. Each of these child resources contains collections of related resources.

Example Request

GET /api/tmsm/1.0/

Response

{
  "children": [
    { "href": "/api/tmsm/1.0/template", "name": "template" },
    { "href": "/api/tmsm/1.0/service", "name": "service" },
    { "href": "/api/tmsm/1.0/task", "name": "task" }
  ]
}

The following table summarizes the child resources.

template - Contains a property children that is a list of the names and hrefs of all available service templates. Currently there is only an LBaaS service template available. This is used for both LBaaS and ELBaaS services.

service - Contains a property children that is a list of the names and hrefs of all user created services. Each of these child resources contains collections of related resources.

task - The Services Controller attempts to automatically maintain the service cluster for the service user by deploying, deleting, and failing-over individual instances within the cluster automatically.

template Resource

The /api/tmsm/1.0/template resource contains a property children that is a list of the names and hrefs of all available service templates. Currently the only service template is the LBaaS template. This is used for both LBaaS and ELBaaS services. The properties in the template resource cannot be updated; its purpose is to provide information.

Example Request

GET /api/tmsm/1.0/template

Response

{
  "children": [
    { "href": "/api/tmsm/1.0/template/lbaas", "name": "LBaaS" }
  ]
}

Templates enable you to configure Traffic Manager clusters to provide useful services without knowing the details of Traffic Manager configuration objects and without knowing how to deploy individual Traffic Manager instances through the Services Controller. The Services Controller combines template-specific parameter values with the service template they correspond to, and converts them into appropriate Traffic Manager configuration objects. It then applies these to an automatically deployed Traffic Manager cluster. For example, an ELBaaS service requires you to provide a set of IP addresses on the front end to accept incoming requests, a set of IP addresses and ports for back-end servers, and the protocol that is being load balanced.

Properties

Every service resource is comprised of a combination of common properties, which are provided for a service resource regardless of the service type, and template-specific properties. The common properties include a mandatory service_type property that names the template resource that the service is based on. The value of service_type determines the template-specific properties that are available to you when you create the service.
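As an illustration of how common and template-specific properties combine, a minimal ELBaaS service definition might carry properties such as the following. The addresses, port, and protocol are placeholder values, and the remaining required properties (for example, the elastic_configuration settings) are described later in this section:

{
  "service_type": "LBaaS",
  "elastic": true,
  "protocol": "http",
  "front_end_ips": [ "192.0.2.10", "192.0.2.11" ],
  "front_end_port": 80,
  "back_end_nodes": [ "198.51.100.1:80", "198.51.100.2:80" ]
}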

Each service template resource supports this by describing parameters that correspond to template-specific properties that must be provided when you create a service based on that template. Different services require different parameters depending on their purpose. The purpose of template resources is to enable you to determine what a valid set of property values might be for a given service resource. The parameter descriptions, along with some other template information (such as the minimum version of Traffic Manager with which the service is compatible), are encoded in a JSON object described in the following table.

description - Brief description of the service type supported by the template.

long_description - More verbose description of the service type supported by the template.

min_stm_version - The version number of the earliest Traffic Manager version compatible with this service template.

parameters - A list of JSON objects, each representing a template-specific parameter for this template type. Each object contains a number of key/value entries specifying metadata for the parameter in question.

Parameter metadata objects can contain some or all of the following keys.

name - The parameter name. This name maps directly to a service resource property name in service resources based on this template.

description - A brief description.

required - Whether you must supply a value for the corresponding resource property when creating a service resource based on this template.

type - What kind of value is expected for the corresponding resource property.

default - If present, the default value that is applied for the corresponding resource property if you do not supply one when you create a service resource based on this template.

depends - A JSON dictionary whose keys are the possible values of the property; for each of those values, it lists the values that another resource property must have. For example:

{
  "children": {},
  "default": false,
  "depends": { "true": { "protocol": [ "http", "tcp" ] } },
  "description": "Enable SSL encrypt to the backend nodes for service",
  "name": "ssl_encrypt",
  "required": false,
  "structure": null,
  "type": "boolean"
}

Thus if ssl_encrypt is true, then the protocol must be either http or tcp.

choices - Where the type is enum, this key contains a list of the allowed values for the corresponding resource property when creating a service resource based on this template.

children - If present, this property describes dependencies between parameters for this template. An example from the LBaaS template is:

{ "cookie": [ "session_persistence_cookie" ] }

If the service resource property corresponding to this parameter is set to cookie, then provide a value for the session_persistence_cookie property as well.

regex - If present, this property holds a regex that is used to validate any value provided for the corresponding service resource property.

structure - If set to null, this property indicates that the resource property is a single value; if set to list, it indicates that the resource property is a list of elements.

Example Request

GET /api/tmsm/1.0/template/lbaas

Response

An example response is shown below:

{
"description": "Load Balancing as a Service",
"long_description": "This template allows the generation of a Load Balancing Service.",
"min_stm_version": "9.6",
"parameters": [
{ "children": {}, "choices": [ "fastest_response_time", "least_connections", "perceptive", "random", "round_robin" ], "default": "round_robin", "depends": {}, "description": "Algorithm used to decide where to distribute traffic to", "name": "lb_algorithm", "required": false, "structure": null, "type": "enum" },
{ "children": {}, "choices": [ "http", "https", "tcp", "ssl" ], "depends": {}, "description": "The protocol being balanced, either HTTP, HTTPS, TCP, or SSL.", "name": "protocol", "required": true, "structure": null, "type": "enum" },
{ "children": {}, "default": "/", "depends": {}, "description": "The URI path to use in the HTTP monitor test", "name": "monitor_path", "regex": "^/[^ ]*$", "required": false, "structure": null, "type": "regex" },
{ "children": {}, "default": 2048, "depends": {}, "description": "Maximum amount of data to read back from a server for a monitoring request", "name": "monitor_resp_length", "required": false, "structure": null, "type": "int" },
{ "children": {}, "default": 5, "depends": {}, "description": "The amount of time (in seconds) to wait for a response before marking a health probe as failed", "name": "monitor_timeout", "range": " ", "required": false, "structure": null, "type": "int" },
{ "children": {}, "default": 2, "depends": {}, "description": "The period between two consecutive health checks", "name": "monitor_interval", "range": " ", "required": false, "structure": null, "type": "int" },
{ "children": {}, "default": false, "depends": { "true": { "protocol": [ "http", "tcp" ] } }, "description": "Enable SSL encryption to the backend nodes for the service", "name": "ssl_encrypt", "required": false, "structure": null, "type": "boolean" },
{ "children": {}, "default": "^[234][0-9][0-9]$", "depends": {}, "description": "The regex value that the HTTP monitor test response code should match against", "name": "monitor_status_regex", "required": false, "structure": null, "type": "string" },
{ "children": {}, "depends": {}, "description": "Cookie name to use for persistence (if session_persistence_type is 'cookie')", "name": "session_persistence_cookie", "regex": "[A-Za-z0-9_.-]+", "required": false, "structure": null, "type": "regex" },
{ "children": {}, "default": 3, "depends": {}, "description": "The number of failures before a node is marked as failed", "name": "monitor_failure_threshold", "range": " ", "required": false, "structure": null, "type": "int" },
{ "children": { "cookie": [ "session_persistence_cookie" ] }, "choices": [ "asp", "cookie", "ip", "j2ee", "named", "ssl", "transparent", "universal", "x-zeus" ], "depends": {}, "description": "The type of session persistence used for the service", "name": "session_persistence_type", "required": false, "structure": null, "type": "enum" },
{ "children": {}, "depends": {}, "description": "The port number on which we listen for incoming connections on the front-end IPs", "name": "front_end_port", "range": " ", "required": true, "structure": null, "type": "int" },
{ "children": {}, "depends": {}, "description": "The host header value to use in the HTTP monitor test", "name": "monitor_host_header", "required": false, "structure": null, "type": "string" },
{ "children": { "url": [ "session_persistence_redirect_url" ] }, "choices": [ "close", "new_node", "url" ], "default": "new_node", "depends": {}, "description": "Determines what happens in the case of a persistence failure", "name": "session_persistence_failure_mode", "required": false, "structure": null, "type": "enum" },
{ "children": {}, "depends": {}, "description": "A set of IP addresses to be raised by the LBaaS service, on which we'll listen for incoming connections", "name": "front_end_ips", "required": true, "structure": "list", "type": "ipaddress" },
{ "children": {}, "depends": {}, "description": "Private key, must be set if ssl_offload is in use.", "name": "private_key", "required": false, "structure": null, "type": "string" },
{ "children": {}, "default": false, "depends": {}, "description": "Whether or not the monitor should connect using SSL", "name": "monitor_use_ssl", "required": false, "structure": null, "type": "boolean" },
{ "children": {}, "depends": {}, "description": "URL to redirect connection to in the case of a persistence failure", "name": "session_persistence_redirect_url", "regex": "^((((https?) (rtsp))://) (sip:))\\s+(:\\d+)?(/\\s*)?", "required": false, "structure": null, "type": "regex" },
{ "children": {}, "depends": {}, "description": "A set of IP address/port pairs to which traffic should be balanced", "name": "back_end_nodes", "required": true, "structure": "list", "type": "node" },
{ "children": { "true": [ "public_certificate", "private_key" ] }, "default": false, "depends": {}, "description": "Enable SSL decryption for the service", "name": "ssl_offload", "required": false, "structure": null, "type": "boolean" },
{ "children": { "true": [ "session_persistence_type" ] }, "default": false, "depends": {}, "description": "Enable session persistence for the service", "name": "session_persistence", "required": false, "structure": null, "type": "boolean" },
{ "children": {}, "default": true, "depends": {}, "description": "Boolean value, determines whether the connection is closed in the case of a session persistence failure", "name": "session_persistence_failure_delete", "required": false, "structure": null, "type": "boolean" },
{ "children": {}, "depends": {}, "description": "The regex value that the HTTP monitor test response body should match against", "name": "monitor_body_regex", "required": false, "structure": null, "type": "string" },
{ "children": {}, "depends": {}, "description": "Public certificate, must be set if ssl_offload is in use.", "name": "public_certificate", "required": false, "structure": null, "type": "string" },
{ "children": {}, "depends": {}, "description": "The value of the basic auth header to use in a HTTP request, format '<username>:<password>'", "name": "monitor_auth", "regex": "^.*?:.*?$", "required": false, "structure": null, "type": "regex" },
{ "children": {}, "choices": [ "connect", "http" ], "default": "connect", "depends": {}, "description": "The scheme used (if any) to monitor back end node health", "name": "monitor_type", "required": false, "structure": null, "type": "enum" },
{ "children": { "true": [ "monitor_type" ] }, "default": false, "depends": {}, "description": "Enable service health monitoring", "name": "health_monitoring", "required": false, "structure": null, "type": "boolean" }
]
}

service Resource

The /api/tmsm/1.0/service resource contains the children property, which is a list of the names and hrefs of all user created services.

Example Request

GET /api/tmsm/1.0/service/

Response

{
  "children": [
    { "href": "/api/tmsm/1.0/service/myelbaas1", "name": "myelbaas1" }
  ]
}

Properties

A service resource represents a user-created service based on a service template. Each service resource is made up of a combination of common properties, which are provided for a service resource regardless of the service type, and template-specific properties. The common properties include a mandatory service_type property that names the template resource that the service is based on. The value of the service_type property determines the template-specific properties that are available to you when you create the service.

Service properties for an ELBaaS service are summarized in the following table.

service_type (Create) - The name of the service template used for this service. For an ELBaaS service, this is set to LBaaS. Note: For an ELBaaS service, the elastic property must be set to true.

status (Create/Update) - A service is in one of the following states at any given time: Unknown, Creating, Inactive, Starting, Stopping, Active, Changing, Deleting, Deleted, Config Failed. The default value is Inactive. For details, see ELBaaS Service Life Cycle on page 91.

stm_feature_pack (Create/Update) - The name of the feature_pack resource applied to the instances in the service cluster. You can provide a value for this property when creating a service or allow the Services Controller to auto-select a suitable resource for the template in use.

elastic (Create/Update) - A Boolean property that enables/disables the autoscaling (elastic) feature for a load-balancing service. For ELBaaS, this is always set to true, and you must specify elastic_configuration properties.

num_instances (Value set only by the Services Controller) - You cannot set or modify this property when elastic is set to true, because this property is only used internally and is dynamically changed by the scaling logic. This signifies the current number of instances that has been deemed reasonable by the scaling logic.

elastic_configuration (Create/Update) - The elastic-specific properties in the load balancing service definition: elastic, min_instances, max_instances, poll_interval, monitoring_cycles_before_scaling, refractory_period, scaling_metric, average_cpu, enabled, scale_down_threshold, and scale_up_threshold.

min_instances (Create/Update) - The minimum number of instances that are maintained for an ELBaaS service. The service requires this field to be specified. The service should never go below this number of instances from a scaling point of view. However, if some instances have failed and there are no active hosts to deploy to, then deployment_strength is less than 100%. For an ELBaaS service, this should be less than or equal to max_instances.

max_instances (Create/Update) - The maximum number of instances that the service will scale to. This has no default value for an ELBaaS service; the service requires this field to be specified. For an ELBaaS service, this should be greater than or equal to min_instances. Note: This is also the minimum number of front-end IP addresses.

poll_interval (Create/Update) - The frequency, in seconds, of CPU usage collection for service instances. The default is 60.

monitoring_cycles_before_scaling (Create/Update) - Specifies the number of polling cycles where the usage is above or below the thresholds to trigger a scale up or scale down event. This value ensures that data points collected persist for a number of cycles before a scaling decision is taken. Defaults to 5.

refractory_period (Create/Update) - The minimum amount of time, in seconds, between two scaling events. This setting is essentially the stabilization period after a change in instance cluster size. Defaults to 180.

scaling_metric (Create/Update) - The scaling mechanism for the ELBaaS service. Currently, only average_cpu is supported.

average_cpu (Create/Update) - This has enabled, scale_up_threshold, and scale_down_threshold properties.

enabled (Create/Update) - Enables the average_cpu scaling mechanism. This is currently the only scaling metric mechanism that is supported, and it must be enabled for scaling of the ELBaaS service to occur using scale_up_threshold and scale_down_threshold.

deployment_strength (Value set only by the Services Controller) - An integer in the range from 0 to 100 that represents the number of instances deployed and working in the service cluster as a percentage of num_instances. This value is calculated by the Services Controller; you cannot update this value.

scale_down_threshold (Create/Update) - The percentage of average CPU usage below which a scale down will occur. This threshold must persist for monitoring_cycles_before_scaling for a scale down to occur. Choosing a value for this threshold is described in Understanding Scaling Thresholds on page 101. Note: As more services are added to or removed from a host, the scale_down_threshold for all services on the host may need to be revisited.

scale_up_threshold (Create/Update) - The percentage of average CPU usage above which a scale up will occur. This threshold must persist for monitoring_cycles_before_scaling for a scale up to occur. The default is 90, which is suitable for a single service running on an instance host. Choosing a value for this threshold is described in Understanding Scaling Thresholds on page 101. Note: As more services are added to or removed from a host, the scale_up_threshold for all services on the host may need to be revisited.

instance_health_monitor_interval (Create/Update) - The interval in seconds between monitoring checks on the instances in the service cluster.

instance_health_fail_trigger (Create/Update) - The time in seconds that must elapse during which no successful health checks have occurred on a service instance before it is considered unhealthy and removed from the service cluster.

stm_rest_version (Create/Update) - Determines which version of the Traffic Manager REST API should be used by ELBaaS for communicating with service instances when setting and checking the configuration. The default value is 3.0 (that is, the latest REST API version available in the Traffic Manager v9.7 release). This property should not be changed if you are using Traffic Manager v9.7 for creating the service cluster.

stm_version (Create/Update) - The name of the version resource used when deploying service instances. You can provide a value for this property when creating a service or allow the Services Controller to auto-select a suitable resource for the template in use. If you omit the Traffic Manager version, the latest version of all the available version resources is selected.

license_name (Create/Update) - The name of the license resource that must be installed in instances in the service cluster.

instance_bandwidth (Create/Update) - The maximum bandwidth allowed for each instance in the service cluster. In the Enterprise licensing model, the number of instances that are deployed for a service depends on a combination of instance_bandwidth, num_instances, and the stm_feature_pack in use.

front_end_ips (Create/Update) - A set of IP addresses to be raised by the ELBaaS service on which the service will listen for incoming connections. Presented as a JSON array of strings, for example: [" ", " "]. Note that the minimum number of front-end IPs is defined by the max_instances property.

front_end_port (Create/Update) - The port number in the range from 0 to 65535 on which the Services Controller listens for incoming connections on the front-end IP addresses.

back_end_nodes (Create/Update) - A set of IP address/port pairs to which traffic should be balanced. Presented as a JSON array of strings, for example: [" :80", " :80"].

lb_algorithm (Create/Update) - The algorithm that decides where to distribute traffic. It is one of the following: fastest_response_time, least_connections, perceptive, random, round_robin. The default value is round_robin. Setting the lb_algorithm property to null reverts to the default value of round_robin.

protocol (Create/Update) - The protocol being balanced. It is one of the following values: http, https, tcp, ssl.

ssl_offload (Create/Update) - Whether to enable Secure Socket Layer (SSL) decryption for the service (that is, true or false). The public_certificate and private_key properties must be set correctly if this property is true. Note: To be a valid JSON object, the "\n" newlines in the public certificate and private key must be escaped. That is, "\\n" instead of "\n".

ssl_encrypt (Create/Update) - Whether to enable SSL encryption to the back-end nodes for the service. The protocol property must be set to http or tcp if this property is true.

public_certificate (Create/Update) - A PEM encoded public certificate encoded as a JSON string. This property must be set if the ssl_offload property is set to true. Note: To be a valid JSON object, the "\n" newlines in the public certificate and private key must be escaped. That is, "\\n" instead of "\n".

private_key (Create/Update) - A PEM encoded private key encoded as a JSON string. This property must be set if the ssl_offload property is set to true. Note: To be a valid JSON object, the "\n" newlines in the public certificate and private key must be escaped. That is, "\\n" instead of "\n".

health_monitoring (Create/Update) - Whether to enable health monitoring for the back-end nodes (that is, true or false). The monitor_type property must be set if this property is true.

monitor_type (Create/Update) - The scheme used (if any) to monitor back-end node health, either connect or http.

monitor_use_ssl (Create/Update) - Whether or not the monitor should connect using SSL, either true or false.

monitor_timeout (Create/Update) - The period of time, in seconds, to wait for a response before marking a health probe as failed.

monitor_interval (Create/Update) - The period of time, in seconds, between two consecutive health checks.

monitor_failure_threshold (Create/Update) - The number of monitoring failures before a back-end node is marked as failed.

monitor_host_header (Create/Update) - The value to use for the host header in the HTTP monitor test.

monitor_path (Create/Update) - The URI path to use in the HTTP monitor test.

monitor_auth (Create/Update) - The value of the basic auth header to use in a HTTP request; a string in the following format: <username>:<password>

monitor_status_regex (Create/Update) - The regex value that the HTTP monitor test response code should match against.

monitor_body_regex (Create/Update) - The regex value that the HTTP monitor test response body should match against.

monitor_resp_length (Create/Update) - Maximum amount of data to read back from a server for a monitoring request.

session_persistence (Create/Update) - Whether to enable session persistence for the service. The session_persistence_type property must be set if this property is true.

session_persistence_type (Create/Update) - The type of session persistence used for the service. It is one of the following values: asp, cookie, ip, j2ee, named, ssl, transparent, universal, x_zeus.

session_persistence_cookie (Create/Update) - If session_persistence_type is set to cookie, the cookie name to use for persistence.

session_persistence_failure_mode (Create/Update) - Determines what happens in the case of a persistence failure. It is one of the following values: close, new_node, url. This property must be set if session_persistence_type is set.

session_persistence_failure_delete (Create/Update) - Determines whether the connection is closed in the case of a session persistence failure, either true or false.

session_persistence_redirect_url (Create/Update) - URL to redirect the connection to in the case of a persistence failure.

Example Request

GET /api/tmsm/1.0/service/myelbaas1

Response

An example response is shown below (with generalized IP addresses):

{
  "back_end_nodes": [ "XX.XX.XX.XX:80", "XX.XX.XX.XX:80" ],
  "deployment_strength": 100,
  "elastic": true,
  "elastic_configuration": {
    "min_instances": 1,
    "max_instances": 3,
    "monitoring_cycles_before_scaling": 3,
    "poll_interval": 60,
    "refractory_period": 180,
    "scaling_metric": {
      "average_cpu": {
        "enabled": true,
        "scale_up_threshold": 50,
        "scale_down_threshold": 10
      }
    }
  },
  "front_end_ips": [ "YY.YY.YY.YY", "YY.YY.YY.YY", "YY.YY.YY.YY", "YY.YY.YY.YY" ],
  "front_end_port": 20000,
  "instance_bandwidth": 1000,
  "instance_health_fail_trigger": 120,
  "instance_health_monitor_interval": 60,
  "instance_num_worker_processes": 1,
  "license_name": "fla-ssl",
  "num_instances": 1,
  "protocol": "HTTP",
  "service_type": "LBaaS",
  "status": "Active",
  "stm_feature_pack": "fp1",
  "stm_rest_version": "3.0",
  "stm_version": "9.7"
}
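Because most service properties support update, an individual template-specific property can be changed on an existing service with a PUT carrying just that property. The following sketch reuses the myelbaas1 service from the example above (authentication headers omitted):

PUT /api/tmsm/1.0/service/myelbaas1 HTTP/1.1
Content-Type: application/json

{
"lb_algorithm": "least_connections"
}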

task Resource

The /api/tmsm/1.0/task resource contains a property children that is a list of the names and hrefs of all user created tasks.

Example Request

GET /api/tmsm/1.0/task/

Response

{
  "children": [
    { "href": "/api/tmsm/1.0/task/1", "name": "1" }
  ]
}

Properties

The Services Controller maintains the service cluster for you by deploying, deleting, and failing-over individual instances within the cluster automatically. Occasionally, the Services Controller is unable to automatically clean up instances (for example, due to network connectivity or host failures). In these cases, the Services Controller ejects the instance from the service cluster and creates a task resource for the cleanup operation that you (that is, the service administrator) must perform manually. The set of tasks under /api/tmsm/1.0/task/ is a to-do list that the administrator works through to make sure service hosts do not retain ejected instances.

After the cleanup operation has been completed, you can mark the task as done by performing a REST API PUT operation that sets the status property to Complete. You can also delete completed tasks.

failure_reason - A digest of the error messages that were received during the automatic cleanup operation.

host_name - The name of the instance host affected by the problem.

info - A description of the failure (including the instance name).

status - Either Required when the task is still outstanding or Complete when it has been performed by the administrator.

task_type - Currently, Cleanup is the only task type, requiring you to manually stop (if appropriate) and then manually remove the installed instances from service hosts.

Example Request

GET /api/tmsm/1.0/task/1

Response

{
  "failure_reason": "Failed to copy file /opt/riverbed_ssc_1.1/scripts/configure_clean.replay to [email protected]:/space/workspace/configure_clean.replay with exit code 255\nssh: connect to host myhost-02.mydomain.com port 22: Connection refused\r\n",
  "host_name": "myhost-02.mydomain.com",
  "info": "Failed to delete instance ELBaaS_myELBaaS1_1",
  "status": "Required",
  "task_type": "Cleanup"
}
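Following the pattern described above, a sketch of the PUT that marks this task as done after the manual cleanup has been carried out (authentication headers omitted):

PUT /api/tmsm/1.0/task/1 HTTP/1.1
Content-Type: application/json

{
"status": "Complete"
}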

CHAPTER 7 Using the REST API with the Services Controller

The REST API is the primary means of communicating with the Services Controller for all configuration and control purposes. This chapter includes the following sections:

Introducing REST on page 123
Authentication on page 124
URI Root Parts on page 124
Inventory Resources on page 125
Resource Reference on page 126
Using the REST API to Check Status on page 153
Understanding REST Request Errors on page 154

Introducing REST

Representational State Transfer (REST) is a framework for API design. It is based on generic facilities of the standard HTTP protocol, including the six basic HTTP methods (GET, POST, PUT, DELETE, HEAD, and INFO) and the full range of HTTP return codes. A REST interface partitions the API into a series of resources, each of which you can access using one or more HTTP methods.

With the Services Controller, only the GET, PUT, and POST methods are used. (The action resource also supports the DELETE method due to the transient nature of the data contained within it. For more details, refer to action Resource on page 126.)

Each method operates on the Services Controller as follows:

GET - Obtain a representation of the resource without modifying the server state (except perhaps for logging purposes).

PUT - Create a new resource or apply some change to a resource. If the resource exists, only those properties specified in the request are modified; all others remain unchanged. If a resource object does not exist, a new one is created.

POST - Create a new resource based on the details contained in the request body. If the resource exists, it is overridden. This method applies to the controller_license_key and bandwidth_pack_license_key resources only. The add_on_pack_license_key resource also supports the POST method for resource creation.

Note: You cannot delete resources, for auditing purposes (with the exception of the action resource). Instead, mark a resource as inactive by altering its status property. You cannot mark a resource as inactive if it is in use, and it cannot be altered after you mark it as inactive (the name cannot be reused).

An Accept header, if present, provides a list of acceptable MIME types. If you specify an Accept header in your request, it must allow a MIME type of application/json:

Accept: application/json

The Content-Type header used with the PUT and POST methods must match the content type expected by each resource, which is typically application/json. However, when you POST license keys to <license type>_license_key resources, you must set the Content-Type header to plain text:

Content-Type: text/plain

Each resource is uniquely identified with an address or uniform resource identifier (URI). In other words, if you know the URI, you can access the resource (subject to the authorization and authentication process). Because all resources have URIs, resources can point to other resources by embedding the URIs of related resources within their representations.

In the Services Controller, all resources are represented as JavaScript Object Notation (JSON) structures. Requests and responses that interact with the Services Controller through the REST API must adopt the same format.

The full range of HTTP return codes is available in REST, although in practice you can identify and apply a useful subset consistently. For example, the response code can tell you whether a request has succeeded or not without any need to parse the body of the response. However, the Services Controller always attempts to provide extra information regarding a failure in the response body.

Authentication

All requests to the Services Controller REST API must be authenticated by means of HTTP Basic Authentication. You must create an initial Services Controller user outside of the REST API, but you can create and manage other users using the REST API.

You must access the Services Controller REST API through HTTPS. Client certificates are not checked for validity, and HTTPS is used only for encryption and to allow the FLA license to verify the server identity.

URI Root Parts

All inventory database resources are provided through a common base URI that identifies the root of the resource model:

https://<host>:<port>/api/tmcm/1.4/

In this example, <host> is the hostname of the server containing the inventory database, and <port> is the port that the REST API is published on. You can find all inventory resources at this URI. You can perform a GET request on any level of the base URI to obtain a list of the child elements it contains.
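Putting the authentication and header requirements together, a GET on the base URI might look like the following sketch; the credentials behind the Authorization header (admin:password) are placeholders only:

GET /api/tmcm/1.4/ HTTP/1.1
Host: <host>:<port>
Accept: application/json
Authorization: Basic YWRtaW46cGFzc3dvcmQ=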

Inventory Resources

This table summarizes the inventory resources. Each of the inventory resources is located under a specific URI.

/api/tmcm/1.4/action - The list of pending, blocked, or waiting deployment actions. See action Resource on page 126.

/api/tmcm/1.4/add_on_pack_license_key - The list of installed Add-On license keys. See add_on_pack_license_key Resource on page 127.

/api/tmcm/1.4/add_on_sku - The list of supported Add-On SKUs. See add_on_sku Resource on page 127.

/api/tmcm/1.4/bandwidth_pack_license_key - The list of installed Bandwidth Pack license keys. See bandwidth_pack_license_key Resource on page 128.

/api/tmcm/1.4/cluster - The list of defined Services Controller clusters. See cluster Resource on page 131.

/api/tmcm/1.4/controller_license - The list of installed Services Controller licenses. See controller_license Resource on page 132.

/api/tmcm/1.4/controller_license_key - The list of installed Services Controller license keys. See controller_license_key Resource on page 133.

/api/tmcm/1.4/feature_pack - The list of feature packs that you can apply to Traffic Manager instances. See feature_pack Resource on page 135.

/api/tmcm/1.4/host - The list of Traffic Manager instance hosts on which you can deploy Traffic Manager instances. See host Resource on page 135.

/api/tmcm/1.4/instance - The list of Traffic Manager instances. See instance Resource on page 137.

/api/tmcm/1.4/license - The list of license files that you can apply to instances. See license Resource on page 146.

/api/tmcm/1.4/manager - The list of individual Services Controller instances that share the same database. The list also contains mode settings that you can manipulate to achieve HA for the Services Controller. See manager Resource on page 147.

/api/tmcm/1.4/monitoring - A read-only resource containing monitoring state data on Services Controllers and Traffic Manager instances in your deployment. See monitoring Resource on page 148.

/api/tmcm/1.4/service - The list of services on the Services Controller. See service Resource on page 149.

/api/tmcm/1.4/sku - The list of SKUs that you can use to create feature packs to apply to Traffic Manager instances. See sku Resource on page 150.

/api/tmcm/1.4/user - The set of Services Controller administrative users. See user Resource on page 151.

/api/tmcm/1.4/version - The list of Traffic Manager versions you can apply to instances. See version Resource on page 152.

Note: All resource names can be any acceptable URL part. URL encoding allows characters such as spaces. These might not be legitimate user or hostnames in the underlying system, but this is not checked or enforced by the REST API.

Resource Reference

This section contains a full description of the resource objects whose data you can obtain from the Services Controller REST API. Each resource contains properties and a set of rules governing how you can interact with its properties.

action Resource

An action resource describes a deployment action. Whenever a REST request that affects an instance resource triggers a deployment action, an action resource is created. The resource is removed when the action is completed. An action resource can persist for the following reasons:

If the Services Controller experiences a failure or interruption before an action is completed, the action resource is retained and is retried when the Services Controller recovers.

If an action fails, the action resource is retained and marked as Blocked. It is not automatically retried, but it can be queued for implementation after any underlying problem has been addressed.

The REST API does not allow direct creation of an action resource. However, you can delete an action resource by making a DELETE request. You cannot recover a deleted action.

An action resource contains the following properties.

request_user (Read Only) - The name of the user whose request triggered the action.

request_ip (Read Only) - The IP address of the request that triggered the action.

action_type (Read Only) - A string representation of the action: DEPLOY, START, STOP, UPGRADE, or DELETE.

action_args (Update) - A string representation of the arguments to the action.

status (Update) - The status of the action resource: Waiting (scheduled and waiting to be implemented), Pending (currently being processed), or Blocked (an error occurred).

created (Read Only) - A timestamp string representation of the date and time that the action was created.

instance (Read Only) - A structure with the name and href of the Traffic Manager instance that the action is intended to change.

blocked (Read Only) - A string representation of the date and time when the action was blocked (only applicable when the status is Blocked).

block_reason (Read Only) - A description of the reason the action was blocked, intended to aid in debugging and fixing the problem (only applicable when the status is Blocked).

You can change the status of an action resource from Blocked to Waiting through a PUT request (which causes the request to be reattempted). You can also change its action_args property. No other property changes are supported.
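A sketch of the unblocking PUT described above; the action name shown (1) is an assumed identifier that would normally be read from a GET on the action collection (authentication headers omitted):

PUT /api/tmcm/1.4/action/1 HTTP/1.1
Content-Type: application/json

{
"status": "Waiting"
}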

add_on_pack_license_key Resource

An add_on_pack_license_key resource describes the contents of a decoded Services Controller Add-On license key. An add_on_pack_license_key resource contains the following properties.

bandwidth (Read Only) - The bandwidth limit for this license.

controller_license (Read Only) - The optional associated Services Controller controller_license_key resource.

feature_sku (Read Only) - The feature sku resource this key applies to.

license_key (Read Only) - The license key string.

serial (Read Only) - The serial number of this license.

timestamp (Read Only) - A timestamp encoded in this license.

valid (Read Only) - Describes whether the key was successfully validated (with a currently active controller license key): true or false.

valid_from (Read Only) - The license start date (Perpetual).

valid_until (Read Only) - The license end date (Perpetual).

Create a new add_on_pack_license_key resource by making a POST request to the resource with the license key text in the request body:

POST /api/tmcm/1.4/add_on_pack_license_key HTTP/1.1
Content-Type: text/plain

LK1-ERSSCAPFIPS:1:VXTNN000C5725: T ACB8-AE89-67EE

You must include a Content-Type header set to text/plain.

add_on_sku Resource

An add_on_sku resource defines a set of additional licensable features that can be added to those of an stm_sku to extend Traffic Manager or Services Controller functionality. You do not apply an add_on_sku directly to a Traffic Manager instance, but you can use it in a feature_pack to apply additional functionality to the base stm_sku of the pack. The add_on_sku resources are preinstalled in the Services Controller software, based on sets of features described by existing Traffic Manager template licenses and additional Services Controller features (such as LBaaS).

The add_on_sku resource is read-only; PUT and POST HTTP requests to create or modify add_on_sku resources are not possible.

An add_on_sku resource contains the following properties.

info (Read Only) - An optional descriptive string.

features (Read Only) - The features enabled by this add_on_sku. The features property is a string containing a space-separated list of licensable Traffic Manager or Services Controller feature names that the add_on_sku enables. See sku Resource on page 150.

status (Read Only) - The status of this resource: Active or Inactive.
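Because add_on_sku is read-only, interaction with it is limited to GET requests; a sketch of listing the preinstalled Add-On SKUs (authentication headers omitted):

GET /api/tmcm/1.4/add_on_sku HTTP/1.1
Accept: application/json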

bandwidth_pack_license_key Resource

A bandwidth_pack_license_key resource describes the contents of a decoded Services Controller Bandwidth Pack license key. A bandwidth_pack_license_key resource contains the following properties.

valid (Read Only) - Determines whether the key was successfully validated (with the currently active Services Controller license): true or false.

status (Update) - Determines whether the key is currently in use by the Services Controller: Active or Inactive. You can set this property to Active only.

valid_from (Read Only) - The license start date (Perpetual).

valid_until (Read Only) - The license end date (Perpetual).

stm_sku (Read Only) - The sku resource this license applies to.

bandwidth (Read Only) - The bandwidth limit for this license.

serial (Read Only) - The serial number of the Bandwidth Pack license.

timestamp (Read Only) - The timestamp of the Bandwidth Pack license.

controller_license_serial (Read Only) - The serial number of the associated Services Controller license.

controller_licenses (Read Only) - An optional list of associated Services Controller license resources.

license_key (Read Only) - The license key string.

expiry_warning_days (Read Only) - The number of days warning that is given for an impending license expiry.

Create a new bandwidth_pack_license_key resource by making a POST request to the resource with the license key text in the request body:

POST /api/tmcm/1.4/bandwidth_pack_license_key HTTP/1.1
Content-Type: text/plain

LK1-ERSSCTPSTM_B_200:1:VXTNN000C5725: T ACB8-AE89-67EE

Note: You must include a Content-Type header set to text/plain.

Unlike other REST API resources, the Services Controller determines the name of the created resource and the content of each property based on the license key you use. If successful, the Services Controller returns a standard POST response of HTTP/1.1 201 Created and the resource name in the Content-Location header.

The response body contains a JSON structure representing the properties of the created resource:

{
  "valid": "true",
  "status": "Active",
  "valid_from": "Perpetual",
  "valid_until": "Perpetual",
  "stm_sku": "STM-B-200",
  "bandwidth": "1",
  "serial": "VXTNN000C5725",
  "timestamp": " T09:31: ",
  "controller_license_serial": "00003",
  "controller_license": "ERSSC00003-XXXX-YYYY",
  "license_key": "LK1-ERSSCTPSTM_B_200:1:VXTNN000C5725: T ACB8-AE89-67EE"
}

Note: The request body in a POST request to the bandwidth_pack_license_key resource must contain exactly one valid Bandwidth Pack license key. The bandwidth property of the controller_license_key resource on which it depends is updated as appropriate.

After a bandwidth_pack_license_key resource has been created, the Services Controller typically activates it automatically. The exception to this is when activation would cause the current deployment to have insufficient licensed bandwidth; this is the case when an existing Bandwidth Pack license provides bandwidth, in use by the Services Controller, that would not be provided by the new Bandwidth Pack license. Activation also fails in the following circumstances:

A license has a valid_from date in the future.

A license has a valid_until date in the past.

Activating a license would result in the Services Controller deployment having insufficient licensed bandwidth for its current Traffic Manager instance configuration. (This is the case when multiple Bandwidth Packs have been issued with the same license key; activating one pack requires deactivation of whichever pack is currently active.)

The license is invalid.

You can activate a Bandwidth Pack license manually by setting the status property to Active through a PUT request:

PUT /api/tmcm/1.4/bandwidth_pack_license_key/vxtnn000c HTTP/1.1

{"status": "Active"}

After the license has been activated, you cannot set this property back to Inactive.

Note: You cannot activate invalid license keys (including those keys that do not validate against any installed Services Controller license). These keys have their valid property set to false.

The Services Controller populates the controller_license property with any matching controller_license resource this Bandwidth Pack license is validated against. The bandwidth licensed by the Bandwidth Pack is listed in the controller_license_key resource when queried. If no matching Services Controller license is found, valid is set to false and controller_license is left blank.

If you attempt to reinstall an existing Bandwidth Pack license, the Services Controller license state is unchanged. If you attempt to install a malformed or invalid key, the Services Controller responds with HTTP/1.1 400 Bad Request and an appropriate error message in the response body.

To view existing licenses, perform a GET request for the bandwidth_pack_license_key resource. The response to this request contains a JSON structure representing the list of installed licenses:

{
  "children": [
    { "href": "/api/tmcm/1.4/bandwidth_pack_license_key/vxtnn000c ", "name": "VXTNN000C " },
    { "href": "/api/tmcm/1.4/bandwidth_pack_license_key/vxtnn000c ", "name": " VXTNN000C " },
    { "href": "/api/tmcm/1.4/bandwidth_pack_license_key/vxtnn000c ", "name": " VXTNN000C " }
  ]
}

To view the details for an individual Bandwidth Pack license key, perform a GET request for the specific license resource. The response to this request contains a JSON structure representing the license resource. The properties displayed are the decoded contents of the license key:

{
  "valid": "true",
  "status": "Active",
  "valid_from": "Perpetual",
  "valid_until": "Perpetual",
  "stm_sku": "STM-B-200",
  "bandwidth": "1",
  "serial": "VXTNN000C5720",
  "timestamp": " T09:31: ",
  "controller_license_serial": "00003",
  "controller_license": "ERSSC00003-XXXX-YYYY",
  "license_key": "LK1-ERSSCTPSTM_B_200:1:VXTNN000C5720: T ACB8-AE89-67EE"
}

To delete a Bandwidth Pack license, use the DELETE request method with the desired bandwidth_pack_license_key resource in the URI:

DELETE /api/tmcm/1.4/bandwidth_pack_license_key/vxtnn000c HTTP/1.1

If you attempt to delete a Bandwidth Pack license key that is used to provide bandwidth for the Services Controller's current configuration of Traffic Manager instances, the request is rejected with an HTTP/1.1 400 Bad Request status code.

cluster Resource

A cluster resource describes a Traffic Manager cluster. You put the name of the cluster resource into the cluster_id property of each Traffic Manager instance resource you want to add to that cluster.

Creating a cluster resource does not automatically create the actual Traffic Manager cluster. You must first create the Traffic Manager instance resources (which in turn triggers deployment of your Traffic Manager instances) with the cluster_id property set to the name of the cluster resource. The Services Controller automatically clusters together all Traffic Manager instance resources using the same cluster_id property during deployment.

Note: When adding a Traffic Manager instance to an existing cluster, you must have at least one Traffic Manager instance already running (with a status property of Active) within the cluster; otherwise, resource creation fails.

When creating a cluster resource, you can specify the following properties.

owner (Create) - The name of the owning customer. This property is optional and not validated. It is intended for accounting purposes.

status (Update) - The status of this resource: Active or Inactive.

cluster_port_offset (Create) - Provides an offset for ports used by the Services Controller that need to be consistent across all members of a cluster. Use a value in the range 0 to

There are no required properties for this resource, but the request body must be a valid JSON object. The cluster name is the resource name.

The response to a GET request for a cluster resource contains an additional read-only members property. This list contains the names of all Traffic Manager instances in the cluster:

{
  "owner": "",
  "status": "active",
  "cluster_port_offset": "0",
  "members": [ "TM1", "TM2", "TM3" ]
}

You can set the owner or cluster_port_offset properties only when first creating a cluster resource. You cannot update these properties in an existing cluster resource.

The Services Controller uses cluster_port_offset to calculate port numbers for ports that must be consistent across all Traffic Manager instances within a cluster. This property operates with equivalent functionality to the port_offset setting from the config_options property of the instance resource, but with cluster-wide, rather than per-instance, applicability. For clustered instances, you must specify a value for both:

the cluster_port_offset property of the cluster resource.

the port_offset property in the config_options property of the instance resource.
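A sketch of creating such a cluster resource; the cluster name (cluster1), owner string, and offset are illustrative values, and authentication headers are omitted:

PUT /api/tmcm/1.4/cluster/cluster1 HTTP/1.1
Content-Type: application/json

{
"owner": "Example Customer",
"cluster_port_offset": "0"
}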

Note: Whenever the config_options property is set, all currently modified options must be specified again in the REST call. Any options that are not specified lose their current value and are reset to their default value. Any change to the config_options settings causes a restart of the instance.

For example, the port number specified in the Traffic Manager's flipper!unicast_port configuration key must be the same on each Traffic Manager instance in a particular cluster. Use the cluster_port_offset property to allow the Services Controller to reserve a suitable port range for this purpose. See cluster Resource on page 131 for more information about the effect of offsets.

Note: You must not specify the same cluster_port_offset value in more than one active cluster resource.

If you do not specify cluster_port_offset when creating a cluster resource, a Traffic Manager's flipper!unicast_port configuration setting is offset according to the port_offset value set in the Traffic Manager instance's config_options property. For instances in such clusters, the REST API disallows attempts to use a port_offset value inconsistent with that of the other cluster instances.

Note: Any change to the config_options settings causes a restart of the instance.

You can mark the resource as inactive by changing the status property to Inactive.

controller_license Resource

A controller_license resource describes the license being used by the Services Controller to which you send the REST request. The resource supports only the GET method and is typically used only for identifying the license currently in use by a particular Services Controller. The controller_license_key resource is used to install Services Controller licenses, but Bandwidth Pack licenses must be installed using the bandwidth_pack_license_key resource.

A controller_license resource contains the following properties.

license_key - The full license key for this Services Controller license. (Read Only)

license_key_valid_from - The license start date. (Read Only)

license_key_valid_until - The license end date. (Read Only)
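Because the controller_license resource supports only GET, a typical use is a single read request. The following is a sketch only; it assumes the same /api/tmcm/1.4/ URI prefix used by the other resources in this chapter, and the hostname is a placeholder:

GET /api/tmcm/1.4/controller_license HTTP/1.1
Host: ssc.example.com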

controller_license_key Resource

A controller_license_key resource describes the contents of a decoded Services Controller license key.

A controller_license_key resource contains the following properties.

add_on_packs - The list of Add-On license keys associated with this Services Controller license key. This property is applicable to Enterprise license keys only; for CSP license keys, this property is omitted. (Read Only)

bandwidth - The dictionary of licensed SKUs and the bandwidth allowance supplied to each one by this license and its associated Add-On and Bandwidth Packs; that is, the bandwidth limit applicable to each licensed SKU under this license. For Enterprise license keys, it also lists dependent Bandwidth Packs. This property is applicable to Enterprise license keys only; for CSP license keys, this property is omitted. (Read Only)

bandwidth_packs - The list of Bandwidth Packs associated with this Services Controller license key. This property is applicable to Enterprise license keys only; for CSP license keys, this property is omitted. (Read Only)

cluster_bandwidth - The dictionary of licensed SKUs and the total bandwidth allowance supplied for each one by all valid license keys and associated Add-On and Bandwidth Packs. This property is applicable to Enterprise license keys only; for CSP license keys, this property is omitted. (Read Only)

license_key - The license key string. (Read Only)

license_type - The type of this license: Enterprise or Cloud Service Partner. (Read Only)

serial - The serial number of this license. (Read Only)

status - Determines whether the key is currently in use by the Services Controller: Active or Inactive. (Read Only)

valid_from - The license start date (Perpetual). (Read Only)

valid_until - The license end date (Perpetual). (Read Only)

Create a new controller_license_key resource by performing a POST request to the resource with the license key text (such as LK1-RSSC E30-3E8A AB) in the request body:

POST /api/tmcm/1.4/controller_license_key HTTP/1.1
Content-Type: text/plain

LK1-RSSC E30-3E8A AB

Note: You must include a Content-Type header set to text/plain.

Unlike other REST API resources, the Services Controller determines the name of the created resource and the content of each property based on the license key you use. If successful, the Services Controller returns a standard POST response of HTTP 201 Created and the resource name in the Content-Location header.

The response body contains a JSON structure representing the properties of the created resource:

{
   "add_on_packs": [],
   "bandwidth": {},
   "bandwidth_packs": [],
   "license_key": "LK1-ERSSC E30-3E8A AB",
   "license_type": "Enterprise",
   "serial": "123456",
   "status": "Active",
   "valid_from": " ",
   "valid_until": " "
}

The add_on_packs, bandwidth, and bandwidth_packs properties are applicable to Enterprise license keys only.

Note: The request body in a POST request to the controller_license_key resource must contain exactly one valid Services Controller license key.

License keys you install via the REST API in a clustered Services Controller deployment might need to be validated by a different cluster member and, as a result, might not be activated immediately.

If you attempt to reinstall an existing controller license key, the Services Controller license state is unchanged. If you attempt to install a malformed or invalid license key, the Services Controller responds with HTTP 400 Bad Request and an appropriate error message in the response body.

To view existing licenses, perform a GET request for the controller_license_key resource. The response to this request contains a JSON structure representing the list of installed licenses:

{
   "children" : [
      { "href" : "/api/tmcm/1.4/controller_license_key/xyz123", "name" : "XYZ123" },
      { "href" : "/api/tmcm/1.4/controller_license_key/abc123", "name" : "ABC123" }
   ]
}

To view the details for an individual controller license key, perform a GET request for the specific license resource. The response contains a JSON structure representing the license resource; the properties displayed are the decoded contents of the license key:

{
   "license_key": "LLL-RSSC E30-3E8A AB",
   "license_type": "Cloud Service Partner",
   "serial": "123456",
   "status": "Active",
   "valid_from": " ",
   "valid_until": " "
}

To delete a Services Controller license, use the DELETE request method with the desired controller_license_key resource in the URI:

DELETE /api/tmcm/1.4/controller_license_key/abc123 HTTP/1.1

If you attempt to delete an Enterprise license key used to provide licensed bandwidth for the current deployment, the request is rejected with an HTTP 400 Bad Request status code. You cannot delete Cloud Service Partner license keys, for metering reasons, until their valid_until date is at least 180 days in the past.

feature_pack Resource

A feature_pack resource describes a set of licensable features that you can apply to a Traffic Manager instance. A feature pack is defined relative to a SKU: it is defined by a list of features excluded from that SKU, and if the list is empty the feature pack includes every feature of the SKU. Therefore, a feature pack is always either the same as a SKU or a strict subset of a SKU. When you deploy or modify a Traffic Manager instance, the feature pack controls which licensable features are allowed (but does not specify bandwidth limits).

The add_on_skus field of the feature_pack resource is used to specify a list of one or more Add-On SKUs for the feature pack. Similar to base SKUs (that is, the stm_sku field), these are paid for via an Add-On license for Enterprise licensed customers and via metered usage for CSP licensed customers.

When creating a feature_pack resource, you can specify the following properties.

info - An optional descriptive string. You can change the info property by updating an existing feature_pack resource, but you cannot change the stm_sku and excluded properties. (Supported actions: Create/Update)

stm_sku - The name of the parent SKU. (Supported actions: Create)

add_on_skus - Optionally, a list of Add-On SKUs used by this feature pack. It can be an empty list if no Add-On SKUs are included. (Supported actions: Create)

excluded - A space-separated list of features excluded from the parent SKU. The excluded property can be an empty list (in which case the feature_pack includes all features from the parent SKU). The feature names in the excluded list must be only those from the parent SKU features property. (Supported actions: Create)

status - The status of this resource: Active or Inactive. You can mark the resource as inactive by changing the status property to Inactive. (Supported actions: Update)

host Resource

A host resource describes a Traffic Manager instance host you can use to deploy Traffic Manager instances. The host must have specific directories set up and an SSH user enabled for access from the Services Controller servers. Meeting these requirements is the responsibility of the Services Controller user.

If you use a host to deploy Traffic Manager instances outside containers, you must name the resource using the FQDN of the Traffic Manager instance host to allow the licensing server to operate correctly.

Note: If you create host resources solely for use with nonmanaged instances, the Services Controller does not use the username and install_root properties. In this case, you do not need to set up passwordless SSH access if the host is to be used solely by nonmanaged instances. For more information about nonmanaged instances, see instance Resource on page 137.

When creating a host resource, specify the following properties.

info - An optional descriptive string. (Supported actions: Create/Update)

work_location - The absolute path of a temporary directory to which you can copy and create files. (Supported actions: Create)

install_root - The absolute path of a directory under which Traffic Manager instances are created. Do not set the path to /var/lib/lxc/. (Supported actions: Create)

retained_info_dir - Defines the location to store TSRs and logs for all deleted service instances. (Supported actions: Create/Update)

username - The name of a user that is used by means of passwordless SSH to carry out actions on the host. For several purposes, the user should be root. (Supported actions: Create/Update)

usage_info - Describes the use of the host. Typically, this is empty for a nonservice host, and servicemanager for an LBaaS/ELBaaS service host. (Supported actions: Create/Update)

size - Defines the size (in instances) of the host. (Supported actions: Create/Update)

cpu_cores - An optional string that describes which CPUs are used for this host when it is a service host. If used, you can either specify a value in a form that is used by the taskset command (for example, "0,3,5-7") or set this property to an empty string. An empty string indicates that the host is not limited in its use of CPU cores; this is the default setting for the property if you do not specify a string. (Supported actions: Create/Update)

status - The status of this resource: Active or Inactive. (Supported actions: Update)

The hostname is the name of the host resource, and it must be resolvable in the local network environment.

The Services Controller does not perform checks on the validity of the work_location and install_root directories, nor does it check that the directories are absolute paths. Once defined, you cannot change the work_location and install_root directories.

You can mark the resource as inactive by changing the status property to Inactive.

The host REST API supports a single query parameter, status_check=true or status_check=false, on GET requests. The default is false; however, status_check=true causes a check of the host for network connectivity, user validity, and the state of the install_root and work_location directories.
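As an illustration of the status_check parameter, the following sketch requests a host resource named host1.example.com with the check enabled. The hostname is a placeholder, and the /api/tmcm/1.4/host path is an assumption based on the URI pattern used by the other resources in this chapter:

GET /api/tmcm/1.4/host/host1.example.com?status_check=true HTTP/1.1
Host: ssc.example.com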

If status_check=true is set on a GET request for a host resource, an extra property is included in the response:

status_check - A string that is empty (if there are no problems) or that contains a description of any problems found.

Riverbed recommends that you perform this check after creating a host resource.

instance Resource

An instance resource describes a Traffic Manager instance. Creating or altering an instance resource causes the Traffic Manager instance to be deployed, deleted, or altered. A REST request to create or alter an instance returns promptly, before the action is carried out (to avoid timeouts). You can then use a GET request to verify the status of the instance resource.

When you create an instance using the REST API, the request supports a URL parameter, ?managed=[true/false]. This is used as follows:

Include ?managed=true to indicate that you are creating a managed instance.

Include ?managed=false to indicate that you are creating a nonmanaged instance.

When creating and deploying an instance resource, you can specify the following properties.

owner - The name of the owning customer. (Supported actions: Create)

cluster_id - The optional name of a cluster resource to which the instance belongs. If you specify an entry for this property, it must refer to a cluster resource. You must also set the config_options property to include admin_ui=yes and start_flipper=yes. (Supported actions: Create)

stm_version - The name of the Traffic Manager version resource for the instance. If you modify this property, the Services Controller upgrades the Traffic Manager instance to the new version. You can change this property only if the instance status is Idle. (Supported actions: Create/Update)

host_name - The name of the Traffic Manager instance host on which you deploy the instance. This name must match the FQDN of the instance host that was created. (Supported actions: Create)

container_name - The name of the LXC container for the Traffic Manager instance. If this is an empty string or set to none, the Traffic Manager is not run inside a container. If you specify a name, you must create an appropriate container configuration file of the form <containername>.conf in the install_root directory of the container host. For example, a container_name of stm1.example.com requires a container configuration file named stm1.example.com.conf. The container configuration file must set lxc.utsname to the container name for the licensing server to operate correctly. (Supported actions: Create)

container_configuration - A JSON-formatted string used to set up the default network gateway inside the LXC container. Use this format: "{\"gateway\":\"<ip_address>\"}". For LXC deployments, this is the IP address raised on the bridge interface to which this container is connected. (Supported actions: Create/Update)

config_options - A string containing configuration options for optional features. (Supported actions: Create/Update) If specified, this is a space-delimited combination of one or more of the following:

default - This option has no effect and is used to avoid an empty string. If this option is used, no other options can be specified in config_options.

admin_ui=yes/no - Start or bypass the Administration UI for the Traffic Manager instance (default: yes). You must set this to yes if you use the cluster_id property.

maxfds=<number> - The maximum number of file descriptors (default: 4096). This setting must be consistent between all instances in a cluster. (See Notes, below.)

webcache!size=<number> - The size of RAM for the web cache (default: 0). This value can be specified in %, MB, or GB by appending the corresponding unit symbol to the end of the value when not specifying a value in bytes. For example, 100%, 256MB, 1GB, and so on. This setting must be consistent between all instances in a cluster. (See Notes, below.)

java!enabled=yes/no - Start or bypass the Java server (default: no). This setting must be consistent between all instances in a cluster. (See Notes, below.)

statd!rsync_enabled=yes/no - Synchronize historical activity data within a cluster. If this data is unwanted, disable this setting to save CPU and bandwidth (default: yes). This setting must be consistent between all instances in a cluster. (See Notes, below.)

snmp!community - The SNMP v2 community setting for this instance resource. For metering of nonmanaged instances, this must be set to the same value as the equivalent snmp!community property on the instance itself (default: "public").

num_children=<number> - The number of child processes (default: 1).

start_flipper=yes/no - Start or bypass the flipper process (default: yes). You must set this to yes if you use the cluster_id property.

port_offset=<number> - Offset control port values by a fixed amount. When network isolation is not provided by a container configuration, each Traffic Manager instance on a particular host should be allocated a unique port offset to avoid port clashes. The offset range is from 0 to 499.

config_options (continued):

afm_deciders=<number> - The number of application firewall decider processes. If 0 is specified, the application firewall is not installed (default: 0). Note: You cannot update this option after the instance has been deployed.

flipper!frontend_check_addrs=<host> - Check instance front-end connectivity with a specific host. When the Services Controller deploys an instance, it checks connectivity to the default gateway of the instance host by sending ICMP requests to it. If the default gateway is protected by a firewall or blocks ICMP requests, instance deployment can fail. To disable deployment connectivity checks, use flipper!frontend_check_addrs="". This setting must be consistent between all instances in a cluster. (See Notes, below.)

flipper!monitor_interval=<number> - The interval, in milliseconds, between flipper monitoring actions (default: 500 ms). For higher density Traffic Manager instance deployments, use a larger value, such as 2000 ms. This setting must be consistent between all instances in a cluster. (See Notes, below.)

Note: Any change to the config_options settings causes a restart of the instance.

Note: Some configuration options, if specified here, must be consistent between all Traffic Manager instances in a cluster: maxfds, webcache!size, java!enabled, statd!rsync_enabled, flipper!monitor_interval, and flipper!frontend_check_addrs. If you set or update the value in one instance resource, the Services Controller replicates this update automatically to the other instance resources. The instance restarts whenever these are changed, but other instances in the cluster must be restarted manually.

Note: Whenever the config_options property is set, all currently modified options must be specified again in the REST call. Any options that are not specified lose their current value and are reset to their default value.

cpu_usage - A string that describes which CPUs are used for this Traffic Manager instance. If used, you must either specify a value in a form that is used by the taskset command (for example, "0,3,5-7"), or set this property to an empty string. An empty string indicates that the instance is not limited in its use of CPU cores (unless it is deployed within an LXC container); this is the default setting for the property if you do not specify a string. Note: Any change to the cpu_usage settings causes a restart of the instance. (Supported actions: Create/Update)

stm_feature_pack - The name of the feature_pack resource associated with the Traffic Manager instance. (Supported actions: Create/Update)

bandwidth - How much bandwidth the Traffic Manager instance is allowed (in Mbps). (Supported actions: Create/Update)

license_name - The name of the license resource you want to use for this instance. When you modify this property, the Services Controller updates the license on the Traffic Manager instance. (Supported actions: Create/Update)

management_address - The hostname used to address the Traffic Manager instance. The hostname must be an FQDN. You can modify this property only for a nonmanaged Traffic Manager instance (or in a database-only request). Note: If you update this property, the host component of the rest_address, ui_address, and snmp_address properties is also updated. These values must be FQDNs. (Supported actions: Create)

status - The status of this resource:

New - An instance that has been created by the REST API but has not yet been successfully deployed.

Idle - An instance that has been deployed but is not currently running. Note that this is the only status from which you can delete the instance.

Active - An instance that is currently running.

Deleted - An instance that has been deleted.

Starting - An instance that is waiting to start.

Failed to start - An instance that has failed to start.

Stopping - An instance that is waiting to stop.

Failed to stop - An instance that has failed to stop.

Deleting - An instance that is waiting to be deleted.

Failed to delete - An instance whose deletion failed.

Failed to deploy - An instance that has failed to deploy.

See Understanding the State Transition Model for Instances on page 145. (Supported actions: Update)

creation_date - A string representation of the date and time of creation of this Traffic Manager instance resource. (Read Only)

admin_username - The primary admin username for the Traffic Manager instance. You can modify this property only for a nonmanaged Traffic Manager instance (or in a database-only request). (Supported actions: Create/Update)

admin_password - The password (unencrypted) of the specified admin user. (Supported actions: Create/Update)

service_username - The primary service username for the Traffic Manager instance. This property cannot be modified and is not applicable to nonmanaged instances. (Read Only)

service_password - The password (unencrypted) of the specified service user. This property is not applicable to nonmanaged instances. (Supported actions: Create/Update)

rest_address - The address (host or IP address plus port number) of the Traffic Manager instance configuration REST API. The rest_address property must match the instance hostname. You can modify this property only for a nonmanaged Traffic Manager instance (or in a database-only request). Note: If you use a hostname instead of an IP address, you must use an FQDN. (Supported actions: Create/Update)

ui_address - The address (host or IP address plus port number) of the Traffic Manager instance Administration UI. This is blank if the instance does not have an active Administration UI. You can modify this property only for a nonmanaged Traffic Manager instance (or in a database-only request). Note: If you use a hostname instead of an IP address, you must use an FQDN. (Supported actions: Create/Update)

snmp_address - The address (host or IP address plus port number) of the Traffic Manager instance SNMP responder. This enables you to set the SNMP address used for metering. You can modify this property only for a nonmanaged Traffic Manager instance (or in a database-only request). Note: If you use a hostname instead of an IP address, you must use an FQDN. (Supported actions: Create/Update)

licensed_date - A string representation of the date and time of the latest successful license validation (if any). This is blank if the Traffic Manager instance has never had a license validated. (Read Only)

metrics_date - A string representation of the date and time of the latest successful metrics collection (if any). This is blank if the instance has never had metrics collected. (Read Only)

metrics_throughput - The latest collected metrics figure, in bytes, for throughput. This is blank if the Traffic Manager instance has never had metrics collected. (Read Only)

metrics_peak_throughput - The latest collected metrics figure, in bytes per second, for peak throughput (for example, the highest figure in the previous hour). This is blank if the Traffic Manager instance has never had metrics collected. (Read Only)

metrics_peak_rps - The latest collected metrics figure for peak RPS (for example, the highest figure in the previous hour). This is blank if the Traffic Manager instance has never had metrics collected. (Read Only)

metrics_peak_ssl_tps - The latest collected metrics figure for peak SSL TPS (for example, the highest figure in the previous hour). This is blank if the Traffic Manager instance has never had metrics collected. (Read Only)

pending_action - A structure that contains the name and href for the pending or blocked action. This is applicable only if a Traffic Manager instance is associated with a failed or blocked action (caused by deployment, start, stop, and so on). (Read Only)
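To illustrate how these properties fit together, the following is a hypothetical request body for creating and deploying a managed instance. Every value shown (owner, version, host, feature pack, license, and credentials) is a placeholder and must refer to resources that already exist in your inventory; this is a sketch, not a definitive configuration:

{
   "owner": "ExampleCustomer",
   "stm_version": "stm-9.9",
   "host_name": "host1.example.com",
   "container_name": "",
   "config_options": "admin_ui=yes start_flipper=yes port_offset=0",
   "stm_feature_pack": "STM-400_full",
   "bandwidth": "200",
   "license_name": "fla.lic",
   "admin_username": "admin",
   "admin_password": "examplepassword"
}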

The instance REST API supports a single query parameter, status_check=true or status_check=false, on GET requests. The default is false; however, status_check=true causes an activity check for the instance. The Services Controller carries out this check synchronously with the request handling, so the duration of the request can vary with load.

If status_check=true is set on a GET request for an instance resource, an extra property is included in the response:

status_check - A string that is empty (if there are no problems) or contains a description of any problems found.

The Services Controller performs this check automatically after starting an instance resource. Any fault found is recorded as a blocking reason for the action.

If you make changes to the stm_feature_pack or bandwidth properties, the Services Controller applies corresponding changes to the licensed behavior of the deployed Traffic Manager instance.

Use the status property to track the state of a Traffic Manager instance. If you modify this property, the Services Controller starts, stops, or deletes the Traffic Manager instance accordingly.

Note: The Services Controller processes state transitions separately from the REST request, using a separate thread of execution. You can make a PUT request to change the status of a resource. The Services Controller immediately returns an intermediate status, subsequently applies the change, and finally updates the status property. You can poll for changes. The change has succeeded when you see the status property change to the expected value:

If you deploy a Traffic Manager instance, this results in an immediate status of New. When the Services Controller has successfully deployed the instance, status changes to Idle.

You can start an instance when it has a status of Idle (or Failed to start or Failed to stop) by updating status to Active. The Services Controller responds with an immediate status of Starting. When the instance has been successfully started, status is updated to Active.

You can stop an instance when it has a status of Active (or Failed to start or Failed to stop) by updating status to Idle. The Services Controller responds with an immediate status of Stopping. When the instance has been successfully stopped, its status is updated to Idle.

You can uninstall an instance when it has a status of Idle by updating status to Deleted. The Services Controller responds with an immediate status of Deleting. When the instance has been successfully uninstalled, its status is updated to Deleted. An instance in this state cannot be changed further.

When the Traffic Manager instance is in one of the failed states (Failed to deploy, Failed to start, Failed to stop, or Failed to delete), you cannot return it to an Idle state using the defined state transitions. You must therefore change the status of the Traffic Manager instance to Idle manually. To do this, issue a PUT request to the REST API with a URL parameter of deploy=false and the status property set to Idle. The deploy setting ensures that this is a database-only change. (See Database-Only Updates on page 16 for details.) Once this is complete, you can then delete the Traffic Manager instance.

You can upgrade an instance by changing the stm_version property, but only when the instance has a status of Idle. The Services Controller responds with an immediate status of Upgrading. When the instance has been successfully upgraded, its status returns to Idle.
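The following sketches show what such status changes might look like in practice. The instance name tm1, the Content-Type header, and the /api/tmcm/1.4/instance path are assumptions based on the conventions used elsewhere in this chapter. The first request starts an Idle instance; the second shows the database-only recovery described above for an instance stuck in a failed state:

PUT /api/tmcm/1.4/instance/tm1 HTTP/1.1
Content-Type: application/json

{ "status": "Active" }

PUT /api/tmcm/1.4/instance/tm1?deploy=false HTTP/1.1
Content-Type: application/json

{ "status": "Idle" }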
The following procedure describes how to upgrade an instance in a cluster using the stm_version property.

To upgrade instances in a Traffic Manager cluster

1. Start with all instances in the cluster in Active status.

2. For one instance in the cluster:

Change the status of the instance to Idle.

Upgrade the instance by changing the stm_version property.

Change the instance status back to Active.

3. If the upgrade is successful, repeat Step 2 for every other instance in the cluster.

Upgrading a Traffic Manager cluster where all instances are Idle is not possible. At least one instance must be made Active before the other Idle instances are upgraded. In addition, one of the upgraded instances must be made Active so that the remaining instance can be made Idle and upgraded. This is required because cluster replication must occur as part of a successful upgrade for a cluster member.

Note: Do not make any configuration changes to the Traffic Manager instances during this cluster upgrade procedure.

You can deploy a Traffic Manager instance in an HA scenario by creating a cluster resource and then specifying the name of this resource in the cluster_id property of each Traffic Manager instance you create. However, you must ensure the following:

Clustered instances must all share the same license and version number.

Clustered instances must have the admin_ui and start_flipper options set to yes in config_options.

The Traffic Manager instance REST API supports a query parameter on PUT requests, deploy=true or deploy=false. The default is true; however, deploy=false causes the Services Controller to apply changes to the inventory database only; no deployment changes are made. This supports testing and database reconciliation. If status is set with deploy=false, that status is applied directly; no actions are carried out and no intermediate status is set. If a new instance resource is created with deploy=false, the status is set to Idle on creation.

The Traffic Manager instance REST API also supports a query parameter on the PUT or POST request that creates an instance: managed=true or managed=false. The default is true. Creating an instance with managed=false (a nonmanaged instance) means that the instance is never managed by the Services Controller in terms of deployment, starting, stopping, upgrading, reconfiguring, or deleting. The instance can make normal license requests and is metered. You can change its status, but the change is stored only, and no attempt is made to alter the actual instance. This option is for instances that you deploy manually, but for which you want to use the licensing and metering abilities of the Services Controller.

After you create a nonmanaged instance, any PUT or POST request is automatically considered to have the deploy=false parameter set. This means that you can change the database representation of the instance (and this can affect the features that are enabled through licensing and whether or not the instance is considered active for metering).
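As a sketch of the managed=false case, the following hypothetical request registers an externally deployed Traffic Manager for licensing and metering only. The resource name tm-external, the addresses, the feature pack name, and the /api/tmcm/1.4/instance path are placeholders; the snmp_address is included because it is required for metering of nonmanaged instances, as explained below:

PUT /api/tmcm/1.4/instance/tm-external?managed=false HTTP/1.1
Content-Type: application/json

{
   "owner": "ExampleCustomer",
   "stm_feature_pack": "STM-400_full",
   "bandwidth": "200",
   "management_address": "tm-external.example.com",
   "snmp_address": "tm-external.example.com:161",
   "admin_username": "admin",
   "admin_password": "examplepassword"
}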

Because a nonmanaged instance is not deployed by the Services Controller, you must set the snmp_address property (which cannot be set for a managed instance) to enable metering. You can omit the host_name, license_name, stm_version, and cpu_usage properties, because these are not required for a nonmanaged instance. If you do specify any of these properties, they must contain valid values; however, they are stored as empty strings in the inventory database.

Note: You cannot update these properties with empty strings for an existing instance.

Any changes to the properties of a nonmanaged instance affect only licensing or metering. For example, changes to the version or any configuration options have no effect.

Note: You can modify admin_username, admin_password, management_address, rest_address, ui_address, and snmp_address for a nonmanaged Traffic Manager instance only (or in a database-only request). Updating management_address for a nonmanaged Traffic Manager instance automatically updates the host or IP component of rest_address, ui_address, and snmp_address. These values must be FQDNs.

If a deployment, start, stop, or delete action (generated by a PUT request to create a new record or change the status property) fails, it can be traced through the pending_action property. You can analyze problems based on the action resource (see Proxying a Traffic Manager REST API on page 145). After you fix any underlying problems, you can retry the action in one of two ways:

You can change the status of the original action to Waiting. This causes the action to be requeued and retried.

You can change the status of the Traffic Manager instance to a new value; the system deletes the old action and queues an entirely new action based on the status you set.
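A sketch of the first retry option: read the pending_action property from the instance resource, then update the referenced action resource. The instance name, URI prefix, and the assumption that the action resource accepts a status update in a JSON body are all illustrative; use whatever href the pending_action property actually returns:

GET /api/tmcm/1.4/instance/tm1 HTTP/1.1

PUT <href from pending_action> HTTP/1.1
Content-Type: application/json

{ "status": "Waiting" }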

Understanding the State Transition Model for Instances

The following diagram summarizes the stable states for an instance resource, and the transient states that connect them.

Figure 7-1. State Transition Model for Instances

Proxying a Traffic Manager REST API

A running Traffic Manager instance maintains its own REST API for configuration purposes. You can access this directly using the rest_address property along with the admin_username and admin_password properties. However, for convenience, the Services Controller provides proxy access to the instance REST API through the URI of the instance resource.

Note: You can access the instance REST API only if the instance itself is active. Proxy requests are otherwise rejected with an informative error message.

You can make proxy requests by appending the following to the URI of the Traffic Manager instance resource:

/tm/2.0/config/active
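For example, a proxied request to list the top-level configuration objects of a running instance named stm1 might look like the following sketch. The instance resource URI prefix and hostname are assumptions based on the examples earlier in this chapter, and the proxied path omits the Traffic Manager REST API's own initial /api component, as described in the note that follows:

GET /api/tmcm/1.4/instance/stm1/tm/2.0/config/active HTTP/1.1
Host: ssc.example.com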

Note: You must not include the initial /api component when accessing a proxied instance REST API. This component is necessary only if you access the API directly.

For example, you can access the REST API of a running Traffic Manager instance called stm1 by appending /tm/2.0/config/active to the URI of the stm1 instance resource. The Services Controller interprets such longer URIs as proxy requests for the named instance; this example lists the top-level configuration objects for the Traffic Manager instance stm1. The exact URLs available depend on the URLs allowed by the Traffic Manager configuration REST API and are not documented here.

In this situation, the Traffic Manager configuration REST API is aware that it is serving a proxied request and automatically adjusts any href properties so that you can use them directly through the proxy without editing.

The Services Controller does not interpret or amend the request or response bodies for proxy requests; however, it does provide appropriate authentication information based on the admin_username and admin_password properties. If the Traffic Manager REST API returns an error, it is returned unchanged to the client. The exception is an authentication error. Authentication errors are reinterpreted as 500-series errors (with an informative error message), because they imply that the Services Controller has applied the wrong credentials.

Proxy requests allow additional content types or accept types, because the Services Controller is unable to determine for itself which types might be acceptable to the Traffic Manager REST API.

Note: The Services Controller does not attempt to stream large requests or responses. Therefore, you should make such requests or responses directly to the appropriate Traffic Manager REST API.

license Resource

A license resource describes an FLA license file that you can use to deploy a Traffic Manager instance. Creating, altering, or deleting the license resource does not affect the existence of the actual license file. You are responsible for ensuring that the license file is available to all Services Controller servers.

Note: If you create license resources solely for use with nonmanaged instances, the Services Controller does not require a corresponding license file to exist. For more information about nonmanaged instances, see instance Resource on page 137.

When creating a license resource, you can specify the following properties.

info - An optional descriptive string. (Supported actions: Create/Update)

status - The status of this resource: Active or Inactive. (Supported actions: Update)

There are no required properties in this resource, but the request body must be a valid JSON object. The license filename is the resource name.

You can change the info property by updating an existing license resource. You can mark the resource as inactive by changing the status property to Inactive.
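Because a license resource has no required properties, a creation request body can be as small as an empty JSON object, or it can carry an optional description, as in this sketch. The descriptive text is hypothetical, and the resource name (which must be the FLA license filename) goes in the request URI rather than in the body:

{ "info": "FLA license for production instances" }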

manager Resource

A manager resource represents an instance of the Services Controller and contains properties that apply specifically to that instance. You can use the resource properties, known as mode settings, to provide HA for a Services Controller deployment. The name of a manager resource is its hostname.

You do not need to create a manager resource to use an installation of the Services Controller. If a manager resource does not already exist, it is created with reasonable default mode settings when the Services Controller starts.

Note: You can issue a DELETE request via the REST API to a manager resource that monitoring has marked as failed. This enables you to de-cluster Services Controllers.

A manager resource contains the following properties.

management - Determines whether or not the named Services Controller is accessible through the REST API. This is set to enabled or disabled. To recover from the scenario where every Services Controller in your deployment has its management property set to disabled, or has failed, Riverbed provides a script, INSTALLROOT/bin/toggle_management, that is used to read and update the management property for the Services Controller on which it is run. (Supported actions: Update)

metering - Determines whether the named Services Controller meters all Traffic Manager instances or none. This is set to none or all. (Supported actions: Update)

licensing - Determines whether and how the named Services Controller responds to license requests from Traffic Manager instances. You can set this to enabled, disabled, or enabledwithalerts. In enabledwithalerts mode, Services Controller alert messages are sent to all configured alert addresses when the rate at which license requests are being received exceeds a specified threshold setting. (Supported actions: Update)

monitoring - Determines the monitoring mode for this Services Controller. This mode is one of the following: (Supported actions: Update)

all - This Services Controller monitors all other Services Controllers and all Traffic Manager instances, regardless of the mode setting on any other Services Controller in the cluster.

shared - This Services Controller monitors all other Services Controllers, but, together with other Services Controllers in the same mode, it monitors only a proportion of all active Traffic Manager instances.

none - This Services Controller does not perform any monitoring.
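As a sketch, an update that switches a Services Controller named ssc1 to shared monitoring and disables metering might use a request like the following. The resource name and the /api/tmcm/1.4/manager path are assumptions that follow the URI pattern used elsewhere in this chapter:

PUT /api/tmcm/1.4/manager/ssc1 HTTP/1.1
Content-Type: application/json

{
   "monitoring": "shared",
   "metering": "none"
}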

monitoring Resource

The monitoring resource contains stored monitoring state data. This is a read-only resource and accepts only the GET request method.

The monitoring resource contains the following properties.

manager - Contains monitoring state data for all Services Controllers. A request for this URI returns an array of JSON objects containing properties (see below) that describe the monitoring state for all Services Controllers in your cluster. Append the name of a specific Services Controller to the URI (for example, /monitoring/manager/ssc1) to retrieve its properties alone. (Read Only)

instance - Contains monitoring state data for all Traffic Manager instances. A request for this URI returns an array of JSON objects containing properties (see below) that describe the monitoring state for all Traffic Manager instances in your cluster. Append the name of a specific instance to the URI (for example, /monitoring/instance/tm1) to retrieve its properties alone. (Read Only)

failures - Contains monitoring state data for all failed Services Controllers and Traffic Manager instances. This property returns a JSON structure with two elements, managers and instances. Each of these is an array of objects, one for each currently failed Services Controller or Traffic Manager instance. (Read Only)

Each object returned from a request contains properties that describe the monitoring state for Services Controllers or Traffic Manager instances in your cluster. These properties are listed below.

name - The name of the Services Controller or Traffic Manager instance. (Read Only)

monitor_date - The time stamp of the latest monitoring action. (Read Only)

monitor_health - A string representation of the health of this Services Controller or Traffic Manager instance. (Read Only)

gone_down_date - The time stamp of when the Services Controller or Traffic Manager instance was first detected to have failed. This property is present only for items in the failures array. (Read Only)

notified_down_date - The time stamp of when the Services Controller or Traffic Manager instance was considered to have failed. This property is present only for items in the failures array. (Read Only)

cpu_idle_percent - The time, in percent (%), that CPUs are idle on the Traffic Manager instance's host. This property is present only for items in the instance array. It is available only for Traffic Manager instances with a status of Active and running a software version that supports performance monitoring. (Read Only)

mem_free_percent - The percentage (%) of free memory available on the Traffic Manager instance's host. This property is present only for items in the instance array. It is available only for Traffic Manager instances with a status of Active and running a software version that supports performance monitoring. (Read Only)

current_conn - The number of current connections for the Traffic Manager instance. This property is present only for items in the instance array. It is available only for Traffic Manager instances with a status of Active and running a software version that supports performance monitoring. (Read Only)

total_bytes_in - The total number of connections the Traffic Manager instance is currently handling. This property is present only for items in the instance array. It is available only for Traffic Manager instances with a status of Active and running a software version that supports performance monitoring. (Read Only)

total_bytes_out - The cumulative total of bytes received by the Traffic Manager instance from its clients. This property is present only for items in the instance array. It is available only for Traffic Manager instances with a status of Active and running a software version that supports performance monitoring. (Read Only)

throughput_in - The average throughput, in bytes per second, received by the Traffic Manager instance from its clients, calculated between the previous two monitoring events. This property is present only for items in the instance array. It is available only for Traffic Manager instances with a status of Active and running a software version that supports performance monitoring. (Read Only)

throughput_out - The average throughput, in bytes per second, sent by the Traffic Manager instance to its clients, calculated between the previous two monitoring events. This property is present only for items in the instance array. It is available only for Traffic Manager instances with a status of Active and running a software version that supports performance monitoring. (Read Only)

service Resource

A service resource defines a load-balancing service. Currently, only Load Balancing as a Service (LBaaS) is supported, in both elastic and non-elastic configurations. See LBaaS REST API Reference on page 73 and ELBaaS REST API Reference on page 107 for details.

sku Resource

A sku resource contains a SKU that defines a set of licensable features. You do not apply a SKU directly to a Traffic Manager instance, but you can use it as the basis for a feature pack.

The sku resources are preinstalled in the Services Controller software, based on sets of features described by existing Traffic Manager template licenses. The sku resource is read-only; PUT and POST HTTP requests to create or modify sku resources are not possible.

A sku resource contains the following properties.

info - An optional descriptive string. (Read Only)

features - The features enabled for this SKU. (Read Only)

status - The status of this resource: Active or Inactive. (Read Only)

The features property is a string containing a space-separated list of licensable feature names that the SKU enables. This list is supplied by Riverbed as part of a contractual definition. The possible feature names are listed below.

afm - Enable Stingray Application Firewall.
anlyt - Enable Realtime Analytics.
apt - Enable Advanced Web Accelerator.
auto - Enable Autoscaling.
bwm - Enable Bandwidth Management classes.
cache - Enable Web Caching.
comp - Enable Compression.
cr - Do not limit the user to cut-down RuleBuilder content routing for TrafficScript.
evnts - Enable Events and Actions.
glb - Enable Global Load Balancing.
java - Enable Java.
kcd - Enable Kerberos Constrained Delegation support.
lbaas - Enable Load Balancing as a Service (SSC only).
loca - Enable Location support.
moni - Enable Active Monitors.
rate - Enable Rate Shaping classes.
rb - Do not limit the user to RuleBuilder for TrafficScript.
rhi - Enable Route Health Injection support.
safpx - Enable Stingray Application Firewall Proxy.
slm - Enable Service Level Monitoring.
ssl - Enable Secure Socket Layer (SSL).

svcprt - Enable Service Protection classes.
ts - Enable TrafficScript.
xml - Enable XML functions in TrafficScript.

The list below provides licensable features for allowed session persistence algorithms.

spasp - ASP session persistence.
spip - IP-based session persistence.
spj2 - J2EE session persistence.
spkip - Application cookie session persistence.
spnam - Named node session persistence.
spsar - Transparent session affinity.
spssl - SSL session ID session persistence.
spuni - Universal session persistence.
spxze - X-Zeus-backend cookie session persistence.

The list below provides licensable features for allowed load balancing algorithms.

lbrnd - Random.
lbrob - Round robin.
lbwrob - Weighted round robin.
lbcon - Least connection based.
lbwcon - Weighted least connection based.
lbrsp - Fastest response times.
lbcel - Array of cells.
lbfail - Balance failure class (used only for testing and debugging purposes).
lbone - Always choose first node in a pool (used only for testing and debugging purposes).

user Resource

A user resource describes a Services Controller administrative user. There is no system of privileges; all active Services Controller users have full read and write access to the inventory database.

When creating a user, you must specify the following properties (the username is the resource name).

password - The user password string. This is supplied in clear text, although it is stored as a salted hash value. (Supported actions: Create/Update)

active - The user status: true or false. A nonactive user cannot be used for requests. (Supported actions: Update)

Note: The password property is mandatory when creating a user, and it must not be empty. There are no other quality constraints.

You can change the password by updating an existing user resource (the PUT method). You can mark a user as inactive by updating the active property to false. This is not consistent with how other types of resources are marked as inactive.

When reading a user resource (the GET method), only the active property is returned (the password is never returned).

Note: For internal implementation reasons, user resource names are effectively case-insensitive, although other resource names are case-sensitive. Therefore, resources named admin and Admin refer to the same underlying user resource.

version Resource

A version resource describes a specific Traffic Manager installation file. A version resource is specific to a Traffic Manager version and architecture, and you can have multiple version resources for the same Traffic Manager version number (corresponding to different architectures).

Creating, altering, or deleting the version resource does not affect the existence of the actual version file. You are responsible for ensuring that the Traffic Manager installation file is available to the Services Controller servers.

Note: If you create version resources solely for use with nonmanaged instances, the Services Controller does not use the version_filename property. In this case, you do not need to make the Traffic Manager installation file available. For more information about nonmanaged instances, see instance Resource on page 137.

When creating a version resource, you can specify the following properties.

info - An optional descriptive string. (Supported actions: Create/Update)

version_filename - The Traffic Manager installation file (not the complete path). (Supported actions: Create/Update)

status - The status of this resource: Active or Inactive. (Supported actions: Update)

md5sum - The md5sum of the version tarball. This is an optional field that is used to validate Traffic Manager (STM) tarballs during deployment. If you do not provide a value, the Services Controller calculates the md5sum itself during the first deployment of that version and continues to use that value for later validations. (Supported actions: Create/Update)

You can change the version_filename and info properties by updating an existing version resource. You can mark the resource as inactive by changing the status property to Inactive.

Note: When deploying a Traffic Manager instance, if a Traffic Manager installation file with the same filename as the version_filename property is already present on the target instance host, it is reused to save both time and bandwidth. The Services Controller does not attempt to verify the metadata or content of the installation file. As such, if you replace an installation file with different content under the same filename, you should manually remove any cached installation files with that name from your instance hosts.

Using the REST API to Check Status

You can check the status of various entities through the REST API. Some of these are directly associated with inventory items, while others are more universal.

You can check the state of the files associated with the current Services Controller deployment by performing a GET request on the following URI:

/api/tmcm/1.4/status/files

Note: This URI supports only GET requests.

This URI produces a response with the following properties:

licenses - An array of objects, one for each active license, each with the following properties:

name - The name of the license

href - The URI of the license resource

filename - The absolute path of the file associated with the license

present - true if the file is found or false if not

versions - An array of objects, one for each active version, each with the following properties:

name - The name of the version

href - The URI of the version resource

filename - The absolute path of the file associated with the version

present - true if the file is found or false if not

Note: This operation applies only to the specific Services Controller on which the REST request is run.

Understanding REST Request Errors

If the Services Controller REST service is unable to handle or interpret a request, it returns an HTTP response with an appropriate HTTP error code. The response body contains a JSON data structure that describes the error:

{
   "error_id": <error identifier>,
   "error_text": <error description>,
   "error_info": {<error-specific data structure, optional>}
}

For certain error conditions, the error_info property can contain a data structure to further describe the error.

CHAPTER 8 Installing the Services Controller Virtual Appliance

This chapter describes the process for installing the Stingray Services Controller Virtual Appliance (Services Controller VA). Upgrading and downgrading procedures for both the Services Controller VA and Instance Host VAs are also provided.

This chapter includes the following sections:

Overview of the Services Controller VA on page 156

Prerequisites on page 156

Obtaining the Services Controller VA Software on page 159

Installing, Configuring, and Administering the Services Controller VA on page 160

Creating the VM in vSphere on page 161

Running the Services Controller VA Setup Wizard on page 162

Upgrading the Services Controller VA on page 175

Downgrading the Services Controller VA on page 176

Upgrading Instance Host VAs on page 176

Note: This chapter assumes that you are familiar with installing, configuring, and managing virtual appliances or machines (VAs or VMs) using VMware vSphere Hypervisor (vSphere).

Overview of the Services Controller VA

The Services Controller VA enables you to install, configure, and manage the Services Controller as a virtual appliance. Rather than requiring you to manage a Linux installation, the Services Controller VA provides a graphical user interface (GUI) and a command-line interface (CLI) that enable you to create and manage Services Controller instances. The Services Controller VA image also includes the MySQL database, simplifying the installation process.

After you create a VM in VMware vSphere, you use the Services Controller VA GUI to run an initial Setup wizard that populates the Services Controller VA with the necessary information for administering and managing the Services Controller. After you have run the Setup wizard, you can use either the GUI or the CLI to manage:

certificates and private keys, licenses, and software images for the Services Controller and Traffic Manager.

instances, instance hosts, users, and clusters.

user credentials, feature packs, metering, and logging.

health and monitoring reports, system logs, and system dumps.

Prerequisites

Before you begin installing the Services Controller VA, you must make sure that you have the correct software, hardware, and configuration information.

Note: All required files must be in accessible locations in your infrastructure during the installation process. This is typically an accessible server, but can also be your local machine.

To install the Services Controller VA, you need the following software and hardware:

OVAs - Two open virtual appliance (OVA) files are available:

the Services Controller OVA. This is used to install the Services Controller VA. See Creating the VM in vSphere on page 161.

the Instance Host OVA. This is used inside the Services Controller VA after installation, and is not required for initial installation. See Creating and Managing Instance Hosts on page 205.

You must place the OVA files in an accessible location in your infrastructure. You can obtain the OVAs from the Riverbed Support site.

Services Controller Licenses - You must place the Services Controller licenses (either CSP or Enterprise) and the Bandwidth Pack license key in an accessible location in your infrastructure. For details about retrieving your Services Controller licenses, see To retrieve Stingray and Services Controller product licenses on page 18.

Traffic Manager Image - The Traffic Manager image (a tarball) is required to create Traffic Manager instances in the Services Controller. You must place the image file in a locally accessible directory on each Services Controller server in your infrastructure. You can download the Traffic Manager image from the Riverbed Support site.

Traffic Manager FLA License - The Traffic Manager FLA license is required to create instances in the Services Controller VA. You must place the external file in a locally accessible directory on each Services Controller server in your infrastructure. For details about retrieving your Services Controller licenses, see To retrieve Stingray and Services Controller product licenses on page 18.

SSL Certificate/Key - The Traffic Manager FLA license requires that you generate a self-signed SSL certificate and key. You must place these files in an accessible location in your infrastructure. (A minimal example of generating these files follows the Resource Requirements table below.)

VMware vSphere ESXi - The Services Controller VA runs on VMware vSphere ESXi. However, the ssc host provision command used to provision an instance host works only on ESXi 5.0 and 5.1, not on ESXi 5.5. Riverbed assumes that you are familiar with creating and managing VMs using vSphere. For detailed information about creating virtual machines using vSphere, see the VMware documentation.

MySQL Database - A MySQL database with a user account is required for the Services Controller VA. The MySQL database is automatically installed when you install the Services Controller VA.

SMTP Server - An SMTP server is required for email alerts. The Services Controller VA does not support SMTP connections that require authentication.

Log and File Store - Accessible directories are required for logs and file storage.

Note: If you have not received your Services Controller, enterprise bandwidth, and Traffic Manager FLA licenses, contact Riverbed Licensing ([email protected]) for assistance.

Resource Requirements

The Services Controller VA requires the following resources for its virtual machine. Instance host VMs can be deployed in different sizes (flavors).

VA Type                          CPU       Memory    Disk
SSC VA                           4 vCPU    8 GB      46 GB
Small Flavor Instance Host VA    2 vCPU    4 GB      70 GB
Large Flavor Instance Host VA    8 vCPU    16 GB     70 GB
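The following is a minimal sketch of one way to generate the self-signed SSL certificate and key listed in the prerequisites above, assuming OpenSSL is available on a Linux workstation. The file names, key size, validity period, and common name are placeholders only; substitute values appropriate to your deployment and your organization's certificate policy.

# Generate a 2048-bit private key and a self-signed certificate valid for one year.
# "ssc.example.com" is a placeholder; use the hostname you plan to assign to the
# Services Controller.
openssl req -x509 -newkey rsa:2048 -nodes \
    -keyout ssc.key -out ssc.crt -days 365 \
    -subj "/CN=ssc.example.com"

Copy the resulting key and certificate files to an accessible location in your infrastructure so that the Setup wizard can import them.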

Required Configuration Information

The Administration UI Setup wizard prompts you to provide information that is used to configure the Services Controller VA. Make sure that you have the following information before you start the Setup wizard.

Hostname - The hostname for the Services Controller.

Primary IP address - The IP address for the Services Controller.

DNS Server - The IP address for the primary name server.

IPv4 DHCP or IPv6 - Settings for IPv4 or IPv6 traffic.

SMTP server and port - The name of the SMTP server and the port. External DNS and external access for SMTP traffic are required for email notification of events and failures to function.

Email notification address - A valid email address to which notification of events and failures is to be sent. Note: You can also configure the "From" address of alert emails. This address can be set in INSTALLROOT/conf/email_config.txt, in the common section, as from_address. The symbol "$fqdn" will be replaced by the fully-qualified domain name of the SSC host. The other sections in this file should not normally be modified. For SSC installs on AWS, it is likely that you will need to change this setting to an address that is resolvable to the instance's public IP.

SSL certificate and private key - Imports the self-signed Secure Socket Layer (SSL) certificate and private key file, for example from a local file or from an http, ftp, or scp URL such as scp://username:password@host/path/filename. Ensure these files are located in an accessible location in your infrastructure.

Services Controller License and Bandwidth Pack License - Imports the Services Controller (CSP or Enterprise) license and the Enterprise Bandwidth Pack license. Make sure these files are located in an accessible location in your infrastructure.

Traffic Manager (STM) FLA license - Imports the Traffic Manager FLA license. Make sure this file is located in an accessible location in your infrastructure.

Administrator user and password - The administrator password for the Services Controller. This password is used to access the Services Controller CLI and UI. Riverbed strongly recommends that you change the default administrator password at this time. The password must be a minimum of six characters. The default administrator user is admin and the default password is password.

Traffic Manager Image - Imports the Traffic Manager image and creates a version resource. This resource is necessary to create an instance. Make sure the Traffic Manager image file is located in an accessible location in your infrastructure.

Instance Host OVA - Uploads the Instance Host OVA file and creates an instance host. Make sure this file is located in an accessible location in your infrastructure.

Note: You can also use the configuration wizard in the CLI to configure the Services Controller VA. For detailed information about connecting to the Services Controller VA, see the Stingray Services Controller Command-Line Interface Reference Manual. The first time you log in, the configuration wizard starts automatically. If you need to rerun the configuration wizard, use the configuration jump-start command.

Obtaining the Services Controller VA Software

The Services Controller VA is provided by Riverbed Support as software that contains the VMX and VMDK files necessary to create virtual resources. Two OVAs are required:

Services Controller OVA - Creates a Services Controller virtual appliance on ESXi.
Instance Host OVA - Creates a virtual instance host on ESXi.

You obtain both the Services Controller OVA and the Instance Host OVA from Riverbed Support at support.riverbed.com.

Installing, Configuring, and Administering the Services Controller VA

You perform the following steps to install, configure, and manage resources in the Services Controller VA.

1. Generate a self-signed SSL certificate and private key for the Services Controller VA. Place these resources in an accessible location in your infrastructure.

2. Obtain the required OVA files (that is, the Services Controller OVA and the Instance Host OVA) from Riverbed Support. Place these resources in an accessible location in your infrastructure. For details about obtaining your license keys, see To retrieve Stingray and Services Controller product licenses on page 18.

3. Obtain the Services Controller licenses and the Enterprise Bandwidth Pack license from your Riverbed account team. Place these resources in an accessible location in your infrastructure. For details about obtaining your license keys, see To retrieve Stingray and Services Controller product licenses on page 18.

4. Obtain the Traffic Manager image and the FLA license from your Riverbed account team. Place these resources in an accessible location in your infrastructure. For details about obtaining your license keys, see To retrieve Stingray and Services Controller product licenses on page 18.

5. Install the Services Controller OVA and the Instance Host OVA on vSphere and create a Services Controller VA. Make a note of the IP address to place in the browser window.

6. Power on the Services Controller VA in vSphere and log in to the Services Controller VA from any browser using the default username (admin) and password (password).

7. Run the Administration UI Setup wizard. Before you start, ensure that you have the prerequisite information. For details, see Required Configuration Information on page 158. The Setup wizard configures basic network settings, such as the hostname, base interface settings, and email notifications. It also imports required licenses and certificates, and it creates an instance host. For details, see Running the Services Controller VA Setup Wizard on page 162. You can also use the configuration wizard in the CLI to configure the Services Controller VA. For detailed information about connecting to the Services Controller VA, see the Stingray Services Controller Command-Line Interface Reference Manual.

8. Change the password for the admin user via the CLI. See Changing the Password for the Admin User on page 175.

9. Enable passwordless SSH communication between the Services Controller and all instance hosts. See Enabling Passwordless SSH Communication. (A generic sketch of key-based SSH setup follows this procedure.)

10. Create instances (with or without LXC containers). For details, see Configuring a Traffic Manager Instance with a Container on page 245 and Creating a Traffic Manager Instance Without a Container.

11. Monitor the system and create system logs. For details, see Viewing Logs and Generating System Dumps on page 234.
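The passwordless SSH step (step 9) is covered in detail in its own section; as a rough, generic sketch only (the host name is a placeholder, and your deployment may instead use the key-distribution procedure described later in this guide), key-based SSH from the Services Controller VA to an instance host can be established as follows:

# Generate a key pair on the Services Controller VA (no passphrase in this sketch).
ssh-keygen -t rsa -b 2048 -N "" -f ~/.ssh/id_rsa
# Copy the public key to the root account on each instance host.
ssh-copy-id root@instance-host-1.example.com
# Verify that the login no longer prompts for a password.
ssh root@instance-host-1.example.com true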

Creating the VM in vSphere

This section describes the procedure for installing the Services Controller VA OVA package on a VMware ESXi host using the vSphere client.

Note: This chapter assumes that you are familiar with installing, configuring, and managing VMs using VMware vSphere. The following instructions might vary. For detailed information about creating a VM in vSphere, see the VMware documentation.

To create a VM in vSphere

1. Obtain the Services Controller OVA package from the Riverbed Support site and download it to a local machine.

2. Open the vSphere client, type the host IP address or name, type your username and password, and click Login.

3. Choose File > Deploy OVF template.

4. Select Deploy from file, click Browse, and select the OVA file.

5. Click Open and Next.

6. Verify that the OVA file is the one you want to deploy and click Next.

7. Specify a name for the VM and click Next.

8. Select a host datastore in which to store the VM and its virtual disk files and click Next. Make sure that the host datastore you select has enough capacity for the OVA package to install. For example, the SSC VM needs four vCPUs, 8 GB of memory, and at least 46 GB of disk space.

9. On the Disk Format page, select Thick provisioned format and click Next. Thick provisioning preallocates all storage.

10. Select the destination network name, select a network from the drop-down list to map the source network to a destination network, and click Next.

11. Verify the deployment settings and click Finish.

12. When the deployment finishes, click Close. The new VM appears under the hostname or host IP address in the VM inventory.
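If you prefer a command-line deployment, VMware's ovftool utility can deploy the same OVA package. This is an optional alternative to the vSphere client procedure above, not part of the documented workflow; the VM name, datastore, network names, and ESXi host below are placeholders, and you should verify the options against the ovftool documentation for your version.

# Deploy the Services Controller OVA to an ESXi host with thick provisioning.
ovftool --acceptAllEulas \
    --name=ssc-va \
    --datastore=datastore1 \
    --diskMode=thick \
    --net:"VM Network"="VM Network" \
    ssc-controller.ova \
    vi://root@esxi-host.example.com/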

172 Installing the Services Controller Virtual Appliance Running the Services Controller VA Setup Wizard Running the Services Controller VA Setup Wizard After you have created the VM in vsphere, you complete the configuration process for the Services Controller VA using the Administration UI Setup wizard. The Setup wizard automatically starts the first time you log in to the Services Controller VA via a browser. Riverbed recommends you use the Administration UI Setup wizard to configure the Services Controller VA rather than the CLI configuration wizard. The Administration UI Setup wizard enables you to perform the initial configuration for required licenses and certificates, and creates and provisions an instance host. Note: You can restart the Setup wizard at any time by selecting Manage > System Settings: Setup Wizard. Note: You can also use the configuration wizard in the CLI to configure the Services Controller VA. For detailed information about connecting to the Services Controller VA, see the Stingray Services Controller Command-Line Interface Reference Manual. The first time you log in, the configuration wizard starts automatically. If you need to rerun the configuration wizard, use the configuration jump-start command. To run the Setup wizard 1. Right-click the VM you created in vsphere and select Power and Power On. 2. In the vsphere Client UI, click the Services Controller VA that you created. 3. On the right side, click Summary in the panel and view the IP Addresses attribute. This attribute is the IP address for your Services Controller VA Administration UI. You might need to wait for a moment to allow the VM to fully start after the VM is powered on. 4. To connect to Services Controller VA Administration UI for the first time, place the IP address for your Services Controller VA in a browser window. 5. Log in using the default value for the administrator user (admin) and the default password (password), and click Log In. Figure 8-1. Login Page 162 Stingray Services Controller User s Guide

173 Running the Services Controller VA Setup Wizard Installing the Services Controller Virtual Appliance 6. To start the Setup wizard, click Launch Setup Wizard. Figure 8-2. Launch Setup Wizard Page 7. To configure the hostname, DNS settings, and Web proxy services, click Host Name to display the Hostname page. Figure 8-3. Hostname Page 8. Complete the configuration as described in this table. Control Hostname Primary DNS Server Secondary DNS Server Tertiary DNS Server DNS Domain List Description Specify the hostname for the Services Controller. Specify the IP address for the primary name server. Optionally, specify the IP address for the secondary name server. Optionally, specify the IP address for the tertiary name server. Specify an ordered list of domain names. If you specify domains, the system automatically finds the appropriate domain for each of the hosts that you specify in the system. Stingray Services Controller User s Guide 163

174 Installing the Services Controller Virtual Appliance Running the Services Controller VA Setup Wizard 9. To configure base interfaces, click Interface Configuration to display the Interface Configuration page. Figure 8-4. Interface Configuration Page 10. Select the Enable Primary Interface check box and complete the configuration as described below. Note: Your browser will lose connection if you change the IP address of the primary interface. Close the browser and open the Services Controller VA using the new IP address after the configuration is applied. Control IPv4 DHCP IPv4 Static Apply Description Select this radio button to send the hostname with the DHCP request for registration with Dynamic DNS. Specify these settings: Dynamic DNS - Select this check box to automatically obtain the IP address from a DHCP server. A DHCP server must be available so that the system can request the IP address from it. Address - Specify the IP address. Subnet - Specify the subnet address. Select this radio button if you do not use a DHCP server to set the IPv4 address. Specify these settings: Address - Specify an IP address. Prefix - Specify a subnet mask. Click Apply to apply your changes to the running configuration. 11. Select the Enable Aux Interface check box if required and then complete the IPv4 configuration. 12. Configure the main IPv4 routing table. To do this, click the plus sign (+) above the Main IPv4 Routing Table to display a pop-up window. 164 Stingray Services Controller User s Guide

13. Complete the configuration as described in this table.

Destination IPv4 Address - Specify the destination IP address.
IPv4 Subnet Mask - Specify a subnet mask.
Gateway IPv4 Address - Specify the default gateway IPv4 address. The default gateway must be in the same network as the primary interface. You must set the default gateway for in-path configurations.
Interface - Specify the interface type: Auto, primary, or aux.
Apply - Click Apply to apply your changes to the running configuration.

14. To configure the email settings, DNS settings, and Web proxy services, click Email Settings to display the Email Settings page.

Figure 8-5. Email Settings Page

15. Complete the configuration as described in this table.

SMTP Server - Specify the SMTP server. You must have external DNS and external access for SMTP traffic for this feature to function. Note: Make sure you provide a valid SMTP server to ensure that the users you specify receive email notifications for events and failures.
SMTP Port - Specify the port number for the SMTP server. Typically you do not need to change the default port 25.
Email Notification - Specify an email address to receive the notification messages.

176 Installing the Services Controller Virtual Appliance Running the Services Controller VA Setup Wizard 16. To configure Services Controller Certificates, click SSC Certificate to display the SSC Certificates page. For detailed information about required Services Controller licenses, see Generating a Self-Signed SSL Server Certificate on page 22. Figure 8-6. SSC Certificates Page 17. Complete the configuration as described in this table. Control Import Existing Private Key and CA-Signed Public Certificate (One File in PEM format) Description Imports the key and certificate. Select this radio button if the existing private key and CA-signed certificate are located in one file. The page expands displaying Private Key and CA-Signed Public Certificate controls for browsing to the key and certificate files or a text box for copying and pasting the key and certificate. The private key is required regardless of whether you are adding or updating. Local File - Browse to the local certificate authority file. Text - Paste the certificate authority into the text box and click Import. To revert settings, click Revert. Decryption Password - Specify the decryption password, if necessary. 166 Stingray Services Controller User s Guide

177 Running the Services Controller VA Setup Wizard Installing the Services Controller Virtual Appliance Control Import Existing Private Key and CA-Signed Public Certificate (Two Files in PEM or DER formats) Description Imports the key and certificate. Select this radio button if the existing private key and CA-signed certificate are located in two files. The page expands displaying Private Key and CA-Signed Public Certificate controls for browsing to the key and certificate files or text boxes for copying and pasting the keys and certificates. Local File - Browse to the local certificate authority file. Certificate Text - Paste the certificate authority into the text box and click Import. To revert settings, click Revert. Details/PEM Select Details to display the certificate details; select PEM to display the certificate text. 18. To configure user credentials, click User Credentials to display the User Credentials page. Figure 8-7. User Credentials Page 19. Complete the configuration as described in this table. Control Admin Name Admin Password Create Description Specify the administrator name for the REST API. Specify the administrator password for the REST API. Click Create to add your changes to the running configuration. Stingray Services Controller User s Guide 167

178 Installing the Services Controller Virtual Appliance Running the Services Controller VA Setup Wizard 20. To configure the Services Controller Enterprise license, click SSC Enterprise License to display the SSC Enterprise Licensing page. For detailed information about Enterprise licenses, see Services Controller Software Licenses on page 18. Figure 8-8. SSC Enterprise License Page 21. Complete the configuration as described in this table. Control Stingray Services Controller License Text Description Copy and paste the Enterprise license text and click Import. 22. Click Import to use the pasted text as the license. 168 Stingray Services Controller User s Guide

179 Running the Services Controller VA Setup Wizard Installing the Services Controller Virtual Appliance 23. To configure the REST API port and to start and stop the Services Controller service, click SSC REST Port to display the SSC REST Port page. Figure 8-9. SSC REST Port Page 24. Complete the configuration as described in this table. Control REST Port Apply Start Stop Restart Description Specify the port for the REST API. Click Apply to apply your changes to the running configuration. Click Start to start the Services Controller service. Click Stop to stop the Services Controller service. Click Restart to restart the Services Controller service. Note: Ensure that your SSC service is running before continuing. Stingray Services Controller User s Guide 169

180 Installing the Services Controller Virtual Appliance Running the Services Controller VA Setup Wizard 25. To configure the Traffic Manager FLA license (that is, flexible), click SSC Flexible License to display the SSC Flexible License page. Figure SSC Flexible License Page 26. Complete the configuration as described in this table. Control FLA License Text Description Paste the Flexible license text and click Import. The FLA license text is displayed. 27. To add additional licenses, click Additional Licenses to display the Additional Licenses page. 28. To add a Services Controller license, click the plus sign (+) above the SSC Controller License table to display a pop-up window. 29. Copy and paste the Services Controller license key in the Controller License Key text box and click Add. 30. To add an Enterprise Bandwidth license, click the plus sign (+) above the Bandwidth table to display a pop-up window. 31. Copy and paste the Enterprise Bandwidth license key in the Bandwidth License Key text box and click Add. 32. To add an Add-On license, click the plus sign (+) above the Add-On License table to display a pop-up window. 33. Copy and paste the Add-On license text into the Add-On License Key text box and click Add. 170 Stingray Services Controller User s Guide

181 Running the Services Controller VA Setup Wizard Installing the Services Controller Virtual Appliance 34. To configure a Traffic Manager image and version resource, click STM Images to display the STM Images page. Figure STM Images Page 35. Complete the configuration as described in this table. Control Image Version Description Click the plus sign (+) above the Image table to display a pop-up window. Specify the Traffic Manager image filename or click Choose File to browse to the file and click Upload. Click the plus sign (+) above the Version table to display a pop-up window. Specify the following settings: Version Name - Specify a unique name for the version resource. Version File Name - Select the Traffic Manager version from the drop-down list. Info - Optionally, specify descriptive information. Click Add to add your settings to the running configuration. Stingray Services Controller User s Guide 171

182 Installing the Services Controller Virtual Appliance Running the Services Controller VA Setup Wizard 36. To configure instance hosts for the Services Controller, click Instance Hosts to display the Instance Hosts page. Figure Instance Hosts Page 37. Complete the configuration as described in this table. Control Import Instance Host OVA Description Local File - Select this radio button and then either type the path to the Instance Host OVA or click Choose File to browse to the Instance Host OVA. HTTP/SCP URL - Select this radio button and then specify a URL for the Instance Host OVA file. 38. Click Upload to upload the Instance Host OVA. Note: To delete a Host OVA, click the X next to its table entry and then click Delete. 39. Click the plus sign (+) above the instance hosts table to display a pop-up window. 40. Complete the standard configuration as described in this table. Setting Instance Host Name Host User Description Specify a unique name for the instance host. This name is used to set the hostname of the provisioned VM. This is required for all instance hosts. Note: For non-provisioned hosts, this must be the hostname of the existing host. Specify the instance host username. This property is unavailable for instance hosts where the Provision Only check box is selected. 172 Stingray Services Controller User s Guide

183 Running the Services Controller VA Setup Wizard Installing the Services Controller Virtual Appliance Setting Host Password Username Info Provision Only Service Manager Description Specify the password for the instance host user. This property is unavailable for instance hosts where the Provision Only check box is selected. Specify the user for SSH access; this user must be root. This property is unavailable for instance hosts where the Provision Only check box is selected. Optionally, specify descriptive information. This property is unavailable for instance hosts where the Provision Only check box is selected. Select this check box when you want to provision an instance host, but do not want it to be managed by the Services Controller, such as when the IP address is to be allocated by DHCP. The host is not added to the SSC database, but can be added once its IP address is known. When you select the Provision Only check box, the Provision check box is selected automatically and the Service Manager check box is cleared automatically. Select this check box for service manager hosts. After you select this check box: specify Max Instances to define how many instances can operate on this host. specify CPU Cores to define the number of processors that are available to the host. 41. To provision this instance host, select the Provision check box. This check box is cleared and unavailable when the Provision Only check box is selected. Complete the configuration as described in this table. Control Provision Description Specify this option to provision instance hosts. Specify these settings: Flavor - Select the size of the instance host from the drop-down list. Large - Specifies eight CPUs and 16 GB of memory. Small - Specifies two CPUs and 4 GB of memory. ESXi URI - Specify the URI to the ESXi host where this instance host is to be deployed. ESXi User - Specify the ESXi host username used for the provisioned instance host. ESXi Password - Specify the ESXi password used for the provisioned instance host. ESXi Datastore - Specify the ESXi storage pool. For example, datastore1. Mgmt Interface Mapping - Specify the name of the interface for management traffic. LAN Interface Mapping - Specify the name of the interface for server-facing traffic. WAN Interface Mapping - Specify the name of the interface for client-facing traffic. For example, you might have virtual networks named VM Network Mgmt, VM Network WAN, VM Network LAN created in your ESXi environment. 42. Click Add to apply your settings. Stingray Services Controller User s Guide 173

184 Installing the Services Controller Virtual Appliance Running the Services Controller VA Setup Wizard 43. To configure feature packs and view supported SKUs for the Services Controller, click SKU/Feature Pack to display the Instance Hosts page. Figure SKU/Feature Pack Page 44. Click the plus sign (+) above the feature pack table to display a pop-up window. 45. Complete the configuration as described in this table. Control Feature Pack Name STM SKU Excluded Add-On SKUs Info Description Specify a unique feature pack name. Select the Traffic Manager SKU from the drop-down list. A SKU is a set of features for the Traffic Manager. Specify the excluded features in a space-separated list. A blank field denotes that no features are excluded. For details about the feature list, see Managing Flexible Licenses on page 194. Specify the Add-On license. ADD-FIPS - Federal Information Processing Standards (FIPS) license ADD-WAF - Stingray Application Firewall license ADD-WEBACCEL - Stingray Aptimizer Web Accelerator For details about Add-On Licenses, see Installing an Add-On License on page 21. Optionally, specify descriptive information. 46. Click Add to apply your settings. 47. Click Finish to close the Setup wizard. 174 Stingray Services Controller User s Guide

48. To permanently save your settings, click the Save icon in the menu bar.

Figure 8-14. Save Icon in the Menu Bar

49. Finally, change the password for the admin user. This cannot be changed from within the Services Controller VA. See Changing the Password for the Admin User on page 175.

The installation and setup of the Services Controller VA is complete. For details about using the Services Controller VA, see Chapter 9, Configuring the Stingray Services Controller Virtual Appliance.

Changing the Password for the Admin User

The password for the admin user cannot be changed from within the Services Controller VA. You change it from the CLI using the following command:

amnesiac (config) # username admin password <password>

The password must be at least six characters long.

Upgrading the Services Controller VA

You can upgrade the Services Controller VA using the command-line interface (CLI).

To upgrade the Services Controller

1. Download the Services Controller VA upgrade image from the Riverbed Support site at support.riverbed.com.

2. Place the image file on a server that is accessible with HTTP, SCP, or FTP.

3. Connect to the Services Controller VA CLI and enter enable mode:

login as: admin
Riverbed Stingray Services Controller
Last login: Tue Apr 8 22:17 from <ip-address>
amnesiac > enable
amnesiac #

4. To fetch the image from your server, enter:

amnesiac # image fetch <http://, scp://, or ftp://username:password@hostname/path/filename>

5. To install the upgrade image and restart the Services Controller VA, enter:

amnesiac # image upgrade <image-name.img>
amnesiac # reload
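As an illustration only, a complete upgrade session might look like the following. The file server, credentials, and image file name are hypothetical; use the location where you staged the upgrade image and the image name supplied by Riverbed Support.

amnesiac > enable
amnesiac # image fetch scp://admin:secret@fileserver.example.com/images/ssc-upgrade.img
amnesiac # image upgrade ssc-upgrade.img
amnesiac # reload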

Downgrading the Services Controller VA

You can downgrade the Services Controller VA using the command-line interface (CLI).

To downgrade the Services Controller VA to an earlier version

1. Obtain the required Services Controller VA image.

2. Follow the upgrade procedure for the Services Controller VA using the downgrade image. See Upgrading the Services Controller VA on page 175.

Upgrading Instance Host VAs

You can upgrade an instance host VA using the CLI when the Services Controller VA is at version 1.3 or later. You can also migrate Traffic Manager instances to the upgraded image of the instance host.

Note: To upgrade an instance host VA from an earlier version, see Migrating Instance Host VAs on page 177.

To improve the instance host upgrade process, the instance host VA is divided into two image partitions, called root and backup, in which the image executable binaries reside.

To upgrade the instance host VA

1. Download the instance host OVA upgrade image file from the Riverbed Support site at support.riverbed.com. Place the image file in an accessible location in your infrastructure.

2. Connect to the Services Controller VA and enter enable mode:

login as: admin
Riverbed Stingray Services Controller
Last login: Tue Apr 8 22:17 from <ip-address>
amnesiac > enable
amnesiac #

3. Log in to the instance host VA:

amnesiac # slogin sscadmin@<adminuser>-<hostva>
Riverbed Stingray Services Host
Last login: Thu May 29 01:50 from <ip-address>
$

4. To display the current boot partition, enter:

$ system current-partition
Currently booted partition is 2

5. To fetch and install the instance host upgrade image onto the alternate (nonactive) partition, enter:

$ image install <image-url>
Successfully fetched & installed image [<image-url>]
Please boot into backup partition

6. To boot the system into the backup partition, enter:

$ system boot <partition-id>
System will boot into partition 2 on next boot.

7. To reboot the system, enter:

$ host reload
Broadcast message from root@rvbd-ssc-host
The system is going down for reboot NOW!
Successfully performed [reboot] operation

$ ssh sscadmin@<host-ip>
Warning: Permanently added '<host-ip>' (RSA) to the list of known hosts.
sscadmin@<host-ip>'s password:
Riverbed Stingray Services Host

8. To display the current boot partition, enter:

$ system current-partition
Currently booted partition is 2

9. To change to the alternate boot partition on the next boot, enter:

$ system boot <partition-id>

Migrating Instance Host VAs

Upgrading of instance host VAs was not supported by the Services Controller until version 1.3. The Services Controller provides a migration tool to help you upgrade your pre-v1.3 instance hosts to version 1.3. From there, instance host upgrades are supported.

During migration, the service provided by the Traffic Manager instances on the source instance host VA is interrupted.

The migration tool scans all the Traffic Manager instances on the source host and decides the next step based on the status of each Traffic Manager instance:

Active - If a Traffic Manager instance is Active, the migration tool exits with a warning message telling you that your Traffic Manager instances are in a noncompliant status.
Idle - If a Traffic Manager instance is Idle, the instance is moved.
Failed_xxx - The migration tool ignores any Traffic Manager instance in a Failed status (Failed_to_deploy, Failed_to_start).
New/Starting/Stopping/Upgrading - The migration tool exits with a warning message telling you that your Traffic Manager instances are in a noncompliant status.

To migrate to a new instance host VA

1. Create a standalone instance host VA with the new version that supports the image upgrade feature. Put the newly deployed instance host VA in the same subnet as the original instance host. To create a new instance host, use the Administration Setup wizard. You can rerun the Setup wizard at any time from Manage > System Settings: Setup Wizard.

2. Ensure that the new instance host VA is accessible via keyless SSH from the Services Controller VA; for example, the Services Controller VA administrator's SSH public key is installed on the instance host VA for the root user. For detailed information about configuring SSH, see Enabling Services Controller Communication with Instance Hosts.

3. Stop all Traffic Manager instances.

4. Connect to the Services Controller VA CLI and enter configuration mode:

login as: admin
Riverbed Stingray Services Controller
Last login: Tue Apr 8 22:17 from <ip-address>
amnesiac > enable
amnesiac # configure terminal
amnesiac (config) #

5. Run the migration tool CLI command on the Services Controller VA. This command moves all Traffic Manager instances from the existing managed instance host to the new instance host VA. Riverbed suggests that the migration is performed within the same networking environment, for example, with the source and destination hosts in the same subnet.

To migrate Traffic Manager instances that are on the same subnet, enter:

amnesiac (config) # ssc host host-migrate from <hostname of source host> to <hostname or ip of destination host>

If the two instance hosts are not in the same subnet, this command exits with a warning. To migrate Traffic Manager instances that are not on the same subnet, enter:

amnesiac (config) # ssc host host-migrate from <hostname of source host> to <hostname or ip of destination host> force

6. Update the DNS record to map the hostname of the original managed host resource to the IP address of the new instance host VA.

7. Restart all Traffic Manager instances.

The migration is complete. To perform further upgrades, see Upgrading Instance Host VAs on page 176.
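For illustration, a migration with concrete (hypothetical) values might look like this; the source host name and the destination IP addresses are placeholders for your environment.

amnesiac (config) # ssc host host-migrate from ssc-host-old.example.com to 10.1.20.30

If the destination host is in a different subnet, append force as described in step 5:

amnesiac (config) # ssc host host-migrate from ssc-host-old.example.com to 10.9.20.30 force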

189 CHAPTER 9 Configuring the Stingray Services Controller Virtual Appliance This chapter describes the configuration of the Stingray Services Controller Virtual Appliance (Services Controller VA). This chapter includes the following sections: Using the Administration UI to Manage the Services Controller on page 179 Configuring Network Settings on page 184 Configuring System Settings on page 187 Configuring Resources for the Services Controller on page 188 Viewing and Modifying Services Controller Settings on page 203 Managing Service Controller Resources on page 205 Creating and Managing LBaaS Services on page 215 Viewing Reports and Diagnostics on page 228 Viewing Logs and Generating System Dumps on page 234 Using the CLI to Manage the Services Controller on page 238 ESXi vsphere Host Port Mapping on page 250 Generating Metering Logs on page 250 Using the Administration UI to Manage the Services Controller After you have created the virtual appliance in vsphere, you can administer and manage the Services Controller using the Services Controller VA Administration UI or Services Controller VA CLI. The following instructions describe only how to perform these tasks using the Administration UI. For details about controlling the Services Controller using the CLI, see Using the CLI to Manage the Services Controller on page 238. Stingray Services Controller User s Guide 179

Note: You must import the following files into the Services Controller before you can create instances: the Secure Socket Layer (SSL) certificate and key, the Services Controller license, the enterprise bandwidth license key, the FLA license, and the Traffic Manager image. If you have not received your license files, contact Riverbed Licensing for assistance.

Connecting to the Administration UI

You can use the Administration UI to administer and manage the Services Controller using any supported Web browser. To connect to the Administration UI, you must know the host, domain, and administrator password that you assigned in the Setup wizard.

Note: Supported browsers are Chrome, Safari, Firefox, and IE 8+. Cookies and JavaScript must be enabled in your browser.

To connect to the Administration UI

1. Specify the URL for the Administration UI in the location box of your Web browser:

<protocol>://<host>.<domain>

where:

<protocol> - either http or https. HTTPS uses the SSL protocol to ensure a secure environment. When you connect using HTTPS, the system prompts you to inspect and verify the SSL certificate.
<host> - the hostname you assigned to the Services Controller in the Setup wizard. If your DNS server maps that IP address to a name, you can specify the DNS name.
<domain> - the full domain name for the appliance.

Note: Alternatively, you can specify the IP address instead of the host and domain name.

The Administration UI appears, displaying the Login page.

Figure 9-1. Login Page

2. In the Username text box, type the user login: admin. This is the default login.

3. In the Password text box, type the password you assigned in the Setup wizard of the Services Controller. The Services Controller is shipped with the default password: password.

191 Using the Administration UI to Manage the Services Controller Configuring the Stingray Services Controller Virtual Appliance 4. Click Log In to display the Home page. The Home page summarizes the current status of your system. Figure 9-2. Home Page Using the Administration UI This section describes how to use the Administration UI. The Administration UI contains Home, Manage, Diagnostics, and Support tabs that enable you to configure and manage your Services Controller. To permanently save settings Any configuration activity on the Services Controller VA must be saved for it to be persistent between sessions. Activities that are not related to configuration, such as exporting files, do not require this action. To save your current configuration: Click the Save icon in the menu bar. Figure 9-3. Save Icon in the Menu Bar Note: You must click the Save icon to permanently save your settings. Stingray Services Controller User s Guide 181

192 Configuring the Stingray Services Controller Virtual Appliance Using the Administration UI to Manage the Services Controller To expand Administration UI pages 1. Display any page that contains a table. For example, the STM Images page: Figure 9-4. Stingray Traffic Manager Images Page Showing Plus Sign 2. Click the plus sign (+) above the Version table to display a pop-up window. This window enables you to enter the required parameters for a new table entry. Figure 9-5. Stingray Traffic Manager Images Expanded 3. After you have entered the details, click Add to close the window and add the new resource to its list. To display details about a listed resource 1. Display any page that contains a table. Figure 9-6. STM Instance Page 182 Stingray Services Controller User s Guide

193 Using the Administration UI to Manage the Services Controller Configuring the Stingray Services Controller Virtual Appliance 2. Click the right arrow next to a table entry to expand it. Figure 9-7. STM Instance Page 3. Click the down arrow to collapse the entry. Overview of Services Controller Tasks The following sections are arranged in the order in which you would typically perform Services Controller tasks. Riverbed recommends that you run the Administration UI Setup wizard to configure Services Controller settings and resources the first time. For details, see Running the Services Controller VA Setup Wizard on page 162. The following sections describe how to: configure network settings, such as hostname, DNS servers, and IPv4 interface settings. configure system settings, such as announcements and settings for alerts. configure and administer the system, such as users, images, licenses, SSL certificate/key pairs, feature packs, database settings, mode settings, and the Services Controller service and settings. create and manage resources, such as instances, instance hosts, and clusters. view system reports. view system logs. obtain support, if needed. Stingray Services Controller User s Guide 183

194 Configuring the Stingray Services Controller Virtual Appliance Configuring Network Settings Configuring Network Settings You can configure network settings in Manage > Networking. Configuring General Network Settings You can configure network settings, including system hostname, additional hosts, and DNS servers in the General Network Settings page. To configure the system hostname 1. Choose Manage > Networking: General to display the General Networking page. Figure 9-8. General Networking Page 2. Specify the required hostname for the system and click Apply. 3. To permanently save your settings, click the Save icon in the menu bar. To add a host 1. Choose Manage > Networking: General to display the General Networking page. 2. Click the plus sign (+) above the table of hosts to display a pop-up window. 3. Specify the hostname and IP address. 4. Click Apply. 5. To permanently save your settings, click the Save icon in the menu bar. Note: To delete a host, click the X next to its table entry and then click Delete. 184 Stingray Services Controller User s Guide

195 Configuring Network Settings Configuring the Stingray Services Controller Virtual Appliance To configure DNS servers 1. Choose Manage > Networking: General to display the General Networking page. Figure 9-9. Adding DNS Servers 2. Under DNS Configuration, complete the configuration as described in this table. Control Primary DNS Server Secondary DNS Server Tertiary DNS Server DNS Domain List Description Specify the IP address for the primary name server. Optionally, specify the IP address for the secondary name server. Optionally, specify the IP address for the tertiary name server. Specify an ordered list of domain names. If you specify domains, the system automatically finds the appropriate domain for each of the hosts that you specify in the system. 3. Click Apply to apply your settings to the running configuration. 4. To permanently save your settings, click the Save icon in the menu bar. Configuring Base Interfaces You can configure the primary and auxiliary base interfaces in the Base Interfaces page. Stingray Services Controller User s Guide 185

196 Configuring the Stingray Services Controller Virtual Appliance Configuring Network Settings To configure the base interface settings 1. Choose Manage > Networking: Base Interfaces to display the Base Interfaces page. Figure Base Interfaces Page 2. Select the Enable Primary Interface check box and complete the configuration as described below. Note: Your browser will lose connection if you change the IP address of the primary interface. Close the browser and open the Services Controller VA using the new IP address after the configuration is applied. Control IPv4 DHCP IPv4 Static Description Select this radio button to send the hostname with the DHCP request for registration with Dynamic DNS. Specify these settings: Dynamic DNS - Select this option to automatically obtain the IP address from a DHCP server. A DHCP server must be available so that the system can request the IP address from it. Address - Specify the IP address. Subnet - Specify the subnet address. Select this radio button if you do not use a DHCP server to set the IPv4 address. Specify these settings: Address - Specify an IP address. Prefix - Specify a subnet mask. 3. Click Apply to apply your changes to the running configuration. 4. Select the Enable Aux Interface check box if required and then complete the IPv4 configuration. 5. To permanently save your settings, click the Save icon in the menu bar. 186 Stingray Services Controller User s Guide

197 Configuring System Settings Configuring the Stingray Services Controller Virtual Appliance To configure the IPv4 routing table 1. Choose Manage > Networking: Base Interfaces to display the Base Interfaces screen. 2. Click the plus sign (+) above the IPv4 routing table to display a pop-up window. 3. Complete the configuration as described in this table. Control Destination IPv4 Address IPv4 Subnet Mask Gateway IPv6 Address Interface Description Specify the destination IP address. Specify a subnet mask. Specify the default gateway IPv4 address. The default gateway must be in the same network as the primary interface. You must set the default gateway for inpath configurations. Specify the interface type: Auto, primary, or aux. 4. Click Apply to apply your settings. 5. To permanently save your settings, click the Save icon in the menu bar. Configuring System Settings You can configure system announcements and settings in Manage > System Settings. Configuring Announcements You can create a login message and a message-of-the-day for the system in the Announcements page. To configure announcements 1. Choose Manage > System Settings: Announcements to display the Announcements page. Figure Announcements Page 2. Type a pre-login message to appear on the Login page. 3. Type a post-login message (MOTD) to appear on the Home page. 4. Click Apply to apply your settings. Stingray Services Controller User s Guide 187

5. To permanently save your settings, click the Save icon in the menu bar.

Configuring Email Settings

You configure email settings to receive email notifications for events and alerts for the system in the Email Settings page.

To configure email settings

1. Choose Manage > System Settings: Email Settings to display the Email Settings page.

Figure 9-12. Email Settings Page

2. Specify the SMTP server IP address, the SMTP port, and the email address to receive events and alert messages. (An optional reachability check for the SMTP server is sketched at the end of this section.)

Note: You can also configure the "From" address of alert emails. This address can be set in INSTALLROOT/conf/email_config.txt, in the common section, as from_address. The symbol "$fqdn" will be replaced by the fully-qualified domain name of the SSC host. The other sections in this file should not normally be modified. For SSC installs on AWS, it is likely that you will need to change this setting to an address that is resolvable to the instance's public IP.

3. Click Apply to apply your settings.

4. To permanently save your settings, click the Save icon in the menu bar.

Configuring Resources for the Services Controller

You can perform the following tasks from the Manage > Deployment menu in the Administration UI:

Create and manage REST API user credentials
View and upload the Traffic Manager image (that is, the tarball)
View and import Services Controller, bandwidth, Traffic Manager FLA, and Add-On licenses
View and import the SSL certificate and private keys
Create feature packs and view enabled features in SKUs
Start, stop, and restart the Services Controller service
Configure, import, and export the local database
View and modify the Traffic Manager mode settings
View and modify Services Controller settings, such as metering, logging, licensing, and deployment
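Referring back to the email settings above: before relying on alert email, it can be useful to confirm that the SMTP server accepts connections from your network. This optional check is run from a workstation or server shell, not from the Services Controller itself; the host name is a placeholder, and the example assumes the netcat (nc) utility is available.

# Confirm that the SMTP server is reachable on the configured port (25 by default).
nc -vz smtp.example.com 25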

199 Configuring Resources for the Services Controller Configuring the Stingray Services Controller Virtual Appliance Managing REST API User Credentials You can create and view user accounts to perform REST API calls to the Services Controller in the User Credentials page. Any user account that is listed as Active is used for REST API calls. There is no restriction on the number of REST API users. If the user account is being used by the Services Controller to perform API calls, it cannot be modified (for example, its status cannot be changed). The Services Controller uses the most recently created account for CLI/UI access to Services Controller. Note: These user credential rules apply to the Administrator UI and CLI. To create a new REST API administrator user 1. Choose Manage > Deployment: User Credentials to display the User Credentials page. Figure User Credentials Page 2. Specify a unique name and a password for the REST API administrator. 3. Click Create to apply your settings. To create additional REST API users 1. Choose Manage > Deployment: User Credentials to display the User Credentials page. Figure User Credentials Page 2. Click the plus sign (+) above the table of users to display a pop-up window. 3. Specify a unique username and a password for the user. 4. Click Add to apply your settings. 5. To permanently save your settings, click the Save icon in the menu bar. Stingray Services Controller User s Guide 189
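Once a REST API user is active, you can exercise the credentials with any HTTP client. The sketch below is hypothetical: the host, port, user name, password, and URL path are all assumptions, so substitute the REST port configured on your Services Controller and the resource paths documented in the Services Controller REST API reference.

# Issue an HTTPS request with basic authentication; -k skips verification of a
# self-signed certificate (use a trusted certificate in production).
curl -k -u restadmin:secret "https://ssc.example.com:8000/api/"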

200 Configuring the Stingray Services Controller Virtual Appliance Configuring Resources for the Services Controller To activate or deactivate a REST API user 1. Choose Manage > Deployment: User Credentials to display the User Credentials page. 2. Click the right arrow next to a username entry to expand it. Figure Modify User Credentials Status 3. To activate the user, select the Active check box; to deactivate the user, specify the password and clear the Active check box. 4. Click Apply to apply your settings. 5. To permanently save your settings, click the Save icon in the menu bar. Uploading the Traffic Manager Image You can upload the Traffic Manager image in the STM Images page. You provide a URL to upload the image to the Services Controller VA. To upload the Traffic Manager Image 1. Choose Manage > Deployment: STM Images to display the STM Images page. 2. Click the plus sign (+) above the table of images to display a pop-up window. 3. Type the URL for the image in the STM Image text box or click Choose File and select the image. 4. Click Upload to upload the image. Creating a Traffic Manager Version Resource You create a Traffic Manager version (that is, image version) resource for the Services Controller in the Stingray Traffic Manager Images page. Creating a version resource makes the Services Controller aware of the new image and enables you to use it when you create an instance. You must create a version resource before you create instances in the Services Controller. 190 Stingray Services Controller User s Guide

201 Configuring Resources for the Services Controller Configuring the Stingray Services Controller Virtual Appliance To create a Traffic Manager version resource 1. Choose Manage > Deployment: STM Images to display the STM Images page. 2. Click the plus sign (+) above the table of version resources to display a pop-up window. 3. Complete the configuration as described in this table. Control Version Name Version Filename Info Description Specify a unique name for the version resource. Select the version filename from the drop-down list. Optionally, specify descriptive information. 4. Click Add to apply your settings. 5. To permanently save your settings, click the Save icon in the menu bar. To modify or activate a Traffic Manager version resource 1. Choose Manage > Deployment: STM Images to display the STM Images page. 2. Click the right arrow next to the required version resource entry to expand it. Figure Modifying Version Resources 3. Complete the configuration as described in this table. Control Status Version Filename Description To activate or deactivate a version resource, select an option from the dropdown list: Active - Activates the resource. Inactive - Deactivates the resource. Specify the name of the Traffic Manager image. Info Optionally, specify descriptive information. 4. Click Apply to apply your settings. 5. To permanently save your settings, click the Save icon in the menu bar. Stingray Services Controller User s Guide 191

202 Configuring the Stingray Services Controller Virtual Appliance Configuring Resources for the Services Controller Managing the Services Controller Licenses You can manage Services Controller licenses in the SSC Enterprise Licenses page. Figure Enterprise Licenses Page You must import the Services Controller license, the Bandwidth Pack license key, and the Traffic Manager FLA license before you create an instance. To import a Services Controller license 1. Choose Manage > Deployment: SSC Licenses to display the SSC Enterprise Licenses page. 2. Copy and paste the Stingray Services Controller license into the text box. 3. Click Import to import and display the license under Imported SSC License. 4. To permanently save your settings, click the Save icon in the menu bar. To import a Services Controller license key 1. Choose Manage > Deployment: SSC Licenses to display the SSC Enterprise Licenses page. 2. Click the plus sign (+) above the license key table to display a pop-up window. 3. Type the Services Controller license key in the Controller License Key text box. 4. Click Add to import and display the license under Imported SSC License. 192 Stingray Services Controller User s Guide

203 Configuring Resources for the Services Controller Configuring the Stingray Services Controller Virtual Appliance 5. To permanently save your settings, click the Save icon in the menu bar. Note: To delete an SSC controller license, click the X next to its table entry and then click Delete. Managing Bandwidth Pack Licenses You can add, view, and delete Enterprise Bandwidth Pack licenses in the SSC Enterprise Licenses page. You can have one or more Bandwidth Pack licenses, depending on the needs of your enterprise. The bandwidth licenses are activated when you import the license key. To import the bandwidth license key 1. Choose Manage > Deployment: SSC Licenses to display the SSC Enterprise Licenses page. 2. Click the plus sign (+) above the Bandwidth Pack license table to display a pop-up window. 3. Copy and paste the license key into the Bandwidth License Key text box. 4. Click Add to apply your settings. 5. To permanently save your settings, click the Save icon in the menu bar. Note: To delete a bandwidth license, click the X next to its table entry and then click Delete. Managing Add-On Licenses You can add, view, and delete Add-On licenses in the SSC Enterprise Licenses page. With Add-On licensing you can enable specific feature set (SKU) in the Services Controller. Add-On licenses have unique serial numbers and do not support upgrades. The Services Controller supports the following types of Add-On licenses: Federal Information Processing Standards (FIPS) license. Stingray Application Firewall (SAF) license. You can perform the following tasks with Add-On licenses: Import an Add-On license. For details, see To import the Add-On license key on page 193. Create a feature pack. For details, see To create a feature pack on page 197. Associate the imported Add-On license with the existing feature pack. For details, see To change the settings on a feature pack on page 197. To import the Add-On license key 1. Choose Manage > Deployment: SSC Licenses to display the SSC Enterprise Licenses page. 2. Click the plus sign (+) above the Add-On license key table to display a pop-up window. Stingray Services Controller User s Guide 193

204 Configuring the Stingray Services Controller Virtual Appliance Configuring Resources for the Services Controller 3. Copy and paste the license key into the Add-On License Key text box. 4. Click Add to apply your settings. 5. To permanently save your settings, click the Save icon in the menu bar. Note: To delete an Add-On license, click the X next to its table entry and then click Delete. Managing Flexible Licenses You can import and view the Traffic Manager Flexible License Architecture (FLA) license in the SSC Flexible License page. The Services Controller VA does not support multiple active FLA licenses. You must create a license resource for the FLA license before you create instances. A license resource makes the Services Controller aware of the FLA license and enables you to use it when you create an instance. To create an FLA license resource, you must use the CLI. For details, see Importing the SSL Certificate, Key, and Licenses on page 238. To import an FLA license 1. Choose Manage > Deployment: SSC Licenses to display the SSC Flexible License page. Figure SSC Flexible License 2. Copy and paste the FLA license into the SSC Flexible License text box. 3. Click Import to display the FLA license under the Imported FLA License. 4. To permanently save your settings, click the Save icon in the menu bar. Managing Services Controller Certificates You import a self-signed SSL certificate and private key for the Services Controller in the SSC Certificate and Private Key page. You must import the SSL certificate and private key before you create instances. 194 Stingray Services Controller User s Guide

To import the SSL certificate and private key

1. Choose Manage > Deployment: SSC Certificate to display the SSC Certificate and Private Key page.
2. If your private key and CA-signed certificate are in a single file, select the Import Existing Private Key and CA-Signed Public Certificate (One File in PEM format) radio button. The page refreshes to show the following fields.

Figure: Importing the SSC Certificate and Private Key: Single File

3. Complete the configuration as described in this table.

Import Single File:
- Local File - Select this radio button and then click Choose file to browse to the required certificate authority file. The private key is required regardless of whether you are adding or updating.
- Text - Select this radio button to input the certificate authority file directly. Paste the certificate authority text into the text box.
- Decryption Password - Specify the decryption password for the private key, if required.

4. If your private key and CA-signed certificate are in separate files, select the Import Existing Private Key and CA-Signed Public Certificate (Two Files in PEM or DER formats) radio button. The page refreshes to show the following fields.

Figure: Importing the SSC Certificate and Private Key: Two Files

Complete the configuration as described in this table.

Import Private Key:
- Local File - Select this radio button and then click Choose file to browse to the required private key file. The private key is required regardless of whether you are adding or updating.
- Key Text - Select this radio button to input the private key directly. Paste the key into the text box.
- Decryption Password - Specify the decryption password for the private key file, if required.

Import Public Certificate:
- Local File - Select this radio button and then click Choose file to browse to the required public certificate file.
- Certificate Text - Select this radio button to input the public certificate directly. Paste the public certificate into the text box.

5. Click Import. To revert your settings, click Revert.
6. Click Details to display the certificate details, or click PEM to display the certificate text.
7. To permanently save your settings, click the Save icon in the menu bar.
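If you are unsure whether a PEM file contains both the private key and the certificate, and therefore which import option applies, a quick local check of the PEM markers can help. The following is a minimal sketch, not part of the product; the file path is a placeholder, and the check only looks for standard PEM header lines rather than validating the key or certificate.

```python
from pathlib import Path

# Placeholder path to the PEM file you plan to import.
pem_text = Path("ssc-cert.pem").read_text()

has_key = "PRIVATE KEY-----" in pem_text            # matches RSA/EC/PKCS#8 key headers
has_cert = "-----BEGIN CERTIFICATE-----" in pem_text

if has_key and has_cert:
    print("Key and certificate are in one file: use the One File in PEM format option.")
elif has_key or has_cert:
    print("Only one item found: use the Two Files in PEM or DER formats option.")
else:
    print("No PEM markers found; check that the file is PEM encoded.")
```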

Managing Feature Packs

You can view, create, and change the status of feature packs in the SKU/Feature Pack page. A feature pack is a subset of a Traffic Manager SKU. A SKU contains a defined feature set for the Traffic Manager that you can apply to instances. You must create a feature pack for the Traffic Manager before you create instances.

To create a feature pack

1. Choose Manage > Deployment: SKU/Feature Pack to display the SKU/Feature Packs page.
2. Click the plus sign (+) above the feature pack table to display a pop-up window.
3. Complete the configuration as described in this table.

Feature Pack Name: Specify a unique feature pack name.
STM SKU: Select the Traffic Manager SKU from the drop-down list. A SKU is a set of features for the Traffic Manager.
Excluded: Specify the excluded features in a space-separated list. A blank field denotes that no features are excluded. For details about the feature list, see Managing Flexible Licenses on page 194.
Add-On SKUs: Select the Add-On SKUs in the feature pack:
- ADD-FIPS - Federal Information Processing Standards (FIPS) license
- ADD-WAF - Stingray Application Firewall license
- ADD-WEBACCEL - Stingray Aptimizer Web Accelerator
For details about Add-On licenses, see Add-On Licenses on page 21.
Info: Optionally, specify descriptive information.

4. Click Add to apply your settings.
5. To permanently save your settings, click the Save icon in the menu bar.

To change the settings on a feature pack

1. Choose Manage > Deployment: SKU/Feature Pack to display the SKU/Feature Packs page.
2. Click the right arrow next to the required feature pack entry to expand it.

Figure: Changing the Status of a Feature Pack

3. Complete the configuration as described in this table.

Status: To change the status of the feature pack, select an option from the drop-down list:
- Active - Activates the resource.
- Inactive - Deactivates the resource.
Info: Optionally, specify descriptive information.

4. Click Apply to apply your settings.
5. To permanently save your settings, click the Save icon in the menu bar.

To view included features in the SKU

To display the included features for a SKU, click the right arrow next to the required SKU name.

Figure: Viewing SKU Features
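To make the feature pack fields above concrete, the sketch below collects them into a single structure and builds the space-separated Excluded value. This is illustrative only: the dictionary keys mirror the UI labels rather than any documented schema, and the SKU and feature names are hypothetical placeholders.

```python
# Illustrative only: a feature pack described as a plain dictionary whose keys mirror
# the UI labels above. The actual resource schema may use different field names.
feature_pack = {
    "feature_pack_name": "gold-pack",          # hypothetical name
    "stm_sku": "STM-400",                      # hypothetical Traffic Manager SKU
    "excluded": ["featureA", "featureB"],      # hypothetical feature names to exclude
    "add_on_skus": ["ADD-WAF"],                # Add-On SKUs listed in the table above
    "info": "Example feature pack",
}

# The Excluded field is entered as a space-separated list; a blank field means
# that no features are excluded.
excluded_field = " ".join(feature_pack["excluded"])
print(excluded_field)   # -> "featureA featureB"
```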

Managing the Services Controller Service

You can change the REST API port for the Services Controller and start, stop, and restart the Services Controller service in the SSC Service page.

To change the REST API port for the Services Controller service

1. Choose Manage > Deployment: SSC Service to display the SSC Service page.

Figure: Changing the REST API Port for the SSC Service

2. Specify the port number in the REST Port text box.
3. Click Apply to apply your settings.
4. To permanently save your settings, click the Save icon in the menu bar.

To start, stop, or restart the Services Controller service

1. Choose Manage > Deployment: SSC Service to display the SSC Service page.

Figure: Starting, Stopping, and Restarting the SSC Service

2. Click Start, Stop, or Restart to start, stop, or restart the Services Controller service.
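After changing the REST API port, you may want to confirm that the Services Controller is answering on the new port. The sketch below is a minimal, hypothetical check using Python and the requests library; the hostname, credentials, and URL path are placeholders rather than documented API values, and certificate verification is disabled only for illustration.

```python
import requests

SSC_HOST = "ssc.example.com"   # placeholder: your Services Controller hostname
REST_PORT = 8100               # placeholder: the value you entered in the REST Port box

# Hypothetical smoke test: any HTTP response (even 401 or 404) shows that the
# Services Controller service is listening on the configured port.
try:
    resp = requests.get(
        f"https://{SSC_HOST}:{REST_PORT}/",
        auth=("admin", "admin-password"),   # placeholder credentials
        verify=False,                       # for illustration only; use your CA bundle
        timeout=10,
    )
    print(f"REST port {REST_PORT} is reachable (HTTP {resp.status_code})")
except requests.exceptions.ConnectionError:
    print(f"Nothing is listening on port {REST_PORT}; check the SSC Service status")
```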

Configuring Database Settings

You can configure the port number, maximum number of connections, bind address, and the username and password for the MySQL database in the Configure MySQL DB page.

To configure MySQL database settings

1. Choose Manage > Deployment: DB Settings to display the Configure MySQL DB page.

Figure: Configuring MySQL DB Settings

2. Complete the configuration as described in this table.

Use Local DB: Select this radio button for a local MySQL database. If you select this radio button, no port is required, and the IP address is a local (loopback) address.
Use Remote DB: Select this radio button for a remote MySQL database. All other properties are required.
Port Number: Specify the port number for the MySQL database.
Max Connections: Specify the maximum number of connections for the MySQL database.
Bind Address: Specify the IP address for the MySQL database. For a local database, this is a local (loopback) address.
DB Username: Specify the username for the MySQL database.
DB Password: Specify the user password for the MySQL database.

3. Click Apply to apply your settings.
4. To permanently save your settings, click the Save icon in the menu bar.

Importing and Exporting the Local Database

You can import and export the MySQL database in the Import/Export Local DB page to make sure that you have a backup copy of the database.

To import the local database

Note: An exported MySQL database can only be used with the version of the Services Controller Virtual Appliance from which it was exported. If an export from a previous version is used, your system may become corrupted.

1. Choose Manage > Deployment: DB Import/Export to display the Import/Export Local DB page.

Figure: Importing the Local Database

2. Type the database name or click Choose File to navigate to the database.
3. Click Import to apply your settings.
4. To permanently save your settings, click the Save icon in the menu bar.

To export the database

Note: An exported MySQL database can only be used with the version of the Services Controller Virtual Appliance from which it was exported. As a result, when the Services Controller VA is upgraded, you should replace any required exports with new ones created from the upgraded Services Controller VA. If an export from a previous version is used, your system may become corrupted.

1. Choose Manage > Deployment: DB Import/Export to display the Import/Export Local DB page.

Figure: Exporting the Local Database

2. Click Export to apply your settings. The name of the exported database file is chosen automatically, using the following format: sscdb_dump_<va_version>_<timestamp>.sql
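Because an exported database can only be imported into the same Services Controller VA version, it can be useful to check an export's embedded version before attempting an import. The sketch below is illustrative only; it assumes the documented filename format sscdb_dump_<va_version>_<timestamp>.sql and that the version component contains no underscores, and the example filenames and version strings are placeholders.

```python
def dump_va_version(filename: str) -> str:
    """Extract <va_version> from a name like sscdb_dump_<va_version>_<timestamp>.sql."""
    stem = filename[len("sscdb_dump_"):-len(".sql")]
    version, _, _timestamp = stem.partition("_")   # assumes no underscore in the version
    return version

current_va_version = "2.0"                             # placeholder: your SSC VA version
export_name = "sscdb_dump_2.0_20141201T120000.sql"     # hypothetical export filename

if dump_va_version(export_name) != current_va_version:
    raise SystemExit("Export was taken from a different VA version; do not import it.")
print("Export matches the current VA version; safe to import.")
```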

Configuring Services Controller Mode Settings

You can view and update Services Controller mode settings for high availability deployments in the SSC Mode Settings (HA) page. For details about recommended settings, see Managing Flexible Licenses on page 194.

To view and update Services Controller mode settings

1. Choose Manage > Deployment: SSC Mode Settings to display the SSC Mode Settings (HA) page.
2. Click the right arrow next to the service controller name entry to expand it.

Figure: Updating SSC Mode Settings (HA)

3. Complete the configuration as described in this table.

Management: Specify the current management status for the Services Controller: Enabled or Disabled. This option determines whether the specified manager is accessible through the REST API.
Metering: Specify the metering value for the manager: All or None. This option determines whether the specified manager meters all Traffic Manager instances or none. For details about recommended settings, see Using the Administration UI to Manage the Services Controller on page 179.
Licensing: Specify how the specified manager responds to license requests from the Traffic Manager: Enabled, Disabled, or EnabledWithAlerts. For details about recommended settings, see Managing Flexible Licenses on page 194.

4. Click Apply to apply your settings.
5. To permanently save your settings, click the Save icon in the menu bar.

Viewing and Modifying Services Controller Settings

You can view and modify Services Controller settings, such as monitoring, metering, logging, licensing, and deployment, in the SSC Settings page.

1. Choose Manage > Deployment: Settings to display the SSC Settings page.

Figure: Modifying SSC Settings

2. To configure monitoring, complete the configuration as described in this table.

Controller Failure Period: Specify the period of time, in seconds, after which a Services Controller is considered to have failed. The default value is 180.
Controller Monitor Interval: Specify the period of time, in seconds, between monitoring the Services Controller. The default value is 60.
Host Failure Period: Specify the period of time, in seconds, after which a host is considered to have failed. The default value is 180.
Host Monitor Interval: Specify the period of time, in seconds, between monitoring hosts. The default value is 60.
Instance Failure Period: Specify the period of time, in seconds, after which the instance is considered to have failed. The default value is 180.
Instance Monitor Interval: Specify the period of time, in seconds, between monitoring instances. The default value is 60.

Monitor Interval: Specify the period of time, in seconds, between monitoring alerts. The default value is 60.
Overdue Warning Period: Specify the period of time, in seconds, after which monitoring is considered overdue.

3. To configure metering and licensing, complete the configuration as described in this table.

Meter Interval: Specify the period of time, in seconds, between metering actions. The default value is 3600 seconds (1 hour).
Log Check Interval: Specify the period of time, in seconds, between checks for log space. The default value is 3600 seconds (1 hour).
Alert Threshold: Specify the number of alerts that are sent. The default value is 1. The threshold and interval settings enable you to determine how many requests have to be received by a nonprimary license server in the specified interval before an alert is sent to the configured alert addresses. After the threshold and interval are reached, an alert message is sent. At most, one message is sent per hour, to protect against a flood of messages being sent in the case of complete failure of the primary license server on a busy system.
Alert Threshold Interval: Specify the period of time, in seconds, between alerts. The default value is 3600 seconds (1 hour). The threshold and interval settings enable you to specify the time interval before an alert is sent to the configured alert addresses. After the threshold and interval are reached, an alert message is sent. At most, one message is sent per hour, to protect against a flood of messages being sent in the case of complete failure of the primary license server on a busy system.

4. To configure logging and deployment settings, complete the configuration as described in this table.

License Logging: Specify the license logging value. The default value is 0, which equals no logging. A log level of 3 or greater causes responses to license server requests to be logged in full, including the feature values set by the feature pack and the bandwidth associated with the instance making the request.
Metering Logging: Specify the metering logging value. The default value is 0, which equals no logging. A log level of 5 or greater gives a summary of the activities of the metering thread (that is, starting metering, stopping metering, and so forth). A log level of 9 or greater provides detailed logging of each instance being metered.

Inventory Logging: Specify the inventory logging value. The default value is 0, which equals no logging. A log level of 1 or greater causes inventory changes to be logged (the equivalent of the audit records). A log level of 3 or greater causes logging of all deployment and action commands. A log level of 8 or greater causes logging of the output from all deployments and actions.
Max Instances: Specify the maximum number of Traffic Manager instances that can be deployed. The default value is 0, which equals no limit; typically, this is the correct value for most deployments. Instances that have been deleted do not count towards the limit. Instances that have been deployed but are not active (that is, have not been started) do count towards the limit. If you create a new instance in excess of this number, the instance is rejected with an error message. If this property is set to a lower number than the number of currently deployed instances, there is no immediate effect, but subsequent deployment requests are rejected.

5. To configure bandwidth licensing and controller licensing, complete the configuration as described in this table.

Bandwidth: Expire Warning Days: Specify the number of days before the bandwidth license expires that you are warned. The default value is 30.
Controller: Expire Warning Days: Specify the number of days before the controller license expires that you are warned.

6. Click Apply to apply your settings.
7. To permanently save your settings, click the Save icon in the menu bar.

Managing Service Controller Resources

You perform the following Services Controller tasks from the Manage > Controller menu:

- Create, view, and change the status of instance hosts. See Creating and Managing Instance Hosts on page 205.
- Create, view, and modify Traffic Manager instances. See Creating and Managing Traffic Manager Instances on page 209.
- Create, view, and change the status of clusters. See Creating and Managing Clusters on page 214.

Creating and Managing Instance Hosts

You can upload the Instance Host OVA, as well as view, create, and change the status of instance hosts in the STM Instance Host page. You must import the Instance Host OVA and create an instance host resource before you can create a managed Traffic Manager instance.

To upload the Instance Host OVA

1. Choose Manage > Controller: Instance Hosts to display the STM Instance Host page.

Figure: Uploading the Instance Host OVA

2. Complete the configuration as described in this table.

Import Instance Host OVA:
- Local File - Select this radio button and then either type the path to the Instance Host OVA or click Choose File to browse to the Instance Host OVA.
- HTTP/SCP URL - Select this radio button and then specify a URL for the Instance Host OVA file.

3. Click Upload to upload the Instance Host OVA.

Note: To delete a Host OVA, click the X next to its table entry and then click Delete.

After you have uploaded the Instance Host OVA, you must create the instance host resource.

To create a new instance host resource

1. Choose Manage > Controller: Instance Hosts to display the STM Instance Host page.
2. Click the plus sign (+) above the instance hosts table to display a pop-up window.
3. Complete the standard configuration as described in this table.

Instance Host Name: Specify the hostname of this instance host. This is required for all instance hosts. Note: For non-provisioned hosts, this must be the hostname of the existing host.
Host User: Specify the instance host username. This property is unavailable for instance hosts where the Provision Only check box is enabled.
Host Password: Specify the password for the instance host user. This property is unavailable for instance hosts where the Provision Only check box is enabled.
Username: Specify the user for SSH access; this user must be root. This property is unavailable for instance hosts where the Provision Only check box is enabled.

Info: Optionally, specify descriptive information. This property is unavailable for instance hosts where the Provision Only check box is enabled.
Provision: Specify this option to provision instance hosts. The use of this check box is described in step 4.
Provision Only: Select this check box when you want to provision an instance host, but do not want it to be managed by the Services Controller, such as when the IP address is to be allocated by DHCP. The host is not added to the SSC database, but can be added once its IP address is known. When you select the Provision Only check box, the Provision check box is selected automatically and the Service Manager check box is cleared automatically.
Service Manager: Select this check box for service manager hosts. After you select this check box:
- specify Max Instances to define how many instances can operate on this host.
- specify CPU Cores to define the number of processors that are available to the host.

4. To provision this instance host, enable the Provision check box. This check box is cleared and unavailable when the Provision Only check box is selected. Complete the configuration as described in this table.

Provision: Specify this option to provision instance hosts. Specify these settings:
- Flavor - Select the size of the instance host from the drop-down list. Large specifies eight CPUs and 16 GB of memory; Small specifies two CPUs and 4 GB of memory.
- ESXi URI - Specify the URI to the ESXi host where this instance host is to be deployed.
- ESXi User - Specify the ESXi host username used for the provisioned instance host.
- ESXi Password - Specify the ESXi password used for the provisioned instance host.
- ESXi Datastore - Specify the ESXi storage pool. For example, datastore1.
- Mgmt Interface Mapping - Specify the name of the interface for management traffic.
- LAN Interface Mapping - Specify the name of the interface for server-facing traffic.
- WAN Interface Mapping - Specify the name of the interface for client-facing traffic.
For example, you might have virtual networks named VM Network Mgmt, VM Network WAN, and VM Network LAN created in your ESXi environment.

5. Click Add to apply your settings.
6. To permanently save your settings, click the Save icon in the menu bar.

To modify instance host settings

1. Choose Manage > Controller: Instance Hosts to display the STM Instance Host page.

2. Click the right arrow next to the required host entry to expand it.

Figure: Modifying Instance Host Settings

3. Complete the configuration as described in this table.

Status: Select an option from the drop-down list: Active activates the resource; Inactive deactivates the resource.
Host User: Specify the instance host username.
Host Password: Specify the password for the instance host user.
Username: Specify the username for SSH access; this user must be root.
Info: Optionally, specify descriptive information.
Service Manager: Select this check box for service manager hosts. After you select this check box:
- specify Max Instances to define how many instances can operate on this host.
- specify CPU Cores to define the number of processors that are available to the host.

4. Click Apply to apply your settings.
5. To permanently save your settings, click the Save icon in the menu bar.
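Before adding an instance host, it can save time to confirm that its hostname resolves and that SSH is reachable, since the Services Controller manages hosts over SSH as the root user and a DNS entry for the host is a prerequisite for managed instances. The following is a minimal sketch, not part of the product; the hostname is a placeholder and the check only verifies that TCP port 22 accepts connections.

```python
import socket

INSTANCE_HOST = "stm-host-1.example.com"   # placeholder instance host name

# Confirm that a DNS entry exists for the instance host.
try:
    address = socket.gethostbyname(INSTANCE_HOST)
    print(f"{INSTANCE_HOST} resolves to {address}")
except socket.gaierror:
    raise SystemExit(f"No DNS entry found for {INSTANCE_HOST}")

# Confirm that something is listening on the SSH port (22).
try:
    with socket.create_connection((address, 22), timeout=5):
        print("SSH port 22 is reachable")
except OSError:
    print("SSH port 22 is not reachable; check the host and any firewalls")
```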

Creating and Managing Traffic Manager Instances

You can create instances on the STM Instances page. Before you create an instance, you must:

- Create a license resource for the FLA license. For details, see Managing Flexible Licenses on page 194.
- Create a version resource for the Traffic Manager image. For details, see Creating a Traffic Manager Version Resource on page 190.
- Create a feature pack. For details, see Managing Feature Packs on page 197.
- Import an instance host VA and provision it. You can create and provision an instance host using the Administration UI Setup wizard. The wizard can be started at any time. For details, see Running the Services Controller VA Setup Wizard on page 162.
- Create a DNS entry for the instance host. For details, see Installing, Configuring, and Administering the Services Controller VA on page 160 and Provisioning an Instance Host on page 241.

To create a managed Traffic Manager instance

1. Choose Manage > Controller: Instances to display the STM Instances page.
2. Click the plus sign (+) above the table of Traffic Manager instances to display a pop-up window.
3. Select the Advanced Mode check box and ensure that the Managed Mode check box is selected.
4. Complete the configuration as described in this table.

Instance Name: Specify the instance name. As a best practice, assign the same name to the instance and the Linux container.
License Name: Select the FLA license name for the instance from the drop-down list. You must create a license entry before you create an instance. For details, see Managing Flexible Licenses on page 194.
Bandwidth: Specify the maximum allowed bandwidth for this instance (in Mbps).
CPU Usage: Specify a string that describes the CPUs that are used for the instance. The value is used by the taskset command and is typically a single CPU number.
Owner: Specify the owner of the instance. For example, "Production", "Development", "SharePoint", and so on.
STM Feature Pack: Select the feature pack name associated with the instance. You must create a feature pack before you create an instance. For details, see Managing Feature Packs on page 197.
STM Version: Select the Traffic Manager version for the instance. You must create a version resource before you create an instance. For details, see Creating a Traffic Manager Version Resource on page 190.
Instance Host Name: Select the fully qualified domain name of the instance host from the drop-down list. You must create an instance host entry for managed instances before you create an instance. For details, see Creating and Managing Instance Hosts on page 205.
Management Address: Specify the hostname or IP address of the instance. If you use a hostname instead of an IP address, you should use a fully qualified domain name. For instances inside a container with network isolation, use the lxc.network.ipv4 setting defined in the container configuration file.

5. To configure an instance in a container, complete the configuration as described in this table.

Cluster ID: If the instance is part of a cluster, specify the cluster ID.
Configuration Options: Configuration options vary according to the type of instance you are deploying. Specify these settings:
- Port Offset - Instances running within containers without network isolation share an IP address with the instance host. To avoid port conflicts for Stingray functions (such as the REST or Admin interfaces) between instances, specify a port offset. For nonclustered instances, you must specify a value for port_offset in the config_options property of the instance resource. For clustered instances, you must specify a value for both the cluster_port_offset property of the cluster resource and the port_offset property of the instance resource.
- Extra Options - Specify the values for config_options. For example, the following are the default values for the extra configuration options: admin_ui=yes maxfds=100 webcache!size=50 java!enabled=yes num_children=10 start_flipper=yes
The values for Port Offset and Extra Options are supplied as a space-separated list enclosed in single quotes, for example: 'admin_ui=yes maxfds=100 webcache!size=50 java!enabled=yes num_children=10 start_flipper=yes'
For detailed information, see Network Isolation Using Port Offsets on page 10 and Deploying and Configuring Traffic Manager Instances on page 46.
Note: Any change to the config_options settings will cause a restart of the instance.
Note: Some configuration options, if specified here, must be consistent between all Traffic Manager instances in a cluster: maxfds, webcache!size, java!enabled, statd!rsync_enabled, flipper!monitor_interval, and flipper!frontend_check_addrs. If you set or update the value in one instance resource, the Services Controller replicates this update automatically to the other instance resources. The instance will restart whenever these are changed, but other instances in the cluster must be restarted manually.
Note: Whenever the config_options property is set, all currently modified options must be specified again in the REST call. Any options that are not specified will lose their current value and be reset to their default value.
Container Name: Specify the name of the Linux container in which the instance is running. For instances outside containers, leave this field blank.

Management IP/Subnet: Specify the IP address for the management virtual interface. Use this format: XXX.XXX.XXX.XXX/XX
Gateway: The gateway IP address.
LAN IP/Subnet: Specify the IP address for the LAN subnet for the container. This is the virtual interface that manages Traffic Manager load balancing. Use this format: XXX.XXX.XXX.XXX/XX
WAN IP/Subnet: Specify the IP address for the WAN subnet for the container. This is the virtual interface that manages WAN traffic. Use this format: XXX.XXX.XXX.XXX/XX
Data Plane Gateway: Specify the gateway IP address for the data plane.
Flavor: Specify the size of the instance host: small specifies two CPUs and 4 GB of memory; large specifies eight CPUs and 16 GB of memory.

6. Click Add to create the instance. The instance is created with a status of Idle. To change the status from Idle to Active, click Start in the Action column.
7. To permanently save your settings, click the Save icon in the menu bar.

To create a nonmanaged Traffic Manager instance

1. Choose Manage > Controller: Instances to display the STM Instances page.
2. Click the plus sign (+) above the table of Traffic Manager instances to display a pop-up window.
3. Select the Advanced Mode check box and ensure that the Managed Mode check box is clear.
4. Complete the configuration as described in this table.

Instance Name: Specify the instance name. As a best practice, assign the same name to the instance and the Linux container.
Bandwidth: Specify the maximum allowed bandwidth for this instance.
Owner: Specify the owner of the instance.
STM Feature Pack: Select the feature pack name associated with the instance. You must create a feature pack before you create an instance. For details, see Managing Feature Packs on page 197.
Management Address: Specify the hostname or IP address of the instance. If you use a hostname instead of an IP address, you should use a fully qualified domain name. When this is set for a nonmanaged instance, the SNMP Address, REST Address, and UI Address properties are populated automatically.
Cluster ID: Leave this field blank.

Configuration Options: Configuration options vary according to the type of instance you are deploying. Specify these settings:
- Port Offset - This is not supported for nonmanaged instances.
- Extra Options - Specify the values for config_options. A single configuration option is supported: snmp!community, the SNMP v2 community setting for this nonmanaged instance. This must be set to the same value as the snmp!community property on the instance resource (default: "public").
For detailed information, see Configuring a Nonmanaged Instance on page 54.
SNMP Address: The hostname or IP address, including the port, for the Traffic Manager instance SNMP server. This is populated automatically when you specify a Management Address. You can change the port number if required.
REST Address: Specify the IP address, including the port, for the instance REST API. This is populated automatically when you specify a Management Address. You can change the port number if required.
Admin Username: Specify the username of the Traffic Manager instance administrator user.
UI Address: Specify the IP address, including the port, for the Traffic Manager instance Administration UI. This is populated automatically when you specify a Management Address, and you can change the port number if required. If left blank, the address is set to port 9090. If you use a hostname instead of an IP address, you must use a fully qualified domain name.
Admin Password: Specify the password of the Traffic Manager instance administrator user.

5. Click Add to create the nonmanaged instance.
6. To permanently save your settings, click the Save icon in the menu bar.

To view and modify Traffic Manager instance settings

1. Choose Manage > Controller: Instances to display the STM Instance page.

2. Click the right arrow next to the required instance entry to expand it.

Figure: Modifying Traffic Manager Instances

3. Make any changes that are required:

- Modify any required properties and click Apply to apply your changes.
- To start an instance, click Start in the Action column. The status of the instance changes from Idle to Active.
- To stop an instance, click Stop in the Action column. The status of the instance changes from Active to Idle.
- To attempt to start an instance after a failure, click Retry.

Note: You cannot downgrade a Traffic Manager instance using the STM Version property.

4. To permanently save your settings, click the Save icon in the menu bar.
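The config_options (Extra Options) value described in the instance configuration tables above is a single space-separated list of key=value pairs enclosed in single quotes, and every time config_options is set, every option you still want must be supplied together. A small helper like the following sketch can rebuild the full string from a dictionary so nothing is accidentally dropped; this is illustrative only, and the port_offset value shown is a hypothetical example.

```python
# Keep the complete set of options in one place, because setting config_options
# replaces the whole list: anything omitted reverts to its default value.
config_options = {
    "admin_ui": "yes",
    "maxfds": "100",
    "webcache!size": "50",
    "java!enabled": "yes",
    "num_children": "10",
    "start_flipper": "yes",
    "port_offset": "10",   # hypothetical offset for a container without network isolation
}

# Build the single-quoted, space-separated string expected by the Extra Options field.
extra_options = "'" + " ".join(f"{key}={value}" for key, value in config_options.items()) + "'"
print(extra_options)
```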

Creating and Managing Clusters

A cluster is a collection of Services Controllers that share a single inventory database. A cluster is used for a variety of purposes:

- Sharing of licensing.
- Scalability of services.
- Implementation of high availability.

You can create and change the status of a cluster in the SSC Cluster page. When you create an instance that you intend to be part of a cluster, you must have created the cluster resource in advance.

To create a new cluster

1. Choose Manage > Controller: STM Clusters to display the STM Cluster page.
2. Click the plus sign (+) above the table of clusters to display a pop-up window.
3. Complete the configuration as described in this table.

Cluster Name: Specify a cluster name. You use this name when you create instances.
Owner: Specify the owner of the cluster.
Cluster Port Offset: Specify the cluster port offset value. For clustered instances, you must specify a value. For detailed information, see Network Isolation Using Port Offsets on page 10.

4. Click Add to apply your settings.
5. To permanently save your settings, click the Save icon in the menu bar.

To change the status of a cluster

1. Choose Manage > Controller: STM Clusters to display the STM Cluster page.
2. Click the right arrow next to a cluster entry to expand it.

Figure: Changing the Status of a Cluster

3. To change the status of the cluster, select Active or Inactive in the Status drop-down box.
4. Click Apply to apply your settings.
5. To permanently save your settings, click the Save icon in the menu bar.

Creating and Managing LBaaS Services

You can perform the following Services Controller tasks:

- Create an LBaaS service and supporting resources using the LBaaS Wizard. This enables the service to be created in a guided manner, including the initial setup for licenses, images, and instance hosts. See Creating a Service with the LBaaS Wizard on page 215.
- Create an LBaaS service based on existing resources. See Creating an LBaaS Service on page 221.
- Change the status of an LBaaS service. See Changing the Status of an LBaaS Service on page 225.
- Change the properties of an LBaaS service. See Changing the Properties of an LBaaS Service on page 225.
- Start and stop an LBaaS service. See Starting and Stopping an LBaaS Service on page 68.
- Change the error state of a failed LBaaS service host. See Changing the Error State of an LBaaS Service Host on page 226.
- Delete an LBaaS service. See Deleting an LBaaS Service on page 227.

For a full description of LBaaS services, see Chapter 5, Configuring Load Balancing as a Service. For a full description of Elastic LBaaS (ELBaaS) services, see Chapter 6, Configuring Elastic Load Balancing as a Service.

Creating a Service with the LBaaS Wizard

The definition of an LBaaS service includes attributes for both standard LBaaS and Elastic LBaaS. Attributes that apply to just one service type are noted.

To create a service using the LBaaS Wizard

1. To start the LBaaS wizard, choose Manage > LBaaS Settings: LBaaS Wizard. The Welcome page is displayed.

Figure: LBaaS Wizard: Welcome Page

2. To configure the Traffic Manager FLA license, click SSC Flexible License to display the SSC Flexible Licenses page.

Figure: LBaaS Wizard: SSC Flexible License Page

3. Complete the configuration as described in this table.

FLA License Text: Paste the Flexible license text and click Import. The FLA license text is displayed.

4. To add additional licenses, click Additional Licenses to display the Additional Licenses page.

Figure: LBaaS Wizard: Additional Licenses Page

5. To add a Services Controller license directly, copy and paste the license text into the Stingray Services Controller License Text box. Click Import to use the pasted text as the license. You can also do this by adding it to the License table (see step 6).
6. To add a Services Controller license, click the plus sign (+) above the SSC Controller License table to display a pop-up window.
7. Copy and paste the Services Controller license key into the Controller License Key text box and click Add.
8. To add a bandwidth license, click the plus sign (+) above the Bandwidth table to display a pop-up window.
9. Copy and paste the bandwidth license key into the Bandwidth License Key text box and click Add.
10. To add an Add-On license, click the plus sign (+) above the Add-On License table to display a pop-up window.
11. Copy and paste the Add-On license key into the Add-On License Key text box and click Add.
12. To configure a Traffic Manager image and version resource, click STM Images to display the STM Images page.

Figure: LBaaS Wizard: STM Images Page

13. Complete the configuration as described in this table.

Traffic Manager Image: Click the plus sign (+) above the Image table to display a pop-up window. Specify the Traffic Manager image filename or click Choose File to browse to the file, and then click Upload.
Traffic Manager Version: Click the plus sign (+) above the Version table to display a pop-up window. Specify the following settings:
- Version Name - Specify a unique name for the version resource.
- Version Filename - Select the Traffic Manager version from the drop-down list.
- Info - Optionally, specify descriptive information.
Click Add to add your settings to the running configuration.

14. To configure feature packs and examine SKUs, click SKU/Feature Pack to display the SKU/Feature Pack page.

Figure: LBaaS Wizard: SKU/Feature Pack Page

15. Click the plus sign (+) above the feature pack table to display a pop-up window.
16. Complete the configuration as described in this table.

Feature Pack Name: Specify a unique feature pack name.
STM SKU: Select the Traffic Manager SKU from the drop-down list. A SKU is a set of features for the Traffic Manager.
Excluded: Specify the excluded features in a space-separated list. A blank field denotes that no features are excluded. For details about the feature list, see Managing Flexible Licenses on page 194.
Add-On SKUs: Select the Add-On SKUs in the feature pack:
- ADD-FIPS - Federal Information Processing Standards (FIPS) license
- ADD-WAF - Stingray Application Firewall license
- ADD-WEBACCEL - Stingray Aptimizer Web Accelerator
For details about Add-On licenses, see Add-On Licenses on page 21.
Info: Optionally, specify descriptive information.

17. Click Add to apply your settings.

18. To configure instance hosts for the Services Controller, click Instance Hosts to display the Instance Hosts page.

Figure: LBaaS Wizard: Instance Hosts Page

19. Complete the configuration as described in this table.

Import Instance Host OVA:
- Local File - Select this radio button and then either type the path to the Instance Host OVA or click Choose File to browse to the Instance Host OVA.
- HTTP/SCP URL - Select this radio button and then specify a URL for the Instance Host OVA file.

20. Click Upload to upload the Instance Host OVA.

Note: To delete a Host OVA, click the X next to its table entry and then click Delete.

21. Click the plus sign (+) above the instance hosts table to display a pop-up window.
22. Complete the standard configuration as described in this table.

Instance Host Name: Specify the hostname of this instance host. This is required for all instance hosts. Note: For non-provisioned hosts, this must be the hostname of the existing host.
Host User: Specify the instance host username. This property is unavailable for instance hosts where the Provision Only check box is enabled.
Host Password: Specify the password for the instance host user. This property is unavailable for instance hosts where the Provision Only check box is enabled.
Username: Specify the user for SSH access; this user must be root. This property is unavailable for instance hosts where the Provision Only check box is enabled.
Info: Optionally, specify descriptive information. This property is unavailable for instance hosts where the Provision Only check box is enabled.
Provision: Select this check box when you want to provision an instance host. This check box is described in step 23.

Provision Only: Select this check box when you want to provision an instance host, but do not want it to be managed by the Services Controller, such as when the IP address is to be allocated by DHCP. The host is not added to the SSC database, but can be added once its IP address is known. When you select the Provision Only check box, the Provision check box is selected automatically and the Service Manager check box is cleared automatically.
Service Manager: Select this check box for service manager hosts. After you select this check box:
- specify Max Instances to define how many instances can operate on this host.
- specify CPU Cores to define the number of processors that are available to the host.

23. To provision this instance host, select the Provision check box. This check box is cleared and unavailable when the Provision Only check box is selected. Complete the configuration as described in this table.

Provision: Specify this option to provision instance hosts. Specify these settings:
- Flavor - Select the size of the instance host from the drop-down list. Large specifies eight CPUs and 16 GB of memory; Small specifies two CPUs and 4 GB of memory.
- ESXi URI - Specify the URI to the ESXi host where this instance host is to be deployed.
- ESXi User - Specify the ESXi host username used for the provisioned instance host.
- ESXi Password - Specify the ESXi password used for the provisioned instance host.
- ESXi Datastore - Specify the ESXi storage pool. For example, datastore1.
- Mgmt Interface Mapping - Specify the name of the interface for management traffic.
- LAN Interface Mapping - Specify the name of the interface for server-facing traffic.
- WAN Interface Mapping - Specify the name of the interface for client-facing traffic.
For example, you might have virtual networks named VM Network Mgmt, VM Network WAN, and VM Network LAN created in your ESXi environment.

24. Click Add to apply your settings.
25. To configure and create the required LBaaS/ELBaaS service, click Create LBaaS Service to display the Create LBaaS page. Then, create the service using the properties described in Creating an LBaaS Service on page 221.
26. Click Finish to close the Setup wizard.
27. To permanently save your settings, click the Save icon in the menu bar.

Creating an LBaaS Service

This section describes the properties that you must configure when you create an LBaaS service from either of the following pages:

- The Create LBaaS page of the LBaaS Wizard. This enables you to walk through all required steps. See Creating a Service with the LBaaS Wizard on page 215.
- The LBaaS Service page. To start this page, choose Manage > LBaaS Settings: LBaaS Service.

Figure: LBaaS Service Page

The process, which is described below, is the same for these two pages.

Configuring and creating an LBaaS Service

1. Click the plus sign (+) above the LBaaS Service table to display a pop-up window.
2. Complete the basic LBaaS service configuration as described in this table.

LBaaS Name: The name of the LBaaS service.
#Instances: The number of Traffic Manager instances that the Services Controller maintains in the automatically deployed service cluster when the service status is not Inactive, Deleting, or Deleted. Note: For an ELBaaS service, this value is set only by the Services Controller, and not by users.
#Worker Process: The number of Traffic Manager child processes that will be used by each service instance in this service to handle traffic.
Algorithm: The algorithm that decides where to distribute traffic. It is one of the following: fastest_response_time, least_connections, perceptive, random, round_robin. The default value is round_robin.
License Name: The name of the license resource that will be used across the service cluster.
STM Feature Pack: The name of the feature pack applied to the instances in the service cluster. You can provide a value for this property when creating a service or allow the Services Controller to auto-select a suitable resource for the template in use.
STM Version: The STM version used when deploying service instances. You can provide a value for this property when creating a service or allow the Services Controller to auto-select a suitable resource for the template in use.

Instance Bandwidth: The maximum bandwidth allowed for each instance in the service cluster. In the Enterprise licensing model, the number of instances that are deployed for a service depends on a combination of Instance Bandwidth, #Instances, and the STM feature pack in use.
Protocol: The protocol being balanced. It is one of the following values: http, https, tcp, ssl.
SSL Encrypt: Whether to enable SSL encryption for the back-end nodes for the service. The Protocol property must be set to http or tcp if this property is true.
Frontend Port: The port number on which the Services Controller listens for incoming connections on the front-end IP addresses.
Frontend IPs: A set of IP addresses to be raised by the LBaaS service, on which the service listens for incoming connections.
Backend Nodes: A set of IP address/port pairs (address:port) across which traffic should be balanced.

3. If your service is elastic, select the Elastic check box. Then, complete the configuration as described in this table.

Min Instances: The minimum number of instances that are maintained for the ELBaaS service. The service requires this field to be specified. The service never goes below this number of instances from a scaling point of view. For an ELBaaS service, this should be less than or equal to max_instances.
Max Instances: The maximum number of instances that the service will scale to. This has no default value for an ELBaaS service; the service requires this field to be specified. For an ELBaaS service, this should be greater than or equal to min_instances. Note that this is also the minimum number of front-end IP addresses.
Poll Interval: The frequency, in seconds, of CPU usage collection for service instances. The default is 60.
Refractory Period: The minimum amount of time, in seconds, between two scaling events. This setting is essentially the stabilization period after a change in instance cluster size. The default is 180.
Monitoring Cycles Before Scaling: The number of polling cycles for which usage must be above or below the thresholds to trigger a scale-up or scale-down event. The default is 5. This value ensures that the collected data points persist for a number of cycles before a scaling decision is taken.

Average CPU: When this check box is selected, the elastic service rescales according to average CPU measurements, using the Scale Up Threshold and Scale Down Threshold properties. See Understanding Scaling Thresholds on page 101. Note: This check box must always be selected for an ELBaaS service.
Scale Down Threshold: The percentage of average CPU usage below which a scale down will occur. This threshold must persist for monitoring_cycles_before_scaling for a scale down to occur. Choosing a value for this threshold is described in Understanding Scaling Thresholds on page 101. Note: As more services are added to or removed from a host, the scale_down_threshold percentage for all services on the host may need to be revisited.
Scale Up Threshold: The percentage of average CPU usage above which a scale up will occur. The default is 90, which is suitable for a single service running on an instance host. This threshold must persist for monitoring_cycles_before_scaling for a scale up to occur. Choosing a value for this threshold is described in Understanding Scaling Thresholds on page 101. Note: As more services are added to or removed from a host, the scale_up_threshold percentage for all services on the host may need to be revisited.

4. If your service requires health monitoring, select the Health Monitoring check box. Then, complete the configuration as described in this table.

Monitor Type: The scheme used (if any) to monitor back-end node health, either connect or http.
Monitor Path: The URI path to use in the HTTP monitor test.
Monitor Response Length: The maximum amount of data to read back from a server for a monitoring request.
Monitor Timeout: The period of time, in seconds, to wait for a response before marking a health probe as failed.
Monitor Interval: The period of time, in seconds, between two consecutive health checks.
Monitor Status Regex: The regex value that the HTTP monitor test response code should match against.
Monitor Failure Threshold: The number of monitoring failures before a back-end node is marked as failed.
Monitor Host Header: The value to use for the host header in the HTTP monitor test.
Monitor Use SSL: Whether or not the monitor should connect using SSL, either true or false.
Monitor Body Regex: The regex value that the HTTP monitor test response body should match against.
Monitor Auth: The value of the basic auth header to use in an HTTP request. This is a string in the following format: <username>:<password>

5. If your service requires session persistence, select the Session Persistence check box. Then, complete the configuration as described in this table.

Session Persistence Type: The type of session persistence used for the service. It is one of the following values: asp, cookie, ip, j2ee, named, ssl, transparent, universal, x_zeus.
Session Persistence Cookie: If the Session Persistence Type is set to cookie, the cookie name to use for persistence.
Session Persistence Failure Delete: Determines whether the connection is closed in the case of a session persistence failure. Select the check box to enable the closure.
Session Persistence Failure Mode: Determines what happens in the case of a persistence failure. It is one of the following values: close, new_node, url.
Session Persistence Redirect URL: The URL to redirect the connection to in the case of a persistence failure.

6. If your service requires SSL offloading, select the SSL Offload check box. Then, complete the configuration as described in this table.

Public Certificate: A PEM encoded public certificate.
Private Key: A PEM encoded private key.

7. Click Add to create your service. The service is added to the LBaaS service table in an Inactive state.
8. To permanently save your settings, click the Save icon in the menu bar.
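To make the relationships between these fields concrete, the sketch below collects a possible LBaaS service definition into one structure. This is illustrative only: the key names mirror the UI labels rather than any documented REST schema, the names and IP addresses are placeholders, and the final calculation simply estimates how long a sustained CPU threshold must persist before an elastic service rescales (poll interval multiplied by the number of monitoring cycles), based on the field descriptions above.

```python
# Illustrative LBaaS service definition; keys mirror the UI labels, values are placeholders.
lbaas_service = {
    "lbaas_name": "web-service",
    "instances": 2,                       # #Instances (standard LBaaS only)
    "worker_processes": 1,                # #Worker Process
    "algorithm": "round_robin",           # default load-balancing algorithm
    "protocol": "http",
    "frontend_port": 80,
    "frontend_ips": ["192.0.2.10", "192.0.2.11"],           # addresses raised by the service
    "backend_nodes": ["10.0.0.21:8080", "10.0.0.22:8080"],  # IP:port pairs to balance across
    # Elastic settings (used only when the Elastic check box is selected).
    "min_instances": 2,
    "max_instances": 6,
    "poll_interval": 60,                  # seconds between CPU usage samples
    "monitoring_cycles_before_scaling": 5,
    "scale_up_threshold": 90,             # percent average CPU
}

# Rough time a threshold breach must persist before a scaling event is triggered.
seconds_before_scaling = (
    lbaas_service["poll_interval"] * lbaas_service["monitoring_cycles_before_scaling"]
)
print(f"CPU must stay past the threshold for about {seconds_before_scaling} seconds")
```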

Changing the Status of an LBaaS Service

When you create an LBaaS service, it is in an Inactive state.

Changing the status of an LBaaS service

1. Choose Manage > LBaaS Settings: LBaaS Service to view the LBaaS Service page.

Figure: LBaaS Service Page

The current status of each service is shown in the Status column.

2. Click the required action button in the Action column. The status of the LBaaS service changes to Changing, and remains in this intermediate state until the process either fails or completes. Once the process ends, the Status column shows the current state.

Changing the Properties of an LBaaS Service

After you create an LBaaS service, you can change its properties.

Changing the properties of an LBaaS service

1. Choose Manage > LBaaS Settings: LBaaS Service to view all services in the LBaaS Service page.
2. Click the right arrow next to the service you want to change. The table entry expands to show the current properties for the service.
3. Make the changes that are required. Note that:
- You can change the cluster size for a standard LBaaS service by changing the defined value of #Instances. The effect of this change on the Services Controller is detailed in Changing the LBaaS Service Cluster Size on page 71.
- You can change the cluster size for an elastic LBaaS service by changing the defined value of max_instances. The effect of this change on the Services Controller is detailed in Changing the Potential ELBaaS Service Cluster Size on page 104.
- Some properties cannot be changed for all LBaaS services.
4. Click Apply to update the service with the specified settings. Note that the service cannot be in an intermediate state.
5. To permanently save your settings, click the Save icon in the menu bar.

Starting and Stopping an LBaaS Service

You can start and stop an LBaaS service from the LBaaS Service page. These processes depend on the service's current state. For more information about the service life cycle, see LBaaS Service Life Cycle on page 61.

To start an LBaaS service

1. Choose Manage > LBaaS Settings: LBaaS Service to view all services in the LBaaS Service page.
2. To start an Inactive LBaaS service, click the Active button in the Action column. The service switches to the transitional state Starting while it is deploying, clustering, and activating service instances, before finally switching to the state Active.

To stop an LBaaS service

1. Choose Manage > LBaaS Settings: LBaaS Service to view all services in the LBaaS Service page.
2. To stop an Active LBaaS service, click the Inactive button in the Action column. The service switches to the transitional state Stopping while it is deactivating and deleting service instances, before switching to the state Inactive.

Changing the Error State of an LBaaS Service Host

While an LBaaS service is running, errors can occur in its service hosts that leave the service in a nonworking state. You can fix errors in service hosts from the LBaaS Tasks page.

Changing the error state of an LBaaS service

1. Choose Manage > LBaaS Settings: LBaaS Tasks to view all failed service hosts in the LBaaS Tasks page. This page lists all failed service hosts. For example:

Figure: LBaaS Tasks Page: Failed Service Hosts

2. Click the right arrow next to a failed service host to see a description of the failure and a list of recovery tasks that you must perform manually to clear the error. For example:

Figure LBaaS Tasks Page: Failure Descriptions

3. Manually perform the recovery tasks listed in the Info field.

4. Click Complete to indicate that you have performed the recovery tasks. The Services Controller then attempts to put the recovered service host back into use.

Deleting an LBaaS Service

You can delete an Inactive LBaaS service from the LBaaS Service page.

To delete an LBaaS service

1. Choose Manage > LBaaS Settings: LBaaS Service to view all services in the LBaaS Service page.

2. Locate the required service and stop it. See Starting and Stopping an LBaaS Service on page 226.

3. Click the X next to the service's entry in the table and then click Delete. The service enters the Deleting state in the service cluster; once this task is complete, it enters the Deleted state.

Viewing Reports and Diagnostics

The Reports tab in the Administration UI displays customizable reports about your current Traffic Manager instances, bandwidth allocation, CPU utilization, and throughput. You can see how your resources are used so that you can adjust and reallocate them as needed.

You can view these reports:

Instance - The number of Traffic Manager instances by instance host or feature pack, and the current status of each: Active, Idle, or Failed. For details, see Instance Report on page 228.

Bandwidth Allocation - The current bandwidth allocation by SKU or feature pack. For details, see Bandwidth Allocation Report on page 230.

CPU Utilization - The current CPU utilization by Traffic Manager instance or instance host. For details, see CPU Utilization Report on page 232.

Throughput - The current throughput by Traffic Manager instance or instance host. For details, see Throughput Utilization Report on page 233.

Note: Historical reports are not available in this release.

Instance Report

The Instance report summarizes the status of all instances as a series of pie charts. The main page is a two-layer pie chart: by default, the inner layer is divided by feature pack and the outer layer is divided by instance status, so the report shows the number of instances in each status for each feature pack. You can drill down into an individual feature pack to display another pie chart that reports on that feature pack only. You can also choose to divide the inner layer of the main-page pie chart by instance host instead, and drill down into each instance host in the same way.

The Instance report displays the current status of instances in a color-coded format:

Active (green) - An instance that is currently running.
Failed (red) - An instance that has failed to start.
Idle (blue) - An instance that has been deployed but is not currently running.

About Instance Report Pie Charts

Hover over a specific area of the pie chart to view the feature pack or instance name (depending on the option chosen) and the number of instances.

239 Viewing Reports and Diagnostics Configuring the Stingray Services Controller Virtual Appliance What This Report Tells You The Instance report answers these questions: What is the current status of my instance hosts? What is the current status of a particular instance host? What is the current status of my feature packs? What is the current status of a particular feature pack? To view the Instances report 1. Choose Reports > Instances to display the Instances report page. Figure Instances Report 2. Use the controls to change the report display as described in this table. Control Options Description Select a report type: Instance host. Then, select a specific instance host for the report, or select All. Feature pack. Then, select a specific feature pack for the report, or select All. When you select All, you can double-click an instance or feature pack in the pie chart to view details for the selected instance or feature pack. 3. For instance hosts, hover over a specific area of the pie chart to view the instance hostname and number of instances. 4. For feature packs, hover over a specific area of the pie chart to view the feature pack name and number of instances. Stingray Services Controller User s Guide 229

Bandwidth Allocation Report

The Bandwidth Allocation report displays the bandwidth allocated to your Traffic Manager instances by SKU or feature pack. A SKU defines a set of licensable features; a feature pack is created from a SKU and can include either the whole set of features or a subset of them. When you create an instance, you specify which feature pack you want to use; you do not specify the SKU.

The Bandwidth Allocation report is a series of pie charts. The main page is a two-layer pie chart: the inner layer is divided by license-tied SKUs, and the outer layer shows the bandwidth allocated to each instance and the total available bandwidth of each SKU.

Note: You cannot specify how much bandwidth you want to reserve for a given feature pack.

You can use the Bandwidth Allocation report to evaluate whether you need to reallocate bandwidth or purchase additional bandwidth licenses.

About Bandwidth Allocation Report Pie Charts

Hover over a specific area of the pie chart to view the allocated and unallocated bandwidth for a Stingray Traffic Manager (STM) SKU or instance.

What This Report Tells You

The Bandwidth Allocation report answers these questions:

How much bandwidth is allocated to a SKU or instance?
How much bandwidth is unallocated for a SKU or instance?

241 Viewing Reports and Diagnostics Configuring the Stingray Services Controller Virtual Appliance To display the Bandwidth Allocation report 1. Choose Reports > Bandwidth Allocation to display the Bandwidth Allocation report page. Figure Bandwidth Allocation Report 2. To drill down to a particular SKU, double-click the area you want to view to display a three-layer pie chart: The inner layer displays the particular SKU. The middle layer is divided by feature pack created for that SKU. The outer layer represents the bandwidth allocated for each of instance. Stingray Services Controller User s Guide 231

Figure Detailed Bandwidth Allocation Report

3. To view the bandwidth for a license or Traffic Manager instance, hover your mouse pointer over the license or instance in the chart.

4. To return to the main chart from a detailed chart, click Reset.

CPU Utilization Report

The CPU Utilization report displays the real-time CPU usage, as a percentage over time, of each instance in an active state, and the aggregated CPU usage of all active instances on each host.

The CPU Utilization report is a streaming line chart. The hosts and active instances are listed at the bottom of the chart, and you can choose which host or instance CPU usage is displayed. If you have many active instances, you can use the Filter box to filter the report by instance name; Riverbed recommends that you filter on the regex name.

What This Report Tells You

The CPU Utilization report answers these questions:

How much of the CPU is being used?
What is the average and peak percentage of CPU being used?

243 Viewing Reports and Diagnostics Configuring the Stingray Services Controller Virtual Appliance To display the CPU Utilization report 1. Choose Reports > CPU Utilization to display the CPU Utilization report page. Figure CPU Utilization Report 2. To view a graph of data points over time, keep the page open. Data points are graphed every ten seconds. 3. To toggle on and off the graph for an instance host, click the instance hostname at the bottom of the page. 4. To see the CPU utilization for a particular instance, enter the Traffic Manager instance name and click Filter. Typically, this is the regex name. 5. To clear the data, refresh the page. Throughput Utilization Report The Throughput Utilization report displays the real-time throughput utilization of each instance in an active state and aggregated throughput utilization on each host. The Throughput Utilization report is a streaming line chart. The real-time throughput per second and peak throughput in last hour is displayed in the chart. The displayed throughput includes both incoming and outgoing throughput. Review the Throughput Utilization report to find out which instances use the most throughput, and then compare the results to the results you expected. For example, you might expect a lot of throughput for an instance that hosts a popular site. However, if an instance is using more throughput than expected, you can try to discover why so that you can make the appropriate adjustments. You can also use the Throughput report to monitor how close you are to reaching your license limitations, so that you can evaluate whether or not you should purchase additional licenses. Stingray Services Controller User s Guide 233

244 Configuring the Stingray Services Controller Virtual Appliance Viewing Logs and Generating System Dumps What This Report Tells You The Throughput Utilization report answers these questions: What was the average throughput? What was the peak throughput? About Report Graphs Use the mouse to hover over a specific data point to see what the y values and exact time stamp were in relation to peaks. To display the Throughput Utilization report 1. Choose Reports > Throughput Utilization to display the Throughput Utilization report. Figure Throughput Report 2. To see the throughput for a particular instance, enter the Traffic Manager instance name and click Filter. Viewing Logs and Generating System Dumps You can view system logs and generate system dumps from the Diagnostics tab. Viewing Logs You can view current logs for the Services Controller in the SSC Logs page. 234 Stingray Services Controller User s Guide

245 Viewing Logs and Generating System Dumps Configuring the Stingray Services Controller Virtual Appliance To view current Services Controller Logs 1. Choose Diagnostics > SSC Logs to display the SSC Logs page. Figure SSC Logs Page 2. Click first, prev, next, or last to navigate through the log pages. Alternatively, type a number in the Page text box and click Go to navigate to a specific page. Generating System Dumps You can generate system dumps for the Services Controller in the System Dumps page. You can tailor the contents of the system dumps to include statistics if required. To generate system dumps 1. Choose Diagnostics > System Dumps to display the System Dumps page. Figure System Dumps Page 2. Complete the configuration according to this table. Control All Logs Include Statistics Description Select to generate all current system logs. Select to include all statistics in system dump files. Stingray Services Controller User s Guide 235

246 Configuring the Stingray Services Controller Virtual Appliance Generating Metering Logs 3. Click Generate to create the system dump. Generating Metering Logs You can generate metering logs from the Diagnostics > Metering Log page. The files are created as.zip files and listed in a table. A maximum of ten metering logs can be generated by this process. You can download the files directly from the table. To generate metering logs 1. Choose Diagnostics > Metering Log to display the Metering Log page. Figure Metering Logs Page 2. Configure the log check boxes as required. The action of these check boxes is related to the most recent log generation activity. Select the Past Metering Logs check box to regenerate all logs up to the most recent log generation. Select the New Metering Logs check box to generate all logs since the most recent log generation. 3. Click Generate to create the metering logs. The generated metering log(s) are added to the table of metering log zip files: Where a past log is still present in the table, it is not regenerated. Where no new logs are present, no new logs are generated. Note: To generate metering logs using the Services Controller VA CLI, see Generating Metering Logs on page 250. To download metering logs 1. Choose Diagnostics > Metering Log to display the Metering Log page. 2. Click the name of the required.zip file in the metering log table. 3. Select a save location and click Save. 236 Stingray Services Controller User s Guide

Getting Help

You can view the following support information from the Support menu:

Technical Support - View links and contact information for Riverbed Support.
Documentation - View the link to the Services Controller documentation.
Appliance Details - View the model, serial number, and software version number.

To view support information

Choose Support to display the Support page.

Figure Support Page

248 Configuring the Stingray Services Controller Virtual Appliance Using the CLI to Manage the Services Controller Using the CLI to Manage the Services Controller After you have created the virtual appliance in vsphere, you can administer and manage the Services Controller using the CLI or GUI. These instructions describe how to perform these tasks using the CLI only. To log in to the CLI To perform configuration tasks in the CLI, you must log in as the administrator user and enter configuration mode. For example: login as: admin Riverbed Stingray Services Controller password:<enter the password you specified in the Setup wizard> amnesiac > enable amnesiac # configure terminal amnesiac (config) # Note: The CLI accepts abbreviations for commands. For example, configure t to run the command configure terminal. You can press the tab key to complete a CLI command automatically. Importing the SSL Certificate, Key, and Licenses You must import the following files into the Services Controller before you can create instances: SSL certificate and key Services Controller license Bandwidth license key FLA license Traffic Manager image Note: If you have not received your license files, contact Riverbed Licensing ([email protected]) for assistance. To import the SSL certificate and key, Services Controller license, bandwidth, and FLA license 1. Open the Services Controller in a Telnet or SSH client program such as PuTTy. 2. Log in and enter configuration mode. Log in as the administrator user using the password that you assigned in the Setup wizard: login as: admin Riverbed Stingray Services Controller Last login: Sun Nov 3 19:32: from amnesiac > enable amnesiac # configure terminal amnesiac (config) # 238 Stingray Services Controller User s Guide

Note: All Services Controller CLI command examples in this guide use the prompt amnesiac as the example machine.

3. To import an SSL certificate and key, provide the file path to the certificate and key file as an http, ftp, or scp URL (scp://username:password@host/path). For example:

amnesiac (config) # ssc import-cert-key scp://username:password@host/ssc_archive/cert_key.pem
Certificate and Private Key imported successfully
amnesiac (config) # show ssc certificate
>>certificate and key are displayed

4. To import a Services Controller license, provide the file path to the license file as an http, ftp, or scp URL (scp://username:password@host/path). For example:

amnesiac (config) # ssc import-lic file scp://username:password@host/ssc_archive/taranis-license
License imported successfully
amnesiac (config) # show ssc license-file
XXX-XXXXXXX-XXXX-X-XXXX-XXXX-XXXX

5. Import the enterprise bandwidth license key into the Services Controller. To import the enterprise bandwidth license, provide the license key you obtained from your account representative.

amnesiac (config) # ssc license enterprise throughput add XXX-XXXXXX-XXXX-X-XXXX-XXXX-XXXX

6. Install the FLA license and create a license resource for it inside the Services Controller (for example, fla-ssl-ssc). To import an FLA license, provide the file path as an http, ftp, or scp URL (scp://username:password@host/path).

amnesiac (config) # ssc stm import-lic file scp://username:password@host/ssc_archive/fla-ssl-ssc
License imported successfully
amnesiac (config) # show ssc stm license-file
>>the license file is displayed
amnesiac (config) # ssc license create license-name fla-ssl-ssc
Field      Value
info       Active
status     Active

7. You can confirm that the Services Controller process is running:

amnesiac (config) # show ssc service
SSC service status: running

8. Import the Traffic Manager image (that is, the tarball) and create a version resource for the software image (for example, stm96). To import an image, provide the file path as an http, ftp, or scp URL (scp://username:password@host/path).

amnesiac (config) # ssc stm import-image file scp://username:password@host/ssc_archive/ZeusTM_97_Linux-x86_64.tgz
amnesiac (config) # show ssc stm images
Imported Stingray Traffic Manager Images
ZeusTM_97_Linux-x86_64.tgz
amnesiac (config) # ssc version create version-name stm96 vfilename ZeusTM_97_Linux-x86_64.tgz vdirectory ZeusTM_97_Linux-x86_64.tgz

Field                Value
info                 Active
status               Active
version_filename     ZeusTM_97_Linux-x86_64.tgz
version_directory    None

Note: To delete an image, run the command: no ssc stm image-file <image-name>.

Syntax: ssc version create version-name <unique-name> vfilename <version-tarball-file>

Parameter: version-name <unique-name>
Description: Specify a unique name for the version resource.

Parameter: vfilename <version-tarball-file>
Description: Specify the name of the Traffic Manager image.

Importing an Instance Host OVA into the Services Controller

You import the Instance Host OVA and create an SSH key for the administrator user so that you can create a virtual instance host automatically on ESXi.

Import the Instance Host OVA into the Services Controller:

amnesiac (config) # ssc import-host-ova file scp://username:password@host/ssc/rvbd-ssc-host.ova
Host OVA imported successfully.
amnesiac (config) # show ssc host-ova
Imported Host OVA
Rvbd-host.ova

Enabling Passwordless SSH Communication

You must create a public SSH key to enable passwordless communication between the Services Controller and instance hosts. This SSH key is used for all deployed instance hosts.

1. Create an SSH public key for the administrator user to enable passwordless communication with the instance host.

amnesiac (config) # show ssh client private
No user identities configured.
SSH authorized keys:
amnesiac (config) # ssh client generate identity user admin
amnesiac (config) # show ssh client private
User Identities:
User admin:
>>ssh public and private keys are displayed

2. Retrieve the IP address of the SSC Instance Host VA and log in via SSH as the sscadmin user.

3. Inject the SSH public key of the admin user of the SSC VA into the instance host VA:

$ user root sshkey "ssh-rsa <public_key> admin"
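To confirm that passwordless communication is configured correctly, you can open an SSH session to the instance host from the Services Controller; if the key injection succeeded, no password prompt appears. This is a minimal check using the example instance host name test-demo used elsewhere in this chapter; substitute your own instance host name.

amnesiac (config) # slogin root@test-demo
>>logs in to the instance host without prompting for a password<<
root@test-demo:~# logout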

251 Using the CLI to Manage the Services Controller Configuring the Stingray Services Controller Virtual Appliance Creating a Feature Pack for Instances in the Services Controller You must create a feature pack in the Services Controller before you can create instances. A feature pack describes a set of licensable features that you can apply to a Traffic Manager instance. A feature pack is the same as a SKU or a subset of features in a SKU. When you deploy or modify a Traffic Manager instance, the feature pack controls what licensable features are allowed (but does not specify bandwidth limits). Creating a feature pack in the Services Controller requires you to base the pack on a SKU and to give it a unique name. amnesiac (config) # ssc feature-pack create fpname default-fp stm-sku STM Field Value info Active status Active stm_sku STM-400 excluded None Syntax: ssc feature-pack create fpname <resource-unique-name> stm-sku <SKU-for-feature-pack> Provisioning an Instance Host After you have created a default feature pack, you can provision the instance host using the CLI. When you run the ssc host provision command, it untars the OVA, repackages the SSH public key created earlier, and provisions it on ESXi. After the instance host is created on ESXi, the public key is assigned to the user that is defined in the ssc host provision command (in this example, root). After this command is run, you can connect to the instance host without a password using SSH. Note: Although the Services Controller VA runs on vsphere ESXi 5.0+, the ssc host provision command works only on ESXi 5.0 and 5.1, not ESXi To provision an instance host, specify a name and size (large or small), and provide the ESXi username, password, location, and datastore location. You also specify the instance host username. amnesiac (config) # ssc host provision name test-demo flavor small esx-uri esx://sf.test.com esx-user admin esx-pass riverbed storage datastore1 host-user root Syntax: ssc host provision name <unique-name> flavor [small large] esx-uri <URI-or-path-to-esxihost> esx-user <esxi-username> esx-pass <esxi-password> storage <esxi-datastore-name> host-user <host-username> <cr> [host-info <instance-host-info>] Parameter name <unique-name> flavor [small large] esx-uri <URI-or-path-to-esxi-host> Description Specify the hostname of this instance host. This is required for all instance hosts. Note: For non-provisioned hosts, this must be the hostname of the existing host. Specify the size of the instance host: small - Specifies two CPUs and 4 GB of memory large - Specifies 8 CPUs and 16 GB of memory Specify the URI to the ESXi host where this instance host is to be deployed. Stingray Services Controller User s Guide 241

252 Configuring the Stingray Services Controller Virtual Appliance Using the CLI to Manage the Services Controller Parameter esx-user <esxi-username> esx-pass <esxi-password> storage <esxi-datastore-name> host-user <host-username> host-info <instance-host-info> Description Specify the ESXi host username used for the provisioned instance host. Specify the ESXi password used for the provisioned instance host. Specify the ESXi storage pool. For example, datastore1. Specify the name of the user that manages the instance host. Optionally, specify descriptive information for the new instance host. 2. To test whether the instance host has been provisioned in vsphere, open vsphere, right-click the host, and select Open Console to display the new instance host command line in a console window. 3. Log in to the new instance host. By default, the username is sscadmin and the password is password. $amnesiac login: sscadmin Password: password $show host interface mgmtbr0 >>interface details are displayed with the new IP address << 4. On the Services Controller, ping the new instance host. amnesiac (config) # ping test-demo 64 bytes from test0demo.example.com ( ): icmp_seq=4 ttl-64 time=0.761 ms 64 bytes from test0demo.example.com ( ): icmp_seq=4 ttl-64 time=0.761 ms 5. If the ping is successful, you can log in to the new instance host as the sscadmin user and change its management, LAN and WAN IP addresses. amnesiac (config) # slogin sscadmin@test-demo >>logs in to new host<< root@test-demo:~# show host info root@test-demo:~# host hostname <new_hostname> domain <domain_name> >>use this command if you need to change the name of the host root@test-demo:~# host interface mgmtbr0 ip <ip-addr> netmask <netmask> gateway <ip-gateway> root@test-demo:~# host interface wanbr0 ip <ip-addr> netmask <netmask> root@test-demo:~# host interface lanbr0 ip <ip-addr> netmask <netmask> root@test-demo:~#logout After the instance host has been provisioned in vsphere, you must create the instance host entry for managed instances on the Services Controller by assigning a name, and user. To create the instance host on the Services Controller using the CLI At the prompt, enter: amnesiac (config) # ssc host create host-name test-demo host-user sscadmin host-pass password username root Field Value info Active install_root /root/install status Active username root status_check None Stingray Services Controller User s Guide

253 Using the CLI to Manage the Services Controller Configuring the Stingray Services Controller Virtual Appliance amnesiac (config) # ssc host list Host test-demo # Syntax: ssc host create host-name <hostname> host-user <host-username> host-pass <hostpassword> username <host-username> host-info <host-description> Parameter name <unique-name> host-user <host-username> host-pass <host-password> username <host-username> host-info <host-description> Description Specify a unique name for the instance host. Specify the instance host username. The default value is sscadmin. Specify the login password of the instance host user. Specify the user for SSH access; this user must be root. Optionally, specify descriptive information about the new instance host. Creating a Traffic Manager Instance Without a Container After you have created an instance host, you can create a Traffic Manager instance using the CLI. This example creates an instance without a container. If you create Traffic Manager instances without using containers, you must ensure a degree of network isolation. amnesiac (config) # ssc instance create instance-name stm-i1 license-name fla-ssl-ssc bandwidth 1200 cpu-usage 0 owner tim stm-fpname default-fp stm-version stm96 host-name test-demo mgmtaddress test-demo >>host information is displayed<< Field Value amnesiac (config) # ssc instance start instance-name stm-i1 amnesiac (config) # show ssc instance stm-il >>instance information is displayed with status changed from Idle to Starting to Active<< Syntax: ssc instance create instance-name <name> license-name <name> bandwidth <bandwidth> cpu-usage <cpu> owner <instance owner> stm-fpname feature-pack name> stm-version stm version name> host-name <host> mgmt-address <container name> config-options <string> cluster-id <name> Parameter instance-name <name> license-name <name> bandwidth <bandwidth> Description Specify a unique name for the instance. The name of the license resource you want to use for this instance. When you modify this property, the Services Controller updates the license on the Traffic Manager instance. Specify the bandwidth allowed for this instance (in Mbps). Stingray Services Controller User s Guide 243

254 Configuring the Stingray Services Controller Virtual Appliance Using the CLI to Manage the Services Controller Parameter cpu-usage <cpu> owner <name> stm-fpname <feature-pack name> stm-version <stm version name> host-name <host> mgmt-address <host or ip-addr> config-options <string> Description A string that describes which CPUs are used for this Traffic Manager instance. If used, you must either: specify a value in a form that is used by the taskset command. For example, "0,3,5-7". set this property to an empty string. This indicates that the host is not limited in its use of CPU cores (unless it is deployed within an LXC container). This is the default setting for the property if you do not specify a string. Note: Any change to the cpu_usage settings will cause a restart of the instance. Specify a string that describes an owner of the instance. Specify the Traffic Manager feature-pack name associated with the instance. The name of the Traffic Manager version resource for the instance. If you modify this property, the Services Controller upgrades the Traffic Manager instance to the new version. You can change this property only if the instance status is Idle. Specify the name of the Traffic Manager instance host on which the instance is running. Specify the host name or IP address to reach the instance. A string containing configuration options for optional features. If specified, this is a space-delimited combination of one or more the following: default - This option has no effect and is used to avoid an empty string. If this is option is used, no other options can be specified in the config-options. admin_ui=yes/no - Start or bypass the Administration UI for the Traffic Manager instance (default: yes). You must set this to yes if you use the cluster_id property. maxfds=<number> - The maximum number of file descriptors (default: 4096). This setting must be consistent between all instances in a cluster. (See Notes, below). webcache!size=<number> - The size of RAM for the web cache (default: 0). This value can be specified in %, MB, GB by appending the corresponding unit symbol to the end of the value when not specifying a value in bytes. For example, 100%, 256MB, 1GB, and so on. This setting must be consistent between all instances in a cluster. (See Notes, below). java!enabled=yes/no - Start or bypass the Java server (default: no). This setting must be consistent between all instances in a cluster. (See Notes, below). statd!rsync_enabled=yes/no - Synchronize historical activity data within a cluster. If this data is unwanted, disable this setting to save CPU and bandwidth (default: yes). This setting must be consistent between all instances in a cluster. (See Notes, below). snmp!community - The SNMP v2 community setting for this instance resource. For metering of nonmanaged instances, this must be set to the same value as the equivalent snmp!community property on the instance itself (default: "public"). num_children=<number> - The number of child processes (default: 1). start_flipper=yes/no - Start or bypass the flipper process (default: yes). You must set this to yes if you use the cluster_id property. port_offset=<number> - Offset control port values by a fixed amount. When network isolation is not provided by a container configuration, each Traffic Manager instance on a particular host should be allocated a unique port offset to avoid port clashes. The offset range is from 0 and 499. (continued overleaf) 244 Stingray Services Controller User s Guide

255 Using the CLI to Manage the Services Controller Configuring the Stingray Services Controller Virtual Appliance Parameter config-options <string> (continued) cluster-id <ID or name> Description (continued) afm_deciders=<number> - The number of application firewall decider processes. If 0 is specified, the application firewall is not installed (default: 0). Note: You cannot update this option after the instance has been deployed. flipper!frontend_check_addrs=<host> - Check instance front-end connectivity with a specific host. When the Services Controller deploys an instance, it checks connectivity to the default gateway of the instance host by sending ICMP requests to it. If the default gateway is protected by a firewall or blocks ICMP requests, instance deployment can fail. To disable deployment connectivity checks, use flipper!frontend_check_addrs="". This setting must be consistent between all instances in a cluster. (See Notes, below). flipper!monitor_interval=<number> - The interval, in milliseconds, between flipper monitoring actions. (default: 500 ms). For higher density Traffic Manager instance deployments, use a larger value such as 2000ms. This setting must be consistent between all instances in a cluster. (See Notes, below). Note: Any change to the config_options settings will cause a restart of the instance. Note: Some configuration options, if specified here, must be consistent between all Traffic Manager instances in a cluster: maxfds webcache!size java!enabled statd!rsync_enabled flipper!monitor_interval flipper!frontend_check_addrs. If you set or update the value in one instance resource, the Services Controller replicates this update automatically to the other instance resources. The instance will restart whenever these are changed, but other instances in the cluster must be restarted manually. Note: Whenever the config_options property is set, all currently modified options must be specified again in the REST call. Any options that are not specified will lose their current value and be reset to their default value. Optionally, specify the name of a cluster resource to which the instance belongs. You can now access this Traffic Manager instance in a Web browser. Make sure that you put the UI port in the browser address. Configuring a Traffic Manager Instance with a Container After you have created an instance host, you can create a Traffic Manager instance using the CLI. This example creates an instance with a container. Note: Before you create a container using the CLI, make sure that you have a DNS entry for the hostname of the container, and that it resolves correctly. Stingray Services Controller User s Guide 245

256 Configuring the Stingray Services Controller Virtual Appliance Using the CLI to Manage the Services Controller amnesiac (config) # ssc host host-name test-demo user sscadmin password password Created host session for (test-demo) with user (sscadmin) amnesiac (config) # ssc instance create instance-name stm-cont-i1 license name fla-ssl-ssc bandwidth 1200 cpu-usage 0 owner tim stm-fpname default-fp stm-version stm96 host-name testdemo mgmt-address 10.l container-name stm-cont-i1 container-cfg "{'gateway': ' ', 'mgmtip':' /24', 'wanip':' /24', 'lanip':' /24', 'dataplanegw':' ', 'flavor':'small'}" >>host information is displayed<< Field Value amnesiac (config) # ssc instance start instance-name stm-i1 amnesiac (config) # show ssc instance stm-il >>instance information is displayed with status changed from Idle to Starting to Active<< Syntax: ssc instance create instance-name <name> license-name <name> bandwidth <bandwidth> cpu-usage <cpu> owner <instance owner> stm-fpname feature-pack name> stm-version stm version name> host-name <host> mgmt-address <container name> <cr> admin-password <password> admin-username <username> config-options <options> cluster-id <name> container-name <name> container-cfg <config data> deploy <none> managed <none> restaddress <uri and port> snmp-address <ip address> Parameter instance-name <name> license-name <name> bandwidth <bandwidth> cpu-usage <cpu> owner <name> stm-fpname <feature-pack name> stm-version <stm version name> host-name <host> mgmt-address <host or ip-addr> Description Specify a unique name for the instance. The name of the license resource you want to use for this instance. When you modify this property, the Services Controller updates the license on the Traffic Manager instance. Specify the bandwidth allowed for this instance (in Mbps). A string that describes which CPUs are used for this Traffic Manager instance. If used, you must either: specify a value in a form that is used by the taskset command. For example, "0,3,5-7". set this property to an empty string. This indicates that the host is not limited in its use of CPU cores (unless it is deployed within an LXC container). This is the default setting for the property if you do not specify a string. Note: Any change to the cpu_usage settings will cause a restart of the instance. Specify a string that describes an owner of the instance. Specify the Traffic Manager feature-pack name associated with the instance. The name of the Traffic Manager version resource for the instance. If you modify this property, the Services Controller upgrades the Traffic Manager instance to the new version. You can change this property only if the instance status is Idle. Specify the name of the Traffic Manager instance host on which the instance is running. Specify the host name or IP address to reach the instance. 246 Stingray Services Controller User s Guide

257 Using the CLI to Manage the Services Controller Configuring the Stingray Services Controller Virtual Appliance Parameter config-options <string> Description A string containing configuration options for optional features. If specified, this is a space-delimited combination of one or more the following: default - This option has no effect and is used to avoid an empty string. If this is option is used, no other options can be specified in the config_options. admin_ui=yes/no - Start or bypass the Administration UI for the Traffic Manager instance (default: yes). You must set this to yes if you use the cluster_id property. maxfds=<number> - The maximum number of file descriptors (default: 4096). This setting must be consistent between all instances in a cluster. (See Notes, below). webcache!size=<number> - The size of RAM for the web cache (default: 0). This value can be specified in %, MB, GB by appending the corresponding unit symbol to the end of the value when not specifying a value in bytes. For example, 100%, 256MB, 1GB, and so on. This setting must be consistent between all instances in a cluster. (See Notes, below). java!enabled=yes/no - Start or bypass the Java server (default: no). This setting must be consistent between all instances in a cluster. (See Notes, below). statd!rsync_enabled=yes/no - Synchronize historical activity data within a cluster. If this data is unwanted, disable this setting to save CPU and bandwidth (default: yes). This setting must be consistent between all instances in a cluster. (See Notes, below). snmp!community - The SNMP v2 community setting for this instance resource. For metering of nonmanaged instances, this must be set to the same value as the equivalent snmp!community property on the instance itself (default: "public"). num_children=<number> - The number of child processes (default: 1). start_flipper=yes/no - Start or bypass the flipper process (default: yes). You must set this to yes if you use the cluster_id property. afm_deciders=<number> - The number of application firewall decider processes. If 0 is specified, the application firewall is not installed (default: 0). Note: You cannot update this option after the instance has been deployed. flipper!frontend_check_addrs=<host> - Check instance front-end connectivity with a specific host. When the Services Controller deploys an instance, it checks connectivity to the default gateway of the instance host by sending ICMP requests to it. If the default gateway is protected by a firewall or blocks ICMP requests, instance deployment can fail. To disable deployment connectivity checks, use flipper!frontend_check_addrs="". This setting must be consistent between all instances in a cluster. (See Notes, below). flipper!monitor_interval=<number> - The interval, in milliseconds, between flipper monitoring actions. (default: 500 ms). For higher density Traffic Manager instance deployments, use a larger value such as 2000ms. This setting must be consistent between all instances in a cluster. (See Notes, below). Note: Any change to the config-options settings will cause a restart of the instance. Note: Some configuration options, if specified here, must be consistent between all Traffic Manager instances in a cluster: maxfds webcache!size java!enabled statd!rsync_enabled flipper!monitor_interval flipper!frontend_check_addrs. If you set or update the value in one instance resource, the Services Controller replicates this update automatically to the other instance resources. 
The instance will restart whenever these are changed, but you must restart other instances in the cluster manually. Stingray Services Controller User s Guide 247

258 Configuring the Stingray Services Controller Virtual Appliance Using the CLI to Manage the Services Controller Parameter cluster-id <ID or name> container-name <name> container-cfg <config data> deploy <none> managed <none> active [Active Inactive] Description Optionally, specify the name of a cluster resource to which the instance belongs. Specify the name of the LXC container in which the instance is running. If this is an empty string or none, the Traffic Manager is not run inside a container. Optionally, specify configuration data for the instance container. The string populates the container configuration file with the gateway IP, the management IP, the WAN IP, the LAN IP, the data plane gateway IP, and the flavor (also called size). Use the following format: "{'gateway': ' ', 'mgmtip':' /24', 'wanip':' /24', 'lanip':' /24', 'dataplanegw':' ', 'flavor':'small'}" Possible flavor values are: small= 256, 1 CPU, medium= CPU, or large= CPU. The default value is true. Specify none to apply changes to the database but not cause deployment changes. This setting supports testing and database reconciliation. No actions are carried out and no intermediate status is set. If a new instance resource is set to none then the status is set to Idle upon creation. Specify yes. Specify the status: Active - Activates a resource. Inactive - Deactivates a resource. A resource cannot be marked as Inactive if it is in use. The resource cannot be reactivated Inactive status has been specified. You can now access this Traffic Manager instance in a Web browser. Make sure that you put the UI port in the browser address. The Services Controller VA automatically creates a container configuration file for the container. You must first create a host session by assigning a username and password for the instance host. If you log in to the instance host and list the files in the installation directory (/root/install), you see the new container file listed, along with a container configuration file for it. amnesiac (config) # slogin root@test-demo >>machine and last log in info listed root@test-demo:~# ls /root/install stm-cont-i1 stm-cont-i1.conf stm-i1 root@test-demo:~# less /root/install/stm-cont-il.conf root@test-demo:~# logout Connection to test-demo closed. Start the instance with the container in the Services Controller. amnesiac (config) # ssc instance start instance-name stm-cont-i1 >>instance settings displayed with status = Starting<< amnesiac (config) # show ssc instance instance-name stm-cont-i1 >>instance settings displayed with status = Active<< Working with LBaaS Services The Services Controller supports the creation and configuration of Load Balancing as a Service (LBaaS) services. When an LBaas service is created and activated, the Services Controller will deploy and maintain a configured service cluster with a given number of instances. The Services Controller service manager relies on the monitoring feature of the Services Controller to periodically check the health of each individual service instance to make sure it is still accessible. If a service instance remains inaccessible for a period of time longer than the number of seconds specified in the instance_health_fail_trigger service property, then a failover is triggered for the service. 248 Stingray Services Controller User s Guide

259 Using the CLI to Manage the Services Controller Configuring the Stingray Services Controller Virtual Appliance Prerequisites for Using LBaaS Services Before creating an LBaaS service, you need to do some initial configuration of resources: Feature Pack - Feature packs used for LBaaS services must support LBaaS, otherwise they will not be accepted by service validation. FLA License - The FLA license used by service instances is the same as those used by non-service instances. STM Version - The Services Controller validates any Traffic Manager version that you select for an LBaaS service to ensure it is an acceptable version. Instance Host - An Instance host that will be used for Services is called a Service Host. This Host cannot be used for non-service instances. On deploying the service host, you must set the following parameters: usage_info - Specify servicemanager. Indicates the host is marked for service instances only. size - The number of LBaaS instances that can run on this host. The Services Controller will not deploy more instances beyond this value. With LBaaS, size is a required property for a host resource. cpu_cores - Specify the set of cores available on the host. The Services Controller uses this property to balance CPU affinity of Traffic Manager instance processes. If you do not set this property, the Linux kernel determines process scheduling that results in reduced achievable densities. Service Host CLI Commands The CLI commands to view, create and update hosts and templates are as follows: show ssc host host-name ssc host create host-name ssc host create template-name ssc host update host-name ssc host update template-name These commands are described in detail in the Stingray Services Controller Command-Line Interface Reference Manual. LBaaS CLI Commands The CLI commands to view, create and update LBaaS services are as follows: show ssc service lbaas lbaas-name show ssc service lbaas task show ssc service lbaas tasks ssc service lbaas create ssc service lbaas task ssc service lbaas update These commands are described in detail in the Stingray Services Controller Command-Line Interface Reference Manual. Stingray Services Controller User s Guide 249
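As a quick starting point, the show commands listed above let you review an existing LBaaS service and any outstanding recovery tasks before making changes. The service name my-lbaas below is a placeholder, and the exact output and full parameter lists are documented in the Stingray Services Controller Command-Line Interface Reference Manual.

amnesiac (config) # show ssc service lbaas lbaas-name my-lbaas
amnesiac (config) # show ssc service lbaas tasks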

260 Configuring the Stingray Services Controller Virtual Appliance Using the CLI to Manage the Services Controller ESXi vsphere Host Port Mapping You must map the three interfaces of the instance host VA to the corresponding host ports where the instance host VA is deployed. You map the ports and network using the vsphere client. Note: Promiscuous mode in the virtual switch must be enabled. For details about configuring promiscuous mode, see the VMware documentation. You must first verify the mapping of the network adapters to their names given by the kernel. For example, network adapter 1 to eth0, network adapter 2 to eth1, and network adapter 3 to eth2. You map the ports by starting the instance host VA, checking the MAC address of each interface, and matching that to the MAC address given to each network adapter by the ESXi host. Then you must map each of these network adapters to the correct network on the ESXi host. Riverbed assumes that eth0 is always the management network, eth1 is the WAN network, and eth2 is the LAN network. With the instance host VA powered off, on the vsphere client you choose Settings on the instance host VA, select each network adapter, and select the host Network Connection > Network Label for the appropriate mapping. The following example represents port mapping on the ESXi vsphere client: The instance host VA has the following mapping when starting and matching eth0, eth1 and eth2 virtual network cards to the ones given by the ESXi host: network adapter 1 <-> eth0 (mgmtbr0) network adapter 2 <-> eth1 (wanbr0) network adapter 3 <-> eth2 (lanbr0) On the Settings panel of the instance host VA, select each network adapter and change the network label to the appropriate network on the ESXi host. This example assumes that the ESXi host has the following networks: mgmt, wan, and lan: network adapter 1 <-> 'Network label: mgmt' network adapter 2 <-> 'Network label: wan' network adapter 3 <-> 'Network label: lan' Exporting a database To export the MySQL inventory database from the CLI: ssc database local db-file upload <remote-url> Generating Metering Logs To extract metering logs using the Services Controller VA CLI, use the following command: amnesiac (config) # ssc log metering generate [backup [yes no]] In this example, the backup switch indicates whether to regenerate all logs up to the most recent log generation. Any new logs since the most recent log generation will always be included. A maximum of ten metering logs can be generated by this process. Note: To generate metering logs using the Services Controller Virtual Appliance, see Generating Metering Logs on page Stingray Services Controller User s Guide
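For example, assuming the backup switch behaves as described above, the following commands generate only the logs accumulated since the most recent generation, or additionally regenerate the earlier logs:

amnesiac (config) # ssc log metering generate backup no
amnesiac (config) # ssc log metering generate backup yes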

261 CHAPTER 10 Configuring for High Availability This chapter describes two possible approaches to the configuration of High Availability (HA) of the Services Controller. This chapter includes the following sections: High Availability Configuration Prerequisites on page 251 FLA-based Load Balancing and Mode Settings API on page 253 Making the SSC Database Highly Available on page 256 High Availability Configuration Prerequisites You can deploy, manage, and meter many instances of Stingray with a single copy of the Services Controller. You can run multiple copies of the Services Controller on different machines to provide high availability (HA) against one or more machines failing. To set up an HA system that deploys, manages, and meters many instances of the Traffic Manager, without causing unnecessary network activity, you must meet the criteria described in the following subsections. Shared Access to the Database Each copy of the Services Controller that manages the same group of instance hosts must have access to the same remote inventory database. Note: To fully protect against loss of your services, Riverbed recommends that you install the database on a separate machine from the Services Controllers. Riverbed also recommends that you back up the database regularly. Stingray Services Controller User s Guide 251
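The guide does not mandate a particular backup method. As one illustrative sketch only, a periodic dump of the shared inventory database taken against the machine that hosts it might look like the following; the database host, user, and inventory database name shown are placeholders rather than values defined by the Services Controller.

# Hypothetical example - substitute your own database host, user, and inventory database name.
mysqldump --single-transaction -h db.example.com -u ssc_user -p ssc_inventory > ssc_inventory_backup.sql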

High Availability Mode Settings

The Services Controller has settings to support high availability, named mode settings. You can examine and configure these settings using the REST API of any instance of the Services Controller. Mode settings are accessed through the following REST API resource path:

/api/tmcm/<version>/manager/<manager-host-name>

The table below describes the settings and recommended states.

Setting: management
Description: This setting has two states, enabled and disabled, and determines whether or not access is allowed to the REST API of the named Services Controller machine. If the management mode of every instance of the Services Controller is set to disabled, there is no way to re-enable access to the REST API through the REST API. Riverbed provides an external executable (called toggle_management) to resolve this situation; contact your support provider for assistance. The recommended state is enabled.

Setting: metering
Description: This setting has two states, none and all, and determines whether or not the named Services Controller machine performs metering on the deployed instances of Stingray. The recommended state is all for the Services Controller nominated to perform metering, and none for all other Services Controllers in the cluster. For details, see Usage Metering and Activity Metrics on page 11.

Setting: licensing
Description: This setting has two states, enabled and disabled, and determines whether the named Services Controller machine responds with a license authorization response to requests from instances of Traffic Manager. The recommended state is enabled.

Setting: monitoring
Description: This setting has three states: all, shared, and none. It determines whether the named Services Controller monitors other Services Controllers and all Traffic Manager instances in the deployment. Using shared means that while this Services Controller still monitors all other Services Controllers, the full set of Traffic Manager instances is shared equally between this and the other Services Controllers in the same monitoring mode. The recommended state is shared.

These mode settings ensure that:

You can use any Services Controller to deploy and manage instances, or to perform other administrative tasks on the system.

Only one Services Controller meters the currently deployed instances. Although having more than one copy of the Services Controller performing metering is possible, typically it is unnecessary and might adversely impact your network and computing resources.

All of the Services Controllers in your cluster issue authorizations against Traffic Manager licenses. This ensures that Traffic Manager licensing is not compromised by the failure of a single Services Controller instance.

For details, see the manager Resource.
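The exact request format for reading and updating mode settings is defined by the Services Controller REST API; the following curl commands are an illustrative sketch only. The hostname, port, and credentials are placeholders, <version> must be replaced with your API version, and the JSON body used for the update is an assumption rather than a documented schema, so verify it against the REST API guide before use.

# Read the mode settings of the manager resource for ssc1.example.com (placeholder values).
curl -k -u admin:password "https://ssc1.example.com:8000/api/tmcm/<version>/manager/ssc1.example.com"

# Assumed update request; confirm the method and body in the REST API documentation.
curl -k -u admin:password -X PUT -H "Content-Type: application/json" \
  -d '{"metering": "none", "monitoring": "shared"}' \
  "https://ssc1.example.com:8000/api/tmcm/<version>/manager/ssc1.example.com"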

FLA-based Load Balancing and Mode Settings API

This HA approach uses the Services Controller's built-in FLA-based load balancing and mode settings API.

Figure High Availability: FLA-Based Load Balancing and Mode Settings API

This HA configuration is based on the following Riverbed licenses:

1 SSC license (Enterprise or CSP)
1 or more Base Bandwidth Packs (not required for CSP)
Any optional Feature Packs required (not required for CSP)

Prerequisites for an HA Licensing Environment

Ensure the following prerequisite steps are complete:

Choose fully qualified domain names for the two SSC servers.

Determine the port that will be used for the REST API on the SSC servers.

Obtain SSL certificates that satisfy both of the following, as per your own organizational rules:

FLA usage certificate - This is used by the FLA license to verify the identity of the license server(s) it connects to. This process ensures that the license server is authorized to allocate licensed features. This certificate is subject to usage criteria. See FLA Usage Certificate Criteria on page 254.

SSC REST API access usage certificate/key - These are used by the SSC when accepting HTTPS connections, to optionally allow the HTTPS client to authenticate the SSC. The FLA license itself is one example of such an HTTPS client. You provide the SSL certificate and key during the installation of SSC. See Required Services Controller Files on page 29.

Redeeming Licensing Tokens

Perform the following steps to redeem your licensing token.

1. Visit the Riverbed support site at https://support.riverbed.com.

2. Access the Licenses page.

3. Follow the Manage Licenses process:

When prompted, enter your token.
When prompted, enter the FQDNs, comma separated, for your two SSC servers.
When prompted, enter the port number for REST API access.
When prompted, enter your FLA usage certificate.

The server FQDNs, the port number, and the FLA usage certificate are "baked in" to your FLA license.

FLA Usage Certificate Criteria

The FLA usage certificate must be a CA certificate within the issuer chain of the certificate used for SSC REST API access. This criterion can be met in two ways:

The simplest is to use the same self-signed certificate for SSC REST API access and in the FLA.

In situations where a self-signed certificate is not acceptable, you can generate whatever chain of certificates suits your organization's needs, as long as it meets the first criterion. A typical scenario might be to:

Get a CA-signed certificate to use for REST API access, signed by a third-party CA (for example, Verisign).
Provide the third-party CA's (for example, Verisign's) public certificate to the token redemption page for "baking in" to the FLA license.
Provide the obtained CA-signed certificate for REST API access to the SSC installer.

FLA licensing checks do not enforce any CN restrictions on the CA certificate provided. Blank, wildcard, or otherwise incorrect CNs are accepted.

SSC REST API Access Certificate Criteria

The criteria for acceptability of the public certificate used for access to the SSC REST API largely depend on your choice of REST client:

Some clients warn the user that a self-signed certificate is in use.
Some clients enforce a valid CN.
Some clients do not allow wildcard CNs.

In a situation where strict validation of a non-wildcard CN is required, it is acceptable to install different certificates on each SSC within the HA pair, but only if they are issued by the same CA. This allows a certificate common to the issuer chain of both REST API access usage certificates to be "baked in" to the FLA license.
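For the simplest case described above, where one self-signed certificate is used both for SSC REST API access and as the FLA usage certificate, a PEM certificate and key pair can be generated with standard OpenSSL tooling. This is a sketch only; the subject CN and validity period are placeholders for your own SSC FQDN and organizational policy.

# Generate a self-signed certificate and private key in PEM format.
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
  -keyout ssc_key.pem -out ssc_cert.pem \
  -subj "/CN=ssc1.example.com"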

265 FLA-based Load Balancing and Mode Settings API Configuring for High Availability Suggested Mode Settings for Enterprise setups Enterprise setup enables the user to access the REST API and GUI from either SSC. Metering will only occur on SSC 1. This process uses up CPU, so it is suggested that the effort not be duplicated across the cluster. Instance Monitoring and Licensing will be shared between the two Services Controllers: SSC 1: Management = true Metering = all Licensing = enabled Monitoring = shared Phone home of metering logs: Disabled SSC 2: Management = true Metering = none Licensing = enabled Monitoring = shared Phone home of metering logs: Disabled Suggested Mode Settings for CSP setups For a CSP deployment, Riverbed suggests that both Services Controllers have metering enabled. This will result in some redundant CPU usage, but will give the customer a backup set of logs should either SSC fail. Customers should only ever provide Riverbed with one set of logs for a give time period, to avoid be charged twice for their usage. The best way to achieve this is to use the phone home mechanism on one SSC in the pair: SSC 1: Management = true Metering = all Licensing = enabled Monitoring = shared Phone home of metering logs (either via included script in SW or added in VA): Enabled SSC 2: Management = true Metering = all Licensing = enabled Monitoring = shared Phone home of metering logs: Disabled Stingray Services Controller User s Guide 255

How Failures Are Detected

The Services Controller's peer monitoring capability provides a simple health check for clustered Services Controllers and alerts the admin if a "failure" is detected. Failure in this case is defined as the inability to access a peer Services Controller's REST API. An external monitoring system (such as Nagios) can also be used to provide a more flexible monitoring solution.

Recovering from a Failure

When a failure is detected:

All licensing calls are automatically handled by the remaining (healthy) SSC.
The administrator should adjust the mode settings of the remaining (healthy) SSC to enable all functions. That is:
  Management = true
  Metering = all
  Licensing = enabled
  Monitoring = all
The cause of the failure should be identified, and the failed SSC repaired or replaced as appropriate.

Enabling GUI Access over HTTPS

If you require GUI access over HTTPS, use the same certificate and private key combination that was used for REST API access. You achieve this using the CLI command:

  web ssl cert import-cert-key "<cert and key in pem format>"

By default, the GUI auto-generates a self-signed certificate.

Making the SSC Database Highly Available

Setting up a highly available database is the customer's responsibility. This can be achieved using any MySQL replication and failover pattern that presents the Services Controller with a single MySQL database to which it can connect. The SSC VA makes setting up a 2 x SSC cluster with a single database relatively easy, because a MySQL instance is installed and configured within the VA. If a highly available database is required, you must export the internal database to an external MySQL installation.
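If you move from the VA's internal database to an external MySQL installation, the transfer can be done with standard MySQL tooling. The sketch below is one possible approach under assumed names: the database name, user, and host are placeholders and must be replaced with the values configured on your Services Controller.

  # Dump the internal SSC database from the VA (assumed database name "ssc"
  # and user "ssc_user"; substitute the values from your SSC configuration).
  mysqldump -u ssc_user -p --single-transaction --routines ssc > ssc-dump.sql

  # Load the dump into the external, highly available MySQL endpoint, then
  # reconfigure both Services Controllers to connect to that endpoint.
  mysql -h mysql-ha.example.com -u ssc_user -p ssc < ssc-dump.sql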

CHAPTER 11 Troubleshooting

This chapter provides tips and suggestions for troubleshooting any issues that you encounter. It includes the following sections:

Resolving Configuration Errors When Starting the Services Controller VA
Tracking the Progress of Actions
Maintaining Service During Services Controller Failure
Generating a Technical Support Report
Identifying Instances from a Support Entitlement Key
Performing Advanced Settings

Resolving Configuration Errors When Starting the Services Controller VA

After you start the Services Controller VA for the first time, it prompts you to run the configuration wizard. If you respond no and enter information manually, you might receive the following message:

  config file is invalid/missing error

To generate a manual configuration file for the Services Controller VA

Connect to the Services Controller VA CLI and enter the following set of commands to generate the configuration file:

  login as: admin
  Riverbed Stingray Services Controller
  Last login: Tue Apr 8 22:17: from
  amnesiac > enable
  amnesiac # configure terminal
  amnesiac (config) > ssc generate-config admin-name admin password password

Tracking the Progress of Actions

When a life-cycle action is unsuccessful, you might need to intervene to determine the cause of the problem. Each action is recorded in the inventory database, and you can access the actions using the Services Controller REST API. You can obtain information about what might have caused the problem by interrogating the inventory database; the resource for the instance affected by the action contains a reference to it.

Maintaining Service During Services Controller Failure

You might receive an alert as the first sign that a failure or error has occurred. In the case of a failed primary Services Controller, you can move services to a secondary Services Controller by performing the following steps:

1. If possible, disable services on the failed primary Services Controller by changing its mode settings to the values shown below.

   Setting      Value
   Management   enabled
   Metering     none
   Licensing    disabled
   Monitoring   none

   You can disable services using the REST API of any running Services Controller that has access to the inventory database. However, you must use the machine name of the failed primary Services Controller in the REST path.

2. Enable services on a backup Services Controller by changing the mode settings of a secondary or subsequent Services Controller machine to the values shown below.

   Setting      Value
   Management   enabled
   Metering     all
   Licensing    enabled
   Monitoring   shared

3. Change settings on the local DNS server so that the primary license server name resolves to the selected backup Services Controller.

As a result of these actions, the system operates with an alternative Services Controller as the primary instance, with no alerts being issued. You can fix, remove, or replace the failed machine as appropriate. As you install further Services Controllers, you can add them to the system as secondary or subsequent backup Services Controllers. You can also pick a new primary Services Controller by repeating the steps above.
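The exact change in step 3 depends on your DNS server. As one hedged illustration, if the local resolver is dnsmasq, the primary license server name could be repointed at the backup Services Controller as shown below; the names, address, and file path are assumptions.

  # Point the primary license server name at the backup Services Controller
  # (illustrative name, address, and configuration file path).
  echo "host-record=license-ssc.example.com,192.0.2.12" >> /etc/dnsmasq.d/ssc-license.conf

  # Restart dnsmasq so the new record takes effect.
  systemctl restart dnsmasq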

In a deployment with a primary and one or more backup Services Controllers, it might not be obvious when a backup machine fails. Riverbed recommends that you routinely monitor the health of your Services Controller cluster members.

Generating a Technical Support Report

A Technical Support Report (TSR) contains configuration information about applications and hosts that is useful for diagnosing problems in a Services Controller deployment. You can generate this report by running the following command on the appropriate Services Controller instance:

  generate_tsr [--directory/-d <DIR>] [--db-dump]

Use the optional --directory/-d <DIR> keyword and variable to specify a different output directory for the TSR. The Services Controller uses INSTALLROOT by default.

Use the optional --db-dump keyword to include a full dump of the inventory database in the TSR.

Note: The --db-dump option includes account names and passwords for your Traffic Manager instances in the resultant TSR. Riverbed recommends storing the TSR securely.

You must run the command as the root user or a user with equivalent privileges. When report generation is complete, a tar archive file is placed in the designated directory with a name in this format:

  SSC-TSR_hostname_<date>T<time_ordinal>.tgz

Send the file to your support provider to assist with problem resolution.

Identifying Instances from a Support Entitlement Key

You can use the support entitlement identifier to discover information about your Traffic Manager instances, including the valid level of support entitlement. This identifier is displayed on the System > Support page of each Traffic Manager instance's Administration UI. To discover information about a specific instance, use the following script with the hexadecimal component of a support entitlement identifier as a variable:

  support_instance_info <hexadecimal instance identifier>
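As a usage illustration of the documented syntax, the following invocations write the TSR to an alternative directory with the database dump included, and then query a support entitlement identifier; the output directory and hexadecimal value are placeholders.

  # Run as root (or a user with equivalent privileges). The output directory is
  # illustrative; omit --directory to use INSTALLROOT. Remember that --db-dump
  # includes Traffic Manager account names and passwords, so store the TSR securely.
  sudo generate_tsr --directory /var/tmp/tsr --db-dump

  # Identify a Traffic Manager instance from the hexadecimal component of its
  # support entitlement identifier (placeholder value shown).
  support_instance_info 0123456789abcdef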

Performing Advanced Settings

The Services Controller includes a special settings resource reserved for support, testing, monitoring, and debugging purposes. This resource contains global settings and options that you do not normally need to modify (unless asked to by your support provider). You can alter these settings, but you cannot add or remove instances of this resource.

Note: Riverbed does not recommend making any changes to the settings resource unless explicitly instructed to do so. Changes can have an adverse effect on the operation and performance of the Services Controller.
