Interplay Common Playback Services Installation & Configuration




Interplay Common Playback Services Installation & Configuration

ICPS Version: 1.2
Document Version: 1.00

This document provides instructions to install and configure Avid Interplay Common Playback Services (ICPS) version 1.2 for use with Interplay Central version 1.2 and Interplay MAM version 4.1.

Copyright 2012 Avid Technology

About ICPS 1.2: Please see the ICPS 1.2 ReadMe.


Contents

PART I: Introduction and Overview... 5
  About ICPS Installation & Configuration... 6
  Installation Overview... 7
    Decision Points... 7
  Before You Begin... 8
  Intended Audiences and Prerequisites... 8
  Upgrading Instructions... 8
PART II: Installing ICPS on HP DL380/DL360 Hardware... 9
  Step #1 Setting Up the HP Server Hardware... 10
    Connect the ICPS Server to the ISIS and Network... 10
    Connect the ICPS Server to MAM Proxy Storage and Network... 10
    Setting Up the DL380 System Drive Volume... 10
    Setting Up the DL360 System Drive Volume... 10
    Setting the System Clock... 11
    Disabling HP DL380/DL360 Power Saving Mode... 11
  Step #2 Installing the RHEL and ICPS Software Components... 12
    Preparing the ICPS USB Key... 12
    Copying Red Hat Enterprise Linux OS Media to the ICPS USB Key... 12
    Booting the Server from the USB Key and Running the Installer... 13
  Step #3 Set up the High-Availability and Load-Balanced Server Cluster... 15
    On All Servers in the Cluster... 16
    On One Server in the Cluster... 17
    On All Other Servers in the Cluster... 18
  Step #4 Create the Interplay MAM Cache Volume... 18
  Step #5 Create the Cluster Cache... 19
    Before You Begin... 19
    On All Servers in the Cluster... 20
    On One Server in the Cluster... 20
    On One Server in the Cluster... 21
    On All Servers in the Cluster... 22
    Test the Cache... 22
    Monitor the Cluster... 22

  Step #6a Configure ICPS for Interplay MAM... 22
    Configure Port Bonding for Interplay MAM (Optional)... 23
  Step #6b Configure ICPS for Interplay Central... 24
    Log Into Portal... 24
    Configure Interplay... 25
    Configure ISIS... 26
    Restart Cluster Synchronization Services on All Nodes... 27
    Configure Wi-Fi Only Encoding for Facility-Based iPads (Optional)... 27
  Step #7 Post-Installation Steps... 27
    Monitoring ICPS High-Availability and Load Balancing... 28
    Retrieve ICPS Logs... 28
    Log Cycling... 29
PART III: Installing ICPS on Non-HP Server Hardware... 30
  Step-by-Step Review of the Instructions... 31
  Step #1 Setting Up the Non-HP Server Hardware... 31
  Step #2 Installing RHEL on Non-HP Servers... 31
  Step #3 Installing ICPS on Non-HP Servers... 31
  Step #4 Setting up the High-Availability and Load-Balanced Server Cluster... 32
  Step #5 Create the Interplay MAM Cache Volume... 32
  Step #6 Create the Cluster Cache... 32
  Step #7 Configure ICPS for Interplay MAM... 32
  Step #8 Post-Installation Steps... 32
Appendix A: Frequently Asked Questions... 33
Appendix B: Troubleshooting... 34
Copyright and Disclaimer... 36

PART I: Introduction and Overview

About ICPS Installation & Configuration

ICPS is a software component that installs on its own set of servers, distinct from Interplay Central or Interplay MAM. Installation follows one of two basic deployment models:

- Interplay Central on HP DL380 server hardware
- Interplay MAM on HP DL360 or other server hardware

Note: For detailed hardware specifications please consult an Avid representative.

The following illustration shows the location of the ICPS servers in the Interplay Central and Interplay MAM deployments. The following table presents the main characteristics of each deployment:

Interplay Central:
- HP DL380 server hardware only
- ISIS storage
- High-availability and load balancing via multiple ICPS servers

Interplay MAM:
- HP DL360 and other hardware
- MAM proxy storage
- High-availability and load balancing via multiple ICPS servers

ICPS for Interplay MAM supports all standard filesystems that can be mounted by a Linux server (XFS, NFS, etc.). This includes proprietary filesystems that are able to expose themselves as standard filesystems.
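As a concrete illustration of the Interplay MAM storage model, a proxy store exported over NFS could be mounted on an ICPS server like any other Linux filesystem. This is only a sketch; the host name, export path, and mount point are hypothetical, and the actual mounting is performed later, in Step #6a:

# Hypothetical NFS export containing MAM proxies (host and paths are placeholders)
mkdir -p /mnt/mam_proxies
mount -t nfs mam-storage.example.com:/proxies /mnt/mam_proxies

# Corresponding /etc/fstab entry so the mount survives a reboot:
# mam-storage.example.com:/proxies /mnt/mam_proxies nfs defaults 0 0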

Installation Overview

Although the installation process is similar in all cases, the steps vary depending on the deployment model and choice of hardware. For example, installations on the supported HP server (HP DL380) can take advantage of the express installation using a USB key and the supplied Red Hat kickstart (ks.cfg) file. On non-HP servers you must install Red Hat manually. In all cases, optionally configuring for high-availability and load balancing requires additional steps.

The following list describes the high-level, general installation steps:

1. Physically install and set up the ICPS server hardware.
2. Install the OS & ICPS software components on the server(s). This step includes copying RHEL OS installation media to the ICPS installation USB key for express installation (HP DL380/DL360 only).
3. Configure the high-availability and load-balancing server cluster (optional).
4. Configure a shared cache for the server cluster (optional).
5. Configure Interplay Central to use the ICPS server or cluster.
6. Or, configure Interplay MAM to use the ICPS server or cluster.

Note: In all cases, these servers require the installation of Red Hat Enterprise Linux (RHEL) 6.0. Do not install any OS updates or patches, and do not upgrade to RHEL 6.1 or higher.

Decision Points

The main decision points affecting installation can be summarized as follows:

What kind of server? HP DL380/DL360 or Other. ICPS supports Interplay Central on HP hardware only. ICPS supports Interplay MAM on both HP and non-HP hardware. For non-HP hardware, review the tips and notes in PART III: Installing ICPS on Non-HP Server Hardware on page 30 before proceeding.

What kind of install? Interplay Central or Interplay MAM. While the installation steps are very similar for Interplay Central and Interplay MAM support, the configuration steps are different. Instructions for configuring Interplay Central for ICPS are provided in this document. For Interplay MAM, obtain the MAM configuration instructions before proceeding.

What kind of server setup? Single or Cluster. A server cluster provides high-availability and load-balancing. The OS and ICPS install identically on each server in the cluster, but additional steps are required to configure the servers as a cluster.

Is this a dual setup? Interplay MAM and Interplay Central?

ICPS can serve both Interplay MAM and Interplay Central simultaneously. In this case, install an ICPS server cluster as indicated in this document. Then, perform both configuration operations.

Before You Begin

Before you begin, make sure you have the following:

- The ICPS server(s)
- The RHEL 6.0 installation .iso file or DVD media
- The ICPS installation package (USB_v1.2.zip)
- An 8GB USB key (HP DL380/DL360 installations only; optional for non-HP installations)
- A Windows XP/Vista/7 laptop or desktop computer

Note: The server(s) on which you are installing the ICPS software should be physically installed in your engineering environment, and the appropriate network connection(s) to ISIS and/or the house network should already be made.

You also require access to the server's console:

- Directly, by connecting a monitor and keyboard to the server
- Remotely, via KVM or a comparable solution

Intended Audiences and Prerequisites

This guide is for the person responsible for performing a fresh install of ICPS, or upgrading or maintaining an existing ICPS installation.

Professional Services: Avid personnel whose responsibilities include installing and upgrading the ICPS system, on-site at a client's facility.

In-House Installers: Clients with an in-house IT department that has expertise in systems integration, Linux (including port-bonding), networking, etc. This kind of person might be called on to add a new ICPS node to an already established ICPS cluster, for example.

Upgrading Instructions

To upgrade from an earlier version of ICPS, mount the USB key and run the installation script.
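The upgrade itself is performed from the Linux command line. The following is a minimal sketch, assuming the ICPS USB key shows up as /dev/sdb1 and that the installer is the install.sh script shipped in the ICPS installation package; adjust the device name and paths to your environment:

# Mount the ICPS USB key (device name /dev/sdb1 is an assumption)
mkdir -p /media/usb
mount /dev/sdb1 /media/usb

# Run the ICPS installation script from the key (script location on the key is an assumption)
cd /media/usb
bash install.sh

# Unmount the key when the installer finishes
cd /
umount /media/usb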

PART II: Installing ICPS on HP DL380/DL360 Hardware

The following table provides time estimates for each of the main installation steps.

Task: Approximate Time Needed
Step #1 Setting Up the HP Server Hardware: 1 hr
Step #2 Installing the RHEL and ICPS Software Components: 40 min
Step #3 Set up the High-Availability and Load-Balanced Server Cluster: 20 min
Step #4 Create the Interplay MAM Cache Volume: 10 min
Step #5 Create the Cluster Cache: 20 min
Step #6a Configure ICPS for Interplay MAM: 20 min
Step #6b Configure ICPS for Interplay Central: 10 min
Step #7 Post-Installation Steps: 10 min
Total: 3 hr 10 min

Step #1 Setting Up the HP Server Hardware

In this procedure, you connect the HP DL380/DL360 server to your network, set up its hard disk drives, set the system clock, and disable power saving mode (often enabled by default).

Connect the ICPS Server to the ISIS and Network

The physical servers must be installed and connected to the ISIS via a Zone 1 (direct) or Zone 2 (through a switch) connection.

Note: This procedure applies to Interplay Central deployments only.

1. For 10GigE connections, use the Myricom 10GigE NIC in PCI slot 4 (see diagram below).
2. For GigE connections, do not use the on-board Broadcom GigE ports. You should have an Intel PROset quad-port GigE NIC in PCI slot 2 (see the diagram below). Connect to the left-most port on that NIC.

Note: For a specific NIC manufacturer and model, please consult an Avid representative.

Connect the ICPS Server to MAM Proxy Storage and Network

In an Interplay MAM deployment, you can use the on-board Broadcom GigE port to connect to the house network. For a 10GigE connection, use a 10GigE NIC of your choosing.

Setting Up the DL380 System Drive Volume

ICPS installs onto the system drive along with the RHEL OS. The DL380 server has a drive cage for up to 16 drives, of which only 2 are currently occupied. We recommend that you set up a RAID 1 (mirror) volume, using both drives for the system disk, one drive mirroring the other, for redundancy.

Setting Up the DL360 System Drive Volume

ICPS installs onto the system drive along with the RHEL OS. The DL360 server has a drive cage for up to 16 drives, of which 8 are occupied. The following setup is recommended:

- Set up 2 drives as a RAID 1 (mirror) volume for use as the system disk.
- Set up the remaining 6 drives in a RAID 5 configuration, for use as the ICPS cache.

Setting the System Clock

To ensure the smooth installation of the RHEL OS and ICPS in a later step, set the system clock.

To start the server and access the BIOS to set the system clock:

1. Power up the server.
2. When the console displays the option to enter the Setup menu, press F9. The Setup utility appears.
3. Choose Date and Time. Date and Time options appear. Set the date (mm-dd-yyyy) and time (hh:mm:ss).
4. Press Enter to save the changes and return to the main menu.
5. Exit the Setup utility (F10) and save. The server reboots with the new options.

Disabling HP DL380/DL360 Power Saving Mode

The HP DL380 is frequently shipped with BIOS settings set to Power-Saving mode. ICPS runs CPU- and memory-intensive processes, especially when under heavy load, so you will get much better performance by ensuring that your server is set to operate at Maximum Performance.

Note: You can do this before or after the installation process. We recommend making the change immediately.

To start the server and access the BIOS to check settings:

1. Power up the server.
2. When the console displays the option to enter the Setup menu, press F9. The Setup utility appears.
3. Choose Power Management Options. Power Management options appear.
4. Choose HP Power Profile. Power Profile options appear.
5. Choose Maximum Performance. You are returned to the HP Power Profile options.
6. Press Esc to return to the main menu.
7. Exit the Setup utility (F10) and save.

The server reboots with the new options.

Step #2 Installing the RHEL and ICPS Software Components

In this procedure, you copy the RHEL OS installation media to the ICPS installation USB key, and then install both the RHEL OS and the ICPS software components in one continuous step.

Preparing the ICPS USB Key

Installing ICPS requires a bootable USB key with all the files required for installing ICPS. If instead of an ICPS installation USB key you only have the ICPS installation ZIP file, prepare a USB key using the following steps.

To prepare the ICPS USB key:

1. Procure an 8GB USB key.
2. Format the USB key as a FAT32 volume.
3. Get the ICPS software installation package file, USB_v1.2.zip.
4. Unzip the file into a unique directory.

Copying Red Hat Enterprise Linux OS Media to the ICPS USB Key

Follow this procedure only if you are installing ICPS software components on an HP DL380/DL360 server. To complete this procedure, make sure you have:

- A Windows computer
- The RHEL 6.0 OS installation DVD or .iso

Avid does not redistribute the RHEL system media on the ICPS installation USB key. You must download the installation .iso file from Red Hat directly, or get it from the RHEL Installation DVD that comes with your server, then copy the .iso file to the USB key Avid provides for ICPS installation.

Note: Only RHEL 6.0 OS is supported. Do not install patches or updates, and do not upgrade to RHEL 6.1.

To copy the RHEL OS .iso file to the USB key:

1. Log into a Windows laptop or desktop.
2. Make sure the RHEL 6.0 .iso file is accessible locally (preferable) or over the network from your computer.

Note: If you don't have the RHEL 6.0 installation .iso or a RHEL installation DVD from which to create one, log into rhn.redhat.com using your account credentials and download the .iso. Remember to download the 6.0 version; 6.1 or later is not supported.

3. Browse Windows Explorer to the USB key volume.
4. Double-click iso2usb.exe to launch the application.

5. Choose the Diskimage radio button, then browse to the .iso file.
6. Verify the Hard Disk Name and USB Device Name are correct:
   Hard Disk Name: sda
   USB Device Name: sdb

Note: If you will be configuring the DL360/DL380 with a separate cache volume, the USB device name will be sdc instead.

7. In the Additional Files field, browse to the directory where you unzipped the USB_v1.2.zip file.
8. Click OK.
9. A process begins to copy the .iso file to the USB key. This process will take 5-10 minutes. Once it is complete, the USB key has everything it needs for a complete RHEL and ICPS installation process.

Note: Copying the RHEL 6.0 OS .iso file to the USB key is a one-time process. If you ever have to re-install ICPS, you do not need to repeat these steps.

Booting the Server from the USB Key and Running the Installer

If you are installing ICPS on an HP DL380/DL360, the installation process installs both RHEL and the ICPS software components.

To boot the server from the USB key and run the installer:

1. Before powering on the server, insert the USB key.
2. Power on the server.
3. Wait for the Welcome screen to appear. The first option in the list, Install and upgrade an existing system, is selected.
4. Press Enter. The option is used automatically if you do nothing and wait 60 seconds. The RHEL packages are installed; this takes 5-10 minutes. When the process is complete, you are prompted to reboot.
5. Do not press Enter! Remove the USB key from the server. If you reboot without removing the USB key, the server will reboot from the USB key again. If you pressed Enter by mistake, remove the USB key as quickly as possible (before the system boots up again).

To reboot the server for the first time:

1. Press Enter. Rebooting the server at this time triggers the first boot from the system drive. The first boot screen appears.

2. From the Choose a Tool menu, select Keyboard Configuration. Press Enter. Choose the Language option for your keyboard.
3. Focus the OK button. Press Enter.
4. Choose the Network Configuration option. Press Enter.
5. If you are setting up a cluster of ICPS servers or if you want to set up a static IP, choose the Device Configuration option. A static IP is required when setting up a load-balanced cluster of servers. Press Enter. A list of network interface ports appears.
6. Choose the device option corresponding to eth0. Press Enter.
7. Enter network device information:
   - Keep the default name: eth0
   - Keep the default device: eth0
   - Disable DHCP (Spacebar)
   - Enter the static IP, network, default gateway IP, and DNS servers
8. Select OK. Press Enter. You are returned to the list of network interface ports.
9. Select Save. Press Enter.
10. Choose the DNS Configuration option. Press Enter.
11. Enter DNS information:
   - Enter the hostname: <machine name>
   - DNS entries should be carried over from step 7 (if you specified static addresses).
   - If you did not enable DHCP, enter the DNS search path domain.
12. Select Save & Quit. Press Enter.
13. Select Quit. Press Enter. You are prompted to log in to the server.

To check the date and time:

1. Log in as root (i.e. user name = root).

Note: The default root password is Avid123

2. Check the date on the server. Type date and press Enter. The date is displayed, for example:
   Sun Apr 1 11:03:04 EDT 2012
3. If the date is incorrect, change the date. For example, enter:
   date 040211032012

The required format is MMDDHHmmYYYY (Month-Date-Hour-Minute-Year).

4. When you press Enter, the newly set date is displayed:
   Mon Apr 2 11:03:00 EDT 2012

To manually edit the network configuration file:

Due to an artifact of the RHEL 6.0 installation process, backticks (`) are often added around entries in the network configuration file. Before leaving network configuration, remove the backticks from around any affected entries.

1. Manually edit the network configuration file in /etc/sysconfig/network-scripts/ifcfg-eth0. The first time you edit the file, it may have duplicate entries and backticks similar to the following example:

   DEVICE=eth0
   HWADDR=00:26:55:e6:83:e1
   NM_CONTROLLED=yes
   ONBOOT=yes
   DHCP_HOSTNAME=`$HOSTNAME`
   ONBOOT=yes
   DHCP_HOSTNAME=`$HOSTNAME`

2. Remove the duplicate entries and backticks (e.g. from around the host name).
3. Restart the network service (as root):
   /etc/init.d/network restart

Step #3 Set up the High-Availability and Load-Balanced Server Cluster

Redundancy and scale for ICPS can be obtained by setting up a cluster of two or more servers. In a high-availability and load-balanced setup, multiple ICPS servers are exposed using a single IP address. In essence, Interplay Production and Interplay MAM see the cluster as a single machine. Within the cluster, requests for media are automatically distributed to the available nodes. Properly configured, an ICPS server cluster provides the following:

- Load balancing. All incoming playback connections are routed to a cluster IP address, and are subsequently distributed evenly to the nodes in the cluster.
- High-availability. If any node in the cluster fails, connections to that node will automatically be redirected to another node.
- Shared cache. The media transcoded by one node in the cluster is immediately available for use by the other nodes.

- Cluster monitoring. You can monitor the status of the cluster by entering a command. If a node fails (or if any other serious problem is detected by the cluster monitoring service), an e-mail is sent to one or more e-mail addresses.

Before you begin, make sure of the following:

- ICPS software components are installed on all servers in the cluster
- All servers are on the network and are assigned IP addresses
- You have an assigned cluster IP address (distinct from the servers in the cluster)
- If your network already uses multicast, IT must issue you a multicast address to avoid potential conflicts. If your network does not use multicast, the cluster can safely use a default multicast address.

Note: Unicast is not supported.

On All Servers in the Cluster

All servers in the cluster must be connected to an Ethernet interface having the same name (eth0 recommended). This may not be the case by default, for a number of reasons, including systems with multiple network interface cards, re-assignment due to matches in network rules files, or explicit settings in the NIC's configuration file. Follow these steps to verify the Ethernet interface is correctly named.

1. Ensure the Ethernet NIC device has been assigned to the correct physical port by examining the content of the following rules file:
   /etc/udev/rules.d/70-persistent-net.rules
2. Locate the entry for the NIC of interest (the physical card used for clustering), and verify it is assigned to the correct physical port (eth0 recommended):

   # PCI device 0x14e4:0x1639 (model) (custom name provided by external tool)
   SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="9c:8e:99:1b:31:d4", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"

   model is the NIC model name provided by the manufacturer. NAME="eth0" assigns the named (i.e. matched) NIC to the physical port eth0. This same port must be used for each NIC in the cluster.

Note: If you will be making use of port bonding, assign the value of the port bonding interface (e.g. bond0) instead. For a discussion of port bonding, see Configure Port Bonding for Interplay MAM (Optional) on page 23.

3. Verify that the configuration information for the cluster NIC is correct. Examine the contents of the NIC device's ifcfg-<device> file (e.g. ifcfg-eth0) in the /etc/sysconfig/network-scripts directory. The file should look something like this:

   DEVICE=eth0
   HWADDR=9c:8e:99:1b:31:d4

   NM_CONTROLLED=yes
   ONBOOT=yes
   DHCP_HOSTNAME=$HOSTNAME
   BOOTPROTO=static
   TYPE=Ethernet
   USERCTL=no
   PEERDNS=yes
   IPV6INIT=no

   DEVICE=eth0 specifies the name of the physical Ethernet interface device.
   ONBOOT=yes instructs the OS to bring up the device at boot time. Must be yes.
   BOOTPROTO=static lets you assign the IP address of the device explicitly (recommended) or allow the OS to assign the IP address dynamically. Can be static (recommended) or dhcp (system assigned). If you assign the IP addresses statically, you are also required to have IPADDR and NETMASK entries.

4. If there are other NIC devices installed on the server, verify their configuration files to ensure there are no naming conflicts. For example, verify that the value assigned to DEVICE is different for each one.

On One Server in the Cluster

These steps must be completed (as root) on one server in the cluster. It doesn't matter which server.

1. Do one of the following commands (as root):

   If your network has no other multicast activity, you can use the default multicast address with the following command:
   /usr/maxt/maxedit/cluster/resources/cluster setup-corosync --corosync-bind-iface=eth0

   If IT issued you a different multicast address, use the following command:
   /usr/maxt/maxedit/cluster/resources/cluster setup-corosync --corosync-bind-iface=eth0 --corosync-mcast-addr="<multicast address>"

   <multicast address> is the multicast address that IT provided for the cluster.

Note: If you will be making use of port bonding, assign the value of the port bonding interface (e.g. bond0) instead. For a discussion of port bonding, see Configure Port Bonding for Interplay MAM (Optional) on page 23.

2. Enter the following command:
   /usr/maxt/maxedit/cluster/resources/cluster setup-cluster --cluster-ip="<cluster IP address>" --pingable_ip="<router IP address>" --admin_email="<comma separated e-mail list>"

   <cluster IP address> is the IP address that IT provided for the cluster.

   <router IP address> is an IP address that will always be available on the network, for example, a network router's IP address.
   <comma separated e-mail list> is a comma separated list of e-mail addresses to which to send cluster status notifications.

On All Other Servers in the Cluster

On all other servers in the cluster, do one of the following commands (as root):

If your network has no other multicast activity, you can use the default multicast address with the following command:
/usr/maxt/maxedit/cluster/resources/cluster setup-corosync --corosync-bind-iface=eth0

If IT issued you a different multicast address, use the following command:
/usr/maxt/maxedit/cluster/resources/cluster setup-corosync --corosync-bind-iface=eth0 --corosync-mcast-addr="<multicast address>"

<multicast address> is the multicast address that IT provided for the cluster.

Note: If you will be making use of port bonding, assign the value of the port bonding interface (e.g. bond0) instead. For a discussion of port bonding, see Configure Port Bonding for Interplay MAM (Optional) on page 23.

Step #4 Create the Interplay MAM Cache Volume

In this step you create the RAID 5 cache volume for ICPS deployed for Interplay MAM on HP DL360 hardware.

1. Create a disk partition:
   fdisk /dev/sdb
2. Create a physical volume:
   pvcreate --metadatasize=64k /dev/sdb1
3. Create a volume group:
   vgcreate -s 256K -M 2 vg_icps_cache /dev/sdb1
4. Obtain the value for the number of Physical Extents:
   vgdisplay vg_icps_cache
   A list of properties for the volume group appears, including the physical extents (PE). Use this value to create the logical volume (below).
5. Create the logical volume:
   lvcreate -l <available_pes> -r 1024 -n lv_icps_cache vg_icps_cache

   <available_pes> is the value obtained above.
6. Format the volume:
   mkfs.ext4 /dev/vg_icps_cache/lv_icps_cache
7. Add the entry into /etc/fstab:
   /dev/mapper/vg_icps_cache-lv_icps_cache /cache ext4 rw 0 0
8. Mount the volume:
   mount /cache

Step #5 Create the Cluster Cache

Once you have set up the server cluster you can configure a shared cache for the cluster. This is done using GlusterFS, an open source software solution for creating shared filesystems. In ICPS installations with multiple ICPS servers arranged as a cluster, it is used to allow cache-sharing amongst the ICPS servers. GlusterFS creates a virtual drive acting as the cache for all ICPS servers in the cluster.

Recall that the ICPS server transcodes media from the format in which it is stored on the ISIS (or standard FS storage) into an alternate delivery format, such as an FLV or MPEG-2 Transport Stream. In a deployment with a single ICPS server, the ICPS server maintains a local cache where it stores recently-transcoded media. In the event that the same media is requested again, the ICPS server can deliver the cached media, without the need to re-transcode it.

In a high-availability and load-balanced configuration, the caches maintained by the ICPS servers in a cluster are co-located on the virtual shared drive maintained by GlusterFS. Thus, each ICPS server sees and has access to all the transcoded media in the pooled cache. When any particular ICPS server transcodes media into the FLV format streamed to the ICPS Player, the other ICPS servers can make use of it, without re-transcoding.

Note: The correct functioning of the cluster cache requires that the clocks on each server in the cluster are synchronized. Clock synchronization was performed in Step #1 Setting Up the HP Server Hardware (see Setting the System Clock on page 11).

Before You Begin

Make sure you have the files needed for the installation of GlusterFS. The following package can be found on the Red Hat DVD:

compat-libtermcap-2.0.8-49.el6.x86_64.rpm

Download the following packages from the GlusterFS web site (http://www.gluster.org/download/):

glusterfs-core-3.2.5-1.x86_64.rpm
glusterfs-fuse-3.2.5-1.x86_64.rpm
glusterfs-geo-replication-3.2.5-1.x86_64.rpm
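Before moving on, it can save time to confirm that all four packages are actually present on each server. A minimal sketch, assuming the .rpm files were copied into the current directory:

# Confirm the GlusterFS and compat-libtermcap packages are present
ls -l compat-libtermcap-2.0.8-49.el6.x86_64.rpm glusterfs-*-3.2.5-1.x86_64.rpm

# Optionally inspect a package before installing it
rpm -qpi glusterfs-core-3.2.5-1.x86_64.rpm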

On All Servers in the Cluster

Install the software components needed for GlusterFS and start the service. Next, create the cache folders.

To install the software and start the service:

1. Install the compat-libtermcap.x86_64.rpm package:
   rpm -Uvh compat-libtermcap-2.0.8-49.el6.x86_64.rpm
2. Install the GlusterFS packages in the following order (as root):
   rpm -Uvh glusterfs-core-3.2.5-1.x86_64.rpm
   rpm -Uvh glusterfs-fuse-3.2.5-1.x86_64.rpm
   rpm -Uvh glusterfs-geo-replication-3.2.5-1.x86_64.rpm
3. Ensure GlusterFS is started:
   service glusterd status
4. If not, start the service:
   service glusterd start

To create the cache folders:

Create the physical folders where the original data will reside on each server:

mkdir -p /gluster/gluster_data_download
mkdir -p /gluster/gluster_data_fl_cache
mkdir -p /gluster/gluster_data_metadata

On One Server in the Cluster

With GlusterFS installed and running on each ICPS server in the cluster, create the shared storage pool by joining the clustered servers together. This is done using a GlusterFS command of the following form:

gluster peer probe server

The above command joins the server on which it is issued to the one named in the command (server). It must be issued once for each of the other servers in the cluster. In GlusterFS there is no need to self-join. For example, consider an ICPS server cluster consisting of three servers, server-1, server-2 and server-3. To create the GlusterFS pool from server-1, you would issue the following commands:

gluster peer probe server-2
gluster peer probe server-3

To create the shared storage pool:

Note: These steps must be completed (as root) on one server in the cluster. It doesn't matter which one.

1. Ensure connectivity by pinging the server you want to join.
   ping <server-name>
2. Form the pool of shared storage.
   gluster peer probe <server-name1>
   gluster peer probe <server-name2>
   gluster peer probe <server-name3>

Note: Do not self-probe the local host.

3. For each successful join, the system responds as follows:
   Probe successful
4. Verify peer status.
   gluster peer status
   The system will respond by indicating the number of peers, their host names and connection status, and other information.

On One Server in the Cluster

On any server in the cluster, you must create GlusterFS volumes for the physical folders already created, and start the volumes.

1. Create the corresponding GlusterFS volumes for the physical folders already created. This step should be done for each original data source, but only once on any server.

   gluster volume create gluster-cache replica [n-server] transport tcp [server1]:/gluster_mirror_data/ [server2]:/gluster_mirror_data/ [...]

   For example, for a cluster consisting of two servers, you would issue commands similar to the following:

   gluster volume create gl-cache-dl replica 2 transport tcp ${SERVER1}:/gluster/gluster_data_download ${SERVER2}:/gluster/gluster_data_download
   gluster volume create gl-cache-fl replica 2 transport tcp ${SERVER1}:/gluster/gluster_data_fl_cache ${SERVER2}:/gluster/gluster_data_fl_cache
   gluster volume create gl-cache-md replica 2 transport tcp ${SERVER1}:/gluster/gluster_data_metadata ${SERVER2}:/gluster/gluster_data_metadata

   Where ${SERVER1} and ${SERVER2} are the names of the servers in the cluster.

2. Start the GlusterFS volumes. This step should be done only once, on the server where the volume was created.
   gluster volume start gl-cache-dl
   gluster volume start gl-cache-fl
   gluster volume start gl-cache-md
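Before mounting the volumes on the servers, you may want to confirm that the pool and volumes look healthy. A minimal sketch, run on any node in the pool; the volume names assume the gl-cache-* names used above:

# Confirm all peers are connected
gluster peer status

# Confirm each volume exists and shows Status: Started
gluster volume info gl-cache-dl
gluster volume info gl-cache-fl
gluster volume info gl-cache-md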

On All Servers in the Cluster

Finally, configure the local cache on each server in the cluster.

Note: If you have already installed ICPS, the folders have already been created and correct ownership has been assigned.

1. Create the following cache folders:
   mkdir /cache/download
   mkdir /cache/fl_cache
   mkdir /cache/metadata
2. Change ownership of the following two folders (original data folders, not the cache folders created above):
   chown maxmin:maxmin /cache/gluster/gluster_data_download
   chown maxmin:maxmin /cache/gluster/gluster_data_fl_cache
3. Mount the folders like any other standard mount, specifying the type as glusterfs.
   Note: In the code below, the server name (SERVER1) is provided as an example.
   mount -t glusterfs ${SERVER1}:/gl-cache-dl /cache/download
   mount -t glusterfs ${SERVER1}:/gl-cache-fl /cache/fl_cache
   mount -t glusterfs ${SERVER1}:/gl-cache-md /cache/metadata
4. Add entries to /etc/fstab to automount the folders:
   ${SERVER1}:/gl-cache-dl /cache/download fuse.glusterfs rw,allow_other,default_permissions,max_read=131072 0 0
   ${SERVER1}:/gl-cache-fl /cache/fl_cache fuse.glusterfs rw,allow_other,default_permissions,max_read=131072 0 0
   ${SERVER1}:/gl-cache-md /cache/metadata fuse.glusterfs rw,allow_other,default_permissions,max_read=131072 0 0

Test the Cache

Test the cache setup by writing a file to one of the GlusterFS cache folders (e.g. /cache/download) on one server and making sure it appears on the other servers (see the sketch at the end of this step).

Monitor the Cluster

For information on monitoring the cluster, see Step #7 Post-Installation Steps on page 27.
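For example, a quick way to exercise the shared cache as described under Test the Cache is shown below. This is only a sketch; the host names icps-01 and icps-02 are hypothetical:

# On icps-01: write a small test file into one of the GlusterFS-backed cache folders
echo "gluster cache test" > /cache/download/cache-test.txt

# On icps-02: the file should appear almost immediately
ls -l /cache/download/cache-test.txt
cat /cache/download/cache-test.txt

# Clean up from either node once verified
rm /cache/download/cache-test.txt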

Step #6a Configure ICPS for Interplay MAM

For ICPS to play Interplay MAM media, the path to the filesystem containing the MAM proxies must be mounted on the ICPS servers. The mounting is done at the level of the OS, using the standard Linux command for mounting volumes (mount). To automate the mounting of the MAM filesystem, create an entry in /etc/fstab.

To determine the correct path to be mounted, examine the path associated with the MAM essence pool to which ICPS is being given access. This is found in the Interplay MAM Administrator interface under the Essence Management Configuration tab. Look for the MORPHEUS entry and tease out the path information. It is likely that ICPS has been given access to more than one MAM essence pool. Be sure to mount all the associated filesystems.

Note: Configuration must also take place on the Interplay MAM side, to set up permissions for ICPS to access MAM storage, to point Interplay MAM to the ICPS server or server cluster, etc. For instructions on this aspect of setup and configuration, please refer to the Interplay MAM documentation.

Note: This step can be performed at any time during the installation.

Configure Port Bonding for Interplay MAM (Optional)

Port bonding (also called link aggregation) is an OS-level technique for combining multiple Ethernet ports into a group, making them appear and behave as a single port. In ICPS, port bonding is configured in round-robin mode. In this mode, Ethernet packets are automatically sent, in turn, to each of the bonded ports, reducing bottlenecks and increasing the available bandwidth. For example, bonding two ports together increases bandwidth by approximately 50% (some efficiency is lost due to overhead). In ICPS, port bonding improves playback performance when multiple clients are making requests of the ICPS server simultaneously.

Note: Port bonding is only possible for Interplay MAM deployments.

Note: Do not configure port bonding if you have already set up an ICPS server cluster.

To configure port bonding for Interplay MAM:

1. Add port bonding configuration information to each of the NIC device's ifcfg-<device> files (e.g. ifcfg-eth0, ifcfg-eth1, etc.) in the /etc/sysconfig/network-scripts directory. The file should look something like this:

   DEVICE=eth0
   USERCTL=no
   ONBOOT=yes
   MASTER=bond0
   SLAVE=yes
   BOOTPROTO=static

   DEVICE=eth0 specifies the name of the physical Ethernet interface device. This line will be different for each device. It must correspond to the name of the file itself.
   MASTER=bond0 specifies the name of the port bonding interface. This must be the same in each network script file in the port bonded group.

2. Create a port bonding network script in the same directory:
   /etc/sysconfig/network-scripts/ifcfg-bond0
   ifcfg-bond0 is the name of the port-bonding group (e.g. ifcfg-bond0).
3. The contents of the file should resemble the following:

   DEVICE=bond0
   ONBOOT=yes
   BOOTPROTO=static
   USERCTL=no
   BONDING_OPTS="mode=0"

   DEVICE=bond0 specifies the name of the port bonding group interface. It must correspond to the name of the file itself.
   BOOTPROTO=static lets you assign the IP address of the device explicitly (recommended) or allow the OS to assign the IP address dynamically. Can be static (recommended) or dhcp (system assigned). If you assign the IP addresses statically, you are also required to have IPADDR and NETMASK entries.
   BONDING_OPTS="mode=0" specifies the type of port bonding (mode=0 specifies round-robin).

4. Restart the network service (as root):
   /etc/init.d/network restart
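To confirm that the bond came up in round-robin mode after the network restart, you can inspect the Linux bonding driver's status file. A minimal sketch, assuming the bond interface is named bond0 as above:

# Show the bond's mode and its slave interfaces
cat /proc/net/bonding/bond0
# Expect a line similar to "Bonding Mode: load balancing (round-robin)"
# and one "Slave Interface:" entry per bonded port (eth0, eth1, ...)

# Confirm the bond interface has the expected IP address
ip addr show bond0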

Step #6b Configure ICPS for Interplay Central

Now that you have installed the operating system and ICPS software components, you have to configure the ICPS server(s). ICPS configuration procedures include (for each server):

- Logging into the configuration portal
- Configuring Interplay
- Configuring the ISIS connection

If you are configuring an ICPS cluster, restart the cluster synchronization services on all nodes.

Log Into Portal

ICPS servers are configured using a web-based configuration portal. You need access to a computer on the network with access to the ICPS server(s) you are configuring, and a web browser. You may want to make some changes to the ICPS administrator's account (e.g., change the password). We recommend using Google Chrome, but any browser is supported.

To log into the portal:

1. Launch your web browser and in the address bar, do one of the following:
   - Enter http://<hostname>, where <hostname> is the host name of the ICPS (if you only have a single server)
   - Enter http://<cluster-ip>, where <cluster-ip> is the IP address you provisioned for the ICPS cluster
   The ICPS configuration portal login screen appears.
2. Log in with administrator credentials (case-sensitive):
   User name: Administrator
   Password: Avid123
   First-time login takes you to the Interplay tab.
3. If you want to change the administrator account password to the portal, click Change Password at the top of the page to view the user profile settings.
4. Enter a new password.
5. Re-enter the new password.
6. Click Submit (or Cancel). A message appears indicating that the password was successfully changed.

You may now proceed to configure ICPS.

Configure Interplay

ICPS works with Interplay. Although Interplay Central users log in with their own credentials and use their own Interplay credentials to browse media assets, ICPS uses a separate set of Interplay credentials to resolve playback requests and check in voice-over assets recorded by Interplay Central users.

Before you begin: ICPS requires a unique set of user credentials that you must create as an Interplay administrator. The user credentials should have the following attributes:

- The credentials should not be shared with any human users
- Permission to read all folders in the workgroup
- We recommend using a name that indicates the purpose of the user credentials, e.g. icps-interplay

In this procedure you configure the user credentials and workgroup properties required by ICPS.

To configure Interplay:

1. Click the Interplay tab.
2. Configure Interplay credentials:
   a. Enter the host name of the Interplay Engine server.
   b. Enter the name of the Interplay user reserved for ICPS.
   c. Enter the password for that user.
3. Configure Workgroup Properties:
   a. Enter the host name for the Media Indexer.

   Note: If the Interplay Media Indexer is connected to a High Availability Group (HAG), enter the host name of the active Media Indexer.
   b. Enter the Interplay Workgroup name. This is case-sensitive. Use the same case as defined in the Interplay engine.
   c. Enter the host name for the lookup server(s).
4. Enable dynamic relink. Dynamic relink is required for multi-resolution workflows.
5. Click Save. This stops all servers and reconfigures them. Please be patient, since the process can take some time.

Configure ISIS

ICPS works with ISIS storage. ICPS uses a separate set of ISIS credentials to read media assets for playback and to write audio assets for voice-overs recorded by Interplay Central users.

Before you begin: ICPS requires a unique set of user credentials that you must create as an ISIS administrator. The user credentials should have the following attributes:

- The credentials should not be shared with any human users
- Permission to read all workspaces, and to write to the workspace flagged as the VO (voice-over) workspace
- We recommend using a name that indicates the purpose of the user credentials, e.g. icps-isis

In this procedure, you configure the ISIS host and the user credentials required by ICPS. In some network configuration scenarios, additional settings may be required.

To configure ISIS:

1. Click the ISIS tab.
2. Configure ISIS credentials:
   a. Enter the host name of the ISIS.
   b. Enter the name of the ISIS user reserved for ICPS.
   c. Enter the password for that user.
3. If your connection to ISIS is via Zone 2 (through a switch, as opposed to a Zone 1 direct connection), enable Remote Host, and then enter IP addresses for the ISIS System Directors.
4. Normally, the only network connection for the ICPS is a single GigE or 10GigE connection. This is both the connection for the ISIS and to the network for outbound compressed playback media. If you have other network connections, you must indicate which network connections are used by ISIS as opposed to other network activity. Do the following:

   a. Enter the network device ID (usually eth0) used by ISIS.
   b. Enter all other active network devices (e.g. eth1; eth2; etc.) not used by ISIS.
5. Choose the Client Mode option for your setup:
   a. If the ICPS server is connected to ISIS via GigE, select GigE.
   b. If the ICPS server is connected to ISIS via 10GigE, select 10GigE.
6. Click Save.

Restart Cluster Synchronization Services on All Nodes

Although you updated the settings on the master node in the cluster by logging into the cluster IP address, the changes you made must be propagated to the other nodes in the cluster by restarting corosync on all nodes in the cluster.

To restart cluster synchronization services on all nodes:

For each node in the cluster:

1. Log in as root.
2. Type:
   service corosync restart

Configure Wi-Fi Only Encoding for Facility-Based iPads (Optional)

By default, ICPS servers encode three different media streams for Interplay Central applications detected on iPads, for Wi-Fi, Edge, and 3G connections. For Wi-Fi-only facilities, it is recommended that you disable the Edge and 3G streams to improve the encoding capacity of the ICPS servers.

To disable Edge and 3G streams:

1. Log in as root and edit the following file using a text editor (such as vi):
   /usr/maxt/maxedit/share/mpegpresets/mpeg2ts.mpegpresets
2. In each of the [Edge] and [3G] areas, set the active parameter to active=0.
3. Save and close the file.

Step #7 Post-Installation Steps

The procedures in this section are helpful in verifying the success of the installation, and in preparing for post-installation management of the logs generated by ICPS.
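Before walking through the individual procedures, a few quick spot checks on each server can confirm that the main pieces are up after configuration. This is only a sketch; the service avid-all status invocation is an assumption based on the service name used elsewhere in this guide, and the cluster commands apply to cluster configurations only:

# Cluster resource status (cluster configurations only)
crm_mon

# GlusterFS service and shared cache mounts (cluster configurations only)
service glusterd status
mount | grep /cache

# Cluster synchronization and Avid playback services
service corosync status
service avid-all status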

Monitoring ICPS High-Availability and Load Balancing

If you have configured a highly available and load-balanced ICPS cluster, use the following commands to monitor the cluster for problems and, if necessary, resolve them. If the following procedure does not resolve problems with the ICPS cluster, please contact an Avid representative.

To monitor the status of the cluster:

Enter the following command as root:

crm_mon

This returns the status of services on all nodes. Error messages may appear. A properly running cluster of 2 nodes will return the following:

============
Last updated: Thu Dec 1 15:45:08 2011
Stack: openais
Current DC: icps_01 - partition with quorum
Version: 1.1.2-f059ec7ced7a86f18e5490b67ebf4a0b963bccfe
2 Nodes configured, 2 expected votes
6 Resources configured.
============
Online: [ icps_mam_small icps_mam_med ]
Clone Set: AvidConnectivityMonEverywhere
    Started: [ icps_01 icps_02 ]
AvidClusterIP (ocf::heartbeat:ipaddr2): Started icps_01
AvidClusterMon (lsb:avid-monitor): Started icps_01
Clone Set: AvidClusterDbSyncEverywhere
    Started: [ icps_02 icps_01 ]
Clone Set: pgsqldbeverywhere
    Started: [ icps_02 icps_01 ]
Clone Set: AvidAllEverywhere
    Started: [ icps_02 icps_01 ]

To reset the cluster:

If you see errors in the crm_mon report about services not starting properly, enter the following (as root):

/usr/maxt/maxedit/cluster/resources/cluster rsc-cleanup

Retrieve ICPS Logs

This step is not required at installation, but as you use Interplay Central you may encounter performance problems or playback failures. You should report these occurrences to an Avid representative. Avid may ask you to retrieve system and component logs from your ICPS server(s).

To retrieve ICPS logs:

1. Launch your web browser and in the address bar, do one of the following:
   - Enter http://<hostname>, where <hostname> is the host name of the ICPS (if you only have a single server)

   - Enter http://<cluster-ip>, where <cluster-ip> is the IP address you provisioned for the ICPS cluster
   The ICPS configuration portal login screen appears.
2. Log in with administrator credentials (case-sensitive):
   User name: Administrator
   Password: Avid123
   If this is the first time you are logging in, you are taken to the Interplay tab, where you can change the administrator password.
3. Otherwise, click the Logs tab.
4. Check the box next to the log(s) you want to retrieve.
5. Choose All or Current:
   - Choose All to retrieve all logs of the corresponding type.
   - Choose Current to retrieve only the latest log since the last server restart.
6. Click Download Logs.
7. The logs are downloaded to the computer you are using as a .zip file. Inside the .zip file are .log files, one for each requested log.

Log Cycling

Like other Linux logs, the ICPS server logs are stored under the /var/log directory, in /var/log/avid. Logs are automatically rotated on a daily basis as specified in /etc/logrotate.conf.
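If you prefer to inspect the logs directly on a server rather than through the portal, a minimal sketch follows; the individual file names under /var/log/avid depend on which components are installed:

# List the ICPS logs and their sizes
ls -lh /var/log/avid/

# Check how much disk space the logs are consuming
du -sh /var/log/avid

# Review the rotation policy that applies to them
cat /etc/logrotate.conf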

PART III: Installing ICPS on Non-HP Server Hardware

The following table provides time estimates for each of the main installation steps.

Task: Approximate Time Needed
Step #1 Setting Up the Non-HP Server Hardware: 1 hr
Step #2 Installing RHEL on Non-HP Servers: 40 min
Step #3 Installing ICPS on Non-HP Servers: 10 min
Step #4 Setting up the High-Availability and Load-Balanced Server Cluster: 20 min
Step #5 Create the Interplay MAM Cache Volume: 10 min
Step #6 Create the Cluster Cache: 20 min
Step #7 Configure ICPS for Interplay MAM: 20 min
Step #8 Post-Installation Steps: 10 min
Total: 3 hr 10 min

Step-by-Step Review of the Instructions

This section provides tips for installing RHEL and ICPS on non-HP hardware. For the most part, the steps for installing and configuring ICPS on supported HP hardware are easily generalized to non-HP hardware. The primary difference is that the express installation using a USB key cannot be followed. That is, you must install RHEL and ICPS as separate steps. In addition, there is no guarantee the supplied RHEL kickstart (ks.cfg) file will work on non-HP hardware. However, you can examine its contents and mimic them during a manual installation, or create a kickstart file for your own hardware.

Step #1 Setting Up the Non-HP Server Hardware

Follow the instructions in Step #1 Setting Up the HP Server Hardware on page 10, noting the following:

- Connect the ICPS server to the ISIS via a Zone 1 (direct) or Zone 2 (through a switch) connection.
- We recommend a RAID 1 (mirror) volume for the system disk.
- Set the system clock before installing the OS, if possible. Otherwise set it at the appropriate stage in OS installation.

Step #2 Installing RHEL on Non-HP Servers

Follow the instructions in Step #2 Installing the RHEL and ICPS Software Components on page 12, modifying them as appropriate for the chosen hardware. Note the following:

- Prepare an ICPS USB key as instructed, but do not boot from it.
- Manually install RHEL 6.0. Do not install patches or updates, and do not upgrade to RHEL 6.1.
- Select BASIC SERVER during the RHEL installation process.
- If you will be setting up a cluster of ICPS servers, install compat-libtermcap.x86_64.rpm.
- Configure the NIC network port as eth0.
- Disable DHCP. Use a static IP address if configuring a node cluster.
- Manually remove duplicate entries and backticks from the network configuration file.

Step #3 Installing ICPS on Non-HP Servers

Untar the supplied installer file ICPS_installer_v1.2.tar.gz and run the installer:

1. Untar and unzip the installation script file located on the USB key:
   tar zxvf ICPS_installer_v1.2.tar.gz
2. Change directories to the ICPS_installer_v1.2 folder and run the installation script:

   bash install.sh

Step #4 Setting up the High-Availability and Load-Balanced Server Cluster

Follow the instructions in Step #3 Set up the High-Availability and Load-Balanced Server Cluster on page 15.

Step #5 Create the Interplay MAM Cache Volume

If your server has enough drives, it is recommended that you create a RAID 5 volume for use as the cache for Interplay MAM deployments. For guidelines, see the instructions in Step #4 Create the Interplay MAM Cache Volume on page 18.

Step #6 Create the Cluster Cache

Follow the instructions in Step #5 Create the Cluster Cache on page 19.

Step #7 Configure ICPS for Interplay MAM

Follow the instructions in Step #6a Configure ICPS for Interplay MAM on page 22.

Step #8 Post-Installation Steps

Follow the instructions in Step #7 Post-Installation Steps on page 27.
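Putting the software installation portion of a non-HP install together, a typical command-line sequence looks like the following sketch. The device name for the USB key and the staging path under /root are assumptions; adjust them to your environment:

# Copy the installer package from the prepared ICPS USB key (device name is an assumption)
mkdir -p /media/usb
mount /dev/sdb1 /media/usb
cp /media/usb/ICPS_installer_v1.2.tar.gz /root/
umount /media/usb

# Unpack the installer and run it (Step #3)
cd /root
tar zxvf ICPS_installer_v1.2.tar.gz
cd ICPS_installer_v1.2
bash install.sh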

Appendix A: Frequently Asked Questions

Q. What hardware is supported for ICPS servers?
A. The hardware supported depends on the installation model:
   - Interplay Central: HP DL380 server hardware only
   - Interplay MAM: HP DL360 and other hardware (contact your Avid representative for details)

Q. Is ISIS the only storage supported?
A. No. ISIS storage is required for ICPS deployed for Interplay Central. ICPS deployed for Interplay MAM supports all standard filesystems that can be mounted by a Linux server (XFS, NFS, etc.). This includes proprietary filesystems that are able to expose themselves as standard filesystems.

Q. Can ICPS support both Interplay Central and Interplay MAM at the same time?
A. Yes. In this case it is recommended that you set up an ICPS server cluster to handle the additional load.

Q. Under what circumstances can the USB key be used to perform the installation?
A. The USB key can be used for installations on supported HP hardware only.

Q. What is the advantage of setting up an ICPS server cluster?
A. Properly configured, an ICPS server cluster provides the following:
   - Load balancing. All incoming playback connections are routed to a cluster IP address, and are subsequently distributed evenly to the nodes in the cluster.
   - High-availability. If any node in the cluster fails, connections to that node will automatically be redirected to another node.
   - Shared cache. The media transcoded by one node in the cluster is immediately available for use by the other nodes.
   - Cluster monitoring. You can monitor the status of the cluster by entering a command. If a node fails (or if any other serious problem is detected by the cluster monitoring service), an e-mail is sent to one or more e-mail addresses.

Appendix B: Troubleshooting

This section provides solutions to the problems most commonly encountered during and immediately after installation and configuration.

Problem: The ICPS installation script fails.
Solution: The incorrect version of RHEL was installed.

ICPS servers require RHEL 6.0. Do not install any OS updates or patches, and do not upgrade to RHEL 6.1 or higher. If you accidentally installed or upgraded to an incompatible version, restart the installation procedure from the beginning.

Problem: Playback falters or halts for a particular clip, or presents a blank screen in the player.
Solution: The clip was transcoded incorrectly. Clear the cache and play again.

There are a number of reasons clip playback may falter or halt part way through, but most commonly it is due to a one-time transcoding or transmission error. Cache data can also become corrupted after a service crash or hard reboot (power outage, forced restart, etc.). More often than not, clearing the ICPS server cache and replaying the clip resolves playback issues.

To clear the ICPS server cache:

Log in to the ICPS server and issue the following command:

service avid-all clear-cache

Note: There is no need to clear any browser-side caches. As of ICPS 1.2, ICPS automatically forces reloads of all the files it serves.

Problem: A limited number of clips are played successfully, but playing eventually halts.
Solution: An incorrect value was entered in the Client Mode option.

For the purposes of playback, the ICPS server is a client of the ISIS. During installation, you were asked to specify the connection to the ISIS (GigE or 10GigE), using the ICPS portal. Check that the correct choice was made. See Configure ISIS on page 26.

Problem: The ISIS sends a "no route to host" error message.
Solution: The wrong NIC was configured for the ISIS connection.

A message of the following form appears when the Avid services start on the ICPS server:

/sbin/mount.avidfos: Failed to mount: FAIL(EHOSTUNREACH):No route to host

A "no route to host" error message most commonly arises when the wrong NIC was configured as the connection to the ISIS. For GigE connections, this must be an Intel PROset quad-port GigE NIC in PCI slot 2 of the ICPS server. You cannot use the on-board Broadcom GigE ports. For 10GigE connections, this must be the Myricom 10GigE NIC in PCI slot 4.

To troubleshoot this problem:

1. Log in to the ICPS configuration portal and verify the Client Mode is configured correctly for your ICPS server hardware (on the ISIS tab).
2. If it is configured correctly, ensure it is not a network issue by pinging the System Director.
3. Reload the registry and restart all Avid services:
   # avidfos r
   Restart the avid-all services.
4. If you still receive the error message, dump the log and check it for a specific error:
   # avidfos -d {logfile name}

Problem: The Player stops working with a log-in failure or PostgreSQL error message.
Solution: This may indicate the system disk on the ICPS server is full.

A log-in failure or PostgreSQL error message from the Player can indicate key services have halted due to the system hard disk having filled up with core dump files. To resolve this issue, remove the core dump files and restart the affected services.

1. Verify that the problem is the result of the system disk filling up with core dump files:
   df -h
   A usage metric of 100% indicates a volume is full.
2. Remove the core dump files:
   rm -rf core*
3. On a system with multiple ICPS servers configured as a cluster, restart the cluster manager:
   service corosync restart
   This will restart a number of the needed services.
4. In addition (or on a system with just one ICPS server), restart the PostgreSQL database:
   service postgresql restart
5. Verify the services are back up and running using the following command (cluster configurations only):
   crm_mon
6. In addition (or on a system with just one ICPS server), verify the Avid-named services are running:
   service avid-all status
   In particular, scan the output to verify that the avid-edit service is running.
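A slightly more cautious variant of the cleanup above is to locate the core files and confirm what is consuming the disk before deleting anything. The following is only a sketch; the size threshold and search root are arbitrary:

# Show which volume is full (look for a Use% of 100%)
df -h

# Locate core dump files on the root volume before removing anything
find / -xdev -type f -name "core*" -size +10M -ls

# Remove them once you are satisfied they really are core dumps
find / -xdev -type f -name "core*" -size +10M -delete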

Copyright and Disclaimer

Product specifications are subject to change without notice and do not represent a commitment on the part of Avid Technology, Inc. The software described in this document is furnished under a license agreement. You can obtain a copy of that license by visiting the Avid Web site at www.avid.com. The terms of that license are also available in the product in the same directory as the software. The software may not be reverse assembled and may be used or copied only in accordance with the terms of the license agreement. It is against the law to copy the software on any medium except as specifically allowed in the license agreement.

No part of this document may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying and recording, for any purpose without the express written permission of Avid Technology, Inc.

Copyright 2012 Avid Technology, Inc. and its licensors. All rights reserved.

Attn. Government User(s). Restricted Rights Legend

U.S. GOVERNMENT RESTRICTED RIGHTS. This Software and its documentation are commercial computer software or commercial computer software documentation. In the event that such Software or documentation is acquired by or on behalf of a unit or agency of the U.S. Government, all rights with respect to this Software and documentation are subject to the terms of the License Agreement, pursuant to FAR 12.212(a) and/or DFARS 227.7202-1(a), as applicable.

Trademarks

Adrenaline, AirSpeed, ALEX, Alienbrain, Archive, Archive II, Assistant Avid, Avid Unity, Avid Unity ISIS, Avid VideoRAID, CaptureManager, CountDown, Deko, DekoCast, FastBreak, Flexevent, FXDeko, inews, inews Assign, inews ControlAir, Instinct, IntelliRender, Intelli-Sat, Intelli-sat Broadcasting Recording Manager, Interplay, ISIS, IsoSync, LaunchPad, LeaderPlus, ListSync, MachineControl, make manage move media, Media Composer, NewsCutter, NewsView, OMF, OMF Interchange, Open Media Framework, Open Media Management, SIDON, SimulPlay, SimulRecord, SPACE, SPACEShift, Sundance Digital, Sundance, Symphony, Thunder, Titansync, Titan, UnityRAID, Video the Web Way, VideoRAID, VideoSPACE, VideoSpin, and Xdeck are either registered trademarks or trademarks of Avid Technology, Inc. in the United States and/or other countries.

All other trademarks contained herein are the property of their respective owners.

ICPS 1.2 Installation and Configuration 27 April 2012
This document is distributed by Avid in online (electronic) form only, and is not available for purchase in printed form.