Interplay Common Playback Services Installation & Configuration
ICPS Version: 1.2
Document Version: 1.00

This document provides instructions to install and configure Avid Interplay Common Playback Services (ICPS) version 1.2 for use with Interplay Central version 1.2 and Interplay MAM version 4.1.

Copyright 2012 Avid Technology

About ICPS 1.2: Please see the ICPS 1.2 ReadMe.
Contents

PART I: Introduction and Overview ... 5
  About ICPS Installation & Configuration ... 6
  Installation Overview ... 7
    Decision Points ... 7
  Before You Begin ... 8
    Intended Audiences and Prerequisites ... 8
  Upgrading Instructions ... 8
PART II: Installing ICPS on HP DL380/DL360 Hardware ... 9
  Step #1 Setting Up the HP Server Hardware
    Connect the ICPS Server to the ISIS and Network
    Connect the ICPS Server to MAM Proxy Storage and Network
    Setting Up the DL380 System Drive Volume
    Setting Up the DL360 System Drive Volume
    Setting the System Clock
    Disabling HP DL380/DL360 Power Saving Mode
  Step #2 Installing the RHEL and ICPS Software Components
    Preparing the ICPS USB Key
    Copying Red Hat Enterprise Linux OS Media to the ICPS USB Key
    Booting the Server from the USB Key and Running the Installer
  Step #3 Set up the High-Availability and Load-Balanced Server Cluster
    On All Servers in the Cluster
    On One Server in the Cluster
    On All Other Servers in the Cluster
  Step #4 Create the Interplay MAM Cache Volume
  Step #5 Create the Cluster Cache
    Before You Begin
    On All Servers in the Cluster
    On One Server in the Cluster
    On One Server in the Cluster
    On All Servers in the Cluster
    Test the Cache
    Monitor the Cluster
  Step #6a Configure ICPS for Interplay MAM
    Configure Port Bonding for Interplay MAM (Optional)
  Step #6b Configure ICPS for Interplay Central
    Log Into Portal
    Configure Interplay
    Configure ISIS
    Restart Cluster Synchronization Services on All Nodes
    Configure Wi-Fi Only Encoding for Facility-Based iPads (Optional)
  Step #7 Post-Installation Steps
    Monitoring ICPS High-Availability and Load Balancing
    Retrieve ICPS Logs
    Log Cycling
PART III: Installing ICPS on Non-HP Server Hardware
  Step-by-Step Review of the Instructions
  Step #1 Setting Up the Non-HP Server Hardware
  Step #2 Installing RHEL on Non-HP Servers
  Step #3 Installing ICPS on Non-HP Servers
  Step #4 Setting up the High-Availability and Load-Balanced Server Cluster
  Step #5 Create the Interplay MAM Cache Volume
  Step #6 Create the Cluster Cache
  Step #7 Configure ICPS for Interplay MAM
  Step #8 Post-Installation Steps
Appendix A: Frequently Asked Questions
Appendix B: Troubleshooting
Copyright and Disclaimer
PART I: Introduction and Overview
About ICPS Installation & Configuration

ICPS is a software component that installs on its own set of servers, distinct from Interplay Central or Interplay MAM. Installation follows one of two basic deployment models:

- Interplay Central on HP DL380 server hardware
- Interplay MAM on HP DL360 or other server hardware

Note: For detailed hardware specifications, please consult an Avid representative.

The following illustration shows the location of the ICPS servers in the Interplay Central and Interplay MAM deployments. The following table presents the main characteristics of each deployment:

Interplay Central: HP DL380 server hardware only; ISIS storage; high-availability and load balancing via multiple ICPS servers
Interplay MAM: HP DL360 and other hardware; MAM proxy storage; high-availability and load balancing via multiple ICPS servers

ICPS for Interplay MAM supports all standard filesystems that can be mounted by a Linux server (XFS, NFS, etc.). This includes proprietary filesystems that are able to expose themselves as standard filesystems.
Installation Overview

Although the installation process is similar in all cases, the steps vary depending on the deployment model and choice of hardware. For example, installations on the supported HP servers (HP DL380/DL360) can take advantage of the express installation using a USB key and the supplied Red Hat kickstart (ks.cfg) file. On non-HP servers you must install Red Hat manually. In all cases, optionally configuring for high-availability and load balancing requires additional steps.

The high-level installation steps are:

1. Physically install and set up the ICPS server hardware.
2. Install the OS and ICPS software components on the server(s). This step includes copying the RHEL OS installation media to the ICPS installation USB key for express installation (HP DL380/DL360 only).
3. Configure the high-availability and load-balancing server cluster (optional).
4. Configure a shared cache for the server cluster (optional).
5. Configure Interplay Central to use the ICPS server or cluster.
6. Or, configure Interplay MAM to use the ICPS server or cluster.

Note: In all cases, these servers require the installation of Red Hat Enterprise Linux (RHEL) 6.0. Do not install any OS updates or patches, and do not upgrade to RHEL 6.1 or higher.

Decision Points

The main decision points affecting installation can be summarized as follows:

- What kind of server? HP DL380/DL360 or other. ICPS supports Interplay Central on HP hardware only. ICPS supports Interplay MAM on both HP and non-HP hardware. For non-HP hardware, review the tips and notes in PART III: Installing ICPS on Non-HP Server Hardware on page 30 before proceeding.
- What kind of install? Interplay Central or Interplay MAM. While the installation steps are very similar for Interplay Central and Interplay MAM support, the configuration steps are different. Instructions for configuring Interplay Central for ICPS are provided in this document. For Interplay MAM, obtain the MAM configuration instructions before proceeding.
- What kind of server setup? Single or cluster. A server cluster provides high-availability and load-balancing. The OS and ICPS install identically on each server in the cluster, but additional steps are required to configure the servers as a cluster.
- Is this a dual setup (Interplay MAM and Interplay Central)?
ICPS can serve both Interplay MAM and Interplay Central simultaneously. In this case, install an ICPS server cluster as indicated in this document, and then perform both configuration operations.

Before You Begin

Before you begin, make sure you have the following:

- The ICPS server(s)
- The RHEL 6.0 installation .iso file or DVD media
- The ICPS installation package (USB_v1.2.zip)
- An 8GB USB key (HP DL380/DL360 installations only; optional for non-HP installations)
- A Windows XP/Vista/7 laptop or desktop computer

Note: The server(s) on which you are installing the ICPS software should be physically installed in your engineering environment, and the appropriate network connection(s) to ISIS and/or the house network should already be made.

You also require access to the server's console:

- Directly, by connecting a monitor and keyboard to the server
- Remotely, via KVM or a comparable solution

Intended Audiences and Prerequisites

This guide is for the person responsible for performing a fresh install of ICPS, or for upgrading or maintaining an existing ICPS installation:

- Professional Services: Avid personnel whose responsibilities include installing and upgrading the ICPS system on-site at a client's facility.
- In-House Installers: Clients with an in-house IT department that has expertise in systems integration, Linux (including port bonding), networking, etc. This kind of person might be called on to add a new ICPS node to an already established ICPS cluster, for example.

Upgrading Instructions

To upgrade from an earlier version of ICPS, mount the USB key and run the installation script.
PART II: Installing ICPS on HP DL380/DL360 Hardware

The following table provides time estimates for each of the main installation steps.

Task | Approximate Time Needed
Step #1 Setting Up the HP Server Hardware | 1 hr
Step #2 Installing the RHEL and ICPS Software Components | 40 min
Step #3 Set up the High-Availability and Load-Balanced Server Cluster | 20 min
Step #4 Create the Interplay MAM Cache Volume | 10 min
Step #5 Create the Cluster Cache | 20 min
Step #6a Configure ICPS for Interplay MAM | 20 min
Step #6b Configure ICPS for Interplay Central | 10 min
Step #7 Post-Installation Steps | 10 min
Total | 3 hr 10 min
Step #1 Setting Up the HP Server Hardware

In this procedure, you connect the HP DL380/DL360 server to your network, set up its hard disk drives, set the system clock, and disable power saving mode (often enabled by default).

Connect the ICPS Server to the ISIS and Network

The physical servers must be installed and connected to the ISIS via a Zone 1 (direct) or Zone 2 (through a switch) connection.

Note: This procedure applies to Interplay Central deployments only.

1. For 10GigE connections, use the Myricom 10GigE NIC in PCI slot 4 (see the diagram below).
2. For GigE connections, do not use the on-board Broadcom GigE ports. You should have an Intel PROset quad-port GigE NIC in PCI slot 2 (see the diagram below). Connect to the left-most port on that NIC.

Note: For a specific NIC manufacturer and model, please consult an Avid representative.

Connect the ICPS Server to MAM Proxy Storage and Network

In an Interplay MAM deployment, you can use the on-board Broadcom GigE port to connect to the house network. For a 10GigE connection, use a 10GigE NIC of your choosing.

Setting Up the DL380 System Drive Volume

ICPS installs onto the system drive along with the RHEL OS. The DL380 server has a drive cage for up to 16 drives, of which only 2 are currently occupied. We recommend that you set up a RAID 1 (mirror) volume using both drives for the system disk, one drive mirroring the other, for redundancy.

Setting Up the DL360 System Drive Volume

ICPS installs onto the system drive along with the RHEL OS. The DL360 server has a drive cage for up to 16 drives, of which 8 are occupied. The following setup is recommended:
- Set up 2 drives as a RAID 1 (mirror) volume for use as the system disk.
- Set up the remaining 6 drives in a RAID 5 configuration, for use as the ICPS cache.

Setting the System Clock

To ensure the smooth installation of the RHEL OS and ICPS in a later step, set the system clock.

To start the server and access the BIOS to set the system clock:

1. Power up the server.
2. When the console displays the option to enter the Setup menu, press F9. The Setup utility appears.
3. Choose Date and Time. The Date and Time options appear. Set the date (mm-dd-yyyy) and time (hh:mm:ss).
4. Press Enter to save the changes and return to the main menu.
5. Exit the Setup utility (F10) and save. The server reboots with the new options.

Disabling HP DL380/DL360 Power Saving Mode

The HP DL380 is frequently shipped with its BIOS set to Power-Saving mode. ICPS processes are CPU- and memory-intensive, especially under heavy load; you will get much better performance by ensuring that your server is set to operate at Maximum Performance.

Note: You can do this before or after the installation process. We recommend making the change immediately.

To start the server and access the BIOS to check settings:

1. Power up the server.
2. When the console displays the option to enter the Setup menu, press F9. The Setup utility appears.
3. Choose Power Management Options. The Power Management options appear.
4. Choose HP Power Profile. The Power Profile options appear.
5. Choose Maximum Performance. You are returned to the HP Power Profile options.
6. Press Esc to return to the main menu.
7. Exit the Setup utility (F10) and save.
The server reboots with the new options.

Step #2 Installing the RHEL and ICPS Software Components

In this procedure, you copy the RHEL OS installation media to the ICPS installation USB key, and then install both the RHEL OS and the ICPS software components in one continuous step.

Preparing the ICPS USB Key

Installing ICPS requires a bootable USB key containing all the files required for installing ICPS. If instead of an ICPS installation USB key you have only the ICPS installation ZIP file, prepare a USB key using the following steps.

To prepare the ICPS USB key:

1. Procure an 8GB USB key.
2. Format the USB key as a FAT32 volume.
3. Get the ICPS software installation package file, USB_v1.2.zip.
4. Unzip the file into a unique directory.

Copying Red Hat Enterprise Linux OS Media to the ICPS USB Key

Follow this procedure only if you are installing ICPS software components on an HP DL380/DL360 server. To complete this procedure, make sure you have:

- A Windows computer
- The RHEL 6.0 OS installation DVD or .iso file

Avid does not redistribute the RHEL system media on the ICPS installation USB key. You must download the installation .iso file from Red Hat directly, or get it from the RHEL installation DVD that comes with your server, and then copy the .iso file to the USB key Avid provides for ICPS installation.

Note: Only the RHEL 6.0 OS is supported. Do not install patches or updates, and do not upgrade to RHEL 6.1 or higher.

To copy the RHEL OS .iso file to the USB key:

1. Log into a Windows laptop or desktop.
2. Make sure the RHEL 6.0 .iso file is accessible locally (preferable) or over the network from your computer.

Note: If you don't have the RHEL 6.0 installation .iso or a RHEL installation DVD from which to create one, log into rhn.redhat.com using your account credentials and download the .iso. Remember to download the 6.0 version; 6.1 or later is not supported.

3. Browse Windows Explorer to the USB key volume.
4. Double-click iso2usb.exe to launch the application.
5. Choose the Diskimage radio button, then browse to the .iso file.
6. Verify the Hard Disk Name and USB Device Name are correct:
   Hard Disk Name: sda
   USB Device Name: sdb

Note: If you will be configuring the DL360/DL380 with a separate cache volume, the USB device name will be sdc instead.

7. In the Additional Files field, browse to the directory where you unzipped the USB_v1.2.zip file.
8. Click OK.
9. A process begins to copy the .iso file to the USB key. This takes 5-10 minutes. Once it is complete, the USB key has everything it needs for a complete RHEL and ICPS installation process.

Note: Copying the RHEL 6.0 OS .iso file to the USB key is a one-time process. If you ever have to re-install ICPS, you do not need to repeat these steps.

Booting the Server from the USB Key and Running the Installer

If you are installing ICPS on an HP DL380/DL360, the installation process installs both the RHEL and ICPS software components.

To boot the server from the USB key and run the installer:

1. Before powering on the server, insert the USB key.
2. Power on the server.
3. Wait for the Welcome screen to appear. The first option in the list, Install or upgrade an existing system, is selected.
4. Press Enter. (The option is used automatically if you do nothing and wait 60 seconds.) The RHEL packages are installed; this takes 5-10 minutes. When the process is complete, you are prompted to reboot.
5. Do not press Enter! Remove the USB key from the server. If you reboot without removing the USB key, the server will boot from the USB key again. If you pressed Enter by mistake, remove the USB key as quickly as possible (before the system boots up again).

To reboot the server for the first time:

1. Press Enter. Rebooting the server at this time triggers the first boot from the system drive. The first boot screen appears.
2. From the Choose a Tool menu, select Keyboard Configuration. Press Enter. Choose the Language option for your keyboard.
3. Focus the OK button. Press Enter.
4. Choose the Network Configuration option. Press Enter.
5. If you are setting up a cluster of ICPS servers, or if you want to set up a static IP, choose the Device Configuration option. A static IP is required when setting up a load-balanced cluster of servers. Press Enter. A list of network interface ports appears.
6. Choose the device option corresponding to eth0. Press Enter.
7. Enter the network device information:
   - Keep the default name: eth0
   - Keep the default device: eth0
   - Disable DHCP (Spacebar)
   - Enter the static IP, netmask, default gateway IP, and DNS servers
8. Select OK. Press Enter. You are returned to the list of network interface ports.
9. Select Save. Press Enter.
10. Choose the DNS Configuration option. Press Enter.
11. Enter the DNS information:
    - Enter the hostname: <machine name>
    - DNS entries should be carried over from step 7 (if you specified static addresses).
    - If you did not enable DHCP, enter the DNS search path domain.
12. Select Save & Quit. Press Enter.
13. Select Quit. Press Enter. You are prompted to log in to the server.

To check the date and time:

1. Log in as root (i.e. user name = root).

Note: The default root password is Avid.

2. Check the date on the server. Type date and press Enter. The date is displayed, for example:
   Sun Apr 1 11:03:04 EDT 2012
3. If the date is incorrect, change the date. For example, enter:
   date <MMDDHHmmYYYY>
The required format is MMDDHHmmYYYY (Month-Date-Hour-Minute-Year).

4. When you press Enter, the reset date is displayed:
   Mon Apr 2 11:03:00 EDT 2012

To manually edit the network configuration file:

Due to an artifact of the RHEL 6.0 installation process, backticks (`) are often added around entries in the network configuration file. Before leaving network configuration, remove the backticks from around any affected entries.

1. Manually edit the network configuration file /etc/sysconfig/network-scripts/ifcfg-eth0. The first time you edit the file, it may have duplicate entries and backticks similar to the following example:

   DEVICE=eth0
   HWADDR=00:26:55:e6:83:e1
   NM_CONTROLLED=yes
   ONBOOT=yes
   DHCP_HOSTNAME=`$HOSTNAME`
   ONBOOT=yes
   DHCP_HOSTNAME=`$HOSTNAME`

2. Remove the duplicate entries and the backticks (e.g. from around the host name).
3. Restart the network service (as root):
   /etc/init.d/network restart

Step #3 Set up the High-Availability and Load-Balanced Server Cluster

Redundancy and scale for ICPS can be obtained by setting up a cluster of two or more servers. In a high-availability and load-balanced setup, multiple ICPS servers are exposed through a single IP address. In essence, Interplay Production and Interplay MAM see the cluster as a single machine. Within the cluster, requests for media are automatically distributed to the available nodes. Properly configured, an ICPS server cluster provides the following:

- Load balancing. All incoming playback connections are routed to a cluster IP address, and are subsequently distributed evenly to the nodes in the cluster.
- High-availability. If any node in the cluster fails, connections to that node are automatically redirected to another node.
- Shared cache. The media transcoded by one node in the cluster is immediately available for use by the other nodes.
- Cluster monitoring. You can monitor the status of the cluster by entering a command. If a node fails (or if any other serious problem is detected by the cluster monitoring service), an e-mail is sent to one or more addresses.

Before you begin, make sure of the following:

- ICPS software components are installed on all servers in the cluster
- All servers are on the network and are assigned IP addresses
- You have an assigned cluster IP address (distinct from the servers in the cluster)
- If your network already uses multicast, IT must issue you a multicast address to avoid potential conflicts. If your network does not use multicast, the cluster can safely use a default multicast address.

Note: Unicast is not supported.

On All Servers in the Cluster

All servers in the cluster must be connected to an Ethernet interface having the same name (eth0 recommended). This may not be the case by default, for a number of reasons, including systems with multiple network interface cards, re-assignment due to matches in network rules files, or explicit settings in the NIC's configuration file. Follow these steps to verify that the Ethernet interface is correctly named.

1. Ensure the Ethernet NIC device has been assigned to the correct physical port by examining the content of the following rules file:
   /etc/udev/rules.d/70-persistent-net.rules
2. Locate the entry for the NIC of interest (the physical card used for clustering), and verify it is assigned to the correct physical port (eth0 recommended):

   # PCI device 0x14e4:0x1639 (model) (custom name provided by external tool)
   SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="9c:8e:99:1b:31:d4", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"

   Here, model is the NIC model name provided by the manufacturer, and NAME="eth0" assigns the named (i.e. matched) NIC to the physical port eth0. This same port must be used for each NIC in the cluster.
Note: If you will be making use of port bonding, assign the value of the port bonding interface (e.g. bond0) instead. For a discussion of port bonding, see Configure Port Bonding for Interplay MAM (Optional) on page 23.

3. Verify that the configuration information for the cluster NIC is correct. Examine the contents of the NIC device's ifcfg-<device> file (e.g. ifcfg-eth0) in the /etc/sysconfig/network-scripts directory. The file should look something like this:

   DEVICE=eth0
   HWADDR=9c:8e:99:1b:31:d4
   NM_CONTROLLED=yes
   ONBOOT=yes
   DHCP_HOSTNAME=$HOSTNAME
   BOOTPROTO=static
   TYPE=Ethernet
   USERCTL=no
   PEERDNS=yes
   IPV6INIT=no

   DEVICE=eth0 specifies the name of the physical Ethernet interface device.
   ONBOOT=yes instructs the OS to bring up the device at boot time. Must be yes.
   BOOTPROTO=static lets you assign the IP address of the device explicitly (static, recommended) or have the OS assign it dynamically (dhcp). If you assign the IP address statically, you must also include IPADDR and NETMASK entries.

4. If there are other NIC devices installed on the server, verify their configuration files to ensure there are no naming conflicts. For example, verify that the value assigned to DEVICE is different for each one.

On One Server in the Cluster

These steps must be completed (as root) on one server in the cluster. It doesn't matter which server.

1. Do one of the following (as root):
   - If your network has no other multicast activity, you can use the default multicast address with the following command:
     /usr/maxt/maxedit/cluster/resources/cluster setup-corosync --corosync-bind-iface=eth0
   - If IT issued you a different multicast address, use the following command:
     /usr/maxt/maxedit/cluster/resources/cluster setup-corosync --corosync-bind-iface=eth0 --corosync-mcast-addr="<multicast address>"
     where <multicast address> is the multicast address that IT provided for the cluster.

Note: If you will be making use of port bonding, assign the value of the port bonding interface (e.g. bond0) instead. For a discussion of port bonding, see Configure Port Bonding for Interplay MAM (Optional) on page 23.

2. Enter the following command:
   /usr/maxt/maxedit/cluster/resources/cluster setup-cluster --cluster-ip="<cluster IP address>" --pingable_ip="<router IP address>" --admin_email="<comma separated list>"
   where:
   <cluster IP address> is the IP address that IT provided for the cluster
   <router IP address> is an IP address that will always be available on the network, for example, a network router's IP address
   <comma separated list> is a comma-separated list of e-mail addresses to which to send cluster status notifications

On All Other Servers in the Cluster

On all other servers in the cluster, do one of the following (as root):

- If your network has no other multicast activity, you can use the default multicast address with the following command:
  /usr/maxt/maxedit/cluster/resources/cluster setup-corosync --corosync-bind-iface=eth0
- If IT issued you a different multicast address, use the following command:
  /usr/maxt/maxedit/cluster/resources/cluster setup-corosync --corosync-bind-iface=eth0 --corosync-mcast-addr="<multicast address>"
  where <multicast address> is the multicast address that IT provided for the cluster.

Note: If you will be making use of port bonding, assign the value of the port bonding interface (e.g. bond0) instead. For a discussion of port bonding, see Configure Port Bonding for Interplay MAM (Optional) on page 23.

Step #4 Create the Interplay MAM Cache Volume

In this step you create the RAID 5 cache volume for ICPS deployed for Interplay MAM on HP DL360 hardware.

1. Create a disk partition:
   fdisk /dev/sdb
2. Create a physical volume:
   pvcreate --metadatasize=64k /dev/sdb1
3. Create a volume group:
   vgcreate -s 256K -M 2 vg_icps_cache /dev/sdb1
4. Obtain the number of available physical extents:
   vgdisplay vg_icps_cache
   A list of properties for the volume group appears, including the physical extents (PE). Use this value to create the logical volume (below).
5. Create the logical volume:
   lvcreate -l <available_pes> -n lv_icps_cache vg_icps_cache
   <available_pes> is the value obtained above.
6. Format the volume:
   mkfs.ext4 /dev/vg_icps_cache/lv_icps_cache
7. Add the entry to /etc/fstab:
   /dev/mapper/vg_icps_cache-lv_icps_cache /cache ext4 rw 0 0
8. Mount the volume:
   mount /cache

Step #5 Create the Cluster Cache

Once you have set up the server cluster, you can configure a shared cache for the cluster. This is done using GlusterFS, an open source software solution for creating shared filesystems. In ICPS installations with multiple ICPS servers arranged as a cluster, it is used to allow cache-sharing amongst the ICPS servers. GlusterFS creates a virtual drive acting as the cache for all ICPS servers in the cluster.

Recall that the ICPS server transcodes media from the format in which it is stored on the ISIS (or standard FS storage) into an alternate delivery format, such as an FLV or MPEG-2 Transport Stream. In a deployment with a single ICPS server, the ICPS server maintains a local cache where it stores recently transcoded media. In the event that the same media is requested again, the ICPS server can deliver the cached media without the need to re-transcode it.

In a high-availability and load-balanced configuration, the caches maintained by the ICPS servers in a cluster are co-located on the virtual shared drive maintained by GlusterFS. Thus, each ICPS server sees and has access to all the transcoded media in the pooled cache. When any particular ICPS server transcodes media into the FLV format streamed to the ICPS Player, the other ICPS servers can make use of it without re-transcoding.

Note: The correct functioning of the cluster cache requires that the clocks on each server in the cluster are synchronized. Clock synchronization was performed in Setting the System Clock, under Step #1 Setting Up the HP Server Hardware.

Before You Begin

Make sure you have the files needed for the installation of GlusterFS.
- The following package can be found on the Red Hat DVD:
  compat-libtermcap el6.x86_64.rpm
- Download the following packages from the GlusterFS web site:
  glusterfs-core x86_64.rpm
  glusterfs-fuse x86_64.rpm
  glusterfs-geo-replication x86_64.rpm

On All Servers in the Cluster

Install the software components needed for GlusterFS and start the service. Next, create the cache folders.

To install the software and start the service:

1. Install the compat-libtermcap package:
   rpm -Uvh compat-libtermcap el6.x86_64.rpm
2. Install the GlusterFS packages in the following order (as root):
   rpm -Uvh glusterfs-core x86_64.rpm
   rpm -Uvh glusterfs-fuse x86_64.rpm
   rpm -Uvh glusterfs-geo-replication x86_64.rpm
3. Ensure GlusterFS is started:
   service glusterd status
4. If not, start the service:
   service glusterd start

To create the cache folders:

Create the physical folders where the original data will reside on each server:

   mkdir -p /gluster/gluster_data_download
   mkdir -p /gluster/gluster_data_fl_cache
   mkdir -p /gluster/gluster_data_metadata

On One Server in the Cluster

With GlusterFS installed and running on each ICPS server in the cluster, create the shared storage pool by joining the clustered servers together. This is done using a GlusterFS command of the following form:

   gluster peer probe <server>

The above command joins the server on which it is issued to the one named in the command (<server>). It must be issued once for each of the other servers in the cluster. In GlusterFS there is no need to self-join. For example, consider an ICPS server cluster consisting of three servers: server-1, server-2 and server-3. To create the GlusterFS pool from server-1, you would issue the following commands:

   gluster peer probe server-2
   gluster peer probe server-3

To create the shared storage pool:

Note: These steps must be completed (as root) on one server in the cluster. It doesn't matter which one.
1. Ensure connectivity by pinging each server you want to join:
   ping <server-name>
2. Form the pool of shared storage:
   gluster peer probe <server-name1>
   gluster peer probe <server-name2>
   gluster peer probe <server-name3>

   Note: Do not self-probe the local host.

3. For each successful join, the system responds as follows:
   Probe successful
4. Verify peer status:
   gluster peer status
   The system responds by indicating the number of peers, their host names, connection status, and other information.

On One Server in the Cluster

On any server in the cluster, create GlusterFS volumes for the physical folders already created, and start the volumes.

1. Create the corresponding GlusterFS volumes for the physical folders already created. This step should be done for each original data source, but only once on any server:

   gluster volume create gluster-cache replica [n-servers] transport tcp [server1]:/gluster_mirror_data/ [server2]:/gluster_mirror_data/ [...]

   For example, for a cluster consisting of two servers, you would issue commands similar to the following:

   gluster volume create gl-cache-dl replica 2 transport tcp ${SERVER1}:/gluster/gluster_data_download ${SERVER2}:/gluster/gluster_data_download
   gluster volume create gl-cache-fl replica 2 transport tcp ${SERVER1}:/gluster/gluster_data_fl_cache ${SERVER2}:/gluster/gluster_data_fl_cache
   gluster volume create gl-cache-md replica 2 transport tcp ${SERVER1}:/gluster/gluster_data_metadata ${SERVER2}:/gluster/gluster_data_metadata

   where ${SERVER1} and ${SERVER2} are the names of the servers in the cluster.

2. Start the GlusterFS volumes. This step should be done only once, on the server where the volumes were created:

   gluster volume start gl-cache-dl
   gluster volume start gl-cache-fl
   gluster volume start gl-cache-md
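The replica count passed to gluster volume create must equal the number of servers contributing bricks. As a sanity check, you can assemble the full command line before running it. The following is a minimal sketch, assuming hypothetical hostnames server-1 through server-3 and the download brick path used above; it only echoes the command, it does not execute it:

```shell
# Build (but do not run) the volume-create command for an N-node cluster.
# Hostnames are placeholders; substitute your own before executing.
SERVERS="server-1 server-2 server-3"
BRICK=/gluster/gluster_data_download
REPLICA=$(echo $SERVERS | wc -w)   # replica count = number of bricks/servers
CMD="gluster volume create gl-cache-dl replica $REPLICA transport tcp"
for s in $SERVERS; do
  CMD="$CMD $s:$BRICK"
done
echo "$CMD"
```

Running the echoed command on one node (with the peers already probed) would create the replicated download-cache volume; repeat with the fl_cache and metadata brick paths for the other two volumes.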
On All Servers in the Cluster

Finally, configure the local cache on each server in the cluster.

Note: If you have already installed ICPS, the folders have already been created and correct ownership has been assigned.

1. Create the following cache folders:
   mkdir /cache/download
   mkdir /cache/fl_cache
   mkdir /cache/metadata
2. Change ownership of the following two folders (the original data folders, not the cache folders created above):
   chown maxmin:maxmin /gluster/gluster_data_download
   chown maxmin:maxmin /gluster/gluster_data_fl_cache
3. Mount the folders like any other standard mount, specifying the type as glusterfs.
   Note: In the commands below, the server name (SERVER1) is provided as an example.
   mount -t glusterfs ${SERVER1}:/gl-cache-dl /cache/download
   mount -t glusterfs ${SERVER1}:/gl-cache-fl /cache/fl_cache
   mount -t glusterfs ${SERVER1}:/gl-cache-md /cache/metadata
4. Add entries to /etc/fstab to automount the folders:
   ${SERVER1}:/gl-cache-dl /cache/download fuse.glusterfs rw,allow_other,default_permissions,max_read=
   ${SERVER1}:/gl-cache-fl /cache/fl_cache fuse.glusterfs rw,allow_other,default_permissions,max_read=
   ${SERVER1}:/gl-cache-md /cache/metadata fuse.glusterfs rw,allow_other,default_permissions,max_read=

Test the Cache

Test the cache setup by writing a file to one of the GlusterFS cache folders (e.g. /cache/download) on one server and making sure it appears on the other servers.

Monitor the Cluster

For information on monitoring the cluster, see Step #7 Post-Installation Steps on page 27.

Step #6a Configure ICPS for Interplay MAM

For ICPS to play Interplay MAM media, the path to the filesystem containing the MAM proxies must be mounted on the ICPS servers. The mounting is done at the level of the OS, using the standard Linux command for mounting volumes (mount). To automate the mounting of the MAM filesystem, create an entry in /etc/fstab.
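For example, if a MAM essence pool were exported over NFS, the /etc/fstab entry might look like the following. The host name mam-proxies, export path /export/proxies, and mount point /mnt/mam_proxies are purely illustrative; substitute the actual path taken from the MAM configuration:

```
# Hypothetical /etc/fstab entry for an NFS-hosted MAM proxy store
mam-proxies:/export/proxies  /mnt/mam_proxies  nfs  defaults  0 0
```

After adding the entry, mounting the new path (or rebooting) brings the proxy storage online; repeat for each essence pool that ICPS must reach.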
To determine the correct path to be mounted, examine the path associated with the MAM essence pool to which ICPS is being given access. This is found in the Interplay MAM
Administrator interface under the Essence Management Configuration tab. Look for the MORPHEUS entry and tease out the path information. It is likely that ICPS has been given access to more than one MAM essence pool. Be sure to mount all the associated filesystems.

Note: Configuration must also take place on the Interplay MAM side: to set up permissions for ICPS to access MAM storage, to point Interplay MAM to the ICPS server or server cluster, and so on. For instructions on this aspect of setup and configuration, please refer to the Interplay MAM documentation.

Note: This step can be performed at any time during the installation.

Configure Port Bonding for Interplay MAM (Optional)

Port bonding (also called link aggregation) is an OS-level technique for combining multiple Ethernet ports into a group, making them appear and behave as a single port. In ICPS, port bonding is configured in round-robin mode. In this mode, Ethernet packets are automatically sent, in turn, to each of the bonded ports, reducing bottlenecks and increasing the available bandwidth. For example, bonding two ports together increases bandwidth by approximately 50% (some efficiency is lost due to overhead).

In ICPS, port bonding improves playback performance when multiple clients are making requests of the ICPS server simultaneously.

Note: Port bonding is only possible for Interplay MAM deployments.

Note: Do not configure port bonding if you have already set up an ICPS server cluster.

To configure port bonding for Interplay MAM:

1. Add port bonding configuration information to each NIC device's ifcfg-<device> file (e.g. ifcfg-eth0, ifcfg-eth1, etc.) in the /etc/sysconfig/network-scripts directory. The file should look something like this:

    DEVICE=eth0
    USERCTL=no
    ONBOOT=yes
    MASTER=bond0
    SLAVE=yes
    BOOTPROTO=static

DEVICE=eth0 specifies the name of the physical Ethernet interface device. This line is different for each device and must correspond to the name of the file itself.
MASTER=bond0 specifies the name of the port bonding interface. This must be the same in each network script file in the port-bonded group.

2. Create a port bonding network script in the same directory:

    /etc/sysconfig/network-scripts/ifcfg-bond0

where bond0 is the name of the port-bonding group.

3. The contents of the file should resemble the following:
    DEVICE=bond0
    ONBOOT=yes
    BOOTPROTO=static
    USERCTL=no
    BONDING_OPTS="mode=0"

DEVICE=bond0 specifies the name of the port bonding group interface. It must correspond to the name of the file itself.

BOOTPROTO determines how the device's IP address is assigned. It can be static (recommended), in which case you assign the IP address explicitly, or dhcp, in which case the OS assigns it dynamically. If you assign the IP address statically, you must also include IPADDR and NETMASK entries.

BONDING_OPTS="mode=0" specifies the type of port bonding (mode=0 specifies round-robin).

4. Restart the network service (as root):

    /etc/init.d/network restart

Step #6b Configure ICPS for Interplay Central

Now that you have installed the operating system and the ICPS software components, you have to configure the ICPS server(s). ICPS configuration includes, for each server:

- Logging into the configuration portal
- Configuring Interplay
- Configuring the ISIS connection

If you are configuring an ICPS cluster, you must also restart the cluster synchronization services on all nodes.

Log Into Portal

ICPS servers are configured using a web-based configuration portal. You need access to a computer on the network with access to the ICPS server(s) you are configuring, and a web browser. You may also want to make some changes to the ICPS administrator's account (e.g. change the password). We recommend using Google Chrome, but any browser is supported.

To log into the portal:

1. Launch your web browser and, in the address bar, do one of the following:
   - Enter the address of the server, where <hostname> is the host name of the ICPS server (if you have only a single server)
   - Enter the address of the cluster, where <cluster-ip> is the IP address you provisioned for the ICPS cluster

The ICPS configuration portal login screen appears.

2. Log in with the administrator credentials (case-sensitive):

    User name: Administrator
    Password: Avid123

First-time login takes you to the Interplay tab.

3. If you want to change the administrator account password for the portal, click Change Password at the top of the page to view the user profile settings.
4. Enter a new password.
5. Re-enter the new password.
6. Click Submit (or Cancel).

A message appears indicating that the password was successfully changed. You may now proceed to configure ICPS.

Configure Interplay

ICPS works with Interplay. Although Interplay Central users log in with their own credentials and use their own Interplay credentials to browse media assets, ICPS uses a separate set of Interplay credentials to resolve playback requests and to check in voice-over assets recorded by Interplay Central users.

Before you begin: ICPS requires a unique set of user credentials that you must create as an Interplay administrator. The user credentials should have the following attributes:

- The credentials should not be shared with any human users
- Permission to read all folders in the workgroup
- We recommend using a name that indicates the purpose of the user credentials, e.g. icps-interplay

In this procedure you configure the user credentials and workgroup properties required by ICPS.

To configure Interplay:

1. Click the Interplay tab.
2. Configure Interplay credentials:
   a. Enter the host name of the Interplay Engine server.
   b. Enter the name of the Interplay user reserved for ICPS.
   c. Enter the password for that user.
3. Configure Workgroup Properties:
   a. Enter the host name for the Media Indexer.
Note: If the Interplay Media Indexer is connected to a High Availability Group (HAG), enter the host name of the active Media Indexer.

   b. Enter the Interplay Workgroup name. This is case-sensitive; use the same case as defined in the Interplay Engine.
   c. Enter the host name for the lookup server(s).
4. Enable dynamic relink. Dynamic relink is required for multi-resolution workflows.
5. Click Save.

This stops all servers and reconfigures them. Please be patient, since the process can take some time.

Configure ISIS

ICPS works with ISIS storage. ICPS uses a separate set of ISIS credentials to read media assets for playback and to write audio assets for voice-overs recorded by Interplay Central users.

Before you begin: ICPS requires a unique set of user credentials that you must create as an ISIS administrator. The user credentials should have the following attributes:

- The credentials should not be shared with any human users
- Permission to read all workspaces, and to write to the workspace flagged as the VO (voice-over) workspace
- We recommend using a name that indicates the purpose of the user credentials, e.g. icps-isis

In this procedure you configure the ISIS host and the user credentials required by ICPS. In some network configuration scenarios, additional settings may be required.

To configure ISIS:

1. Click the ISIS tab.
2. Configure ISIS credentials:
   a. Enter the host name of the ISIS.
   b. Enter the name of the ISIS user reserved for ICPS.
   c. Enter the password for that user.
3. If your connection to ISIS is via Zone 2 (through a switch, as opposed to a Zone 1 direct connection), enable Remote Host, and then enter the IP addresses of the ISIS System Directors.
4. Normally, the only network connection for the ICPS server is a single GigE or 10GigE connection. This is both the connection to the ISIS and the connection to the network for outbound compressed playback media.
If you have other network connections, you must indicate which network connections are used by ISIS as opposed to other network activity. Do the following:
   a. Enter the network device ID (usually eth0) used by ISIS.
   b. Enter all other active network devices (e.g. eth1, eth2, etc.) not used by ISIS.
5. Choose the Client Mode option for your setup:
   a. If the ICPS server is connected to ISIS via GigE, select GigE.
   b. If the ICPS server is connected to ISIS via 10GigE, select 10GigE.
6. Click Save.

Restart Cluster Synchronization Services on All Nodes

Although you updated the settings on the master node in the cluster by logging into the cluster IP address, the changes you made must propagate to the other nodes in the cluster. You do this by restarting corosync on all nodes in the cluster.

To restart cluster synchronization services, on each node in the cluster:

1. Log in as root.
2. Type:

    service corosync restart

Configure Wi-Fi Only Encoding for Facility-Based iPads (Optional)

By default, ICPS servers encode three different media streams for Interplay Central applications detected on iPads: for Wi-Fi, Edge, and 3G connections. For Wi-Fi-only facilities, it is recommended that you disable the Edge and 3G streams to improve the encoding capacity of the ICPS servers.

To disable the Edge and 3G streams:

1. Log in as root and edit the following file using a text editor (such as vi):

    /usr/maxt/maxedit/share/mpegpresets/mpeg2ts.mpegpresets

2. In each of the [Edge] and [3G] areas, set the active parameter to active=0.
3. Save and close the file.

Step #7 Post-Installation Steps

The procedures in this section are helpful in verifying the success of the installation, and in preparing for post-installation management of the logs generated by ICPS.
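The [Edge] and [3G] edit described under "Configure Wi-Fi Only Encoding" above can also be scripted. The sketch below assumes the .mpegpresets file is INI-style, with an active parameter inside each bracketed section; that format is inferred from the step above, not taken from Avid documentation:

```python
# Hedged sketch: set active=0 in the [Edge] and [3G] sections of an
# INI-style presets file, leaving every other section untouched.
def disable_sections(text, sections=("Edge", "3G")):
    out, current = [], None
    for line in text.splitlines():
        stripped = line.strip()
        if stripped.startswith("[") and stripped.endswith("]"):
            current = stripped[1:-1]          # track the current section
        elif current in sections and stripped.startswith("active="):
            line = "active=0"                 # disable this stream
        out.append(line)
    return "\n".join(out)

sample = "[Wi-Fi]\nactive=1\n[Edge]\nactive=1\n[3G]\nactive=1"
print(disable_sections(sample))  # Edge and 3G become active=0; Wi-Fi is untouched
```

In practice you would read /usr/maxt/maxedit/share/mpegpresets/mpeg2ts.mpegpresets, pass its contents through this function, and write the result back (after taking a backup copy).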
Monitoring ICPS High-Availability and Load Balancing

If you have configured a highly available and load-balanced ICPS cluster, use the following commands to monitor the cluster for problems and, if necessary, resolve them. If the following procedure does not resolve problems with the ICPS cluster, please contact an Avid representative.

To monitor the status of the cluster:

Enter the following command as root:

    crm_mon

This returns the status of services on all nodes. Error messages may appear. A properly running cluster of two nodes will return output similar to the following:

    ============
    Last updated: Thu Dec 1 15:45:
    Stack: openais
    Current DC: icps_01 - partition with quorum
    Version: f059ec7ced7a86f18e5490b67ebf4a0b963bccfe
    2 Nodes configured, 2 expected votes
    6 Resources configured.
    ============
    Online: [ icps_01 icps_02 ]
    Clone Set: AvidConnectivityMonEverywhere
        Started: [ icps_01 icps_02 ]
    AvidClusterIP (ocf::heartbeat:IPaddr2): Started icps_01
    AvidClusterMon (lsb:avid-monitor): Started icps_01
    Clone Set: AvidClusterDbSyncEverywhere
        Started: [ icps_02 icps_01 ]
    Clone Set: pgsqldbEverywhere
        Started: [ icps_02 icps_01 ]
    Clone Set: AvidAllEverywhere
        Started: [ icps_02 icps_01 ]

To reset the cluster:

If you see errors in the crm_mon report about services not starting properly, enter the following (as root):

    /usr/maxt/maxedit/cluster/resources/cluster rsc-cleanup

Retrieve ICPS Logs

This step is not required at installation, but as you use Interplay Central you may encounter performance problems or playback failures. You should report these occurrences to an Avid representative. Avid may ask you to retrieve system and component logs from your ICPS server(s).

To retrieve ICPS logs:

1. Launch your web browser and, in the address bar, do one of the following:
   - Enter the address of the server, where <hostname> is the host name of the ICPS server (if you have only a single server)
   - Enter the address of the cluster, where <cluster-ip> is the IP address you provisioned for the ICPS cluster

The ICPS configuration portal login screen appears.

2. Log in with the administrator credentials (case-sensitive):

    User name: Administrator
    Password: Avid123

If this is the first time you are logging in, you are taken to the Interplay tab, where you can change the administrator password.

3. Otherwise, click the Logs tab.
4. Check the box next to the log(s) you want to retrieve.
5. Choose All or Current:
   - Choose All to retrieve all logs of the corresponding type.
   - Choose Current to retrieve only the latest log since the last server restart.
6. Click Download Logs.
7. The logs are downloaded to the computer you are using as a .zip file. Inside the .zip file are .log files, one for each requested log.

Log Cycling

Like other Linux logs, the ICPS server logs are stored under the /var/log directory, in /var/log/avid. Logs are automatically rotated on a daily basis as specified in /etc/logrotate.conf.
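As a point of reference, a daily-rotation stanza of the kind logrotate uses looks like the following. This is a generic illustration only, not the actual Avid-supplied configuration, and the retention count is a placeholder:

```
/var/log/avid/*.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
}
```

Stanzas like this normally live in /etc/logrotate.conf or in a file under /etc/logrotate.d/; consult the files on the ICPS server for the settings actually in effect.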
PART III: Installing ICPS on Non-HP Server Hardware

The following table provides time estimates for each of the main installation steps.

    Task                                                                       Approximate Time Needed
    Step #1 Setting Up the Non-HP Server Hardware                              1 hr
    Step #2 Installing RHEL on Non-HP Servers                                  40 min
    Step #3 Installing ICPS on Non-HP Servers                                  10 min
    Step #4 Setting up the High-Availability and Load-Balanced Server Cluster  20 min
    Step #5 Create the Interplay MAM Cache Volume                              10 min
    Step #6 Create the Cluster Cache                                           20 min
    Step #7 Configure ICPS for Interplay MAM                                   20 min
    Step #8 Post-Installation Steps                                            10 min
    Total:                                                                     3 hr 10 min
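Reading the estimates in order (Step #1 = 1 hr, Step #2 = 40 min, and so on), the per-step figures do sum to the stated total; a quick check:

```python
# Sanity check: the eight per-step time estimates sum to the stated
# total of 3 hr 10 min.
step_minutes = [60, 40, 10, 20, 10, 20, 20, 10]  # Steps #1 through #8
total = sum(step_minutes)
print(f"{total // 60} hr {total % 60} min")  # → 3 hr 10 min
```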
Step-by-Step Review of the Instructions

This section provides tips for installing RHEL and ICPS on non-HP hardware. For the most part, the steps for installing and configuring ICPS on supported HP hardware are easily generalized to non-HP hardware. The primary difference is that the express installation using a USB key cannot be followed; that is, you must install RHEL and ICPS as separate steps. In addition, there is no guarantee that the supplied RHEL kickstart (ks.cfg) file will work on non-HP hardware. However, you can examine its contents and mimic them during a manual installation, or create a kickstart file for your own hardware.

Step #1 Setting Up the Non-HP Server Hardware

Follow the instructions in Step #1 Setting Up the HP Server Hardware on page 10, noting the following:

- Connect the ICPS server to the ISIS via a Zone 1 (direct) or Zone 2 (through a switch) connection.
- We recommend a RAID 1 (mirror) volume for the system disk.
- Set the system clock before installing the OS, if possible. Otherwise, set it at the appropriate stage of the OS installation.

Step #2 Installing RHEL on Non-HP Servers

Follow the instructions in Step #2 Installing the RHEL and ICPS Software Components on page 12, modifying them as appropriate for the chosen hardware. Note the following:

- Prepare an ICPS USB key as instructed, but do not boot from it.
- Manually install RHEL 6.0. Do not install patches or updates, or upgrade to RHEL 6.1.
- Select BASIC SERVER during the RHEL installation process.
- If you will be setting up a cluster of ICPS servers, install compat-libtermcap.x86_64.rpm.
- Configure the NIC network port as eth0.
- Disable DHCP. Use a static IP address if configuring a node cluster.
- Manually remove duplicate entries and backticks from the network configuration file.

Step #3 Installing ICPS on Non-HP Servers

Untar the supplied installer file ICPS_installer_v1.2.tar.gz and run the installer:

1.
Untar and unzip the installation script file located on the USB key:

    tar zxvf ICPS_installer_v1.2.tar.gz

2. Change directories to the ICPS_installer_v1.2 folder and run the installation script:
    bash install.sh

Step #4 Setting up the High-Availability and Load-Balanced Server Cluster

Follow the instructions in Step #3 Set up the High-Availability and Load-Balanced Server Cluster on page 15.

Step #5 Create the Interplay MAM Cache Volume

If your server has enough drives, it is recommended that you create a RAID 5 volume for use as the cache for Interplay MAM deployments. For guidelines, see the instructions in Step #4 Create the Interplay MAM Cache Volume on page 18.

Step #6 Create the Cluster Cache

Follow the instructions in Step #5 Create the Cluster Cache on page 19.

Step #7 Configure ICPS for Interplay MAM

Follow the instructions in Step #6a Configure ICPS for Interplay MAM on page 22.

Step #8 Post-Installation Steps

Follow the instructions in Step #7 Post-Installation Steps on page 27.
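Step #2 above suggests creating a kickstart file for your own hardware. As a generic illustration only (this is not the Avid-supplied ks.cfg, and every value shown is a placeholder to be replaced with your own site settings), a minimal RHEL 6 kickstart resembles the following:

```
install
text
lang en_US.UTF-8
keyboard us
timezone --utc America/New_York
rootpw changeme
network --device eth0 --bootproto static --ip 192.168.1.10 --netmask 255.255.255.0
bootloader --location=mbr
clearpart --all --initlabel
autopart
%packages
@base
%end
```

Compare any kickstart you write against the contents of the supplied ks.cfg so the package selection and partitioning match what ICPS expects.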
Appendix A: Frequently Asked Questions

Q. What hardware is supported for ICPS servers?
A. The hardware supported depends on the installation model:
   - Interplay Central: HP DL380 server hardware only
   - Interplay MAM: HP DL360 and other hardware (contact your Avid representative for details)

Q. Is ISIS the only storage supported?
A. No. ISIS storage is required for ICPS deployed for Interplay Central. ICPS deployed for Interplay MAM supports all standard filesystems that can be mounted by a Linux server (XFS, NFS, etc.). This includes proprietary filesystems that are able to expose themselves as standard filesystems.

Q. Can ICPS support both Interplay Central and Interplay MAM at the same time?
A. Yes. In this case it is recommended that you set up an ICPS server cluster to handle the additional load.

Q. Under what circumstances can the USB key be used to perform the installation?
A. The USB key can be used for installations on supported HP hardware only.

Q. What is the advantage of setting up an ICPS server cluster?
A. Properly configured, an ICPS server cluster provides the following:
   - Load balancing. All incoming playback connections are routed to a cluster IP address, and are subsequently distributed evenly to the nodes in the cluster.
   - High-availability. If any node in the cluster fails, connections to that node are automatically redirected to another node.
   - Shared cache. The media transcoded by one node in the cluster is immediately available for use by the other nodes.
   - Cluster monitoring. You can monitor the status of the cluster by entering a command. If a node fails (or if any other serious problem is detected by the cluster monitoring service), an e-mail is sent to one or more addresses.
More informationDeploying a Virtual Machine (Instance) using a Template via CloudStack UI in v4.5.x (procedure valid until Oct 2015)
Deploying a Virtual Machine (Instance) using a Template via CloudStack UI in v4.5.x (procedure valid until Oct 2015) Access CloudStack web interface via: Internal access links: http://cloudstack.doc.ic.ac.uk
More informationMcAfee Firewall Enterprise
Hardware Guide Revision C McAfee Firewall Enterprise S1104, S2008, S3008 The McAfee Firewall Enterprise Hardware Product Guide describes the features and capabilities of appliance models S1104, S2008,
More informationGetting Started. Websense V10000 Appliance. v1.1
Getting Started Websense V10000 Appliance v1.1 1996 2009, Websense, Inc. 10240 Sorrento Valley Rd., San Diego, CA 92121, USA All rights reserved. Published 2009 Revision C Printed in the United States
More informationMoxa Device Manager 2.0 User s Guide
First Edition, March 2009 www.moxa.com/product 2009 Moxa Inc. All rights reserved. Reproduction without permission is prohibited. Moxa Device Manager 2.0 User Guide The software described in this manual
More informationTimeIPS Server. IPS256T Virtual Machine. Installation Guide
TimeIPS Server IPS256T Virtual Machine Installation Guide TimeIPS License Notification The terms and conditions applicable to the license of the TimeIPS software, sale of TimeIPS hardware and the provision
More informationREQUIREMENTS AND INSTALLATION OF THE NEFSIS DEDICATED SERVER
NEFSIS TRAINING SERIES Nefsis Dedicated Server version 5.1.0.XXX Requirements and Implementation Guide (Rev 4-10209) REQUIREMENTS AND INSTALLATION OF THE NEFSIS DEDICATED SERVER Nefsis Training Series
More informationGetting Started with ESXi Embedded
ESXi 4.1 Embedded vcenter Server 4.1 This document supports the version of each product listed and supports all subsequent versions until the document is replaced by a new edition. To check for more recent
More informationistorage Server: High-Availability iscsi SAN for Windows Server 2008 & Hyper-V Clustering
istorage Server: High-Availability iscsi SAN for Windows Server 2008 & Hyper-V Clustering Tuesday, Feb 21 st, 2012 KernSafe Technologies, Inc. www.kernsafe.com Copyright KernSafe Technologies 2006-2012.
More informationREADYNAS INSTANT STORAGE. Quick Installation Guide
READYNAS INSTANT STORAGE Quick Installation Guide Table of Contents Step 1 Connect to FrontView Setup Wizard 3 Installing RAIDar on Windows 3 Installing RAIDar on Mac OS X 3 Installing RAIDar on Linux
More informationCloud.com CloudStack Community Edition 2.1 Beta Installation Guide
Cloud.com CloudStack Community Edition 2.1 Beta Installation Guide July 2010 1 Specifications are subject to change without notice. The Cloud.com logo, Cloud.com, Hypervisor Attached Storage, HAS, Hypervisor
More informationVirtual Appliance Setup Guide
The Virtual Appliance includes the same powerful technology and simple Web based user interface found on the Barracuda Web Application Firewall hardware appliance. It is designed for easy deployment on
More informationOperating System Installation Guide
Operating System Installation Guide This guide provides instructions on the following: Installing the Windows Server 2008 operating systems on page 1 Installing the Windows Small Business Server 2011 operating
More informationUltraBac Documentation. UBDR Gold. Administrator Guide UBDR Gold v8.0
UltraBac Documentation UBDR Gold Bare Metal Disaster Recovery Administrator Guide UBDR Gold v8.0 UBDR Administrator Guide UBDR Gold v8.0 The software described in this guide is furnished under a license
More informationWhatsUp Gold v16.1 Installation and Configuration Guide
WhatsUp Gold v16.1 Installation and Configuration Guide Contents Installing and Configuring Ipswitch WhatsUp Gold v16.1 using WhatsUp Setup Installing WhatsUp Gold using WhatsUp Setup... 1 Security guidelines
More informationVirtual CD v10. Network Management Server Manual. H+H Software GmbH
Virtual CD v10 Network Management Server Manual H+H Software GmbH Table of Contents Table of Contents Introduction 1 Legal Notices... 2 What Virtual CD NMS can do for you... 3 New Features in Virtual
More informationReadyNAS Duo Setup Manual
ReadyNAS Duo Setup Manual NETGEAR, Inc. 4500 Great America Parkway Santa Clara, CA 95054 USA February 2008 208-10215-01 v1.0 2008 by NETGEAR, Inc. All rights reserved. Trademarks NETGEAR, the NETGEAR logo,
More informationCONNECT-TO-CHOP USER GUIDE
CONNECT-TO-CHOP USER GUIDE VERSION V8 Table of Contents 1 Overview... 3 2 Requirements... 3 2.1 Security... 3 2.2 Computer... 3 2.3 Application... 3 2.3.1 Web Browser... 3 2.3.2 Prerequisites... 3 3 Logon...
More informationWhatsUp Gold v16.3 Installation and Configuration Guide
WhatsUp Gold v16.3 Installation and Configuration Guide Contents Installing and Configuring WhatsUp Gold using WhatsUp Setup Installation Overview... 1 Overview... 1 Security considerations... 2 Standard
More informationVess A2000 Series. NVR Storage Appliance. Windows Recovery Instructions. Version 1.0. 2014 PROMISE Technology, Inc. All Rights Reserved.
Vess A2000 Series NVR Storage Appliance Windows Recovery Instructions Version 1.0 2014 PROMISE Technology, Inc. All Rights Reserved. Contents Introduction 1 Different ways to backup the system disk 2 Before
More informationINUVIKA TECHNICAL GUIDE
--------------------------------------------------------------------------------------------------- INUVIKA TECHNICAL GUIDE FILE SERVER HIGH AVAILABILITY OVD Enterprise External Document Version 1.0 Published
More informationVirtual Managment Appliance Setup Guide
Virtual Managment Appliance Setup Guide 2 Sophos Installing a Virtual Appliance Installing a Virtual Appliance As an alternative to the hardware-based version of the Sophos Web Appliance, you can deploy
More informationF-Secure Internet Gatekeeper Virtual Appliance
F-Secure Internet Gatekeeper Virtual Appliance F-Secure Internet Gatekeeper Virtual Appliance TOC 2 Contents Chapter 1: Welcome to F-Secure Internet Gatekeeper Virtual Appliance.3 Chapter 2: Deployment...4
More informationHow To Install Extreme Security On A Computer Or Network Device
Extreme Networks Security Installation Guide 9034857 Published July 2015 Copyright 2011 2015 All rights reserved. Legal Notice Extreme Networks, Inc. reserves the right to make changes in specifications
More informationSetup Cisco Call Manager on VMware
created by: Rainer Bemsel Version 1.0 Dated: July/09/2011 The purpose of this document is to provide the necessary steps to setup a Cisco Call Manager to run on VMware. I ve been researching for a while
More informationCentralized Mac Home Directories On Windows Servers: Using Windows To Serve The Mac
Making it easy to deploy, integrate and manage Macs, iphones and ipads in a Windows environment. Centralized Mac Home Directories On Windows Servers: Using Windows To Serve The Mac 2011 ENTERPRISE DEVICE
More informationDell EqualLogic Red Hat Enterprise Linux 6.2 Boot from SAN
Dell EqualLogic Red Hat Enterprise Linux 6.2 Boot from SAN A Dell EqualLogic best practices technical white paper Storage Infrastructure and Solutions Engineering Dell Product Group November 2012 2012
More informationTeam Foundation Server 2013 Installation Guide
Team Foundation Server 2013 Installation Guide Page 1 of 164 Team Foundation Server 2013 Installation Guide Benjamin Day benday@benday.com v1.1.0 May 28, 2014 Team Foundation Server 2013 Installation Guide
More informationhttp://docs.trendmicro.com
Trend Micro Incorporated reserves the right to make changes to this document and to the products described herein without notice. Before installing and using the product, please review the readme files,
More informationAltor Virtual Network Security Analyzer v1.0 Installation Guide
Altor Virtual Network Security Analyzer v1.0 Installation Guide The Altor Virtual Network Security Analyzer (VNSA) application is deployed as Virtual Appliance running on VMware ESX servers. A single Altor
More informationQuick Start Guide for Linux Based Recovery
Cristie Bare Machine Recovery Quick Start Guide for Linux Based Recovery June 2007 Cristie Data Products Ltd Cristie Data Products GmbH Cristie Nordic AB New Mill Nordring 53-55 Gamla Värmdövägen 4 Chestnut
More informationUNICORN 7.0. Administration and Technical Manual
UNICORN 7.0 Administration and Technical Manual Page intentionally left blank Table of Contents Table of Contents 1 Introduction... 1.1 Administrator functions overview... 1.2 Network terms and concepts...
More informationScholastic Reading Inventory Installation Guide
Scholastic Reading Inventory Installation Guide For use with Scholastic Reading Inventory version 2.0.1 or later and SAM version 2.0.2 or later Copyright 2011 by Scholastic Inc. All rights reserved. Published
More informationDual Bay Home Media Store. User Manual
Dual Bay Home Media Store User Manual CH3HNAS2 V1.0 CONTENTS Chapter 1: Home Page... 3 Setup Wizard... 3 Settings... 3 User Management... 3 Download Station... 3 Online User Manual... 3 Support... 3 Chapter
More informationhttp://docs.trendmicro.com
Trend Micro Incorporated reserves the right to make changes to this document and to the products described herein without notice. Before installing and using the product, please review the readme files,
More informationNOC PS manual. Copyright Maxnet 2009 2015 All rights reserved. Page 1/45 NOC-PS Manuel EN version 1.3
NOC PS manual Copyright Maxnet 2009 2015 All rights reserved Page 1/45 Table of contents Installation...3 System requirements...3 Network setup...5 Installation under Vmware Vsphere...8 Installation under
More informationNetwork Monitoring User Guide Pulse Appliance
Network Monitoring User Guide Pulse Appliance 2007 Belkin Corporation. All rights reserved. F1DUXXX All trade names are registered trademarks of respective manufacturers listed. Table of Contents Pulse
More informationAqua Connect Load Balancer User Manual (Mac)
Aqua Connect Load Balancer User Manual (Mac) Table of Contents About Aqua Connect Load Balancer... 3 System Requirements... 4 Hardware... 4 Software... 4 Installing the Load Balancer... 5 Configuration...
More informationHillstone StoneOS User Manual Hillstone Unified Intelligence Firewall Installation Manual
Hillstone StoneOS User Manual Hillstone Unified Intelligence Firewall Installation Manual www.hillstonenet.com Preface Conventions Content This document follows the conventions below: CLI Tip: provides
More informationVMware Identity Manager Connector Installation and Configuration
VMware Identity Manager Connector Installation and Configuration VMware Identity Manager This document supports the version of each product listed and supports all subsequent versions until the document
More informationSmartFiler Backup Appliance User Guide 2.0
SmartFiler Backup Appliance User Guide 2.0 SmartFiler Backup Appliance User Guide 1 Table of Contents Overview... 5 Solution Overview... 5 SmartFiler Backup Appliance Overview... 5 Getting Started... 7
More informationRed Hat Linux 7.2 Installation Guide
Red Hat Linux 7.2 Installation Guide Ryan Spangler spanglerrp22@uww.edu http://ceut.uww.edu April 2002 Department of Business Education/ Computer and Network Administration Copyright Ryan Spangler 2002
More informationistorage Server: High Availability iscsi SAN for Windows Server 2012 Cluster
istorage Server: High Availability iscsi SAN for Windows Server 2012 Cluster Tuesday, December 26, 2013 KernSafe Technologies, Inc www.kernsafe.com Copyright KernSafe Technologies 2006-2013.All right reserved.
More information