Dassault Systèmes Technical Whitepaper

Implementing a High Availability ENOVIA Synchronicity DesignSync Data Manager Solution

A Solutions Whitepaper Prepared by Dassault Systèmes

Implementing a High Availability ENOVIA Synchronicity DesignSync Data Manager Solution
A Product Technical Whitepaper Prepared by Dassault Systèmes

Contents

Overview and Purpose
Prerequisites and Assumptions
System Overview
GFS Creation using iSCSI
Red Hat Cluster Suite Configuration Process
ENOVIA Synchronicity DesignSync Data Manager Installation Details
Failure Recovery Procedure
Conclusion
Appendix

Overview and Purpose

A computer cluster is a group of linked computers working together as a single unit. Cluster components are usually connected to each other through fast local area networks and are typically implemented to address one of the following four objectives:

   Storage
   Load balancing
   High performance
   High Availability (HA)

HA clusters (also known as failover clusters) are implemented primarily to improve the availability of services. They operate by taking advantage of redundant computers, called nodes, which provide service when system components fail. The most common HA cluster size is two nodes, the minimum required to provide redundancy. HA cluster implementations attempt to use redundancy of cluster components to eliminate single points of failure.

The recent advent of low-cost cluster management offerings has allowed IT organizations to adopt failover techniques for a variety of mission-critical systems, including the ENOVIA Synchronicity DesignSync Design Data Management (DDM) product from Dassault Systèmes. Two common vendors in the semiconductor industry are Veritas and Red Hat. The Red Hat Cluster Suite package (www.redhat.com/cluster_suite) was selected as the basis for this paper because of its low cost and relatively straightforward implementation. This paper provides a detailed example developed with a current semiconductor customer deploying two production-proven failover systems supporting over 400 ENOVIA Synchronicity DesignSync Data Manager users across three worldwide locations.

Prerequisites and Assumptions

The real-life customer example provided in this paper was implemented using RHEL 4 plus the Red Hat Cluster Suite package. In RHEL 4, the cluster suite package was sold at an additional charge; with RHEL 5, it is included as part of the baseline server distribution. It is recommended that customers start with RHEL 5.

   Install RHEL 5.x on both servers (use of the Xen virtualization version of the kernel is not recommended). Be sure to select installation of the cluster management software during the OS installation process, as this package is considered optional.
   Both servers must be able to communicate with each other across the LAN.
   Each server must include the iSCSI initiator package. This package can be downloaded from Red Hat Network (RHN) and installed using the rpm utility (an illustrative installation sketch follows Figure 1 below).

This example assumes a solid IT background, a working knowledge of NetApp filer configuration procedures, and ENOVIA Synchronicity DesignSync Data Manager installation experience.

Figure 1: A high-level block diagram of the cluster configuration and LAN connectivity.
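For reference, pulling the initiator package onto each node might look like the following. This is a hedged sketch only: iscsi-initiator-utils is the usual Red Hat package name, but the exact package, the update tool (up2date on RHEL 4, yum on RHEL 5), and the underlying initiator implementation (the RHEL 4 iscsi-sfnet tools are what the examples later in this paper show) vary by release.

   [root@capra3]# yum install iscsi-initiator-utils    # RHEL 5; on RHEL 4 use up2date, or rpm -ivh on the downloaded package
   [root@capra3]# chkconfig iscsi on                   # start the iSCSI initiator service at boot
   [root@capra3]# rpm -q iscsi-initiator-utils         # confirm the installed version

The same steps would be repeated on the second server (kolea).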

System Overview

The ENOVIA Synchronicity DesignSync Data Manager server in this application is considered virtual because its hostname (i.e., capra) and IP address (i.e., 137.57.246.239) do not map to real physical devices. This virtual hostname and IP address pair is published across the enterprise using DNS or an equivalent service. On the ENOVIA Synchronicity DesignSync Data Manager side, the use of a virtual hostname is made possible by the SYNC_HOSTNAME variable set during the installation process, which removes hostname dependencies. The result is a hostname-neutral ENOVIA Synchronicity DesignSync Data Manager installation working in conjunction with cluster software that controls which physical server responds to the virtual IP address.

Specific hardware includes two Sun 4100 servers and a NetApp 3020 storage appliance. Each Sun server uses two Network Interface Cards (NICs), while the NetApp requires four NICs. (See Figure 2 for details.)

                 Server 1                Server 2                 Filer
   Hostname      capra3                  kolea                    netapp3
   IP Address 1  137.57.246.75 (eth0)    137.57.246.210 (eth0)    137.57.246.X (e0a)
   IP Address 2  10.1.1.1 (eth1)         10.10.10.1 (eth1)        137.57.246.X (e0b)
   IP Address 3  n/a                     n/a                      10.1.1.2 (e0c)
   IP Address 4  n/a                     n/a                      10.10.10.2 (e0d)

Figure 2

In this hardware setup, the iSCSI connections between the netapp3 filer and the servers use crossover cables. Physical connections are shown in Figure 3.

Figure 3
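Purely as an illustration of how the virtual name might be published, a DNS A record (or a temporary /etc/hosts entry while testing) simply points the name at the cluster-managed address; the domain shown below is a placeholder, not part of the customer environment.

   # Hypothetical record in the enterprise DNS zone (or /etc/hosts during testing):
   #   capra    IN A    137.57.246.239
   [root@capra3]# host capra
   capra.example.com has address 137.57.246.239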

GFS Creation using iSCSI

In this application, the customer chose a GFS-based storage environment for the added isolation and security. (If the use of NFS-based storage is preferred, this section of the white paper is not relevant.)

o Server 1 (capra3):

1. Set the initiator name in the /etc/initiatorname.iscsi file. An iSCSI node name can also be generated using the iscsi-iname command on the Linux server. The format of the node name is:

      iqn.1987-05.com.cisco:randomnumber

   It can also be changed to iqn.1987-05.com.cisco:hostname

      [root@capra3]# more /etc/initiatorname.iscsi
      ## DO NOT EDIT OR REMOVE THIS FILE!
      ## If you remove this file, the iscsi daemon will not start.
      ## If you change the InitiatorName, existing access control lists
      ## may reject this initiator. The InitiatorName must be unique
      ## for each iscsi initiator. Do NOT duplicate iscsi InitiatorNames.
      #InitiatorName=iqn.1987-05.com.cisco:01.bb6be173bc8
      #
      InitiatorName=iqn.1987-05.com.cisco:capra3eth1

2. Edit the /etc/iscsi.conf file and add the filer IP address for the DiscoveryAddress parameter.

      # grep DiscoveryAddress /etc/iscsi.conf
      DiscoveryAddress=10.1.1.2

o Server 2 (kolea):

1. Set the initiator name in the /etc/initiatorname.iscsi file.

      [root@kolea]# more /etc/initiatorname.iscsi
      ## DO NOT EDIT OR REMOVE THIS FILE!
      ## If you remove this file, the iscsi daemon will not start.
      ## If you change the InitiatorName, existing access control lists
      ## may reject this initiator. The InitiatorName must be unique
      ## for each iscsi initiator. Do NOT duplicate iscsi InitiatorNames.
      #InitiatorName=iqn.1987-05.com.cisco:01.4d28978d171e
      #
      InitiatorName=iqn.1987-05.com.cisco:koleaeth1

2. Edit the /etc/iscsi.conf file and add the filer IP address for the DiscoveryAddress parameter.

      [root@kolea]# grep DiscoveryAddress /etc/iscsi.conf
      DiscoveryAddress=10.10.10.2

The filer configuration process involves the creation of two LUNs (Logical Unit Numbers), which requires the creation of an igroup using each server's initiator node name. Each LUN must then be mapped to the igroup for each Linux server. Be sure to specify the LUN type and iSCSI igroup type as linux. At least one LUN must be mapped as LUN 0. In our example, the filer storage appears on the servers as /dev/sdc. (An illustrative sketch of the filer-side commands appears after the verification output below.)

Once completed, start the iSCSI service on each server:

   # /etc/init.d/iscsi start

Confirm proper configuration on each server using the command below:

[root@capra3]# iscsi-ls -l
*******************************************************************************
SFNet iscsi Driver Version ...4:0.1.11-6(03-Aug-2007)
*******************************************************************************
TARGET NAME     : iqn.1992-08.com.netapp:sn.101180284
TARGET ALIAS    : netapp3.fas3020.iscsi.node
HOST ID         : 3
BUS ID          : 0
TARGET ID       : 0
TARGET ADDRESS  : 10.1.1.2:3260,1002
SESSION STATUS  : ESTABLISHED AT Tue Dec 23 14:01:44 PST 2008
SESSION ID      : ISID 00023d000001 TSIH 18
DEVICE DETAILS:
---------------
LUN ID : 0
  Vendor: NETAPP   Model: LUN   Rev: 0.2
  Type:   Direct-Access   ANSI SCSI revision: 04
  page83 type3: 60a98000433467706834443762596852
  page80: 4334677068344437625968520a
  Device: /dev/sdc

[root@kolea]# iscsi-ls -l
*******************************************************************************
SFNet iscsi Driver Version ...4:0.1.11-6(03-Aug-2007)
*******************************************************************************
TARGET NAME     : iqn.1992-08.com.netapp:sn.101180284
TARGET ALIAS    : netapp3.fas3020.iscsi.node
HOST ID         : 3
BUS ID          : 0
TARGET ID       : 0
TARGET ADDRESS  : 10.10.10.2:3260,1003
SESSION STATUS  : ESTABLISHED AT Wed Dec 24 09:18:27 PST 2008
SESSION ID      : ISID 00023d000001 TSIH 1a
DEVICE DETAILS:
---------------
LUN ID : 0
  Vendor: NETAPP   Model: LUN   Rev: 0.2
  Type:   Direct-Access   ANSI SCSI revision: 04
  page83 type3: 60a98000433467706834443762596852
  page80: 4334677068344437625968520a
  Device: /dev/sdc

Server devices can also be viewed using the fdisk -l command.
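For reference, the filer-side LUN/igroup creation described above and the creation of the GFS file system itself are sketched below. This is a hedged illustration only: the filer commands assume Data ONTAP 7-mode console syntax, and the igroup name, volume path, and LUN size are hypothetical. The GFS commands assume a partition /dev/sdc1 created with fdisk, two journals (one per node), and the cluster name Linux-Cluster defined in the next section; on RHEL 5, GFS2 (mkfs.gfs2) may be preferred. Clustered GFS with the lock_dlm protocol also requires the cluster infrastructure from the next section to be running before the file system can be mounted. The mount point /apps/syncapp matches the SYNC_DIR used later in this paper but is an assumption here.

   netapp3> igroup create -i -t linux ddm_hosts iqn.1987-05.com.cisco:capra3eth1 iqn.1987-05.com.cisco:koleaeth1
   netapp3> lun create -s 200g -t linux /vol/ddm_vol/lun0
   netapp3> lun map /vol/ddm_vol/lun0 ddm_hosts 0

   [root@capra3]# gfs_mkfs -p lock_dlm -t Linux-Cluster:syncgfs -j 2 /dev/sdc1   # run from one node only
   [root@capra3]# mkdir -p /apps/syncapp
   [root@capra3]# mount -t gfs /dev/sdc1 /apps/syncapp                           # repeat the mkdir/mount on kolea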

Red Hat Cluster Suite Configuration Process

The Red Hat Cluster Suite configuration tasks fall into three broad categories:

Cluster Nodes - The Cluster Nodes tab allows you to specify the nodes (machines) that will comprise the cluster. In our example, these are capra3 (Server 1) and kolea (Server 2), forming a two-node cluster.

Fence Devices - A fence device is the component of a cluster that cuts off access to a resource (hard disk, server, etc.) from a node that has lost contact with the rest of the nodes in the cluster. Fencing is often accomplished with a network power switch, which is a power switch that can be controlled through the network; this is known as power fencing. (Our example uses manual fencing, which means the problem resource will be dealt with through human intervention.)

Managed Resources - Managed resources include the definition of failover domains, resources, and services.

Invoke the Red Hat Cluster Suite GUI on capra3 (Server 1) from the command line:

   [root@capra3]# system-config-cluster &

On the Cluster Configuration tab, the name of our cluster is Linux-Cluster, which utilizes two nodes called capra3 and kolea (Figure 4). As explained earlier, this example uses manual fencing with a fence name of sync_fence. This simplifies the cluster operation but provides less automation (Figure 5). (Manual fencing was chosen because the customer lacked the necessary hardware to support automated fencing.)

Figure 4   Figure 5
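The GUI writes its configuration to /etc/cluster/cluster.conf, and the cluster infrastructure daemons must be running on both nodes before membership, fencing, and service management can function. The following is a hedged sketch only; the exact set of init scripts differs between RHEL 4 (where ccsd, cman, fenced, and rgmanager are started separately) and RHEL 5 (where the cman script starts most of the infrastructure).

   [root@capra3]# service cman start          # cluster membership (on RHEL 4, start ccsd before and fenced after cman)
   [root@capra3]# service rgmanager start     # resource group manager that controls the cluster services
   [root@capra3]# chkconfig cman on; chkconfig rgmanager on
   [root@capra3]# clustat                     # both capra3 and kolea should be listed as cluster members

The same commands would be run on kolea.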

In Figure 6, we have defined a failover domain called sync. The failover domain allows you to specify priorities for the various nodes comprising the cluster.

An IP Address resource panel (Figure 7) specifies the IP address the cluster will monitor as part of the virtual ENOVIA Synchronicity DesignSync Data Manager server capra referenced in Figure 1. The Figure 7 panel defines the virtual IP address (i.e., 137.57.246.239) monitored by the cluster. The ENOVIA Synchronicity DesignSync Data Manager installation references the virtual hostname mapped to this IP address, and it is this hostname that ENOVIA Synchronicity DesignSync Data Manager utilizes to create client-side metadata in user workspaces (i.e., sync://capra:2647).

Figure 6   Figure 7

Our example requires a minimum of one service. Create a service called sync and then select the Create a new resource for this service tab (Figure 8). Set the Resource Type drop-down to IP Address and enter the virtual IP address specified in the previous step. Figure 9 shows the panel after selecting OK.

Figure 8   Figure 9
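Once the sync service has been created and started, its state and the virtual address can be checked from either node, and the service can be relocated by hand as a failover test. This is a hedged sketch using the standard Red Hat Cluster Suite command-line tools; the interface name is illustrative.

   [root@capra3]# clustat                       # shows the sync service and which node currently owns it
   [root@capra3]# ip addr show eth0             # on the owning node, 137.57.246.239 is plumbed as an additional address
   [root@capra3]# clusvcadm -r sync -m kolea    # relocate the sync service (and its virtual IP) to the other node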

Figure 10 shows the Cluster Management tab for our cluster called Linux-Cluster. The final step in the configuration procedure is selecting the Send to Cluster option in the upper right-hand corner of the Cluster Configuration GUI, which copies the cluster.conf file to all systems defined as part of the cluster.

Figure 10

ENOVIA Synchronicity DesignSync Data Manager Installation Details

The Linux login for the ENOVIA Synchronicity DesignSync Data Manager installation is the traditional syncmgr account, which must be a valid NIS user. The ENOVIA Synchronicity DesignSync Data Manager installation incorporates the use of the following ENOVIA Synchronicity DesignSync Data Manager-specific variables:

   SYNC_PORT=2647
   SYNC_DIR=/apps/syncapp/current/syncinc
   SYNC_CUSTOM_DIR=/apps/syncapp/current/syncinc/custom
   SYNC_HOSTNAME=capra
   SYNC_DIR/bin (added to the path as a minimum requirement)

The key setting in this application is the SYNC_HOSTNAME variable. SYNC_HOSTNAME maps the ENOVIA Synchronicity DesignSync Data Manager installation to the virtual hostname (i.e., capra) identified as part of the cluster configuration. Licensing is handled using the LM_LICENSE_FILE environment variable, which references a machine other than the two discussed in this paper.

Failure Recovery Procedure

In the event of a cluster node failure, the sequence of steps executed by the ENOVIA Synchronicity DesignSync Data Manager Administrator is as follows:

1. Log into the healthy server (remote login is supported)
2. Source the ENOVIA Synchronicity DesignSync Data Manager environment variables
3. Execute the ENOVIA Synchronicity DesignSync Data Manager start_sync_server script
4. Start the diagnosis process of the failed machine

This process could be automated by defining another service that is executed automatically when a failure condition is detected. In fact, the customer participating in the development of this white paper now has another failover system using RHEL 5 that incorporates an automatic failover approach.
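On the healthy node, the first three recovery steps might look like the sketch below. The environment file name is hypothetical; the actual location of the variables and of the start_sync_server script depends on how the DDM installation was laid out (here it is assumed that SYNC_DIR/bin is on the syncmgr user's path, as listed above).

   [syncmgr@kolea]$ source /apps/syncapp/current/sync_env.sh   # hypothetical file exporting SYNC_PORT, SYNC_DIR, SYNC_HOSTNAME, etc.
   [syncmgr@kolea]$ start_sync_server                          # restart the DDM server; clients continue to use sync://capra:2647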

Conclusion

As mentioned in the opening section of this paper, providing High Availability (HA) Design Data Management systems is more critical than ever. In fact, with the explosion of multi-site, multi-geography design teams witnessed in the semiconductor design community over the last 6-8 years, HA systems are becoming the rule rather than the exception. The ever-increasing requirement to support a follow-the-sun development paradigm demands HA resources. In recent years, the availability of low-cost and easy-to-use cluster management applications has put the goal of cost-effective and reliable HA solutions within reach for ENOVIA Synchronicity DesignSync Data Manager customers across the globe.

Appendix

Example cluster.conf File

Shown below is the cluster.conf file for the example referenced throughout this paper:

<?xml version="1.0"?>
<cluster alias="Linux-Cluster" config_version="113" name="Linux-Cluster">
  <fence_daemon post_fail_delay="0" post_join_delay="300"/>
  <clusternodes>
    <clusternode name="capra3" nodeid="1" votes="1">
      <fence>
        <method name="1">
          <device name="sync_fence" nodename="capra3"/>
        </method>
      </fence>
    </clusternode>
    <clusternode name="kolea" nodeid="2" votes="1">
      <fence>
        <method name="1">
          <device name="sync_fence" nodename="kolea"/>
        </method>
      </fence>
    </clusternode>
  </clusternodes>
  <fencedevices>
    <fencedevice agent="fence_manual" name="sync_fence"/>
  </fencedevices>
  <rm>
    <failoverdomains>
      <failoverdomain name="sync" ordered="1" restricted="1">
        <failoverdomainnode name="capra3" priority="1"/>
        <failoverdomainnode name="kolea" priority="1"/>
      </failoverdomain>
    </failoverdomains>
    <resources>
      <ip address="137.57.246.239" monitor_link="1"/>
    </resources>
    <service autostart="1" domain="sync" name="sync">
      <ip ref="137.57.246.239"/>
    </service>
  </rm>
  <cman expected_votes="1" two_node="1"/>
</cluster>
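If this file is ever edited by hand rather than through the GUI's Send to Cluster option, the usual convention is to increment config_version and then push and activate the new version from one node. The following is a hedged sketch using the standard Cluster Suite command-line tools; verify the exact procedure against the Red Hat documentation for the release in use.

   [root@capra3]# vi /etc/cluster/cluster.conf                # edit and bump config_version (e.g., 113 -> 114)
   [root@capra3]# ccs_tool update /etc/cluster/cluster.conf   # propagate the new configuration to the other node
   [root@capra3]# cman_tool version -r 114                    # tell the cluster manager to use the new version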

About Dassault Systèmes

As a world leader in 3D and Product Lifecycle Management (PLM) solutions, Dassault Systèmes brings value to more than 100,000 customers in 80 countries. A pioneer in the 3D software market since 1981, Dassault Systèmes develops and markets PLM application software and services that support industrial processes and provide a 3D vision of the entire lifecycle of products, from conception to maintenance. The Dassault Systèmes portfolio consists of CATIA for designing the virtual product, SolidWorks for 3D mechanical design, DELMIA for virtual production, SIMULIA for virtual testing, ENOVIA for global collaborative lifecycle management, and 3DVIA for online 3D lifelike experiences. For more information, visit http://www.3ds.com.

PLM Solutions for Your Business

Contact us today to find out how leading companies worldwide are using ENOVIA PLM solutions to beat the competition and speed time-to-market.

Dassault Systèmes, 2009. All Rights Reserved.

Dassault Systèmes 900 Chelmsford Street, Lowell, Massachusetts 01851 978 442 2500 3DS.com SDSDM-TWP-0910