High Availability of the Polarion Server




Polarion Software CONCEPT
High Availability of the Polarion Server
Installing Polarion in a high availability environment

Europe, Middle-East, Africa: Polarion Software GmbH, Hedelfinger Straße 60, 70327 Stuttgart, GERMANY
Tel +49 711 489 9969-0, Fax +49 711 489 9969-20

Americas & Asia-Pacific: Polarion Software, Inc., 100 Pine Street, Suite 1250, San Francisco, CA 94111, USA
Tel +1 877 572 4005 (Toll free), Fax +1 510 814 9983

Copyright 2013 Polarion Software - Permission is granted to reproduce and redistribute this document without modifications.

Introduction

This document describes a concept for different scenarios when using Polarion in high availability (HA) environments. The goal is to ensure that Polarion meets the HA requirements listed below.

Requirements

The following requirements have been identified as critical for a high availability solution:

1. Downtime should be less than 5 minutes: if Polarion fails to run for some reason (hardware problems, hard disk, memory, etc.), the downtime should be less than 5 minutes.
2. Restarting the system should happen automatically; no manual interaction should be necessary to restart the system.
3. Updating the software (Polarion, SVN) should be done with a minimum of downtime.
4. The solution should reflect the scalability requirements of Polarion. It must allow a smooth integration of new business units when they start working with Polarion.

Polarion data storage

In a high availability environment the data storage is the most crucial part of the whole system. Since Polarion uses Subversion as its persistence storage together with an in-memory indexer, memory and data storage faults will stop the system. Polarion normally starts in less than 5 minutes, except when the index has to be created from scratch. In order to restart Polarion within 5 minutes, the data storage must be available.

The Polarion data storage consists of the following parts (paths are relative to the installation directory):

1. The Subversion repository, located in data/svn/repo.
2. The indexers, located in data/workspace/polarion-data/index, where the different indexes are stored.
3. The report repository, located in data/rr, which contains all calculations of jobs (factbase).
4. The build repository, located in data/bir, which contains information about the build processes.

These directories have to be available when Polarion is restarted; otherwise a re-index process starts, which is time consuming, and the 5-minute restart requirement cannot be met.
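As a quick sanity check before (re)starting the service, a script along the following lines could verify that all four directories are reachable. This is a minimal sketch, not part of the product: the script name is illustrative, and /opt/polarion is the default Linux installation directory mentioned later in this document.

    #!/bin/bash
    # check-polarion-data.sh -- illustrative pre-start check (not part of the product).
    # Verifies that the four Polarion data directories are reachable before a restart,
    # so that a full re-index (and the resulting long downtime) is not triggered.
    POLARION_HOME=/opt/polarion            # default Linux installation directory
    for dir in data/svn/repo \
               data/workspace/polarion-data/index \
               data/rr \
               data/bir; do
        if [ ! -d "$POLARION_HOME/$dir" ]; then
            echo "Missing: $POLARION_HOME/$dir -- do not start Polarion, check the storage mount" >&2
            exit 1
        fi
    done
    echo "All Polarion data directories are available."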

Solutions

To fulfill the high availability requirements mentioned above, two possible scenarios can be identified: either the Polarion storage is mirrored in real time, or the Polarion storage is itself a highly available (HA) storage. These are summarized in the following two chapters.

Scenario 1

In this scenario the hardware is able to duplicate the whole system (heartbeat). In a Polarion environment the HA system should be configured as active/passive, with one Polarion server running and one being mirrored. In the case of a failure, the passive node becomes active. The configuration should have two connected systems where the Polarion storage is located on the server itself. This differs from the setup described in the referenced HA cluster documentation. This scenario needs special hardware that is able to mirror both systems.

Scenario 2

In Scenario 2 the Polarion storage is kept on a SAN, which is defined as HA data storage. In this scenario Polarion is installed on a virtual machine and the Polarion storage is configured in the SAN. A snapshot of the current configuration of the primary node is stored and updated whenever the Polarion configuration changes. If the primary node fails, the secondary node is started, taking over the IP and MAC address of the primary node. End users only need to refresh their browser and reconnect to Polarion; after that they can continue their work where they left off a few minutes before. The switchover should be configured to happen automatically; this is provided by the clustering software, and vendors such as VMware have proven solutions for it. This scenario needs no extra hardware, assuming that a SAN is already in use.

Another advantage of this scenario is the ability to update the Polarion version in parallel by installing it (after verification on the test server) on another virtual machine. The data directory is defined as read-only. After starting Polarion, the index is created and maintained on the updated Polarion instance. Once the index has been created, the new Polarion version can become active with a minimum of downtime.
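For Scenario 2, the SAN volume holding the Polarion data directory has to be reachable from whichever node is currently active. The following is only a minimal sketch assuming an NFS export from the SAN; the host name san-storage, the export path, and the service name are placeholders, not taken from this document.

    # Entry to add to /etc/fstab on each Polarion node
    # (san-storage and the export path are placeholders for the actual SAN export):
    san-storage:/export/polarion-data  /opt/polarion/data  nfs  defaults,_netdev  0  0

    # On the node that takes over, mount the data volume and start Polarion
    # (service name assumed; adjust to the local init script):
    mount /opt/polarion/data
    service polarion start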

Technical solution

The following picture illustrates the architecture of Polarion.

Virtual machines

Polarion should be installed on a virtual machine, preferably running a Linux operating system. The clustering solution should be able to switch over to a different node in case there is a problem with the primary one. The standby node must obtain the same MAC/IP address as the primary node. The maximum downtime should be 5-10 minutes, so switching over means booting the new virtual machine after the cluster manager has detected a problem (loss of heartbeat).

System requirements

The virtual machines, used only by Polarion, should have the following settings depending on the size of usage:

S:   32 or 64-bit OS, 4 CPU cores, 8 (4) GB RAM dedicated to Polarion, 250 GB+ storage, < 100 projects, < 50 users
M:   64-bit OS, 8 CPU cores, 16 (8) GB RAM dedicated to Polarion, 500 GB+ storage, > 100 projects, < 150 users
L:   64-bit OS, 8 CPU cores, 32 (16) GB RAM dedicated to Polarion, 1 TB+ storage (SCSI or similar), > 300 projects, > 150 users
XL:  64-bit OS, 16 CPU cores, 64 (32) GB RAM dedicated to Polarion, 1 TB+ storage (RAID 10, NAS, SAN), 500 projects, > 300 users
XXL: 64-bit OS, 32 CPU cores, 128 (64) GB RAM dedicated to Polarion, 1 TB+ storage (RAID 10, NAS, SAN), 500 projects, > 500 users
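As an illustration, the CPU, memory, and storage actually visible inside the virtual machine can be compared against the sizing classes above with standard Linux tools; the thresholds in the comments below use the "L" class as an example.

    # Quick check of the resources visible inside the Polarion VM (example: "L" sizing)
    nproc                                          # should report at least 8 CPU cores
    free -g | awk '/Mem:/ {print $2 " GB RAM"}'    # should report at least 32 GB
    df -h /opt/polarion/data                       # should show 1 TB+ for the Polarion data storage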

The following pictures show why it is necessary to have more cores available where the Polarion server is running.

Polarion installation

Polarion is installed on a virtual machine as a normal installation. After the installation, the following files are copied from the currently productive installation; the paths are relative to the installation directory, which under Linux is /opt/polarion by default:

etc/polarion.properties
etc/config.sh
polarion/license/polarion.lic
polarion/license/users
data/svn/access
data/svn/passwd (only used for local users; not relevant when LDAP is used)
polarion/extensions/* (all subdirectories, if any extensions are installed)

The next step is to move the directory <Polarion-Install>/data to the SAN location: all files are copied from the local installation to the SAN, and the local data directory then points to the SAN directory. Then the currently productive repository should be copied to the SAN location. This can be done either by making an svn hotcopy of the repository or by dumping the repository into several svn dumps; both options are sketched at the end of this section.

Summary

The first choice is faster during the migration, because the repository is copied directly to the new SAN location. The second choice is safer, and it gives SVN the possibility to optimize the size of the repository by using the latest write algorithms; however, this may take a while depending on the size of the repository.

After the repository is on the SAN, a reindex should be performed by executing the reindex.sh shell script. The Polarion installation is then finished and the result can be tested.
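A minimal sketch of the migration steps described above, assuming the default /opt/polarion installation directory. The SAN mount point /mnt/san/polarion, the path of the productive repository, the service name, and the location of reindex.sh are placeholders; svnadmin hotcopy, dump, create, and load are the standard Subversion administration commands.

    # Stop Polarion before moving the data (service name assumed; adjust to the local setup)
    service polarion stop

    # Move the data directory to the SAN and point the local path at it
    SAN=/mnt/san/polarion                      # placeholder for the SAN mount point
    cp -a /opt/polarion/data "$SAN"/
    mv /opt/polarion/data /opt/polarion/data.local.bak
    ln -s "$SAN"/data /opt/polarion/data

    # Replace the freshly installed repository with the productive one
    # (pick ONE of the two options below)
    rm -rf "$SAN"/data/svn/repo

    # Option 1: faster -- hot copy of the productive repository
    svnadmin hotcopy /path/to/productive/repo "$SAN"/data/svn/repo

    # Option 2: safer -- dump and load, letting SVN rewrite the repository
    svnadmin dump /path/to/productive/repo > /tmp/repo.dump
    svnadmin create "$SAN"/data/svn/repo
    svnadmin load "$SAN"/data/svn/repo < /tmp/repo.dump

    # Rebuild the index and start Polarion again
    /opt/polarion/bin/reindex.sh               # path assumed; the document only names reindex.sh
    service polarion start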

About Polarion Software

Polarion Software's success is best described by the hundreds of Global 1000 companies and over 1 million users who rely daily on Polarion's Requirements Management, Quality Assurance, and Application Lifecycle Management solutions. Polarion is a thriving international company with offices across Europe and North America, and a wide ecosystem of partners worldwide. For more information, visit www.polarion.com.

Europe, Middle-East, Africa: Polarion Software GmbH, Hedelfinger Straße 60, 70327 Stuttgart, GERMANY
Tel +49 711 489 9969-0, Fax +49 711 489 9969-20

Americas & Asia-Pacific: Polarion Software, Inc., 100 Pine Street, Suite 1250, San Francisco, CA 94111, USA
Tel +1 877 572 4005 (Toll free), Fax +1 510 814 9983