IBM Redistribute Big SQL v4.x Storage Paths




Redistribute Big SQL v4.x Storage Paths
Raanon Reutlinger, January 2016

THE GOAL

The Big SQL temporary tablespace is used during high-volume queries to spill sorts or intermediate data to disk. To improve I/O performance for these queries, it is recommended to spread the temporary tablespace over multiple external disks. Since most Hadoop clusters are configured with multiple external disks on each node, it is recommended to use those same disks for Big SQL storage as well.

THE PROBLEM / SCENARIO

During the installation of BigInsights v4.x, the Big SQL data-directory field (bigsql_db_path) was left with the default value. This places the Big SQL database storage paths under the /var directory, which is usually on the root filesystem on internal storage. You would like to redefine the Big SQL storage paths to use multiple filesystems, all on external storage.

THE SOLUTION

The standard Big SQL configuration has only one database STORAGE GROUP defined, which holds the tablespaces storing both the schema metadata and the temporary tables. Follow these steps to change the storage path(s):

Step 1. Add paths to the default storage group and drop the old one.
Step 2. Rebalance the tablespaces to use the new paths.
Step 3. Re-activate the database.
Step 4. Update the Ambari and XML config files.

In our scenario, we will extend the default storage group, which has a single path on /var/ibm/bigsql/database, to use six filesystems on /data[1-6].
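A quick way to confirm the scenario above (that the default storage path really sits on the root filesystem) is to ask df which mount backs it. This is only a sketch: the path is this scenario's default value, and "unknown" is printed if it does not exist on the machine at hand.

```shell
# Report which mounted filesystem backs the Big SQL database path.
# The path is this scenario's default bigsql_db_path; adjust as needed.
bigsql_db_path=/var/ibm/bigsql/database
mount_point=$(df -P "$bigsql_db_path" 2>/dev/null | awk 'NR==2 {print $6}')
echo "Big SQL storage currently backed by: ${mount_point:-unknown}"
```

If the reported mount point is "/" or "/var", the storage is on the internal root disks and the steps below apply.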

BACKUP THE BIG SQL (METADATA) DATABASE

Since we will be making changes to the Big SQL metadata database storage configuration, the database should be backed up as a precaution. If you don't already have a procedure to do this, you can follow these steps.

Be aware that, should you later make changes to your data or metadata, this backup image cannot be relied upon by itself to recover your system to that point in time. This is NOT a complete backup of your data and metadata. The backup image should only be used to recover immediately from the changes made in this document.

A. Create the following two scripts. (The first, waitfordeactivate.sh, is described later on.) The backupdb.sh script first makes a directory on all nodes to hold the backup image (you can choose a different path), and then issues the offline backup command, which is taken on all nodes simultaneously.

[bigsql@testmg1 ~]$ cat waitfordeactivate.sh
#!/bin/bash
SQLCODE=0
while [[ $SQLCODE != SQL1611W ]]; do
    echo "... waiting for db deactivate ..."
    db2 -v FORCE APPLICATIONS ALL
    sleep 2
    db2 -v DEACTIVATE DB bigsql
    SQLCODE=$(db2 LIST ACTIVE DATABASES | \
        awk '{print > "/dev/stderr"} /^SQL[0-9]+/ {print $1}')
done

[bigsql@testmg1 ~]$ cat backupdb.sh
#!/bin/bash
# Args: [backupdir]
CreateBackupDir() {
    db2_all "mkdir -p ${backupdir}" 1>/dev/null
    rc=$?
    [[ $rc != 0 ]] && echo "Error encountered" && exit
}
localdbdir=$(db2 LIST DB DIRECTORY | \
    awk '/Local database directory/ {print $NF}' 2>/dev/null)
backupdir=${1:-${localdbdir}/backups}
[[ -z $backupdir ]] && echo "Missing backupdir" && exit
restorescript=$(dirname $0)/restoredb.sh
CreateBackupDir
$(dirname $0)/waitfordeactivate.sh
# Execute backup and capture the backup timestamp.
timestamp=$(db2 -v BACKUP DB bigsql ON ALL DBPARTITIONNUMS TO ${backupdir} | \
    awk '{print > "/dev/stderr"} /Backup successful/ {print $NF}')
db2 -v ACTIVATE DB bigsql
[[ -n $timestamp ]] && \
    printf "\nTo restore from this backup image, run:\n  %s %s %s\n\n" \
        $restorescript $timestamp $backupdir

[bigsql@testmg1 ~]$ chmod +x waitfordeactivate.sh
[bigsql@testmg1 ~]$ chmod +x backupdb.sh

Here's a sample execution:

[bigsql@testmg1 ~]$ ./backupdb.sh
... waiting for db deactivate ...
FORCE APPLICATIONS ALL
DB20000I The FORCE APPLICATION command completed successfully.
DB21024I This command is asynchronous and may not be effective immediately.
DEACTIVATE DB bigsql
DB20000I The DEACTIVATE DATABASE command completed successfully.
SQL1611W No data was returned by Database System Monitor.
BACKUP DB bigsql ON ALL DBPARTITIONNUMS TO /var/ibm/bigsql/database/bigdb/backups

Part  Result
----  ------------------------------------------------------------------------
0000  DB20000I The BACKUP DATABASE command completed successfully.
0001  DB20000I The BACKUP DATABASE command completed successfully.
0002  DB20000I The BACKUP DATABASE command completed successfully.
0003  DB20000I The BACKUP DATABASE command completed successfully.

Backup successful. The timestamp for this backup image is : 20160120124437

ACTIVATE DB bigsql
DB20000I The ACTIVATE DATABASE command completed successfully.

To restore from this backup image, run:
  ./restoredb.sh 20160120124437 /var/ibm/bigsql/database/bigdb/backups

B. In the event that you must recover the database, you can use this script, which first restores the image on the Big SQL Master node, and then simultaneously on all other nodes.

[bigsql@testmg1 ~]$ cat restoredb.sh
#!/bin/bash
# Args: timestamp [backupdir]
timestamp=$1; shift
localdbdir=$(db2 LIST DB DIRECTORY | \
    awk '/Local database directory/ {print $NF}' 2>/dev/null)
backupdir=${1:-${localdbdir}/backups}
[[ -z $backupdir ]] && echo "Missing backupdir" && exit
ls ${backupdir}/BIGSQL.0.bigsql.DBPART*.${timestamp}.001 >/dev/null 2>&1; rc=$?
[[ $rc != 0 ]] && \
    printf "\nBackup image not found (${backupdir}/BIGSQL.*.${timestamp}.*)\n\n" && \
    exit
$(dirname $0)/waitfordeactivate.sh
db2 -v RESTORE DB bigsql FROM ${backupdir} TAKEN AT ${timestamp} REPLACE EXISTING
db2_all "<<-0< db2 -v RESTORE DB bigsql FROM ${backupdir} TAKEN AT ${timestamp} REPLACE EXISTING"
db2 -v ACTIVATE DB bigsql

[bigsql@testmg1 ~]$ chmod +x restoredb.sh

Step 0. QUERY THE EXISTING CONFIGURATION

Before making any changes, use these queries to capture the current situation. After each of the steps below, return to these queries to confirm the changes.

Open a bash session as the bigsql user, and connect to the BIGSQL database:

[root@testmg1 ~]# su - bigsql
[bigsql@testmg1 ~]$ db2 CONNECT TO bigsql

   Database Connection Information

 Database server        = DB2/LINUXX8664 10.6.3
 SQL authorization ID   = BIGSQL
 Local database alias   = BIGSQL

A. Use this script to list the existing storage group and its paths.

[bigsql@testmg1 ~]$ cat qry_stogroups.sql
SELECT VARCHAR( storage_group_name, 15) AS stogroup,
       VARCHAR( db_storage_path_state, 15) AS state,
       VARCHAR( db_storage_path, 80) AS storage_path
  FROM TABLE( admin_get_storage_paths('', -1))
;

[bigsql@testmg1 ~]$ db2 -tvf qry_stogroups.sql | tee qry_stogroups.step0.out
SELECT VARCHAR( storage_group_name, 15) AS stogroup, VARCHAR( db_storage_path_state, 15) AS state, VARCHAR( db_storage_path, 80) AS storage_path FROM TABLE( admin_get_storage_paths('',-1))

STOGROUP        STATE           STORAGE_PATH
--------------- --------------- --------------------------------------------------------------------------------
IBMSTOGROUP     IN_USE          /var/ibm/bigsql/database

  1 record(s) selected.

Make note of the STORAGE_PATH returned, since this is what we will be changing in the next step.

B. Use this script to list all paths used by this database on all nodes.

[bigsql@testmg1 ~]$ cat qry_dbpaths.sql
SELECT dbpartitionnum,
       VARCHAR( type, 20) AS type,
       VARCHAR( path, 80) AS path
  FROM sysibmadm.dbpaths
 ORDER BY 1,2,3
;

[bigsql@testmg1 ~]$ db2 -tvf qry_dbpaths.sql | tee qry_dbpaths.step0.out
SELECT dbpartitionnum, VARCHAR( type, 20) AS type, VARCHAR( path, 80) AS path FROM sysibmadm.dbpaths ORDER BY 1,2,3

DBPARTITIONNUM TYPE                 PATH
-------------- -------------------- -------------------------------------------------------------------------------
             0 DBPATH               /var/ibm/bigsql/database/bigdb/bigsql/node0000/sql00001/
             0 DBPATH               /var/ibm/bigsql/database/bigdb/bigsql/node0000/sql00001/member0000/
             0 DB_STORAGE_PATH      /var/ibm/bigsql/database/
             0 LOCAL_DB_DIRECTORY   /var/ibm/bigsql/database/bigdb/bigsql/node0000/sqldbdir/
             0 LOGPATH              /var/ibm/bigsql/database/bigdb/bigsql/node0000/sql00001/logstream0000/
             1 DBPATH               /var/ibm/bigsql/database/bigdb/bigsql/node0001/sql00001/
             1 DBPATH               /var/ibm/bigsql/database/bigdb/bigsql/node0001/sql00001/member0001/
             1 DB_STORAGE_PATH      /var/ibm/bigsql/database/
             1 LOCAL_DB_DIRECTORY   /var/ibm/bigsql/database/bigdb/bigsql/node0001/sqldbdir/
             1 LOGPATH              /var/ibm/bigsql/database/bigdb/bigsql/node0001/sql00001/logstream0001/
             2 DBPATH               /var/ibm/bigsql/database/bigdb/bigsql/node0002/sql00001/
             2 DBPATH               /var/ibm/bigsql/database/bigdb/bigsql/node0002/sql00001/member0002/
             2 DB_STORAGE_PATH      /var/ibm/bigsql/database/
             2 LOCAL_DB_DIRECTORY   /var/ibm/bigsql/database/bigdb/bigsql/node0002/sqldbdir/
             2 LOGPATH              /var/ibm/bigsql/database/bigdb/bigsql/node0002/sql00001/logstream0002/
<< output truncated >>

C. Use this script to list all tablespace containers on all nodes.

[bigsql@testmg1 ~]$ cat qry_containers.sql
SELECT dbpartitionnum part,
       SMALLINT( tbsp_id) id,
       VARCHAR( tbsp_name, 20) tbsp_name,
       VARCHAR( container_name, 80) container_name
  FROM sysibmadm.container_utilization
 ORDER BY dbpartitionnum, tbsp_id, container_name
;

[bigsql@testmg1 ~]$ db2 -tvf qry_containers.sql | tee qry_containers.step0.out
SELECT dbpartitionnum part, SMALLINT( tbsp_id) id, VARCHAR( tbsp_name, 20) tbsp_name, VARCHAR( container_name, 80) container_name FROM sysibmadm.container_utilization ORDER BY dbpartitionnum, tbsp_id, container_name

PART   ID     TBSP_NAME            CONTAINER_NAME
------ ------ -------------------- --------------------------------------------------------------------------------
     0      0 SYSCATSPACE          /var/ibm/bigsql/database/bigsql/node0000/bigsql/t0000000/c0000000.cat
     0      1 TEMPSPACE1           /var/ibm/bigsql/database/bigsql/node0000/bigsql/t0000001/c0000000.tmp
     0      2 BIGSQLCATSPACE       /var/ibm/bigsql/database/bigsql/node0000/bigsql/t0000002/c0000000.lrg
     0      3 SYSTOOLSPACE         /var/ibm/bigsql/database/bigsql/node0000/bigsql/t0000003/c0000000.lrg
     0      4 IDAX_USERTEMPSPACE   /var/ibm/bigsql/database/bigsql/node0000/bigsql/t0000004/c0000000.utm
     0      5 BIGSQLUTILITYSPACE   /var/ibm/bigsql/database/bigsql/node0000/bigsql/t0000005/c0000000.lrg
     0      6 SYSTOOLSTMPSPACE     /var/ibm/bigsql/database/bigsql/node0000/bigsql/t0000006/c0000000.utm
     1      1 TEMPSPACE1           /var/ibm/bigsql/database/bigsql/node0001/bigsql/t0000001/c0000000.tmp
     1      4 IDAX_USERTEMPSPACE   /var/ibm/bigsql/database/bigsql/node0001/bigsql/t0000004/c0000000.utm
     1      5 BIGSQLUTILITYSPACE   /var/ibm/bigsql/database/bigsql/node0001/bigsql/t0000005/c0000000.lrg
     2      1 TEMPSPACE1           /var/ibm/bigsql/database/bigsql/node0002/bigsql/t0000001/c0000000.tmp
     2      4 IDAX_USERTEMPSPACE   /var/ibm/bigsql/database/bigsql/node0002/bigsql/t0000004/c0000000.utm
     2      5 BIGSQLUTILITYSPACE   /var/ibm/bigsql/database/bigsql/node0002/bigsql/t0000005/c0000000.lrg
<< output truncated >>

We can see from the output of the above queries that all paths are currently under /var. Here are some other things you might be interested to note:

The tablespaces SYSCATSPACE, BIGSQLCATSPACE, SYSTOOLSTMPSPACE and SYSTOOLSPACE are located only on partition 0 (the Big SQL Master node). The first two are used for metadata.

The tablespaces located on all nodes are TEMPSPACE1, IDAX_USERTEMPSPACE and BIGSQLUTILITYSPACE. The first two are temporary tablespaces, while the third is a regular tablespace.
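As a quick sanity check on the step-0 capture, the container list can be boiled down to the top-level directories it references. A small sketch; the here-doc rows below stand in for your real qry_containers.step0.out file:

```shell
# A stand-in sample of two container rows; in practice point the awk at
# the qry_containers.step0.out file saved above instead of this sample.
sample=/tmp/qry_containers.sample.out
cat > "$sample" <<'EOF'
     0      1 TEMPSPACE1           /var/ibm/bigsql/database/bigsql/node0000/bigsql/t0000001/c0000000.tmp
     1      1 TEMPSPACE1           /var/ibm/bigsql/database/bigsql/node0001/bigsql/t0000001/c0000000.tmp
EOF
# Reduce each container path to its top-level directory and de-duplicate.
tops=$(awk '$NF ~ /^\// {split($NF, a, "/"); print "/" a[2]}' "$sample" | sort -u)
echo "filesystem roots in use: $tops"
```

Before the change, the only root reported should be /var; after Step 3 you would expect to see only /data1 through /data6.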

Step 1. ADD PATHS TO THE DEFAULT STORAGE GROUP AND DROP THE OLD ONE

A. Before making the change in Big SQL, create the directories on the new paths. You can run this script as root to make the directories and then verify that user bigsql has sufficient permission to write there (actually, we'll set the permissions starting at the directory above).

In the script, we use the db2_all utility to run a command on all the nodes. In order to run it as root, passwordless ssh needs to be set up for root, and the db2profile script needs to be sourced (note the dot-space) to define some environment variables. You can change the script defaults, or pass it three optional arguments, as shown, to create the new STORAGE_PATH names.

[root@testmg1 ~]# cat createpaths.sh
#!/bin/bash
# Args: [prefix(default:/data) [#paths(default:6) [stopath(default:/var/ibm/bigsql/database)] ] ]
[[ $(id -un) != root ]] && echo "This script must be run as root" && exit
PREFIX=${1:-/data}
PATHS=${2:-6}
STOPATH=${3:-/var/ibm/bigsql/database}

. ~bigsql/sqllib/db2profile

db2_all "touch .profile" >/dev/null 2>&1
db2_all "for n in {1..${PATHS}}; do
    newdir=${PREFIX}\${n}${STOPATH}
    mkdir -p \$newdir/dfsio_temp \$newdir/javaio_temp_blk_ins;
    chown -R bigsql:hadoop \$newdir;
done"
db2_all "su - bigsql -c \"for n in {1..${PATHS}}; do
    newdir=${PREFIX}\\\${n}${STOPATH}
    touch \\\$newdir/test; rm \\\$newdir/test;
done\""

[root@testmg1 ~]# chmod +x createpaths.sh
[root@testmg1 ~]# ./createpaths.sh "/data" "6" "/var/ibm/bigsql/database"
<< output truncated >>

(In this example, the arguments were provided for demonstration purposes only, since they match the defaults anyway.)
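After createpaths.sh has run, it is worth confirming that each new path exists and sits on the device you expect. A hedged sketch; the paths are this scenario's defaults, so adjust them to whatever arguments you passed createpaths.sh:

```shell
# Verify each new storage path exists and report the device behind it.
# On a machine without the /data filesystems, each path is reported missing.
missing=0
for n in 1 2 3 4 5 6; do
    d=/data${n}/var/ibm/bigsql/database
    if [ -d "$d" ]; then
        df -P "$d" | awk 'NR==2 {print $6, "->", $1}'   # mount point -> device
    else
        echo "MISSING: $d"
        missing=$((missing + 1))
    fi
done
echo "$missing of 6 path(s) missing"
```

If two paths report the same device, they share a disk and the I/O spread will be less than expected.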

B. These commands ADD the new storage paths to the storage group and DROP the existing path. (Adjust the paths to suit your environment.) Don't worry about doing this in one step, as the DROP won't take effect until the data has been fully rebalanced onto the new paths. In order to make changes to the Big SQL configuration, we need to use the big_sql_service_mode() procedure.

[bigsql@testmg1 ~]$ cat stogroup_alter.sql
CALL syshadoop.big_sql_service_mode('on');
ALTER STOGROUP ibmstogroup
  ADD  '/data1/var/ibm/bigsql/database',
       '/data2/var/ibm/bigsql/database',
       '/data3/var/ibm/bigsql/database',
       '/data4/var/ibm/bigsql/database',
       '/data5/var/ibm/bigsql/database',
       '/data6/var/ibm/bigsql/database'
  DROP '/var/ibm/bigsql/database'
;
CALL syshadoop.big_sql_service_mode('off');

[bigsql@testmg1 ~]$ db2 -tvf stogroup_alter.sql
CALL syshadoop.big_sql_service_mode('on')

  Return Status = 0

ALTER STOGROUP ibmstogroup ADD '/data1/var/ibm/bigsql/database', '/data2/var/ibm/bigsql/database', '/data3/var/ibm/bigsql/database', '/data4/var/ibm/bigsql/database', '/data5/var/ibm/bigsql/database', '/data6/var/ibm/bigsql/database' DROP '/var/ibm/bigsql/database'
SQL2095W  Storage path "/var/ibm/bigsql/database" is in the drop pending state
because one or more automatic storage table spaces reside on the path.
SQLSTATE=01691

CALL syshadoop.big_sql_service_mode('off')

  Return Status = 0

Note the warning message returned by the DROP, which states that the storage path is only in drop-pending state.

Rerun all of the queries in Step 0 and save the outputs to files with step1 in the name (for example, qry_containers.step1.out). You will notice that qry_stogroups.sql shows that the new storage paths on /data[1-6] have been added, but their state is NOT_IN_USE, and that the old path on /var is in DROP_PENDING state. This also explains why qry_containers.sql shows that the tablespace containers are still not using the new paths.

[bigsql@testmg1 ~]$ db2 -tvf qry_stogroups.sql | tee qry_stogroups.step1.out
SELECT VARCHAR( storage_group_name, 15) AS stogroup, VARCHAR( db_storage_path_state, 15) AS state, VARCHAR( db_storage_path, 80) AS storage_path FROM TABLE( admin_get_storage_paths('',-1))

STOGROUP        STATE           STORAGE_PATH
--------------- --------------- --------------------------------------------------------------------------------
IBMSTOGROUP     DROP_PENDING    /var/ibm/bigsql/database
IBMSTOGROUP     NOT_IN_USE      /data1/var/ibm/bigsql/database
IBMSTOGROUP     NOT_IN_USE      /data2/var/ibm/bigsql/database
IBMSTOGROUP     NOT_IN_USE      /data3/var/ibm/bigsql/database
IBMSTOGROUP     NOT_IN_USE      /data4/var/ibm/bigsql/database
IBMSTOGROUP     NOT_IN_USE      /data5/var/ibm/bigsql/database
IBMSTOGROUP     NOT_IN_USE      /data6/var/ibm/bigsql/database

  7 record(s) selected.

Step 2. REBALANCE THE TABLESPACES TO USE THE NEW PATHS

A. We must now rebalance the tablespaces so that the database starts using the new storage paths. The list of tablespaces to include in this script was taken from the output of the qry_containers.sql query. The temporary tablespaces are a different type of tablespace (called System Managed) and cannot be rebalanced, so they will be handled in the next step.

[bigsql@testmg1 ~]$ cat tbsp_rebalance.sql
CALL syshadoop.big_sql_service_mode('on');
ALTER TABLESPACE syscatspace REBALANCE;
ALTER TABLESPACE systoolspace REBALANCE;
ALTER TABLESPACE bigsqlutilityspace REBALANCE;
ALTER TABLESPACE bigsqlcatspace REBALANCE;
ECHO Skip: TABLESPACE tempspace1 ;
ECHO Skip: TABLESPACE systoolstmpspace ;
ECHO Skip: TABLESPACE IDAX_USERTEMPSPACE ;
CALL syshadoop.big_sql_service_mode('off');

[bigsql@testmg1 ~]$ db2 -tvf tbsp_rebalance.sql
<< output truncated >>

B. The rebalance operation should be relatively quick, but you can use this query to monitor its progress.

[bigsql@testmg1 ~]$ cat qry_rebalance.sql
SELECT VARCHAR( tbsp_name, 30) AS tbsp_name,
       dbpartitionnum AS part,
       rebalancer_mode,
       rebalancer_status AS status,
       rebalancer_extents_remaining AS extents_remaining,
       rebalancer_extents_processed AS extents_processed,
       rebalancer_start_time AS start_time
  FROM TABLE( mon_get_rebalance_status( NULL, -2))
;

[bigsql@testmg1 ~]$ db2 -tvf qry_rebalance.sql
SELECT VARCHAR( tbsp_name, 30) AS tbsp_name, dbpartitionnum AS part, rebalancer_mode, rebalancer_status AS status, rebalancer_extents_remaining AS extents_remaining, rebalancer_extents_processed AS extents_processed, rebalancer_start_time AS start_time FROM TABLE( mon_get_rebalance_status( NULL, -2))

TBSP_NAME    PART   REBALANCER_MODE      STATUS   EXTENTS_REMAINING EXTENTS_PROCESSED START_TIME
------------ ------ -------------------- -------- ----------------- ----------------- --------------------------
SYSCATSPACE       0 REV_REBAL_OF_2PASS   ACTIVE                 307              7777 2016-01-21-06.36.43.000000

  1 record(s) selected.

When the query doesn't return any rows, the rebalance is complete.

Rerun all the queries in Step 0 and save the outputs to files with step2 in the name. Now you will see that the state of the new paths in qry_stogroups.sql has changed to IN_USE and that qry_containers.sql shows that the containers for the non-temporary tablespaces have been allocated there. However, you will also notice that the old storage path on /var is still in DROP_PENDING state and that there are still temporary tablespace containers located there.
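If you prefer not to re-run the monitoring query by hand, it can be wrapped in a simple wait loop. This is only a sketch of the loop shape: rebalance_active below is a stub standing in for the real query, so the example runs without a database.

```shell
# rebalance_active is a stub so the loop runs standalone: it pretends
# three polls are needed. In real use, replace its body with something
# like:  db2 -tf qry_rebalance.sql | grep -q ACTIVE
remaining=3
rebalance_active() {
    [ "$remaining" -gt 0 ]
}
while rebalance_active; do
    echo "... rebalance still running ..."
    remaining=$((remaining - 1))
    # sleep 10   # poll interval when running against the real database
done
echo "rebalance complete"
```

The loop exits as soon as the monitoring query stops reporting active rebalancers, which is exactly the "no rows returned" condition described above.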

Step 3. RE-ACTIVATE THE DATABASE

In order to complete the drop of the old storage path on /var, and remove the DROP_PENDING state, it is necessary to first deactivate the database and then reactivate it. This allows Big SQL to be absolutely certain that all open files have been closed and are no longer needed. Since temporary files are only needed while the database is activated, Big SQL can also safely re-create the temporary tablespaces on the new storage paths. Note that it isn't necessary to stop the entire Big SQL service (or instance) to achieve this.

A. Run the following script, which forces all connections to the database to be closed, deactivates it, and then waits until the deactivate completes.

[bigsql@testmg1 ~]$ cat waitfordeactivate.sh
#!/bin/bash
SQLCODE=0
while [[ $SQLCODE != SQL1611W ]]; do
    echo "... waiting for db deactivate ..."
    db2 -v FORCE APPLICATIONS ALL
    sleep 2
    db2 -v DEACTIVATE DB bigsql
    SQLCODE=$(db2 LIST ACTIVE DATABASES | \
        awk '{print > "/dev/stderr"} /^SQL[0-9]+/ {print $1}')
done

[bigsql@testmg1 ~]$ chmod +x waitfordeactivate.sh
[bigsql@testmg1 ~]$ ./waitfordeactivate.sh
... waiting for db deactivate ...
FORCE APPLICATIONS ALL
DB20000I The FORCE APPLICATION command completed successfully.
DB21024I This command is asynchronous and may not be effective immediately.
DEACTIVATE DB bigsql
DB20000I The DEACTIVATE DATABASE command completed successfully.
SQL1611W No data was returned by Database System Monitor.

B. When that completes, reactivate the database and renew the connection.

[bigsql@testmg1 ~]$ db2 TERMINATE
DB20000I The TERMINATE command completed successfully.
[bigsql@testmg1 ~]$ db2 ACTIVATE DB bigsql
DB20000I The ACTIVATE DATABASE command completed successfully.
[bigsql@testmg1 ~]$ db2 CONNECT TO bigsql

   Database Connection Information

 Database server        = DB2/LINUXX8664 10.6.3
 SQL authorization ID   = BIGSQL
 Local database alias   = BIGSQL

Rerun the queries in Step 0 again (saved with step3 in the name), and you will find that the final goal has been achieved, as seen from the output of each query:

[qry_stogroups.sql] The old storage path on /var has been removed.
[qry_containers.sql] All of the containers, even those for temporary tablespaces, are now only on the new storage paths under /data[1-6].
[qry_dbpaths.sql] The only database paths still using /var should be DBPATH, LOCAL_DB_DIRECTORY and LOGPATH.
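A quick way to check the container listing saved in step 3 is to count the rows whose container path still begins under the old /var storage path. A hedged sketch; the sample row generated below stands in for your real qry_containers.step3.out, and note that the pattern is anchored on the leading space so that the new /dataN/var/... paths do not match:

```shell
# Sample stand-in for qry_containers.step3.out: one container already
# moved to /data1. Point the grep at your real saved file instead.
out=/tmp/qry_containers.step3.sample.out
printf '     0      1 TEMPSPACE1           /data1/var/ibm/bigsql/database/bigsql/node0000/bigsql/t0000001/c0000000.tmp\n' > "$out"
# Count container rows still starting under the old /var path.
leftover=$(grep -c ' /var/ibm/bigsql/database/' "$out" || true)
echo "containers still under /var: $leftover"
```

A count of 0 means every container has been moved off the old storage path.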

(Note: If you don't see these results, for example if the storage path is still in DROP_PENDING state, try restarting Big SQL, as described in the last step.)

Step 4. UPDATE THE AMBARI AND XML CONFIG FILES

You should keep the Ambari configuration and the bigsql-conf.xml file updated with the changes that were made to the paths. The configuration file sets two variables which need to be changed, as they were based on the old value of bigsql_db_path.

A. You can use the following commands to generate the lists of paths that you will need, copying the output up to, but not including, the final comma. The first two are needed for bigsql-conf.xml and the third is for the Ambari Web UI.

[bigsql@testmg1 ~]$ printf "/data%d/var/ibm/bigsql/database/dfsio_temp," {1..6}
/data1/var/ibm/bigsql/database/dfsio_temp,/data2/var/ibm/bigsql/database/dfsio_temp,/data3/var/ibm/bigsql/database/dfsio_temp,/data4/var/ibm/bigsql/database/dfsio_temp,/data5/var/ibm/bigsql/database/dfsio_temp,/data6/var/ibm/bigsql/database/dfsio_temp,

[bigsql@testmg1 ~]$ printf "/data%d/var/ibm/bigsql/database/javaio_temp_blk_ins," {1..6}
/data1/var/ibm/bigsql/database/javaio_temp_blk_ins,/data2/var/ibm/bigsql/database/javaio_temp_blk_ins,/data3/var/ibm/bigsql/database/javaio_temp_blk_ins,/data4/var/ibm/bigsql/database/javaio_temp_blk_ins,/data5/var/ibm/bigsql/database/javaio_temp_blk_ins,/data6/var/ibm/bigsql/database/javaio_temp_blk_ins,

[bigsql@testmg1 ~]$ printf "/data%d/var/ibm/bigsql/database," {1..6}
/data1/var/ibm/bigsql/database,/data2/var/ibm/bigsql/database,/data3/var/ibm/bigsql/database,/data4/var/ibm/bigsql/database,/data5/var/ibm/bigsql/database,/data6/var/ibm/bigsql/database,

(Underscores sometimes don't show up in the PDF version. Make sure they appear in dfsio_temp and javaio_temp_blk_ins.)

B. Use vi to edit /usr/ibmpacks/bigsql/4.1/bigsql/conf/bigsql-conf.xml (the path depends on your version). Find the properties dfsio.temp_data_directory and javaio.temp_blk_ins_directory and replace the values with the appropriate comma-separated list of paths.

<property>
  <name>dfsio.temp_data_directory</name>
  <value>/data1/var/ibm/bigsql/database/dfsio_temp,/data2/var/ibm/bigsql/database/dfsio_temp,/data3/var/ibm/bigsql/database/dfsio_temp,/data4/var/ibm/bigsql/database/dfsio_temp,/data5/var/ibm/bigsql/database/dfsio_temp,/data6/var/ibm/bigsql/database/dfsio_temp</value>
  <description> Temp directories for native writer. </description>
</property>

<property>
  <name>javaio.temp_blk_ins_directory</name>
  <value>/data1/var/ibm/bigsql/database/javaio_temp_blk_ins,/data2/var/ibm/bigsql/database/javaio_temp_blk_ins,/data3/var/ibm/bigsql/database/javaio_temp_blk_ins,/data4/var/ibm/bigsql/database/javaio_temp_blk_ins,/data5/var/ibm/bigsql/database/javaio_temp_blk_ins,/data6/var/ibm/bigsql/database/javaio_temp_blk_ins</value>
  <description> Temp directories for bulk insert writer. </description>
</property>

The modified file now needs to be copied to each node. You can use scp as shown, replacing <workernode> with each target hostname:

scp /usr/ibmpacks/bigsql/4.1/bigsql/conf/bigsql-conf.xml <workernode>:/usr/ibmpacks/bigsql/4.1/bigsql/conf/

C. Open the Ambari Web UI, select the BigInsights Big SQL service and click the Configs tab along the top. Under Advanced bigsql-env, modify the bigsql_db_path field and save the new config changes. After hitting Save, you will be notified that the service should be restarted. From the Service Actions menu, select Restart All.

You are now ready to verify that all is working as expected. (In some cases, you may need to restart Big SQL from the command line, using $BIGSQL_HOME/bin/bigsql forcestop and $BIGSQL_HOME/bin/bigsql start.)
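The per-node copy in step B can also be scripted as a loop. A hedged sketch: the worker_nodes.txt host list is an assumption (one hostname per line, not part of the original procedure), a two-host sample is generated so the loop can be demonstrated, and the scp line is left as a dry-run echo.

```shell
# Push the edited bigsql-conf.xml to every worker node (dry run).
# worker_nodes.txt is an assumed helper file: one hostname per line.
conf=/usr/ibmpacks/bigsql/4.1/bigsql/conf/bigsql-conf.xml
nodes=/tmp/worker_nodes.sample.txt
printf 'worker1\nworker2\n' > "$nodes"   # sample host list for illustration
copied=0
while read -r host; do
    # Drop the leading "echo" to perform the actual copy.
    echo scp "$conf" "$host:$(dirname "$conf")/"
    copied=$((copied + 1))
done < "$nodes"
echo "$copied node(s) processed"
```

This assumes the same passwordless ssh for the copying user that db2_all already required earlier in the procedure.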