Big SQL v3.0 Metadata Backup and Resiliency

Raanon Reutlinger, IBM Big SQL Development, WW Customer Engineering
February 2015

Contents

I. Introduction
II. First Steps
III. Performing Offline Backups
IV. Enabling ONLINE Backups
V. Setup Automatic Online Backups
VI. Restore the Databases from a Backup
VII. Purge Old Backups and Archive Logs
VIII. Redundancy of the Catalog with HADR
IX. Summary
X. [Addendum] Detailed Table of Contents

I. Introduction

Big SQL is the SQL-on-Hadoop component of IBM InfoSphere BigInsights v3.0. Big SQL relies on other BigInsights components, such as the Hadoop Distributed File System (HDFS) and Hive (HCatalog). While the BigInsights product supplies many graphical tools for everything from installation to monitoring, exploring and developing, the resiliency and backup of Big SQL metadata must be managed with different tools. This guide will help simplify the steps involved.

While there are some Graphical User Interfaces (GUIs) to perform many of the steps in this guide (e.g., Eclipse, Data Studio), we will demonstrate the commands using the bash command line in order to minimize dependencies on outside tools. On the cluster head/management node, open a number of SSH sessions (for example, using PuTTY), so that you can stay logged in as the various users required to perform the commands. The few steps which must be performed on a node other than the head/management node will be indicated, and those commands will appear in a box with a black background.

In this section we'll identify where the Big SQL metadata resides, as well as develop a metadata resiliency strategy. Throughout this guide, look for the Best Practices icon.

A. Understanding the relationship between Data and Metadata

The data for your Big SQL database resides in the Hadoop infrastructure, on HDFS. Backup of HDFS data is not within the scope of this guide. Rather, this guide strictly covers Big SQL metadata. (The term "catalog" is sometimes used to refer to metadata.)
Big SQL uses the Hive catalog services (HCatalog) to register the table metadata, such as:

- Schema ("database") names
- Table and column names and data types
- Location of the table data files in HDFS
- Table file storage format
- Encoding of input files, how many files there are, types of files
- Basic permissions

The Hive metadata is saved in what is called a Metastore, which is a collection of regular tables, managed by the database of the BigInsights Catalog component.

Other Hadoop components, such as BigSheets and oozie, also store their metadata in the Catalog database.

Since the Big SQL tables are defined in the Hive Metastore, they can also be accessed directly through Hive, without even passing through Big SQL. The advantage of accessing the tables via the Big SQL service comes from the superior query optimizer and the efficient direct I/O mechanism which bypasses MapReduce.

When you create a Hadoop table with Big SQL, after creating it in the Hive Metastore with HCatalog, Big SQL also saves metadata about the table in its own database. Maintaining the metadata in Big SQL not only allows the query optimizer quicker access to it; Big SQL also maintains additional statistics about the tables which aren't collected by Hive. This helps the optimizer further by allowing it to come to more informed decisions. The Big SQL metadata also contains details on advanced features provided only in Big SQL, such as extended security role definitions (FGAC), workload management (WLM) classes, and stored procedures and functions, to name just a few.

The Big SQL metadata is located in a DB2 database, and the Catalog component also uses a DB2 database, although a different one. These databases reside in totally different DB2 instances on the head/management node. While the Catalog and Big SQL metadata are located only on the head/management node, the Big SQL instance is actually a powerful multinode MPP configuration, with worker processes located on all or a subset of the Hadoop data nodes used by HDFS. Since the metadata resides in a DB2 database in both cases, we will be using DB2 commands and tools.

B. Strategy for Resiliency

The strategy for achieving resiliency of the metadata in the Big SQL and Catalog databases which we'll cover in this guide revolves around taking regular backups, as well as setting up redundancy on another server.
Redundancy will be achieved by using DB2's High Availability Disaster Recovery (HADR) feature to maintain a duplicate of the database on another server. At any given time,

the primary database can be stopped, whether due to a planned or unplanned event, and the BigInsights system will still be able to continue by using the standby database. We will be demonstrating this here only for the Catalog. The option to set up HADR for the Big SQL database will be available in an upcoming release of BigInsights, so that will be described more fully in a future or updated guide. (This guide was written for BigInsights 3.0.)

Backups can be used to return the contents of a database to a certain point-in-time, perhaps to return to a point before a human or technical error, or to set up a new system on a different server. If backups are made to a local filesystem (as demonstrated in this guide), then be sure to keep a copy of the backup at a remote location, in case the local storage becomes inaccessible.

Backups can be offline, which means that users will be unable to access BigInsights services during the backup, or online, which provides better service but introduces some additional considerations. For example, online backups require the maintenance of database log files (discussed further below).

When performing backups, offline or online, or when restoring a backup, care must be taken that the Catalog and Big SQL databases remain in sync, since Big SQL shares metadata with the Hive Metastore tables located in the Catalog. (Strictly speaking, since we're talking only about metadata, any definitions made in Big SQL, such as security roles, WLM classes, procedures, etc., can all be recreated without affecting the core of the database, which is its data. Even the schema and table metadata in Big SQL can be recreated from the Hive Metastore using the HCAT_SYNC_OBJECTS() procedure. However, the recreated table metadata will not contain any of the ANALYZE statistics, which can be time-consuming to rebuild, and care must be taken to use the same userid of the user who created the table.
Likewise, to avoid the headache of tracking down all of your other definitions made to the Big SQL database, it's much easier to simply keep your backups up to date.)

Performing an offline backup requires the fewest changes to the system, so we will start with that. Performing an online backup requires a few more changes, and finally HADR requires setting up a new database on another server, so we've saved that for last. In addition, setting up HADR will require you to have taken an online backup, and preparing for online backups will require you to perform at least one offline backup. Part of every recovery strategy should include periodic testing of the backups, before disaster strikes, so we'll demonstrate those steps as well.

II. First Steps

A. Find the instance and database names

Since the names of the DB2 instances used for the Catalog and Big SQL components can be entered by the user who first installed BigInsights, the first step is to identify those names. We will look for the user name which was created to manage (own) each DB2 instance.

As a reminder, Figure 1 shows the Components 1 panel of the BigInsights installation GUI where the names may have been entered. This part of the page refers to the Catalog component. The type of Database should have been left as "Use DB2 database" and the DB2 instance owner has a default value of catalog.

Figure 1: Panel from Installation GUI

Further down that same page, you'll find the instance name for Big SQL, with the default value of bigsql.

Figure 2: Panel from Installation GUI, cont'd

In order to determine the user/instance names which were chosen during installation, find the configuration file $BIGINSIGHTS_HOME/conf/install.xml. Look under the XML tag hierarchy shown below for the value of <username> for both the BigSQL and Catalog properties.

<security>
  <service-security>
    <BigSQL>
      <username>bigsql</username>
      <uid>222</uid>
    </BigSQL>
    <Catalog>
      <username>catalog</username>
      <uid>224</uid>
      <password>{xor}nj0ybjy9mg==</password>
    </Catalog>

To confirm that DB2 was chosen as the type of database for the Catalog, look for another block starting with <Catalog> and find the <catalog-type>, as seen below.

<Catalog>
  <configure>true</configure>
  <catalog-type>db2</catalog-type>
  <node>testmg1.iic.il.ibm.com</node>
  <port>50000</port>
</Catalog>

Another way to confirm the Catalog user name (DB2 instance) is by checking the value of the USER_CATALOG environment variable (this should be defined for every user, since the biginsights-env.sh script is placed in the /etc/profile.d directory).

~]# echo $USER_CATALOG
catalog

Next, let's find the name of the Catalog database itself (within the instance). The name of the database is currently a constant, BIDB, but the next steps will show you where that value is stored. There are two methods to find the database name. The first is to find the Hive configuration file $BIGINSIGHTS_HOME/hive/conf/hive-site.xml, and search for the property containing ConnectionURL in the name. You'll find the database name at the end of the JDBC connection URL value.

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:db2://testmg1.iic.il.ibm.com:50000/BIDB</value>
</property>
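The lookup described above can also be scripted. The sketch below is our own (not a BigInsights utility); it assumes the <username> layout shown in the install.xml excerpt, and the default install path used as a fallback is an assumption.

```shell
# Sketch: extract the instance owner name for a component from install.xml.
# Assumes the <BigSQL>/<Catalog> blocks contain a <username> element, as in
# the excerpt above. The function name is ours, not a BigInsights utility.
INSTALL_XML="${BIGINSIGHTS_HOME:-/opt/ibm/biginsights}/conf/install.xml"

get_instance_user() {    # $1 = component tag, e.g. BigSQL or Catalog
  sed -n "/<$1>/,/<\/$1>/p" "$INSTALL_XML" \
    | sed -n 's/.*<username>\(.*\)<\/username>.*/\1/p'
}
```

For example, `get_instance_user Catalog` should print the Catalog instance owner (catalog, by default).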

Another method is to open a bash session as the DB2 instance owner, catalog, and use the DB2 command LIST DB DIRECTORY to list the available databases (there will be only one), as seen below. (Note, if you are not using su from user root you will need to know the password for the catalog user.)

~]# su - catalog
~]$ db2 LIST DB DIRECTORY

System Database Directory

Number of entries in the directory = 1

Database 1 entry:

Database alias = BIDB
Database name = BIDB
Local database directory = /var/ibm/biginsights/database/db2
Database release level =
Comment =
Directory entry type = Indirect
Catalog database partition number = 0
Alternate server hostname =
Alternate server port number =

Note: DB2 commands can be entered in upper or lower case, although the db2 prefix to the command must always be in lower case.

The name of the Big SQL database is also currently a constant, BIGSQL, but can also be confirmed as shown.

~]# su - bigsql
~]$ db2 list db directory

System Database Directory

Number of entries in the directory = 1

Database 1 entry:

Database alias = BIGSQL
Database name = BIGSQL
Local database directory = /media/data1/var/ibm/biginsights/database/bigsql
Database release level =
Comment =
Directory entry type = Indirect
Catalog database partition number = 0
Alternate server hostname =
Alternate server port number =

For the remainder of this document we will assume the following names:

Component   DB2 instance (user)   Database name
Catalog     catalog               BIDB
Big SQL     bigsql                BIGSQL
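If you need the "Local database directory" value in a script (it comes up again when sizing the database, below), it can be scraped from the command output. A small sketch of our own, assuming the label = value layout shown above:

```shell
# Sketch: return the "Local database directory" from LIST DB DIRECTORY
# output. Assumes the "label = value" layout shown above; the function
# name is ours, not a DB2 utility.
local_db_dir() {
  db2 LIST DB DIRECTORY | awk -F'= ' '/Local database directory/ {print $2}'
}
```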

B. Find the size of the database

Before we take a backup of the database, it would be a good idea to know how much room we will need. (It might also be useful to know the size of the database in order to ensure that there is enough available space to grow on the storage path where it resides.)

A quick-and-dirty way to determine the size of the database might be to use the du (disk usage) Unix command with the "Local database directory" path returned by the db2 LIST DB DIRECTORY command used above. However, this might give an inaccurate picture, as this path includes files which will not be included in a database backup and may be missing other paths which might be included in the backup.

The more accurate way to determine the amount of storage used by the database (as well as the capacity remaining on the storage where it resides) is to use a built-in DB2 stored procedure called GET_DBSIZE_INFO(). You can use this procedure on both the BIDB and BIGSQL databases.

~]$ db2 CONNECT TO bidb

Database Connection Information

Database server = DB2/LINUXX
SQL authorization ID = CATALOG
Local database alias = BIDB

~]$ db2 "CALL GET_DBSIZE_INFO(?,?,?, -1)"

Value of output parameters

Parameter Name : SNAPSHOTTIMESTAMP
Parameter Value :

Parameter Name : DATABASESIZE
Parameter Value :

Parameter Name : DATABASECAPACITY
Parameter Value :

Return Status = 0

~]$ db2 terminate
DB20000I The TERMINATE command completed successfully.

The value returned for the DATABASESIZE parameter is in bytes. You can read more about this stored procedure at 01.ibm.com/support/knowledgecenter/SSEPGG_10.5.0/com.ibm.db2.luw.sql.rtn.doc/doc/r html?cp=ssepgg_10.5.0%2f &lang=en

Remember to issue the db2 TERMINATE command to close your connection to the database (otherwise, further steps below may not work as expected).
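Since GET_DBSIZE_INFO() returns bytes, a tiny helper makes the numbers readable and can flag how much of the capacity is in use. A sketch with function names and sample values of our own:

```shell
# Sketch: convert the byte values returned by GET_DBSIZE_INFO() to MB and
# report the fraction of capacity in use. Pure shell arithmetic; the
# function names and the sample values in the usage are ours.
to_mb() { echo $(( $1 / 1024 / 1024 )); }

check_headroom() {    # $1 = DATABASESIZE bytes, $2 = DATABASECAPACITY bytes
  echo "used $(( $1 * 100 / $2 ))% ($(to_mb "$1") MB of $(to_mb "$2") MB)"
}
```

For example, `check_headroom 209715200 10737418240` prints "used 1% (200 MB of 10240 MB)".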

C. [Optional] Free up space by purging old oozie log records

The workflow component of BigInsights (Hadoop), called oozie, stores its logs inside the Catalog database. Oozie already provides a mechanism for automatic purges of this log, but the default is to keep 180 days' worth. Since we will be making multiple copies of this database in the form of regular backups, it would be prudent to keep the database as small as possible by keeping fewer log records. This is done by changing the oozie.service.PurgeService.older.than property in the $BIGINSIGHTS_HOME/hdm/components/oozie/conf/oozie-site.xml configuration file. For example:

<property>
  <name>oozie.service.PurgeService.older.than</name>
  <value>30</value>
  <description>
    Jobs older than this value, in days, will be purged by the PurgeService.
  </description>
</property>

Then propagate the change as user biadmin by running:

syncconf.sh oozie

You can check the current records in the oozie job log with the command:

$BIGINSIGHTS_HOME/oozie/bin/oozie jobs

For more details on this topic, see 01.ibm.com/support/knowledgecenter/SSPT3X_3.0.0/com.ibm.swg.im.infosphere.biginsights.install.doc/doc/upgr_prep_upgrade_tasks.html%23upgr_prep_upgrade_tasks cleanupjobs?lang=en.

To further save on space, you can use the DB2 REORG command to recover the space left over from the rows which were deleted (purged) from the tables (this is similar to defragmenting). Here is a script you can create, and an example of how to run it as user catalog.

~]# su - catalog
~]$ vi oozie_reorg.sql
~]$ cat oozie_reorg.sql
CONNECT TO bidb;
REORG TABLE oozie.coord_actions;
REORG TABLE oozie.coord_jobs;
REORG TABLE oozie.sla_events;
REORG TABLE oozie.wf_actions;
REORG TABLE oozie.wf_jobs;
TERMINATE;
~]$ db2 -tvf oozie_reorg.sql
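If you adopt the REORG script above, it can be scheduled so the purged space is reclaimed regularly. The crontab fragment below is only an illustration of our own scheduling choice: it assumes root's crontab (so that su works without a password) and that the script was saved in the catalog user's home directory.

```shell
# Illustrative root crontab entry (edit with crontab -e): run the oozie
# REORG script on the 1st of each month at 03:00. The script path is an
# assumption; adjust to wherever you created oozie_reorg.sql.
0 3 1 * * su - catalog -c "db2 -tvf /home/catalog/oozie_reorg.sql"
```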

III. Performing Offline Backups

If you can allow yourself a maintenance window to bring the BigInsights system offline, then this is certainly the easiest method to perform backups. You will need to be sure to bring the system down on a regular basis in order to keep the backups up to date. If you plan on implementing online backups and/or an HADR solution, then you can skip this section for now and move on to <IV-Enabling ONLINE Backups>. In any case, you will return to this section to perform an offline backup after making some configuration changes.

There are many options for performing a DB2 backup which will not be discussed here, such as choosing a target device for your backup (tape, TSM or third-party backup managers). Since we need to keep the content of both metadata databases in sync, we will back up both of them while the BigInsights system is down.

A. Stop BigInsights, but restart the DB2 instances

Stopping all of the BigInsights components will disconnect any connections to the databases, whether made by users or made internally by other BigInsights components. Once that completes, we will need to restart the DB2 instances to allow us to perform some commands, but there shouldn't be any connections to the databases.

1. Stop BigInsights

Note that this is performed as the biadmin user.

~]# su - biadmin
~]$ stop.sh all
...
[INFO] Progress - 100%
[INFO] DeployManager - Stop; SUCCEEDED components: [alert, httpfs, console, oozie, bigsql, hive, hbase, catalog, hadoop, zookeeper, hdm]; Consumes : 81863ms

2. Restart the catalog instance

This is done as the catalog user.

~]# su - catalog
~]$ db2start
SQL1063N DB2START processing was successful.

3. Restart the bigsql instance

This is done as the bigsql user. Note that there will be a response from each of the Big SQL worker nodes (data nodes).

~]# su - bigsql
~]$ db2start
01/28/ :57: SQL1063N DB2START processing was successful.
01/28/ :57: SQL1063N DB2START processing was successful.
01/28/ :57: SQL1063N DB2START processing was successful.
01/28/ :57: SQL1063N DB2START processing was successful.
01/28/ :57: SQL1063N DB2START processing was successful.
01/28/ :57: SQL1063N DB2START processing was successful.
SQL1063N DB2START processing was successful.

B. Perform an OFFLINE database backup

In our example, we will back up the database to a path on a directly attached disk which is mounted at /media/data1. Be sure to choose a path on a storage device which has sufficient space for multiple database backups. Be sure to perform these steps after <III.A-Stop BigInsights, but restart the DB2 instances>.

1. Create the backup target directory for BIDB

We will create the DB2Backups directory as the catalog user, but then give the biadmin group read-write permissions to it. This is so we can use the same directory for the bigsql user, who is in the same group.

~]# su - catalog
~]$ mkdir -p /media/data1/DB2Backups/catalog
~]$ chmod g+rw /media/data1/DB2Backups
~]$ ls -ld /media/data1/DB2Backups/
drwxrwxr-x 3 catalog biadmin 4096 Jan 25 16:55 /media/data1/DB2Backups/

2. Backup the BIDB database

We are now ready to back up the database. This should only take a few seconds (8 seconds in our test).

~]$ db2 BACKUP DB bidb TO /media/data1/DB2Backups/catalog

Backup successful. The timestamp for this backup image is :

3. [Optional] Use the COMPRESS argument

You might be interested in using another backup option to compress the backup image. Purely to demonstrate the compression, we'll take another backup with the COMPRESS option.

~]$ db2 BACKUP DB bidb TO /media/data1/DB2Backups/catalog COMPRESS

Backup successful. The timestamp for this backup image is :

~]$ ls -l /media/data1/DB2Backups/catalog
total
-rw------- catalog biadmin Jan 25 17:22 BIDB.0.catalog.DBPART
-rw------- catalog biadmin Jan 25 17:25 BIDB.0.catalog.DBPART

We see here that the compressed backup is about 9% of the size of the non-compressed backup.

4. Create the backup target directory for BIGSQL

Unlike the catalog instance, which resides on a single node, the bigsql instance is a multi-node implementation, residing on the worker/data nodes and the head/management node. So, we need to create a target backup directory on each node in the Big SQL cluster. DB2 provides the db2_all utility to execute a command on every node of the cluster.

~]$ db2_all "mkdir -p /media/data1/DB2Backups/bigsql; chmod g+rw /media/data1/DB2Backups; ls -ld /media/data1/DB2Backups/bigsql" | egrep -v "completed ok"
chmod: changing permissions of `/media/data1/DB2Backups': Operation not permitted
drwxr-xr-x 2 bigsql biadmin 4096 Feb 4 16:20 /media/data1/DB2Backups/bigsql
drwxr-xr-x 2 bigsql biadmin 4096 Feb 4 16:20 /media/data1/DB2Backups/bigsql
drwxr-xr-x 2 bigsql biadmin 4096 Feb 4 16:20 /media/data1/DB2Backups/bigsql
drwxr-xr-x 2 bigsql biadmin 4096 Feb 4 16:20 /media/data1/DB2Backups/bigsql
drwxr-xr-x 2 bigsql biadmin 4096 Feb 4 16:20 /media/data1/DB2Backups/bigsql
drwxr-xr-x 2 bigsql biadmin 4096 Feb 4 16:20 /media/data1/DB2Backups/bigsql

The egrep used here is just to shorten the output a bit. (You can ignore the chmod error, since that directory was created by catalog on the head node in the previous step.)

5. Backup the BIGSQL database

Backing up the BIGSQL database is almost the same as for the BIDB database, but the backup will actually take place in parallel on each node.

~]$ db2 BACKUP DB bigsql ON ALL DBPARTITIONNUMS TO /media/data1/DB2Backups/bigsql

Part Result
DB20000I The BACKUP DATABASE command completed successfully.
DB20000I The BACKUP DATABASE command completed successfully.
DB20000I The BACKUP DATABASE command completed successfully.
DB20000I The BACKUP DATABASE command completed successfully.
DB20000I The BACKUP DATABASE command completed successfully.
DB20000I The BACKUP DATABASE command completed successfully.

Backup successful. The timestamp for this backup image is :

~]$ db2_all "ls -l /media/data1/DB2Backups/bigsql" | egrep -v "total|completed ok"
-rw------- bigsql biadmin Jan 28 20:27 BIGSQL.0.bigsql.DBPART
-rw------- bigsql biadmin Jan 28 20:27 BIGSQL.0.bigsql.DBPART
-rw------- bigsql biadmin Jan 28 20:27 BIGSQL.0.bigsql.DBPART
-rw------- bigsql biadmin Jan 28 20:27 BIGSQL.0.bigsql.DBPART
-rw------- bigsql biadmin Jan 28 20:27 BIGSQL.0.bigsql.DBPART
-rw------- bigsql biadmin Jan 28 20:27 BIGSQL.0.bigsql.DBPART

Notice that the size of the backup on the first node is considerably larger than on the other nodes. This is because the head node is where all of the metadata resides, while no real data resides within DB2 on the other nodes (the data resides in HDFS). (The backup image size of 32 MB on the other nodes can be accounted for as the space which gets pre-allocated for a database.)

C. Restart BigInsights

If you will be sticking with an OFFLINE backup solution, then you can restart BigInsights now. If you plan on continuing with any of the further steps, then keep BigInsights in the down state and skip this step.

Restart all BigInsights components using the biadmin user.

~]# su - biadmin
~]$ start.sh all
...
[INFO] Progress - 100%
[INFO] DeployManager - Start; SUCCEEDED components: [hdm, zookeeper, hadoop, catalog, hbase, hive, bigsql, oozie, console, httpfs, alert]; Consumes : ms
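Since the BIDB and BIGSQL images should be treated as a matching pair, a wrapper that records each image's timestamp can help a backup script keep track of which images belong together. This is a sketch of our own (not a DB2 utility); it assumes the "timestamp for this backup image" success message shown in the output above.

```shell
# Sketch: run a backup and print only the image timestamp, so a calling
# script can log which BIDB/BIGSQL images belong together. The function
# name is ours; it parses the db2 success message shown above.
backup_and_get_ts() {    # $1 = db name, $2 = target dir, $3 = extra options
  db2 "BACKUP DB $1 ${3:-} TO $2" \
    | sed -n 's/.*timestamp for this backup image is *: *\([0-9]*\).*/\1/p'
}
```

For example: `ts=$(backup_and_get_ts bigsql /media/data1/DB2Backups/bigsql "ON ALL DBPARTITIONNUMS")`.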

IV. Enabling ONLINE Backups

Enabling ONLINE backups will allow you to do any of the following:

- Perform online backups manually or with an automated script (e.g., using cron).
- Configure DB2 to maintain automatic online backups.
- Configure DB2 High Availability Disaster Recovery (HADR).
- Recover to a point-in-time AFTER the latest backup.

In order for DB2 to allow online backups, it must be configured to assure that in-flight transactions occurring during a backup are always saved to an archive transaction log location. The default mechanism for transaction log files is called circular logging, which means that completed log files are reused (overwritten). With archive logging, completed log files are copied to a safe location in case they are needed later for recovery. Aside from allowing online backups, archive logging will also allow recovery (restore) to any point-in-time which can be found in the logs.

Remember that the actual data in a Big SQL and Hive database resides in Hadoop's HDFS, which doesn't log transactional changes to DB2's log files. So, the transactions which will be written to DB2's log files will revolve entirely around changes to metadata. We can expect the volume of metadata changes to be quite small, certainly in relation to the entire Big Data environment.

Once we enable archive logging, we will be able to perform online backups, but only after completing an initial offline backup. So, if you haven't already done so, start by bringing down BigInsights, by following the steps in <III.A-Stop BigInsights, but restart the DB2 instances>.

A. Enable ARCHIVE logging

Let's use the catalog user to inspect the current configuration parameters relevant to logging (you can do this for bigsql, as well).
~]# su - catalog
~]$ db2 get db cfg for bidb | egrep 'Path to log files|LOGARCHMETH1'
Path to log files = /var/ibm/biginsights/database/db2/catalog/NODE0000/SQL00001/LOGSTREAM0000/
First log archive method (LOGARCHMETH1) = OFF

Here we see that active logs are being written to the long path under /var (the output has wrapped to the next line) and that archive logging is currently OFF, which is the default state.

For this example, we've chosen to keep archived logs on a local filesystem (by using the DISK keyword, below). One of the implications of this is that we should consider how to regularly remove unneeded log files from that path. An automated method to do this is explained in <VII-Purge Old Backups and Archive Logs>.

1. Enable ARCHIVE logging for BIDB

After creating a directory for the archived logs on another filesystem (and giving it group read-write access, as before), we can activate archive logging to use it.

~]$ mkdir /media/data1/DB2ArchiveLogs
~]$ chmod g+rw /media/data1/DB2ArchiveLogs
~]$ db2 UPDATE DB CFG FOR bidb USING LOGARCHMETH1 'DISK:/media/data1/DB2ArchiveLogs'
DB20000I The UPDATE DATABASE CONFIGURATION command completed successfully.
~]$ db2 get db cfg for bidb | egrep 'Path to log files|LOGARCHMETH1'
Path to log files = /var/ibm/biginsights/database/db2/catalog/NODE0000/SQL00001/LOGSTREAM0000/
First log archive method (LOGARCHMETH1) = DISK:/media/data1/DB2ArchiveLogs/

2. Enable ARCHIVE logging for BIGSQL

Once again, we will create the log archive directory on each node of the cluster (ignoring the errors that it already exists on the local node).

~]$ db2_all "mkdir /media/data1/DB2ArchiveLogs; chmod g+rw /media/data1/DB2ArchiveLogs" | egrep -v "completed ok"
mkdir: cannot create directory `/media/data1/DB2ArchiveLogs': File exists
chmod: changing permissions of `/media/data1/DB2ArchiveLogs': Operation not permitted
~]$ db2 UPDATE DB CFG FOR bigsql USING LOGARCHMETH1 'DISK:/media/data1/DB2ArchiveLogs'
DB20000I The UPDATE DATABASE CONFIGURATION command completed successfully.

The DB2 command applies the change to the database as a whole, despite the instance residing on multiple nodes. Although we used the same path, /media/data1/DB2ArchiveLogs, for both databases, BIDB and BIGSQL, the logging mechanism will keep them separate by creating subdirectories for each instance and database.

B. [Optional] Further changes to log files

Since we're making changes to database log file parameters, there is another parameter which we recommend modifying while the system is already stopped. The modification is not related to backups, but rather to the ability to perform larger transactions (changes to the metadata database).
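To confirm the change from a script, checking LOGARCHMETH1 in the database configuration is enough. A small sketch (the helper is our own, and it assumes the configuration output format shown above):

```shell
# Sketch: succeed (exit 0) when archive logging to disk is enabled for a
# database. Our own helper; it greps the "db cfg" line shown above.
archive_logging_enabled() {    # $1 = database name
  db2 get db cfg for "$1" | grep -q 'LOGARCHMETH1) = DISK:'
}
```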
It has been observed that some Big SQL commands which modify a large quantity of metadata in the Hive Metastore may report that the catalog BIDB database transaction log has been exhausted. The default size of the transaction logs allows for approximately 100 MB worth of transaction log records. This is calculated by looking at the log file size and the number of primary and secondary log files allowed in a single transaction.

~]$ db2 get db cfg for bidb | egrep 'LOGFILSIZ|LOGPRIMARY|LOGSECOND'
Log file size (4KB) (LOGFILSIZ) = 1024
Number of primary log files (LOGPRIMARY) = 13
Number of secondary log files (LOGSECOND) = 12
~]$ echo "1024 * 4096 * (13 + 12) / 1024^2" | bc
100
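The arithmetic above generalizes to any combination of the three parameters: LOGFILSIZ is in 4 KB pages, so one transaction can use at most LOGFILSIZ x 4096 x (LOGPRIMARY + LOGSECOND) bytes of log space. A small sketch of our own:

```shell
# Sketch: maximum log space available to a single transaction, in MB.
# $1 = LOGFILSIZ (in 4 KB pages), $2 = LOGPRIMARY, $3 = LOGSECOND.
log_capacity_mb() {
  echo $(( $1 * 4096 * ($2 + $3) / 1024 / 1024 ))
}
```

With the defaults above, `log_capacity_mb 1024 13 12` prints 100; with LOGSECOND raised to 100 as in the next step, `log_capacity_mb 1024 13 100` prints 452 (about a 4.5x increase).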

By increasing the value of LOGSECOND, we can allow the size of a transaction to grow only when it's absolutely needed, without the need to pre-allocate that space (LOGPRIMARY indicates pre-allocated log files).

~]$ db2 update db cfg for bidb using LOGSECOND 100
DB20000I The UPDATE DATABASE CONFIGURATION command completed successfully.

(We increased the capacity here by about 4.5 times.)

C. [Optional] Capping diagnostic log size (DIAGSIZE)

Another configuration change you might want to make while BigInsights is stopped, which can simplify DB2 maintenance, is to cap the size of the DB2 diagnostic logs. By default, all diagnostic messages are written to the files db2diag.log and <instance>.nfy, located in the sqllib/db2dump directory under the instance home directory. Unchecked, these files can grow indefinitely. To avoid the need to occasionally truncate the log files, you can set the DIAGSIZE configuration parameter to the number of megabytes you are willing to allocate for them. You can do this for both the catalog and bigsql instances (users).

~]# su - catalog
~]$ db2 GET DBM CFG | grep DIAGSIZE
Size of rotating db2diag & notify logs (MB) (DIAGSIZE) = 0
~]$ db2 UPDATE DBM CFG USING diagsize 1024
DB20000I The UPDATE DATABASE MANAGER CONFIGURATION command completed successfully.

You can read more about this parameter at 01.ibm.com/support/knowledgecenter/SSEPGG_10.5.0/com.ibm.db2.luw.admin.config.doc/doc/r html?cp=ssepgg_10.5.0%2f &lang=en.

D. Perform the initial OFFLINE backup

Once archive logging is configured, you are required to perform an initial offline backup in order to establish a starting point for your database recoverability. If you try to connect to the database now, you will see the following message:

~]$ db2 CONNECT TO bidb
SQL1116N A connection to or activation of database "BIDB" failed because the database is in BACKUP PENDING state.
SQLSTATE=57019

Follow the steps in <III.B-Perform an OFFLINE database backup> to perform the offline backup for both metadata databases, BIDB and BIGSQL.

After the offline backup, the CONNECT statement should succeed. Rather than wait for completed log files to be archived in due course, you can test the archive logging as seen below:

~]$ db2 BACKUP DB bidb TO /media/data1/DB2Backups/catalog

Backup successful. The timestamp for this backup image is :

~]$ db2 CONNECT TO bidb

Database Connection Information

Database server = DB2/LINUXX
SQL authorization ID = CATALOG
Local database alias = BIDB

~]$ db2 terminate
DB20000I The TERMINATE command completed successfully.
~]$ db2 ARCHIVE LOG FOR DB bidb
DB20000I The ARCHIVE LOG command completed successfully.
~]$ find /media/data1/DB2ArchiveLogs/
/media/data1/DB2ArchiveLogs/
/media/data1/DB2ArchiveLogs/catalog
/media/data1/DB2ArchiveLogs/catalog/BIDB
/media/data1/DB2ArchiveLogs/catalog/BIDB/NODE0000
/media/data1/DB2ArchiveLogs/catalog/BIDB/NODE0000/LOGSTREAM0000
/media/data1/DB2ArchiveLogs/catalog/BIDB/NODE0000/LOGSTREAM0000/C

Note the subdirectories for instance and database that were created under the archive log directory.

E. [Optional] Restart BigInsights

If you wish, you can follow the steps in <III.C-Restart BigInsights>. But if you plan on continuing with any of the further steps (we will soon try restoring from a backup), then keep BigInsights in the down state.

To verify that backups can now be performed online, you can activate the database, which is similar to establishing a connection to it.

~]$ db2 ACTIVATE DB bidb
DB20000I The ACTIVATE DATABASE command completed successfully.

(The reason we don't simply establish a connection as before is that the backup command in the next step would simply close it if run from the same session. So this would not really be a valid test of our online backup.)

F. Perform an ONLINE backup

As the catalog user, perform an online backup. Note, this will be necessary for setting up HADR.

~]$ db2 BACKUP DB bidb ONLINE TO /media/data1/DB2Backups/catalog

Backup successful. The timestamp for this backup image is :

And similarly, as the bigsql user:

~]$ db2 ACTIVATE DB bigsql
DB20000I The ACTIVATE DATABASE command completed successfully.
~]$ db2 BACKUP DB bigsql ON ALL DBPARTITIONNUMS ONLINE TO /media/data1/DB2Backups/bigsql

Part Result
DB20000I The BACKUP DATABASE command completed successfully.
DB20000I The BACKUP DATABASE command completed successfully.
DB20000I The BACKUP DATABASE command completed successfully.
DB20000I The BACKUP DATABASE command completed successfully.
DB20000I The BACKUP DATABASE command completed successfully.
DB20000I The BACKUP DATABASE command completed successfully.

Backup successful. The timestamp for this backup image is :
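The two online backups above lend themselves to a single script that keeps the BIDB and BIGSQL images as a pair and can be run from cron. This is a sketch of our own, assuming it runs as root (so that su works without passwords) and the backup paths used in this guide:

```shell
# Sketch: take matching ONLINE backups of both metadata databases back to
# back, to keep BIDB and BIGSQL in sync. Our own script; assumes it runs
# as root (for su) and uses the backup paths from this guide.
backup_both_online() {
  su - catalog -c "db2 BACKUP DB bidb ONLINE TO /media/data1/DB2Backups/catalog" || return 1
  su - bigsql -c "db2 BACKUP DB bigsql ON ALL DBPARTITIONNUMS ONLINE TO /media/data1/DB2Backups/bigsql" || return 1
  echo "both online backups completed"
}
```

Running the second backup only when the first succeeds avoids leaving the pair of databases with images of different vintages.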

V. Setup Automatic Online Backups

At this point you could write your own scripts to perform an online backup and perhaps automate them using cron. But DB2 comes with a built-in mechanism for automating backups, which is part of the DB2 Health Monitor. This monitor wakes up approximately every two hours to check whether the criteria have been met to initiate an online backup of the database. When set up for both the BIDB and BIGSQL databases, these automatic backups can be seen as a safety feature to assure that you have a backup at least whenever the criteria have been met. However, since it's important to keep the content of both databases in synch, you might want to regularly perform a backup on both databases together, or whenever major changes are made to Big SQL metadata.

You can read more about the automatic online backup mechanism (and its sub-topics) at 01.ibm.com/support/knowledgecenter/SSEPGG_10.5.0/com.ibm.db2.luw.admin.ha.doc/doc/c html?cp=SSEPGG_10.5.0%2F &lang=en

These instructions are for the BIDB database, but you can do the same for BIGSQL.

A. Define an automatic backup policy

Before enabling the automatic backup mechanism, we will establish a backup policy by providing an XML file to the AUTOMAINT_SET_POLICYFILE() stored procedure. It is recommended to use the provided sample XML file as a starting point. Note that the sample file has only read permissions enabled, so we will have to enable our copy for writing. We will work in the sqllib/tmp directory, as this is the default location where the procedure looks for the XML file.

~]$ cd sqllib/tmp
tmp]$ cp ../samples/automaintcfg/DB2AutoBackupPolicySample.xml bidb_DB2AutoBackupPolicy.xml
tmp]$ chmod +w bidb_DB2AutoBackupPolicy.xml
tmp]$ vi bidb_DB2AutoBackupPolicy.xml

The sample XML file has many comments, so here is a diff showing the only two changes which we've made.

tmp]$ diff ../samples/automaintcfg/DB2AutoBackupPolicySample.xml bidb_DB2AutoBackupPolicy.xml
64c64
<      <PathName/>
---
>      <PathName>/media/data1/DB2Backups/catalog</PathName>
117c117
<    <BackupCriteria numberOfFullBackups="1" timeSinceLastBackup="168" logSpaceConsumedSinceLastBackup="6400" />
---
>    <BackupCriteria numberOfFullBackups="1" timeSinceLastBackup="168" logSpaceConsumedSinceLastBackup="6144" />

Here is the full block of the first change:

   <BackupOptions mode="Online">
     <BackupTarget>
       <DiskBackupTarget>
         <PathName>/media/data1/DB2Backups/catalog</PathName>
       </DiskBackupTarget>
     </BackupTarget>
   </BackupOptions>

This block indicates that we have chosen to perform online backups to disk, and that rather than use the default target location (located on the same storage as the database itself), we will use the path that we created earlier on a different filesystem (with plenty of room).

The second change (on the line with BackupCriteria) tells DB2 to perform a backup under any of the following conditions:

- There must be at least 1 full backup.
- Perform a backup at least once a week (168 hours).
- Perform a backup if at least six log files (6 x 1024 pages) have been used, indicating changes to the database. (Recall from an earlier section that each log file is 1024 pages of 4096 bytes.)

Now connect to the BIDB database. After using the AUTOMAINT_GET_POLICYFILE() stored procedure to save the current policy to a file prefixed with orig_, use AUTOMAINT_SET_POLICYFILE() to load the new policy. Note, the policy will be saved without all of the comments of the XML file. As mentioned, the default location of the XML file referenced by the stored procedures is the ~/sqllib/tmp directory.

~]$ cd sqllib/tmp
tmp]$ db2 CONNECT TO bidb

   Database Connection Information

 Database server        = DB2/LINUXX
 SQL authorization ID   = CATALOG
 Local database alias   = BIDB

tmp]$ db2 "call sysproc.automaint_get_policyfile( 'AUTO_BACKUP', 'orig_DB2AutoBackupPolicy.xml')"

  Return Status = 0

tmp]$ db2 "call sysproc.automaint_set_policyfile( 'AUTO_BACKUP', 'bidb_DB2AutoBackupPolicy.xml')"

  Return Status = 0

B. Enable the automatic backup policy

Check the current value of the AUTO_DB_BACKUP configuration parameter, which is part of a series of automated maintenance parameters. Make sure both of these parameters are turned ON.

~]$ db2 GET DB CFG FOR bidb | egrep 'AUTO_MAINT|AUTO_DB_BACKUP'
 Automatic maintenance                      (AUTO_MAINT) = ON
   Automatic database backup            (AUTO_DB_BACKUP) = OFF

~]$ db2 UPDATE DB CFG FOR bidb USING AUTO_DB_BACKUP ON
DB20000I  The UPDATE DATABASE CONFIGURATION command completed successfully.
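As a sanity check on the log-space threshold of 6144 discussed above (assuming, as the surrounding text suggests, that logSpaceConsumedSinceLastBackup is counted in 4 KB log pages — worth confirming against the DB2 documentation for your release), the arithmetic works out as follows:

```shell
# Six log files of LOGFILSIZ=1024 pages each, 4096 bytes per page.
LOGFILSIZ=1024                  # pages per log file (the LOGFILSIZ db cfg value)
FILES=6                         # trigger after roughly six log files of changes
PAGES=$((FILES * LOGFILSIZ))    # the value to put in BackupCriteria
MB=$((PAGES * 4096 / 1024 / 1024))
echo "$PAGES pages = $MB MB of logged changes"   # prints "6144 pages = 24 MB of logged changes"
```

So the policy asks for a backup after roughly 24 MB of logged metadata changes.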

C. Check for the automatic backups

Instead of just periodically checking the target backup path for the appearance of new backup images, you can use the db2diag command to search the DB2 diagnostic log for messages from the automatic backup facility. (This is just a sample of the filtering that can be done with the db2diag command. This method is preferred to simply using grep on the db2diag.log file, since the entire block/entry is displayed.)

~]$ db2diag -g "PROC=db2acd,FUNCTION:=hmonBkpBackupDBOnline" | more

I E431              LEVEL: Event
PID     :           TID  :            PROC : db2acd
INSTANCE: catalog   NODE : 000        DB   : BIDB
APPID   : *LOCAL.catalog
HOSTNAME: testmg1.iic.il.ibm.com
FUNCTION: DB2 UDB, Health Monitor, hmonBkpBackupDBOnline, probe:500
START   : Automatic job "Backup database online" has started on database BIDB, alias BIDB

I E381              LEVEL: Event
PID     :           TID  :            PROC : db2acd
INSTANCE: catalog   NODE : 000
HOSTNAME: testmg1.iic.il.ibm.com
FUNCTION: DB2 UDB, Health Monitor, hmonBkpBackupDBOnline, probe:530
STOP    : Automatic job "Backup database online" has completed successfully on database BIDB, alias BIDB

I E431              LEVEL: Event
PID     :           TID  :            PROC : db2acd
INSTANCE: catalog   NODE : 000        DB   : BIDB
APPID   : *LOCAL.catalog
HOSTNAME: testmg1.iic.il.ibm.com
FUNCTION: DB2 UDB, Health Monitor, hmonBkpBackupDBOnline, probe:500
START   : Automatic job "Backup database online" has started on database BIDB, alias BIDB

I E381              LEVEL: Event
PID     :           TID  :            PROC : db2acd
INSTANCE: catalog   NODE : 000
HOSTNAME: testmg1.iic.il.ibm.com
FUNCTION: DB2 UDB, Health Monitor, hmonBkpBackupDBOnline, probe:530
STOP    : Automatic job "Backup database online" has completed successfully on database BIDB, alias BIDB

This output shows two backups taken two hours apart, though that frequency might be unusual in your environment. (In order to demonstrate this feature, we purposely created a lot of bogus metadata to force a new backup.)
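If you want a scripted check rather than paging through the diagnostic log, the completion events can simply be counted. A minimal sketch — `count_auto_backups` is an invented helper name, and it matches on the STOP message text shown above:

```shell
# Count completed automatic "Backup database online" jobs in db2diag output
# read on stdin. (count_auto_backups is an illustrative name, not a DB2 tool.)
count_auto_backups() {
    grep -c 'Automatic job "Backup database online" has completed successfully'
}

# Typical use on a DB2 host:
#   db2diag -g "PROC=db2acd,FUNCTION:=hmonBkpBackupDBOnline" | count_auto_backups
```

A cron job could run this daily and alert when the count has not increased since the previous run.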

VI. Restore the Databases from a Backup

It's always a good idea to test your recovery preparedness before disaster strikes. So, let's restore our databases from the backup images which we've taken. This should be done after completing <IV.F-Perform an ONLINE backup>, or at least <III.B-Perform an OFFLINE database backup>. Verify that you have a current backup by looking in the DB2Backups sub-directories.

A. Stop BigInsights

If you've started BigInsights, refer to <III.A-Stop BigInsights, but restart the DB2 instances>.

B. Restore the BIDB database

If restoring to a new server which didn't have this database before, you would follow the preparation steps described in <VIII.A-[Optional] Install BigInsights on the standby server>, <VIII.B-Install DB2 on the standby server> (which also creates the catalog instance) and <VIII.C.1-Create directories needed for the database>. But the examples here assume that you are replacing an existing database with the one from the backup image.

Choose the latest backup image from the DB2Backups/catalog directory and use the timestamp found in the second-to-last portion of the name to identify it with the TAKEN AT option of the RESTORE command.

~]$ ls -l /media/data1/db2backups/catalog
total
rw  catalog biadmin  Jan 29 17:56 BIDB.0.catalog.DBPART
rw  catalog biadmin  Feb  4 14:13 BIDB.0.catalog.DBPART
rw  catalog biadmin  Feb  4 14:14 BIDB.0.catalog.DBPART

~]$ db2 RESTORE DB bidb FROM /media/data1/db2backups/catalog TAKEN AT REPLACE EXISTING
SQL2539W  The specified name of the backup image to restore is the same as the name of the target database. Restoring to an existing database that is the same as the backup image database will cause the current database to be overwritten by the backup version.
DB20000I  The RESTORE DATABASE command completed successfully.

(If not replacing an existing database, you would replace the REPLACE EXISTING option with LOGTARGET DEFAULT. This extracts the transaction log which was active during the backup and copies it to the default location for active logs.)
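Picking the TAKEN AT value can also be scripted rather than eyeballed. This sketch assumes the usual DB2 disk-image naming `<alias>.<type>.<instance>.DBPART<nnn>.<timestamp>.<seq>` (check the names in your own backup directory), and `latest_backup_ts` is an invented helper name:

```shell
# Print the newest backup timestamp among DB2 disk backup images in a directory.
# Assumes image names of the form <alias>.<type>.<instance>.DBPART<nnn>.<timestamp>.<seq>.
latest_backup_ts() {
    local dir=$1 db=$2
    # The timestamp is the second-to-last dot-separated field of each file name.
    ls "$dir/$db".* 2>/dev/null | awk -F. '{print $(NF-1)}' | sort -n | tail -1
}

# Hypothetical usage with the RESTORE command from the text:
#   db2 RESTORE DB bidb FROM /media/data1/db2backups/catalog \
#       TAKEN AT "$(latest_backup_ts /media/data1/db2backups/catalog BIDB)" \
#       REPLACE EXISTING
```

Since the timestamps sort numerically, `sort -n | tail -1` always yields the most recent image.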

If the backup chosen was an OFFLINE backup, in other words, one taken without archive logging enabled, then the database would now be available for connections. However, if the image was of an ONLINE backup, then the database is now in what is called a rollforward pending state. This allows you to roll forward through all of the archive logs taken since the backup, in order to bring the database fully up to date (to the latest point-in-time). Here is how to check for the rollforward pending state and perform the ROLLFORWARD.

~]$ db2 GET DB CFG FOR bidb | grep Rollforward
 Rollforward pending                                     = DATABASE

~]$ db2 ROLLFORWARD DB bidb TO END OF LOGS AND STOP

                                 Rollforward Status

 Input database alias                   = bidb
 Number of members have returned status = 1

 Member ID                              = 0
 Rollforward status                     = not pending
 Next log file to be read               =
 Log files processed                    = S LOG - S LOG
 Last committed transaction             = UTC

DB20000I  The ROLLFORWARD command completed successfully.

~]$ db2 GET DB CFG FOR bidb | grep Rollforward
 Rollforward pending                                     = NO

You can investigate on your own how to use the ROLLFORWARD command to select a different point-in-time.

C. Restore the BIGSQL database

As with backing up, the restore procedure is slightly different for the BIGSQL database since it exists on multiple nodes. With a database of this sort, it's necessary to first restore the data on the management node; then the rest of the nodes can be restored in parallel.

~]$ db2 RESTORE DB bigsql FROM /media/data1/db2backups/bigsql TAKEN AT REPLACE EXISTING
SQL2539W  The specified name of the backup image to restore is the same as the name of the target database. Restoring to an existing database that is the same as the backup image database will cause the current database to be overwritten by the backup version.
DB20000I  The RESTORE DATABASE command completed successfully.

~]$ db2_all "<<-0<; db2 RESTORE DB bigsql FROM /media/data1/db2backups/bigsql TAKEN AT REPLACE EXISTING" | grep 'The RESTORE DATABASE command'
testdn1.iic.il.ibm.com: DB20000I The RESTORE DATABASE command completed successfully.
testdn2.iic.il.ibm.com: DB20000I The RESTORE DATABASE command completed successfully.
testdn4.iic.il.ibm.com: DB20000I The RESTORE DATABASE command completed successfully.
testdn5.iic.il.ibm.com: DB20000I The RESTORE DATABASE command completed successfully.
testdn3.iic.il.ibm.com: DB20000I The RESTORE DATABASE command completed successfully.

~]$ db2 ROLLFORWARD DB bigsql TO END OF LOGS ON ALL DBPARTITIONNUMS AND STOP

                                 Rollforward Status

 Input database alias                   = bigsql
 Number of members have returned status = 6

 Member  Rollforward    Next log     Log files processed    Last committed transaction
 ID      status         to be read
 ------  -----------    ----------   --------------------   --------------------------
 0       not pending                 S LOG-S LOG             UTC
 1       not pending                 S LOG-S LOG             UTC
 2       not pending                 S LOG-S LOG             UTC
 3       not pending                 S LOG-S LOG             UTC
 4       not pending                 S LOG-S LOG             UTC
 5       not pending                 S LOG-S LOG             UTC

DB20000I  The ROLLFORWARD command completed successfully.

The db2_all utility was used here again, but with a new notation before the actual DB2 RESTORE command. The notation starts with <<-0<, which means to run on all nodes EXCEPT node 0 (the management node), and is followed by a semicolon, which means that the command should be run in parallel. (Once again, grep was used to shorten the output displayed.)

D. Restart BigInsights

If you will NOT be continuing to the setup of HADR, refer to <III.C-Restart BigInsights>.

VII. Purge Old Backups and Archive Logs

Hopefully you've decided to set up archive logging and enable automatic online backups (and/or established your own automated backup scripts). However, left unchecked, the logs and backups will now continue to fill up your storage. Proper maintenance dictates that you should now set up an automated purging mechanism to delete unneeded logs and backups.

Care should be taken, however, not to delete archive logs which can still be used by ROLLFORWARD after a database RESTORE. On the other hand, there's no need to keep around archive logs which are older than your oldest backup image. Fortunately, DB2 provides an automated purging mechanism which can make all these decisions for you (Best Practices). Before setting that up, we'll show you how to retrieve the relevant information on your backups and archive logs (should you decide to write your own purge mechanism).

A. Using the LIST HISTORY command

You can get a report of the BACKUP and RESTORE commands which you've issued using the LIST HISTORY command as follows (the output has been truncated):

~]$ db2 LIST HISTORY BACKUP ALL FOR bidb

                    List History File for bidb

Number of matching file entries = 11

 Op Obj Timestamp+Sequence Type Dev Earliest Log Current Log  Backup ID
 -- --- ------------------ ---- --- ------------ ------------ --------------
  B  D                      F    D  S LOG        S LOG
 ----------------------------------------------------------------------------
  Contains 3 tablespace(s):
  SYSCATSPACE
  USERSPACE
  SYSTOOLSPACE
 ----------------------------------------------------------------------------
  Comment: DB2 BACKUP BIDB OFFLINE
  Start Time:
  End Time:
  Status: A
 ----------------------------------------------------------------------------
  EID: 124 Location: /media/data1/db2backups/catalog

 Op Obj Timestamp+Sequence Type Dev Earliest Log Current Log  Backup ID
 -- --- ------------------ ---- --- ------------ ------------ --------------
  B  D                      N    D  S LOG        S LOG
 ----------------------------------------------------------------------------
  Contains 3 tablespace(s):
  SYSCATSPACE
  USERSPACE
  SYSTOOLSPACE
 ----------------------------------------------------------------------------
  Comment: DB2 BACKUP BIDB ONLINE
  Start Time:
  End Time:
  Status: A
 ----------------------------------------------------------------------------
  EID: 218 Location: /media/data1/db2backups/catalog

(You might be interested in a similar command, LIST HISTORY ARCHIVE LOG.)

B. Using the DB_HISTORY view

Another way to display the history, which allows you to be more selective about how and what information is shown, is to connect to the database and query an administrative view called DB_HISTORY.

~]$ db2 CONNECT TO bidb

   Database Connection Information

 Database server        = DB2/LINUXX
 SQL authorization ID   = CATALOG
 Local database alias   = BIDB

~]$ db2 "SELECT CHAR( operation, 1) oper, CHAR( operationtype, 1) type, start_time, num_log_elems, VARCHAR( firstlog, 12) firstlog, VARCHAR( lastlog, 12) lastlog FROM sysibmadm.db_history WHERE objecttype = 'D' AND operation = 'B' ORDER BY start_time"

OPER TYPE START_TIME     NUM_LOG_ELEMS        FIRSTLOG     LASTLOG
---- ---- -------------- -------------------- ------------ ------------
B    F                                        S LOG        S LOG
B    F                                        S LOG        S LOG
B    F                                        S LOG        S LOG
B    N                                        S LOG        S LOG
B    N                                        S LOG        S LOG
B    F                                        S LOG        S LOG
B    N                                        S LOG        S LOG
B    N                                        S LOG        S LOG
B    N                                        S LOG        S LOG
B    N                                        S LOG        S LOG
B    N                                        S LOG        S LOG

  11 record(s) selected.

C. Setup automatic purge

DB2 provides a mechanism to let you establish how many backup images you wish to retain. This mechanism is also aware of which archive logs are relevant to the retained backup images. When a new backup is taken, it automatically purges the oldest image, its relevant archive logs, and even cleans out the records from the history tracking file. Here are the relevant configuration parameters and their default values:

~]$ db2 GET DB CFG FOR bidb | egrep 'AUTO_DEL_REC_OBJ|REC_HIS_RETENTN|NUM_DB_BACKUPS'
 Number of database backups to retain   (NUM_DB_BACKUPS) = 12
 Recovery history retention (days)     (REC_HIS_RETENTN) = 366
 Auto deletion of recovery objects    (AUTO_DEL_REC_OBJ) = OFF
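Enabling the mechanism then comes down to updating these three parameters for each database. A minimal sketch, not a recommendation from this guide: the `enable_auto_purge` name and the retention values are illustrative, and the `DB2` stub variable lets you preview the commands before running them as each instance owner.

```shell
# Sketch: switch on automatic purging of old backup images, their archive
# logs, and their history records. Retention values here are illustrative.
DB2=${DB2:-db2}     # set DB2="echo db2" to preview the commands

enable_auto_purge() {
    local db=$1 keep=$2
    $DB2 UPDATE DB CFG FOR "$db" USING NUM_DB_BACKUPS "$keep"
    $DB2 UPDATE DB CFG FOR "$db" USING REC_HIS_RETENTN 90   # e.g. 90 days of history
    $DB2 UPDATE DB CFG FOR "$db" USING AUTO_DEL_REC_OBJ ON
}

# Preview the commands for the catalog database, retaining 4 backup images:
DB2="echo db2" enable_auto_purge bidb 4
```

Run the same function without the stub (as the catalog user for BIDB, and the bigsql user for BIGSQL) to apply the settings for real.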


More information

Best Practices. Using IBM InfoSphere Optim High Performance Unload as part of a Recovery Strategy. IBM Smart Analytics System

Best Practices. Using IBM InfoSphere Optim High Performance Unload as part of a Recovery Strategy. IBM Smart Analytics System IBM Smart Analytics System Best Practices Using IBM InfoSphere Optim High Performance Unload as part of a Recovery Strategy Garrett Fitzsimons IBM Data Warehouse Best Practices Specialist Konrad Emanowicz

More information

AKCess Pro Server Backup & Restore Manual

AKCess Pro Server Backup & Restore Manual www.akcp.com AKCess Pro Server Backup & Restore Manual Copyright 2015, AKCP Co., Ltd. Table of Contents Introduction... 3 Backup process... 4 Copying the backup export files to other media... 9 Tips for

More information

ADSMConnect Agent for Oracle Backup on Sun Solaris Installation and User's Guide

ADSMConnect Agent for Oracle Backup on Sun Solaris Installation and User's Guide ADSTAR Distributed Storage Manager ADSMConnect Agent for Oracle Backup on Sun Solaris Installation and User's Guide IBM Version 2 SH26-4063-00 IBM ADSTAR Distributed Storage Manager ADSMConnect Agent

More information

Netezza PureData System Administration Course

Netezza PureData System Administration Course Course Length: 2 days CEUs 1.2 AUDIENCE After completion of this course, you should be able to: Administer the IBM PDA/Netezza Install Netezza Client Software Use the Netezza System Interfaces Understand

More information

11. Configuring the Database Archiving Mode.

11. Configuring the Database Archiving Mode. 11. Configuring the Database Archiving Mode. Abstract: Configuring an Oracle database for backup and recovery can be complex. At a minimum, you must understand the archive process, the initialization parameters

More information

Automated Offsite Backup with rdiff-backup

Automated Offsite Backup with rdiff-backup Automated Offsite Backup with rdiff-backup Michael Greb 2003-10-21 Contents 1 Overview 2 1.1 Conventions Used........................................... 2 2 Setting up SSH 2 2.1 Generating SSH Keys........................................

More information

IDERA WHITEPAPER. The paper will cover the following ten areas: Monitoring Management. WRITTEN BY Greg Robidoux

IDERA WHITEPAPER. The paper will cover the following ten areas: Monitoring Management. WRITTEN BY Greg Robidoux WRITTEN BY Greg Robidoux Top SQL Server Backup Mistakes and How to Avoid Them INTRODUCTION Backing up SQL Server databases is one of the most important tasks DBAs perform in their SQL Server environments

More information

NovaBACKUP. User Manual. NovaStor / November 2011

NovaBACKUP. User Manual. NovaStor / November 2011 NovaBACKUP User Manual NovaStor / November 2011 2011 NovaStor, all rights reserved. All trademarks are the property of their respective owners. Features and specifications are subject to change without

More information

Backup and Recovery. What Backup, Recovery, and Disaster Recovery Mean to Your SQL Anywhere Databases

Backup and Recovery. What Backup, Recovery, and Disaster Recovery Mean to Your SQL Anywhere Databases Backup and Recovery What Backup, Recovery, and Disaster Recovery Mean to Your SQL Anywhere Databases CONTENTS Introduction 3 Terminology and concepts 3 Database files that make up a database 3 Client-side

More information

Managed File Transfer

Managed File Transfer , page 1 External Database, page 3 External File Server, page 5 Cisco XCP File Transfer Manager RTMT Alarms and Counters, page 9 Workflow, page 11 Troubleshooting, page 22 Cisco Jabber Client Interoperability,

More information

SQL Server Training Course Content

SQL Server Training Course Content SQL Server Training Course Content SQL Server Training Objectives Installing Microsoft SQL Server Upgrading to SQL Server Management Studio Monitoring the Database Server Database and Index Maintenance

More information

Database Maintenance Guide

Database Maintenance Guide Database Maintenance Guide Medtech Evolution - Document Version 5 Last Modified on: February 26th 2015 (February 2015) This documentation contains important information for all Medtech Evolution users

More information

FalconStor Recovery Agents User Guide

FalconStor Recovery Agents User Guide FalconStor Recovery Agents User Guide FalconStor Software, Inc. 2 Huntington Quadrangle Melville, NY 11747 Phone: 631-777-5188 Fax: 631-501-7633 Web site: www.falconstor.com Copyright 2007-2009 FalconStor

More information

SonicWALL CDP 5.0 Microsoft Exchange InfoStore Backup and Restore

SonicWALL CDP 5.0 Microsoft Exchange InfoStore Backup and Restore SonicWALL CDP 5.0 Microsoft Exchange InfoStore Backup and Restore Document Scope This solutions document describes how to configure and use the Microsoft Exchange InfoStore Backup and Restore feature in

More information

1 Introduction FrontBase is a high performance, scalable, SQL 92 compliant relational database server created in the for universal deployment.

1 Introduction FrontBase is a high performance, scalable, SQL 92 compliant relational database server created in the for universal deployment. FrontBase 7 for ios and Mac OS X 1 Introduction FrontBase is a high performance, scalable, SQL 92 compliant relational database server created in the for universal deployment. On Mac OS X FrontBase can

More information

Backups and Maintenance

Backups and Maintenance Backups and Maintenance Backups and Maintenance Objectives Learn how to create a backup strategy to suit your needs. Learn how to back up a database. Learn how to restore from a backup. Use the Database

More information

HPCx Archiving User Guide V 1.2

HPCx Archiving User Guide V 1.2 HPCx Archiving User Guide V 1.2 Elena Breitmoser, Ian Shore April 28, 2004 Abstract The Phase 2 HPCx system will have 100 Tb of storage space, of which around 70 Tb comprises offline tape storage rather

More information

Zen Internet. Online Data Backup. Zen Vault Professional Plug-ins. Issue: 2.0.08

Zen Internet. Online Data Backup. Zen Vault Professional Plug-ins. Issue: 2.0.08 Zen Internet Online Data Backup Zen Vault Professional Plug-ins Issue: 2.0.08 Contents 1 Plug-in Installer... 3 1.1 Installation and Configuration... 3 2 Plug-ins... 5 2.1 Email Notification... 5 2.1.1

More information

Getting to Know the SQL Server Management Studio

Getting to Know the SQL Server Management Studio HOUR 3 Getting to Know the SQL Server Management Studio The Microsoft SQL Server Management Studio Express is the new interface that Microsoft has provided for management of your SQL Server database. It

More information

Destiny system backups white paper

Destiny system backups white paper Destiny system backups white paper Establishing a backup and restore plan for Destiny Overview It is important to establish a backup and restore plan for your Destiny installation. The plan must be validated

More information

Integrating VoltDB with Hadoop

Integrating VoltDB with Hadoop The NewSQL database you ll never outgrow Integrating with Hadoop Hadoop is an open source framework for managing and manipulating massive volumes of data. is an database for handling high velocity data.

More information

Application Note - JDSU PathTrak Video Monitoring System Data Backup and Restore Process

Application Note - JDSU PathTrak Video Monitoring System Data Backup and Restore Process Application Note - JDSU PathTrak Video Monitoring System Data Backup and Restore Process This Application Note provides instructions on how to backup and restore JDSU PathTrak Video Monitoring data. Automated

More information

11. Oracle Recovery Manager Overview and Configuration.

11. Oracle Recovery Manager Overview and Configuration. 11. Oracle Recovery Manager Overview and Configuration. Abstract: This lesson provides an overview of RMAN, including the capabilities and components of the RMAN tool. The RMAN utility attempts to move

More information

RapidSeed for Replicating Systems Version 7.4

RapidSeed for Replicating Systems Version 7.4 RapidSeed for Replicating Systems Version 7.4 7 Technology Circle, Suite 100 Columbia, SC 29203 Phone: 803.454.0300 Contents Overview...3 Supported seed devices by system...4 Prerequisites...4 General

More information

Chapter 25 Backup and Restore

Chapter 25 Backup and Restore System 800xA Training Chapter 25 Backup and Restore TABLE OF CONTENTS Chapter 25 Backup and Restore... 1 25.1 General Information... 2 25.1.1 Objectives... 2 25.1.2 Legend... 2 25.1.3 Reference Documentation...

More information

TechComplete Test Productivity Pack (TPP) Backup Process and Data Restoration

TechComplete Test Productivity Pack (TPP) Backup Process and Data Restoration Introduction The TPP backup feature backs up all TPP data folders on to a storage device which can be used to recover data in case of problems with the TPP server. TPP data folders include TPP server data,

More information

RecoveryVault Express Client User Manual

RecoveryVault Express Client User Manual For Linux distributions Software version 4.1.7 Version 2.0 Disclaimer This document is compiled with the greatest possible care. However, errors might have been introduced caused by human mistakes or by

More information

Oracle Insurance Policy Administration

Oracle Insurance Policy Administration Oracle Insurance Policy Administration Databases Installation Instructions Step 1 Version 10.1.2.0 Document Part Number: E59346-01 December, 2014 Copyright 2009, 2014, Oracle and/or its affiliates. All

More information

Microsoft Exchange 2003 Disaster Recovery Operations Guide

Microsoft Exchange 2003 Disaster Recovery Operations Guide Microsoft Exchange 2003 Disaster Recovery Operations Guide Microsoft Corporation Published: December 12, 2006 Author: Exchange Server Documentation Team Abstract This guide provides installation and deployment

More information

CA Workload Automation Agent for Databases

CA Workload Automation Agent for Databases CA Workload Automation Agent for Databases Implementation Guide r11.3.4 This Documentation, which includes embedded help systems and electronically distributed materials, (hereinafter referred to as the

More information

IGEL Universal Management. Installation Guide

IGEL Universal Management. Installation Guide IGEL Universal Management Installation Guide Important Information Copyright This publication is protected under international copyright laws, with all rights reserved. No part of this manual, including

More information

EMC Avamar 7.2 for IBM DB2

EMC Avamar 7.2 for IBM DB2 EMC Avamar 7.2 for IBM DB2 User Guide 302-001-793 REV 01 Copyright 2001-2015 EMC Corporation. All rights reserved. Published in USA. Published June, 2015 EMC believes the information in this publication

More information

WHITEPAPER. A Technical Perspective on the Talena Data Availability Management Solution

WHITEPAPER. A Technical Perspective on the Talena Data Availability Management Solution WHITEPAPER A Technical Perspective on the Talena Data Availability Management Solution BIG DATA TECHNOLOGY LANDSCAPE Over the past decade, the emergence of social media, mobile, and cloud technologies

More information

Installation and Setup: Setup Wizard Account Information

Installation and Setup: Setup Wizard Account Information Installation and Setup: Setup Wizard Account Information Once the My Secure Backup software has been installed on the end-user machine, the first step in the installation wizard is to configure their account

More information

1. Product Information

1. Product Information ORIXCLOUD BACKUP CLIENT USER MANUAL LINUX 1. Product Information Product: Orixcloud Backup Client for Linux Version: 4.1.7 1.1 System Requirements Linux (RedHat, SuSE, Debian and Debian based systems such

More information

Symantec Backup Exec Desktop Laptop Option ( DLO )

Symantec Backup Exec Desktop Laptop Option ( DLO ) The following is a short description of our backup software: BACKUP-EXEC-DLO. We have set many of the parameters for you but, if necessary, these parameters can be changed. Symantec Backup Exec Desktop

More information

Backup and Restore of CONFIGURATION Object on Windows 2008

Backup and Restore of CONFIGURATION Object on Windows 2008 Backup and Restore of CONFIGURATION Object on Windows 2008 Technical Whitepaper Contents Introduction... 3 CONFIGURATION Backup... 3 Windows configuration objects... 3 Active Directory... 4 DFS... 4 DHCP

More information

Cloudera Manager Backup and Disaster Recovery

Cloudera Manager Backup and Disaster Recovery Cloudera Manager Backup and Disaster Recovery Important Notice (c) 2010-2015 Cloudera, Inc. All rights reserved. Cloudera, the Cloudera logo, Cloudera Impala, and any other product or service names or

More information

CTERA Agent for Linux

CTERA Agent for Linux User Guide CTERA Agent for Linux September 2013 Version 4.0 Copyright 2009-2013 CTERA Networks Ltd. All rights reserved. No part of this document may be reproduced in any form or by any means without written

More information

TP1: Getting Started with Hadoop

TP1: Getting Started with Hadoop TP1: Getting Started with Hadoop Alexandru Costan MapReduce has emerged as a leading programming model for data-intensive computing. It was originally proposed by Google to simplify development of web

More information

WhatsUp Gold v16.3 Installation and Configuration Guide

WhatsUp Gold v16.3 Installation and Configuration Guide WhatsUp Gold v16.3 Installation and Configuration Guide Contents Installing and Configuring WhatsUp Gold using WhatsUp Setup Installation Overview... 1 Overview... 1 Security considerations... 2 Standard

More information

Online Backup Linux Client User Manual

Online Backup Linux Client User Manual Online Backup Linux Client User Manual Software version 4.0.x For Linux distributions August 2011 Version 1.0 Disclaimer This document is compiled with the greatest possible care. However, errors might

More information

RAID Utility User s Guide Instructions for setting up RAID volumes on a computer with a MacPro RAID Card or Xserve RAID Card.

RAID Utility User s Guide Instructions for setting up RAID volumes on a computer with a MacPro RAID Card or Xserve RAID Card. RAID Utility User s Guide Instructions for setting up RAID volumes on a computer with a MacPro RAID Card or Xserve RAID Card. 1 Contents 3 RAID Utility User s Guide 3 Installing the RAID Software 4 Running

More information

Data Domain Profiling and Data Masking for Hadoop

Data Domain Profiling and Data Masking for Hadoop Data Domain Profiling and Data Masking for Hadoop 1993-2015 Informatica LLC. No part of this document may be reproduced or transmitted in any form, by any means (electronic, photocopying, recording or

More information

Oracle Data Integrator for Big Data. Alex Kotopoulis Senior Principal Product Manager

Oracle Data Integrator for Big Data. Alex Kotopoulis Senior Principal Product Manager Oracle Data Integrator for Big Data Alex Kotopoulis Senior Principal Product Manager Hands on Lab - Oracle Data Integrator for Big Data Abstract: This lab will highlight to Developers, DBAs and Architects

More information

Reflection DBR USER GUIDE. Reflection DBR User Guide. 995 Old Eagle School Road Suite 315 Wayne, PA 19087 USA 610.964.8000 www.evolveip.

Reflection DBR USER GUIDE. Reflection DBR User Guide. 995 Old Eagle School Road Suite 315 Wayne, PA 19087 USA 610.964.8000 www.evolveip. Reflection DBR USER GUIDE 995 Old Eagle School Road Suite 315 Wayne, PA 19087 USA 610.964.8000 www.evolveip.net Page 1 of 1 Table of Contents Overview 3 Reflection DBR Client and Console Installation 4

More information

Online Backup Client User Manual

Online Backup Client User Manual For Linux distributions Software version 4.1.7 Version 2.0 Disclaimer This document is compiled with the greatest possible care. However, errors might have been introduced caused by human mistakes or by

More information