Sybase Adaptive Server Enterprise


Technical White Paper

Sybase Adaptive Server Enterprise Data Transfer Utility

Contents

Executive Summary
Feature Offerings
The Basics of DTU
    Table Eligibility for Incremental Transfer
    Physical Changes in the Database
    Supported Output Formats
The transfer table Command
Setting Up to Use DTU
Considerations for the Receiving Table
The transfer table to Command
    Options Controlling the Overall Command
    Options Controlling Output for IQ
    Options Controlling Character-Coded Output
Interaction with Other Ongoing Work
    Interaction with Transactions
    Interaction with Other Transfers
Tracking Transfers
File Handling for Transfer
Resending a Transfer
Transfer Speed Tests
DTU, bcp, and Replication Server
License Requirements
Security Restrictions

Executive Summary

Sharing data between applications requires sending data, often repeatedly, from one application to the other(s). Repeatedly selecting all data from a given table and sending it to an outside receiver can send huge volumes of duplicate data, most of it unnecessary because it already exists at the receiving application. This slows processing and creates substantial work at the receiver. The Data Transfer Utility (DTU) lessens this workload within ASE by providing a fast, efficient means of determining which rows in a table have changed since the table's data was most recently transferred. It then sends only those rows in the current transfer.

Feature Offerings

The Data Transfer Utility transfers data from a table to a file on the ASE host, selecting only the rows that have changed since a previous transfer. This is accomplished through a new T-SQL command, transfer table. Transfers are fast and lightweight, interfering only minimally with normal database operations. During normal processing, the utility tracks the physical location of data changes, so a transfer can go directly to the changed data instead of reading every row in the table. The transfer table command is somewhat similar to the bulk copy (bcp) utility: it obtains data by scanning the table, unlike replication, which scans the transaction log. The utility can format its output in one of several forms, suitable for a variety of receivers, primarily ASE and Sybase IQ. It provides a new internal file format suitable for exchanging data with other ASE servers. It offers a configurable history mechanism that remembers the results of previous transfers and applies previous command options as the defaults for the current transfer. A new monitoring table lets users track the progress of ongoing transfers as they occur.
Internal tests show that the utility transfers data out of tables at four to five times the speed of the equivalent bcp out command.

The Basics of DTU

Table Eligibility for Incremental Transfer

While any table may be transferred via DTU, only user-created tables may be marked for incremental transfer. We refer to these marked tables as eligible tables. Transferring an eligible table sends only changed or new data, and only committed data (that is, data whose updating transaction has committed at the time the transfer starts). Optionally, transfer of an eligible table can also interact with ongoing transactions to assure that only transactionally consistent data is sent. Transferring other tables always sends every row in the table, and does not interact with ongoing transactions.

Tables are marked as eligible via a new option to the create table and alter table commands: transfer table on. Once made eligible, tables remain eligible until that option is specifically removed via alter table set transfer table off. While eligible, they can participate in incremental transfers and retain a history of their transfers, as described below.

Physical Changes in the Database

Providing incremental transfer requires a way of knowing what data has already been sent and what hasn't been sent since data was inserted or most recently changed. To accomplish this, transfer table uses two things: a marker in the data and a transfer history.
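As a sketch of the create table and alter table options named above, marking a table eligible might look like this; the table and column names are hypothetical:

```sql
-- Create a table that is eligible for incremental transfer from the start:
create table sales_detail (
    id    int         not null,
    qty   int         not null,
    note  varchar(80) null
)
with transfer table on

-- Mark an existing table eligible (this reallocates the table,
-- because every row grows by the 8-byte timestamp):
alter table sales_detail set transfer table on

-- Remove eligibility (also reallocates the table):
alter table sales_detail set transfer table off
```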

Row Changes for Eligible Tables

The data marker is an 8-byte timestamp in the data row. This is not a date or time, just a sequence number that tells when the data was most recently updated. Having these timestamps is what makes a table eligible for incremental transfer. Every row in an eligible table has this marker. This means that rows in eligible tables are larger than rows in otherwise identical ineligible tables, so an eligible table requires more space in the database.

Tables can be made eligible when they are created. They can also be altered to have their eligibility added or removed. However, because altering the table in this way makes the row larger or smaller, the alteration completely reallocates the table, just as it would when adding or removing a column. Because of these physical row changes, only user-created tables can be eligible for incremental transfer. System catalogs have a required row format that cannot be changed except as required by upgrade.

New Table spt_tabletransfer

Transfer history is stored in a new table, spt_tabletransfer. This table exists in each database containing eligible tables, and is owned by that database's Database Owner (DBO). It contains information about past transfers of eligible tables. The length of the retained history is configurable, with the configuration controlling the number of succeeding and failing transfers ASE will remember for each eligible table. This aids in troubleshooting and recovery: you can tell right away which transfers succeeded, and which failed and why. Also, if something later damages the output data, you can use these history entries to resend data from previous transfers.

Supported Output Formats

The utility can create output files suitable for a variety of receivers: bcp, Sybase IQ, character-coded output, or a new ASE internal file format.
Of these, only ASE's internal format is suitable for directly processing incremental data, because it is the only format that carries status information along with the data. The other formats contain only the data; and while the data itself will be only an incremental data set, there is no way to tell whether an individual row represents new data or a change to previously existing data. Features of the individual output file formats are:

Data for bcp (for bcp) is written in bcp's binary file format. Each output file is accompanied by a format file, in the same directory, that bcp uses to format rows for transfer back to ASE.

Data for Sybase IQ (for iq) is written in IQ's binary format, similar to sending the result of an IQ select statement out to a file in binary form. Where necessary, data has been converted to IQ format. This file is suitable for loading into IQ via load table.

Character-coded data (for csv) is written with user-definable column and row separators. This file is suitable for loading into ASE or IQ via bcp, into IQ via load table, or into third-party programs.

Data in ASE internal form (for ase) is suitable for loading via ASE's new transfer table from command. Again, this is the only file format in which individual rows are marked to show whether they are new rows or updates to older rows. This format also automatically detects when data is being loaded into a machine having a different byte order than the machine where it was produced.

The transfer table Command

To invoke DTU, use a new command, transfer table. The basic form of this command is:

    transfer table table_name to destination_file

Command options modify this command's operation, as explained in "The transfer table to Command" below. Without any options, the first-ever transfer of a table produces output in ASE's internal format. Subsequent transfers of an eligible table use the command options from the previous successful transfer of that table as their defaults.

When table data has been transferred out in ASE internal format, a variant of the same command will transfer it in to another ASE:

    transfer table table_name from source_file

This imports the extracted data into another table. The receiving table must be nearly identical to the source table, as described in "Considerations for the Receiving Table" below.

Setting Up to Use DTU

Setup consists of four steps:

Create table spt_tabletransfer. Do this once in each database that will contain tables marked for incremental transfer, using procedure sp_setup_transfer_table. This should be done by the database owner, or by a user with sa_role on behalf of the database owner. This step is not absolutely required: if you don't complete it, ASE creates the table automatically when you mark a table for incremental transfer. However, to avoid potential problems we suggest you do it manually.

Configure the transfer history length. Do this once for the entire server, using sp_configure "max transfer history", N, where N is an integer. The default is 10, meaning that for each eligible table ASE will remember as many as 10 successful transfers and 10 failing transfers. These history entries are useful for remembering which files transfers were written to, and for recovering from problems such as data files that were damaged before they were loaded into the receiving system.
As a precaution, you might configure this history longer if you produce incremental transfers much more often than you load them, or shorter if you have hundreds of tables that you transfer incrementally and want to limit the size of the transfer history table.
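The first two setup steps might look like this in practice; the database name is hypothetical and the configured history length is merely illustrative:

```sql
-- Step 1: create spt_tabletransfer in each database that will hold
-- eligible tables (run as the database owner, or with sa_role):
use sales_db
go
sp_setup_transfer_table
go

-- Step 2: set the server-wide transfer history length; here ASE will
-- remember up to 20 successful and 20 failing transfers per table:
sp_configure "max transfer history", 20
go
```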

Configure the transfer memory pool. Do this once for the entire server, using sp_configure "transfer utility memory size", N, where N is a number of memory pages (blocks of 2048 bytes). The default is 4096 pages, or 8 megabytes. This memory is used during normal runtime to describe the table and to track where changed data for each table is physically located in the database. It is used during transfers to hold rows being written to or read from the file. You will probably not need to change this configuration: the default is large enough to transfer about 100 normal tables simultaneously. If you don't intend to transfer any tables incrementally, you might configure this memory smaller to reclaim it for other purposes. Conversely, if you have hundreds of tables marked for incremental transfer, or if the tables you intend to transfer are extremely large (hundreds of gigabytes), you may wish to configure it larger to avoid running out of memory when starting a transfer. This memory pool is dynamic, meaning it can be reconfigured larger or smaller without restarting ASE.

Create or modify tables for transfer table. You must do this for each table that you plan to transfer incrementally.

Considerations for the Receiving Table

Requirements for the table that will receive the data vary:

When importing data that was written in bcp format, the receiving table should have the same columns, in the same order, as the source table.

When importing data that was written character-coded, the receiving table need only be capable of accepting the data that appears in the output file.

When importing data that was written for Sybase IQ, IQ's load table command allows you to specify the receiving table's columns in any order necessary to match what appears in the output file, as long as the receiving columns match the size and datatype of the imported data.
All of the above formats are suitable only for importing new data. They cannot handle changes to existing rows. If you know that your data includes changes to existing rows, you should format the data for ASE, so that you can use the transfer table command to load it. The receiving table must have the same columns, in the same order, as the source table. Customers having IMDB licenses must assure that either the source table or the destination table resides in an IMDB or RDDB. The columns of the two tables need not be identical, but they must be "close enough":

A varying-length nullable column from the source table need not be nullable in the receiving table.

A varying-length column in the receiving table must be at least as long as its equivalent in the source table, but it may be longer.

An encrypted column in the source table need not be encrypted in the destination table, as long as the data was written unencrypted and the receiving column is either nullable or varying-length. (ASE stores all encrypted columns as though they were varying-length.)

Columns in the destination table may be encrypted where the source columns were not, as long as the source columns were either nullable or varying-length.

The requirement is that for each column as it appears in the output file, there must be a column capable of receiving the data without converting it to a different datatype. Additionally, in order to receive changes to existing data, the receiving table must have a primary index. The transfer uses this index to locate and remove existing rows so that it can store the new row (until the old row is gone, it is an error to store the replacement row).

The transfer table to Command

As explained above, the basic command for extracting incremental data from an eligible table and writing it to a file is:

    transfer table table_name to destination_file

Those are the only required parts of the command. Options to this command control its operation, describing how the data is formatted, controlling some aspects of the formatting, and controlling how the command behaves. Specify options controlling basic formatting via the for clause:

    transfer table table_name to destination_file
        for { ase | bcp | csv | iq }

These options are:

for ase writes data in ASE's internal file format. Use the T-SQL command transfer table from to load this file. For example:

    transfer table table_name from /path/to/file for ase

for bcp writes data in bcp's binary data format. This format produces an associated format file, placed in the same directory as the output data, named {table_name},{dbid},{object_id}.fmt. Use bcp to load this data, and provide the path to the format file using bcp's -f flag.

for csv writes character-coded output. This is not a file in standard CSV format; rather, it provides user-definable column and row separators, and writes a file suitable for loading into ASE or Sybase IQ using bcp -c, or into Sybase IQ via load table format ascii. See "Options Controlling Character-Coded Output" below for more information about the column and row separators.

for iq writes data in Sybase IQ's binary data format. Use IQ's load table format binary command to load this file.
Specify options controlling command operation, and modifying the for clause, using the with clause:

    transfer table table_name to destination_file
        for format
        with { column_order = { id | offset | name | name_utf8 },
               column_separator = string,
               encryption = { true | false },
               fixed_length = { true | false },
               null_byte = { true | false },
               progress = nnn,
               resend = nnn,
               row_separator = string,
               sync = { true | false },
               tracking_id = nnn }
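As a concrete sketch of the syntax above, a character-coded transfer might look like this; the table name, file path, and option values are hypothetical:

```sql
-- Incremental character-coded transfer with tab-separated columns,
-- a progress message every 30 seconds, and a user-chosen tracking ID:
transfer table sales_detail
    to '/dumps/sales_detail.txt'
    for csv
    with column_separator = '\t',
         row_separator = '\n',
         progress = 30,
         tracking_id = 1001
```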

These options affect various aspects of the command, as described below.

Options Controlling the Overall Command

Options column_order, encryption, progress, resend, sync, and tracking_id affect the overall command.

column_order determines the order in which columns are written from table rows into the file. Different orders serve different purposes:

Order id: columns are written in ascending order by their defined column ID in syscolumns. This order is required when doing a transfer for bcp. It is the order in which bcp naturally writes columns, so it is also suitable for transfers for csv when you want to mimic what bcp -c would do.

Order offset: columns are written in the order in which they are physically stored in the data row. This order is required when doing a transfer for ase.

Orders name and name_utf8: columns are written in ascending order, sorted by their column name. The name option sorts the names in bytewise order, whereas name_utf8 converts the names to UTF-8 characters before sorting. These options are useful when the source and destination tables have identically named columns, but you are not sure that the columns appear in the same ID or offset order in both tables. To take advantage of this option, you must be able to specify the column order when importing to the destination table. This is primarily useful for Sybase IQ: its load table command allows you to specify column presentation order as needed.

encryption controls whether or not the command decrypts encrypted columns. When true, the command writes encrypted column data to the output file without decrypting it. With this option, the receiving column must use exactly the same encryption method and key as the source column, or the data will not be readable. However, the user executing the command does not need permission to access the encryption key(s) used to encrypt the data. When false, the command decrypts columns before writing them to the output file.
With this setting, the user must have permission to decrypt the data, and the data is written to the output file without encryption; but it will then be readable at its destination without needing to know the details of how it was encrypted. This is the default.

progress causes the command to produce a progress message every NNN seconds. This is useful when the user executing the command wants assurance that the command has not stalled. However, progress information is also available through the monitoring table montabletransfer, as described in "Tracking Transfers" below.

resend aids disaster recovery by instructing the command not to start with the beginning timestamp it would normally use, but with a timestamp taken from an indicated previous successful transfer. Output files can be damaged after a transfer is complete but before they are loaded. When this happens, you can locate the history entry in spt_tabletransfer that contains that output file, find its sequence ID, and cause this transfer to use that transfer's base timestamp instead of the one it would otherwise select. Please note that this does not send exactly what the previous transfer sent. Rather, it uses the base timestamp to establish the low bound for sending rows. Generally, that means the transfer will send all those rows, plus rows sent by any intervening transfers, plus rows that would be sent by the current transfer without the resend option.

sync determines whether this transfer synchronizes with current transactions. When true, transfer interacts with transactions as described in "Interaction with Transactions" below. This can slow down transaction processing, but assures that the transfer includes only transactionally consistent data. When false, transfer works independently of transactions. The transfer will include only committed data, but the output file is not guaranteed to contain every row affected by a particular transaction. This is the default.

tracking_id associates a user-supplied ID with the current transfer. It is stored in the history entry for this transfer. Users can then use that ID as a way of locating a particular transfer. Note that ASE does not control this ID, nor does it care whether many different transfers use the same ID. This is purely a user convenience. The default is to use no tracking ID.

Options Controlling Output for IQ

Two options are specific to transfers for iq:

fixed_length determines whether the transfer sends non-nullable varying-length character strings at their correct length, or pads them with blanks to the column's maximum possible width. When true, columns are padded. This produces a larger output file, but can be safer (see below). When false, columns are sent at their correct length. This option affects only non-nullable varying-length string columns. Transfer for iq always sends all other columns padded out to their full possible width, because when loading using format binary, the load table command only provides syntax to accept varying-length input for character columns. You should use this option if your table contains a mix of nullable columns and non-nullable varying-length string columns. Otherwise, we have seen that load table can misinterpret data in the transfer file, which will cause the load to be corrupt.
null_byte determines whether transfer appends a null byte to every column in the output file, or only to nullable columns. When true, every column has an appended null byte. This setting forces the transfer to use fixed_length=true regardless of whether you specify differently, because in Sybase IQ's binary format only fixed-length columns may include a null byte. When false, only nullable columns have the null byte. In Sybase IQ, nullable columns are followed by a single byte indicating whether the column is null: zero means not null, and non-zero means null. This option is useful when the source table has different nullable columns than the destination table. The load table syntax allows you to specify with null byte to indicate that columns in the input data have the null byte regardless of whether the column they are loaded into is nullable.
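A sketch of the safer, fully padded form of an IQ transfer, with the matching IQ load shown as comments (the table, columns, and path are hypothetical; see IQ's load table documentation for the exact load syntax):

```sql
-- On ASE: pad all columns and append a null byte to every column.
-- null_byte = true forces fixed_length = true, as described above.
transfer table sales_detail
    to '/dumps/sales_detail.iq'
    for iq
    with fixed_length = true,
         null_byte = true

-- On Sybase IQ (different dialect, shown as comments):
--   load table sales_detail ( id, qty, note )
--       from '/dumps/sales_detail.iq'
--       format binary
--       with null byte
```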

Options Controlling Character-Coded Output

Two options are specific to transfers for csv:

column_separator is a string written between output columns. The default for the first transfer for csv is a tab character. The default for subsequent transfers is the string used in the most recent successful transfer.

row_separator is a string written at the end of each output row. The default for the first transfer for csv is a line feed (ctrl-J) on Unix or Linux, or a carriage return (ctrl-M) and line feed pair on Windows. The default for subsequent transfers is the string used in the most recent successful transfer.

These strings may each be as long as 64 bytes. You can include some special characters in the strings, as follows:

\b inserts a backspace (ctrl-H)
\t inserts a tab (ctrl-I)
\n inserts a line feed (ctrl-J)
\r inserts a carriage return (ctrl-M)
\\ inserts a backslash (\)

Occurrences of the backslash character that are not part of one of the sequences above are not special; they are inserted literally, without interpretation.

Interaction with Other Ongoing Work

Interaction with Transactions

By default, DTU transfers committed data, which is not the same as transactionally consistent data. For transfer, data is committed if the transaction that modified it committed before the transfer began. Transactionally consistent transfers, on the other hand, require that if one row changed by a given transaction is sent, then all rows changed by that transaction are sent in the same transfer. This section explains how command options cause DTU to send either committed or transactionally consistent data.

The utility will not send data it considers uncommitted, because it cannot know whether a transaction will roll back, which would mean that some changes should not have been sent. However, it cannot afford to spend the time needed to check every row to see whether its transaction committed.
Thus, it considers data committed only for transactions that committed before the transfer starts. If a transaction commits while a transfer is in progress, and before the transfer reads rows affected by that transaction, the transfer still considers those rows uncommitted.

Transmitting committed data is the default action because we assume that this transfer will be followed by another transfer. However, this can cause DTU not to send all the rows changed by a given transaction: an uncommitted transaction can change rows while a transfer is in progress and before the transfer inspects them, causing those rows not to be sent. If those rows were originally part of some other update, the transfer can send some of the rows from the previous update but not others. Such a transfer is thus transactionally inconsistent.

If the possibility of that inconsistency is not acceptable, DTU provides an option to prevent it. The command transfer table with sync=true causes DTU to synchronize the transfer with currently ongoing transactions. Using this option:

A transfer may not begin against a table until all currently open transactions that modify that table have ended. The transfer waits until the table has no transactions open against it.

While a transfer is waiting to begin, no transaction may change the affected table unless that transaction has already changed that table.

After the transfer begins, transactions may change the table, but they may only change pages that the transfer has already inspected.

For very large, very active tables, this can cause significant delays in normal transaction processing. Transactions may attempt to change data on pages that the transfer will not inspect for some time yet. Transfer reads pages in a predetermined order, and no mechanism exists to make it inspect a page early because a transaction wants to modify that page.

Interaction with Other Transfers

Only one transfer at a time may be active against a given table. If you attempt to start a transfer while another transfer is in progress, the second transfer sleeps until the first completes. This prevents confusion about whether a row has been sent during a transfer. Transfers of separate tables may happen simultaneously. The number of simultaneous transfers is limited only by the number of files your operating system permits ASE to open simultaneously; each ongoing transfer uses one file. Please note, though, that connections to database devices also count as open files, so if your site uses many database devices, that could limit the number of simultaneous transfers you are able to run.

Tracking Transfers

Statistical and historical information for each transfer is stored in table spt_tabletransfer. For each eligible table, this table keeps up to a configured number of successful and unsuccessful transfer history records. Records are written to spt_tabletransfer after the transfer completes, because they include the transfer's completion status, which is not known until it finishes. Note too that this table only stores history entries for eligible tables; other tables that you may transfer do not keep history entries, since DTU will not transfer them incrementally. However, while a transfer is in progress, statistical information about that transfer is kept in memory, including the amount of data transferred so far and estimates of how much more data the transfer expects to send.
That information is available in real time through monitoring table montabletransfer. This includes all transfers, whether or not the table being transferred is eligible for incremental transfer. This table also contains historical information about transfers of tables for which ASE currently stores information in memory. Thus:

spt_tabletransfer stores information for completed transfers of eligible tables. This table exists in every database that contains an eligible table. Its history records are limited by configuration option max transfer history: it stores up to the configured number of successful and of unsuccessful transfers of each eligible table.

montabletransfer has information about transfers of tables for which ASE currently holds information in memory. This includes all eligible tables that have been transferred at least once since ASE was started, unless those tables' memory has been scavenged. It also includes non-eligible tables while they are actively being transferred. For the eligible tables that montabletransfer can report, it can also extract and report historical information from spt_tabletransfer. But again, this is only true while the table's information is available in memory. If a table has not been transferred since ASE was most recently restarted, or if ASE has had to reclaim that table's in-memory information, montabletransfer will not report it.
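As a sketch, both sources can be queried directly. The master.. prefix on the monitoring table is an assumption (ASE monitoring tables conventionally live in master); sequence_id and pathname are spt_tabletransfer columns used in the resend examples later in this paper:

```sql
-- In-progress and in-memory transfer information, all tables:
select * from master..montabletransfer

-- Completed-transfer history for eligible tables in the current database:
select sequence_id, pathname
from spt_tabletransfer
```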

File Handling for Transfer

The transfer table command writes data to, or reads data from, a file on the same system where ASE is running. In a multi-engine server, the file must be on a file system visible to the engine that services the command. The transfer opens the file when the transfer begins and closes it when the transfer ends. Under certain circumstances, DTU deletes the file as it closes it. This occurs when:

The transfer fails for any reason.

The transfer opens the file, but finishes without writing any data to it.

There are two scenarios in which DTU might not write any data for a particular transfer. The first is that DTU can tell in advance that there is no data available to send: its tracking information shows that no data in the table has been modified since the last successful transfer. The second is that data has been modified, but all the changes are uncommitted and thus cannot be sent by this transfer. In the first case, DTU does not try to remove the file (because it never opened it); in the second case, it does.

There is an exception to this rule. If the file is a special file such as a FIFO (first-in, first-out, also called a "named pipe"), ASE will not try to remove it. Please also note two additional points about transfers using a FIFO:

A FIFO is not acceptable for transfers done for bcp, because those transfers require a second file, the format file, and ASE does not know where to write that file.

When transferring to or from a FIFO, it is the customer's responsibility to see that there is something on the other end of the pipe to read or write data. ASE simply opens the FIFO and begins writing or reading. With no coordinating program, the transfer will seem to hang, and can even encounter timeslice errors if it waits too long.
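For example, transferring through a named pipe might look like this; the path is hypothetical, and a consumer process must already have the FIFO open for reading:

```sql
-- The FIFO must exist and have a reader attached before this runs,
-- or the transfer will appear to hang:
transfer table sales_detail
    to '/tmp/sales_detail.fifo'
    for csv
```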
Resending a Transfer

If something happens after a transfer out to a file is complete, but before the file is loaded into its intended receiver, you can instruct DTU to resend that data so as not to lose any updates you may have made. You do this by directing DTU to take its floor timestamp from a history entry in spt_tabletransfer. Please note: it is not possible to resend exactly the same data sent by a prior transfer. Subsequent updates may have changed the data, so those rows may no longer exist. What you are actually doing when you resend a transfer is starting a transfer that behaves as though that transfer and any subsequent transfers never happened (however, DTU uses the selected history entry to provide command defaults). Do this as follows:

    transfer table table_name to destination_file with resend=NNN

The value NNN is a sequence ID as stored in spt_tabletransfer. There are two ways to specify this sequence number: as a positive, non-zero integer, or as a negative integer.

Positive integers are sequence IDs from spt_tabletransfer.sequence_id. You can obtain that sequence ID by selecting from spt_tabletransfer, for example:

    select sequence_id from spt_tabletransfer where pathname like "%file_name%"

If you provide a sequence_id that does not exist, DTU assumes you want to resend all data in the table. It treats the transfer as though it were the first transfer ever done for this table.

The other way to designate sequence_id is as a negative integer. Here, you are directing DTU to locate a previous successful transfer by its relative position in spt_tabletransfer: -1 is the most recent successful transfer, -2 the next most recent, and so on.
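Both addressing styles might look like this; the table name, file paths, and sequence ID are hypothetical:

```sql
-- Resend by absolute sequence ID, looked up from the history table:
select sequence_id
from spt_tabletransfer
where pathname like '%sales_detail_0412%'

transfer table sales_detail
    to '/dumps/sales_detail_resend.ase'
    with resend = 42

-- Or resend relative to the most recent successful transfer:
transfer table sales_detail
    to '/dumps/sales_detail_resend.ase'
    with resend = -1
```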

As with positive IDs, if you provide a negative sequence ID for which ASE has no history of a transfer, DTU assumes you want to send all rows in the table. If your history holds only 5 successful transfers and you execute transfer table with resend = -6, DTU treats the transfer as the first-ever transfer of the table. Transfer Speed Tests We ran internal tests of transfer table to to measure transfer speed. Tests were performed using a single in-memory database in ASE on a Linux (AMD 64-bit) machine. We measured throughput on five tables simultaneously, using bcp in to add data to the tables while transfer table was extracting data from them. In these tests: bcp in inserted data at Gb per hour; transfer table extracted data at Gb per hour; total I/O throughput was Gb per hour. We also ran tests comparing data extraction speed for transfer table ... for bcp against single-threaded bcp out. In these tests: bcp out averaged about 17,000 rows per second; transfer table averaged about 75,000 rows per second. DTU, bcp, and Replication Server The transfer table command, the bulk copy utility, and Replication Server all differ from one another in important ways. They can all be used together; you should determine which one is most appropriate for a given purpose. Like bcp, transfer table scans the table for data and writes its output to a file. Unlike bcp, transfer table requires that the output file be one that ASE can open directly; it does not currently write output to the network. Also, where bcp can address a single partition of a table, transfer table inspects the entire table. Currently, neither transfer table nor bcp can capture deleted rows. (This capability will be added to transfer table ... for ase in a subsequent release.) When transfer table sends a row, it sends only the image of that row as it exists when it scans the table. It does not capture individual changes that occur to that row over time.
Also, transfer table depends on the user to issue commands to send or fetch the data; database changes are not captured in real time. Replication Server can do all these things, and has the notion of subscribing to the data, so it can propagate changes automatically to a collection of subscribers. Tables transferred via transfer table ... for ase can be loaded into databases with a different byte order, but the command cannot change the data's character set.
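The byte-order point above can be illustrated generically: binary integer data written on one host can be decoded on a host of the opposite byte order, provided the reader knows which order the writer used. This is a sketch of the underlying principle only, not ASE's internal file format:

```python
import struct

# A 4-byte integer as a little-endian writer and a big-endian writer
# would each emit it. The on-disk bytes differ, but a reader that honors
# the declared byte order recovers the same value either way.
value = 305419896  # 0x12345678

little = struct.pack("<i", value)   # little-endian encoding
big = struct.pack(">i", value)      # big-endian encoding

assert little == big[::-1]          # same bytes, reversed order

# Decoding with the writer's declared byte order restores the value.
print(struct.unpack("<i", little)[0])
print(struct.unpack(">i", big)[0])
```

Character-set conversion is a different problem: it changes the bytes themselves, not just their order, which is why transfer table handles byte order but not character sets.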

License Requirements The DTU feature comes bundled with the ASE IMDB (In-Memory Databases) license. Please refer to the Sybase ASE documentation for further details on licensing. Note: customers with IMDB licenses who want to transfer data in ASE's internal format are subject to the restriction that either the source table or the destination table (or both) must reside in an IMDB or RDDB (Relaxed Durability Database). Security Restrictions Permission to use transfer table to transfer a given table defaults to the owner of that table and to System Administrators (that is, users having the "sa_role" role). Table owners may grant permission to transfer their tables to other users. The transfer table command does not encrypt data before writing it to the file. Where tables contain encrypted columns, a command option controls whether DTU decrypts columns before transfer or writes the data in its encrypted form. Where DTU decrypts data, the user performing the transfer must have permission to access all necessary decryption keys, and the data is written to the file unencrypted. Where DTU does not decrypt data, the receiving application must know the precise details of how the data was encrypted; otherwise the data will not be readable after loading. Sybase, Inc. Worldwide Headquarters One Sybase Drive Dublin, CA U.S.A. www.sybase.com Copyright 2010 Sybase, Inc. All rights reserved. Unpublished rights reserved under U.S. copyright laws. Sybase, the Sybase logo and Adaptive Server are trademarks of Sybase, Inc. or its subsidiaries. All other trademarks are the property of their respective owners. ® indicates registration in the United States. Specifications are subject to change without notice. 04/10


OPEN APPLICATION INTERFACE (OAI) INSTALLATION GUIDE NEC CODE MASTER AN OPEN APPLICATION INTERFACE (OAI) INSTALLATION GUIDE NEC America, Inc. NDA-30013-006 Revision 6.0 April, 1999 Stock # 241713 LIABILITY DISCLAIMER NEC America reserves the right to change

More information

webmethods Certificate Toolkit

webmethods Certificate Toolkit Title Page webmethods Certificate Toolkit User s Guide Version 7.1.1 January 2008 webmethods Copyright & Document ID This document applies to webmethods Certificate Toolkit Version 7.1.1 and to all subsequent

More information

Disk-to-Disk-to-Offsite Backups for SMBs with Retrospect

Disk-to-Disk-to-Offsite Backups for SMBs with Retrospect Disk-to-Disk-to-Offsite Backups for SMBs with Retrospect Abstract Retrospect backup and recovery software provides a quick, reliable, easy-to-manage disk-to-disk-to-offsite backup solution for SMBs. Use

More information

Oracle Enterprise Manager

Oracle Enterprise Manager Oracle Enterprise Manager System Monitoring Plug-in for Oracle TimesTen In-Memory Database Installation Guide Release 11.2.1 E13081-02 June 2009 This document was first written and published in November

More information

Cloud Backup Express

Cloud Backup Express Cloud Backup Express Table of Contents Installation and Configuration Workflow for RFCBx... 3 Cloud Management Console Installation Guide for Windows... 4 1: Run the Installer... 4 2: Choose Your Language...

More information

SAP Note 1642148 - FAQ: SAP HANA Database Backup & Recovery

SAP Note 1642148 - FAQ: SAP HANA Database Backup & Recovery Note Language: English Version: 1 Validity: Valid Since 14.10.2011 Summary Symptom To ensure optimal performance, SAP HANA database holds the bulk of its data in memory. However, it still uses persistent

More information

Gentran Integration Suite. Archiving and Purging. Version 4.2

Gentran Integration Suite. Archiving and Purging. Version 4.2 Gentran Integration Suite Archiving and Purging Version 4.2 Copyright 2007 Sterling Commerce, Inc. All rights reserved. Additional copyright information is located on the Gentran Integration Suite Documentation

More information

Symantec NetBackup Vault Operator's Guide

Symantec NetBackup Vault Operator's Guide Symantec NetBackup Vault Operator's Guide UNIX, Windows, and Linux Release 7.5 Symantec NetBackup Vault Operator's Guide The software described in this book is furnished under a license agreement and may

More information

13 Managing Devices. Your computer is an assembly of many components from different manufacturers. LESSON OBJECTIVES

13 Managing Devices. Your computer is an assembly of many components from different manufacturers. LESSON OBJECTIVES LESSON 13 Managing Devices OBJECTIVES After completing this lesson, you will be able to: 1. Open System Properties. 2. Use Device Manager. 3. Understand hardware profiles. 4. Set performance options. Estimated

More information

Discovery Guide. Secret Server. Table of Contents

Discovery Guide. Secret Server. Table of Contents Secret Server Discovery Guide Table of Contents Introduction... 3 How Discovery Works... 3 Active Directory / Local Windows Accounts... 3 Unix accounts... 3 VMware ESX accounts... 3 Why use Discovery?...

More information

ORACLE GOLDENGATE BIG DATA ADAPTER FOR HIVE

ORACLE GOLDENGATE BIG DATA ADAPTER FOR HIVE ORACLE GOLDENGATE BIG DATA ADAPTER FOR HIVE Version 1.0 Oracle Corporation i Table of Contents TABLE OF CONTENTS... 2 1. INTRODUCTION... 3 1.1. FUNCTIONALITY... 3 1.2. SUPPORTED OPERATIONS... 4 1.3. UNSUPPORTED

More information

Isilon OneFS. Version 7.2.1. OneFS Migration Tools Guide

Isilon OneFS. Version 7.2.1. OneFS Migration Tools Guide Isilon OneFS Version 7.2.1 OneFS Migration Tools Guide Copyright 2015 EMC Corporation. All rights reserved. Published in USA. Published July, 2015 EMC believes the information in this publication is accurate

More information