Real-time High Volume Data Replication White Paper

Version 2.4

Table of Contents

1 HVR Overview
  1.1 HVR Usage Scenarios
  1.2 Product Overview
2 Technology
  2.1 Architecture and Key Capabilities
  2.2 Continuous Database Replication
  2.3 Database Compare and Refresh/Repair
  2.4 File Replication
  2.5 HVR Management and Operations
  2.6 Platform Support
3 About HVR High Volume Replication

1 HVR Overview

Companies and organizations rely ever more on intensive flows of information to carry out and sustain their business processes. That information must be available at the right time and in the right place. As a result, IT systems must be able to distribute large amounts of data to a multitude of computing platforms in geographically dispersed locations. And in today's ultra-connected world, this must be achieved in near real-time. This is where HVR comes in.

HVR is a software package that can be deployed to configure and perform data replication and synchronization between various kinds of databases and other types of data repositories within distributed computing environments. Using HVR, enterprises can easily manage very large and sophisticated data integration scenarios from a central point of control. HVR is designed to scale and to use network and computing resources as efficiently as possible, while aiming for minimum latency in data transfers.

Figure 1. HVR Real-time Data Integration

All of HVR's functionality is provided by a single product and can be used interchangeably without extra configuration. HVR can coexist with other integration solutions: it can operate in conjunction with an Enterprise Service Bus (ESB) for integration closer to the application layer, or with an ETL (Extract, Transform, Load) tool when extensive transformations are required in addition to real-time integration. HVR's open architecture simplifies cooperation with other integration technologies.

1.1 HVR Usage Scenarios

HVR's unique set of capabilities makes it a very versatile integration tool that can be applied as a solution in a wide variety of usage scenarios. The first four use cases center around trends in the data integration space.

Real-time analytics, Business Intelligence and reporting

More and more companies are using their own data to obtain competitive advantage through sophisticated analysis and reporting. In many cases data is moved from transactional systems, often more than one, into a consolidated system optimized for analytics. Today's competitive environment increases the need for up-to-date information in support of much more operational and personalized analytics. This kind of operational analytics requires real-time integration and consolidation of all relevant data sources. HVR fulfills these requirements through its real-time data integration capabilities and its support for a variety of analytical database technologies.

Figure 2. Real-time Data Integration trends

Big data

Data volumes are ever increasing as organizations want to store and mine all available data generated anywhere. Besides transactional data, organizations also want to store behavioral data such as click paths and social media interactions. Add data emitted by devices such as sensors and mobile phones, and even looking at volume alone one could say this is big data. Organizations will somehow need to bring this data together to make sense of it, often using technologies such as Hadoop and NoSQL databases that were not traditionally part of the data center. HVR's heterogeneous nature and efficient operation make it ideally suited to support big data use cases, with its scalable infrastructure and support for structured and unstructured data.

Hybrid Cloud Computing

To many organizations, cloud computing services offer advantages in terms of cost flexibility, scalability, availability and speed of deployment. As a result many organizations are migrating parts, if not all, of their IT environment to the cloud. With this, at least temporarily, the IT landscape turns into a combination of cloud-hosted and internally hosted applications. IT departments are now faced with the challenge of exchanging, replicating and synchronizing data between these applications. HVR's rich capabilities and its focus on performance, efficiency and security make it particularly suitable for cloud environments. HVR is specifically adapted to the most popular cloud environments.

Data Integration: from separate solutions to a single solution

Enterprise integration covers a multitude of the use cases mentioned before. HVR provides a very versatile technology that can be used for many use cases, and many organizations have selected HVR to consolidate a variety of different technologies into a single real-time integration environment that is easy to use and maintain.

Beyond these trends there are still a number of traditional data integration use cases.

Geographically Distributed Computing

Some organizations have to spread their computing systems over a number of locations to minimize latency and offer a great user experience across the globe. Globally operating companies often run IT services from regional data centers to ensure local availability and performance. Organizations that operate on a regional level may find it useful to retain data and computing facilities in local branch offices, especially if the local availability of business applications is business-critical and network capacity is relatively expensive or unreliable. HVR can be deployed to selectively distribute large databases and files between geographically dispersed sites, reliably, in real-time and with minimum use of network resources.

Figure 3. Geographically Distributed Computing

High Availability

Business-critical operations often warrant multiple copies of the data and the databases to provide failover and disaster recovery solutions, and to minimize the impact of planned or unplanned downtime. Multiple copies of the data also allow for load-balancing across concurrent systems to avoid degradation of performance. HVR can be used to create multiple database and file copies. Because of HVR's fast and efficient replication and transport mechanisms, there will be minimal if any data loss in case of a failover.

Figure 4. High availability between databases

Migrations

In every large IT environment migrations are a fact of life. They may range from operating system and software upgrades, to server installations or upgrades, to large-scale projects implementing new platforms or business applications. Such migrations often introduce downtime and may introduce integration challenges when legacy systems have to integrate with new applications, either temporarily or on an ongoing basis. The combined benefits that HVR provides for data integration and high availability scenarios make it a very suitable solution for migrations.

Figure 5. Cross-platform migrations

1.2 Product Overview

HVR offers a number of powerful features that can be grouped into three major functionalities. These are provided within a common software framework and can be managed from a single integrated management console.

Figure 6. HVR framework

With these functionalities, HVR supports the complete data integration lifecycle, from table creation and initial data load, through continuous integration and compare and repair, to decommissioning.

Continuous Database Replication

HVR supports continuous replication using log-based capture between databases within large distributed computing environments. Changes applied to the source database are detected by HVR in real-time and transmitted over the network to be applied to one or more target databases. Changes include both data (DML replication) and definitions (DDL replication).

Replication schemes can be quite complex. Implementations vary from a simple one-to-one replication on identical systems, to multi-way active/active environments, to multi-way distribution into a variety of different database technologies. All replication scenarios are configured, initiated and monitored from a single point of control within the enterprise, through a highly intuitive Graphical User Interface. Replication may be applied to entire databases, but can also be done selectively on specific tables within a database or even rows within a table.

Data replication may be configured between instances of different DBMS products or versions, for example between OLTP databases such as Oracle and MS SQL Server, or into analytical databases such as Teradata, Pivotal Greenplum or Actian Vector. Uniquely, HVR is able to replicate DDL changes in those scenarios too.

HVR's algorithms are optimized to make the replication processes as fast and efficient as possible. For example, transactions are captured efficiently using log-based change data capture, data is highly compressed when sent over the network, and whenever possible the software uses native DBMS interfaces to connect or load data.

Database Compare and Refresh/Repair

HVR Refresh is used during the initial load process; optionally, absent tables can be created with keys as part of this process. HVR can also be used to compare the contents of different databases. If there are differences, Refresh/Repair can be used to bring the databases back in sync. This may be required before enabling real-time replication, or after recovering a database that takes part in a replication following a crash. HVR supports the complete replication lifecycle. As with continuous real-time replication, Database Compare and Refresh can be applied to heterogeneous DBMS instances and operates quickly and efficiently. Any transformations defined in HVR are taken into account when performing the comparison.

File replication

HVR offers the functionality to schedule, execute and monitor complex chains of file transfers. Files can be exchanged between multiple platforms, including Unix/Linux, Windows, Microsoft SharePoint and FTP/SFTP locations. Extensive and flexible rules can optionally be applied to select, route and rename files. Managed File Transfer can be used simply to manage file streams, but also in conjunction with data integration (e.g. picking up data from a database and delivering it onto a big data platform such as Hadoop).

Figure 7. File replication

Though functionally distinct, all capabilities are delivered within the same integrated software package using a common architecture. As a result all use cases take advantage of the same architecture in terms of scalability, performance, efficiency, and single control through an intuitive graphical user interface (offering a low TCO and quick time to market). HVR excels in complex real-time data integration scenarios in heterogeneous environments.

2 Technology

2.1 Architecture and Key Capabilities

Internally HVR uses a common software framework built on shared code components. All functions (refresh, continuous replication, compare and repair, as well as managed file transfer) use this framework. HVR tasks operate on database and file stores that are generically referred to as locations.

One server running the HVR software must be assigned a central role and is called the HVR hub. The hub serves as the central point from which all processes are run and tasks are scheduled, monitored and logged. The hub typically interacts with other installations of HVR called agents. The hub also interfaces with the HVR Graphical User Interface (GUI) to configure tasks, generate code for execution, and start/stop jobs. Metadata for the hub is stored in relational tables in a database (the hub database).

Locations interact with the hub either through a locally installed HVR agent or through protocols such as ODBC or FTP. The HVR agents at source locations take care of data capture from their local database or file store and send the captured data to the hub, which distributes it to the target locations (often to agents at the target locations). Agents at the target locations integrate the data they receive into their local database or file store. Data exchange between the hub and agents benefits from the optimized HVR protocols and algorithms to provide optimum performance, efficiency and stability.

The HVR hub also serves as the central queuing node for the data exchanges between source and target locations if, for whatever reason, a target system cannot keep up with the data volumes that must be consumed, for example during a network outage. Since the hub only requires limited resources it is often deployed on one of the source or target machines and does not require dedicated hardware.

Figure 8. HVR architecture
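The hub-and-agent flow described above can be pictured with a minimal sketch. The following Python fragment is purely illustrative: the class and method names (Hub, receive, drain) are invented for this example and are not HVR APIs. It only shows the central queuing idea, where the hub keeps one queue per target so that a slow or unreachable target does not block delivery to the others.

```python
from collections import deque

class Hub:
    """Illustrative central hub: one change queue per target location."""
    def __init__(self, targets):
        self.queues = {t: deque() for t in targets}

    def receive(self, change):
        # A change captured at a source location is queued once per target.
        for q in self.queues.values():
            q.append(change)

    def drain(self, target, integrate):
        # Deliver queued changes to one target; other targets are unaffected
        # if this one is slow or temporarily unreachable.
        q = self.queues[target]
        while q:
            integrate(q.popleft())

# Usage sketch: one source change fanned out to two targets.
hub = Hub(targets=["reporting_db", "archive_db"])
hub.receive({"table": "orders", "op": "insert", "row": {"id": 1}})
hub.drain("reporting_db", integrate=lambda c: print("apply to reporting:", c))
# "archive_db" still holds the change until it can be drained.
```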

HVR operates well in large distributed and heterogeneous environments thanks to its specific design criteria:

High performance and efficiency
The HVR agents interface with the local databases or file systems through native interfaces, avoiding any overhead from compatibility layers such as ODBC or middleware. The protocols and algorithms that the HVR agents use to communicate over the network are optimized for high performance and low bandwidth usage. This is achieved through proprietary compression and smart data packaging techniques.

Scalability
Most of the HVR processing is done by the local HVR agents that capture or integrate data. Almost every additional location adds an additional HVR agent, so a single hub can scale to thousands of locations.

Availability and continuity
If multiple locations are selected as the source or target of an HVR task, and one of the locations becomes unavailable in the process, HVR will still complete the task for the remaining locations. This is achieved by creating independent jobs handling the communication with each of the locations, especially on the hub. To prevent the hub from becoming a single point of failure it should be installed on a highly available server or inside a cluster with failover set up between the nodes. Beyond that, it is possible to include a standby hub in the setup to which HVR operation can be switched when the primary hub fails. In that case, all HVR tasks can continue without disruption or loss of data.

Robustness and error recovery
HVR has a range of options and mechanisms to detect and resolve errors that may occur, such as database collisions due to bi-directional replication, database errors, application errors and operator mistakes.

2.2 Continuous Database Replication

HVR can be used to selectively deliver captured changes from one or more designated source databases to one or more target databases. The databases may be of different types and versions, or be located on distributed sites. HVR is optimized for performance, efficient use of network and system resources, and large data volumes.

HVR's continuous database replication works through HVR agents that reside on the database servers involved in the replication process and that connect directly to their local databases. The HVR agent on a source database server continuously scans the source database transaction log for changes, extracts the relevant changes and sends these over the network to the HVR agents on the target database servers. The receiving agents apply the changes to the local database as SQL transactions through the native interface of the database server.

The consistency and integrity of the replicated data is preserved based on the following principles:
- Changes are acknowledged only after they have arrived and have been applied to the target database.
- Changes are not replicated until they are committed.
- Changes are replayed on the target database in the same order that they occurred on the source database (except when using burst mode; see "Burst mode" below).
- By default, transaction boundaries are maintained.

This section describes the primary technical features and components of continuous database replication and explains how these contribute to HVR's unique capabilities.

Change Data Capture
Change Data Capture (CDC) is the mechanism through which changes to a database are detected and extracted. HVR uses CDC to retrieve changes to the source database. Almost all implementations use log-based capture.
Log-based capture reads the transaction log files in which the database server records all transactions that are applied to its databases (e.g. the redo and archive logs in an Oracle Database server). In typical operation HVR reads the online log files at the tail end of the log, where active transactions are written.

However, if HVR falls behind for whatever reason, it will resort to reading from the archived log files until it catches up again with the current point in the online log files. Log-based capture is non-intrusive to the database server, as it only needs read permission on the database log files, and it generally reads the log directly out of the file system cache. While reading the DBMS log files, HVR also captures data model changes (DDL statements). When a DDL statement is captured, HVR analyzes the statement and its consequences and chooses a strategy to apply the same change on the target.

HVR also supports trigger-based capture, which is rarely used. Unlike log-based capture, trigger-based capture does interfere with the database and therefore adds overhead and latency. It does, however, allow for specific customizations and narrow selections. Trigger-based capture does not capture DDL statements.

Network Transport
Captured changes from the source databases must be sent through the hub to the target databases, which may be situated in sites connected via slow networks. To perform this transfer efficiently HVR features a built-in transport mechanism allowing very efficient use of the available system and network resources. The transport of change data takes place over a direct socket connection between the HVR agents on the database servers involved in the replication process. This symmetrical streaming approach, with collaborating HVR agents on both ends of each transport connection, enables the following specific features and benefits:

Data compression
HVR implements a proprietary compression algorithm that exploits specific information about the transported data, such as the column data types and table widths involved. The resulting compression ratios can be as high as 95+%, making very efficient use of the available bandwidth.

Data packaging
Prior to transmitting the data, HVR combines whole queues of changes into only a few network packets to minimize the overhead of network protocols and performance reductions due to network round-trip latencies.

Data encryption
Sensitive data can be protected using Secure Socket Layer (SSL) encryption.

Bandwidth management
HVR can be configured to use only a defined fraction of the maximum bandwidth on a network connection, ensuring other types of network traffic can go through concurrently.

Coalescing of changes
Multiple changes applied to the same row within a transaction can be coalesced by HVR before transmission. Coalescing means that multiple changes to the same data are replaced by a single change that has the same end result when it is applied to the target database. For instance, an insert and two updates to the same row in a single transaction can be merged into a single insert. On the destination, by default, transactions are applied in groups of 100 source transactions across which HVR can perform another coalesce operation. Applying coalesced changes reduces the number of changes to be transported and applied, which further boosts replication speed.

Minimization of replication hops
Every change that is written to the file system causes additional I/O overhead and may introduce additional latency. HVR queues data changes to disk only once, in order to keep the process running if one of the target databases is offline. Change data written to disk is highly compressed.
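As a minimal illustration of the capture principles described in this section (only committed transactions are replicated, and they are replayed in commit order), the sketch below groups log records per transaction and emits them on commit. The flat dictionary-based log format is a hypothetical simplification; real transaction logs such as Oracle redo logs are binary and far more complex, and this is not HVR's implementation.

```python
# Hypothetical, simplified transaction-log records for illustration only.
log_records = [
    {"xid": "T1", "op": "insert", "table": "orders", "row": {"id": 1}},
    {"xid": "T2", "op": "update", "table": "items",  "row": {"id": 7}},
    {"xid": "T1", "op": "commit"},
    {"xid": "T2", "op": "rollback"},   # rolled back, so never replicated
]

def capture_committed(records):
    """Group changes per transaction and emit them only once the transaction
    commits, preserving the order in which transactions committed."""
    pending, committed = {}, []
    for rec in records:
        if rec["op"] == "commit":
            committed.extend(pending.pop(rec["xid"], []))
        elif rec["op"] == "rollback":
            pending.pop(rec["xid"], None)
        else:
            pending.setdefault(rec["xid"], []).append(rec)
    return committed

for change in capture_committed(log_records):
    print(change)   # only T1's insert is replicated
```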

Integration optimizations
Several options exist to optimize HVR's performance at the target location. The default mode of HVR is trickle integrate, in which every transaction is immediately propagated but transactions are grouped into larger transactions for optimum performance (as long as there is enough transaction volume). On a busy system end-to-end latency is often seconds at most, and less than a second in many cases. Trickle integrate works well on an OLTP database that is optimized for fast inserts, updates and deletes.

Figure 9. HVR optimizations

Alternatively, HVR can run in batch mode, where transactions are not applied individually but are collected and applied periodically. Batch mode works well for analytic databases, which typically perform well for large bulk operations but poorly when applying single-row changes or deletes.

Coalescing changes
HVR can coalesce multiple operations on the same row into a single change. For example, instead of performing an insert followed by 5 updates, HVR would perform a single insert of the final row image. Likewise, an insert followed by a delete results in a no-op (as illustrated in the sketch below).

Burst mode
For optimal performance when applying captured changes at the target location, HVR provides a burst mode. In this mode, HVR automatically coalesces changes and uses SQL set operations to integrate the results. This feature is essential if large volumes of updates or deletes have to be loaded continuously into a target database that is not optimized for single-row operations (e.g. an analytic database that does not support regular indexes).

Parallel integrate to leverage massively parallel (scale-out) clustered targets
HVR can be configured to launch parallel integration tasks to speed up performance. On top of that, using the sharded key feature of specific (scale-out) databases, HVR can deliver to each parallel job exactly the set of data to apply to its cluster node. This means nearly 100% scalability in the number of cluster nodes and HVR jobs can be achieved.

Fast load from file location
Specific MPP databases have utilities to load data directly from a file storage location. Examples are Redshift's COPY from an S3 location, Greenplum's gpfdist server and Teradata's FastLoad option. This is generally the fastest way of loading data into these databases.
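The coalescing behavior described above can be sketched in a few lines. The change-tuple layout used here is an assumption made for the example, not HVR's internal representation; the sketch only demonstrates the net effect: an insert plus several updates collapses to one insert of the final row image, and an insert followed by a delete nets out to nothing.

```python
def coalesce(changes):
    """Collapse multiple operations on the same key into one net change."""
    net = {}   # key -> (op, row)
    for op, key, row in changes:
        prev = net.get(key)
        if prev is None:
            net[key] = (op, dict(row))
        elif op == "update":
            prev_op, prev_row = prev
            net[key] = (prev_op, {**prev_row, **row})   # keep final row image
        elif op == "insert":
            net[key] = ("update", dict(row))            # delete + re-insert nets to update
        elif op == "delete":
            if prev[0] == "insert":
                del net[key]                            # insert + delete: no-op
            else:
                net[key] = ("delete", dict(row))
    return [(op, key, row) for key, (op, row) in net.items()]

changes = [
    ("insert", 1, {"id": 1, "qty": 1}),
    ("update", 1, {"qty": 2}),
    ("update", 1, {"qty": 5}),
    ("insert", 2, {"id": 2, "qty": 9}),
    ("delete", 2, {}),
]
print(coalesce(changes))   # -> [('insert', 1, {'id': 1, 'qty': 5})]
```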

HVR can load data using these mechanisms, both during refresh and during integrate. Using HVR means the mechanism is integrated completely into the replication stream, including all of HVR's enterprise features for manageability, performance, security and robustness.

Replication Topologies

Figure 10. HVR topologies

HVR can be used for uni-directional, bi-directional and multi-directional replication between databases. In uni-directional mode, application transactions are applied to a source database and the resulting changes are replicated to a target database. Special cases of uni-directional replication with multiple locations include 1-to-many (broadcast) and many-to-1 (consolidation). In bi-directional or multi-directional mode, application transactions run against two or more databases and the resulting changes travel in all directions between the databases. Every database acts both as a source and as a target in the replication process (multi-way active/active). Replication streams can also be concatenated to form cascaded replication.

HVR can be used in complex meshed topologies with a large number of databases, each of which may have multiple incoming and outgoing replication streams. In all these topologies HVR routes the changes through the server that has been designated as the HVR hub. Changes originating from a source database are sent to the HVR hub, where they are stored in a queue file; from there the changes are sent to the target databases. Especially in complex topologies with many data flows, this central queuing mechanism greatly simplifies the configuration and management of the replication environment, improving robustness and scalability. It is very easy to add or remove a database without disrupting the rest of the replication scheme.

Collision Detection and Resolution

Collisions can occur in bi-directional or multi-directional replication scenarios when multiple users perform conflicting changes on different databases, for example when two users change the same row at the same time on different databases. Undetected collisions can lead to inconsistencies within and between the replicated databases. HVR has an efficient collision detection and resolution mechanism that uses the timestamps of the changes involved. It can be configured to operate selectively on the databases and tables where collisions may occur, such as in a bi-directional or multi-directional replication process. If the replicated table contains timestamp columns, these can be used without the need to maintain a separate timestamp table, reducing overhead.
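A timestamp-based resolution rule of the kind described above can be sketched as follows. The row layout and the "updated_at" column name are assumptions for this example, and a real resolution policy may be more elaborate; the sketch only shows the basic "most recent change wins" decision.

```python
from datetime import datetime, timedelta

def resolve_collision(incoming, current):
    """Timestamp-based conflict resolution sketch: when the same row was
    changed on two databases, the most recent change wins."""
    if incoming["updated_at"] >= current["updated_at"]:
        return incoming        # apply the replicated change
    return current             # keep the local change, discard the older one

now = datetime.now()
local  = {"id": 42, "status": "shipped",   "updated_at": now}
remote = {"id": 42, "status": "cancelled", "updated_at": now - timedelta(seconds=3)}
print(resolve_collision(remote, local))   # the local change is newer and wins
```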

Heterogeneous Replication

In contrast to many other database replication tools, HVR is not tied to a single DBMS product. HVR can just as easily replicate data between an Oracle database and a DB2 database as between two different versions of Microsoft SQL Server databases.

Figure 11. Example: HVR replication from Oracle Database to Microsoft SQL Server

HVR's performance and functional features are independent of the DBMS environments and of whether they are homogeneous or heterogeneous, including bi-directional replication and collision handling. There is no need for additional tools or interfaces, as the local HVR agents interface directly with the local DBMS instance and automatically take care of any necessary data type conversions. Moreover, HVR can be configured to selectively replicate data from the source to the target databases, map between different table or column names, and convert data values during replication. The database schemas of the source and target databases need not be identical.

Selective replication
HVR can be configured to selectively replicate changes in a given table. Only changes to rows that match a definable criterion are replicated; changes to other rows are ignored. This can, for example, be used to allow old rows to be purged from a source production database while being kept in a target database used for archiving or reporting. It can also be used for horizontal partitioning, in which different parts of a table are replicated in different directions, for example when a SaaS vendor distributes data out of its central database to local customers who can only see their own data.

Name and data conversion
HVR supports replication between tables or columns that have different names on the source and the target databases. On top of that, HVR can be instructed to calculate new column values for the target table from the source table's column values using an SQL expression, or to provide a default value. As a result, source and target tables can have different columns. Columns that are present in the source table but not in the target table may be ignored or can be used as input to calculate values for another column in the target database. Alternatively, columns that are present in the target table but not in the source table may be filled with a default or calculated value.

Figure 12. Name and data conversion
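The following sketch illustrates the combination of selective replication and name and data conversion described above. All column names, the tenant filter and the transformation rules are invented for this example; in HVR such rules are expressed through channel actions and SQL expressions rather than Python code.

```python
def replicate_row(source_row):
    """Illustrative transformation pipeline: filter rows by a criterion
    (selective replication), rename columns, and compute a new target
    column from source values."""
    # Horizontal filtering: only replicate rows for one tenant (example rule).
    if source_row["tenant_id"] != "ACME":
        return None
    # Column renaming, a calculated value, and a default value; a column that
    # is missing on the target ('internal_note') is simply dropped.
    return {
        "customer_name": source_row["cust_nm"],                        # renamed column
        "order_total":   source_row["qty"] * source_row["unit_price"], # calculated value
        "load_date":     "2016-01-01",                                  # default value
    }

rows = [
    {"tenant_id": "ACME",  "cust_nm": "Jones", "qty": 3, "unit_price": 9.5, "internal_note": "n/a"},
    {"tenant_id": "OTHER", "cust_nm": "Smith", "qty": 1, "unit_price": 4.0, "internal_note": "n/a"},
]
replicated = [replicate_row(row) for row in rows]
print([r for r in replicated if r is not None])   # only the ACME row survives
```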

2.3 Database Compare and Refresh/Repair

The HVR Database Compare and Refresh/Repair functionality is very useful for bringing two or more databases in sync. Refresh can also be used for fast batch data loading or to perform the initial load for continuous database replication. Database Compare and Refresh/Repair can also be used to resynchronize replicated databases after a crash has occurred. The data transformation rules defined for continuous database replication are taken into consideration during Compare, Repair and Refresh operations.

Database Compare and Refresh/Repair share many of the features and benefits of continuous data replication. Specifically, they can be deployed in heterogeneous database environments and between databases with different schemas, and they can handle large data volumes through their high performance and efficiency. Database Compare can by itself be useful to monitor and verify the consistency of the databases. Both Database Compare and Database Refresh/Repair can be run in two modes, a bulk mode and a row-wise mode:

Bulk compare
In the case of bulk compare, HVR computes checksums on each of the corresponding tables in the compared databases and subsequently compares these checksums. The actual data does not travel over the network, so this method is very efficient for verifying whether large databases are in sync.

Figure 13. Bulk compare
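The checksum idea behind bulk compare can be sketched as below. The order-independent XOR of per-row hashes is a generic illustration and not HVR's actual checksum algorithm; the point is only that a small digest per table, rather than the data itself, needs to cross the network.

```python
import hashlib

def table_checksum(rows):
    """Order-independent checksum over all rows of a table, so that only
    this small digest needs to travel over the network during a compare."""
    digest = 0
    for row in rows:
        h = hashlib.sha256(repr(sorted(row.items())).encode()).hexdigest()
        digest ^= int(h, 16)          # XOR makes the result order-independent
    return digest

source = [{"id": 1, "qty": 5}, {"id": 2, "qty": 7}]
target = [{"id": 2, "qty": 7}, {"id": 1, "qty": 5}]      # same data, other order
print(table_checksum(source) == table_checksum(target))  # True: tables are in sync
```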

Bulk refresh/repair
During bulk refresh HVR reads the data from all tables in the source database. If the databases are on different servers, the data is compressed and transported over the network. As the final step, the data is loaded into the target database, after which the table constraints and indexes are reinitialized. Bulk refresh is the fastest option for performing a database copy.

Figure 14. Bulk refresh

Row-wise compare
During row-wise compare HVR extracts sorted data from the source database, sends it in compressed format over the network (if required), and compares the data table by table and row by row with the target database. For any detected differences it then generates the minimal SQL script required to apply the inserts, updates or deletes to the target database to make it consistent with the source database.

Row-wise refresh/repair
Row-wise refresh performs the same steps as row-wise compare, but the resulting SQL script is applied directly to the target database for resynchronization. Row-wise refresh is efficient in cases where the source and the target databases are largely equal.

Figure 15. Row-wise compare and repair (refresh)

All modes of Database Compare and Refresh/Repair are very flexible. They respect any transformation rules that may have been defined for continuous database replication, such as selective replication and name and data conversions. Database Compare and Refresh/Repair are optimized for performance and flexibility. For example, data streams can be parallelized across tables and across destinations. HVR orders the tables into groups of similar size by checking the DBMS catalog information on table sizes, ensuring optimal load-balancing across the streams and processes. Moreover, the source database does not have to be taken offline during a refresh to avoid loss of changes.
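The essence of row-wise compare and repair can be sketched as a keyed diff that produces the minimal set of statements needed to make the target match the source. The table name, key column and SQL shape are assumptions for this example; the real product works on sorted, compressed data streams rather than in-memory dictionaries.

```python
def row_wise_diff(source, target, table="orders"):
    """Compare two keyed row sets and emit the minimal SQL needed to make
    the target match the source (a simplified sketch of row-wise repair)."""
    stmts = []
    for key, row in source.items():
        if key not in target:
            stmts.append(f"INSERT INTO {table} (id, qty) VALUES ({key}, {row['qty']});")
        elif target[key] != row:
            stmts.append(f"UPDATE {table} SET qty = {row['qty']} WHERE id = {key};")
    for key in target:
        if key not in source:
            stmts.append(f"DELETE FROM {table} WHERE id = {key};")
    return stmts

source = {1: {"qty": 5}, 2: {"qty": 7}}
target = {1: {"qty": 5}, 2: {"qty": 9}, 3: {"qty": 1}}
print("\n".join(row_wise_diff(source, target)))
# UPDATE orders SET qty = 7 WHERE id = 2;
# DELETE FROM orders WHERE id = 3;
```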

HVR will capture all changes applied to the source database during the refresh process and include them in the update of the target database. If a row-wise refresh is applied, it is not even necessary to take the target database offline.

Using refresh for optimizing target performance
In some cases optimal target performance can be achieved by dropping the target tables and recreating them with fresh data. HVR's bulk refresh feature performs this task and can be scheduled regularly, in the same way as change data capture.

2.4 File Replication

In many organizations a large part of the exchange and distribution of information is realized by copying and transferring data files. This may result in countless unsecured and unmonitored point-to-point connections using various protocols, such as FTP or proprietary protocols. Business-critical purposes, however, benefit from a more infrastructural solution. Managed File Transfer enables centralized management and control of end-to-end file transfers within the enterprise. It also enables secure and auditable file transports, and it can be used between disparate platforms, applications and people.

HVR provides Managed File Transfer through its File Replication functionality. This can be used as a tool in its own right, but also as part of the integrated HVR suite for enterprise data integration. Complex file transfer chains can be configured, scheduled and controlled from a central point on an enterprise-wide level. HVR supports three types of file replication: file-to-file, database-to-file and file-to-database.

File-to-file transfer
An HVR file-to-file transfer copies files from one file location (the source location) to one or more other file locations (the target locations). A file location is a directory or a tree of directories, which can be accessed either through the local file system (Unix, Linux or Windows) or through a network file protocol (FTP, FTPS, SFTP, WebDAV or HDFS). Files can be copied or moved. In the latter case, the files on the source location are deleted after they have been copied to the target locations. The file contents are normally preserved, but it is possible to include file transformations in the copy process using external commands.

File distribution
HVR provides the possibility to copy files selectively from the source location by matching their names against a predefined pattern. This feature also enables the routing of files within the same source location to different target locations on the basis of their file names, enabling selective file distribution scenarios (see the sketch below).

File-to-database
In a file-to-database transfer, data is read from files in the source file location and replicated to one or more target databases. The source files are by default expected to be in a specific HVR XML format, which contains the table information required to determine to which tables and rows the changes should be written in the target database. It is also possible to use other input file formats by including an additional transformation step in the file capture. Support for CSV is already provided, but any format can be handled by providing an external command.
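The pattern-based selection and routing used for file distribution can be sketched as follows. The glob patterns and target location names are purely hypothetical and are not HVR configuration syntax; the sketch only shows the idea of deciding a file's destination from its name.

```python
import fnmatch

# Hypothetical routing rules: glob pattern -> target location name.
routes = {
    "invoice_*.csv": "finance_ftp",
    "sensor_*.json": "hadoop_hdfs",
}

def route_files(filenames):
    """Match each source file against the patterns and decide where it goes;
    files that match no rule are left alone."""
    plan = []
    for name in filenames:
        for pattern, target in routes.items():
            if fnmatch.fnmatch(name, pattern):
                plan.append((name, target))
                break
    return plan

print(route_files(["invoice_0042.csv", "sensor_17.json", "readme.txt"]))
# [('invoice_0042.csv', 'finance_ftp'), ('sensor_17.json', 'hadoop_hdfs')]
```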

Database-to-file
Alternatively, in a database-to-file transfer the data is read from a source database and copied into one or more files on the target file location. The resulting files are by default in the HVR XML format, so that the table information is preserved. However, CSV is also supported, and other file formats can be obtained by including an additional transformation command in the file output. As in continuous database replication between databases, it is possible to select specific tables and rows from the source database and to convert names and column values.

Advanced Scenarios
HVR's flexible architecture and seamless file integration also make it possible to combine various scenarios, for example combining distribution and conversion into a database distribution and file distribution channel. HVR's flexible architecture enables all kinds of scenarios easily.

It is no surprise that HVR File Replication is optimized for maximum performance, efficiency and scalability, as it benefits from the same mechanisms for data compression, data encryption, network efficiency and queuing as continuous data replication and Database Compare and Refresh/Repair. It can therefore handle multi-gigabyte files sent in a single task to or from 100 or more locations.

2.5 HVR Management and Operations

User Interface
All HVR tasks can be managed from a central, intuitive and integrated Graphical User Interface (GUI). The GUI connects to the HVR hub and can be installed directly on the hub machine. Alternatively, it can be installed on the user's PC and connect to the hub machine over the network. The hub controls and monitors all HVR agents. Administrators use the GUI to configure HVR tasks, such as continuous database replication or Managed File Transfer schemes, and to schedule, execute and monitor these tasks. Instead of using the GUI, all HVR actions can also be initiated from the command line and hence be included in scripts for automating operations.

Figure 16. HVR GUI - Configuration panel

HVR Channels

Setting up or modifying an HVR task consists of two parts: Location Configuration and Channel Definition.

Location Configuration
The Location Configuration contains the parameters that describe the addresses and access credentials of each database and file store ("location") that will be involved in the HVR task. Typical parameters include network node names or IP addresses, port numbers and login passwords. The type of storage (database or file store) and the kind of database (e.g. Oracle or Ingres) are also defined here. Each location has a logical name.

Channel Definition
The Channel Definition defines the logical transformation rules for the HVR task. Locations are referenced by the logical names assigned in the Location Configuration. The locations are combined into location groups, which act as the source and target of the HVR operation. Tables that should be replicated can be selected from the database data dictionary. Finally, channel actions define the type of operation (e.g. a database replication consisting of a capture and an integrate action) and the options and parameters that should be applied (such as tuning options or column transformations). Once an HVR task has been configured it can be generated into a job on the hub machine and scheduled for execution.

The separation between the Location Group in the Channel Definition and the Location Configuration allows for flexibility in terms of reuse and role separation. With the implementation details of the environment included in the Location Configuration and hidden from the Channel Definition, it is easy to change physical parameters within the environment, such as addresses, or even to replace one type of DBMS by another, without having to change the logical parameters in the Channel Definition. Location Configuration can be done by system administrators without any knowledge of the replication logic, while Channel Definition can be done by the developers of replication schemes, who do not need to be aware of the particular details of the deployment environment.
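The value of keeping physical locations separate from the logical channel can be illustrated with a small sketch. This is not HVR's configuration format; the dictionaries, host names and action names are invented for the example. The point is that the same channel definition can be pointed at a test or a production set of locations without any change to the channel itself.

```python
# Illustrative only: hypothetical location and channel structures.
locations_test = {
    "src": {"dbms": "oracle",    "host": "test-db01", "port": 1521},
    "tgt": {"dbms": "sqlserver", "host": "test-dw01", "port": 1433},
}
locations_prod = {
    "src": {"dbms": "oracle",    "host": "prod-db01", "port": 1521},
    "tgt": {"dbms": "sqlserver", "host": "prod-dw01", "port": 1433},
}

# The channel refers only to logical location names, groups and tables,
# so it can be promoted from test to production unchanged.
channel = {
    "groups":  {"SOURCE": ["src"], "TARGET": ["tgt"]},
    "tables":  ["orders", "customers"],
    "actions": [("SOURCE", "capture"), ("TARGET", "integrate")],
}

def deploy(channel, locations):
    """Resolve logical location names against one physical environment."""
    for group, action in channel["actions"]:
        for loc in channel["groups"][group]:
            print(f"{action} on {locations[loc]['host']} ({locations[loc]['dbms']})")

deploy(channel, locations_test)   # same channel definition...
deploy(channel, locations_prod)   # ...different physical environment
```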

With this separation, a given Channel Definition can first be tested within a testing environment and then be deployed to the production environment by changing only the Location Configuration. Likewise, a high availability implementation can simply be reversed by flipping the allocation of the physical source and target environments.

Monitoring and Statistics
The monitor, as part of the GUI, provides status information and statistics on the progress of the various HVR tasks for all involved operations in the distributed environment. HVR can also create alerts and interface through SNMP traps with popular system management tools, such as HP OpenView and Nagios.

Figure 17. Monitoring

2.6 Platform Support

HVR has been designed to support complex data integration scenarios across large heterogeneous computing environments with divergent hardware platforms, database products and file systems. HVR can be installed on a number of operating systems and can interface with most of the mainstream DBMS products and file systems.

Supported Database Management Systems
HVR can interact with the following DBMS products through native support:
- Oracle Database (including Exadata and Amazon RDS for Oracle)
- Microsoft SQL Server (including SQL Azure and Amazon RDS for SQL Server)
- PostgreSQL (including Amazon RDS for PostgreSQL)
- Actian Ingres, Vector (Vectorwise) and Matrix (ParAccel)
- IBM DB2 LUW and DB2 for i editions
- Salesforce
- Teradata Database
- Pivotal Greenplum
- Amazon Redshift
- XtremeData

HVR supports all editions of the above databases, such as Enterprise, Standard, Express, BI, RAC, ASM, and so on. Through its XML file interface and support for external agents, HVR provides an API into any database or application platform.

Supported Operating Systems & Cloud Platforms
HVR can be installed on the following operating systems:
- Linux for x86 and x86-64
- Microsoft Windows for x86 and x86-64
- Solaris for SPARC and Intel
- HP-UX for Itanium
- IBM AIX

HVR supports real and virtual instances of these operating systems. HVR can be installed on the following cloud platforms:
- Microsoft Azure VM (IaaS)
- Microsoft Azure Cloud Services (PaaS)
- Amazon EC2 (IaaS)
- Amazon RDS (PaaS)
- Salesforce (SaaS)

HVR also supports the supported databases and file systems running in an IaaS/PaaS environment, including (virtualized) environments provided by the cloud vendor.

Supported File Locations
HVR can directly access files on the local file system of the servers on which it is installed, or access files over network file protocols. The following file systems are supported:
- Unix and Linux file systems
- Windows file systems
- Cloud IaaS file systems (e.g. Amazon S3 and Microsoft Azure Storage)
- Hadoop Distributed File System (HDFS)
- Microsoft SharePoint (WebDAV)
- FTP(S) and SFTP

3 About HVR High Volume Replication

At HVR, we believe it should be easy to deliver large volumes of data efficiently, reliably and at the right time into your data store of choice. Our software, the HVR High Volume Replicator, does exactly this using real-time data capture between data sources including SQL databases, Hadoop, data warehousing and business intelligence data stores, as well as the most commonly used file systems. For organizations where real-time data replication is a mission-critical process, HVR has been proven to be a reliable, secure and scalable solution by some of the largest global companies and leading government and defense organizations. HVR Software is a privately held company with offices in North America, Europe and Asia Pacific. For more information or to request a trial, please visit our website.

Copyright 2016 HVR Software. All rights reserved. Trademarks referenced in this document are the sole property of their respective owners.


More information

Web Application Hosting Cloud Architecture

Web Application Hosting Cloud Architecture Web Application Hosting Cloud Architecture Executive Overview This paper describes vendor neutral best practices for hosting web applications using cloud computing. The architectural elements described

More information

Data Integration Overview

Data Integration Overview Data Integration Overview Phill Rizzo Regional Manager, Data Integration Solutions phillip.rizzo@oracle.com Oracle Products for Data Movement Comparing How They Work Oracle Data

More information

Deployment Options for Microsoft Hyper-V Server

Deployment Options for Microsoft Hyper-V Server CA ARCserve Replication and CA ARCserve High Availability r16 CA ARCserve Replication and CA ARCserve High Availability Deployment Options for Microsoft Hyper-V Server TYPICALLY, IT COST REDUCTION INITIATIVES

More information

EMC Backup and Recovery for Microsoft SQL Server 2008 Enabled by EMC Celerra Unified Storage

EMC Backup and Recovery for Microsoft SQL Server 2008 Enabled by EMC Celerra Unified Storage EMC Backup and Recovery for Microsoft SQL Server 2008 Enabled by EMC Celerra Unified Storage Applied Technology Abstract This white paper describes various backup and recovery solutions available for SQL

More information

Using MySQL for Big Data Advantage Integrate for Insight Sastry Vedantam sastry.vedantam@oracle.com

Using MySQL for Big Data Advantage Integrate for Insight Sastry Vedantam sastry.vedantam@oracle.com Using MySQL for Big Data Advantage Integrate for Insight Sastry Vedantam sastry.vedantam@oracle.com Agenda The rise of Big Data & Hadoop MySQL in the Big Data Lifecycle MySQL Solutions for Big Data Q&A

More information

SharePlex for SQL Server

SharePlex for SQL Server SharePlex for SQL Server Improving analytics and reporting with near real-time data replication Written by Susan Wong, principal solutions architect, Dell Software Abstract Many organizations today rely

More information

WHITEPAPER. HIGH VOLUME REPLICATION IN INGRES Robust, Scalable Replication is Essential in Environments Where High Availability Is Key

WHITEPAPER. HIGH VOLUME REPLICATION IN INGRES Robust, Scalable Replication is Essential in Environments Where High Availability Is Key WHITEPAPER HIGH VOLUME REPLICATION IN INGRES Robust, Scalable Replication is Essential in Environments Where High Availability Is Key TABLE OF CONTENTS: Introduction...1 Features...2 How is HVR used?...3

More information

How To Install Powerpoint 6 On A Windows Server With A Powerpoint 2.5 (Powerpoint) And Powerpoint 3.5.5 On A Microsoft Powerpoint 4.5 Powerpoint (Powerpoints) And A Powerpoints 2

How To Install Powerpoint 6 On A Windows Server With A Powerpoint 2.5 (Powerpoint) And Powerpoint 3.5.5 On A Microsoft Powerpoint 4.5 Powerpoint (Powerpoints) And A Powerpoints 2 DocAve 6 Service Pack 1 Installation Guide Revision C Issued September 2012 1 Table of Contents About the Installation Guide... 4 Submitting Documentation Feedback to AvePoint... 4 Before You Begin...

More information

Active-Active and High Availability

Active-Active and High Availability Active-Active and High Availability Advanced Design and Setup Guide Perceptive Content Version: 7.0.x Written by: Product Knowledge, R&D Date: July 2015 2015 Perceptive Software. All rights reserved. Lexmark

More information

An Oracle White Paper January 2013. A Technical Overview of New Features for Automatic Storage Management in Oracle Database 12c

An Oracle White Paper January 2013. A Technical Overview of New Features for Automatic Storage Management in Oracle Database 12c An Oracle White Paper January 2013 A Technical Overview of New Features for Automatic Storage Management in Oracle Database 12c TABLE OF CONTENTS Introduction 2 ASM Overview 2 Total Storage Management

More information

Siebel Installation Guide for UNIX. Siebel Innovation Pack 2013 Version 8.1/8.2, Rev. A April 2014

Siebel Installation Guide for UNIX. Siebel Innovation Pack 2013 Version 8.1/8.2, Rev. A April 2014 Siebel Installation Guide for UNIX Siebel Innovation Pack 2013 Version 8.1/8.2, Rev. A April 2014 Copyright 2005, 2014 Oracle and/or its affiliates. All rights reserved. This software and related documentation

More information

VERITAS Cluster Server Traffic Director Option. Product Overview

VERITAS Cluster Server Traffic Director Option. Product Overview VERITAS Cluster Server Traffic Director Option Product Overview V E R I T A S W H I T E P A P E R Table of Contents Traffic Director Option for VERITAS Cluster Server Overview.............................................1

More information

RDS Migration Tool Customer FAQ Updated 7/23/2015

RDS Migration Tool Customer FAQ Updated 7/23/2015 RDS Migration Tool Customer FAQ Updated 7/23/2015 Amazon Web Services is now offering the Amazon RDS Migration Tool a powerful utility for migrating data with minimal downtime from on-premise and EC2-based

More information

Dedicated Real-time Reporting Instances for Oracle Applications using Oracle GoldenGate

Dedicated Real-time Reporting Instances for Oracle Applications using Oracle GoldenGate Dedicated Real-time Reporting Instances for Oracle Applications using Oracle GoldenGate Keywords: Karsten Stöhr ORACLE Deutschland B.V. & Co. KG Hamburg Operational Reporting, Query Off-Loading, Oracle

More information

IBM Tivoli Storage Manager Version 7.1.4. Introduction to Data Protection Solutions IBM

IBM Tivoli Storage Manager Version 7.1.4. Introduction to Data Protection Solutions IBM IBM Tivoli Storage Manager Version 7.1.4 Introduction to Data Protection Solutions IBM IBM Tivoli Storage Manager Version 7.1.4 Introduction to Data Protection Solutions IBM Note: Before you use this

More information

SHARPCLOUD SECURITY STATEMENT

SHARPCLOUD SECURITY STATEMENT SHARPCLOUD SECURITY STATEMENT Summary Provides details of the SharpCloud Security Architecture Authors: Russell Johnson and Andrew Sinclair v1.8 (December 2014) Contents Overview... 2 1. The SharpCloud

More information

Oracle9i Database Release 2 Product Family

Oracle9i Database Release 2 Product Family Database Release 2 Product Family An Oracle White Paper January 2002 Database Release 2 Product Family INTRODUCTION Database Release 2 is available in three editions, each suitable for different development

More information

Welcome. Changes and Choices

Welcome. Changes and Choices Welcome Changes and Choices Today s Session Thursday, February 23, 2012 Agenda 1. The Fillmore Group Introduction 2. Reasons to Implement Replication 3. IBM s Replication Options How We Got Here 4. The

More information

AppSense Environment Manager. Enterprise Design Guide

AppSense Environment Manager. Enterprise Design Guide Enterprise Design Guide Contents Introduction... 3 Document Purpose... 3 Basic Architecture... 3 Common Components and Terminology... 4 Best Practices... 5 Scalability Designs... 6 Management Server Scalability...

More information

Data Protection with IBM TotalStorage NAS and NSI Double- Take Data Replication Software

Data Protection with IBM TotalStorage NAS and NSI Double- Take Data Replication Software Data Protection with IBM TotalStorage NAS and NSI Double- Take Data Replication September 2002 IBM Storage Products Division Raleigh, NC http://www.storage.ibm.com Table of contents Introduction... 3 Key

More information

Configure AlwaysOn Failover Cluster Instances (SQL Server) using InfoSphere Data Replication Change Data Capture (CDC) on Windows Server 2012

Configure AlwaysOn Failover Cluster Instances (SQL Server) using InfoSphere Data Replication Change Data Capture (CDC) on Windows Server 2012 Configure AlwaysOn Failover Cluster Instances (SQL Server) using InfoSphere Data Replication Change Data Capture (CDC) on Windows Server 2012 Introduction As part of the SQL Server AlwaysOn offering, AlwaysOn

More information

High Availability of VistA EHR in Cloud. ViSolve Inc. White Paper February 2015. www.visolve.com

High Availability of VistA EHR in Cloud. ViSolve Inc. White Paper February 2015. www.visolve.com High Availability of VistA EHR in Cloud ViSolve Inc. White Paper February 2015 1 Abstract Inspite of the accelerating migration to cloud computing in the Healthcare Industry, high availability and uptime

More information

Overview. Timeline Cloud Features and Technology

Overview. Timeline Cloud Features and Technology Overview Timeline Cloud is a backup software that creates continuous real time backups of your system and data to provide your company with a scalable, reliable and secure backup solution. Storage servers

More information

High Availability Database Solutions. for PostgreSQL & Postgres Plus

High Availability Database Solutions. for PostgreSQL & Postgres Plus High Availability Database Solutions for PostgreSQL & Postgres Plus An EnterpriseDB White Paper for DBAs, Application Developers and Enterprise Architects November, 2008 High Availability Database Solutions

More information

Advanced In-Database Analytics

Advanced In-Database Analytics Advanced In-Database Analytics Tallinn, Sept. 25th, 2012 Mikko-Pekka Bertling, BDM Greenplum EMEA 1 That sounds complicated? 2 Who can tell me how best to solve this 3 What are the main mathematical functions??

More information

Upgrading to Microsoft SQL Server 2008 R2 from Microsoft SQL Server 2008, SQL Server 2005, and SQL Server 2000

Upgrading to Microsoft SQL Server 2008 R2 from Microsoft SQL Server 2008, SQL Server 2005, and SQL Server 2000 Upgrading to Microsoft SQL Server 2008 R2 from Microsoft SQL Server 2008, SQL Server 2005, and SQL Server 2000 Your Data, Any Place, Any Time Executive Summary: More than ever, organizations rely on data

More information

METALOGIX REPLICATOR FOR SHAREPOINT: Supporting Government and Military Missions Worldwide

METALOGIX REPLICATOR FOR SHAREPOINT: Supporting Government and Military Missions Worldwide METALOGIX REPLICATOR FOR SHAREPOINT: Supporting Government and Military Missions Worldwide Contents Introduction...2 Coalition and extranet collaboration... 3 Deploying military units... 4 Fob-rob collaboration...4

More information

The Benefits of Virtualizing

The Benefits of Virtualizing T E C H N I C A L B R I E F The Benefits of Virtualizing Aciduisismodo Microsoft SQL Dolore Server Eolore in Dionseq Hitachi Storage Uatummy Environments Odolorem Vel Leveraging Microsoft Hyper-V By Heidi

More information

ORACLE BUSINESS INTELLIGENCE SUITE ENTERPRISE EDITION PLUS

ORACLE BUSINESS INTELLIGENCE SUITE ENTERPRISE EDITION PLUS Oracle Fusion editions of Oracle's Hyperion performance management products are currently available only on Microsoft Windows server platforms. The following is intended to outline our general product

More information

Neverfail for Windows Applications June 2010

Neverfail for Windows Applications June 2010 Neverfail for Windows Applications June 2010 Neverfail, from Neverfail Ltd. (www.neverfailgroup.com), ensures continuity of user services provided by Microsoft Windows applications via data replication

More information

Managed File Transfer

Managed File Transfer Managed File Transfer How do most organizations move files today? FTP Typically File Transfer Protocol (FTP) is combined with writing and maintaining homegrown code to address its limitations Limited Reliability

More information

Data processing goes big

Data processing goes big Test report: Integration Big Data Edition Data processing goes big Dr. Götz Güttich Integration is a powerful set of tools to access, transform, move and synchronize data. With more than 450 connectors,

More information

CA ARCserve Replication and High Availability Deployment Options for Hyper-V

CA ARCserve Replication and High Availability Deployment Options for Hyper-V Solution Brief: CA ARCserve R16.5 Complexity ate my budget CA ARCserve Replication and High Availability Deployment Options for Hyper-V Adding value to your Hyper-V environment Overview Server virtualization

More information

System Migrations Without Business Downtime. An Executive Overview

System Migrations Without Business Downtime. An Executive Overview System Migrations Without Business Downtime An Executive Overview Businesses grow. Technologies evolve. System migrations may be inevitable, but business downtime isn t. All businesses strive for growth.

More information

FIFTH EDITION. Oracle Essentials. Rick Greenwald, Robert Stackowiak, and. Jonathan Stern O'REILLY" Tokyo. Koln Sebastopol. Cambridge Farnham.

FIFTH EDITION. Oracle Essentials. Rick Greenwald, Robert Stackowiak, and. Jonathan Stern O'REILLY Tokyo. Koln Sebastopol. Cambridge Farnham. FIFTH EDITION Oracle Essentials Rick Greenwald, Robert Stackowiak, and Jonathan Stern O'REILLY" Beijing Cambridge Farnham Koln Sebastopol Tokyo _ Table of Contents Preface xiii 1. Introducing Oracle 1

More information

openft Enterprise File Transfer Copyright 2011 FUJITSU

openft Enterprise File Transfer Copyright 2011 FUJITSU openft Enterprise File Transfer Introduction 1 Enterprise File Transfer openft Ready to Transfer your Business Critical Data 2 openft in a nutshell openft is a high-performance solution for enterprise-wide

More information

ORACLE DATA SHEET KEY FEATURES AND BENEFITS ORACLE WEBLOGIC SERVER STANDARD EDITION

ORACLE DATA SHEET KEY FEATURES AND BENEFITS ORACLE WEBLOGIC SERVER STANDARD EDITION KEY FEATURES AND BENEFITS STANDARD EDITION Java EE 7 full platform support Java SE 8 certification, support Choice of IDEs, development tools and frameworks Oracle Cloud compatibility Industry-leading

More information

CONDIS. IT Service Management and CMDB

CONDIS. IT Service Management and CMDB CONDIS IT Service and CMDB 2/17 Table of contents 1. Executive Summary... 3 2. ITIL Overview... 4 2.1 How CONDIS supports ITIL processes... 5 2.1.1 Incident... 5 2.1.2 Problem... 5 2.1.3 Configuration...

More information

CASE STUDY: Oracle TimesTen In-Memory Database and Shared Disk HA Implementation at Instance level. -ORACLE TIMESTEN 11gR1

CASE STUDY: Oracle TimesTen In-Memory Database and Shared Disk HA Implementation at Instance level. -ORACLE TIMESTEN 11gR1 CASE STUDY: Oracle TimesTen In-Memory Database and Shared Disk HA Implementation at Instance level -ORACLE TIMESTEN 11gR1 CASE STUDY Oracle TimesTen In-Memory Database and Shared Disk HA Implementation

More information

HA / DR Jargon Buster High Availability / Disaster Recovery

HA / DR Jargon Buster High Availability / Disaster Recovery HA / DR Jargon Buster High Availability / Disaster Recovery Welcome to Maxava s Jargon Buster. Your quick reference guide to Maxava HA and industry technical terms related to High Availability and Disaster

More information

IBM i25 Trends & Directions

IBM i25 Trends & Directions Gl. Avernæs 20. November 2013 Erik Rex Cert. Consultant rex@dk.ibm.com Thanks to Steve Will IBM i Chief Architect 2013 IBM Corporation The Family Tree 1975 1988 2013 2013 IBM Corporation 3 2013 IBM Corporation

More information

High Availability with Postgres Plus Advanced Server. An EnterpriseDB White Paper

High Availability with Postgres Plus Advanced Server. An EnterpriseDB White Paper High Availability with Postgres Plus Advanced Server An EnterpriseDB White Paper For DBAs, Database Architects & IT Directors December 2013 Table of Contents Introduction 3 Active/Passive Clustering 4

More information

ORACLE BUSINESS INTELLIGENCE SUITE ENTERPRISE EDITION PLUS

ORACLE BUSINESS INTELLIGENCE SUITE ENTERPRISE EDITION PLUS ORACLE BUSINESS INTELLIGENCE SUITE ENTERPRISE EDITION PLUS PRODUCT FACTS & FEATURES KEY FEATURES Comprehensive, best-of-breed capabilities 100 percent thin client interface Intelligence across multiple

More information

An Oracle White Paper June 2012. High Performance Connectors for Load and Access of Data from Hadoop to Oracle Database

An Oracle White Paper June 2012. High Performance Connectors for Load and Access of Data from Hadoop to Oracle Database An Oracle White Paper June 2012 High Performance Connectors for Load and Access of Data from Hadoop to Oracle Database Executive Overview... 1 Introduction... 1 Oracle Loader for Hadoop... 2 Oracle Direct

More information

Data Security and Governance with Enterprise Enabler

Data Security and Governance with Enterprise Enabler Copyright 2014 Stone Bond Technologies, L.P. All rights reserved. The information contained in this document represents the current view of Stone Bond Technologies on the issue discussed as of the date

More information

Data Integration Checklist

Data Integration Checklist The need for data integration tools exists in every company, small to large. Whether it is extracting data that exists in spreadsheets, packaged applications, databases, sensor networks or social media

More information