Oracle's In-Memory Database Strategy for OLTP and Analytics
Tirthankar Lahiri, Markus Kissling
Oracle Corporation

Keywords: TimesTen, Database In-Memory, Oracle Database 12c

Introduction

With increasing DRAM densities, systems with hundreds of gigabytes and even terabytes of memory are becoming common. This trend is exemplified by Oracle's SPARC M6-32 SuperCluster, an enormous system featuring up to 32 terabytes of main memory. A large fraction of user data, and in many cases all user data, can thus be stored entirely in DRAM, avoiding the need for costly disk IO. Oracle has two complementary in-memory database technology offerings, one for the application tier and one for the database tier, as depicted in Illustration 1 below.

1) Oracle TimesTen: For applications requiring ultra-low response time, Oracle TimesTen is a specialized, memory-resident relational database that can be directly embedded within an application, achieving microsecond response times. TimesTen can be deployed either as a standalone database or as a high-performance, fully transactional cache of data sourced from an underlying Oracle Database. TimesTen is therefore typically deployed within the application tier, and its primary purpose is to provide ultra-low latency for custom OLTP applications.

2) Database In-Memory: Within the database tier, the Oracle Database In-Memory option features a unique dual-format architecture enabling orders-of-magnitude faster analytic workloads (billions of rows per second processing speeds), as well as significantly faster enterprise workloads such as CRM, HCM, and ERP through the elimination of analytic indexes. With Database In-Memory, the total size of the database is not limited by the size of main memory: the in-memory column store can be used for important tables, while the buffer cache and smart flash cache (on Exadata storage) provide tiered storage across memory, flash, and disk.

Illustration 1: Cross-Tier In-Memory Database Technology
In this two-part paper, we first provide an overview of the TimesTen In-Memory Database and then conclude with an overview of Oracle Database In-Memory.

Part I: Oracle TimesTen In-Memory Database

1.1 Architectural Overview of TimesTen

The Oracle TimesTen In-Memory Database is a memory-resident, fully persistent, ACID-compliant database. As shown in Illustration 2 below, TimesTen delivers extremely high performance as a result of its fully memory-based design: the entire database is in memory at all times and can be accessed in-process, without any network latency. This performance is the reason TimesTen is deployed by thousands of customers worldwide in applications requiring real-time responsiveness: financial services, telecommunications, network management, ecommerce, and online retail, to name a few of TimesTen's many customer segments.

Illustration 2: Oracle TimesTen Performance

1.1.1 Overview: APIs and Connectivity

TimesTen provides standard APIs for database operations, including ODBC, JDBC, and the Oracle Call Interface (OCI). Applications connect to TimesTen either via a conventional client-server protocol or via an optimized direct-link protocol in which the application directly links in the TimesTen library. In direct-link mode, all database API invocations are treated simply as function calls into the TimesTen shared library, allowing for in-process execution of database code.

1.1.2 Overview: Basic Storage Manager Design

A few basic principles guide the design of the storage manager. Concurrency control mechanisms are both lightweight (e.g., using lightweight latches instead of database locks whenever possible) and fine-grained, to allow for maximal scaling on multi-core architectures (as demonstrated by the throughput shown in Illustration 2 above). Another overarching design principle is the use of memory-based addressing rather than logical addressing.
For instance, indexes in TimesTen contain pointers to the tuples in the base table, and the metadata describing the layout of a table contains pointers to the pages comprising the table. Both index scans and table scans can therefore operate via pointer traversal. This design approach is repeated throughout the storage manager, with the result that TimesTen is significantly faster than a disk-oriented database (even one that is completely cached), since there is no overhead from translating logical rowids to the physical memory addresses of buffers in a buffer cache.

TimesTen supports multiple index types: hash indexes for speeding up lookup queries, bitmap indexes for accelerating star joins with complex predicates, and range indexes for accelerating range scans. Since the table data is always in memory, index leaf nodes do not need to store key values, yielding significant space savings.
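The memory-based addressing idea above can be sketched in a few lines of Python, where object references stand in for direct memory pointers (this is purely illustrative; the class names are invented and this is not TimesTen's implementation):

```python
# Sketch: memory-addressed indexing. The index stores references to the
# row objects themselves, not logical rowids, so a lookup needs no
# rowid-to-buffer translation and the "leaf" entries hold no key copies.

class Table:
    def __init__(self):
        self.rows = []          # pages of tuples, reachable by reference

    def insert(self, row):
        self.rows.append(row)
        return row              # the reference acts as the row's address

class HashIndex:
    def __init__(self, key_column):
        self.key_column = key_column
        self.buckets = {}       # key -> reference to the base-table tuple

    def put(self, row):
        self.buckets[row[self.key_column]] = row

    def lookup(self, key):
        return self.buckets.get(key)

orders = Table()
idx = HashIndex("order_id")
r = orders.insert({"order_id": 42, "amount": 99.5})
idx.put(r)

# The lookup returns the very same tuple object: pure pointer traversal.
assert idx.lookup(42) is r
```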
1.1.3 Overview: Transactional Durability

TimesTen has been designed from the ground up to have full ACID properties. It employs checkpointing with write-ahead logging for durability. Logging is frequently a scaling bottleneck on large multi-core architectures, so, in keeping with the fine-grained concurrency control design motif, TimesTen has a multi-threaded logging mechanism so that log generation is not serialized. In this mechanism, the TimesTen log buffer is divided into multiple partitions that can be populated in parallel by concurrent processes. Sequential ordering of the log is restored when the log is read from disk.

For the highest performance, TimesTen offers an optional delayed-durability mode: an application commits by writing a commit record into the log buffer without waiting for the log to be forced to disk. The log is flushed to disk every 200 msec as a background activity. This mode of operation allows the highest possible throughput with the possibility of a small (bounded) amount of data loss, unless the system is configured with 2-safe replication as described later.

The database has two checkpoint files for the permanent data and a variable number of log files. Once a database checkpoint is completed, the checkpointing operation switches to the other file. With this approach, one of the checkpoint files is always a completed checkpoint of the in-memory contents and can be used for backup and roll-forward recovery. TimesTen supports fuzzy checkpointing with user-configurable rate controls.

TimesTen offers read-committed transactional isolation by default (with serializable isolation as an application choice). With read-committed isolation, application bottlenecks are minimized: TimesTen uses a lockless multi-versioning mechanism for read-write concurrency and row-level locking for maximal write-write concurrency.
Row-level multi-versioning allows readers to avoid acquiring any locks at all, while row-level locking provides the highest possible scalability for update-intensive workloads.

1.2 TimesTen High Availability

The vast majority of TimesTen deployments use replication for high availability. TimesTen replication is log-based, relying on shipping log records for committed transactions from a transmitter database to a receiver database, where the changes in the log are then applied. Replication can be configured with multiple levels of resiliency. For the highest resiliency, TimesTen provides 2-safe replication, in which a transaction is committed locally only after the commit has been successfully acknowledged by the receiver. 2-safe replication is often used with non-durable commit; this combination allows applications to achieve commit durability in two memories, without requiring any disk IO.

Illustration 3: TimesTen Replication
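The 2-safe commit sequence described above can be sketched as follows (an illustrative Python sketch; the class and method names are invented and no real TimesTen API is shown):

```python
# Sketch: 2-safe replication with non-durable commit. The active database
# makes the transaction visible locally only after the standby has
# acknowledged it, so the change is durable in two memories without any
# disk IO on the commit path.

class StandbyDB:
    def __init__(self):
        self.data = {}

    def apply(self, changes):
        self.data.update(changes)
        return True                      # acknowledgment to the transmitter

class ActiveDB:
    def __init__(self, standby):
        self.data = {}
        self.standby = standby

    def commit_2safe(self, changes):
        # Ship the log records for the transaction and wait for the ack...
        if not self.standby.apply(changes):
            raise RuntimeError("standby did not acknowledge; commit fails")
        # ...and only then commit the transaction locally.
        self.data.update(changes)

standby = StandbyDB()
active = ActiveDB(standby)
active.commit_2safe({"acct:7": 250})

assert standby.data["acct:7"] == 250     # durable on the standby
assert active.data["acct:7"] == 250      # and committed locally
```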
Replication can operate at the level of individual tables or of the whole database, and can be configured to be multi-master and bi-directional. The preferred configuration is whole-database replication from an Active database to a Standby database. Standby databases can in turn propagate changes to a number of read-only Subscriber databases, as depicted in Illustration 3. In this scheme, the Active can host applications that both read and modify the database, while the Standby and the read-only Subscribers can host read-only applications. If the Active fails, the Standby can be promoted to Active. If the Standby fails, the Active replicates directly to the read-only Subscribers.

In keeping with the design principle of end-to-end parallelism, replication can be parallelized across multiple replication tracks. Each track consists of a replication transmitter on the sending side and a replication receiver on the receiving side. Parallel replication preserves commit ordering, but allows transactions without data dependencies to be applied in parallel by the receiver. Together with log parallelism, replication parallelism provides end-to-end parallelism between the master and the receiver, and thus the highest possible throughput.

1.3 Application-Tier Cache Grid

Illustration 4: Application-Tier Cache Grid

The Application-Tier Cache Grid configuration (shown above in Illustration 4) allows TimesTen to be deployed as a persistent transactional cache for data in an Oracle database. Deploying TimesTen as an application-tier cache can greatly accelerate an application even when the database working set is fully cached in memory within the buffer cache, for two reasons:

1) Application proximity: The TimesTen database can be collocated in the application tier, resulting in lower communication costs for access to cached data.
For the best response time, TimesTen can be directly linked to the application, providing in-process execution of operations on cached data.

2) In-memory optimizations: As stated earlier, the TimesTen storage manager is built assuming full memory residency, so it requires far fewer instructions for database operations than disk-based databases.
TimesTen allows a collection of tables to be declared as a Cache Group against corresponding tables in an Oracle database. A cache group is a syntactic declaration comprising one root table and optional child tables related by foreign key constraints. For instance, for an ecommerce application, one approach would be a root table for all user profiles, with child tables for the open orders placed by each user and for each user's service requests. This cache group is depicted in Illustration 5 below.

Illustration 5: Example Cache Group (Users, Orders, Service Requests, Items)

Cache groups can be loaded from an Oracle database in one of two ways:

1) Pre-loaded: The cache group is explicitly loaded before the workload is run. In the above example, all users could be preloaded from the Oracle RDBMS into the cache group.

2) Dynamically loaded: Data is brought into the cache when referenced. For instance, when a user logs in, the user profile record and all the associated Orders and Service Request records could be brought into the cache group at that point, so that all subsequent references by that user benefit from in-memory processing (with a penalty on the first reference only). The data to be referenced must be identified by an equality predicate on the primary key of the root table (in this case, the user profile id). The root table row and related child table rows are referred to as a cache instance.

For dynamically loaded cache groups, the user can set one of two aging policies to remove data and ensure that the database does not run out of space: time-based aging (aging on the value of a timestamp column) or LRU aging, based on recency of usage. When the aging mechanism removes rows from a cache group, it removes them in units of cache instances.
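The dynamic-load and LRU-aging behavior described above can be sketched as follows (a minimal Python sketch; the class name and the BACKEND stand-in for the Oracle-side tables are invented for illustration):

```python
# Sketch: dynamic load on first reference, with LRU aging that evicts
# whole cache instances (root row plus child rows) atomically.
from collections import OrderedDict

# Stand-in for the Oracle-side tables: user profile plus child rows.
BACKEND = {
    1: {"profile": {"id": 1}, "orders": [101, 102], "requests": ["SR-9"]},
    2: {"profile": {"id": 2}, "orders": [201], "requests": []},
}

class DynamicCacheGroup:
    def __init__(self, capacity):
        self.capacity = capacity
        self.instances = OrderedDict()   # user_id -> whole cache instance

    def get(self, user_id):
        if user_id in self.instances:
            self.instances.move_to_end(user_id)   # record recency
        else:
            # First reference: load root row and all child rows together.
            self.instances[user_id] = BACKEND[user_id]
            if len(self.instances) > self.capacity:
                # LRU aging removes a whole cache instance atomically.
                self.instances.popitem(last=False)
        return self.instances[user_id]

cache = DynamicCacheGroup(capacity=1)
cache.get(1)
cache.get(2)          # capacity exceeded: the instance for user 1 ages out
assert 1 not in cache.instances
assert cache.get(2)["orders"] == [201]
```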
Cache instances form the unit of caching since they are atomic units for cache replacement.

Cache groups can be of two basic types, with two corresponding data synchronization mechanisms, to handle a variety of caching scenarios:

1) Read-only cache groups: For data that is infrequently updated but widely read, a read-only cache group can be created on TimesTen to offload the backend Oracle database. Very hot reference data, such as online catalogs or airline gate arrival/departure information, is a candidate for this type of caching. The Oracle-side tables corresponding to read-only cache groups are updated on Oracle, and the updates are periodically refreshed into TimesTen using an automatic refresh mechanism.

2) Updatable cache groups: For frequently updated data, an updatable cache group with write-through synchronization is appropriate. Account balance information for an online ecommerce application, the location of subscribers in a cellular network, streaming sensor data, etc. are all
candidates for write-through caching. TimesTen provides several mechanisms for propagating writes to Oracle, but the most commonly used and highest-performing is Asynchronous Writethrough, in which the changes are replicated to Oracle using a log-based transport mechanism. This mechanism can also apply changes to Oracle in parallel, in keeping with the parallel-everywhere design theme of the system.

It is possible to deploy multiple TimesTen cache databases against a single Oracle database. This architecture is referred to as the Application-Tier Database Cache Grid (depicted in Illustration 4 above). Cache groups can once again be classified into two categories of data visibility on a grid:

1) Local cache groups: The contents of a local cache group are not shared and are visible only to the grid member they are defined on. This type of cache group is useful when the data can be statically partitioned across grid members; for instance, different ranges of user profile ids may be cached on different grid members.

2) Global cache groups: In many cases an application cannot be statically partitioned, and global cache groups allow applications to transparently share cached contents across a grid of independent TimesTen databases. With this type of cache group, cache instances are migrated across the grid on reference. Only consistent (committed) changes are propagated across the grid. In our example, a user profile and its related records can be loaded into one grid member on the user's initial login. When the user disconnects and later logs in on a different grid member, the records for the user profile and related child table entries are migrated from the first grid member to the second. Thus, the contents of a global cache group are accessible from any location via data shipping. The cache grid also provides data sharing via function shipping, or global query.
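The on-reference migration of cache instances in a global cache group can be sketched as follows (illustrative Python only; the grid-member class and the ownership lookup are invented, and the real system of course migrates only committed changes over the network):

```python
# Sketch: a cache instance has exactly one owner at a time; referencing it
# on another grid member migrates it there rather than copying it.

class GridMember:
    def __init__(self, grid):
        self.grid = grid
        self.instances = {}

    def get(self, user_id):
        if user_id not in self.instances:
            # Find the current owner and migrate the committed instance
            # here; otherwise load it from the backend Oracle database.
            for other in self.grid:
                if other is not self and user_id in other.instances:
                    self.instances[user_id] = other.instances.pop(user_id)
                    break
            else:
                self.instances[user_id] = {"id": user_id}  # backend load
        return self.instances[user_id]

grid = []
a, b = GridMember(grid), GridMember(grid)
grid.extend([a, b])

a.get(7)            # initial login loads the instance on member A
b.get(7)            # a later login on B migrates it from A
assert 7 in b.instances and 7 not in a.instances
```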
A global query is a query executed in parallel across multiple grid members. For example, if a global query needs to compute a COUNT(*) on the user profiles table, it ships the count operation to all the grid members and then collates the results by summing the received counts. This mechanism generalizes to much more complex queries involving joins, aggregations, and groupings. Global queries are useful for running reporting operations on a grid, for example: what is the average value of the current outstanding orders for all users on the grid?
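The COUNT(*) example above can be sketched in a few lines (illustrative Python; the grid is modeled as a plain dictionary of member row sets):

```python
# Sketch: function shipping for a global COUNT(*). The count runs locally
# on every grid member; the coordinator collates by summing the results.

def local_count(member_rows):
    return len(member_rows)

def global_count(grid):
    # "Ship" the count operation to each member, then collate.
    return sum(local_count(rows) for rows in grid.values())

grid = {
    "member1": [{"user": 1}, {"user": 2}],
    "member2": [{"user": 3}],
    "member3": [{"user": 4}, {"user": 5}, {"user": 6}],
}
assert global_count(grid) == 6
```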
Part II: Oracle Database In-Memory

2.1 Overview of Oracle Database In-Memory

The Oracle Database In-Memory option provides a unique dual-format row/column in-memory representation, thus avoiding the tradeoffs inherent in single-format in-memory databases. Unlike traditional in-memory databases, the In-Memory option does not limit the size of the database to the size of available DRAM: numerous optimizations at all levels of the storage hierarchy (DRAM, flash, and disk, as well as across machines in a RAC cluster) continue to allow databases of virtually unlimited size. The key novelties of the approach are as follows:

1) Dual format: It is well known that columnar format is well suited for analytics, since analytic queries typically access a small number of columns in a large number of rows. Row storage, on the other hand, is better for OLTP workloads, which typically access a large number of columns in a small number of rows. This is why row storage is used by most OLTP databases, including present-generation in-memory OLTP databases such as Oracle TimesTen. Since neither format is optimal for all workloads, Oracle Database In-Memory features a dual-format in-memory representation, with data simultaneously accessible in an in-memory column store (IM column store) as well as in an in-memory row store (the buffer cache), as depicted in Illustration 6.

Illustration 6: Dual-Format Database Architecture

Note that this dual-format representation does not double memory requirements. The Oracle Database buffer cache has been highly optimized over decades and achieves extremely high hit rates even at a very small size relative to the database size. Also, since the IM column store can replace analytic indexes, the buffer cache size can be further reduced. We therefore expect customers to be able to use 80% or more of their available memory for the IM column store.
2) Unlimited capacity: The entire database is not required to fit in the IM column store, since it is pointless to waste memory on data that is infrequently accessed. It is possible to selectively designate important tables or partitions (for instance, the most recent months of orders in an orders table spanning several years) as resident in the IM column store, while using the buffer cache for the remainder of the table. With engineered systems such as Exadata, the data can therefore span multiple storage tiers: high-capacity disk drives, PCI flash cache with extremely high IOPS, and DRAM with the lowest possible response time.

3) Complete transparency: The IM column store is built into the data access layer within the database engine and presents the optimizer with an alternate execution method for extremely fast table scans. The only impact on the optimizer and query processor is a different cost function for in-memory scans. By building the column store into the database engine, we ensure that all of the
enterprise features of Oracle Database, such as database recovery, disaster recovery, backup, replication, storage mirroring, and node clustering, work transparently with the IM column store enabled.

2.2 New In-Memory Columnar Format

The new column format is a pure in-memory format, introduced in Oracle Database 12c. Its use has no effect on the existing persistent format for Oracle Database. Because the column store is in-memory only, maintaining it in the presence of DML changes is very efficient, requiring only lightweight in-memory data manipulation. The size of the IM column store is specified by the initialization parameter INMEMORY_SIZE.

Populating the In-Memory Column Store

The contents of the IM column store are built from the persistent contents of the row store, a mechanism referred to as populate. It is possible to selectively populate the IM column store with a chosen subset of the database. The new INMEMORY clause for CREATE/ALTER statements indicates that the corresponding object is a candidate to be populated into the IM column store; correspondingly, an object may be removed from the column store using the NO INMEMORY clause. The INMEMORY declaration can be made for multiple classes of objects:

- an entire tablespace
- an entire table
- an entire partition of a table
- only a specific sub-partition within a partition

Some examples of the INMEMORY clause are listed below in Illustration 7. Note from the last example that it is also possible to exclude one or more columns of an object from being populated in memory. In that example, the photo column is excluded since it is not needed by the analytical workload, and keeping it in memory would waste memory.

Illustration 7: Example uses of the INMEMORY clause

The IM column store is populated using a pool of background server processes, the size of which is controlled by the INMEMORY_MAX_POPULATE_SERVERS initialization parameter.
If unspecified, the value of this parameter defaults to half the number of CPU cores.
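The populate step, conceptually, pivots row-major data into per-column vectors. A minimal sketch of that idea and of the default server-pool sizing (illustrative Python; `populate` and `default_populate_servers` are invented names, not Oracle internals):

```python
# Sketch: building column vectors from the row store, excluding a column
# (like the photo column mentioned above) that analytics does not need.
import os

def default_populate_servers():
    # Default described above: half the number of CPU cores, minimum 1.
    return max(1, (os.cpu_count() or 2) // 2)

def populate(rows, columns):
    # Pivot row-major tuples into per-column vectors for the column store.
    return {c: [r[c] for r in rows] for c in columns}

row_store = [
    {"id": 1, "amount": 10.0, "photo": b"..."},
    {"id": 2, "amount": 20.0, "photo": b"..."},
]
imcu = populate(row_store, ["id", "amount"])   # photo excluded
assert imcu["amount"] == [10.0, 20.0]
assert "photo" not in imcu
```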
Note that there is no application downtime while a table is being populated, since it continues to be accessible via the buffer cache. In contrast, most other in-memory databases require applications to wait until all objects are completely in memory before processing any requests.

Since some objects may be more important to the application than others, it is possible to control the order in which objects are populated using the PRIORITY subclause. By default, in-memory objects are assigned a priority of NONE, indicating that the object is only populated when it is first scanned. Note that the PRIORITY setting only affects the order in which objects are populated, not the speed of population. The different priority levels are enumerated in Table 1 below.

Priority  | Description
CRITICAL  | Object is populated immediately after the database is opened
HIGH      | Object is populated after all CRITICAL objects have been populated
MEDIUM    | Object is populated after all CRITICAL and HIGH objects have been populated
LOW       | Object is populated after all CRITICAL, HIGH, and MEDIUM objects have been populated
NONE      | Object is only populated after it is scanned for the first time (default)

Table 1: Priority levels for in-memory population

In-Memory Compression

Each table selected for in-memory storage is populated into the IM column store in contiguously allocated units referred to as In-Memory Compression Units (IMCUs). The target size for an IMCU is a large number of rows, e.g., half a million. Within the IMCU, each column is stored contiguously as a column Compression Unit (CU). The column vector itself is compressed with user-selectable compression levels, depending on the expected use case. The selection is controlled by a MEMCOMPRESS subclause (the clause NO MEMCOMPRESS can be used if no compression is desired).
There are three primary classes of compression algorithms, optimized for different criteria:

1) FOR DML: This level performs minimal compression in order to optimize the performance of DML-intensive workloads, at the expense of some query performance.

2) FOR QUERY: These schemes provide the fastest query performance while providing 2x-10x compression compared to on-disk. They allow data to be accessed in place, without any decompression and with no additional run-time memory.

3) FOR CAPACITY: These schemes trade some performance for higher compression ratios. They require that the data be decompressed into run-time memory buffers before it can be queried. The decompression imposes a slight performance penalty but provides compression factors of 5x-30x. The CAPACITY LOW sublevel uses an Oracle custom zip algorithm (OZIP) that provides
very fast decompression speeds, many times faster than LZO (the typical standard for fast decompression). OZIP is also implemented in a special-purpose coprocessor in the SPARC M7 processor. The compression levels are enumerated in Table 2 in ascending order of compression.

MEMCOMPRESS level | Description
FOR DML           | Minimal compression, optimized for DML performance
FOR QUERY LOW     | Optimized for the fastest query performance (default)
FOR QUERY HIGH    | Also optimized for query performance, with somewhat more emphasis on space saving
FOR CAPACITY LOW  | Balanced between query performance and space saving (uses OZIP)
FOR CAPACITY HIGH | Optimized for space saving

Table 2: MEMCOMPRESS levels for in-memory objects

It is possible to use different compression levels for different partitions within the same table. For example, in a SALES table range-partitioned on time by week, the current week's partition is probably experiencing a high volume of updates and inserts and would be best compressed FOR DML. Earlier partitions, up to a year old, are probably used intensively for reporting purposes (including year-over-year comparisons) and can be compressed FOR QUERY. Beyond the one-year horizon, still earlier partitions are probably queried much less frequently and should be compressed FOR CAPACITY to maximize the amount of data that can be kept in memory.

2.3 In-Memory Scans

Scans against the column store are optimized using vector processing (SIMD) instructions, which can process multiple operands in a single CPU instruction. For instance, finding the occurrences of a value in a set of values (as in Illustration 8 below), adding successive values as part of an aggregation operation, and similar operations can all be vectorized down to one or two instructions.

Illustration 8: SIMD vector processing
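The effect of such a vectorized compare can be sketched as follows (a pure-Python stand-in; real SIMD hardware compares many packed column values in a single instruction rather than looping):

```python
# Sketch: a filter over a column vector yields a match bitmap, which can
# then feed an aggregate such as COUNT without touching the row store.

def vector_eq(column_vector, value):
    # Conceptually one SIMD compare per chunk of packed values.
    return [1 if v == value else 0 for v in column_vector]

state = ["CA", "NY", "CA", "TX", "CA"]
matches = vector_eq(state, "CA")
assert matches == [1, 0, 1, 0, 1]
assert sum(matches) == 3          # e.g. feeding a COUNT aggregate
```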
Memory bandwidth is further optimized by using bit-packed formats to represent the column vectors, so that a large number of column values can be stored within a CPU cache line. A further reduction in the amount of data accessed is possible due to the In-Memory Storage Indexes that are automatically created and maintained on each of the columns in the IM column store. Storage indexes allow data pruning based on the filter predicates supplied in a SQL statement. An In-Memory Storage Index tracks the minimum and maximum values for each column CU. When a query specifies a WHERE clause predicate, the storage index on the referenced column is examined to determine whether any matching entries can exist in each CU, by comparing the specified value(s) to the minimum and maximum values maintained in the index. If the value is outside the minimum/maximum range for a CU, the scan of that CU is skipped.

For equality, in-list, and some range predicates, an additional level of data pruning is possible via the metadata dictionary when dictionary-based compression is used. The metadata dictionary contains a list of the distinct values for each column within each IMCU. Dictionary-based pruning thus allows Oracle Database to determine whether the value being searched for actually exists within an IMCU, ensuring that only the necessary IMCUs are scanned. All of these optimizations combine to provide scan rates exceeding billions of rows per second per CPU core.

2.4 In-Memory Joins

Apart from accelerating scans, the IM column store also provides substantial performance benefits for joins. A typical star join between fact and dimension tables can be reduced to a filtered scan of a dimension table that builds a compact Bloom filter of qualifying dimension keys (Illustration 9), followed by a scan of the fact table that uses the Bloom filter to eliminate values that cannot be join candidates.
The Bloom filter thus greatly reduces the number of values processed by the join. While this method is not specific to in-memory processing, the optimizer can now select it far more frequently because of the massive reduction in underlying table scan costs.

Illustration 9: Bloom filter based in-memory join

2.5 In-Memory Aggregations and Groupings

The basic primitives of very fast scans and joins can be further leveraged to accelerate complex reports featuring multiple levels of aggregations and groupings. A new optimizer transformation, called Vector Group By, is used to compute multi-dimensional aggregates in real time. The Vector Group By transformation is a two-part process similar to the well-known star transformation. As an example, consider the following query, which lists the total revenue from sales of footwear products in outlet stores, grouped by store and by product:
Select Stores.id, Products.id, sum(Sales.revenue)
From Sales, Stores, Products
Where Sales.store_id = Stores.id
And Sales.product_id = Products.id
And Stores.type = 'Outlet'
And Products.type = 'Footwear'
Group by Stores.id, Products.id;

The execution steps (depicted in Illustration 10) are as follows:

Step 1: The query begins by scanning the STORES and PRODUCTS dimensions.

Step 2: A new data structure called a key vector is created based on the results of each of these scans. A key vector maps a qualifying dimension key to a dense grouping key (e.g., each store of type 'Outlet' is given a non-zero grouping key numbered 1..N, while each product of type 'Footwear' is given a non-zero grouping key numbered 1..M).

Step 3: The key vectors are then used to create a multi-dimensional array known as an In-Memory Accumulator. Each dimension of the accumulator has as many entries as the number of non-zero grouping keys for that dimension; in the above example, it is an N x M array.

Step 4: The second part of the execution plan is the scan of the SALES table and the application of the key vectors. For each SALES row that matches the join conditions based on the key vector entries for the dimension values (i.e., has a non-zero grouping key for each dimension), the corresponding sales amount is added to the appropriate cell of the In-Memory Accumulator. At the end, the In-Memory Accumulator holds the results of the aggregation.

The combination of these two phases dramatically improves the efficiency of a multiple-table join with complex aggregations. Again, this is not strictly an in-memory-specific optimization, but one that the optimizer can choose frequently in view of the massive speedup of scans.

Illustration 10: Vector Group By example
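The four steps above can be sketched end-to-end for the example query (illustrative Python only; the tables are tiny invented samples and this is not Oracle's implementation):

```python
# Sketch of Vector Group By: key vectors map qualifying dimension keys to
# dense grouping keys, and an (N x M) in-memory accumulator collects sums.

stores   = [{"id": 10, "type": "Outlet"}, {"id": 11, "type": "Mall"}]
products = [{"id": 20, "type": "Footwear"}, {"id": 21, "type": "Apparel"}]
sales = [
    {"store_id": 10, "product_id": 20, "revenue": 5.0},
    {"store_id": 10, "product_id": 21, "revenue": 7.0},   # filtered out
    {"store_id": 11, "product_id": 20, "revenue": 9.0},   # filtered out
    {"store_id": 10, "product_id": 20, "revenue": 3.0},
]

# Steps 1-2: scan the dimensions and build key vectors (0 = non-qualifying).
def key_vector(dim, predicate):
    kv, next_key = {}, 1
    for row in dim:
        if predicate(row):
            kv[row["id"]] = next_key       # dense grouping key 1..N
            next_key += 1
    return kv

store_kv   = key_vector(stores,   lambda r: r["type"] == "Outlet")
product_kv = key_vector(products, lambda r: r["type"] == "Footwear")

# Step 3: an N x M in-memory accumulator, one cell per group.
acc = [[0.0] * len(product_kv) for _ in range(len(store_kv))]

# Step 4: scan the fact table, applying both key vectors.
for s in sales:
    gs = store_kv.get(s["store_id"], 0)
    gp = product_kv.get(s["product_id"], 0)
    if gs and gp:                          # both join conditions satisfied
        acc[gs - 1][gp - 1] += s["revenue"]

assert acc == [[8.0]]                      # store 10 x product 20: 5.0 + 3.0
```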
2.6 In-Memory Column Store and Full Transactional Consistency

The default isolation level provided by Oracle Database is known as Consistent Read. With Consistent Read, every transaction in the database is associated with a monotonically increasing timestamp referred to as a System Change Number (SCN). A multi-versioning protocol is employed by the buffer cache to ensure that a given transaction or query only sees changes made by transactions with older SCNs. The IM column store maintains the same consistent read semantics as the buffer cache.

Each IMCU is marked with the SCN of the time of its creation. An associated metadata area, known as a Snapshot Metadata Unit (SMU), tracks changes made to the rows within the IMCU beyond that SCN, and thereby tracks the validity of those rows. Changes made by transactions are processed as usual via the buffer cache, but for tables populated into the IM column store, the changes are also logged in an in-memory transaction journal within the SMU. When a column scan runs against the base IMCU, it fetches the changed column values from the journal for all rows modified within the IMCU at an SCN earlier than the SCN of the scan (according to Consistent Read semantics, changes made at an SCN higher than that of the scan are not visible to it).

Illustration 11: Change tracking and repopulate

As changes accumulate in the journal, retrieving data from it causes increased overhead for scans. Therefore, a background repopulate mechanism periodically rebuilds the IMCU at a new SCN, a process depicted in Illustration 11 above. There are two basic kinds of repopulate:

1) Threshold repopulate: a heuristic-driven mechanism that repopulates an IMCU once the changes to it exceed a certain threshold and various other criteria are met.
2) Trickle repopulate: runs constantly and unobtrusively in the background, consuming a small fraction of the available repopulate server processes. The goal of trickle repopulate is to ensure that any given IMCU eventually becomes completely clean, even if it has not been modified enough to qualify for threshold repopulate. The number of populate background processes that can be used for trickle repopulate is controlled by another initialization
parameter: INMEMORY_TRICKLE_REPOPULATE_SERVERS_PERCENT, which defaults to 1. For example, if INMEMORY_MAX_POPULATE_SERVERS = 10, at most one-tenth of a single CPU core is dedicated to trickle repopulate by default.

2.7 In-Memory Column Store Scale-Out

Apart from providing in-memory columnar storage on a single machine, Oracle Database In-Memory can scale out across multiple machines using an Oracle Real Application Clusters (RAC) configuration.

Illustration 12: In-Memory Scale-Out

On a RAC cluster, queries on in-memory tables are automatically parallelized across multiple instances, as depicted in Illustration 12 above. Each parallel query process accesses the local in-memory columnar data on its instance, so each process has to perform only a fraction of the total work.

Scale-Out Distribution and Parallel Execution

The user can specify how the data within a table is distributed across the instances of a RAC cluster by using the DISTRIBUTE clause. There are four options for distributing the contents of a table:

1) DISTRIBUTE BY PARTITION: When a table is partitioned, this distribution scheme assigns all data from the same partition to the same instance and maps different partitions to different instances. This type of distribution is especially suited to hash partitions, which usually exhibit uniform access across partitions. Distributing by partition has another potential benefit: it can allow in-memory partition-wise joins between two tables that are partitioned on the same attribute. For instance, if a SALES table is hash partitioned on order_id, and a SHIPMENTS table is likewise hash partitioned on order_id, the SALES-to-SHIPMENTS join can be performed as a set of local in-memory joins on each instance, since the partitions are co-located.

2) DISTRIBUTE BY SUBPARTITION: In Oracle Database, tables can be composite partitioned, i.e.
partitioned by one criterion, with each partition further sub-partitioned by a different criterion. Distributing by sub-partition can be helpful when the top-level partitioning criterion could cause skewed data access. For instance, if a SALES table is partitioned by weekly date ranges, distributing by partition would likely cause skewed access, since more recent date ranges would probably see far more activity. If the table were further sub-partitioned by hash on order_id, however, distributing by sub-partition would provide uniform access across instances. Furthermore, the join between SALES and SHIPMENTS described above could still be performed as local in-memory joins, since all SALES sub-partitions will be collocated with the SHIPMENTS partitions that have the same order_id hash (another example of a partition-wise join).
3) DISTRIBUTE BY ROWID RANGE: When a table is not partitioned, or the partitioning criteria would cause severely skewed access, the table contents can be distributed uniformly across the instances using ROWID RANGE distribution, which places an IMCU on an instance by applying a uniform hash function to the rowid of the first row within that IMCU. This scheme distributes the contents of the table without regard to the actual values. While this results in uniform distribution of the table contents and uniform access across the instances, it has the drawback of precluding partition-wise joins.

4) DISTRIBUTE AUTO: Choosing this scheme (the default distribution mode) leaves it to the system to automatically select the best of the above three distribution schemes, based on the table's partitioning criteria and optimizer statistics.

Scale-Out In-Memory Fault Tolerance

On Oracle engineered systems such as Exadata and SPARC SuperCluster, a fault-tolerance protocol specially optimized for high-speed networking (direct-to-wire InfiniBand) provides in-memory fault tolerance, as depicted in Illustration 13 below. Fault tolerance is specified by the DUPLICATE subclause. For a table, partition, or sub-partition marked DUPLICATE, all IMCUs for that object are populated on two instances. The second copy ensures that there is no downtime if an instance fails, since that copy is instantly available. Both IMCUs are master copies that are populated and maintained independently, and both can be used at all times for query processing.

Illustration 13: In-memory column store fault tolerance

For small tables containing just a few IMCUs (such as small dimension tables that frequently participate in joins), it is advantageous to populate them on every instance, ensuring that all queries obtain local access to them at all times.
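The distribution choices above, together with the DUPLICATE fault-tolerance subclause, are expressed declaratively in a table's INMEMORY clause. The following DDL is an illustrative sketch only: SALES and SHIPMENTS follow the examples above, while the ORDERS table name is hypothetical.

```sql
-- Co-locate matching hash partitions so the SALES/SHIPMENTS join
-- can run as local in-memory partition-wise joins on each instance
ALTER TABLE sales     INMEMORY DISTRIBUTE BY PARTITION;
ALTER TABLE shipments INMEMORY DISTRIBUTE BY PARTITION;

-- Unpartitioned table: spread IMCUs uniformly by rowid hash
-- (uniform access across instances, but no partition-wise joins)
ALTER TABLE orders INMEMORY DISTRIBUTE BY ROWID RANGE;

-- Engineered systems only: maintain two independent master copies
-- of each IMCU so that an instance failure causes no downtime
ALTER TABLE sales INMEMORY DISTRIBUTE BY PARTITION DUPLICATE;
```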
This full duplication across all instances is achieved using the DUPLICATE ALL subclause.

2.8 Conclusions and Future Work

While column-oriented storage is known to provide substantial speedups for analytic workloads, it is on its own ill-suited to OLTP workloads characterized by high update volumes and selective queries. The Oracle Database In-Memory option provides a true dual-format in-memory approach that is seamlessly built into the Oracle data access layer, which makes it immediately compatible with all of the rich functionality of the Oracle Database. In-memory storage can be selectively applied to
important tables and to recent partitions, while leveraging the mature buffer cache, flash storage, and disk for increasingly colder data. As a result of this balanced approach, Oracle Database In-Memory enables the use of in-memory technology without the compromises or tradeoffs inherent in many existing in-memory databases.

Ongoing work includes: integrating Oracle Database In-Memory into the Automatic Data Optimization (ADO) framework (introduced with Oracle Database 12c), which automatically migrates objects between storage tiers based on access frequency; the ability to create the in-memory column store on an Active Data Guard (physical standby) database (in the current release it is supported on logical standby databases but not on a physical standby); and extending the in-memory columnar format to flash and other emerging persistent memory technologies.

Contact address:

Tirthankar Lahiri
400 Oracle Parkway, 4OP14
Redwood Shores, CA, USA
Phone: +1(650)
Fax: +1(650)
[email protected]
Internet:

Markus Kissling
Liebknechtstrasse 35
Stuttgart, DE
Phone:
Fax:
[email protected]
Internet: