SQL Server In-Memory OLTP and Columnstore Feature Comparison

Technical White Paper

Authors: Jos de Bruijn (Microsoft), Sunil Agarwal (Microsoft), Darmadi Komo (Microsoft), Badal Bordia (ASPL), Vishal Soni (ASPL)
Technical reviewers: Bill Ramos (Indigo Slate)
Published: May 2015
Applies to: Microsoft SQL Server 2016 and SQL Server 2014

Summary: The in-memory features of Microsoft SQL Server are a unique combination of fully integrated tools that are currently running on thousands of production systems. These tools consist primarily of in-memory Online Transaction Processing (OLTP) and in-memory Columnstore.

In-memory OLTP provides row-based in-memory data access and modification capabilities, used mostly for transaction-processing workloads, though it can also be used in data warehousing scenarios. This technology uses a lock- and latch-free architecture that enables linear scaling. In-memory OLTP has memory-optimized data structures and provides native compilation, creating more efficient data access and querying capabilities. The technology is integrated into the SQL Server database engine, which enables lower total cost of ownership, since developers and database administrators (DBAs) can use the same T-SQL, client stack, tooling, backups, and AlwaysOn features. Furthermore, the same database can have both on-disk and in-memory features. In-memory OLTP can dramatically improve throughput and latency on transactional processing workloads.

In-memory Columnstore uses column compression to reduce the storage footprint and improve query performance, and allows running analytic queries concurrently with data loads. Columnstore indexes are updatable, memory-optimized, column-oriented indexes used primarily in data warehousing scenarios, though they can also be used for operational analytics. Columnstore indexes can be created in both clustered and nonclustered varieties.
Columnstore indexes organize data into row groups that can be efficiently compressed, which improves performance. Queries that use a Columnstore index can use batch-mode processing, which is optimized for in-memory performance. Columnstore indexes can be particularly useful on memory-optimized tables. The SQL Server in-memory solutions lead to dramatic improvements in performance, providing faster transactions, faster queries, and faster insights, all on a proven data platform architecture. This white paper examines the key components of these tools and compares them with solutions from other providers, demonstrating how in-memory technology in SQL Server outpaces other solutions.
Copyright

The information contained in this document represents the current view of Microsoft Corporation on the issues discussed as of the date of publication. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information presented after the date of publication. This white paper is for informational purposes only. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED, OR STATUTORY, AS TO THE INFORMATION IN THIS DOCUMENT. Complying with all applicable copyright laws is the responsibility of the user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in, or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise), or for any purpose, without the express written permission of Microsoft Corporation. Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property rights covering subject matter in this document. Except as expressly provided in any written license agreement from Microsoft, the furnishing of this document does not give you any license to these patents, trademarks, copyrights, or other intellectual property. © Microsoft Corporation. All rights reserved. Microsoft, Active Directory, Microsoft Azure, Excel, SharePoint, SQL Server, Windows, and Windows Server are trademarks of the Microsoft group of companies. All other trademarks are property of their respective owners.
Contents

Overview of SQL Server in-memory advances
    Drive real-time business with real-time insights
    OLTP transactions
    Analytics queries
SQL Server in-memory OLTP overview
    Memory-optimized tables
    Indexes on memory-optimized tables
    Concurrency improvements
    Natively compiled stored procedures
    In-memory OLTP customer success story
SQL Server in-memory Columnstore overview
    Clustered and nonclustered Columnstore indexes
    Batch-mode query processing
    In-memory Columnstore customer success story
Overview of in-memory technologies for SAP, Oracle, and IBM
    SAP HANA
    Oracle Database In-Memory
    IBM DB2 BLU Acceleration
Comparison of Features
    SAP HANA
        SAP HANA as a database platform
        Running SAP on SQL Server vs. HANA
        Comparing SAP HANA and SQL Server
    Oracle
        Oracle TimesTen: In-memory OLTP
        Comparing Oracle and SQL Server in-memory OLTP
        Oracle 12c: In-memory Columnstore
        Comparing Oracle 12c and SQL Server in-memory Columnstore
    IBM
        IBM DB2 10.5: Analytics with BLU Acceleration
        Comparing IBM DB2 BLU Acceleration and SQL Server in-memory
Myths and reality: SQL Server in-memory OLTP and in-memory Columnstore
    Latch-free architecture
    Separate database engine
    Not suitable for OLTP
    Response to competitors' in-memory offerings
    In-memory OLTP is the same as the old SQL feature DBCC PINTABLE
Conclusion
Key resources
Overview of SQL Server in-memory advances

Drive real-time business with real-time insights

In-memory OLTP and in-memory Columnstore in Microsoft SQL Server leap a generation ahead of traditional OLTP and data warehousing solutions. With in-memory OLTP, organizations can accelerate their transactional applications by up to 30x and can deliver faster business insights with more than 100x faster queries and reports. In-memory Columnstore indexes, with scan rates of tens of billions of rows per second on typical industry hardware, give users the ability to interact with and explore an unprecedented amount of data at the speed of thought. SQL Server provides organizations with a robust data platform architecture that can advance business goals with the ability to transact and analyze data in near real time.

OLTP transactions

In-memory OLTP is a memory-optimized database engine integrated into the SQL Server engine and optimized for OLTP workloads. The engine uses latch-free and lock-free data structures and multiversion concurrency control to support extremely high concurrency rates. The result is high throughput with linear scaling for database transactions. The actual performance gain depends on many factors, but a 2-20x improvement in performance is very common. In fact, a database can see transactional performance gains as high as 30x over traditional table and database engines.

Analytics queries

The SQL Server in-memory Columnstore index stores and manages data by using column-based data storage and column-based query processing. Columnstore indexes can transform the data warehouse experience for users by enabling faster performance for common data warehouse queries such as filtering, aggregating, grouping, and star-join queries. Columnstore indexing can achieve up to 100x query-performance gains over traditional row-oriented storage and significant (typically 10x) data compression for common data patterns.
SQL Server in-memory OLTP overview

Memory-optimized tables

The SQL Server in-memory OLTP engine allows you to create memory-optimized OLTP tables within your existing relational database. Memory-optimized tables (as opposed to standard, disk-based tables) reside completely in memory. A key difference between memory-optimized tables and disk-based tables is that memory-optimized tables store data as individual rows rather than in pages, so no pages need to be read into memory when the tables are accessed. A set of checkpoint files (data and delta file pairs) is created in a memory-optimized filegroup for data persistence, similar to the data files used for disk-based tables. However, these checkpoint files, unlike data files used for disk-based tables, are append-only, allowing SQL Server to leverage the full I/O bandwidth of the storage. These checkpoint files, along with the transaction log, provide full durability and are used for recovery at restart or for database backup/restore. There are two main types of memory-optimized OLTP tables: SCHEMA_AND_DATA (durable tables) and SCHEMA_ONLY (non-durable tables). The first type provides the same full durability guarantee as disk-based tables and is the default setting when creating memory-optimized tables. The second type persists the schema of the table but not the data. SCHEMA_ONLY would be used in scenarios where OLTP workloads do not require data persistence, such as session state
management for an application, staging tables in an ETL scenario, or as a replacement for temporary tables in TempDB that use a table type.

Indexes on memory-optimized tables

Memory-optimized tables support two types of nonclustered indexes: hash and range indexes. Hash indexes provide optimal access paths for equality searches, while range indexes are used for queries involving range predicates or for ordered retrieval of the data. Every memory-optimized table must have at least one index. For durable memory-optimized tables, a unique index is required to uniquely identify a row when processing transaction log records during recovery. Indexes on in-memory tables reside only in memory; they are not stored in checkpoint files, nor are any changes to the indexes logged. The indexes are maintained automatically during all modification operations on memory-optimized tables, just like B-tree indexes on disk-based tables, but if SQL Server restarts, the indexes on the memory-optimized tables are rebuilt as the data are streamed into memory.

Concurrency improvements

Applications whose performance is affected by engine-level concurrency, such as latch contention or blocking, improve significantly when migrated to in-memory OLTP. Memory-optimized tables do not have pages, so there are no latches and hence no latch waits. If your database application encounters blocking between read and write operations, in-memory OLTP removes the blocking because it uses optimistic concurrency control to access data. The optimistic control is implemented using row versions, but unlike disk-based tables, the row versions are kept in memory. Since data for memory-optimized tables are always in memory, waits on the I/O path are eliminated; there are no waits for reading data from disk and no waits for locks on data rows.
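As a sketch of the DDL involved (table and index names here are illustrative, not from the paper), a non-durable memory-optimized table for session state might combine a hash index for point lookups with a range index for ordered scans:

```sql
-- Assumes a database that already has a MEMORY_OPTIMIZED_DATA filegroup.
CREATE TABLE dbo.SessionState
(
    SessionId  INT       NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000), -- hash index: equality lookups
    UserId     INT       NOT NULL,
    LastAccess DATETIME2 NOT NULL,
    INDEX ix_LastAccess NONCLUSTERED (LastAccess)                    -- range index: ordered retrieval
)
WITH (MEMORY_OPTIMIZED = ON,
      DURABILITY = SCHEMA_ONLY);  -- schema survives restart, data does not
```

Omitting the DURABILITY option (or specifying SCHEMA_AND_DATA) yields a fully durable table.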
Natively compiled stored procedures

SQL Server can natively compile stored procedures that access memory-optimized tables. A natively compiled stored procedure optimizes the T-SQL statements, transforms them into Visual C code, and then generates a DLL. This enables SQL Server to execute the business logic in the stored procedure an order of magnitude more efficiently, using fewer instructions compared to traditional stored procedures. SQL Server 2014 allows T-SQL statements commonly used in OLTP workloads to be used inside natively compiled stored procedures. The SQL Server team continues to expand the T-SQL surface area in new releases.
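A natively compiled procedure over a memory-optimized table might look like the following sketch (object names are illustrative, continuing the hypothetical session-state table above); the ATOMIC block, SCHEMABINDING, and an EXECUTE AS clause are requirements of the feature:

```sql
CREATE PROCEDURE dbo.usp_TouchSession
    @SessionId INT,
    @Now       DATETIME2
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    -- Executes as compiled machine code (a DLL) rather than interpreted T-SQL.
    UPDATE dbo.SessionState
    SET LastAccess = @Now
    WHERE SessionId = @SessionId;
END;
```

The ATOMIC block gives the procedure body all-or-nothing transaction semantics, which is part of why the engine can compile it down to so few instructions.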
In-memory OLTP customer success story

bwin.party was formed from a merger between two gaming giants, bwin Interactive Entertainment and PartyGaming, each of which had high-traffic gaming websites. Consolidation of their websites resulted in huge overloads on their infrastructure. bwin.party needed to overcome its scalability issues and wanted to improve the performance of its gaming website to support rapid business growth.

Solution: "Every second counts for a player waiting to place a bet, and by using in-memory OLTP in SQL Server 2014, we provide a faster-loading site and a faster overall experience, so players can place more bets and play games more smoothly." - Rick Kutschera, Manager of Database Engineering, bwin.party

Using the Microsoft in-memory OLTP solution in SQL Server 2014, their gaming systems can now handle 250,000 requests per second (almost 20 times the original load) and offer players a faster, smoother gaming experience.

Benefits:
Scalability: bwin.party gaming systems can scale to 250,000 requests per second, close to a 20x improvement.
Faster, smoother playing experience: The standard system response time improved from 50 milliseconds to 2-3 milliseconds.
Reduced costs and increased revenue: bwin.party is able to run in-memory OLTP on less hardware, which could save it as much as $100,000 a year.

SQL Server in-memory Columnstore overview

In-memory Columnstore uses a columnar storage format to reduce the storage footprint significantly, typically 10x, while still delivering up to 100x better analytics query performance. You can obtain an additional 30% compression by using COLUMNSTORE_ARCHIVE compression for data that are not frequently referenced. The columnar storage format provides significant data compression by storing each column separately. Since the data within a column are similar and often repeated, SQL Server can achieve a very high level of data compression.
Higher compression rates improve query performance by using a smaller in-memory footprint. Analytic queries often select only a few columns from a fact table. With columnar storage, only the required columns are read into memory, reducing I/O even further, unlike the row-based storage format, where all columns are loaded into memory as part of the rows.

Clustered and nonclustered Columnstore indexes

SQL Server 2014 supports updatable clustered Columnstore indexes, which replace traditional rowstore tables. The clustered Columnstore index allows users to modify data and load data concurrently for data warehouse and Decision Support System (DSS) workloads. Improved query performance of up to 100x is provided through reduced I/O and optimized query execution, using techniques such as
applying predicates in compressed format, pushing predicates down to the storage layer when possible, leveraging new processor architectures, and a new batch execution mode. SQL Server also allows you to create nonclustered Columnstore indexes on rowstore tables, primarily for operational analytics, such as the ability to do analytics on the operational store. The nonclustered Columnstore index is updatable in SQL Server 2016. The rows in Columnstore indexes are generally grouped in sets of 1 million rows to achieve optimal data compression. This grouping of rows is referred to as a rowgroup. Within a rowgroup, the values for each column are compressed and stored as LOBs. These LOB units are referred to as segments. The column segments are the unit of transfer between disk and memory.

Batch-mode query processing

Batch-mode query processing is a vector-based query execution mechanism that is tightly integrated with the Columnstore index. Queries that target a Columnstore index can use batch mode to process up to 900 rows together, which enables efficient query execution, typically providing a 3-10x query performance improvement. In SQL Server, batch-mode processing is optimized for Columnstore indexes to take full advantage of their structure and in-memory capabilities.
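As an illustrative sketch (table names are assumed, not from the paper), the clustered and nonclustered varieties, archive compression, and rowgroup inspection look like this:

```sql
-- Convert a rowstore fact table to clustered columnstore (becomes the table's storage).
CREATE CLUSTERED COLUMNSTORE INDEX cci_FactSales ON dbo.FactSales;

-- Apply archive compression to infrequently referenced data (~30% extra reduction).
ALTER TABLE dbo.FactSales
    REBUILD WITH (DATA_COMPRESSION = COLUMNSTORE_ARCHIVE);

-- A nonclustered columnstore index on an operational rowstore table, for analytics.
CREATE NONCLUSTERED COLUMNSTORE INDEX ncci_Orders
    ON dbo.Orders (OrderDate, ProductId, Quantity, Price);

-- Inspect rowgroups (~1 million rows each) and their compressed size.
SELECT row_group_id, state_description, total_rows, size_in_bytes
FROM sys.column_store_row_groups
WHERE object_id = OBJECT_ID('dbo.FactSales');
```

Note that in SQL Server 2014 the nonclustered variety makes the base table read-only; as the text above states, it becomes updatable in SQL Server 2016.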
In-memory Columnstore customer success story

Part of the mission of Clalit Health Services, which provides healthcare to 60 percent of the Israeli population, is to continually improve clinical outcomes for its members. With a large, complex data infrastructure, it faces a real challenge to manage more users, more data, and more complex queries. Query response time is the real productivity killer for business analysts, and they needed a more effective way to conduct analyses on complex queries.

Solution: "A query that took a half hour to run could delay a report by a day or more. With SQL Server 2014, we do not expect that to happen." - Anatoly Ternov, Database Administrator and ETL Team Leader, Clalit Health Services

Clalit conducted a proof of concept using the Columnstore index feature of Microsoft SQL Server 2014. It ran the problem queries on each database version without implementing any code changes or tuning.

Benefits:
Supports greater analyst productivity: With query time reduced from 20 minutes to three seconds, the 300 analysts who work with the data warehouse daily see their productivity soar.
Reduces disk needs by 40 percent: The data required 40 percent less disk space than with the earlier database software.
Delivers deeper data insights: Analysts and researchers will not just run more analyses in less time; they will also run more complex analyses in less time.

Overview of in-memory technologies for SAP, Oracle, and IBM

At present, several organizations offer in-memory databases for a variety of tasks. We are going to examine in-memory technologies from SAP, Oracle, and IBM.

SAP HANA

SAP has marketed SAP HANA extensively as an in-memory, column-oriented, relational database management system designed to handle both high transaction rates and complex query processing on the same platform.
SAP HANA is a platform for real-time analytics and applications delivered either as a pre-configured SUSE Linux-based appliance from a number of vendors (including IBM, HP, Dell, Fujitsu, and Cisco), or via cloud providers (like AWS, VMware, or SAP itself). SAP in-memory computing is the core technology underlying the SAP HANA platform, but the appliance also includes tools for data modeling, data management, and security, as well as trigger-based and ETL-based replication. SAP HANA provides both in-memory OLTP and in-memory Columnstore. Page 8
Oracle Database In-Memory

Oracle Database In-Memory is a new option for Oracle Database 12c, designed to counter SAP HANA. It is an optional add-on that enables Oracle's flagship relational software to function as an in-memory database. Oracle Database In-Memory works on any platform running Oracle Database 12c. For in-memory OLTP, Oracle requires a separate product, Oracle TimesTen, which is not fully integrated into the main database.

IBM DB2 BLU Acceleration

IBM DB2 BLU Acceleration, developed by the IBM Research and Development Labs, is a new in-memory feature integrated into IBM DB2. It uses the same storage and memory constructs (storage groups, table spaces, buffer pools, etc.), SQL language interfaces, and administrative tools as traditional DB2. IBM DB2 BLU Acceleration is an in-memory Columnstore implementation suitable for data warehouse and analytics applications.

Comparison of Features

SAP HANA

SAP HANA as a database platform

SAP started HANA as a database platform and an analytics appliance. SAP now uses HANA branding and messaging for a wide variety of products.

SAP HANA Columnstore for OLTP and analytics

The primary objective of SAP HANA is to run both OLTP and analytics on the same tables in the database. HANA supports both workloads on the same instance of the data at the same time. In traditional systems, databases have OLTP tables, normalized schemas, and ETL workloads and processes, but these tables are not optimized for analytics. For analytics-related operations, users need to transform these OLTP tables into another schema, such as a star schema, and hold it in a data warehouse that is optimized for analytics. This process adds a great deal of complexity and effort to create and manage the required data warehouse. It also introduces data latency; hence, users cannot get real-time insights into the business using these traditional systems.
Focusing on these issues, SAP introduced HANA with the promise of running OLTP and analytics on the same table. To use Columnstore for OLTP, SAP optimized the main OLTP system in such a way that analytics can also run on the same system. The key is Columnstore technology, which performs well for analytics and is capable of processing OLTP workloads. With this solution, an organization can run OLTP on in-memory technology against a Columnstore database. SAP implemented some optimizations to make HANA work better for OLTP; for example, the data are sorted in, and primary key constraints are enforced in, the Columnstore format itself. However, HANA still does not perform as well as traditional B-tree tables; traditional SQL Server B-tree tables still provide better performance for OLTP operations. Therefore, HANA compromises its performance for both OLTP and analytics. SQL Server does not compromise for OLTP or analytics: Microsoft provides an optimal in-memory solution for OLTP workloads using memory-optimized tables and an optimal analytics solution using Columnstore indexes, allowing greater performance for each type of workload. SQL Server has extended the in-memory Columnstore for analytics to provide an updatable clustered Columnstore index. This accelerates queries
up to 100x and significantly reduces storage costs, typically 10x for common analytics data patterns. In-memory OLTP extends the existing DBMS with a new engine, which is both memory-optimized and free of the bottlenecks from locks and latches that databases have traditionally used to manage concurrent activity. SQL Server 2016 combines these technologies to allow customers to do real-time analytics on operational data in either memory-optimized tables or traditional disk-based tables.

Scale-out support in SAP HANA

SAP HANA can scale out the database across multiple machines. With SAP HANA, the Columnstore table needs to be entirely in-memory. HANA has options to flush some columns to disk, but most data need to fit in memory. HANA servers may include 2 TB or 4 TB of memory, but not much more. Because data need to fit in memory, the memory will need to be scaled out for large databases. This may cause basic performance issues with OLTP scale-out configurations. For example, if there are joins across tables and data need to move between different nodes for a query, performance degrades. SQL Server is not limited by needing to fit all of the Columnstore indexes in memory: Columnstore indexes fully reside on disk, and only the required data are brought into memory on an as-needed basis as part of running queries. Therefore, one can have a much larger data warehouse configuration on a single server, such as a server with 2 TB of memory and 40 TB of data; in most cases, that will run very well. To accommodate large amounts of data warehouse data, such as petabytes or more, Microsoft offers the Analytics Platform System (APS) and the upcoming Azure SQL Data Warehouse.

Analytics functionality in SAP HANA

SAP HANA provides support for advanced analytics functions such as R integration built into the database engine. SQL Server 2016 also supports R integration.

Running SAP on SQL Server vs.
HANA

SAP owns both application and database

SAP has a distinct advantage in owning enterprise-grade applications like SAP ERP and SAP CRM as well as a supporting database: the recently developed SAP HANA database. The SAP HANA database has been designed to work with SAP applications, and SAP applications are tweaked to suit the HANA database. However, Microsoft still has some advantages in running SAP applications on SQL Server. Microsoft is an established player in databases running SAP applications, and this is reflected in the growing share of SAP deployments Microsoft holds. In fact, Microsoft and SAP have been working closely to ensure that SAP applications work well on Microsoft platforms. The SAP certification of SQL Server is the technical stamp of approval of this joint effort. Currently, there are more than 65,000 SAP installations running on the Windows Server platform, which is more than 57 percent of existing SAP installations, and more than 30,000 of these SAP installations are on SQL Server. Both the number of professional database architects and developers who are proficient with these technologies and the adoption of SQL Server for SAP continue to grow worldwide. SQL Server is the established enterprise-ready solution for SAP, and it leads the industry with the lowest total cost of ownership (TCO) for database platforms through lower pricing, reduced administration overhead, built-in high availability, and scalability.

SAP Business Warehouse on SAP HANA vs. SQL Server 2014 clustered Columnstore

SAP claims that its Business Warehouse (BW) works well with the SAP HANA database. SAP BW adds many new features that perform well with the in-memory functionality of SAP HANA.
Microsoft SQL Server also can help boost the performance of SAP BW with its in-memory capabilities. The SQL Server column-oriented store is optimized for aggregating mass data and can be used for BW aggregates. SQL Server 2014 clustered Columnstore works very well with SAP BW and delivers performance competitive with SAP HANA. The clustered column-oriented store is optimized for aggregating bulk historical data and enables faster performance for common data warehouse queries such as filtering, aggregating, grouping, and star-join queries. SQL Server provides a lower TCO because it does not require data to be memory-resident, so one can have a larger SAP BW deployment on a single SQL Server deployment.

SAP Business Suite on SAP HANA vs. SQL Server 2014

SAP claims that SAP Business Suite is optimized to work with SAP HANA to deliver a broad array of real-time applications on a single in-memory platform. It also provides multiple deployment options: on-premises, cloud, or a hybrid environment, depending on the business requirements. Microsoft SQL Server 2014 is an optimal database for enabling mission-critical environments, offering availability and performance for SAP ERP applications of all sizes at a lower TCO. SAP customers can take advantage of comprehensive data management capabilities such as high availability, manageability, storage and backup compression, table partitioning, auditing, and security. The AlwaysOn technology of SQL Server combines high-availability and disaster-recovery functionality, providing greater flexibility when managing SAP configuration and architecture. With Transparent Data Encryption, SAP customers can encrypt an entire database, data files, and log files without the need for application changes. Users can also run SAP solutions on Microsoft Azure, which provides on-demand resources in the cloud.
Microsoft Azure offers a reliable and secure cloud infrastructure platform that enables businesses to quickly deploy SAP solutions in the cloud. It also supports creating a disaster recovery (DR) site on Azure, ensuring higher availability.
Comparing SAP HANA and SQL Server

Table 1 provides a summary of the feature comparison between SQL Server and SAP HANA.

Feature                                              | SAP HANA | SQL Server 2014              | SQL Server 2016
Columnstore optimization of analytics workloads      | Yes      | Yes                          | Yes
In-memory optimization of OLTP workloads             | No       | Yes                          | Yes
Large Columnstore tables automatically spill to disk | No       | Yes                          | Yes
Scale-out data warehouse                             | Yes      | Yes (APS & Azure SQL DW)     | Yes (APS & Azure SQL DW)
Scale-out OLTP                                       | Limited  | Read scale-out (AlwaysOn RS) | Read scale-out (AlwaysOn RS)
Real-time analytics                                  | Yes      | No                           | Yes (disk-based and memory-optimized)
Integration with R                                   | Yes      | No                           | Yes

Table 1: Feature comparison of SAP HANA with SQL Server

Columnstore optimization of analytics workloads: SAP HANA and SQL Server both support Columnstore optimization for analytics.

Columnstore optimization of OLTP workloads: SAP HANA uses the same solution for both analytics and OLTP. SAP implemented some optimizations to make HANA work better for OLTP; for example, the data are sorted in, and primary key constraints are enforced in, the Columnstore format itself. However, HANA still does not perform as well as traditional B-tree tables. Microsoft provides an optimal in-memory solution for OLTP workloads using memory-optimized tables and an optimal analytics solution using Columnstore indexes, allowing greater performance for each type of workload.

Large Columnstore tables automatically spill to disk: With SAP HANA, the Columnstore table needs to be entirely in-memory. HANA has options to flush some columns to disk, but most data need to fit in memory. SQL Server is not limited by needing to fit all of the Columnstore indexes in memory: Columnstore indexes fully reside on disk, and only the required data are brought into memory on an as-needed basis as part of running queries.

Scale-out data warehouse: Although SQL Server does not have a generic scale-out solution for data warehouse, there is a read scale-out possibility with readable AlwaysOn secondaries.
To accommodate large amounts of data warehouse data, such as petabytes of data, Microsoft offers the Analytics Platform System (APS) and the upcoming Azure SQL Data Warehouse.

Scale-out OLTP: Because SAP HANA data need to fit in memory, the memory will need to be scaled out for large databases. This may cause basic performance issues with OLTP scale-out configurations. SQL Server has a read scale-out possibility with readable AlwaysOn secondaries.

Real-time analytics: SAP HANA supports real-time analytics. This functionality is now also supported in SQL Server 2016 for both disk-based and memory-optimized tables.

R integration: SAP HANA supports R integration. This functionality is now also supported in SQL Server 2016.
Oracle

At present, Oracle Database does not provide native in-memory capability for OLTP workloads, but supports in-memory as an optional add-on to Oracle Database 12c for analytics workloads. Oracle also offers standalone in-memory capability with a separate product called TimesTen, which has been integrated with Oracle Database; however, these are two different products that need to be installed and managed separately.

Oracle TimesTen: In-memory OLTP

TimesTen, an in-memory relational database system, started as a research project at HP in 1995 and was later spun off into a separate company. TimesTen was acquired by Oracle in 2005, and since then, Oracle has been working on integrating it with the Oracle stack, including the PL/SQL and OCI stacks and its Maximum Availability Architecture (MAA). TimesTen can be deployed in the following ways:

Client/server interface: Traditional client/server interfaces are supported, enabling scenarios like reporting and access to common in-memory databases for a large number of application-tier platforms.

Directly linked applications: Applications can be directly linked to the TimesTen address space, eliminating IPC overhead and streamlining query processing for optimized query performance.

Cache for Oracle Database (TimesTen Cache): When a dedicated Oracle database already exists for a workload, TimesTen can be deployed as an additional cache database layer between the application and the Oracle database. The cache tables in this middle layer can be read-only or updatable. Applications can access the TimesTen cache tables using a standard SQL interface, and synchronization between the cache tables and the Oracle database is performed automatically.

Comparing Oracle and SQL Server in-memory OLTP

Oracle TimesTen is a relatively old product. In contrast, the in-memory OLTP engine of SQL Server leverages a number of technology advancements, making it a superior toolset in many ways.
Table 2 provides a feature comparison between Oracle TimesTen, SQL Server 2014 in-memory OLTP, and SQL Server 2016 in-memory OLTP, which adds an increased SQL language surface area for native compilation and additional durability features.
| Feature | TimesTen | SQL Server 2014 | SQL Server 2016 |
|---|---|---|---|
| SQL language | Supports most of PL/SQL (including DW) | InterOp supports most OLTP constructs | Increased T-SQL surface area |
| Native compile | No | Yes; T-SQL surface area targeting OLTP workloads | Increased T-SQL surface area |
| Lock-based | Yes; row, table, and database locks, chosen at connection level | No; uses optimistic concurrency control | No new changes |
| Integration | Loose integration with Oracle Database | Fully integrated with SQL Server | No new changes |
| Durability | At database level (permanent and temporary databases) | At table level; all tables are durable by default, but individual tables can be marked as non-durable | Up to 2 TB of durable memory-optimized tables per database |
| Security | Auditing | Permissions to control access to tables and admin operations | Transparent data encryption |

Table 2. In-memory OLTP: Oracle TimesTen vs. SQL Server

SQL language: Both TimesTen and TimesTen Cache support most of PL/SQL, including data warehouse workloads. The key advantage of this capability is that any PL/SQL application running on Oracle Database can be migrated to TimesTen with little change. Similarly, SQL Server in InterOp mode supports most OLTP constructs.

Native compile: Oracle TimesTen does not support native compilation for in-memory OLTP. In SQL Server, stored procedures that operate on in-memory OLTP tables can be natively compiled to achieve maximum business-processing performance. In future releases of SQL Server, the T-SQL surface area will be expanded to further enhance native compilation capabilities.

Lock-based: Oracle provides locking mechanisms at the row, table, and database levels, configured at connection time. This method often leads to concurrency bottlenecks. SQL Server takes no locks because it uses optimistic concurrency control, providing friction-free scale-up.

Integration: Because Oracle TimesTen is a separate product, it needs a mechanism to integrate with Oracle Database.
Microsoft in-memory OLTP is not a separate product but a part of SQL Server, which makes it more efficient from backup, restore, and management perspectives. Also, because SQL Server in-memory OLTP is integrated with the database, you can choose to migrate only performance-critical tables to memory-optimized tables. SQL Server includes reports that help you identify which tables are good candidates for memory optimization.

Durability: In Oracle, durability is configured at the database level. In SQL Server, durability is configured at the table level, which gives you the flexibility to configure each table in an application appropriately. For example, you can create a non-durable memory-optimized table to stage data.

Oracle 12c: In-memory Columnstore

Oracle provides an in-memory Columnstore as an optional add-on with Oracle Database 12c. When this option is enabled for a table or tablespace, Oracle internally creates a copy of a subset of the data in columnar storage format. This allows database administrators to choose and populate in memory only the most performance-critical data to speed up analytics. Changes to the in-memory data are not persisted and must be rebuilt on the fly when Oracle Database is restarted. Oracle in-memory is designed to be transparent to applications and tools (Figure 1).

Figure 1: The dual-format architecture of Oracle

Comparing Oracle 12c and SQL Server in-memory Columnstore

Table 3 presents a feature comparison between Oracle 12c in-memory Columnstore and SQL Server in-memory Columnstore.

| Feature | Oracle 12c | SQL Server 2014 | SQL Server 2016 |
|---|---|---|---|
| Persistence of Columnstore | No | Yes | Yes |
| Aggregate pushdown | Yes | No | Yes |
| Query on secondary replica | No | No | Yes |
| Materialized views with Columnstore | Yes | No | No |
| Integration with in-memory OLTP | No | No | Yes |
| Batch processing | No | Yes | Yes |
| Integration with R | Yes | No | Yes |
| Configuration and operations | Provides many knobs, making configuration and operations far more complex | Fully automated after creating Columnstore indexes | Fully automated after creating Columnstore indexes, including management of deleted rows |
Table 3. In-memory: Oracle 12c vs. SQL Server

Persistence of Columnstore: Data warehouses are typically large, and not everything can be kept in memory at all times. When Oracle Database is restarted, there is no data in the Columnstore, and it must be rebuilt in the background; until then, analytics queries cannot leverage columnar storage. SQL Server persists the Columnstore index, so there is no need to rebuild it or to provision enough memory to keep it entirely in memory. SQL Server brings data in and out of memory based on the queries being run.

Aggregate pushdown: This refers to pushing the calculation of aggregates (sum, average, and so on) down to the storage layer. Oracle 12c supports aggregate pushdown. In SQL Server 2014, aggregates are calculated outside the storage engine, in the execution layer. In the upcoming release of SQL Server, aggregation will be pushed down to the storage layer, leading to 4x or better performance.

Query on secondary replica: In the upcoming version of SQL Server, users will be able to run data warehouse queries on a secondary replica. Oracle includes a configuration similar to AlwaysOn, but it does not allow users to run data warehouse queries on the secondary replica. This is a significant win for SQL Server, as data warehouse workloads are mission-critical and are typically configured for high availability.

Materialized views with Columnstore: Materialized views have a very particular use case. If a user can create a materialized view on an OLTP table and store it in Columnstore format, the result is almost cube-like functionality: pre-aggregated data that is always kept up to date, as in OLTP. This is expensive to maintain, however, and although it has value, it impedes OLTP and data-load performance.

Integration with in-memory OLTP: At present, neither SQL Server 2014 nor Oracle 12c provides support for integration with in-memory OLTP.
However, SQL Server 2016 integrates the Columnstore index transparently with OLTP workloads. To enable this, an updatable Columnstore index is created on one or more tables in an OLTP workload. Because the Columnstore index is updatable, users can run the OLTP workload and analytics queries against the same table.

Batch mode processing: Another major advantage of SQL Server 2014 is its batch mode processing capability, which significantly improves query performance (typically 3-10x). Oracle currently has no comparable technology.

IBM

IBM DB2 10.5: Analytics with BLU Acceleration

IBM DB2 BLU Acceleration was released as an integrated in-memory computing solution with IBM DB2 10.5. It has several optimizations, but it is primarily driven by a concept called a shadow table, which stores and maintains a copy of the data in columnar storage. Both tables remain in sync automatically. OLTP transactions are performed directly on the relational tables, while analytical queries on those tables are redirected to the column-based shadow tables, which provide faster analytical processing. IBM DB2 BLU Acceleration encompasses Seven Big Ideas, or key functionalities:

Simple to use: IBM claims that this feature is ready to use as soon as it is enabled. A user only needs to load the data and query it; no other indexes or fine-tuning are required. Related operations such as load, backup, and restore are also simplified.

Actionable compression: Aside from using an optimized compression mechanism, IBM also claims to allow operations to be performed directly on compressed data. BLU Acceleration can perform joins and aggregates and apply predicates directly to compressed data without having to decompress it first.
Multiplied CPU power: IBM uses Single Instruction, Multiple Data (SIMD) processing to apply a single instruction to multiple data elements. Using SIMD, many operations can be executed simultaneously, resulting in faster query execution.

Core-friendly parallelism: On a multi-core machine, IBM claims to leverage all of the available processing power to parallelize the workload. This is possible because the software was written from the ground up to take advantage of multiple cores.

Columnar storage: Data is organized as columns, which provides benefits such as efficient storage, faster query performance, and simplified fine-tuning.

Scan-friendly caching: IBM claims to use memory- and cache-management techniques that are separately optimized for OLTP and data warehouse workloads, minimizing the effect on I/O performance.

Data skipping: Data skipping ignores large segments of data that do not qualify for a query. This saves CPU, RAM, and I/O, enabling faster queries with no fine-tuning. Data skipping is the same mechanism as the segment elimination concept in SQL Server.

Comparing IBM DB2 BLU Acceleration and SQL Server in-memory

Table 4 provides a feature comparison between IBM DB2 BLU Acceleration and SQL Server.

| Feature | DB2 BLU Acceleration | SQL Server 2014 | SQL Server 2016 |
|---|---|---|---|
| Seven Big Ideas | Yes | Yes (mileage will vary) | Yes (mileage will vary) |
| Query performance | Yes | Yes (batch mode) | Yes (batch mode, aggregate pushdown, lookup/short-range queries) |
| Concurrent DML | No | Yes | Yes (improved concurrency with row-level locking and non-blocking reads) |
| Automatic index maintenance | Yes | No | Yes |
| Operational analytics | Yes (using shadow tables) | No (can be achieved by manually moving data to a clustered Columnstore index) | Yes (fully integrated) |

Table 4. In-memory: IBM DB2 BLU Acceleration vs.
SQL Server

Seven Big Ideas: IBM DB2 BLU Acceleration is built around the Seven Big Ideas, which include concepts and technologies such as Columnstore tables, data compression, hardware-level processing optimizations, and memory and cache management. Microsoft uses the same concepts and technologies, though with different implementations, and these ideas carry forward in the upcoming version of SQL Server.

Query performance: IBM DB2 BLU Acceleration implements a mechanism for improving analytics query performance. Microsoft does as well, with a special batch mode of execution that delivers better performance; DB2 makes no mention of a comparable batch mode. In upcoming versions of SQL Server, batch mode execution will be added for more operators. For example, order-by queries cannot run in batch mode in SQL Server 2014, but they will in upcoming versions. Microsoft is investing further in batch mode so that more data warehouse queries can run much faster.

Concurrent DML: IBM DB2 BLU Acceleration workloads do not behave well under concurrent DML because of blocking-related issues. Although not available in SQL Server 2014, the upcoming version of SQL Server supports running concurrent DML on Columnstore with row-level locking, which makes DML operations far more concurrent.

Automatic index maintenance: IBM DB2 BLU Acceleration provides automatic index maintenance. In general, when data is deleted, the corresponding rows are not removed from the clustered Columnstore index immediately; they are marked with a delete flag and continue to occupy space in the Columnstore. One way to remove deleted rows is to rebuild the indexes periodically. DB2 provides automated index maintenance that removes deleted rows from the index automatically. This feature is not available in SQL Server 2014, but it is available in the upcoming version of SQL Server.

Operational analytics: DB2 uses shadow tables, which are new Columnstore tables created from an existing relational table. From an application perspective, the user has an OLTP table and a shadow table linked to it. The user runs workloads against the OLTP table, and analytics queries are redirected to the shadow table automatically. In SQL Server 2014, this has to be done manually. The upcoming version of SQL Server will allow users to create an updatable non-clustered Columnstore index, with which they can run OLTP and analytics on the same table; there is no shadow table, just an updatable index.

Myths and reality: SQL Server in-memory OLTP and in-memory Columnstore

Latch-free architecture

Myth: Since there is no locking, latching, or blocking in SQL Server in-memory OLTP, it can cause inconsistencies and data corruption.

Reality: SQL Server in-memory OLTP has full ACID support: it ensures atomicity, consistency, isolation, and durability of the data.
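A small, hypothetical T-SQL sketch shows what this optimistic model looks like in practice. It assumes an illustrative durable memory-optimized table dbo.Account with an integer AccountId key and a Balance column; the table name and error-handling details are for illustration only.

```sql
-- Session 1: open a transaction and modify a row. No lock is taken;
-- a new row version is created instead.
BEGIN TRANSACTION;
UPDATE dbo.Account WITH (SNAPSHOT)
SET    Balance = Balance - 100
WHERE  AccountId = 1;
-- (transaction intentionally left open)

-- Session 2, running at the same time: readers are never blocked.
-- This SELECT sees the last committed version of the row.
SELECT Balance
FROM   dbo.Account WITH (SNAPSHOT)
WHERE  AccountId = 1;

-- Session 2 attempting to update the same row does not block either:
-- it fails immediately with a write-write conflict (error 41302),
-- which the application handles by retrying the transaction.
UPDATE dbo.Account WITH (SNAPSHOT)
SET    Balance = Balance + 50
WHERE  AccountId = 1;
```

Conflicts are therefore surfaced as immediate, retryable errors rather than as waits, which is how the engine dispenses with a lock manager entirely while still guaranteeing isolation.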
The innovative lock-free architecture, using optimistic concurrency control, improves performance by eliminating the need to take locks (there is no lock manager) and enables friction-free scaling by removing blocking. To prevent corruption of the links between indexes and row versions, SQL Server uses an atomic test-and-set instruction that guarantees the operation is atomic across all processors in the system. Transaction logging ensures the durability of data in SQL Server: a transaction is marked as committed only when its log records have been flushed to disk, which ensures the durability of memory-optimized tables. Logging for memory-optimized tables is very efficient, and the amount of log data generated is minimal.

Separate database engine

Myth: SQL Server in-memory OLTP is a separate product, and you need to redesign or rebuild your database.

Reality: Unlike the other major in-memory database products available today, in-memory OLTP is fully integrated into SQL Server. It requires no separate installation, and there is no need to learn different tools. It also allows an incremental investment strategy, in which the user can selectively move tables to the most appropriate storage for the data each table represents. Because it is built into the core of SQL Server, other SQL Server functionality can be leveraged alongside in-memory OLTP. SQL Server is the only mainstream DBMS with an integrated in-memory solution optimized for OLTP: Oracle TimesTen is a separate product, and SAP HANA is optimized for analytics.

Not suitable for OLTP

Myth: Columnstore in SQL Server 2014 is not suitable for OLTP.
Reality: Clustered Columnstore in SQL Server 2014 is positioned as a data warehouse technology, not OLTP. SQL Server 2014 offers both clustered and non-clustered Columnstore indexes, and the clustered Columnstore index permits data modifications and bulk load operations. Future releases of SQL Server will allow an updatable non-clustered Columnstore index on both memory-optimized and disk-based tables for real-time analytics on operational workloads.

Response to competitors' in-memory offerings

Myth: In-memory OLTP is a recent response to competitors' in-memory offerings.

Reality: The project, code-named Hekaton, was started five years ago in response to business and hardware trends. Microsoft started an incubation project in partnership with Microsoft Research to imagine what a database engine designed from scratch for today's hardware realities would look like. The in-memory OLTP feature is the outcome of that incubation.

In-memory OLTP is the same as the old SQL feature DBCC PINTABLE

Myth: In-memory OLTP is the same as the old SQL Server 7.0 feature DBCC PINTABLE, which allowed pinning the buffer pool pages of tables in memory.

Reality: In-memory OLTP uses a completely new design, built from the ground up to optimize for efficient in-memory data operations. Data in memory-optimized tables is not organized in pages and does not use the buffer pool. By dispensing with the data structures and other infrastructure intended to facilitate paging subsets of data between disk and memory, in-memory OLTP provides a much leaner and more efficient data engine while still retaining the essential characteristics of a relational data engine.

Conclusion

Hardware trends such as declining memory costs, multi-core processors, and stalling CPU clock rates prompted the architectural design of in-memory computing. In-memory computing is one of the fastest-growing trends in the technology industry.
Most technology vendors, including Microsoft, SAP, IBM, and Oracle, offer in-memory solutions to speed query performance. The Microsoft in-memory processing capability built into SQL Server 2014 delivers breakthrough performance to enable new transformational scenarios that accelerate your business. Microsoft offers comprehensive in-memory technologies for OLTP, data warehousing, and analytics built directly into SQL Server, and SQL Server 2016 continues to enhance these technologies with improved functionality and performance.

The Microsoft in-memory OLTP solution not only accelerates transactions but also increases concurrency, giving you true scale-up without requiring the entire database to be loaded into memory. Likewise, the in-memory Columnstore solution for data warehousing accelerates queries and significantly reduces storage costs through high data compression. Only SQL Server has built-in in-memory technology optimized for OLTP, which means faster transactions, and its enhanced in-memory Columnstore delivers faster queries and insights while minimizing total cost of ownership. For example, using the in-memory technology in SQL Server, bwin.party, a leader in online gaming, was able to boost performance by 17x and query speeds by 340x.

The in-memory design in SQL Server removes database contention with lock-free and latch-free table architecture while maintaining 100-percent data durability; neither Oracle, IBM, nor SAP HANA provides this. This means you can take advantage of all your computing resources in parallel to support more concurrent users. Unlike other offerings, SQL Server provides built-in in-memory capabilities, so there is no need to learn new development tools or APIs. Finally, unlike with Oracle and IBM, there is no additional cost to use the in-memory OLTP feature.

Key resources

To learn more, visit:
Microsoft SQL Server
Microsoft SQL Server In-Memory
In-Memory OLTP documentation
Download and evaluate SQL Server

Feedback

Did this paper help you? Please give us your feedback by telling us, on a scale of 1 (poor) to 5 (excellent), how you rate this paper and why you have given it this rating. More specifically: Are you rating it highly because of relevant examples, helpful screen shots, clear writing, or another reason? Are you rating it poorly because of examples that don't apply to your concerns, fuzzy screen shots, or unclear writing? This feedback will help us improve the quality of the white papers we release. Please send your feedback to:
Technology Insight Paper Converged, Real-time Analytics Enabling Faster Decision Making and New Business Opportunities By John Webster February 2015 Enabling you to make the best technology decisions Enabling
Columnstore in SQL Server 2016 Niko Neugebauer 3 Sponsor Sessions at 11:30 Don t miss them, they might be getting distributing some awesome prizes! HP SolidQ Pyramid Analytics Also Raffle prizes at the
SYSTEM X SERVERS SOLUTION BRIEF Maximum performance, minimal risk for data warehousing Microsoft Data Warehouse Fast Track for SQL Server 2014 on System x3850 X6 (95TB) The rapid growth of technology has
Focus on the business, not the business of data warehousing! Adam M. Ronthal Technical Product Marketing and Strategy Big Data, Cloud, and Appliances @ARonthal 1 Disclaimer Copyright IBM Corporation 2014.
Real-Time Big Data Analytics with the Intel Distribution for Apache Hadoop software Executive Summary is already helping businesses extract value out of Big Data by enabling real-time analysis of diverse
SQL Server 2012 Parallel Data Warehouse Solution Brief Contents Introduction... 1 Microsoft Platform: Windows Server and SQL Server... 2 SQL Server 2012 Parallel Data Warehouse... 3 Built for Big Data...
Safe Harbor Statement "Safe Harbor" Statement: Statements in this presentation relating to Oracle's future plans, expectations, beliefs, intentions and prospects are "forward-looking statements" and are
SAP HANA forms the future technology foundation for new, innovative applications based on in-memory technology. It enables better performing business strategies, including planning, forecasting, operational
Accelerate SQL Server 2014 AlwaysOn Availability Groups with Seagate Nytro Flash Accelerator Cards Technology Paper Authored by: Mark Pokorny, Database Engineer, Seagate Overview SQL Server 2014 provides
A New Approach: Clustrix Sierra Database Engine The Sierra Clustered Database Engine, the technology at the heart of the Clustrix solution, is a shared-nothing environment that includes the Sierra Parallel
FREQUENTLY ASKED QUESTIONS VMware vsphere Data Protection vsphere Data Protection Advanced Overview Q. What is VMware vsphere Data Protection Advanced? A. VMware vsphere Data Protection Advanced is a backup
MS SQL Performance (Tuning) Best Practices: 1. Don t share the SQL server hardware with other services If other workloads are running on the same server where SQL Server is running, memory and other hardware