Cost/Benefit Case for IBM DB2 for High Performance Analytics
Management Report
October 2014

Cost/Benefit Case for IBM DB2 for High Performance Analytics
Compared to Microsoft SQL Server 2014

International Technology Group
609 Pacific Avenue, Suite 102
Santa Cruz, California
Website: ITGforInfo.com
Table of Contents

EXECUTIVE SUMMARY
  The Players
  The Landscape Changes
  Lost Opportunity Costs
  Technology Differentiators
    Performance
    Complexity
  Conclusions
SOLUTIONS
  SQL Server 2014 Overview
    Clustered Columnstore Indexes
  DB2 Overview
  Overall Capabilities
DETAILED DATA
  Basis of Calculations
    Installations
    Costs of Ownership
  Costs Breakdowns

List of Figures
1. Three-year Costs of Ownership for IBM DB2 with BLU Acceleration and Microsoft SQL Server 2014 with Clustered Columnstore Indexes
2. Key Technologies Incorporated in IBM DB2 with BLU Acceleration and Microsoft SQL Server 2014 with Clustered Columnstore Indexes
3. Three-year Lost Opportunity Costs for Use of IBM DB2 with BLU Acceleration and Microsoft SQL Server 2014 with Clustered Columnstore Indexes, All Installations
4. BLU Acceleration Data Reduction Processes
5. Principal IBM DB2 BLU Acceleration Technologies for Analytics Processing
6. Principal Capabilities of Overall IBM DB2 Environment
7. Installations Summary
8. Three-year Costs of Ownership Breakdowns for IBM DB2 and Microsoft SQL Server 2014
Executive Summary

The Players

The database world is undergoing unprecedented change. Data growth continues to accelerate, and database structures and contents continue to become more complex. New challenges must be met as Big Data technologies gain traction. Cloud computing defines new deployment and operating models. Demand for increasingly powerful analytics solutions has become pervasive.

These shifts have changed the strategies of major database vendors. IBM and Microsoft have implemented new technologies in their mainstream databases. For high-performance analytics applications, which are the focus of this report, key new capabilities were provided in IBM DB2 with BLU Acceleration, and in Microsoft SQL Server 2014 with Clustered Columnstore Indexes.

There are some commonalities between these solutions. User experiences, however, indicate that BLU Acceleration is more powerful, incorporates a broader range of technologies and is better optimized to deliver sustained performance for high-volume analytics queries.

BLU Acceleration, moreover, employs a dramatically simplified SQL design. Application delivery times are reduced, database administrator (DBA) staffing is lower and system overhead is less than for use of Clustered Columnstore Indexes.

Data compression and space reclamation are also a great deal more effective. BLU Acceleration employs global, table-wide technology; i.e., the system searches for compression opportunities across entire tables. Users report higher compression rates than for Microsoft's segment-based approach.

In further contrast, space reclamation in BLU Acceleration is an automated online process; i.e., space is reclaimed on an ongoing basis during production operations. Clustered Columnstore Indexes require offline administrator intervention, which is a great deal slower. Users may not be able to realize the full potential of compression, or may not apply it at all, to avoid the service interruptions and administrative overhead of the Microsoft approach.

These differences are reflected in costs of ownership. In representative installations presented in this report, three-year costs for use of SQL Server 2014 with Clustered Columnstore Indexes ranged from 1.7 to 2.1 times more, and averaged 1.9 times more, than for DB2 with BLU Acceleration. Figure 1 illustrates these results.

Figure 1: Three-year Costs of Ownership for IBM DB2 with BLU Acceleration and Microsoft SQL Server 2014 with Clustered Columnstore Indexes (bar chart, $ thousands, broken down by databases, servers, deployment, personnel and facilities)

Lost opportunity costs, meaning bottom-line business losses due to deployment delays, averaged 2.6 times more for use of SQL Server 2014 with Clustered Columnstore Indexes than for DB2 with BLU Acceleration.
The Landscape Changes

Growing demands for information to deal with more complex and volatile business conditions have intersected with an acceleration of decision-making cycles at all levels of organizations. Increasingly, information must be analyzed in real time.

Bottlenecks, however, have emerged in the ability of conventional data warehouse architectures to meet these demands. One is that application development and deployment practices often impose delays that business users find unacceptable. A second is that, in conventional architectures, latencies in movement of data between processors and disks impair throughput. In high-volume environments, the performance impact may be severe.

New technologies to address this constraint have emerged. Columnar data structures process only data in specific columns, and enable significantly higher levels of compression. In-memory technologies maintain data in RAM rather than on disk, increasing performance by wide margins. Data skipping avoids processing of data unnecessary to specific queries, further reducing I/O loading.

Latest-generation server platforms also accelerate performance through single instruction, multiple data (SIMD) techniques that enable parallel and vector processing at the microprocessor level. This is the case for Intel E7-based systems, as well as IBM Power Systems, including new models based on IBM POWER8 processors.

POWER8-based systems offer new DB2 performance optimization capabilities, including support for large-scale concurrent multithreading (up to eight threads per core) and exploitation of 128-bit registers. New reliability features are also implemented.

The extent to which vendors have exploited new technologies varies. SQL Server 2014 with Clustered Columnstore Indexes and IBM DB2 with BLU Acceleration implement the technologies shown in figure 2.

Technology: IBM DB2 with BLU Acceleration / Microsoft SQL Server 2014 with Clustered Columnstore Indexes
High-performance compression: Table-based / Segment-based
In-memory technology: Yes / Yes
Data skipping: Yes / Limited
SIMD exploitation: Yes / N/A

Figure 2: Key Technologies Incorporated in IBM DB2 with BLU Acceleration and Microsoft SQL Server 2014 with Clustered Columnstore Indexes

A DB2 capability introduced in August 2014 allows users to create columnar shadow tables of row-based transactional data, and to execute queries directly on these. Shadow tables are updated automatically. Early adopters have typically employed this capability for real-time operational queries and reporting. Users report that DBA overhead is minimal, and that there is no impact on transactional performance.

Other distinctive DB2 BLU Acceleration features include automatic workload management (Clustered Columnstore Indexes require manual techniques); full separation of data management and security privileges (not supported by Microsoft); and the ability to use key and unique constraints to avoid duplication of data in tables (supported by SQL Server 2014 only for non-clustered columnstores, which cannot be updated).

Costs of Ownership

Costs of ownership calculations include database licenses and support; server hardware and operating systems; personnel costs for database administration and related tasks; deployment costs; and facilities (primarily energy) costs. Acquisition, maintenance and support costs are based on discounted prices reported by users.
Database costs are similar. Although Microsoft per core pricing is aggressive, IBM per terabyte pricing leverages BLU Acceleration strengths in data compression to reduce overall costs. Server costs are marginally higher for SQL Server 2014 with Clustered Columnstore Indexes, reflecting use of Windows Server rather than the less expensive Linux distribution employed by IBM.

The largest disparities are in people-related costs for database administration (costs for use of SQL Server 2014 with Clustered Columnstore Indexes average three times more than for DB2 with BLU Acceleration) and deployment (costs for use of SQL Server 2014 with Clustered Columnstore Indexes average 2.6 times more). In these and other areas, the key differentiator is that DB2 with BLU Acceleration is less complex.

Lost Opportunity Costs

Analytical applications may yield significant bottom-line gains in a matter of weeks to months. The corollary is that delays in bringing such applications into production may represent significant lost revenue and profit.

This effect is apparent in the same installations employed for costs of ownership comparisons. In these cases, lost opportunity costs for use of SQL Server 2014 with Clustered Columnstore Indexes ranged from 2.5 to three times more than for DB2 with BLU Acceleration. Figure 3 illustrates the disparities.

Figure 3: Three-year Lost Opportunity Costs for Use of IBM DB2 with BLU Acceleration and Microsoft SQL Server 2014 with Clustered Columnstore Indexes, All Installations (bar chart, $ thousands, for the financial services, manufacturing and IT services companies)

These costs are for initial applications only. In practice, organizations would continue to deploy new applications. The cumulative impact of faster deployment over multi-year periods would be a great deal larger. Disparities in lost opportunity costs would increase by wide margins.

These and other results presented in this report are based on input from 24 companies in the same industries and size ranges, with generally similar business profiles, employing IBM DB2 with BLU Acceleration or Microsoft SQL Server 2014 with Clustered Columnstore Indexes in comparable roles. Further information on the profiles, methodology and assumptions employed for calculations, along with cost breakdowns for installations and platforms, may be found in the Detailed Data section of this report.
Technology Differentiators

Performance

In terms of performance, the two platforms are differentiated as follows:

1. SQL Server 2014 incorporates in-memory and columnar technology based on Microsoft's earlier VertiPaq engine and on the company's Apollo development project. Columnar technology was introduced in SQL Server 2008 R2 and enhanced in SQL Server 2012. Early limitations restricted adoption. For example, implementation of columnar technology was limited to indexes, and could be employed only in read-only mode. Tables could not be changed without extensive workarounds. In SQL Server 2014, tables may be more easily modified.

Users have reported significantly higher performance compared to use of SQL Server without Clustered Columnstore Indexes. Microsoft benchmark tests have shown acceleration levels of 8 to 20 times using cold buffer pools (i.e., data is fully loaded into RAM) and 4 to 10 times using warm pools (i.e., data is divided between RAM and disk) for queries that benefit from columnar technology. Overall performance gains are reported to be typically two to five times, with an average of slightly more than 2.9 times. Performance increases of 300 to 800 times have been reported for individual queries.

Although Microsoft has variously claimed up to seven and up to 10 times compression, users report between 40 and 80 percent (1.7 to 5 times). The norm appears to be two to three times compared to the page-level compression employed in earlier SQL Server versions. These values, moreover, refer to ratios of raw to actual data sizes for indexes only. SQL Server 2014 continues to employ row-based structures for other data.

2. DB2 with BLU Acceleration, introduced in April 2013, implements a range of new-generation technologies in a more integrated and optimized manner. These include columnar and in-memory processing, high-performance compression and caching, data skipping (i.e., the ability to avoid processing data that is not necessary to specific queries), and microprocessor-level parallel and vector processing.

Users have reported increases of between 5 and 74 times in overall query performance compared with previous DB2 versions (typically DB2 10 or DB2 9.7), with an overall average of 31.6 times. Compression levels ranged from 9 to more than 15 times, and averaged around 12.6 times. In comparison with Clustered Columnstore Indexes, which compress only at the column segment level, BLU Acceleration extends compression across entire tables.

Key technologies in BLU Acceleration are combined in a manner that progressively reduces the amount of data that must be processed and moved through I/O to boost performance. The sequence of processes through which this occurs is illustrated in figure 4.

IBM BLU Acceleration data reduction sequence: user data (10 TB) -> Actionable Compression (1 TB) -> column processing (10 GB) -> data skipping (1 GB) -> parallel processing* -> vector processing* (7.8 MB). *32 cores

Figure 4: BLU Acceleration Data Reduction Processes
These numbers should be taken as indicative; actual volumes may vary widely by database and workload. But they highlight the efficiency of the BLU Acceleration design.

A further BLU Acceleration characteristic should be highlighted. The IT world has seen a progressive shift toward higher-density memory media for high-performance workloads. This shift has been reflected in use of RAM for in-memory databases. BLU Acceleration is designed to move even beyond this stage, to the point where processing is conducted overwhelmingly in cache.

In addition, new shadow tables allow concurrent DB2 processing of queries and transactions. Row-based transactional data is continuously and automatically replicated to columnar tables, and execution of queries against these fully leverages BLU Acceleration in-memory technology. Users have reported the same performance improvements of 10x or more as for dedicated DB2 query workloads.

Shadow tables are automatically synchronized with corresponding row-oriented tables. If access to other data sources is not required, users may avoid delays due to use of extract, transformation and load (ETL) tools.

Complexity

In contrast to SQL Server 2014 with Clustered Columnstore Indexes, BLU Acceleration employs a simple SQL design. The system does not, for example, employ schemas, indexes, or aggregate tables. Simplicity reduces the time required for tasks such as system design, application development, testing and tuning, and ongoing administration. Deployment also becomes a faster and more reliable process.

Applications may be created and deployed in a few comparatively simple steps. Once individuals became proficient with the system, these took "minutes," "less than 20 minutes," "less than an hour," or "a few hours." One user commented: "We create tables and load data. Period." A key benefit was reported to be that end users might develop server-based applications directly, rather than going through programming staff.

In normal operations, processes such as tuning, optimizer and compression administration, space reclamation, database reorganizations, statistics collection and reporting, and workload management are largely automated. Performance tuning requirements were said to be "minimal" or "virtually non-existent." An IBM-supplied tool automated conversion of row-based tables to columnar format.

Organizations reported that, once initial deployment had been completed, DBA overhead for BLU Acceleration was minimal. User estimates ranged from "six hours a week" to "maybe a quarter of an FTE (full time equivalent)."

Degrees of complexity affect comparative deployment times. Users of Clustered Columnstore Indexes reported that systems were typically brought into production in two weeks to six months (most responses were in the six weeks to six months range), with an average of around 93 days. In comparison, BLU Acceleration users reported eight days to three months, with an average of around 38 days.

Conclusions

The selection of DB2 or SQL Server 2014 also involves a larger choice as to how use of high-performance analytics will evolve within organizations.

The Microsoft approach focuses on end-user control, and assumes that Microsoft will supply basic tools that will be enhanced by third parties, customized, applied and maintained by DBAs and system administrators. This is the traditional Microsoft approach to IT, and it will no doubt appeal to many organizations.
But DB2 represents a stronger offering in terms of performance, architectural simplicity, back-end data integration, manageability and time to bring new applications into production. Its focus is on quality and timeliness of information. Where these are critical business priorities, DB2 with BLU Acceleration is a better option.
Solutions

SQL Server 2014 Overview

SQL Server 2014 is the latest version of Microsoft's core database, which originated in the 1990s. Since the mid-2000s, Microsoft has positioned SQL Server aggressively for data warehousing and business intelligence (BI) applications. Columnstore technology was added as part of Microsoft's xVelocity solution in SQL Server 2008 R2, and enhanced in SQL Server 2012.

Other SQL Server 2014 analytics-related features include an expanded version of the Microsoft extract, transformation and load (ETL) suite, SQL Server Integration Services. New components include Master Data Services, a master data management tool originally developed by Stratature, which Microsoft acquired in 2007; and Data Quality Services for data quality management. SQL Server 2014 also supports Microsoft AlwaysOn clustering and Windows Server Core support, which reduces memory footprint and disk space consumption.

Clustered Columnstore Indexes

This technology organizes data in memory into columnar form, and compresses it. While not all data required for a query must fit into memory, performance will be significantly higher when this is the case.

Data may be processed in batch mode; i.e., multiple rows of data are fetched and processed in a single operation. According to Microsoft, this approach is most effective for queries involving large numbers of joins, filters and aggregations. It employs a proprietary Microsoft form of vector processing. The determination of whether to use batch or row-by-row processing for a given query is made by the SQL Server Query Optimizer. Normally, batches contain around 1,000 rows. Smaller data blocks are processed row by row.

Clustered Columnstore Indexes incorporate a limited form of data skipping, which enables the system to bypass segments of data that are not required for a specific query. Segment size is, however, comparatively large (one million rows), which means that in practice large amounts of unnecessary data are often processed.

DB2 Overview

Introduced in 1996, DB2 for Linux, UNIX and Windows (LUW) has progressively evolved toward greater performance and functionality. Recent versions have included DB2 9.5 (2008), DB2 9.7 (2009), DB2 9.8 (2010), DB2 10 (2012) and DB2 10.5, which added BLU Acceleration (2013).

In its initial form, BLU Acceleration is designed primarily to support data warehouse systems with from 1 TB to 10 TB of raw user data. It is a single-server solution, although in-memory, caching and processor optimization technologies enable levels of analytics performance and capacity utilization that are significantly higher than for most x86 platforms. BLU Acceleration is supported on x86 hardware using Red Hat Enterprise Linux (RHEL) 5 or 6 x86-64 or SUSE Linux 10 or 11 x86-64, and on IBM Power Systems, including new POWER8-based models.
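To make the two storage models concrete, the sketch below shows the kind of DDL each platform typically uses: a clustered columnstore index converting an existing SQL Server 2014 row-store table, and a column-organized BLU Acceleration table in DB2 10.5. It is a minimal illustration only, issued here through the pyodbc and ibm_db Python drivers; the connection strings, schema and table names are hypothetical assumptions, not configurations drawn from the installations studied in this report.

```python
# Minimal sketch only: server names, credentials, schemas and table names are
# hypothetical placeholders, not configurations from the ITG installations.
import pyodbc   # ODBC access to SQL Server
import ibm_db   # IBM Data Server Driver for Python

# --- SQL Server 2014: clustered columnstore index on an existing fact table ---
mssql = pyodbc.connect(
    "DRIVER={SQL Server Native Client 11.0};SERVER=sqlhost;DATABASE=dw;UID=etl;PWD=secret"
)
cur = mssql.cursor()
# The clustered columnstore index becomes the table's primary, updatable storage;
# the table must not already have another clustered index.
cur.execute("CREATE CLUSTERED COLUMNSTORE INDEX cci_fact_sales ON dbo.fact_sales;")
mssql.commit()

# --- DB2 10.5 with BLU Acceleration: column-organized table ---
# Prerequisite set at the instance level (outside SQL): db2set DB2_WORKLOAD=ANALYTICS,
# which makes column organization the default and auto-configures memory and WLM.
db2 = ibm_db.connect(
    "DATABASE=DW;HOSTNAME=db2host;PORT=50000;PROTOCOL=TCPIP;UID=etl;PWD=secret", "", ""
)
ibm_db.exec_immediate(db2, """
    CREATE TABLE fact_sales (
        sale_date   DATE          NOT NULL,
        store_id    INTEGER       NOT NULL,
        product_id  INTEGER       NOT NULL,
        revenue     DECIMAL(12,2)
    ) ORGANIZE BY COLUMN
""")
```

Note that no secondary indexes or aggregates are defined on the column-organized table; compression, data skipping and memory management are handled automatically, which is consistent with the simplicity findings reported in the Complexity section.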
The principal BLU Acceleration technologies for analytics processing are summarized in figure 5.

ANALYTICS PROCESSING

DB2 integration: New runtime technology embedded in the DB2 kernel. Columnar & row-based tables can be processed simultaneously on the same system, & employ the same schema. They can also be accessed using the same SQL & language interfaces, process model, storage, memory & utilities, & tables can be accessed within the same SQL statement. BLU Acceleration uses existing DB2 client server infrastructure, compilers, tablespaces, buffer pools, sort heap & package cache, & utilities including LOAD, BACKUP, RESTORE, EXPORT, SNAPSHOT, db2top, db2pd & others. A new utility enables conversion of row-based to columnar tables in a specified database; row-organized tables remain online during processing, & the system monitors conversion processes. DB2 tooling supports conventional & BLU Acceleration functions. Tools include Optim Query Workload Tuner (may be employed to recommend BLU Acceleration deployments & table transformations), along with IBM Data Studio, InfoSphere Data Architect, InfoSphere Optim Performance Manager, & InfoSphere Optim Configuration Manager.

In-Memory & Caching Technologies: Dynamic in-memory technology loads & processes data in RAM. A new memory paging architecture means an entire database table does not have to reside in main memory to be processed; blocks of BLU data may be moved into main memory as needed to query. According to IBM, the expectation is that 70 to 80 percent of active data will reside in RAM. Performance may, however, be maintained even when the volume of data processed exceeds RAM capacity. A scan-friendly memory caching algorithm, unique to BLU Acceleration, optimizes cache performance for scan-intensive workloads & automatically adapts operation to data characteristics. It represents an alternative to least recently used (LRU) algorithms designed primarily for transactional applications, & enables even (egalitarian) access to cache resources for commonly used values. Register-friendly encoding enables compressed data to be packed into cache structures for further efficiency in use of processor, memory & I/O resources. Encoded values are packed into bits matching CPU register width; this includes support for 128-bit wide POWER8 registers.

Data Compression & Space Reclamation: Actionable Compression enables processing of columnar data while still compressed; i.e., analytics may be performed without decompression. Operates on row-based & columnar structures, automatically adapts to data characteristics, & combines multiple IBM compression techniques including the register-friendly encoding described above. Users have reported compression rates of 10 times or more compared to uncompressed tables, with corresponding performance enhancements & storage savings. Real-time automated space reclamation extends to row-based & columnar data. Space is freed online during processing; DBA intervention is not required for space management & REORGs.

Column Store: IBM implementation of technology enabling higher performance & reduced consumption of processor, memory & I/O resources for analytics workloads. Scans are directed to values in a particular column or columns, avoiding the need to process all data in a table.

Data Skipping: Reduces processor, memory & I/O resource consumption by excluding data unnecessary to a query from processing. The process is automatic (no DBA intervention is required), based on system-stored metadata on parent table columns.

Shadow Tables: New feature in DB2 10.5 Fix Pack 4 allows concurrent processing of queries & transactions. Row-based transactional data is continuously & automatically replicated to columnar tables, & execution of queries against these fully leverages BLU Acceleration in-memory technology. Shadow tables are automatically synchronized with corresponding row-oriented tables. Shadow tables are implemented as a form of Materialized Query Table (MQT) using InfoSphere Data Replication Change Data Capture (CDC).

Intel CPU Optimization: Exploits the latest Intel Single Instruction, Multiple Data (SIMD) parallel processing enhancements for E5 processors, including expanded Streaming SIMD Extensions (SSE) & Advanced Vector Extensions (AVX) instructions. According to Intel, this enables parallel execution across multiple cores on a single E7 processor, for up to an 8x performance boost on an 8-core processor. Vector processing provides additional performance of up to 4x for floating point-intensive applications. Actual performance boosts depend upon workloads.

POWER CPU Optimization: Exploits SIMD parallel processing & other performance-related features on Power Systems. Optimization for POWER8-based systems includes support for concurrent multithreading; use of 128-bit registers; & enhanced reliability features including improvements in Data Page Memory Checking & expanded integrity checking.

Workload Management: Enables simplified operation of DB2 Workload Manager (WLM) for BLU Acceleration mode. Maintains concurrency subject to predefined threshold criteria. Optimizes use of processor, memory & I/O resources, & performs automatic, ongoing performance tuning based on knowledge of the underlying hardware. Optionally, allows users to define more complex policies based on DB2 WLM. Where shadow tables are employed, WLM automatically routes queries to these, & transactions to row-based tables.

Figure 5: Principal IBM DB2 BLU Acceleration Technologies for Analytics Processing
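As a rough illustration of the shadow table mechanism described in figure 5, the sketch below creates a column-organized, replication-maintained MQT over a hypothetical row-organized ORDERS table using the ibm_db Python driver. All object names and connection details are assumptions, and the InfoSphere CDC subscription that keeps the shadow table synchronized is configured outside SQL and is not shown.

```python
# Minimal sketch, assuming a hypothetical row-organized ORDERS table; CDC replication
# setup is external to SQL and omitted here.
import ibm_db

conn = ibm_db.connect(
    "DATABASE=OLTP;HOSTNAME=db2host;PORT=50000;PROTOCOL=TCPIP;UID=dba;PWD=secret", "", ""
)

# A shadow table is a column-organized MQT maintained by replication from the
# row-organized source table, so analytic queries can use BLU Acceleration while
# transactions continue against the row store.
ibm_db.exec_immediate(conn, """
    CREATE TABLE orders_shadow AS
        (SELECT order_id, customer_id, order_date, amount FROM orders)
        DATA INITIALLY DEFERRED
        REFRESH DEFERRED
        ENABLE QUERY OPTIMIZATION
        MAINTAINED BY REPLICATION
        ORGANIZE BY COLUMN
""")

# With latency-based routing configured (for example, an acceptable refresh age),
# the optimizer can transparently route analytic queries issued against ORDERS
# to ORDERS_SHADOW.
ibm_db.exec_immediate(conn, "SET CURRENT REFRESH AGE ANY")
```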
BLU Acceleration leverages a number of established DB2 strengths. In particular, the core DB2 Workload Manager (WLM) supports a new BLU Acceleration mode; and IBM- and Optim-branded DBA tools have been adapted to support BLU Acceleration as well as conventional administration and optimization functions.

Overall Capabilities

DB2 leverages longstanding strengths in such areas as performance optimization, data compression, workload management, high availability, and simplification and automation. Capabilities of the overall DB2 environment are summarized in figure 6.

GENERAL CAPABILITIES

Time Travel Query: Differentiates system time (when an event is logged) & business time (an alternative date &/or time associated with the event) in maintaining & querying records. Obviates need for custom-developed applications to analyze multiple timelines. Complies with temporal features of ANSI/ISO SQL:2011.

Continuous Data Ingest: Employs IBM parallel loading technology for extremely fast, low-overhead data transfers. Enables real-time data warehousing applications. Offers alternative to conventional batch & trickle feed techniques.

AVAILABILITY & RECOVERY

DB2 pureScale: Enables scale-out failover clustering for continuous availability. Generates <5% system overhead with clusters of up to 64 nodes (installations are typically in this range), & <16% with 128 nodes. Based on IBM Parallel Sysplex Data Sharing & General Parallel File System (GPFS). Migration requires no application changes. Does not currently support BLU Acceleration, but may be employed for other workloads.

High Availability Disaster Recovery (HADR): Enables replication of data changes to one or more standby servers, & recovery from these. Supports up to three hot standby servers, & allows delays to be set to prevent replication of problems.

Tivoli System Automation: Mainframe-derived high availability & policy-based automation solution. Manages failover, restart & recovery within pureScale & HADR clusters.

Online Reorg: Reduces time when data is not available to users during reorg processes. Data remains available during reload & rebuild phases.

PERFORMANCE-RELATED

Query Parallelism: Enables parallel query execution for more efficient use of CPU & I/O resources. Most effective for long-running queries reading large amounts of data.

Table (Range) Partitioning: Allows data in a single table to be placed in multiple tablespaces for greater scalability, processing efficiency & data roll-in/roll-out.

Materialized Query Tables: Query-specific table structure offering higher performance than indexes for certain types of query, especially complex queries.

Multi-Dimensional Clustering: Enables flexible clustering of data across multiple dimensions. Optimized for use in large data warehouse environments. Typically accelerates query performance by around three times, & improvements of ten times or more have been reported.

Scan Sharing: Enables sharing of system resources by multiple scans. May significantly improve concurrency & performance, & reduce I/O loading for high-volume scanning workloads.

DATA COMPRESSION

Actionable Compression: Enables processing of columnar data while still compressed. Operates on row-based & columnar structures. Automatically adapts to data characteristics. Combines multiple IBM compression techniques including register-friendly encoding. Supports real-time automated space reclamation, described above.

Adaptive Compression: Algorithms integrate table- & index-level compression. Overall rates are typically four to ten times, with an average of around seven times.

Backup Compression: Enables compression of data in all DB2 structures during backup.

STORAGE-RELATED

Multi-temperature Data Management: Enables automated storage tiering for higher performance & lower overall disk costs. Obviates need for controller-based tiering for most workloads. Tightly integrated with DB2 workload management.

pureXML Storage: Enables storage of IBM pureXML (IBM implementation of Extensible Markup Language) data in native hierarchical mode.

Advanced Copy Services (ACS): Supports fast copying by IBM DS8000, Storwize V7000, SAN Volume Controller (SVC) & XIV systems during backup & restore operations. Includes Tivoli Storage FlashCopy Manager. ACS Scripted Interface enables use with non-IBM storage.

Figure 6: Principal Capabilities of Overall IBM DB2 Environment
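The Time Travel Query capability listed in figure 6 can be illustrated with a short system-period temporal sketch, again through the ibm_db Python driver. The table, column names and sample timestamp below are illustrative assumptions, not objects from any of the installations in this report.

```python
# Minimal sketch of a system-period temporal table (Time Travel Query); all names
# and the sample timestamp are illustrative assumptions.
import ibm_db

conn = ibm_db.connect(
    "DATABASE=DW;HOSTNAME=db2host;PORT=50000;PROTOCOL=TCPIP;UID=dba;PWD=secret", "", ""
)

for ddl in [
    # Base table with a SYSTEM_TIME period that DB2 maintains automatically.
    """CREATE TABLE policy_info (
           policy_id INTEGER NOT NULL,
           coverage  INTEGER,
           sys_start TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW BEGIN,
           sys_end   TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW END,
           trans_id  TIMESTAMP(12) GENERATED ALWAYS AS TRANSACTION START ID,
           PERIOD SYSTEM_TIME (sys_start, sys_end)
       )""",
    # History table that receives superseded row versions.
    "CREATE TABLE policy_info_history LIKE policy_info",
    # Link the two so updates and deletes are versioned automatically.
    "ALTER TABLE policy_info ADD VERSIONING USE HISTORY TABLE policy_info_history",
]:
    ibm_db.exec_immediate(conn, ddl)

# Query the table as it looked at a past point in system time; DB2 rewrites the
# query against the base and history tables, with no custom timeline logic.
stmt = ibm_db.exec_immediate(
    conn,
    "SELECT policy_id, coverage FROM policy_info "
    "FOR SYSTEM_TIME AS OF '2014-01-01-00.00.00'"
)
row = ibm_db.fetch_assoc(stmt)
```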
Recent features include temporal processing (Time Travel Query), software-based storage tiering (Multi-temperature Data Management) and an IBM parallel loading tool (Continuous Data Ingest) designed for real-time data warehouse updates.

IBM has also moved to enable integration of new Big Data types, including Hadoop and MapReduce. DB2 supports SPARQL, the Resource Description Framework (RDF), the JavaScript Object Notation (JSON) data interchange format, and other emerging standards. JSON has proved increasingly popular as an alternative to XML.
Detailed Data

Basis of Calculations

Installations

Cost comparisons presented in this report were based on the installations summarized in figure 7.

FINANCIAL SERVICES COMPANY
Business profile: Diversified retail bank; $70+ billion assets; 5,000+ employees; 250+ branches
Applications: Risk & compliance analysis/reporting, financial & profitability analysis
IBM DB2 with BLU Acceleration: 2/16 x Intel E5; … FTE DBA; deployment time: 3 weeks
Microsoft SQL Server 2014 with Clustered Columnstore Indexes: 2/16 x Intel E5; … FTE DBA; deployment time: 2 months

MANUFACTURING COMPANY
Business profile: Contract electronics manufacturer; $6+ billion sales; 40,000+ employees; 50+ manufacturing plants
Applications: Sales & customer profitability analysis, demand forecasting & related
IBM DB2 with BLU Acceleration: 4/32 x Intel E5; … FTE DBA; deployment time: 4 weeks
Microsoft SQL Server 2014 with Clustered Columnstore Indexes: 4/32 x Intel E5; … FTE DBA; deployment time: 3 months

IT SERVICES COMPANY
Business profile: IT outsourcing & professional services; $500+ million sales; 6,000+ employees; 20+ facilities
Applications: Customer financial & operational key performance indicator (KPI) applications
IBM DB2 with BLU Acceleration: 8/64 x Intel E5; … FTE DBA; deployment time: 10 weeks
Microsoft SQL Server 2014 with Clustered Columnstore Indexes: 8/64 x Intel E5; 1.5 FTE DBAs; deployment time: 6 months

Figure 7: Installations Summary

Hardware platforms for both solutions are based on x86 servers from major vendors. Configurations and FTE DBA staffing levels were based on user-reported data.

Costs of Ownership

These were calculated as follows:

DB2 with BLU Acceleration costs were calculated for per terabyte licenses, plus two years of support (the first year is included in initial licenses); hardware acquisition and three years of maintenance for x86 servers; and three-year premium Linux subscriptions.

SQL Server 2014 with Clustered Columnstore Indexes costs were calculated for SQL Server 2014 Enterprise Edition per core licenses, and Windows Server 2012 R2 Datacenter Edition per processor licenses and Client Access Licenses (CALs), plus three-year Microsoft Software Assurance coverage. Calculations also include hardware acquisition and three years of maintenance for x86 servers.

All maintenance and support costs for both platforms are for 24/7 coverage with four-hour response time.

Personnel costs were calculated based on annual salaries of $104,316/year for DB2 DBAs with BLU Acceleration training and $97,103/year for SQL Server 2014 DBAs with Clustered Columnstore Indexes training. Salaries were increased by 56.7 percent for bonuses, benefits and other per capita costs, and multiplied to cover three years (a worked sketch of this calculation follows).
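The personnel calculation above reduces to a simple formula: annual salary, increased by 56.7 percent for bonuses and benefits, multiplied by the FTE DBA staffing level and by three years. The sketch below applies it with the salaries quoted in this report; the FTE staffing levels used are illustrative assumptions only, not the study's reported values.

```python
# Worked example of the report's personnel-cost formula. The salaries are those
# quoted in the report; the FTE DBA staffing levels are hypothetical placeholders.

BURDEN = 1.567   # 56.7% uplift for bonuses, benefits and other per capita costs
YEARS = 3        # costs are accumulated over a three-year period

def three_year_personnel_cost(annual_salary: float, fte_dbas: float) -> float:
    """Burdened three-year DBA cost for a given staffing level."""
    return annual_salary * BURDEN * fte_dbas * YEARS

db2_cost = three_year_personnel_cost(104_316, fte_dbas=0.25)  # assumed staffing
sql_cost = three_year_personnel_cost(97_103, fte_dbas=0.75)   # assumed staffing

print(f"DB2 with BLU Acceleration (0.25 FTE assumed): ${db2_cost:,.0f}")
print(f"SQL Server 2014 (0.75 FTE assumed): ${sql_cost:,.0f}")
```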
Calculations also include appropriate training courses provided by Microsoft Learning Partners (Microsoft does not offer classroom training directly) or IBM. The duration of these, and the number of individuals trained, varied between installations.

Deployment costs were calculated for external professional services staff, charged at $2,000 or $3,000 per person-day, depending on required skill levels, plus travel and entertainment (T&E) expenses.

Facilities costs are for energy consumption. Calculations are based on vendor specifications and assume near-24/365 operations over a three-year period. A conservative assumption for average cost per kilowatt-hour was employed. All cost values are for the United States.

Costs Breakdowns

Costs of ownership breakdowns are presented in figure 8.

IBM DB2 WITH BLU ACCELERATION
              Financial Services   Manufacturing   IT Services
Databases     157,…                …               …,800
Servers       9,915                42,…            …,810
Deployment    69,834               95,…            …,420
Personnel     82,…                 …               …,215
Facilities    1,745                2,848           7,631
TOTAL ($)     322,…                …,715           1,092,876

MICROSOFT SQL SERVER 2014 WITH CLUSTERED COLUMNSTORE INDEXES
              Financial Services   Manufacturing   IT Services
Databases     115,…                …               …,933
Servers       26,274               75,…            …,075
Deployment    190,…                …               …,050
Personnel     201,…                …               …,632
Facilities    2,043                3,274           8,820
TOTAL ($)     536,234              1,077,353       2,038,510

Figure 8: Three-year Costs of Ownership Breakdowns for IBM DB2 and Microsoft SQL Server 2014
International Technology Group

ITG sharpens your awareness of what's happening and your competitive edge... this could affect your future growth and profit prospects

International Technology Group (ITG), established in 1983, is an independent research and management consulting firm specializing in information technology (IT) investment strategy, cost/benefit metrics, infrastructure studies, deployment tactics, business alignment and financial analysis.

ITG was an early innovator and pioneer in developing total cost of ownership (TCO) and return on investment (ROI) processes and methodologies. In 2004, the firm received a Decade of Education Award from the Information Technology Financial Management Association (ITFMA), the leading professional association dedicated to education and advancement of financial management practices in end-user IT organizations.

Client services are designed to provide factual data and reliable documentation to assist in the decision-making process. Information provided establishes the basis for developing tactical and strategic plans. Important developments are analyzed and practical guidance is offered on the most effective ways to respond to changes that may impact complex IT deployment agendas.

A broad range of services is offered, furnishing clients with the information necessary to complement their internal capabilities and resources. Clients include a cross section of IT end users in the private and public sectors representing multinational corporations, industrial companies, financial institutions, service organizations, educational institutions, federal and state government agencies as well as IT system suppliers, software vendors and service firms. Federal government clients have included agencies within the Department of Defense (e.g., DISA), Department of Transportation (e.g., FAA) and Department of Treasury (e.g., US Mint).

Copyright 2014 International Technology Group. All rights reserved. Material, in whole or part, contained in this document may not be reproduced or distributed by any means or in any form, including original, without the prior written permission of the International Technology Group (ITG). Information has been obtained from sources assumed to be reliable and reflects conclusions at the time. This document was developed with International Business Machines Corporation (IBM) funding. Although the document may utilize publicly available material from various sources, including IBM, it does not necessarily reflect the positions of such sources on the issues addressed in this document. Material contained and conclusions presented in this document are subject to change without notice. All warranties as to the accuracy, completeness or adequacy of such material are disclaimed. There shall be no liability for errors, omissions or inadequacies in the material contained in this document or for interpretations thereof. Trademarks included in this document are the property of their respective owners.