How Oracle Exadata Delivers Extreme Performance
- Vanessa Natalie Mills
- 8 years ago
1 How Oracle Exadata Delivers Extreme Performance
John Clarke, Sr. Oracle Architect
Dallas | Detroit | Los Angeles | Singapore | India
Page 1
2 Agenda
- Oracle Exadata: Hardware and Software Optimized
- Exadata Hardware Components
- How Oracle Works on Exadata
- Exadata Software Components
- Smart Scan and Cell Offload Processing
- Storage Indexes
- Smart Flash Cache
- Hybrid Columnar Compression
- IO Resource Management
- How to Get There from Here
- A New Way to Look at Optimization
Page 2
3 Exadata: Hardware and Software Optimized
The Exadata Database Machine is designed for extreme performance, manageability, consolidation, and high availability. Oracle achieves this by:
- Utilizing an optimally-balanced hardware infrastructure
- Using fast hardware components designed to eliminate IO, CPU, and network bottlenecks
- Delivering significant Oracle software enhancements to leverage the balanced hardware configuration
The Exadata Database Machine hardware is just part of the solution; the combination of a balanced hardware configuration with Exadata's storage software is what delivers extreme performance.
Page 3
4 Exadata Hardware
- Database servers, or compute nodes
- InfiniBand interconnect
- Exadata storage servers, or cells
Page 4
5 Exadata Storage Servers
- Exadata storage servers are self-contained storage platforms that house disk storage and run the Exadata Storage Server software provided by Oracle
- A single Exadata storage server is also known as a cell; a cell is the building block for the Exadata storage grid
- More cells provide greater capacity and IO bandwidth
- Databases are typically deployed across multiple cells
- The storage cells communicate with the Oracle database and ASM instances over InfiniBand
- Each cell is wholly dedicated to Oracle database files
Page 5
6 Exadata Storage Servers
- Exadata storage servers provide high-performance storage for Oracle databases
  - Up to 1.8 GB/sec raw data bandwidth per cell
  - Up to 75,000 IOPs using flash per cell
- Based on 64-bit Sun Fire servers
- Comes installed with Exadata Storage Server software, Oracle Linux x86_64, drivers, and utilities
- Storage servers are only available with the Exadata Database Machine
Page 6
7 Exadata Storage Servers (Sun Fire X4270 M2)
- Processors: 2 six-core Intel Xeon L5640 processors (2.26 GHz)
- Memory: 24 GB (6 x 4 GB)
- Local Disks: 12 x 600 GB 15K RPM High Performance SAS, or 12 x 2 TB 7.2K RPM High Capacity SAS
- Flash: 4 x 96 GB Sun Flash Accelerator F20 PCIe cards
- Disk Controller: disk controller HBA with 512 MB battery-backed cache
- Network: two InfiniBand 4X QDR (40 Gb/s) ports on 1 dual-port PCIe HCA; four embedded GbE ports
- Remote Management: 1 Ethernet port for ILOM
- Power Supplies: 2 redundant hot-swappable power supplies
Page 7
8 Exadata Database Servers
- The compute node grid consists of multiple database servers
- Oracle 11gR2 and ASM run on the database servers
- Companies typically run RAC databases on the database servers to achieve high availability and maximize the aggregate CPU and memory horsepower in the compute node grid
Page 8
9 Exadata Database Servers
- Processors: 2 six-core Intel Xeon X5670 processors (2.93 GHz)
- Memory: 96 GB (12 x 8 GB)
- Local Disks: 4 x 300 GB 10K RPM SAS disks
- Disk Controller: disk controller HBA with 512 MB battery-backed cache
- Network: two InfiniBand 4X QDR (40 Gb/s) ports; four 1 GbE Ethernet ports; two 10 GbE Ethernet ports
- Remote Management: 1 Ethernet port for ILOM
- Power Supplies: 2 redundant hot-swappable power supplies
- Operating System: 64-bit Oracle Enterprise Linux 5.5 (Solaris on the X2-8)
Page 9
10 InfiniBand Network
- InfiniBand is the Exadata storage network
- Looks like normal Ethernet to hosts, with the efficiency of a SAN
- Used for both storage and the RAC interconnect
- Uses the high-performance ZDP InfiniBand protocol (RDS v3)
Page 10
11 InfiniBand Network
- Oracle uses InfiniBand because of its proven track record with high-performance computing; it provides 40 Gb/sec in each direction
- Looks like Ethernet but is much faster
  - Uses zero copy, which means data is transferred across the network without intermediate buffer copies in the various network layers
  - Uses buffer reservation, which means the hardware knows exactly where to place buffers ahead of time
- A unified network fabric for both Exadata storage and the RAC interconnect simplifies cabling and networking
- On top of InfiniBand, Exadata uses the Zero-loss Zero-copy Datagram Protocol (ZDP), built on RDS. ZDP has very low CPU overhead, with tests showing only 2 percent CPU utilization while transferring 1 GB/sec of data
- Each Exadata server is configured with one dual-port InfiniBand card connected to two separate InfiniBand switches for high availability. Each InfiniBand link can carry the full data bandwidth of the entire cell, so you can lose an entire link without losing any performance
Page 11
12 Other Hardware Components
- Embedded Cisco switch for data center network uplink
- KVM switch for management of storage servers and compute nodes
- Multiple PDUs with management interfaces
- ILOM (Integrated Lights Out Management) capability for each component in the rack
Page 12
13 Exadata Configuration Options
Oracle has four (4) configuration options:
- Exadata X2-2 Quarter Rack
- Exadata X2-2 Half Rack
- Exadata X2-2 Full Rack
- Exadata X2-8 Full Rack
Each configuration is available with either High Performance or High Capacity disks. The difference between the configurations is the number of storage servers and compute nodes. Oracle keeps the list of hardware configurations short to preserve the balanced hardware configuration.
- A half rack can be upgraded to a full rack, and multiple racks can be interconnected
- Quarter racks have 2 InfiniBand switches; half and full racks have 3
Page 13
14 Exadata Configuration Options Compute Nodes
                            X2-2 Quarter Rack  X2-2 Half Rack  X2-2 Full Rack  X2-8 Full Rack
# Compute nodes             2                  4               8               2
Processor cores per node    12                 12              12              64
Total processor cores       24                 48              96              128
Memory per node             96 GB              96 GB           96 GB           1 TB
Total memory                192 GB             384 GB          768 GB          2 TB
Page 14
15 Exadata Configuration Options Exadata Storage Servers
                            X2-2 Quarter Rack  X2-2 Half Rack  X2-2 Full Rack  X2-8 Full Rack
# Storage cells             3                  7               14              14
Disks per cell              12                 12              12              12
Total disks                 36                 84              168             168
Flash                       1.15 TB            2.7 TB          5.4 TB          5.4 TB
Raw storage (HP disks)      21.6 TB            50.4 TB         100.8 TB        100.8 TB
Raw storage (HC disks)      72 TB              168 TB          336 TB          336 TB
Page 15
16 Exadata Configuration Options Exadata Storage Servers
                                  X2-2 Quarter Rack  X2-2 Half Rack     X2-2 Full Rack     X2-8 Full Rack
Raw disk throughput (HP disks)    Up to 5,400 MBPS   Up to 12,600 MBPS  Up to 25,200 MBPS  Up to 25,200 MBPS
Raw disk throughput (HC disks)    Up to 3,000 MBPS   Up to 7,000 MBPS   Up to 14,000 MBPS  Up to 14,000 MBPS
Disk IOPs (HP disks)              Up to 10,800       Up to 25,200       Up to 50,400       Up to 50,400
Disk IOPs (HC disks)              Up to 4,320        Up to 10,080       Up to 20,160       Up to 20,160
Flash IOPs                        Up to 225,000      Up to 525,000      Up to 1,050,000    Up to 1,050,000
Page 16
17 How Oracle Works on Exadata
- Oracle 11gR2
- Oracle RAC
- Database storage uses ASM
Page 17
18 How Oracle Works on Exadata
Each storage cell contains:
- Exadata Storage Server software
- Disks (LUNs, Cell Disks, and Grid Disks)
- ASM disk groups built on Exadata Grid Disks
Exadata features are implemented on the storage servers and exploited by databases on the compute nodes
Page 18
19 How Oracle Works on Exadata
- Oracle communicates with the Exadata cells using the iDB protocol
- iDB is implemented in LIBCELL
- Exadata binaries are linked with LIBCELL to facilitate cell communication
Page 19
20 How Oracle Works on Exadata
- CELLSRV is the primary storage server component and provides the majority of cell services
- Oracle database and ASM processes use LIBCELL to communicate with CELLSRV processes
- CELLSRV is what delivers the unique Exadata features
Page 20
21 Oracle on Exadata
- Each Exadata cell has 12 physical disks
- Oracle reserves 29 GB on each of the first two disks in each cell for the system area
Page 21
22 Oracle on Exadata
- One LUN is created on each physical disk
- A LUN is the unit on which a Cell Disk can be created
Page 22
23 Oracle on Exadata
- Cell Disks are created on LUNs
- Cell Disks represent the storage area for each LUN
Page 23
24 Oracle on Exadata
- Grid Disks are built on Cell Disks
- A Grid Disk can consume all space on a Cell Disk or an administrator-specified chunk of the Cell Disk
- The outermost tracks on a Grid Disk have the highest performance characteristics
- Building multiple Grid Disks per Cell Disk allows the administrator to segregate storage by usage and performance demands
- Interleaved Grid Disks can increase the probability of important extents being on higher-performing disk tracks
Page 24
25 Oracle on Exadata
- Grid Disks are directly exposed to ASM
- ASM disk groups are built on Grid Disks
Page 25
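The LUN, Cell Disk, Grid Disk carve-up described above can be sketched as a small model. This is an illustrative Python sketch, not cellcli syntax; the disk/grid-disk names, the 571 GB usable figure (600 GB disk minus the ~29 GB system area), and the DATA/RECO split are hypothetical.

```python
# Hypothetical model of carving Grid Disks out of a Cell Disk.
class CellDisk:
    def __init__(self, name, size_gb):
        self.name = name
        self.size_gb = size_gb        # usable space on the underlying LUN
        self.grid_disks = []          # (name, size_gb) in creation order

    def create_grid_disk(self, name, size_gb):
        used = sum(s for _, s in self.grid_disks)
        if used + size_gb > self.size_gb:
            raise ValueError("not enough free space on cell disk")
        # Grid Disks are carved from the outermost (fastest) tracks first,
        # so the first Grid Disk created gets the best-performing region.
        self.grid_disks.append((name, size_gb))

# 600 GB High Performance disk minus the ~29 GB system area (illustrative)
cd = CellDisk("CD_00_cell01", 571)
cd.create_grid_disk("DATA_CD_00_cell01", 400)   # hot data on outer tracks
cd.create_grid_disk("RECO_CD_00_cell01", 171)   # recovery area on inner tracks
print([name for name, _ in cd.grid_disks])
```

ASM disk groups (e.g., DATA and RECO) would then be built across the like-named Grid Disks on every cell.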
26 Exadata Software Features at a Glance
Exadata software goals:
- Fully and evenly utilize all computing resources in the Exadata Database Machine to eliminate bottlenecks and deliver consistent high performance
- Reduce demand for resources by eliminating any IO that can be discarded without impacting the result
Page 26
27 Exadata Smart Scan
- One of the most important Exadata software features, and typically where Exadata software discussions start
- One of several cell offload features in Exadata; cell offload is Exadata's shifting of database work from the database servers to the Exadata storage servers
- The primary goal of Smart Scan is to perform the majority of IO processing on the storage servers and return smaller amounts of data from the storage infrastructure to the database servers
- Provides dramatic performance improvements for eligible SQL operations (full table scans, fast full index scans, joins, etc.)
- Smart Scan is invoked automatically on Exadata for eligible SQL operations
- Smart Scan only works on Exadata
Page 27
28 Traditional SQL Processing
User submits a query:
select state, count(*) from census group by state;
- The query is parsed and an execution path is determined; extents are identified and IO is issued
- Oracle retrieves blocks based on the extents identified; a block is the smallest unit of transfer from the storage system and contains rows and columns
- The database instance processes blocks; all required blocks are read from the storage system into the buffer cache
- Oracle filters rows and columns once they are loaded into the buffer cache, and returns results to the user
Page 28
29 Smart Scan Processing
User submits a query:
select state, count(*) from census group by state;
- An iDB command is constructed and sent to the Exadata cells
- Each Exadata cell scans its data blocks and extracts the relevant rows and columns that satisfy the SQL query
- Each Exadata cell returns to the database instance an iDB message containing the requested rows and columns; these are not block images, and they are returned to the PGA
- The database consolidates results from across the Exadata cells and returns rows to the client
Page 29
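The contrast between the two processing models can be put in rough numbers. This toy Python calculation assumes an invented 8-column census table with ~10-byte column values; it only illustrates why returning one projected column over the interconnect beats shipping whole blocks.

```python
# Illustrative comparison: block shipping vs. Smart Scan interconnect volume.
ROWS = 1_000_000
COLS = 8
COL_BYTES = 10   # assumed average bytes per column value

# Traditional path: every block (all rows, all columns) crosses the storage
# network into the buffer cache before any filtering happens.
traditional_bytes = ROWS * COLS * COL_BYTES

# Smart Scan: the cell projects just the 'state' column, so the interconnect
# carries one column per row instead of eight.
smart_scan_bytes = ROWS * 1 * COL_BYTES

print(traditional_bytes // smart_scan_bytes)  # → 8
```

With real data the savings are usually larger still, because predicate filtering discards whole rows on the cell as well.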
30 Smart Scan and Cell Offload Processing
- Filtering operations are offloaded to the Exadata storage cells
  - Column filtering
  - Predicate filtering (i.e., row filtering)
  - Join filtering
- Only requested rows and columns are returned to the database server
  - Significantly less IO transferred over the storage network
  - Less memory and CPU required on the database tier nodes
- Rows and columns are retrieved into the user's PGA via the direct path read mechanism, not through the buffer cache, so large IO requests don't saturate the buffer cache
Page 30
31 Smart Scan and Cell Offload Processing
What's eligible for Smart Scan on Exadata?
- Single-table full table scans (not IOTs, not BLOBs, no hash clusters)
  - Oracle is a C program; kcfis_read is the function that does the smart scan
  - kcfis_read is called by the direct path read function, kcbldrget
  - kcbldrget is called from the full scan functions
- Serial direct reads must be enabled
- Join filtering using Bloom filters
  - Bloom filtering determines which rows are required to satisfy a join
  - A Bloom filter is created with the values of the join column for the smaller table of a join, and this filter is used by the Exadata storage server to eliminate row candidates from the larger table
Page 31
32 Smart Scan in Action
- Our table is 4.53 GB in size
- It has almost 180 million rows
- All its indexes are invisible
- A full scan returned in 5.27 seconds!
Page 32
33 Measuring Smart Scan: cell_offload_plan_display
cell_offload_plan_display settings:
- AUTO: shows offload-related information in the plan display if an Exadata cell is present and the objects are on the cell
- ALWAYS: shows offload-related information if the SQL statement is offloadable, whether or not it is running on an Exadata cell; useful for simulating plans on 11gR2 databases not running on Exadata
- NEVER: never shows offload-related information in plans
TABLE ACCESS STORAGE FULL in a plan indicates that the query is cell-offloadable, i.e., eligible for Smart Scan
Page 33
34 Measuring Smart Scan with Statistics
- cell physical IO bytes eligible for predicate offload: number of bytes eligible for cell offload
- cell physical IO interconnect bytes: bytes returned from the storage cells to the database server
- cell physical IO interconnect bytes returned by smart scan: bytes returned from the storage cells via Smart Scan operations
Page 34
35 Measuring Smart Scan with Statistics
4.8 GB eligible for Smart Scan and 45 MB returned from the storage grid: a Smart Scan efficiency of 99.05%
Page 35
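A Smart Scan efficiency figure like the one above is just the fraction of offload-eligible bytes that never had to cross the interconnect. A small Python sketch using the example's ~4.8 GB eligible and ~45 MB returned; the exact result depends on how the GB/MB figures are rounded, so it lands within a rounding step of the 99.05% quoted:

```python
# Smart Scan efficiency: percentage of eligible IO filtered out on the cells.
def smart_scan_efficiency(eligible_bytes, returned_bytes):
    return 100.0 * (1 - returned_bytes / eligible_bytes)

# From the example statistics (values approximate):
eligible = 4.8 * 1024**3   # cell physical IO bytes eligible for predicate offload
returned = 45 * 1024**2    # cell physical IO interconnect bytes returned by smart scan

print(round(smart_scan_efficiency(eligible, returned), 2))  # → 99.08
```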
36 Measuring Smart Scan with Statistics
PGA max before Smart Scan is about 4 MB; PGA max after Smart Scan is over 10 MB
Page 36
37 Controlling Smart Scan Behavior
SQL> alter system set cell_offload_processing=<TRUE|FALSE>;
  Controls cell offload system-wide
SQL> alter session set cell_offload_processing=<TRUE|FALSE>;
  Controls cell offload for the current session
SQL> select /*+ opt_param('cell_offload_processing', '<true|false>') */ ...
  Controls cell offload for a single SQL statement
Page 37
38 Controlling Smart Scan Behavior Full scan in 31+ seconds All 4.8GB returned from cell Page 38
39 Controlling Smart Scan Behavior Page 39
40 Smart Scan Predicate Filtering Ran in 6.64 seconds 2.1GB returned via Smart Scan, savings of 56.11% Page 40
41 Smart Scan Predicate Filtering 410K rows returned in 5.48 seconds 45MB returned via Smart Scan, savings of 99.03% Page 41
42 Smart Scan Predicate Filtering 2 rows returned in 5.39 seconds 8MB returned via Smart Scan, savings of 99.82% Page 42
43 Smart Scan Column Projection 1.17 MB of data through interconnect Page 43
44 Smart Scan Column Projection 651,744 bytes through interconnect Page 44
45 Smart Scan Column Projection
1,176,032 bytes when selecting BIGCOL; 651,744 bytes when not selecting BIGCOL; 2,048 rows returned
(1,176,032 - 651,744) = 524,288 bytes difference measured
Predicted delta = 522,240 bytes, which is pretty close
Page 45
46 Smart Scan Column Projection Another Example Page 46
47 Smart Scan Column Projection Another Example Query ran in 5.47 seconds Query ran in seconds Page 47
48 Smart Scan: Join Filtering
- In some cases, Oracle will use Bloom filters to optimize join filtering; on Exadata, Bloom filters are implemented on the storage servers
- Bloom filters work by:
  - Examining the columns in a join
  - For the smaller table in a join, Oracle determines the space required to store the join column values; if it is small compared to the size of the larger joined table, Oracle may decide to use a Bloom filter
  - Bloom filters identify which rows are required to satisfy a join by building an in-memory set of join-column values to compare against the larger table's rows
- Bloom filters are typically built to reduce data communication between slaves in parallel query joins, and they work well with PQO and partitioned tables
Page 48
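The row-elimination idea above can be sketched with a toy Bloom filter. This is an illustrative Python implementation, not Oracle's; the bit-array size, hash scheme, and table values are invented. The key property is that membership tests can return false positives but never false negatives, which is exactly what makes pre-join elimination safe.

```python
# Minimal Bloom filter: built from the small table's join keys, then used to
# discard large-table rows that cannot possibly join.
class BloomFilter:
    def __init__(self, nbits=1024, nhashes=3):
        self.nbits = nbits
        self.nhashes = nhashes
        self.bits = 0                      # big int used as a bit array

    def _positions(self, key):
        for seed in range(self.nhashes):
            yield hash((seed, key)) % self.nbits

    def add(self, key):
        for p in self._positions(key):
            self.bits |= 1 << p

    def might_contain(self, key):
        # May say "yes" for a key never added (false positive), but never
        # says "no" for a key that was added (no false negatives).
        return all(self.bits >> p & 1 for p in self._positions(key))

small_table_keys = [7, 42, 99]             # join keys from the smaller table
bf = BloomFilter()
for k in small_table_keys:
    bf.add(k)

# On the cell: keep only large-table rows that might join.
candidates = [r for r in range(1000) if bf.might_contain(r)]
print(len(candidates))
```

Every real match survives the filter; nearly all non-matching rows are eliminated before they ever reach the join on the database tier.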
49 Smart Scan: Join Filtering with Offloaded Bloom Filters
Result returned in 7.45 seconds
SYS_OP_BLOOM_FILTER in the plan indicates Bloom filters; when it appears on the storage line, the filter is offloaded
Page 49
50 Smart Scan: Join Filtering with Offloaded Bloom Filters
With _bloom_filter_predicate_pushdown_to_storage = false
SYS_OP_BLOOM_FILTER indicates Bloom filters; here it does NOT appear on the storage line, hence the filter is not offloaded
Page 50
51 Smart Scan: Table Scans or Index Scans? Do we just drop all indexes and let Smart Scan do its magic?
Completed in a little over 3 minutes; over 6.7 GB returned from the cells (note the CPU and LIO values)
Page 51
52 Smart Scan: Table Scans or Index Scans? Do we just drop all indexes and let Smart Scan do its magic? Completed in less than a second with indexes TEST TEST TEST Far less IO and less CPU/LIO Indexes still work better for this type of query! Page 52
53 Smart Scan: Disabling Serial Direct Read Mechanism
Query completed in 3.81 seconds; 15 GB eligible for cell offload, 100 MB transferred from the storage cells; Smart Scan efficiency of 99.34%
Page 53
54 Smart Scan: Disabling Serial Direct Read Mechanism
With serial direct reads disabled, query time increased from 3.8 seconds to 101 seconds; nothing was eligible for cell offload, so no Smart Scan
Page 54
55 Querying V$SQL to Determine Cell Off-loadable Queries How can you determine which SQL statements are Exadata cell offload-able? Page 55
56 Querying V$SQL to Determine Cell Off-loadable Queries How can you determine which SQL statements are Exadata cell offload-able? Page 56
57 Querying V$SQL to Determine Cell Off-loadable Queries How can you determine which SQL statements are Exadata cell offload-able? Page 57
58 Smart Scan Summary
- Smart Scan is software engineered for Exadata that can provide significant IO savings and dramatically improved performance
- Smart Scan works by offloading IO operations to the Exadata storage cells and returning only the rows and columns requested by the SQL statement to the database instance
- Row filtering provides significant cell scan efficiency gains
- Column filtering provides IO savings
- Join filtering using Bloom filters, performed on the Exadata cells, can also offer huge performance gains
- Don't drop all your OLTP indexes without testing: single-row index access will often still outpace a full scan with cell offload
- One of the great things about Exadata and Smart Scan is that nothing needs to be done explicitly by a developer or DBA to take advantage of it; it just works on Exadata
Page 58
59 Exadata Storage Indexes
- The goal of storage indexes is to eliminate IO requests to Exadata disks
- Storage indexes are NOT at all like traditional Oracle indexes
- Storage indexes are like anti-indexes: they help Oracle determine which blocks the requested data are NOT in
- Knowing which storage areas will NOT contain the requested data allows Exadata to skip IO requests to those disks
- Bypassing IO to disks reduces work required on the storage grid and improves performance
Page 59
60 Exadata Storage Indexes: Architecture
- Each disk in an Exadata storage cell is divided into 1 MB pieces called storage regions
- For each 1 MB storage region, data distribution statistics are held in a memory structure called a region index
- Region indexes maintain min and max values for the storage region, for up to 8 columns per table, as data is selected from the disk storage region
- Region indexes are maintained over time as data is accessed from the disk
- Storage indexes are maintained automatically by Exadata; there is no way to influence their behavior
- Storage server reboots erase storage index data, because region indexes are stored in memory
- The performance impact can be dramatic, but it is not deterministic, as it is subject to Exadata's automatic management of region indexes
Page 60
61 Exadata Storage Indexes: Architecture
[Diagram: region indexes for the ORDERS table. For the query select * from soe.orders where order_total > 60, each 1 MB storage region within an ASM AU on the ASM disk has a region index recording min/max values for order_total (min 1/max 22, min 62/max 100, min 1/max 60, min 1/max 101); only the regions whose ranges can contain values > 60 are read.]
Page 61
62 Exadata Storage Indexes: Scenarios
- Data needs to be well-ordered according to query predicates: if all regions contain randomly ordered column values, region index min and max values will be too widespread to eliminate any region from an IO request
- Since Exadata only maintains region indexes on 8 columns per table, your range of distinct predicates should be relatively small and static
- Storage indexes can return false positives, but they will never skip regions that could contain matching data and jeopardize query integrity
[Diagram: for select * from soe.orders where order_total between 10 and 15, regions with min/max of 1/22, 1/60, and 1/101 must be read; the 62/100 region is skipped.]
Page 62
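The min/max pruning in both scenarios above can be sketched directly. A minimal Python model of region indexes, using the example regions; region contents and the predicate ranges are taken from the two slide examples, everything else is illustrative:

```python
# Region-index pruning: keep (min, max) per 1 MB storage region, skip IO to
# regions whose range cannot contain the predicate range.
def build_region_index(regions):
    # regions: one list of column values per 1 MB storage region
    return [(min(r), max(r)) for r in regions]

def regions_to_read(region_index, lo, hi):
    # Read a region only if [lo, hi] overlaps its [min, max]. Overlap is
    # necessary but not sufficient, so false positives are possible, but a
    # region holding a matching row is never skipped.
    return [i for i, (rmin, rmax) in enumerate(region_index)
            if not (hi < rmin or lo > rmax)]

regions = [[1, 22], [62, 100], [1, 60], [1, 101]]   # example min/max contents
ridx = build_region_index(regions)

print(regions_to_read(ridx, 61, 10**9))   # order_total > 60        → [1, 3]
print(regions_to_read(ridx, 10, 15))      # between 10 and 15       → [0, 2, 3]
```

The second query shows why ordering matters: the wide 1..60 and 1..101 regions cannot be pruned even though they may contain no matching rows at all.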
63 Exadata Storage Indexes in Action This is the key statistic: cell physical IO bytes saved by storage index Page 63
64 Exadata Storage Indexes in Action
0 rows returned in 4.85 seconds; ~17 MB of IO saved by storage index; Smart Scan efficiency of 99.98%
Page 64
65 Exadata Storage Indexes in Action
0 rows returned; ~9 GB of IO saved by storage index; Smart Scan efficiency of 99.98%
Page 65
66 Exadata Storage Indexes in Action
0 rows returned in 0.09 seconds!! ~15 GB of IO saved by storage index
Page 66
67 Exadata Storage Indexes in Action 7.8 million rows returned in 4.23 seconds No storage index savings Page 67
68 Exadata Storage Indexes in Action
7.8 million rows returned in 1.37 seconds; over 12 GB of IO saved with storage indexes
Page 68
69 Exadata Storage Indexes in Action No storage index savings Page 69
70 Exadata Storage Indexes in Action No storage index savings Page 70
71 Exadata Storage Indexes and Bind Variables Storage indexes used Page 71
72 Exadata Storage Indexes and NULL values Storage index used Page 72
73 Exadata Storage Indexes with LIKE, BETWEEN, Wildcards No storage index savings Page 73
74 Exadata Storage Indexes with LIKE, BETWEEN, Wildcards 14GB of IO saved by storage index Page 74
75 Exadata Storage Indexes with LIKE, BETWEEN, Wildcards No storage index savings LIKE is fine, but Wildcards prevent storage index IO pruning Page 75
76 Exadata Storage Indexes with LIKE, BETWEEN, Wildcards Storage index used Page 76
77 Exadata Storage Indexes with LIKE, BETWEEN, Wildcards Storage index used but less IO saved with wildcards Page 77
78 Exadata Storage Indexes with LIKE, BETWEEN, Wildcards No storage index used Page 78
79 Exadata Storage Indexes with OLTP Tables Page 79
80 Exadata Storage Indexes with OLTP Tables
Eliminated over 4 GB of IO via storage index
Page 80
81 Exadata Storage Indexes with OLTP Tables
Ran in 0.26 seconds; eliminated over 4 GB of IO via storage index
Page 81
82 Exadata Storage Indexes with OLTP Tables Page 82
83 Exadata Storage Indexes with OLTP Tables
No storage index savings when the upper bound of the range is higher than the max value
Page 83
84 Exadata Storage Indexes with Normal Index Access Page 84
85 Exadata Storage Indexes with Normal Index Access No storage index Page 85
86 Exadata Storage Indexes with Normal Index Access Storage indexes are only used for cell offload functions Page 86
87 Disabling Storage Indexes
Storage indexes can be disabled by setting _kcfis_storageidx_disabled = TRUE
With storage indexes enabled: query ran in < 1 second; ~12 GB saved via storage index
Page 87
88 Disabling Storage Indexes
Storage indexes can be disabled by setting _kcfis_storageidx_disabled = TRUE
With storage indexes disabled: query ran in 4.71 seconds; 0 bytes saved by storage index
Page 88
89 Tracing Storage Indexes Storage index operations can be traced by setting _kcfis_storageidx_diag_mode = 2 Enable storage index tracing Indicates storage indexes were in use Page 89
90 Tracing Storage Indexes Page 90
91 Tracing Storage Indexes SQL_ID for given transaction DATA_OBJECT_ID Page 91
92 Tracing Storage Indexes
Trace annotations: strt = 0, end = 2048 (memory = 2K, storage region size = 1 MB); column #11; low and high values of the actual data
Page 92
93 Tracing Storage Indexes No storage index savings Page 93
94 Summary
- Storage indexes are designed to eliminate IO requests on the storage servers
- Storage indexes are automatically maintained in memory areas on the storage cells
- Exadata tracks minimum and maximum values in each storage region while processing IO requests and stores these region-level boundaries in region indexes
- Exadata examines region indexes to determine whether a storage region can be excluded from access based on a query predicate
- Storage indexes are automatically created and maintained
- Storage indexes work for cell offload functions (Smart Scan)
- Storage indexes can help performance dramatically and, at worst, will never hurt performance
- Storage indexes work on well-ordered tables, based on the query predicates issued against the tables
Page 94
95 Exadata Smart Flash Cache
- Smart Flash Cache goal: intelligently cache frequently used data
- Exadata caches in PCI flash cards on the storage cells
- Data is cached to improve IO response time and deliver better database performance
- Flash cards can deliver tens of thousands of I/Os per second; SAS or SATA drives can deliver a couple hundred I/Os per second
- On Exadata, Oracle only caches data likely to be requested again
Page 95
96 Exadata Smart Flash Cache: Hardware
- Four 96 GB PCI flash cards per cell, for 384 GB of PCI flash per cell
- 5.4 TB of flash on a full rack, 2.7 TB for a half rack, 1.15 TB for a quarter rack
- One cell can support 75,000 IOPs; a full rack, over 1,000,000 IOPs
Page 96
97 Exadata Smart Flash Cache: Software
- Smart Flash Cache provides a storage cell caching algorithm to cache appropriate data in the storage cell PCI flash cards
- Each database IO is tagged with metadata:
  - The CELL_FLASH_CACHE setting for the object(s): DEFAULT means Smart Flash Cache is used normally, KEEP means Smart Flash Cache is used more aggressively, and NONE means Smart Flash Cache is disabled for the object
  - A cache hint: CACHE means the IO should be cached, NOCACHE means it shouldn't be, and EVICT indicates that the cached data should be removed from Smart Flash Cache
- Smart Flash Cache takes the following into consideration when caching data:
  - IO size: large objects with CELL_FLASH_CACHE=DEFAULT are not cached
  - Current cache load: smart table scans are usually directed to disk, but if CELL_FLASH_CACHE=KEEP and the cache load is low, they may be satisfied from Smart Flash Cache
  - Specific operations (backups, Data Pump export/import, etc.) are not cached
Page 97
98 Exadata Smart Flash Cache: Software
- Smart Flash Cache is a write-through cache: after a write is acknowledged, the data is written to Smart Flash Cache if it is suitable for caching
- Write performance is neither improved nor diminished by Smart Flash Cache: because it is not a write-back cache, writes are not cushioned by the PCI flash cards; the writes must happen to disk. A small battery-backed cache on each cell performs write-back caching
- Technology comparisons:
  - EMC FastCache: EMC EFDs used as an extension of the storage processor controller cache, with write-back caching to cushion write IO; very good write performance
  - FusionIO: data placed directly on PCI flash cards; very high write performance
  - Exadata can match the write IO performance of EMC FastCache or FusionIO by using Smart Flash Cache cards for Flash Grid Disk storage, but be wary of depleting capacity for normal Smart Flash Cache!
Page 98
99 Smart Flash Cache: Write Operations
1. Database issues a write operation
2. CELLSRV inspects the IO metadata
3. Data is written to disk
4. The IO is acknowledged and the database process continues
5. If the IO is suitable for Smart Flash Cache, it is written to the flash cache
6. CELLSRV uses an LRU algorithm to determine which cached data to replace
Page 99
100 Smart Flash Cache: Read from Previously Cached Data
1. Database issues a read request
2. CELLSRV inspects the IO metadata
3. If the data exists in the flash cache, it is read from cache with no disk access
4. The read is satisfied from cache
Page 100
101 Smart Flash Cache: Read from Un-cached Data
1. Database issues a read operation
2. CELLSRV inspects the IO metadata
3. Data is read from disk
4. The read is acknowledged and the database process continues
5. If the IO is suitable for Smart Flash Cache, it is written to the flash cache
6. CELLSRV uses an LRU algorithm to determine which cached data to replace
Page 101
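The write and read flows above amount to a write-through cache with LRU replacement, which can be sketched in a few lines. This is an illustrative Python model, not CELLSRV's actual policy; the capacity and the cacheable flag (standing in for the IO-metadata checks) are invented.

```python
# Write-through cache sketch: writes hit disk before acknowledgement; suitable
# data is then staged in an LRU-managed flash cache; reads check flash first.
from collections import OrderedDict

class WriteThroughCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.flash = OrderedDict()       # LRU order: oldest entries first
        self.disk = {}

    def _cache(self, key, value):
        self.flash[key] = value
        self.flash.move_to_end(key)
        while len(self.flash) > self.capacity:
            self.flash.popitem(last=False)   # evict least recently used

    def write(self, key, value, cacheable=True):
        self.disk[key] = value               # acknowledged only after disk write
        if cacheable:                        # e.g. skip backups, large scans
            self._cache(key, value)

    def read(self, key):
        if key in self.flash:                # flash hit: no disk access
            self.flash.move_to_end(key)
            return self.flash[key], "flash"
        value = self.disk[key]               # miss: read disk, then cache it
        self._cache(key, value)
        return value, "disk"

c = WriteThroughCache(capacity=2)
c.write("a", 1)
c.write("big_backup", 2, cacheable=False)    # written to disk, not cached
print(c.read("a")[1], c.read("big_backup")[1])   # → flash disk
```

Note that the un-cacheable write still lands on disk and gets cached only once a later read proves it is being re-requested, mirroring the flow in the slides.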
102 Smart Flash Cache in Action: Writes exa_fc_mystat.sql Page 102
103 Smart Flash Cache in Action: Writes Page 103
104 Smart Flash Cache in Action: Writes Before updating, 97 flash cache hits 1000 rows updated Performed index range scan Page 104
105 Smart Flash Cache in Action: Writes
Query statistics: the update yielded (368 - 97) = 271 flash cache read hits
Page 105
106 Smart Flash Cache in Action: Writes No physical reads No flash cache reads Page 106
107 Smart Flash Cache in Action: Writes
180 physical reads; 52 - 1 = 51 cell flash cache read hits
Page 107
108 Smart Flash Cache in Action: Reads with Un-cached Data
1,072 flash cache hits for dbm1 before the run; 4,452 flash cache hits for dbm1 after = 3,380 flash cache hits
Page 108
109 Smart Flash Cache in Action: Reads with Cached Data Ran in seconds flash cache read hits Page 109
110 Smart Flash Cache in Action: Dropping Flash Cache Page 110
111 Smart Flash Cache in Action: Dropping Flash Cache Ran in 1:24.36 No flash cache read hits Page 111
112 Smart Flash Cache in Action: Dropping Flash Cache Page 112
113 Smart Flash Cache in Action: Smart Scan Data
~12 MB cached per cell for the index; ~18 MB cached for one of the table partitions
Page 113
114 Smart Flash Cache in Action: Smart Scan Data
We're using about 350 MB per cell currently
Page 114
115 Smart Flash Cache in Action: Smart Scan Data Page 115
116 Smart Flash Cache in Action: Smart Scan Data Very small delta in flash cache read hits Page 116
117 Smart Flash Cache in Action: Smart Scan Data Page 117
118 CELL_FLASH_CACHE KEEP Page 118
119 CELL_FLASH_CACHE KEEP
Query time dropped to 28 seconds; cell flash cache read hits jumped from 136,709 to 252,095
Page 119
120 CELL_FLASH_CACHE KEEP
About 26 GB per cell per partition
Page 120
121 CELL_FLASH_CACHE KEEP Page 121
122 Summary
- Exadata Smart Flash Cache is a caching mechanism delivered by Exadata
- Smart Flash Cache uses PCI flash cards on each storage cell to cache appropriate data
- Each storage cell has 384 GB of PCI flash available for Smart Flash Cache; a full rack delivers 5.4 TB of flash storage
- Smart Flash Cache intelligently caches appropriate (OLTP) data
- Smart Flash Cache is a write-through cache: writes must be performed on disk before being acknowledged to the foreground process. After writes and initial reads, Exadata determines whether the data is eligible for caching and writes it to the flash cache; subsequent IO benefits from Smart Flash Cache
- Segments can be tagged to be kept in Smart Flash Cache, similar to the buffer cache KEEP pool
- Grid Disks (and ASM disk groups) can be created on flash disks, improving performance in some cases
- We recommend using all flash storage for Smart Flash Cache
Page 122
123 Oracle 11g OLTP Compression
VENDOR_ID VEND_NAME  STATE VNDR_RATING VENDOR_TYPE
========= ========== ===== =========== ===========
100       ACME ONE   MI    100         DIRECT
101       ACME ONE   CA    90          DIRECT
102       NORTON     IA    95          INDIRECT
103       WINGDINGS  MI    96          INDIRECT
104       WINGDINGS  GA    96          INDIRECT
[Block diagrams: in an uncompressed block, rows are inserted as-is into the data area with no compression, consuming free space. In a compressed block, duplicate values (ACME ONE, DIRECT, WINGDINGS, 96, INDIRECT) are moved into a symbol table after the block header; rows store short references into the symbol table, so as rows are inserted only unique values are stored in the block, resulting in more free space.]
Page 123
124 Oracle 11g OLTP Compression
- When the first row is inserted into a block, it is not compressed
- Subsequent row inserts check for duplicates and move duplicate values to the symbol table
- After free space in the block is exhausted, the block is fully compressed
- There is a slight CPU overhead in compressing data: Oracle has to build and maintain the symbol table
- The unit of compression for OLTP Advanced Compression is the block
Page 124
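The symbol-table mechanism above can be sketched as a tiny encoder. This is an illustrative Python model of per-block dedup, not Oracle's on-disk block format; the encoding (one small integer reference per value) is invented, and the sample values echo the VENDOR example.

```python
# Per-block symbol table: each distinct value is stored once, rows hold
# references into the table instead of repeating the value.
def compress_block(rows):
    symbols = {}                      # value -> symbol id, in insertion order
    encoded = []
    for row in rows:
        enc_row = []
        for value in row:
            if value not in symbols:
                symbols[value] = len(symbols)   # first occurrence: add to table
            enc_row.append(symbols[value])      # duplicates become references
        encoded.append(enc_row)
    return symbols, encoded

rows = [
    ("ACME ONE", "MI", "DIRECT"),
    ("ACME ONE", "CA", "DIRECT"),
    ("NORTON",   "IA", "INDIRECT"),
]
symbols, encoded = compress_block(rows)
# 9 cell values collapse to 7 distinct symbols plus small integer references
print(len(symbols), encoded[1])   # → 7 [0, 3, 2]
```

Row 2 shows the effect: ACME ONE and DIRECT already exist in the table, so only CA adds a new symbol.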
125 Exadata Hybrid Columnar Compression Data is stored by columns in a compression unit. A compression unit is a set of blocks, not a single block. Basic compression and Advanced Compression load data into blocks row by row; as rows are added, compression algorithms are invoked to reference or update symbol tables, and duplicate values are not inserted. HCC examines data before it is placed in a block and divides the data into arrays of columns: HCC takes a stream of input values and places all values for column 1 in array 1, all values for column 2 in array 2, and so on. HCC takes four 8K blocks of data in the stream by default. HCC then performs its de-duplication process on the entire 4-block stream, so the odds of finding duplicate data are greatly increased over standard or Advanced Compression, because we're considering a larger subset of data. HCC then performs compression on the array values and inserts the columns into a compression unit (a set of blocks), not row by row. Page 125
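The column-major rearrangement described above can be shown with a small Python sketch (simplified; a real compression unit spans several blocks and uses Oracle's own encodings):

```python
def to_column_arrays(rows):
    """Pivot a stream of rows into one array per column, as HCC does
    before compressing each column array."""
    return [list(col) for col in zip(*rows)]

rows = [
    ("100", "ACME ONE", "DIRECT"),
    ("101", "ACME ONE", "DIRECT"),
    ("102", "NORTON", "INDIRECT"),
]
cols = to_column_arrays(rows)
# Duplicates now sit adjacently within each column array, so the
# de-duplication pass sees far more repeats than a row-major block would.
```

Here `cols[1]` holds every VEND_NAME value together and `cols[2]` every VENDOR_TYPE value together, which is why considering the whole multi-block stream raises the odds of finding duplicates.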
126 Exadata Hybrid Columnar Compression

VENDOR_ID  VEND_NAME  STATE  VNDR_RATING  VENDOR_TYPE
=========  =========  =====  ===========  ===========
100        ACME ONE   MI     100          DIRECT
101        ACME ONE   CA     90           DIRECT
102        NORTON     IA     95           INDIRECT
103        WINGDINGS  MI     96           INDIRECT
104        WINGDINGS  GA     96           INDIRECT

[diagram: an uncompressed block stores the rows contiguously; with Hybrid Columnar Compression, a logical compression unit holds a CU header followed by column-major data: all VENDOR_ID values together, then VEND_NAME, VNDR_RATING, STATE, VENDOR_TYPE, and so on for the remaining columns] Page 126
127 Exadata Hybrid Columnar Compression [diagram: Compression Unit #1: 4 blocks holding the first 50,000 rows, with each column (COL1 through COL10) stored as its own array. Compression Unit #2: a new set of 4 blocks, 48,500 rows. Compression Unit #3: a new set of 4 blocks, 51,000 rows.] Page 127
128 Exadata Hybrid Columnar Compression Recall, data is stored by columns inside a compression unit. The greater the degree of duplicate column values, the less space required per compression unit and the fewer compression units required to store the data. If queries select a single column or a subset of columns, Oracle only needs to read the blocks in the compression units on which those columns exist. This is different than other types of compression and uncompressed tables. Not only are we saving space, but we're saving IO; saving IO means better performance! Page 128
129 Exadata Hybrid Columnar Compression and DML EHCC compresses data for bulk direct path loads only. Subsequent bulk-load inserts append new compression units. Single-row inserts are stored OLTP-compressed. DELETEs against HCC tables lock the entire CU. When updating EHCC tables: the updated row is moved (i.e., deleted and re-inserted, i.e., migrated); the new row is OLTP-compressed; locks impact the entire CU, not just the row! DML on EHCC tables is very expensive! Page 129
130 Exadata Hybrid Columnar Compression Types QUERY LOW compression is recommended for data warehouse tables in which data load times are important QUERY HIGH is recommended for data warehouse data in which space savings is important. Offers better compression ratio than QUERY LOW but more expensive ARCHIVE LOW is intended for archive data in which load times are important. Better compression than QUERY compression, in most cases, but more overhead for DML ARCHIVE HIGH is designed for archival data in which space savings is the most important factor. Best compression ratio, most expensive algorithm Page 130
131 Using the Compression Advisor DBMS_COMPRESSION.GET_COMPRESSION_RATIO is Oracle's compression advisor. Its output indicates the compression type; in this case, EHCC archive low compression. comp_for_archive_low = EHCC Archive Low; comp_for_archive_high = EHCC Archive High; comp_for_query_low = EHCC Query Low; comp_for_query_high = EHCC Query High; comp_for_oltp = OLTP Advanced Compression Page 131
132 Using the Compression Advisor DBMS_COMPRESSION.GET_COMPRESSION_RATIO is Oracle's compression advisor. OLTP Compression gives a low 1.1 compression ratio Page 132
133 Using the Compression Advisor DBMS_COMPRESSION.GET_COMPRESSION_RATIO is Oracle's compression advisor. EHCC Query Low Compression gives a 2.7 compression ratio Page 133
134 Using the Compression Advisor DBMS_COMPRESSION.GET_COMPRESSION_RATIO is Oracle's compression advisor. EHCC Query High Compression gives a 5.0 compression ratio Page 134
135 Using the Compression Advisor DBMS_COMPRESSION.GET_COMPRESSION_RATIO is Oracle's compression advisor. EHCC Archive High Compression gives a 9.2 compression ratio Page 135
136 Hybrid Columnar Compression in Action Uncompressed table: SOE.CUSTOMERS. Use parallel CTAS (CREATE TABLE AS SELECT) to create 5 new compressed tables: one compressed for OLTP, one EHCC-compressed for query low, one compressed for query high, one compressed for archive low, and one compressed for archive high. Measure the time it takes to create each compressed table, to run a sample SELECT against it, to bulk insert rows into it, and to update rows on it.

Table       Estimated Blocks
CUSTOMERS   19,670
CUST_OLTP   17,881
CUST_QLOW    7,285
CUST_QHIGH   3,934
CUST_ALOW    3,391
CUST_AHIGH   2,138

Page 136
137 Storage Impact: Different Compression Scenarios Page 137
138 Storage Impact: Different Compression Scenarios In our example, archive compression is not as efficient as HCC query high compression Page 138
139 Storage Impact: Different Compression Scenarios Good estimates from DBMS_COMPRESSION:

Table       Estimated Blocks
CUSTOMERS   19,670
CUST_OLTP   17,881
CUST_QLOW    7,285
CUST_QHIGH   3,934
CUST_ALOW    3,391
CUST_AHIGH   2,138

Page 139
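The advisor's estimates on this page follow directly from the ratios reported earlier: divide the uncompressed block count by the predicted compression ratio. A quick arithmetic check (using only the figures shown in these slides):

```python
# Estimated compressed size = uncompressed blocks / advisor ratio,
# using the ratios DBMS_COMPRESSION reported on the preceding pages.
uncompressed_blocks = 19670
ratios = {
    "OLTP": 1.1,          # CUST_OLTP
    "QUERY LOW": 2.7,     # CUST_QLOW
    "QUERY HIGH": 5.0,    # CUST_QHIGH
    "ARCHIVE HIGH": 9.2,  # CUST_AHIGH
}
estimates = {name: int(uncompressed_blocks / r) for name, r in ratios.items()}
# Reproduces the estimated block counts in the table above:
# 17,881 / 7,285 / 3,934 / 2,138
```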
140 Performance Impact: Creating Compressed Tables Page 140
141 Performance Impact: Creating Compressed Tables Page 141
142 Performance Impact: Creating Compressed Tables [chart: Create Table Time (CUSTOMERS) for OLTP, QUERYLOW, QUERYHIGH, ARCHIVELOW, ARCHIVEHIGH] Page 142
143 Performance Impact: Creating Compressed Tables [chart: Create Table Time (MYOBJ) for OLTP, QUERYLOW, QUERYHIGH, ARCHIVELOW, ARCHIVEHIGH] Page 143
144 Performance Impact: Querying Compressed Tables We expect least IO and best full-scan time against this table Page 144
145 Performance Impact: Querying Compressed Tables Query ran in 4.63 seconds, ~600 MB of IO, 96% smart scan efficiency Page 145
146 Performance Impact: Querying Compressed Tables 4.91 seconds, ~700 MB of IO, 17% less LIO compared to uncompressed, 94.46% smart scan efficiency Page 146
147 Performance Impact: Querying Compressed Tables 3.03 second runtime 97MB over interconnect Less CPU and LIO 99.07% smart scan efficiency Page 147
148 Performance Impact: Querying Compressed Tables Query ran in 2.07 seconds 35 MB over interconnect Less CPU and LIO 99.52% smart scan efficiency Page 148
149 Performance Impact: Querying Compressed Tables Ran in 2.08 seconds 19MB over interconnect Even less CPU and LIO 99.74% smart scan efficiency Page 149
150 Performance Impact: Querying Compressed Tables 2.06 seconds 17MB over interconnect 99.76% smart scan efficiency Page 150
151 Performance Impact: Querying Compressed Tables Page 151
152 Performance Impact: Updating Compressed Tables Page 152
153 Performance Impact: Updating Compressed Tables Took the longest amount of time More CPU required to update HCC compressed tables Page 153
154 Proving Oracle migrated HCC Updated Rows Page 154
155 Proving Oracle migrated HCC Updated Rows Row location: File 11 Block Slot 165 Page 155
156 Proving Oracle migrated HCC Updated Rows Page 156
157 Proving Oracle migrated HCC Updated Rows Page 157
158 Proving Oracle migrated HCC Updated Rows Value=1 means fetch was done via migrated row Page 158
159 Examining the Compression Unit Archive high # Blocks/CU = 32 Cols/Rows CU Length Page 159
160 Examining the Compression Unit Page 160
161 Examining the Compression Unit Archive high CU size = 7 blocks, smaller than the previous block dump; 1 deleted row, slot 165 Page 161
162 HCC Decompression HCC compresses data in-flight as it's inserted via direct path. Where and when is the data decompressed when queried? Definitions: Upper Half = Compute Node; Lower Half = Storage Server. HCC data is decompressed on the lower half when queried via Smart Scan (i.e., full scan, i.e., serial direct read): uncompressed data is transmitted over the interconnect, and storage server CPUs are used for decompression. HCC data is decompressed on the upper half when queried without Smart Scan (i.e., single-block reads): compressed data is transmitted over the interconnect, and compute node CPUs are used for decompression. Plan for CPU impact and application design! Page 162
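The rule above reduces to a tiny decision helper. This is a hypothetical illustration of the rule, not an Oracle API:

```python
def decompression_site(smart_scan: bool) -> str:
    """Where HCC decompression occurs, per the Smart Scan rule:
    Smart Scan (full scan / serial direct read) decompresses on the
    storage cells and ships uncompressed rows over the interconnect;
    without Smart Scan (single-block reads), compressed CUs travel over
    the interconnect and compute node CPUs decompress them."""
    if smart_scan:
        return "lower half: storage cell CPUs"
    return "upper half: compute node CPUs"
```

This is why an index-driven OLTP workload against HCC tables shifts decompression CPU cost onto the compute nodes, as the traces on the following pages show.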
163 HCC Decompression 42 cs of CPU time on compute node Page 163
164 HCC Decompression Index range scan Page 164
165 HCC Decompression 382 cs of CPU time Page 165
166 Hybrid Columnar Compression Summary Exadata Hybrid Columnar Compression is a unique compression feature available only on Exadata. Hybrid Columnar Compression compresses tables by column for a set of rows and stores the result inside a logical compression unit. The unit of compression for basic and OLTP Advanced Compression is a database block; the unit of compression for Hybrid Columnar Compression is a compression unit. Oracle provides 4 different flavors of Hybrid Columnar Compression: compress for query low, compress for query high, compress for archive low, and compress for archive high. IO against Hybrid Columnar Compressed tables is reduced compared to uncompressed tables, so SELECTs against Hybrid Columnar Compressed tables can run much faster: less IO! Rows need to be loaded into HCC tables using DIRECT PATH load. DML against HCC tables can be expensive: single-row INSERT operations use OLTP compression, direct path inserts perform HCC compression, and UPDATEs migrate rows (and can be expensive!) Page 166
167 Exadata IO Resource Management IO Resource Management (IORM) provides a means to govern and meter IO from different workloads in the Exadata Storage Server. Database consolidation is a key driver of customer adoption of Exadata. Consolidation means that multiple databases and applications could share Exadata storage, and different databases in a shared Exadata storage grid could have different IO performance requirements. One of the common challenges with shared storage infrastructure is that of competing IO workloads: batch vs. OLTP, warehouse vs. OLTP, production vs. test and development. You can mitigate competing priorities by over-provisioning storage, but this becomes expensive. Exadata addresses this challenge with IO Resource Management. Page 167
168 Database Resource Management A single database may have many types of workloads with different performance requirements. Resource consumer groups allow you to group sessions by workload. After creating resource consumer groups, you specify how resources are used within each resource consumer group. Once resource consumer groups are established, you must map sessions to a consumer group based on distinguishing characteristics. The combination of resource consumer groups and session mappings comprises a resource plan. Only one resource plan can be active in a database at a time. A database resource plan is also called an intradatabase resource plan. Let's show an example. Page 168
169 Database Resource Management - Example [diagram: Database DBM with consumer groups OM OLTP, Other OLTP, and Reporting; Database XBM with consumer groups Online query and Batch query] Page 169
170 Database Resource Management - Example [diagram: the same consumer groups grouped into categories: the Interactive category contains OM OLTP and Other OLTP (DBM) and Online query (XBM); the Batch category contains Reporting (DBM) and Batch query (XBM)] Page 170
171 IO Resource Management Plans IORM provides different approaches for managing resource allocations. If you have multiple workloads within a database whose resource usage you wish to control, you need to configure an intradatabase resource plan. If you only have one database in your Exadata Database Machine, your intradatabase resource plan is all you need: IO resource management is handled automatically inside the storage servers based on this intradatabase resource plan. If you have multiple databases in your Exadata Database Machine whose IO resources you wish to govern, you create an interdatabase resource plan. Rules in an interdatabase resource plan specify allocations to databases, not consumer groups. Category resource management is used when you want to control resources primarily by the category of work being done: it allows for allocation of resources amongst categories spanning multiple databases. An IORM plan is the combination of an interdatabase plan and a category plan. Page 171
172 IORM Architecture [diagram: CELLSRV IO queues 1 through 4] Page 172
173 IORM Architecture Resource Plans IORM schedules IO according to resource plans onto disk queues Page 173
174 IORM Architecture: Rules IORM is only engaged when needed: IORM does not intervene if there is only one active consumer group on one database. Any disk allocation that is not fully utilized is made available to other workloads in relation to the configured resource plans. Background IO is scheduled based on its priority relative to user IO: redo and control file writes always take precedence, and DBWR writes are scheduled at the same priority as user IO. For each cell disk, each database accessing the cell has one IO queue per consumer group and three background queues. Background IO queues are mapped to high, medium, and low priority requests, with different IO types mapped to each queue. If no intradatabase plan is set, all non-background IO requests are grouped into a single consumer group called OTHER_GROUPS. Page 174
175 IORM in Action: Planning DBM has three consumer groups: OM OLTP, OTHER OLTP, and REPORTING. XBM will have two consumer groups: ONLINE QUERY and BATCH QUERY. DBM Intradatabase Resource Plan: 50% of resources allocated to OM OLTP, 30% to OTHER OLTP, 20% to REPORTING. XBM Intradatabase Resource Plan: 70% of resources allocated to ONLINE QUERY, 30% to BATCH QUERY. Page 175
176 IORM in Action: Planning Category Plan: 70% of resources allocated to the INTERACTIVE category (OM OLTP and OTHER OLTP in INTERACTIVE for DBM; ONLINE QUERY in INTERACTIVE for XBM); 30% of resources allocated to the BATCH category (REPORTING in BATCH for DBM; BATCH QUERY in BATCH for XBM). Interdatabase Plan: 60% of resources allocated to database DBM, 40% to database XBM. Page 176
177 IORM in Action: Understanding the Math All User IO = 100%. Category Plan: 70% Interactive, 30% Batch. Interdatabase Plan: 60% DBM, 40% XBM within each category. Intradatabase Plans: DBM 50% / 30% / 20%, XBM 70% / 30%. Resulting IORM Allocation:

DBM OM OLTP       26.25%
DBM OTHER OLTP    15.75%
XBM ONLINE QUERY  28.00%
DBM REPORTING     18.00%
XBM BATCH QUERY   12.00%

Page 177
178 IORM in Action: Understanding the Math CG% = (Intra CG% / sum (X)) * db% * cat% CG% = IORM determined resource allocation for consumer group sessions Intra CG% = resource allocation for consumer group within an intradatabase plan X = sum of intradatabase consumer group allocations for all consumer groups in the same category Db% = percentage of database allocation in the interdatabase plan cat% = percentage of resource allocation for the category in which the consumer group belongs Page 178
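Applying this formula to the example plans reproduces the allocations on the preceding page. A sketch in Python (the percentages are this example's, not Exadata defaults):

```python
def cg_alloc(intra_cg, same_category_sum, db_pct, cat_pct):
    """CG% = (Intra CG% / sum(X)) * db% * cat%, as defined above.
    All inputs are fractions (0.50 = 50%)."""
    return (intra_cg / same_category_sum) * db_pct * cat_pct

# DBM intraplan: OM OLTP 50%, OTHER OLTP 30% (both INTERACTIVE),
# REPORTING 20% (BATCH). XBM intraplan: ONLINE QUERY 70% (INTERACTIVE),
# BATCH QUERY 30% (BATCH). Interdatabase: DBM 60%, XBM 40%.
# Category plan: INTERACTIVE 70%, BATCH 30%.
alloc = {
    "DBM OM OLTP":      cg_alloc(0.50, 0.50 + 0.30, 0.60, 0.70),  # 26.25%
    "DBM OTHER OLTP":   cg_alloc(0.30, 0.50 + 0.30, 0.60, 0.70),  # 15.75%
    "DBM REPORTING":    cg_alloc(0.20, 0.20,        0.60, 0.30),  # 18.00%
    "XBM ONLINE QUERY": cg_alloc(0.70, 0.70,        0.40, 0.70),  # 28.00%
    "XBM BATCH QUERY":  cg_alloc(0.30, 0.30,        0.40, 0.30),  # 12.00%
}
# The five consumer-group allocations sum to 100% of user IO.
```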
179 IORM in Action Page 179
180 IORM in Action: Configuration on DBM Database Page 180
181 IORM in Action: Configuration on XBM Database Page 181
182 IORM in Action: On the Exadata Cells Page 182
183 IORM in Action: On the Exadata Cells Page 183
184 Mapping Sessions to Consumer Groups in DBM Page 184
185 Validate Consumer Group Mapping in DBM Page 185
186 Mapping Sessions to Consumer Groups in XBM Page 186
187 Validate Consumer Group Mapping in XBM Page 187
188 Using Cell database IO metrics

Metric Name                         Meaning
DB_IO_RQ_SM / DB_IO_RQ_LG           Total number of small/large IO requests issued by the database since any resource plan was set (use to measure DB load)
DB_IO_RQ_SM_SEC / DB_IO_RQ_LG_SEC   Small/large IO requests per second issued by the database in the last minute (use to measure DB load)
DB_IO_WT_SM / DB_IO_WT_LG           Total number of seconds that small/large IO requests issued by the database waited to be scheduled by IORM (use to monitor what databases waited for queued IO)

Page 188
189 Using Cell database IO metrics Small IO load per database Large IO load per database Page 189
190 Using Cell database IO metrics Small IO load per database/ sec, last minute Large IO load per database/ sec, last minute Page 190
191 Using Cell database IO metrics Small IO Waits/DB/ Min Large IO Waits/DB/ Min Page 191
192 Testing our IORM Plan Page 192
193 Imposing Database Limits with IORM Page 193
194 Imposing Database Limits with IORM Page 194
195 Imposing Database Limits with IORM Page 195
196 Imposing Database Limits with IORM Page 196
197 Summary IO Resource Management is an Exadata feature designed to govern IO to the Exadata Storage Cells. IORM is used in conjunction with DBRM. Consumer groups, resource plans, resource group mappings, and directives are part of an intradatabase resource plan, i.e., resource allocation management within a database. Interdatabase plans are used to control IO resource allocation between multiple databases sharing an Exadata storage server. Category plans are a way to group resource consumer groups from an intradatabase plan into an IORM plan and further control IO resource allocation in the Exadata storage servers. If you're consolidating on Exadata, you should be using IORM. Instance caging is a way to control CPU utilization for a database in an Exadata compute grid. Page 197
198 Top Reasons to Migrate to Exadata Your database requires the extreme performance that Exadata can deliver. You have business goals that can only be met by Exadata's extreme performance and consolidation capabilities. You need to consolidate your database tier platform into a single high-performance, high-capacity, fault-tolerant platform. You want to reduce the number of people and the amount of time required to build and maintain Oracle database tier infrastructures. Your capacity or performance requirements have placed you in a position to evaluate enterprise database platform re-architecting. You are looking to simplify your database tier infrastructure into a common technology stack with a single point of contact for support and management. Exadata is a business enabler. Page 198
Performance Study Performance Characteristics of and RDM VMware ESX Server 3.0.1 VMware ESX Server offers three choices for managing disk access in a virtual machine VMware Virtual Machine File System
More informationORACLE DATA SHEET RELATED PRODUCTS AND SERVICES RELATED PRODUCTS
ORACLE EXADATA STORAGE EXPANSION RACK FEATURES AND FACTS FEATURES Grow the storage capacity of Oracle Exadata Database Machines and Oracle SPARC SuperCluster Includes from 4 to 18 Oracle Exadata Storage
More informationOracle Maximum Availability Architecture with Exadata Database Machine. Morana Kobal Butković Principal Sales Consultant Oracle Hrvatska
Oracle Maximum Availability Architecture with Exadata Database Machine Morana Kobal Butković Principal Sales Consultant Oracle Hrvatska MAA is Oracle s Availability Blueprint Oracle s MAA is a best practices
More informationPerformance Baseline of Hitachi Data Systems HUS VM All Flash Array for Oracle
Performance Baseline of Hitachi Data Systems HUS VM All Flash Array for Oracle Storage and Database Performance Benchware Performance Suite Release 8.5 (Build 131015) November 2013 Contents 1 System Configuration
More informationORACLE EXADATA DATABASE MACHINE X4-2
ORACLE EXADATA DATABASE MACHINE X4-2 FEATURES AND FACTS FEATURES Up to 192 CPU cores and 4 TB memory for database processing Up to 168 CPU cores dedicated to SQL processing in storage From 2 to 8 database
More informationORACLE EXADATA DATABASE MACHINE X5-2
ORACLE EXADATA DATABASE MACHINE X5-2 The Oracle Exadata Database Machine is engineered to be the highest performing, most cost effective and most available platform for running Oracle Database. Exadata
More informationSAP HANA - Main Memory Technology: A Challenge for Development of Business Applications. Jürgen Primsch, SAP AG July 2011
SAP HANA - Main Memory Technology: A Challenge for Development of Business Applications Jürgen Primsch, SAP AG July 2011 Why In-Memory? Information at the Speed of Thought Imagine access to business data,
More informationFlash Performance for Oracle RAC with PCIe Shared Storage A Revolutionary Oracle RAC Architecture
Flash Performance for Oracle RAC with PCIe Shared Storage Authored by: Estuate & Virident HGST Table of Contents Introduction... 1 RAC Share Everything Architecture... 1 Oracle RAC on FlashMAX PCIe SSDs...
More informationOracle Database 12c Built for Data Warehousing O R A C L E W H I T E P A P E R F E B R U A R Y 2 0 1 5
Oracle Database 12c Built for Data Warehousing O R A C L E W H I T E P A P E R F E B R U A R Y 2 0 1 5 Contents Executive Summary 1 Overview 2 A Brief Introduction to Oracle s Information Management Reference
More informationAccelerating Enterprise Applications and Reducing TCO with SanDisk ZetaScale Software
WHITEPAPER Accelerating Enterprise Applications and Reducing TCO with SanDisk ZetaScale Software SanDisk ZetaScale software unlocks the full benefits of flash for In-Memory Compute and NoSQL applications
More informationNovinky v Oracle Exadata Database Machine
ORACLE PRODUCT LOGO Novinky v Oracle Exadata Database Machine Gabriela Hečková 1 Copyright 2012, Oracle and/or its affiliates. All rights reserved. Agenda Exadata vývoj riešenia Nové vlastnosti Management
More informationDirect NFS - Design considerations for next-gen NAS appliances optimized for database workloads Akshay Shah Gurmeet Goindi Oracle
Direct NFS - Design considerations for next-gen NAS appliances optimized for database workloads Akshay Shah Gurmeet Goindi Oracle Agenda Introduction Database Architecture Direct NFS Client NFS Server
More informationBenchmarking Cassandra on Violin
Technical White Paper Report Technical Report Benchmarking Cassandra on Violin Accelerating Cassandra Performance and Reducing Read Latency With Violin Memory Flash-based Storage Arrays Version 1.0 Abstract
More informationORACLE SUPERCLUSTER T5-8
ORACLE SUPERCLUSTER T5-8 ENGINEERED SYSTEM FOR DATABASES AND APPLICATIONS KEY FEATURES Up to 256 compute processors and 4 TB of memory in a single rack Supports Oracle Solaris 11, Oracle Solaris 10, Oracle
More information<Insert Picture Here> Refreshing Your Data Protection Environment with Next-Generation Architectures
1 Refreshing Your Data Protection Environment with Next-Generation Architectures Dale Rhine, Principal Sales Consultant Kelly Boeckman, Product Marketing Analyst Program Agenda Storage
More informationHow to Migrate your Database to Oracle Exadata. Noam Cohen, Oracle DB Consultant, E&M Computing
How to Migrate your Database to Oracle Exadata Noam Cohen, Oracle DB Consultant, E&M Computing Who am I Working with Oracle Since 2000 Versions 8.0 11g Consulting on all areas from Infrastructure to Application
More informationOracle Database Scalability in VMware ESX VMware ESX 3.5
Performance Study Oracle Database Scalability in VMware ESX VMware ESX 3.5 Database applications running on individual physical servers represent a large consolidation opportunity. However enterprises
More informationOracle Big Data SQL Technical Update
Oracle Big Data SQL Technical Update Jean-Pierre Dijcks Oracle Redwood City, CA, USA Keywords: Big Data, Hadoop, NoSQL Databases, Relational Databases, SQL, Security, Performance Introduction This technical
More informationComparing SMB Direct 3.0 performance over RoCE, InfiniBand and Ethernet. September 2014
Comparing SMB Direct 3.0 performance over RoCE, InfiniBand and Ethernet Anand Rangaswamy September 2014 Storage Developer Conference Mellanox Overview Ticker: MLNX Leading provider of high-throughput,
More informationEMC Unified Storage for Microsoft SQL Server 2008
EMC Unified Storage for Microsoft SQL Server 2008 Enabled by EMC CLARiiON and EMC FAST Cache Reference Copyright 2010 EMC Corporation. All rights reserved. Published October, 2010 EMC believes the information
More informationHigh Performance Oracle RAC Clusters A study of SSD SAN storage A Datapipe White Paper
High Performance Oracle RAC Clusters A study of SSD SAN storage A Datapipe White Paper Contents Introduction... 3 Disclaimer... 3 Problem Statement... 3 Storage Definitions... 3 Testing Method... 3 Test
More informationThe Revival of Direct Attached Storage for Oracle Databases
The Revival of Direct Attached Storage for Oracle Databases Revival of DAS in the IT Infrastructure Introduction Why is it that the industry needed SANs to get more than a few hundred disks attached to
More informationAn Oracle White Paper June 2012. High Performance Connectors for Load and Access of Data from Hadoop to Oracle Database
An Oracle White Paper June 2012 High Performance Connectors for Load and Access of Data from Hadoop to Oracle Database Executive Overview... 1 Introduction... 1 Oracle Loader for Hadoop... 2 Oracle Direct
More information<Insert Picture Here>
1 Database Technologies for Archiving Kevin Jernigan, Senior Director Product Management Advanced Compression, EHCC, DBFS, SecureFiles, ILM, Database Smart Flash Cache, Total Recall,
More informationMaximum Availability Architecture
Oracle Data Guard: Disaster Recovery for Sun Oracle Database Machine Oracle Maximum Availability Architecture White Paper April 2010 Maximum Availability Architecture Oracle Best Practices For High Availability
More informationMichael Kagan. michael@mellanox.com
Virtualization in Data Center The Network Perspective Michael Kagan CTO, Mellanox Technologies michael@mellanox.com Outline Data Center Transition Servers S as a Service Network as a Service IO as a Service
More informationEnkitec Exadata Storage Layout
Enkitec Exadata Storage Layout 1 Randy Johnson Principal Consultant, Enkitec LP. 20 or so years in the IT industry Began working with Oracle RDBMS in 1992 at the launch of Oracle 7 Main areas of interest
More informationSolving I/O Bottlenecks to Enable Superior Cloud Efficiency
WHITE PAPER Solving I/O Bottlenecks to Enable Superior Cloud Efficiency Overview...1 Mellanox I/O Virtualization Features and Benefits...2 Summary...6 Overview We already have 8 or even 16 cores on one
More informationOptimize Oracle Business Intelligence Analytics with Oracle 12c In-Memory Database Option
Optimize Oracle Business Intelligence Analytics with Oracle 12c In-Memory Database Option Kai Yu, Senior Principal Architect Dell Oracle Solutions Engineering Dell, Inc. Abstract: By adding the In-Memory
More informationOracle Exadata Database Machine Aké jednoznačné výhody prináša pre finančné inštitúcie
Oracle Exadata Database Machine Aké jednoznačné výhody prináša pre finančné inštitúcie Gabriela Hečková Technology Sales Consultant, Engineered Systems Oracle Slovensko Copyright 2014 Oracle and/or its
More informationInfrastructure Matters: POWER8 vs. Xeon x86
Advisory Infrastructure Matters: POWER8 vs. Xeon x86 Executive Summary This report compares IBM s new POWER8-based scale-out Power System to Intel E5 v2 x86- based scale-out systems. A follow-on report
More informationBenchmarking Hadoop & HBase on Violin
Technical White Paper Report Technical Report Benchmarking Hadoop & HBase on Violin Harnessing Big Data Analytics at the Speed of Memory Version 1.0 Abstract The purpose of benchmarking is to show advantages
More informationHP ProLiant BL660c Gen9 and Microsoft SQL Server 2014 technical brief
Technical white paper HP ProLiant BL660c Gen9 and Microsoft SQL Server 2014 technical brief Scale-up your Microsoft SQL Server environment to new heights Table of contents Executive summary... 2 Introduction...
More informationSAN Conceptual and Design Basics
TECHNICAL NOTE VMware Infrastructure 3 SAN Conceptual and Design Basics VMware ESX Server can be used in conjunction with a SAN (storage area network), a specialized high speed network that connects computer
More informationEMC XtremSF: Delivering Next Generation Storage Performance for SQL Server
White Paper EMC XtremSF: Delivering Next Generation Storage Performance for SQL Server Abstract This white paper addresses the challenges currently facing business executives to store and process the growing
More informationSUN HARDWARE FROM ORACLE: PRICING FOR EDUCATION
SUN HARDWARE FROM ORACLE: PRICING FOR EDUCATION AFFORDABLE, RELIABLE, AND GREAT PRICES FOR EDUCATION Optimized Sun systems run Oracle and other leading operating and virtualization platforms with greater
More informationIn-memory Tables Technology overview and solutions
In-memory Tables Technology overview and solutions My mainframe is my business. My business relies on MIPS. Verna Bartlett Head of Marketing Gary Weinhold Systems Analyst Agenda Introduction to in-memory
More informationSQL Server Consolidation Using Cisco Unified Computing System and Microsoft Hyper-V
SQL Server Consolidation Using Cisco Unified Computing System and Microsoft Hyper-V White Paper July 2011 Contents Executive Summary... 3 Introduction... 3 Audience and Scope... 4 Today s Challenges...
More informationAn Oracle White Paper November 2012. Hybrid Columnar Compression (HCC) on Exadata
An Oracle White Paper November 2012 Hybrid Columnar Compression (HCC) on Exadata Introduction... 3 Hybrid Columnar Compression: Technology Overview... 4 Warehouse Compression... 5 Archive Compression...
More informationDIABLO TECHNOLOGIES MEMORY CHANNEL STORAGE AND VMWARE VIRTUAL SAN : VDI ACCELERATION
DIABLO TECHNOLOGIES MEMORY CHANNEL STORAGE AND VMWARE VIRTUAL SAN : VDI ACCELERATION A DIABLO WHITE PAPER AUGUST 2014 Ricky Trigalo Director of Business Development Virtualization, Diablo Technologies
More informationEvaluation Report: HP Blade Server and HP MSA 16GFC Storage Evaluation
Evaluation Report: HP Blade Server and HP MSA 16GFC Storage Evaluation Evaluation report prepared under contract with HP Executive Summary The computing industry is experiencing an increasing demand for
More informationCloud Storage. Parallels. Performance Benchmark Results. White Paper. www.parallels.com
Parallels Cloud Storage White Paper Performance Benchmark Results www.parallels.com Table of Contents Executive Summary... 3 Architecture Overview... 3 Key Features... 4 No Special Hardware Requirements...
More informationFlash Memory Arrays Enabling the Virtualized Data Center. July 2010
Flash Memory Arrays Enabling the Virtualized Data Center July 2010 2 Flash Memory Arrays Enabling the Virtualized Data Center This White Paper describes a new product category, the flash Memory Array,
More informationHP SN1000E 16 Gb Fibre Channel HBA Evaluation
HP SN1000E 16 Gb Fibre Channel HBA Evaluation Evaluation report prepared under contract with Emulex Executive Summary The computing industry is experiencing an increasing demand for storage performance
More informationBest Practices for Optimizing SQL Server Database Performance with the LSI WarpDrive Acceleration Card
Best Practices for Optimizing SQL Server Database Performance with the LSI WarpDrive Acceleration Card Version 1.0 April 2011 DB15-000761-00 Revision History Version and Date Version 1.0, April 2011 Initial
More informationORACLE EXADATA DATABASE MACHINE X3-2
ORACLE EXADATA DATABASE MACHINE X3-2 FEATURES AND FACTS FEATURES Up to 128 CPU cores and 2 TB memory for database processing Up to 168 CPU cores dedicated to SQL processing in storage From 2 to 8 database
More information