Technical White Paper
Revision 1.1, 4/28/10

Subject: Optimizing Memory Configurations for the Intel Xeon processor 5500 & 5600 series
Author: Scott Huck; Intel DCG Competitive Architect
Target Audience: IT Managers/Server End Users (NDA not required)

Legal Disclaimer: This document contains information on products in the design phase of development. The information here is subject to change without notice. Do not finalize a design with this information. Contact your local Intel sales office or your distributor to obtain the latest specification before placing your product order.

INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. EXCEPT AS PROVIDED IN INTEL'S TERMS AND CONDITIONS OF SALE FOR SUCH PRODUCTS, INTEL ASSUMES NO LIABILITY WHATSOEVER, AND INTEL DISCLAIMS ANY EXPRESS OR IMPLIED WARRANTY RELATING TO SALE AND/OR USE OF INTEL PRODUCTS, INCLUDING LIABILITY OR WARRANTIES RELATING TO FITNESS FOR A PARTICULAR PURPOSE, MERCHANTABILITY, OR INFRINGEMENT OF ANY PATENT, COPYRIGHT, OR OTHER INTELLECTUAL PROPERTY RIGHT. Intel products are not intended for use in medical, life saving, or life sustaining applications. Intel may make changes to specifications, product descriptions, and plans at any time, without notice. All products, dates, and figures are preliminary for planning purposes and are subject to change without notice. Designers must not rely on the absence or characteristics of any features or instructions marked "reserved" or "undefined". Intel reserves these for future definition and shall have no responsibility whatsoever for conflicts or incompatibilities arising from future changes to them. The Intel Xeon processors may contain design defects or errors known as errata which may cause the product to deviate from published specifications. Current characterized errata are available on request.
The code names Penryn, Harpertown, Dunnington, Nehalem, and Westmere presented in this document are only for use by Intel to identify products, technologies, and services in development that have not been made commercially available to the public, i.e., announced, launched or shipped. They are not commercial names for products or services and are not intended to function as trademarks.

Copies of documents which have an order number and are referenced in this document, or other Intel literature, may be obtained by calling , or by visiting Intel's website at .

Performance tests and ratings are measured using specific computer systems and/or components and reflect the approximate performance of Intel products as measured by those tests. Any difference in system hardware or software design or configuration may affect actual performance. Buyers should consult other sources of information to evaluate the performance of systems or components they are considering purchasing. For more information on performance tests and on the performance of Intel products, visit or call (U.S.) .

Intel, Xeon, and Core are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries. Copyright , Intel Corporation. All rights reserved.

Page 1 of 26

Table Of Contents

1 Introduction/Objective
1.1 Revision History
1.2 Background
1.3 Summary
2 DIMM Differences
2.1 UDIMM vs. RDIMM vs. FB-DIMM
2.2 DDR2 vs. DDR3 vs. LV-DDR3
2.3 DIMM Rank (Single vs. Dual vs. Quad)
3 Memory Population Options
3.1 DDR3 UDIMM Memory Population Options
3.2 DDR3 RDIMM Memory Population Options
3.3 Supported Memory Frequency by processor SKUs
4 Memory RAS Features
4.1 x4 and x8 SDDC
4.2 Memory Lockstep
4.3 Memory Mirroring
4.4 Memory Lockstep vs. Memory Mirroring
5 Memory Population for Best Performance
5.1 Definitions and Guidance
5.2 Balanced Memory Configurations
5.3 Near Balanced Memory Configurations
5.4 Un-Balanced Memory Configurations
Best 4GB Solutions
5.5 Memory Interleaving
Performance based on Memory Bandwidth
Other Memory Performance Details
Step guide to picking the optimal memory configuration
Memory Population Guidelines
8 Intel Xeon processor 5600 series differences
Summary
References

1 Introduction/Objective

The purpose of this paper is to answer a broad range of questions around the memory subsystem of the Intel Xeon processor 5500 & 5600 series (code named Nehalem EP & Westmere EP respectively). Topics include, but are not limited to, how to populate memory, which memory population options provide the best power or performance, and details about the various RAS features. Unless noted otherwise, the information in this document applies to both the 5500 series and 5600 series processors. Chapter 8 addresses how the memory sub-system differs between the 5500 series and the 5600 series processors.

1.1 Revision History

Revision | Changes | Date
1.0 | Initial Release | 8/10/
1.1 | Added Chapter 8 to address differences between the 5500 and 5600 series processors; removed references to MetaRAM (no longer supported). Tables updated/added to reflect differences between the 5500 and 5600 series processors (changes noted in pink text). | 4/28/10

1.2 Background

The Intel Xeon processor 5500 series (code named Nehalem EP) has an integrated memory controller and three channels of DDR3 memory. Since many of these features are a first for Intel, the existing rules of thumb for memory optimizations may not be valid, so this document will identify how to populate memory and how to populate for best performance. Figure 1 shows the basic block diagram of a typical 2-socket Xeon 5500 series platform (processor and memory): two Intel Xeon Processor 5500 Series sockets connected by QPI, each with three memory channels (CH0, CH1, CH2) and up to three DIMM slots (DIMM 0, 1, 2) per channel.

Figure 1: Memory sub-system block diagram (Intel Xeon Processor 5500 series)
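To put the three-channel design in Figure 1 into concrete terms, the theoretical peak bandwidth of the memory sub-system can be computed from the transfer rate and the 8-byte DDR3 channel width. The following sketch is my own illustration (the function and figures are not from the paper):

```python
# Illustrative sketch (not from the paper): theoretical peak memory bandwidth
# per socket for the 3-channel DDR3 subsystem shown in Figure 1.
# Each DDR3 channel transfers 8 bytes per data transfer.

def peak_bandwidth_gbs(transfer_rate_mts: int, channels: int = 3) -> float:
    """Peak bandwidth in GB/s: MT/s * 8 bytes/transfer * channels / 1000."""
    return transfer_rate_mts * 8 * channels / 1000.0

print(peak_bandwidth_gbs(800))   # 19.2 GB/s per socket at DDR3-800
print(peak_bandwidth_gbs(1333))  # ~32 GB/s per socket at DDR3-1333
```

This is a theoretical ceiling; sustained bandwidth depends on the workload and the population choices discussed later in this paper.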

1.3 Summary

Below is a list of the various topics covered in this document.
- Difference between DDR2 and DDR3 (and FB-DIMM)
- Difference between UDIMMs and RDIMMs
- What are Ranks?
- Supported memory population options
- Supported memory RAS features
- Differences in memory support for the Intel Xeon processor 5500 and 5600 series
- Populating for best performance based on various attributes including application, capacity, price, bandwidth

2 DIMM Differences

This section discusses the various DIMMs which can be supported in the Xeon 5500 series platform, and how they are the same as or different from each other and other types of memory.

2.1 UDIMM vs. RDIMM vs. FB-DIMM

Memory DIMMs are made up primarily of memory devices plus a few other devices. It is these other devices which distinguish unbuffered memory (UDIMMs) from registered memory (RDIMMs) and from fully-buffered memory (FB-DIMMs). Below is a brief table summarizing the differences between DIMMs based on DDR2 technology. Differences between DDR2 and DDR3 are detailed in the next section.

Feature                             | UDIMM (DDR2) | RDIMM (DDR2)   | FB-DIMM (Gen 1)
Device Used                         | DDR2         | DDR2           | DDR2
Buffered Address & Command Signals? | No           | Yes            | Yes
Buffered Outputs                    | No           | No             | Yes
Available with ECC?                 | Yes          | Yes            | Yes
Available w/o ECC?                  | Yes          | No             | No
On-board clock device               | No           | No             | Yes
Data Interface                      | Parallel     | Parallel       | Serial
Power                               | ~5W          | UDIMM + ~.75W  | RDIMM + 3-5W**

Table 1 -- UDIMM vs. RDIMM vs. FB-DIMM
** Rough approximation. Power will vary by DIMM manufacturer and configuration.

In general, the main difference between UDIMMs and RDIMMs is that the address and command inputs are buffered in a register before being sent to the devices (see Figure 2 and Figure 3 below). Figure 2 shows a simplified bus layout for a UDIMM device. The figures below show two DIMMs connected on a single channel. Other memory channels are not shown for simplicity, but would be identical.

Figure 2 -- UDIMM Bus Connections (processor/memory controller driving the 72-bit memory data bus and the address & CS signals for channel 0)

Figure 3 -- RDIMM Bus Connections (same layout, with a register on each DIMM buffering the address & CS signals)

Each memory channel has a set of address and command/Chip Select (CS) signals which go to each DIMM. In a UDIMM these signals connect directly to each device on the DIMM, so the electrical loading on these signals is quite high (Figure 2). In comparison, on an RDIMM (Figure 3), there is a register which buffers the address and command signals, thus reducing the electrical loading on these signals. By reducing the electrical loading from each

DIMM, more DIMMs can be populated on a given memory channel. Typically, the max number of UDIMMs per channel is 2, whereas RDIMMs can typically support up to 4 DIMMs/channel, although these numbers may vary with frequency (fewer DIMMs per channel as the memory frequency goes up). Buffering the address and command signals with the register adds 1 cycle of latency to a memory request, although if you have numerous requests pipelined, this latency typically can be hidden.

While FB-DIMMs use the same DDR2 devices, the similarities stop there. FB-DIMMs use a completely different interface on the outputs than UDIMMs or RDIMMs, and have an entirely different protocol for tracking memory requests. For the sake of time, I will not discuss the details of how FB-DIMMs work, but if interested, you may refer to another Tech-Tid-bit which provides these details (see the reference section for a link to the FB-DIMM document).

UDIMMs are lowest in power since they have the fewest devices. RDIMMs typically dissipate ~0.75W more power than an equivalent UDIMM. FB-DIMMs have the most devices and thus dissipate the most power; on average, an FB-DIMM dissipates 3-5W more power than an equivalent RDIMM.

All three types of memory (based on DDR2 technology) use essentially the same DDR2 connector; however, the DIMM key location is different for FB-DIMM technology, preventing the user from inserting UDIMMs or RDIMMs into a motherboard designed to support FB-DIMMs. A given platform can support both UDIMMs and RDIMMs (if designed as such), but not both at the same time. If a platform is designed to support FB-DIMMs, it can only support FB-DIMMs (UDIMMs and RDIMMs are not supported).

2.2 DDR2 vs. DDR3 vs. LV-DDR3

The key differences between DDR2 and DDR3 are lower voltage and higher frequency. LV-DDR3 is even lower voltage.
The table below summarizes these differences:

Feature                                              | DDR2               | DDR3                  | LV-DDR3**
Voltage                                              | 1.8V               | 1.5V                  | 1.35V
On-Die Termination                                   | Yes                | Yes                   | Yes
On-Die Thermal Sensor                                | No                 | Yes                   | Yes
Separate Vref for Address & Data                     | No                 | Yes                   | Yes
Adjustable Output Impedance                          | No                 | Yes                   | Yes
Frequencies Supported (per JEDEC spec; may vary by platform) | 400, 533, 667, 800 | 800, 1066, 1333, 1600 | 800, 1066, 1333, 1600

** Supported by the Intel Xeon processor 5600 Series (not supported on the Intel Xeon processor 5500 Series).

Table 2 -- DDR2 vs. DDR3

Most of the differences between DDR2 and DDR3 improve the electrical signaling properties and thus allow the part to run at the higher frequencies. Power is not so cut and dry. If both DDR2 and DDR3 DIMMs were operating at the same frequency (800MHz), then DDR3 would have lower power. However, since most DDR3 DIMMs will be operating at higher frequencies, typically a 1066MHz DDR3 DIMM will dissipate slightly more power than a DDR2 667MHz DIMM. As you go to higher frequencies, the DDR3 DIMM will typically dissipate a little more power.

LV-DDR3 DIMMs run at a lower voltage than standard DDR3 DIMMs. The lower voltage allows the DIMMs to dissipate lower power. LV-DDR3 DIMMs are available in both UDIMM and RDIMM configurations. Power savings will vary by DIMM manufacturer, but you should expect at least the following power savings per DIMM:
- ~0.5 Watts (at idle)
- ~ Watts (at 100% load)

2.3 DIMM Rank (Single vs. Dual vs. Quad)

DIMMs can be designed as either single rank (SR), dual rank (DR) or quad rank (QR). The differences are essentially how many rows of devices are used to provide a given DIMM capacity.

Single Rank: One device is connected to a given data bus pin
Dual Rank: Two devices are connected to a given data bus pin
Quad Rank: Four devices are connected to a given data bus pin

In a single rank DIMM, one x8 device, for example, will be responsible for driving 8 of the 64 data bits (see Figure 4). In a dual rank DIMM, two x8 devices will be connected to the same 8 data bits, and a chip select signal determines which one drives the data onto the data bus (see Figure 4). In a quad rank DIMM, four x8 devices will be connected to the same 8 data bits, and the chip select signals determine which one drives the data onto the bus (see Figure 5).

The effect of more ranks on a DIMM is higher electrical loading on the address/command and data buses (Quad more than Dual, Dual more than Single), but slightly lower power (Quad less than Dual, Dual less than Single), since only a fraction of the devices on the DIMM need to be active at any given time. A dual rank DIMM also allows you to have twice the amount of memory on a DIMM vs. a single rank DIMM using the same memory technology (and a Quad twice as much as a Dual). More ranks mean higher electrical loading in a given system. Depending on the maximum frequency supported by the platform, you often cannot populate as many quad rank DIMMs as dual rank DIMMs and/or must run at lower frequencies with QR DIMMs vs. DR DIMMs.

Figure 4 -- Typical connection for SR and DR DIMMs (ECC w/x8)
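To make the rank arithmetic concrete, here is a small sketch (my own illustration, not from the paper) that computes DIMM capacity from the rank count, device width and device density, using the 64-bit data bus described above:

```python
# Sketch (illustrative, not from the paper): estimating DIMM capacity from
# rank count, device width (x4/x8), and per-device density.
# ECC adds 8 check bits per rank but no addressable capacity.

def dimm_capacity_gb(ranks: int, device_width: int, device_density_gbit: int) -> float:
    """Capacity in GB = ranks * (64 data bits / device width) * density per device."""
    devices_per_rank = 64 // device_width   # data devices only; ECC adds 1-2 more
    total_gbit = ranks * devices_per_rank * device_density_gbit
    return total_gbit / 8                   # 8 bits per byte

print(dimm_capacity_gb(1, 8, 2))  # single rank, x8, 2Gb devices -> 2.0 GB
print(dimm_capacity_gb(2, 8, 2))  # dual rank, same devices      -> 4.0 GB
print(dimm_capacity_gb(4, 4, 2))  # quad rank, x4, 2Gb devices   -> 16.0 GB
```

This shows why, for a fixed device technology, each added rank doubles the capacity of the module.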

Figure 5 -- Typical connection for QR DIMMs (ECC w/x8)

See Figures 4 & 5 for the typical layout of a single, dual and quad rank DIMM. Note that the physical size of the DIMM module does not grow as shown in the figures. The physical size of the module remains the same for all SR, DR and QR DIMMs; however, to fit all of the devices required by a DR or QR DIMM, the devices will often be placed on both sides of the DIMM module and/or sometimes be stacked on top of each other. One of the advantages of a DR or QR DIMM is that you can often use older, less dense and thus cheaper memory devices to create a large capacity memory module. Other advantages of DR and QR DIMMs are that they are often lower power (because only ½ or ¼ of the devices are being accessed at any given time) and they also tend to have higher performance, since you can have multiple pages simultaneously open, thus reducing latency.

3 Memory Population Options

The Intel Xeon processor 5500 & 5600 series support a number of different memory configurations. And while these numerous options allow the customer to customize the memory configuration to their advantage, the many options often leave people paralyzed, since they don't know which is best for their particular situation. When reading this section, and helping your customer decide which configuration is best, remember that it doesn't have to be confusing. The many options available are meant to allow the customer to determine exactly which configuration is optimal for their situation based on various criteria including capacity, bandwidth, price and power. And there are probably many right answers for each customer.

Memory configuration caveats:
1. The configuration information below is what Intel designed the components to support. The actual configurations supported by each platform, by each OEM, may vary (some may support more or less than shown here).
You will have to confirm with the OEM whether a specific memory configuration is supported on their particular platform.
2. Memory support tends to change over time as newer products become available, additional validation is done, etc. The below information is valid as of April 17.

The platform supports the following memory module options:
- DDR3 Unbuffered DIMMs (ECC or non-ECC) (aka UDIMM)

- DDR3 Registered DIMMs (ECC only) (aka RDIMM)
- DDR3 800, 1066 or 1333MHz
- Single Rank, Dual Rank or Quad Rank DIMMs (QR only on RDIMMs)
- 1, 2, 4, 8 or 16GB DIMMs (16GB was added ~4/17/09)
- DDR3 MetaS (MetaRAM) is no longer supported by this platform

The sections below provide additional details on exactly which DIMMs are supported and in what configurations, frequencies, etc.

3.1 DDR3 UDIMM Memory Population Options

DDR3 UDIMM & LV DDR3 UDIMM Detailed Memory Population Options (with or w/o ECC)

Platforms with UDIMM Routing Only (max 2 DIMM slots on board):

# DIMMs per channel | Supported Speeds (MHz) | Ranks per DIMM (any combination) | Population Rules
1                   | 800, 1066, 1333        | SR or DR                         | 1. Any combination of x8 and x16 UDIMMs, with 1Gb or 2Gb density.
2                   | 800, 1066, 1333**      | SR, DR (Mixing okay)             | 2. Populate DIMMs starting with slot 0 (slot furthest from the CPU).

** 1333MHz only supported on Intel Xeon processor 5600 series with 1.5V DIMMs.

Platforms with Combo UDIMM / RDIMM Routing (max 3 DIMM slots on board):

# DIMMs per channel | Supported Speeds (MHz) | Ranks per DIMM (any combination) | Population Rules
1                   | 800, 1066, 1333        | SR or DR                         | 1. Any combination of x8 UDIMMs, with 1Gb or 2Gb density.
2                   | 800, 1066, 1333**      | SR, DR (Mixing okay)             | 2. Populate DIMMs starting with slot 0 (slot furthest from the CPU).

** 1333MHz only supported on Intel Xeon processor 5600 series with 1.5V DIMMs.
Table 3 -- DDR3 UDIMM Population Options

Additional Notes on UDIMM Population Rules:
- 256Mb, 512Mb and 4Gb technologies are NOT supported
- x4 on UDIMMs is NOT supported
- Quad rank UDIMMs are NOT supported
- x16 is NOT supported on combo routing (routing that supports either UDIMM or RDIMM)
- All channels in a system will run at the fastest common frequency
- No mixing of registered and unbuffered DIMMs
- Mixing ECC & non-ECC UDIMMs anywhere on the platform will force the system to run in non-ECC mode
- No RAS support for non-ECC UDIMMs
- Supported UDIMMs are available in 1GB, 2GB and 4GB sizes
- No x4 SDDC support with UDIMMs w/ECC; however, x8 SDDC is supported in lockstep mode with x8 UDIMMs w/ECC

The above table and notes are valid for 1.35V LV UDIMMs, with the following additional notes:
- LV-UDIMMs are only supported on the Intel Xeon processor 5600 series
- No plans to validate LV UDIMMs w/o ECC
- If 1.35V and 1.5V DIMMs are mixed, the DIMMs will run at 1.5V
- 1333MHz is NOT supported with LV-UDIMMs

3.2 DDR3 RDIMM Memory Population Options

DDR3 RDIMM Detailed Memory Population Options (only with ECC)

# DIMMs per channel | Supported Speeds (MHz) | Ranks per DIMM (any combination) | Population Rules
1                   | 800, 1066, 1333        | SR or DR                         | 1. Any combination of x4 and x8 RDIMMs, with 1Gb or 2Gb density. 4Gb density supported only on Intel Xeon processor 5600 series.
1                   | 800, 1066              | QR only                          | 2. Populate DIMMs starting with slot 0 (slot furthest from the CPU).
2                   | 800, 1066, 1333**      | SR, DR (Mixing okay)             |
2                   | 800                    | SR, DR, QR (Mixing okay)         |
3                   | 800                    | SR, DR (Mixing okay; no QR)      |

Table 4 -- DDR3 RDIMM Population Options

** 1333MHz only supported on Intel Xeon processor 5600 series with 1.5V DIMMs.

Additional Notes on RDIMM Population Rules:
- 256Mb, 512Mb technologies are NOT supported
- 4Gb technologies are supported only on the Intel Xeon processor 5600 series
- x16 on RDIMMs is NOT supported
- All channels in a system will run at the fastest common frequency
- No mixing of registered and unbuffered DIMMs
- Mixing quad rank RDIMMs in one channel and 3 DPC in another on the same CPU socket is not supported
- If a QR RDIMM is mixed with a SR or DR DIMM in a channel, the QR DIMM must be populated in slot 0 (furthest from the CPU)
- Supported RDIMMs are available in 1GB, 2GB, 4GB, 8GB and 16GB sizes

As noted above, QR DIMMs are not supported at 1333MHz (only 800 and 1066MHz), and QR DIMMs are not supported with 3 DPC (only 1 or 2 DPC). This is because of the higher electrical loading of the QR DIMMs. All 1333MHz QR DIMMs, if installed, will be clocked down to 1066MHz (1 DPC) or 800MHz (2 DPC).
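The clock-down rules above can be sketched as a small helper. This is my own illustration of the rules quoted in this section (the function name and structure are not from the paper, and it deliberately ignores SKU-specific limits such as the 1333MHz-on-5600-only footnote):

```python
# Sketch (illustrative): the speed a DDR3 RDIMM channel actually runs at,
# applying the quad-rank and 3-DPC rules described in the notes above.
# SKU-specific limits (e.g. 2 DPC @ 1333MHz requires a Xeon 5600) are ignored.

def effective_speed(rated_mhz: int, ranks: int, dimms_per_channel: int) -> int:
    if ranks == 4:
        if dimms_per_channel >= 3:
            raise ValueError("QR RDIMMs are not supported at 3 DPC")
        # QR RDIMMs are capped at 1066 MHz (1 DPC) or 800 MHz (2 DPC)
        cap = 1066 if dimms_per_channel == 1 else 800
    elif dimms_per_channel == 3:
        cap = 800          # 3 DPC runs at 800 MHz (Table 4)
    else:
        cap = 1333
    return min(rated_mhz, cap)

print(effective_speed(1333, ranks=4, dimms_per_channel=1))  # 1066
print(effective_speed(1333, ranks=4, dimms_per_channel=2))  # 800
print(effective_speed(1333, ranks=2, dimms_per_channel=3))  # 800
```

In other words, a 1333MHz-rated QR RDIMM never actually runs at 1333MHz on this platform.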

LV DDR3 RDIMM (Intel Xeon processor 5600 series only) Detailed Memory Population Options (only with ECC)

# DIMMs per channel | Supported Speeds (MHz) | Ranks per DIMM (any combination) | Population Rules
1                   | 800, 1066, 1333        | SR or DR                         | 1. Any combination of x4 and x8 RDIMMs, with 1Gb or 2Gb density.
1                   | 800                    | QR only                          | 2. Populate DIMMs starting with slot 0 (slot furthest from the CPU).
2                   | 800, 1066              | SR, DR (Mixing okay)             |
2                   | 800                    | SR, DR, QR (Mixing okay)         |

Table 5 -- LV DDR3 RDIMM Population Options (Xeon 5600 only)

Additional Notes on LV RDIMM Population Rules:
- 256Mb, 512Mb and 4Gb technologies are NOT supported
- x16 on RDIMMs is NOT supported
- All channels in a system will run at the fastest common frequency
- No mixing of registered and unbuffered DIMMs
- If a QR RDIMM is mixed with a SR or DR DIMM in a channel, the QR DIMM must be populated in slot 0 (furthest from the CPU)
- Supported RDIMMs are available in 1GB, 2GB, 4GB, 8GB and 16GB sizes
- LV RDIMMs are supported only at 1 and 2 DPC; 3 DPC is not supported with LV RDIMMs

As noted above, QR DIMMs are not supported at 1333MHz or 1066MHz (only 800MHz), and QR DIMMs are not supported with 3 DPC (only 1 or 2 DPC). This is because of the higher electrical loading of the QR DIMMs. All 1333MHz or 1066MHz QR DIMMs, if installed, will be clocked down to 800MHz.

3.3 Supported Memory Frequency by processor SKUs

The Intel Xeon processor 5500 and 5600 series have a number of different models or SKUs. Not all SKUs support all memory frequencies. Tables 6 & 7, below, show the different SKUs and the memory frequencies supported by each SKU for the Intel Xeon processor 5500 & 5600 series respectively.

Model   | Core Frequency (GHz) | 800MHz | 1066MHz | 1333MHz
X       |                      | Yes    | Yes     | Yes
X       |                      | Yes    | Yes     | Yes
X       |                      | Yes    | Yes     | Yes
E       |                      | Yes    | Yes     | No
E       |                      | Yes    | Yes     | No
E5520/L |                      | Yes    | Yes     | No
E       |                      | Yes    | No      | No
E       |                      | Yes    | No      | No
E5504/L |                      | Yes    | No      | No
E       |                      | Yes    | No      | No
E       |                      | Yes    | No      | No

Table 6 -- Intel Xeon processor 5500 series memory support options (by SKUs)

Model | Core Frequency (GHz) | 800MHz | 1066MHz | 1333MHz
X     |                      | Yes    | Yes     | Yes
X     |                      | Yes    | Yes     | Yes
X     |                      | Yes    | Yes     | Yes
X     |                      | Yes    | Yes     | Yes
X     |                      | Yes    | Yes     | Yes
X     |                      | Yes    | Yes     | Yes
W     |                      | Yes    | Yes     | Yes
W     |                      | Yes    | Yes     | Yes
W     |                      | Yes    | Yes     | Yes
W     |                      | Yes    | Yes     | Yes
L     |                      | Yes    | Yes     | Yes
L     |                      | Yes    | Yes     | No
L     |                      | Yes    | Yes     | No
E     |                      | Yes    | Yes     | No
E     |                      | Yes    | Yes     | No
E     |                      | Yes    | Yes     | No

Table 7 -- Intel Xeon processor 5600 series memory support options (by SKUs)

4 Memory RAS Features

The Intel Xeon processor 5500 & 5600 series support the following memory RAS features:
- x4 SDDC
- x8 SDDC
- Lock Step
- Mirroring

Notes:
1. Mirrored mode and lockstep mode both require that channel 0 and channel 1 are exact copies of each other:
   - Channels must have DIMMs that are identical in organization (#ranks, #rows, #cols, #banks)
   - Can run with differently rated DIMMs, using the settings of the slowest DIMM on both channels
   - Channel 2 is not usable in either of these modes
2. With memory installed behind both sockets of a DP system, both sockets must be running the same RAS mode and paging policy. The only exception is when mirroring redundancy is lost on one socket; this will not force the loss on the other socket.
3. Memory sparing was initially going to be supported on the Xeon 5500 series platform, but was dropped. Sparing is expected to return with the Westmere EP platform.

4.1 x4 and x8 SDDC

SDDC (Single Device Data Correction) is a memory RAS feature which allows an entire device on a DIMM to fail, yet the system will continue to operate as normal without any loss of data. The Intel Xeon processor 5500 series supports x4 and x8 SDDC; however, x8 SDDC is only supported in lockstep mode. SDDC (x4 or x8) is supported on UDIMMs with ECC or on RDIMMs. x4 SDDC means that a 4-bit wide (x4) device may fail, and the system will continue to operate normally. x8 SDDC means that an 8-bit wide (x8) device (or 2 x4 devices) may fail and the system will continue to operate normally. Most DIMMs today are made up of either x4 or x8 devices.
x4 SDDC can only be supported if all the DIMMs installed use x4 devices. x8 SDDC can be supported if the DIMMs installed all use x4 or x8 devices.

4.2 Memory Lockstep

In lockstep mode, multiple memory channels are accessed simultaneously, but with different contents written to each. Typically a cache line is split across both memory channels, so different parts of the same cache line are stored in multiple places. When reading from memory, the processor then needs to reassemble the cache lines, which adds additional latency. Figure 6 below shows a typical lockstep configuration.

Figure 6 -- Typical Memory Lockstep Configuration (1 processor shown; channels 0 & 1 operate in lockstep, channel 2 unused)

The following apply when in memory lockstep mode:
- 2 channels operate in lockstep (the cache line is split across both channels); the 3rd channel is unused
- Both channels must be populated identically
- x8 SDDC is supported (x8 SDDC is supported only in lockstep mode)
- No mirroring or sparing support in this mode
- Vs. a 3-channel configuration with only x4 SDDC, reliability increases, but performance drops and capacity drops (only 2 channels used)

4.3 Memory Mirroring

Memory contents are written to two different places in physical system memory. 100% redundancy is assured, but it comes at the cost of reduced capacity and performance. When memory mirroring is enabled (as shown in Figure 7), the memory capacity of the system is only ½ of what is populated. Memory performance will be reduced when memory mirroring is enabled since, effectively, only 1 channel of memory is used (the second is used for redundancy). Both mirroring and lockstep will have reduced performance vs. all other configurations. However, we believe mirroring will have slightly better performance than lockstep.
Figure 7 -- Typical Memory Mirroring Configuration (1 processor shown; channels 0 & 1 mirror each other, channel 2 unused)

The following apply when in memory mirroring mode:
- 2 memory channels operate as mirrors of each other (the same content is written to both channels simultaneously)
- Both channels must be populated identically
- Only half the populated memory is usable as system memory
- No sparing, lockstep or x8 SDDC in this mode (x4 SDDC is supported)
- Increased reliability vs. lockstep, but max memory capacity drops (you effectively have only one channel of memory from a capacity standpoint)
- Higher reliability than memory lockstep or x4 SDDC
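The capacity cost of these RAS modes can be summarized in a short sketch. This is my own illustration of the rules above (the function and mode names are not from the paper):

```python
# Sketch (illustrative): usable memory and channel count per socket under the
# RAS modes described above, for a socket with three DDR3 channels.

def usable_memory_gb(per_channel_gb: float, mode: str) -> float:
    """per_channel_gb: capacity populated on each used channel.
    Mirror/lockstep modes require channel 2 to be left empty."""
    if mode == "independent":   # all 3 channels hold distinct data
        return 3 * per_channel_gb
    if mode == "lockstep":      # ch 0+1 hold split cache lines; ch 2 unused
        return 2 * per_channel_gb
    if mode == "mirror":        # ch 1 duplicates ch 0; ch 2 unused
        return 1 * per_channel_gb
    raise ValueError(f"unknown mode: {mode}")

print(usable_memory_gb(8, "independent"))  # 24.0
print(usable_memory_gb(8, "lockstep"))     # 16.0
print(usable_memory_gb(8, "mirror"))       # 8.0
```

This is the 1/3 (lockstep) vs. 2/3 (mirroring) capacity reduction discussed in section 4.4.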

4.4 Memory Lockstep vs. Memory Mirroring

So what is the difference, and why choose one or the other? Both mirroring and lockstep provide higher reliability than the other memory configurations (which only provide x4 SDDC). However, with the additional reliability there are tradeoffs. Both mirroring and lockstep will reduce the performance of the memory subsystem, and the amount of memory available in the system drops by 2/3 for mirroring and by 1/3 for lockstep. Lockstep is probably lower performance (vs. mirroring). The real difference is that lockstep will allow the system to run uninterrupted if a single x8 device (or 2 x4 devices) on a DIMM fails, while memory mirroring will continue to operate, uninterrupted, if an entire DIMM fails. Thus mirroring is a higher reliability solution.

5 Memory Population for Best Performance

When trying to find the optimal memory configuration for a given application, there are many variables to consider, including capacity, cost, power and the application being run on the server. Because there are so many variables, it is often difficult to create simple guidelines to determine exactly which memory configuration is best for each application. For example, how sensitive is your application to memory? Is it most sensitive to overall capacity, latency or memory bandwidth? Depending on the application's sensitivities, different memory configurations may have different results. And which processor is being used determines which memory frequencies can be supported. Only the X-series, W-series and one L-series processor support 1333MHz memory. If you are choosing an E-series processor, then 1066MHz or 800MHz is the max frequency supported, and different options may provide better performance.

Caveats: When we say one configuration has better performance, sometimes it may be a few percent better, other times it may be 20-30% better. So while one configuration may provide, say, 3-5% better performance, it may cost 40% more. Is the added performance worth it?
It is up to the user to decide. That said, the sections below attempt to provide some guidelines and examples on how to choose the best memory configuration for different situations.

5.1 Definitions and Guidance

For this document we will refer to a memory configuration as either being balanced, un-balanced or near-balanced. Below is a brief description of each; see the sections below for more detailed definitions.

Memory Configuration | All DIMMs Identical | All Memory Channels Populated | All Channels Identically Populated
Balanced             | Yes                 | Yes                           | Yes
Un-Balanced          | Maybe               | Maybe                         | No
Near-Balanced        | No                  | Yes                           | Yes

Table 8 -- DIMM Configuration Definitions

The recommendation, for best performance, is to populate memory in a balanced configuration. The reason is that a balanced configuration will not only provide the best memory sub-system performance, but will ensure all regions of memory have identical performance. When populating in an un-balanced memory configuration, it is likely that some regions of memory will have different performance than other regions. This is because, if the memory configuration is not balanced, the memory controller will need to configure different regions of memory differently.
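The definitions in Table 8 can be expressed as a small classifier. This sketch is my own (not from the paper) and, for brevity, it considers only DIMM sizes, ignoring the speed and rank attributes that a full check would also compare:

```python
# Sketch (illustrative): classify a single socket's population per Table 8.
# channels: one list of DIMM sizes (GB) per memory channel.

def classify(channels: list[list[int]]) -> str:
    if any(len(ch) == 0 for ch in channels):
        return "un-balanced"                 # not all channels populated
    first = sorted(channels[0])
    if any(sorted(ch) != first for ch in channels[1:]):
        return "un-balanced"                 # channels populated differently
    all_dimms = [d for ch in channels for d in ch]
    if len(set(all_dimms)) == 1:
        return "balanced"                    # every DIMM identical
    return "near-balanced"                   # identical channels, mixed DIMMs

print(classify([[2], [2], [2]]))           # balanced
print(classify([[2, 1], [2, 1], [2, 1]]))  # near-balanced
print(classify([[2, 2], [2], [2]]))        # un-balanced
```

The ordering of the checks mirrors the table: unequal (or empty) channels make a configuration un-balanced regardless of whether the individual DIMMs match.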

For example, some regions may be configured with a higher performance 3-way interleaving scheme, while other regions may need to be configured with a lower performing 1-way interleaving scheme (the details of why this is done are beyond the scope of this document; however, section 5.5 does discuss why higher memory interleaving provides higher performance). Consequently, some regions of memory might perform differently than other regions. How an application is affected by this varied memory performance is unknown; it may vary depending on which portion of the application is assigned which region of memory, and/or may change over time as the memory footprint of the server, OS and other applications changes. For example, if a database stretches across memory regions with different performance, one portion of the database may perform differently than another portion of the same database. Suffice it to say, the more unbalanced the memory configuration gets, the lower the performance one should expect from the memory sub-system.

5.2 Balanced Memory Configurations

If at all possible, for best performance, all memory channels (for both processors) should be populated with identical numbers of DIMMs of the same size, speed and rank. Such memory configurations we call balanced, and they are shown in Table 9 for UDIMMs and Table 10 for RDIMMs. These configurations should be used if at all possible, and will usually provide the best performance for almost any application. Which configuration you choose depends on the capacity needed and the cost sensitivity. Fewer, but larger, DIMMs per channel (DPC) typically will provide better performance, since the memory bus can operate at a higher frequency as you reduce the number of DIMMs per channel. However, the larger DIMMs typically cost more.

Total System Memory (2 CPU Sockets Populated):
1GB UDIMMs: 6 x 1GB = 6GB;  12 x 1GB = 12GB
2GB UDIMMs: 6 x 2GB = 12GB; 12 x 2GB = 24GB
4GB UDIMMs: 6 x 4GB = 24GB; 12 x 4GB = 48GB
1 DPC configurations (6 DIMMs) support DDR3 1333, 1066 or 800MHz. 2 DPC configurations (12 DIMMs) support DDR3 1066 or 800MHz (or 1333MHz with the Intel Xeon processor 5600). 3 DPC configurations (18 DIMMs) support DDR3 800MHz only.

Table 9 -- Balanced UDIMM Memory Configurations

Total System Memory (2 CPU Sockets Populated):
1GB RDIMMs:  6 x 1GB = 6GB;   12 x 1GB = 12GB;  18 x 1GB = 18GB
2GB RDIMMs:  6 x 2GB = 12GB;  12 x 2GB = 24GB;  18 x 2GB = 36GB
4GB RDIMMs:  6 x 4GB = 24GB;  12 x 4GB = 48GB;  18 x 4GB = 72GB
8GB RDIMMs:  6 x 8GB = 48GB;  12 x 8GB = 96GB;  18 x 8GB = 144GB
16GB RDIMMs: 6 x 16GB = 96GB; 12 x 16GB = 192GB

1 DPC configurations (6 DIMMs) support DDR3 1333, 1066 or 800MHz. 2 DPC configurations (12 DIMMs) support DDR3 1066 or 800MHz (or 1333MHz with the Intel Xeon processor 5600). 3 DPC configurations (18 DIMMs) support DDR3 800MHz only. An 18 x 16GB configuration is supported only on the Xeon 5600 with DR DIMMs (4Gb density).

Table 10 -- Balanced RDIMM Memory Configurations
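The capacities in these tables all follow from one multiplication: sockets times channels times DIMMs per channel times DIMM size. A quick sketch (my own illustration, not from the paper):

```python
# Sketch (illustrative): total system memory for the balanced configurations
# above: 2 sockets x 3 channels x DIMMs-per-channel x DIMM size.

def total_memory_gb(dimm_gb: int, dimms_per_channel: int,
                    channels: int = 3, sockets: int = 2) -> int:
    return sockets * channels * dimms_per_channel * dimm_gb

print(total_memory_gb(4, 2))    # 12 x 4GB RDIMMs  -> 48 GB
print(total_memory_gb(8, 3))    # 18 x 8GB RDIMMs  -> 144 GB
print(total_memory_gb(16, 2))   # 12 x 16GB RDIMMs -> 192 GB
```

Remember that the DPC factor also sets the maximum frequency, so the largest capacities in the tables come with the lowest memory speeds.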

5.3 Near Balanced Memory Configurations

As stated above, balanced memory configurations have all memory channels populated with an identical number of identically sized/speed/ranked DIMMs. You may vary from this guidance and still have performance which is most likely equal to or better than any un-balanced configuration, and which in some cases might approach the performance of a balanced configuration. Data to quantify these statements are provided in section 5.5. Near balanced configurations, as I will call them, have the following attributes:

- All DIMMs are capable of the same speed
- All memory channels are used
- All memory channels are identically populated
- However, you may use DIMMs with different sizes and/or ranks within a channel

A near balanced configuration should operate equal to or better than any un-balanced configuration, and in some cases will approach the performance of a balanced configuration. The key is that all memory channels must be identically populated. An example of a near balanced memory configuration is below (DIMM0 is furthest from the processor; the 2nd processor is populated the same as below):

Near Balanced Config (18GB) (Config #1a)

         CH1   CH2   CH3
  DIMM0  2GB   2GB   2GB
  DIMM1  1GB   1GB   1GB
  DIMM2  ---   ---   ---

The advantage of a near balanced configuration is that it gives one the ability to hit more memory sizes than those offered by a truly balanced configuration. Below are example near balanced memory configurations (balanced configs are also shown). Note: some memory sizes have multiple near-balanced configuration options. For example, six 8GB DIMMs could be replaced with twelve 4GB DIMMs. For brevity's sake, configuration options with 16GB DIMMs are not shown in Table 11 below, and some memory sizes do not list all possible near-balanced configuration options.
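The balanced / near balanced / unbalanced distinction above is mechanical enough to express in code. Below is a small Python sketch; the function name and the list-of-lists representation are mine, and it classifies by DIMM size only, ignoring the speed and rank attributes that a full check would also compare:

```python
def classify(channels):
    """Classify a per-socket memory population.

    `channels` is a list of per-channel DIMM-size lists (in GB), e.g.
    [[2, 1], [2, 1], [2, 1]] for a 2GB DIMM0 and a 1GB DIMM1 on each
    of the three channels (Config #1a above).  Illustrative only.
    """
    if any(len(ch) == 0 for ch in channels):
        return "unbalanced"       # an empty channel is always unbalanced
    first = channels[0]
    if any(ch != first for ch in channels[1:]):
        return "unbalanced"       # channels populated differently
    if len(set(first)) == 1:
        return "balanced"         # identical DIMMs everywhere
    return "near balanced"        # identical channels, mixed sizes within
```

For example, Config #1a above, expressed as `[[2, 1], [2, 1], [2, 1]]`, classifies as near balanced: every channel is identical, but the DIMMs within a channel differ in size.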

Total Capacity   DIMMs Used                       Notes
6 GB             6 x 1GB                          Balanced
12 GB            6 x 2GB                          Balanced
12 GB            12 x 1GB                         Balanced
18 GB            6 x 1GB & 6 x 2GB                Near Balanced
24 GB            12 x 2GB                         Balanced
24 GB            6 x 4GB                          Balanced
30 GB            6 x 4GB & 6 x 1GB                Near Balanced
36 GB            18 x 2GB                         Balanced
36 GB            6 x 4GB & 6 x 2GB                Near Balanced
42 GB            6 x 4GB & 6 x 2GB & 6 x 1GB      Near Balanced
48 GB            6 x 8GB                          Balanced
48 GB            12 x 4GB                         Balanced
54 GB            6 x 8GB & 6 x 1GB                Near Balanced
54 GB            12 x 4GB & 6 x 1GB               Near Balanced
60 GB            6 x 8GB & 6 x 2GB                Near Balanced
60 GB            12 x 4GB & 6 x 2GB               Near Balanced
66 GB            6 x 8GB & 6 x 2GB & 6 x 1GB      Near Balanced
72 GB            18 x 4GB                         Balanced
72 GB            6 x 8GB & 6 x 4GB                Near Balanced
78 GB            6 x 8GB & 6 x 4GB & 6 x 1GB      Near Balanced
84 GB            6 x 8GB & 6 x 4GB & 6 x 2GB      Near Balanced
96 GB            12 x 8GB                         Balanced
102 GB           12 x 8GB & 6 x 1GB               Near Balanced
108 GB           12 x 8GB & 6 x 2GB               Near Balanced
120 GB           12 x 8GB & 6 x 4GB               Near Balanced
144 GB           18 x 8GB                         Balanced

Table 11 -- Near Balanced RDIMM Memory Configurations

5.4 Un-Balanced Memory Configurations

If you must hit an exact memory size, and none of the balanced or near balanced configurations listed above are possible for whatever reason (for example, the customer requires exactly 16GB of memory in the system), then use the guidelines below to populate an unbalanced configuration for best performance. Note: for performance reasons, I would urge your customer to use one of the above balanced or near balanced configurations first, and to use an unbalanced configuration only as a last resort. For example, a 60GB near balanced config will likely perform much better than an unbalanced 64GB config. If you must have exactly 8, 16, 32 or 64GB memory configurations, the options below are likely the best ways of achieving them. The example below is for 16GB, but by changing the DIMM sizes as documented in the notes it applies to 8 or 32GB configurations as well, and similar logic can be applied to 64GB configs.

            Config #2a          Config #3a          Config #4a
         CH1   CH2   CH3     CH1   CH2   CH3     CH1   CH2   CH3
  DIMM0  4GB   2GB   2GB     2GB   2GB   2GB     1GB   1GB   1GB
  DIMM1  ---   ---   ---     2GB   ---   ---     1GB   1GB   1GB
  DIMM2  ---   ---   ---     ---   ---   ---     1GB   1GB   ---

Notes:
1. CH1 = Memory Channel 1, CH2 = Memory Channel 2, CH3 = Memory Channel 3
2. The DIMM population shown above is for a single CPU socket. Assume the 2nd CPU socket is populated identically.
3. The conclusions below are valid for 8 & 32GB memory configs as well:
   8GB: Replace the 2GB and 4GB DIMMs in the above diagrams with 1GB and 2GB DIMMs, respectively, and reduce the number of 1GB DIMMs by half.
   32GB: Replace the 1GB, 2GB and 4GB DIMMs with 2GB, 4GB and 8GB DIMMs, respectively.

Assumptions:
- The objective is for the system to have exactly 16GB of memory
- Memory frequency will cause the results to vary; see below for different assumptions

Performance Comparison:
- If all three configs are operating at 800MHz, performance is probably similar for all three configurations, although Config #4a might be slightly better than #3a, and #3a slightly better than #2a, due to more DIMMs per channel (more pages open, thus better memory interleaving).
- If Config #2a and #3a are both operating at 1066MHz, their results will be very similar, and depending on the benchmark one might be slightly better than the other (see the next section for specific results between these two configs). Both would likely be better than Config #4a, since #4a can only run at 800MHz.
- If all configs are running at their fastest frequency, Config #2a (1333MHz) will be best, followed by Config #3a (1066MHz), followed by Config #4a (800MHz).

Note: All of the configurations in the above example have exactly 16GB of memory. However, the near balanced configuration shown on the previous page (6 x 2GB and 6 x 1GB, identified as Config #1a) will likely outperform configurations #2a, #3a & #4a when all are running at the same frequency (either 800 or 1066MHz, where possible).
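The frequency assumptions in the comparison above (1 DPC up to 1333MHz, 2 DPC up to 1066MHz, 3 DPC at 800MHz, before any processor SKU limits) can be captured in a small helper. This is an illustrative sketch with names of my choosing; the quad-rank caps it applies are the ones discussed later in section 5.7:

```python
def max_dimm_frequency(dpc, quad_rank=False):
    """Maximum DDR3 frequency (MHz) allowed by DIMMs-per-channel (DPC).

    Baseline: 1 DPC -> 1333, 2 DPC -> 1066, 3 DPC -> 800 (Xeon 5500
    series population rules; the processor SKU may cap this further).
    Quad-rank DIMMs are capped at 1066MHz (1 DPC) / 800MHz (2 DPC)
    and cannot run at 3 DPC.  Sketch only, not a BIOS algorithm.
    """
    if dpc not in (1, 2, 3):
        raise ValueError("1-3 DIMMs per channel are supported")
    if quad_rank and dpc == 3:
        raise ValueError("3 DPC is not supported with quad-rank DIMMs")
    freq = {1: 1333, 2: 1066, 3: 800}[dpc]
    if quad_rank:
        freq = min(freq, {1: 1066, 2: 800}[dpc])
    return freq

# Configs #2a, #3a and #4a above have 1, 2 and 3 DIMMs per channel,
# giving maximum frequencies of 1333, 1066 and 800MHz respectively.
```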
And as the data in the next section shows, Config #1a (near balanced) running at 1066MHz will likely outperform even Config #2a (un-balanced) running at 1333MHz.

Best 4GB Solutions

What is the best 4GB memory solution? When using a 32-bit OS or application that can't access more than 4GB of memory, the best performing option is to use a balanced configuration and populate all memory channels equally. This will mean you are populating more than 4GB of memory, but it will ensure the best performance, because the 4GB of memory that is used will be configured in a 3-way interleave. If you populate exactly 4GB, you will at best have a 2-way interleave (2 DIMMs per processor), which is lower performance than a 3-way interleave. Below are the best options for 4GB solutions:

With 2 processors: 6 x 1GB DIMMs (one DIMM in each of the 6 memory channels)
With 1 processor:  3 x 2GB DIMMs (one DIMM in each of the 3 memory channels)

5.5 Memory Interleaving

Interleaving memory is an important way to get the most performance out of the memory subsystem. Memory interleaving is the process of organizing memory so that adjacent sections of memory (cache lines) are physically located on different memory channels. For example, let's say you have a single processor system with 3GB of memory. If you populate 3 x 1GB DIMMs (one DIMM in each channel), the BIOS will configure the memory in a 3-channel interleave, meaning the first cache line (64 bytes) in memory will be placed on memory channel 0, the second cache line on memory channel 1, and the third cache line on memory channel 2. The 4th cache line will then be placed on channel 0, and so on, thus interleaving the memory across all three memory channels. Note: the processor typically accesses data from memory on a cache line basis.

The reason for interleaving memory is that memory needs to be pre-charged every so often, and if you continue to read from the same memory channel, and thus the same DIMM, you often will have to wait for the memory to pre-charge before you can read your data. This waiting delays your access to memory. However, if you interleave your memory accesses, you can be accessing one cache line while another channel is pre-charging. The more channels interleaved, the less likely you are to have to wait for a pre-charge.

The specific memory interleaving is determined at power-on. The BIOS looks at how the memory is populated and determines the optimal memory interleaving scheme. Note that, depending on how the memory is populated, different sections of memory will have different interleaving setups. For example, let's look at the three different memory configurations below:

            Config #1c          Config #2c          Config #3c
         CH1   CH2   CH3     CH1   CH2   CH3     CH1   CH2   CH3
  DIMM0  1GB   1GB   1GB     1GB   1GB   1GB     1GB   1GB   1GB
  DIMM1  ---   ---   ---     ---   1GB   ---     1GB   1GB   ---
  DIMM2  ---   ---   ---     ---   ---   ---     ---   ---   ---

Config #1c: The BIOS will configure memory as follows:
  0-3GB  3-way interleave (best performance)

Config #2c: The BIOS will configure memory as follows:
  0-2GB  2-way interleave (okay performance) (DIMM0 CH1/CH2)
  2-4GB  2-way interleave (okay performance) (DIMM1 CH2 / DIMM0 CH3)
  Note: The BIOS chooses the above interleaving for an unbalanced configuration to allow similar performance for all regions of memory. The other option, possible but not implemented in the BIOS reference code, was to designate 0-3GB as a 3-way interleave and 3-4GB as a 1-way interleave.

Config #3c: The BIOS will configure memory as follows:
  0-3GB  3-way interleave (best performance)
  3-5GB  2-way interleave (okay performance)

Config #1c is best, since the entire memory region will be configured in a 3-way memory interleave. The other two, however, will have some portions of memory which perform okay (2-way interleave), but not great. Thus it is most important to populate all memory channels equally, so that all regions of memory can be configured with a 3-way interleave and thus deliver the best possible performance.
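One way to picture this is a greedy pass that repeatedly interleaves across every channel that still has capacity left. The Python sketch below (the function name and representation are mine) reproduces Configs #1c and #3c exactly; note that for Config #2c the BIOS reference code instead chooses two 2-way regions, as described above, so this greedy pass illustrates the idea rather than the actual BIOS algorithm:

```python
def interleave_regions(channel_gb):
    """Greedy sketch of carving memory into interleaved regions.

    `channel_gb` gives the total GB populated on each channel, e.g.
    [2, 2, 1] for Config #3c.  Returns (start_gb, end_gb, ways)
    tuples.  Real BIOS reference code can choose differently: for
    Config #2c it prefers two 2-way regions over 3-way + 1-way.
    """
    remaining = list(channel_gb)
    regions, base = [], 0
    while any(remaining):
        live = [c for c in remaining if c > 0]
        ways, step = len(live), min(live)       # interleave across all
        regions.append((base, base + ways * step, ways))
        base += ways * step
        remaining = [c - step if c > 0 else 0 for c in remaining]
    return regions
```

For Config #3c, `interleave_regions([2, 2, 1])` yields a 0-3GB 3-way region followed by a 3-5GB 2-way region, matching the description above.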

Interleaving of memory behind the 2nd processor in a 2-socket system is determined independently of the first processor. For example, if your memory is configured as shown below for both processors:

            Processor #1        Processor #2
         CH1   CH2   CH3     CH1   CH2   CH3
  DIMM0  1GB   1GB   1GB     1GB   1GB   ---
  DIMM1  ---   ---   ---     ---   ---   ---
  DIMM2  ---   ---   ---     ---   ---   ---

The BIOS will likely configure memory as follows:
  Processor #1: 0-3GB  3-way interleave (best performance)
  Processor #2: 3-5GB  2-way interleave (okay performance)

5.6 Performance based on Memory Bandwidth

What if memory bandwidth is most important? Below are two tables (one each for the Intel Xeon processor 5500 and 5600 series, respectively) with a number of different configurations and their STREAM benchmark results. Use these results as a guideline for memory bandwidth intensive applications. The results may not be representative of a workload which is not memory bandwidth intensive.

STREAM Triad results (MB/s) by memory frequency (MHz), DIMM population (CPU 0 / CPU 1) and QPI speed (4.8, 5.86 and 6.4GT/s), for balanced configs (1-1-1 and 2-2-2 at 1333, 1066 and 800MHz) and unbalanced configs (including 2-2-0). [Numeric results not recoverable in this transcription.]

Table 12 -- Memory bandwidth for different memory configurations (Xeon 5500 Series)

Notes:
- 1-1-1 means one DIMM in each memory channel (all identical DIMMs).
- 2-2-2 means two DIMMs in each memory channel (all identical DIMMs).
- 2-2-0 means two DIMMs in each of the first two channels, and no DIMMs in the third memory channel.
- Source: Intel internal measurements, March 2009 (Xeon E5540 w/5.86GT/s QPI, Xeon X5570 w/6.4GT/s QPI)
- ** These are actually balanced configurations, but are listed here for easy reference to show the bandwidth differences between balanced and unbalanced configurations.

STREAM Triad results (MB/s) by memory frequency (MHz), DIMM population (CPU 0 / CPU 1) and QPI speed (4.8, 5.86 and 6.4GT/s), for balanced configs. [Most numeric results were either not yet measured (TBD) or not recoverable in this transcription.]

Table 13 -- Memory bandwidth for different memory configurations (Xeon 5600 Series)

Notes:
- 1-1-1 means one DIMM in each memory channel (all identical DIMMs).
- 2-2-2 means two DIMMs in each memory channel (all identical DIMMs).
- 2-2-0 means two DIMMs in each of the first two channels, and no DIMMs in the third memory channel.
- Source: Intel internal measurements, February 2010 (Xeon E5507 w/4.8GT/s QPI, Xeon E5640 w/5.86GT/s QPI, Xeon X5680 w/6.4GT/s QPI)
- Unbalanced configurations are not shown/tested, but the relative performance impact should be similar to that seen with the 5500 series in Table 12.

5.7 Other Memory Performance Details

All things being equal, other variables may affect application performance. Below is a list of some of these variables.

UDIMM vs. RDIMM

UDIMMs tend to have slightly better performance than RDIMMs due to lower latencies, but the difference is very small and does vary by application. For example, with 1 DPC the two have nearly identical bandwidth; however, with 2 DPC the RDIMM actually provides more maximum memory bandwidth (~10%) than the UDIMM solution. Also, UDIMMs have 1N timing with 1 DPC and 2N timing with 2 DPC, whereas RDIMMs have 1N timing at all times (1, 2 or 3 DPC). Hence UDIMM performance tends to drop when you go to 2 DPC. (1N/2N refers to the number of cycles the memory controller allows for the DIMM to latch the address and command data being sent to it, and thus determines how many cycles until the controller expects to see data returned by the DIMM on the data pins: 1N = 1 cycle, 2N = 2 cycles. Thus UDIMMs operating with 2N timing will have 1 cycle of additional latency vs. 1N timing.) UDIMMs also tend to have lower power (~0.75 watts less per DIMM).
UDIMMs typically are lower cost at lower densities (1GB and 2GB). Non-ECC UDIMMs are not recommended for server platforms.
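The 1-cycle cost of 2N timing mentioned above is easy to put in perspective with a little arithmetic. DDR3 transfers data twice per clock, so the command clock runs at half the transfer rate; the helper below (an illustrative calculation, not a measurement) converts one extra command clock into nanoseconds:

```python
def extra_2n_latency_ns(transfer_rate_mt_s):
    """Added latency of 2N vs. 1N timing: one extra command clock.

    DDR3 transfers twice per clock, so e.g. a DDR3-1333 DIMM has a
    ~666.7MHz command clock.  Pure arithmetic, not a measurement.
    """
    clock_mhz = transfer_rate_mt_s / 2.0
    return 1000.0 / clock_mhz  # one clock period, in nanoseconds

# One extra command clock at DDR3-1333 is ~1.5ns; at DDR3-800 it is
# 2.5ns.  Either is small relative to overall DRAM access latencies,
# which is why the UDIMM/RDIMM difference stays small in practice.
```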

Memory Rank (Single Rank vs. Dual Rank vs. Quad Rank)

- More ranks typically provide better performance, due to more pages potentially being open at a given time and thus better response/lower latency for some memory accesses (see the notes below)
- More ranks typically dissipate lower power

However, as noted below, quad-rank DIMMs, due to their additional electrical loading, will run at a lower frequency than similar memory configurations with fewer ranks. As such, dual-rank DIMMs are typically the best option for performance.

Notes about quad-rank (QR) DIMMs:
- The maximum memory frequency with a QR DIMM is 1066MHz
- If a QR DIMM is installed in the platform, the BIOS will reduce the frequency to 1066MHz at 1 DPC and 800MHz at 2 DPC, even if the DIMM supports 1333MHz (DPC = DIMMs per channel)
- 3 DPC is not supported with QR DIMMs
- So while QR DIMMs may have higher performance than SR or DR DIMMs at the same frequency, if you have to run the QR DIMMs at a lower frequency than equivalent DR or SR configurations, the QR DIMMs probably will not provide higher performance.

DIMM vs. LV DIMM (Xeon 5600 only)

LV (Low Voltage) DIMMs simply run at a lower voltage, and thus lower power. While the power difference will vary between specific products and vendors, we expect the power savings to be:

  Power savings at idle:             ~0.5W per DIMM
  Power savings at 100% utilization: ~1.5-2W per DIMM

All other things being equal, an LV DIMM will perform the same as a regular voltage DIMM. One exception is that LV DIMMs are limited to a maximum of 2 DPC; 3 DPC operation is not supported with LV DIMMs due to their lower voltage and thus lower signal drive capabilities.

DIMM Configuration (x4 vs. x8 vs. x16)

- There is no performance difference between x4, x8 and x16 devices
- DIMMs built with x16 devices tend to be lower power than x8, which tend to be lower power than x4

6 4-Step Guide to Picking the Optimal Memory Configuration

The Intel Xeon processor 5500 & 5600 series have many options for configuring memory. Consequently there are many possible combinations, and identifying the best configuration for a given customer is often difficult, since it very much depends on many variables and on what is most important. The variables typically are, but are not limited to, the following:

- Desired capacity
- Speed
- Cost
- Power
- Application
- Sensitivity to latency
- Sensitivity to bandwidth
- Upgradability
- # DIMMs supported on the platform

Below is a suggested 4-step process for determining the optimal memory configuration for a given application:

Step 1: Answer the following questions:

- Is my application most sensitive to capacity or to memory bandwidth?
  - If memory bandwidth sensitive:
    - Stick to balanced or near balanced configurations
    - Stick to higher frequency, lower DIMM per channel (DPC) configs
  - If capacity sensitive:
    - Lower frequency configs may be okay
    - Higher DPC solutions may offer better price/performance by using more DIMMs of lower capacity per DIMM (due to better interleaving with more DIMMs)
- Which processor am I going to use (not all processors support all memory frequencies)?
  - If using Advanced processor SKUs: all memory options are available
  - If using Standard processor SKUs: 1333MHz memory options are not available. Configurations with 2 DPC could be better than configurations with 1 DPC (lower cost and higher performance: both will run at 1066MHz, and more DPC means better memory interleaving)
  - If using Basic processor SKUs: only 800MHz memory options are available. Configurations with 3 DPC could be better than configurations with 1 or 2 DPC (lower cost and higher performance: all will run at 800MHz, and more DPC means better memory interleaving)
- How sensitive am I to cost and power?
  - Higher density DIMMs tend to cost more (especially 8GB DIMMs)
  - But higher density DIMMs, and thus fewer total DIMMs in the system, tend to dissipate less power
  - Higher density DIMMs also tend to have more ranks, and higher ranked DIMMs tend to have lower power (fewer devices are active at a given time), higher performance (better interleaving) and lower cost (older and cheaper silicon technology is used)

Once you have answered the above questions, you have probably narrowed down the choices, and it is easier to start making optimizations within your acceptable solution set.

Step 2: Pick a balanced memory configuration which best meets your criteria. If you can pick one, you are done. If not, go to step 3.

Step 3: Pick a near balanced memory configuration which best meets your criteria.
If you can pick one, you are done. If not, go to step 4.

Step 4: Pick an un-balanced configuration using the following guidelines:

- All things being equal, faster memory will result in better performance (e.g. 1066MHz is better than 800MHz)
- Populate an equal number of DIMMs on each processor
- Try to populate at least one DIMM in each memory channel
- If you have unequal numbers of DIMMs in the memory channels, never allow the difference in DIMM count between channels to be more than one (e.g. don't have 3 DIMMs in one channel and 1 DIMM in another channel). (Exceptions are Lock-step or Mirroring configurations, where the 3rd memory channel isn't used.)
- All else being the same (e.g. same speed and same total memory size), more DIMMs per channel will likely give equal or better performance than fewer DIMMs per channel (e.g. 12 x 2GB running at 1066MHz is probably equal or better than 6 x 4GB at 1066MHz). This is because with more DIMMs on each channel, more memory pages may be open, resulting in reduced latency from better memory interleaving.
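Steps 2 through 4 amount to a preference order: balanced beats near balanced beats unbalanced, then higher frequency, then more DIMMs per channel. A minimal Python sketch of that ranking follows; the candidate dictionary fields are invented for this example:

```python
def pick_configuration(candidates):
    """Rank candidate configs per the 4-step guide (sketch only).

    Each candidate is a dict with 'kind' ('balanced', 'near balanced'
    or 'unbalanced'), 'freq_mhz' and 'dpc'.  Prefers balanced over
    near balanced over unbalanced (steps 2-4), then higher frequency,
    then more DIMMs per channel (better interleaving).
    """
    order = {"balanced": 0, "near balanced": 1, "unbalanced": 2}
    return min(candidates,
               key=lambda c: (order[c["kind"]], -c["freq_mhz"], -c["dpc"]))
```

For instance, this ranking would place a near balanced config at 1066MHz ahead of an unbalanced config at 1333MHz, consistent with the earlier observation about Config #1a vs. Config #2a.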


More information

Managing server DRAM configuration to optimise ROI within hosting environments

Managing server DRAM configuration to optimise ROI within hosting environments Managing server DRAM configuration to optimise ROI within hosting environments Richard Kanadjian richard_kanadjian@kingston.com Manager, Field Applications Engineering Kingston Technology Disclaimer: All

More information

Configuring Memory on the HP Business Desktop dx5150

Configuring Memory on the HP Business Desktop dx5150 Configuring Memory on the HP Business Desktop dx5150 Abstract... 2 Glossary of Terms... 2 Introduction... 2 Main Memory Configuration... 3 Single-channel vs. Dual-channel... 3 Memory Type and Speed...

More information

White Paper FUJITSU Server PRIMERGY & PRIMEQUEST Memory Performance of Xeon E7 v3 (Haswell-EX) based Systems

White Paper FUJITSU Server PRIMERGY & PRIMEQUEST Memory Performance of Xeon E7 v3 (Haswell-EX) based Systems White Paper Memory Performance of Xeon E7 v3 (Haswell-EX) based Systems White Paper FUJITSU Server PRIMERGY & PRIMEQUEST Memory Performance of Xeon E7 v3 (Haswell-EX) based Systems The Xeon E7 v3 (Haswell-EX)

More information

Performance Benchmarking for PCIe* and NVMe* Enterprise Solid-State Drives

Performance Benchmarking for PCIe* and NVMe* Enterprise Solid-State Drives Performance Benchmarking for PCIe* and NVMe* Enterprise Solid-State Drives Order Number: 330909-003US INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR

More information

Intel Storage System SSR212CC

Intel Storage System SSR212CC Intel Storage System SSR212CC RAID Sets & Volume Capacity Application Note Server Products Group Technical Marketing Revision History Intel Storage System SSR212CC Revision History Date Revision Number

More information

Intel X58 Express Chipset

Intel X58 Express Chipset Product Brief Intel X58 Express Chipset Highest performing desktop platform for extreme gamers and demanding enthusiasts Desktop PC platforms based on the Intel X58 Express Chipset and Intel Core i7 processor

More information

2013 Server SSD & Memory Configurations

2013 Server SSD & Memory Configurations 2013 Server SSD & Memory Configurations Kingston Technology Europe Co LLP Pasi Siukonen What does optimization mean to you? Sweat your assets? Intelligent system configuration? Getting more bang for your

More information

Intel Solid-State Drives Increase Productivity of Product Design and Simulation

Intel Solid-State Drives Increase Productivity of Product Design and Simulation WHITE PAPER Intel Solid-State Drives Increase Productivity of Product Design and Simulation Intel Solid-State Drives Increase Productivity of Product Design and Simulation A study of how Intel Solid-State

More information

Configuring RAID for Optimal Performance

Configuring RAID for Optimal Performance Configuring RAID for Optimal Performance Intel RAID Controller SRCSASJV Intel RAID Controller SRCSASRB Intel RAID Controller SRCSASBB8I Intel RAID Controller SRCSASLS4I Intel RAID Controller SRCSATAWB

More information

Intel Itanium Quad-Core Architecture for the Enterprise. Lambert Schaelicke Eric DeLano

Intel Itanium Quad-Core Architecture for the Enterprise. Lambert Schaelicke Eric DeLano Intel Itanium Quad-Core Architecture for the Enterprise Lambert Schaelicke Eric DeLano Agenda Introduction Intel Itanium Roadmap Intel Itanium Processor 9300 Series Overview Key Features Pipeline Overview

More information

Itanium 2 Platform and Technologies. Alexander Grudinski Business Solution Specialist Intel Corporation

Itanium 2 Platform and Technologies. Alexander Grudinski Business Solution Specialist Intel Corporation Itanium 2 Platform and Technologies Alexander Grudinski Business Solution Specialist Intel Corporation Intel s s Itanium platform Top 500 lists: Intel leads with 84 Itanium 2-based systems Continued growth

More information

Leading Virtualization 2.0

Leading Virtualization 2.0 Leading Virtualization 2.0 How Intel is driving virtualization beyond consolidation into a solution for maximizing business agility within the enterprise White Paper Intel Virtualization Technology (Intel

More information

Intel Core TM i3 Processor Series Embedded Application Power Guideline Addendum

Intel Core TM i3 Processor Series Embedded Application Power Guideline Addendum Intel Core TM i3 Processor Series Embedded Application Power Guideline Addendum July 2012 Document Number: 327705-001 INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE,

More information

Accelerating Business Intelligence with Large-Scale System Memory

Accelerating Business Intelligence with Large-Scale System Memory Accelerating Business Intelligence with Large-Scale System Memory A Proof of Concept by Intel, Samsung, and SAP Executive Summary Real-time business intelligence (BI) plays a vital role in driving competitiveness

More information

Innovativste XEON Prozessortechnik für Cisco UCS

Innovativste XEON Prozessortechnik für Cisco UCS Innovativste XEON Prozessortechnik für Cisco UCS Stefanie Döhler Wien, 17. November 2010 1 Tick-Tock Development Model Sustained Microprocessor Leadership Tick Tock Tick 65nm Tock Tick 45nm Tock Tick 32nm

More information

The Transition to PCI Express* for Client SSDs

The Transition to PCI Express* for Client SSDs The Transition to PCI Express* for Client SSDs Amber Huffman Senior Principal Engineer Intel Santa Clara, CA 1 *Other names and brands may be claimed as the property of others. Legal Notices and Disclaimers

More information

Intel Desktop Board DP55WB

Intel Desktop Board DP55WB Intel Desktop Board DP55WB Specification Update July 2010 Order Number: E80453-004US The Intel Desktop Board DP55WB may contain design defects or errors known as errata, which may cause the product to

More information

Evaluating Intel Virtualization Technology FlexMigration with Multi-generation Intel Multi-core and Intel Dual-core Xeon Processors.

Evaluating Intel Virtualization Technology FlexMigration with Multi-generation Intel Multi-core and Intel Dual-core Xeon Processors. Evaluating Intel Virtualization Technology FlexMigration with Multi-generation Intel Multi-core and Intel Dual-core Xeon Processors. Executive Summary: In today s data centers, live migration is a required

More information

Intel Q45 and Q43 Express Chipsets

Intel Q45 and Q43 Express Chipsets Product Brief Intel Q45 and Q43 Express Chipsets Advancing business solutions by enhancing manageability and security The new Intel Q45 and Q43 Express Chipsets, when combined with the Intel Core 2 processor

More information

Accelerating Business Intelligence with Large-Scale System Memory

Accelerating Business Intelligence with Large-Scale System Memory Accelerating Business Intelligence with Large-Scale System Memory A Proof of Concept by Intel, Samsung, and SAP Executive Summary Real-time business intelligence (BI) plays a vital role in driving competitiveness

More information

Intel Matrix Storage Console

Intel Matrix Storage Console Intel Matrix Storage Console Reference Content January 2010 Revision 1.0 INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE, EXPRESS OR IMPLIED, BY ESTOPPEL OR OTHERWISE,

More information

Intel 852GM / 852GMV Chipset Graphics and Memory Controller Hub (GMCH)

Intel 852GM / 852GMV Chipset Graphics and Memory Controller Hub (GMCH) Intel 852GM / 852GMV Chipset Graphics and Memory Controller Hub (GMCH) Specification Update November 2004 Notice: The Intel 852GM/852GMV chipset may contain design defects or errors known as errata, which

More information

Enabling Cloud Computing and Server Virtualization with Improved Power Efficiency

Enabling Cloud Computing and Server Virtualization with Improved Power Efficiency Enabling Cloud Computing and Server Virtualization with Improved Power Efficiency I. Enabling cloud computing and server virtualization without power penalties Enterprise servers are the workhorses of

More information

ThinkServer PC3-10600 DDR3 1333MHz UDIMM and RDIMM PC3-8500 DDR3 1066MHz RDIMM options for the next generation of ThinkServer systems TS200 and RS210

ThinkServer PC3-10600 DDR3 1333MHz UDIMM and RDIMM PC3-8500 DDR3 1066MHz RDIMM options for the next generation of ThinkServer systems TS200 and RS210 Hardware Announcement ZG09-0894, dated vember 24, 2009 ThinkServer PC3-10600 DDR3 1333MHz UDIMM and RDIMM PC3-8500 DDR3 1066MHz RDIMM options for the next generation of ThinkServer systems TS200 and RS210

More information

ThinkServer PC2-5300 DDR2 FBDIMM and PC2-6400 DDR2 SDRAM Memory options boost overall performance of ThinkServer solutions

ThinkServer PC2-5300 DDR2 FBDIMM and PC2-6400 DDR2 SDRAM Memory options boost overall performance of ThinkServer solutions , dated September 30, 2008 ThinkServer PC2-5300 DDR2 FBDIMM and PC2-6400 DDR2 SDRAM Memory options boost overall performance of ThinkServer solutions Table of contents 2 Key prerequisites 2 Product number

More information

Intel Atom Processor E3800 Product Family

Intel Atom Processor E3800 Product Family Intel Atom Processor E3800 Product Family Thermal Design Guide October 2013 Document Number: 329645-001 Legal Lines and Disclaimers INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS.

More information

Measuring Processor Power

Measuring Processor Power White Paper Intel Xeon Processor Processor Architecture Analysis Measuring Processor Power TDP vs. ACP Specifications for the power a microprocessor can consume and dissipate can be complicated and may

More information

New Dimensions in Configurable Computing at runtime simultaneously allows Big Data and fine Grain HPC

New Dimensions in Configurable Computing at runtime simultaneously allows Big Data and fine Grain HPC New Dimensions in Configurable Computing at runtime simultaneously allows Big Data and fine Grain HPC Alan Gara Intel Fellow Exascale Chief Architect Legal Disclaimer Today s presentations contain forward-looking

More information

Memory - DDR1, DDR2, and DDR3. Brought to you by please visit our site!

Memory - DDR1, DDR2, and DDR3. Brought to you by  please visit our site! Memory - DDR1, DDR2, and DDR3 Brought to you by http://www.rmroberts.com please visit our site! DDR1 Double Data Rate-SDRAM, or simply DDR1, was designed to replace SDRAM. DDR1 was originally referred

More information

The MAX5 Advantage: Clients Benefit running Microsoft SQL Server Data Warehouse (Workloads) on IBM BladeCenter HX5 with IBM MAX5.

The MAX5 Advantage: Clients Benefit running Microsoft SQL Server Data Warehouse (Workloads) on IBM BladeCenter HX5 with IBM MAX5. Performance benefit of MAX5 for databases The MAX5 Advantage: Clients Benefit running Microsoft SQL Server Data Warehouse (Workloads) on IBM BladeCenter HX5 with IBM MAX5 Vinay Kulkarni Kent Swalin IBM

More information

Intel Cloud Builder Guide to Cloud Design and Deployment on Intel Platforms

Intel Cloud Builder Guide to Cloud Design and Deployment on Intel Platforms Intel Cloud Builder Guide to Cloud Design and Deployment on Intel Platforms Ubuntu* Enterprise Cloud Executive Summary Intel Cloud Builder Guide Intel Xeon Processor Ubuntu* Enteprise Cloud Canonical*

More information

Intel Virtualization and Server Technology Update

Intel Virtualization and Server Technology Update Intel Virtualization and Server Technology Update Petar Torre Lead Architect Service Provider Group 29 March 2012 1 Legal Disclaimer Intel may make changes to specifications and product descriptions at

More information

WHITE PAPER FUJITSU PRIMERGY SERVERS MEMORY PERFORMANCE OF XEON E7-8800/4800/2800 (WESTMERE-EX) BASED SYSTEMS

WHITE PAPER FUJITSU PRIMERGY SERVERS MEMORY PERFORMANCE OF XEON E7-8800/4800/2800 (WESTMERE-EX) BASED SYSTEMS WHITE PAPER MEMORY PERFORMANCE OF WESTMERE-EX BASED SYSTEMS WHITE PAPER FUJITSU PRIMERGY SERVERS MEMORY PERFORMANCE OF XEON E7-8800/4800/2800 (WESTMERE-EX) BASED SYSTEMS The Xeon E7-8800/4800/2800 (Westmere-EX)

More information

Leading Virtualization Performance and Energy Efficiency in a Multi-processor Server

Leading Virtualization Performance and Energy Efficiency in a Multi-processor Server Leading Virtualization Performance and Energy Efficiency in a Multi-processor Server Product Brief Intel Xeon processor 7400 series Fewer servers. More performance. With the architecture that s specifically

More information

Achieving a High Performance OLTP Database using SQL Server and Dell PowerEdge R720 with Internal PCIe SSD Storage

Achieving a High Performance OLTP Database using SQL Server and Dell PowerEdge R720 with Internal PCIe SSD Storage Achieving a High Performance OLTP Database using SQL Server and Dell PowerEdge R720 with This Dell Technical White Paper discusses the OLTP performance benefit achieved on a SQL Server database using a

More information

Bandwidth Calculations for SA-1100 Processor LCD Displays

Bandwidth Calculations for SA-1100 Processor LCD Displays Bandwidth Calculations for SA-1100 Processor LCD Displays Application Note February 1999 Order Number: 278270-001 Information in this document is provided in connection with Intel products. No license,

More information

Data Sheet FUJITSU Server PRIMERGY CX272 S1 Dual socket server node for PRIMERGY CX420 cluster server

Data Sheet FUJITSU Server PRIMERGY CX272 S1 Dual socket server node for PRIMERGY CX420 cluster server Data Sheet FUJITSU Server PRIMERGY CX272 S1 Dual socket node for PRIMERGY CX420 cluster Data Sheet FUJITSU Server PRIMERGY CX272 S1 Dual socket node for PRIMERGY CX420 cluster Strong Performance and Cluster

More information

Figure 1A: Dell server and accessories Figure 1B: HP server and accessories Figure 1C: IBM server and accessories

Figure 1A: Dell server and accessories Figure 1B: HP server and accessories Figure 1C: IBM server and accessories TEST REPORT SEPTEMBER 2007 Out-of-box comparison between Dell, HP, and IBM servers Executive summary Dell Inc. (Dell) commissioned Principled Technologies (PT) to compare the out-of-box experience of a

More information

Oracle Database Reliability, Performance and scalability on Intel Xeon platforms Mitch Shults, Intel Corporation October 2011

Oracle Database Reliability, Performance and scalability on Intel Xeon platforms Mitch Shults, Intel Corporation October 2011 Oracle Database Reliability, Performance and scalability on Intel platforms Mitch Shults, Intel Corporation October 2011 1 Intel Processor E7-8800/4800/2800 Product Families Up to 10 s and 20 Threads 30MB

More information

HP DDR4 SmartMemory Is finding reliable DRAM memory for your HP ProLiant Server series in your data center a major challenge?

HP DDR4 SmartMemory Is finding reliable DRAM memory for your HP ProLiant Server series in your data center a major challenge? Overview Is finding reliable DRAM memory for your HP ProLiant Server series in your data center a major challenge? When you choose HP SmartMemory you get the same high quality, reliability, and confidence

More information

The Foundation for Better Business Intelligence

The Foundation for Better Business Intelligence Product Brief Intel Xeon Processor E7-8800/4800/2800 v2 Product Families Data Center The Foundation for Big data is changing the way organizations make business decisions. To transform petabytes of data

More information

Intel RAID RS25 Series Performance

Intel RAID RS25 Series Performance PERFORMANCE BRIEF Intel RAID RS25 Series Intel RAID RS25 Series Performance including Intel RAID Controllers RS25DB080 & PERFORMANCE SUMMARY Measured IOPS surpass 200,000 IOPS When used with Intel RAID

More information

Intel Desktop Board D925XECV2 Specification Update

Intel Desktop Board D925XECV2 Specification Update Intel Desktop Board D925XECV2 Specification Update Release Date: July 2006 Order Number: C94210-005US The Intel Desktop Board D925XECV2 may contain design defects or errors known as errata, which may cause

More information

Intel Ethernet and Configuring Single Root I/O Virtualization (SR-IOV) on Microsoft* Windows* Server 2012 Hyper-V. Technical Brief v1.

Intel Ethernet and Configuring Single Root I/O Virtualization (SR-IOV) on Microsoft* Windows* Server 2012 Hyper-V. Technical Brief v1. Intel Ethernet and Configuring Single Root I/O Virtualization (SR-IOV) on Microsoft* Windows* Server 2012 Hyper-V Technical Brief v1.0 September 2012 2 Intel Ethernet and Configuring SR-IOV on Windows*

More information

Intel Media SDK Library Distribution and Dispatching Process

Intel Media SDK Library Distribution and Dispatching Process Intel Media SDK Library Distribution and Dispatching Process Overview Dispatching Procedure Software Libraries Platform-Specific Libraries Legal Information Overview This document describes the Intel Media

More information

Three Paths to Faster Simulations Using ANSYS Mechanical 16.0 and Intel Architecture

Three Paths to Faster Simulations Using ANSYS Mechanical 16.0 and Intel Architecture White Paper Intel Xeon processor E5 v3 family Intel Xeon Phi coprocessor family Digital Design and Engineering Three Paths to Faster Simulations Using ANSYS Mechanical 16.0 and Intel Architecture Executive

More information

Accelerating Data Compression with Intel Multi-Core Processors

Accelerating Data Compression with Intel Multi-Core Processors Case Study Predictive Enterprise Intel Xeon processors Intel Server Board Embedded technology Accelerating Data Compression with Intel Multi-Core Processors Data Domain incorporates Multi-Core Intel Xeon

More information

QUESTIONS & ANSWERS. ItB tender 72-09: IT Equipment. Elections Project

QUESTIONS & ANSWERS. ItB tender 72-09: IT Equipment. Elections Project QUESTIONS & ANSWERS ItB tender 72-09: IT Equipment. Elections Project In lot 1, position 1 - Server for Data Base 1. Q: You order Microsoft Windows Server 2008, 64 bit, Enterprise, License with 25 or more

More information

Power efficiency and power management in HP ProLiant servers

Power efficiency and power management in HP ProLiant servers Power efficiency and power management in HP ProLiant servers Technology brief Introduction... 2 Built-in power efficiencies in ProLiant servers... 2 Optimizing internal cooling and fan power with Sea of

More information

How to Configure Intel Ethernet Converged Network Adapter-Enabled Virtual Functions on VMware* ESXi* 5.1

How to Configure Intel Ethernet Converged Network Adapter-Enabled Virtual Functions on VMware* ESXi* 5.1 How to Configure Intel Ethernet Converged Network Adapter-Enabled Virtual Functions on VMware* ESXi* 5.1 Technical Brief v1.0 February 2013 Legal Lines and Disclaimers INFORMATION IN THIS DOCUMENT IS PROVIDED

More information

Intel Xeon Processor E5-2600

Intel Xeon Processor E5-2600 Intel Xeon Processor E5-2600 Best combination of performance, power efficiency, and cost. Platform Microarchitecture Processor Socket Chipset Intel Xeon E5 Series Processors and the Intel C600 Chipset

More information

Memory technology evolution: an overview of system memory technologies

Memory technology evolution: an overview of system memory technologies Memory technology evolution: an overview of system memory technologies Technology brief, 9 th edition Introduction... 2 Basic DRAM operation... 2 DRAM storage density and power consumption... 4 Memory

More information

Intel architecture. Platform Basics. White Paper Todd Langley Systems Engineer/ Architect Intel Corporation. September 2010

Intel architecture. Platform Basics. White Paper Todd Langley Systems Engineer/ Architect Intel Corporation. September 2010 White Paper Todd Langley Systems Engineer/ Architect Intel Corporation Intel architecture Platform Basics September 2010 324377 Executive Summary Creating an Intel architecture design encompasses some

More information

Intel RAID Controllers

Intel RAID Controllers Intel RAID Controllers Best Practices White Paper April, 2008 Enterprise Platforms and Services Division - Marketing Revision History Date Revision Number April, 2008 1.0 Initial release. Modifications

More information

ThinkServer PC DDR3 1333MHz UDIMM and RDIMM PC DDR3 1066MHz RDIMM options for the next generation of ThinkServer systems TS200 and RS210

ThinkServer PC DDR3 1333MHz UDIMM and RDIMM PC DDR3 1066MHz RDIMM options for the next generation of ThinkServer systems TS200 and RS210 Announcement AG09-0790, dated vember 24, 2009 ThinkServer PC3-10600 DDR3 1333MHz UDIMM and RDIMM PC3-8500 DDR3 1066MHz RDIMM options for the next generation of ThinkServer systems TS200 and RS210 Table

More information

Data Sheet Fujitsu PRIMERGY CX122 S1 Cloud server unit for PRIMERGY CX1000

Data Sheet Fujitsu PRIMERGY CX122 S1 Cloud server unit for PRIMERGY CX1000 Data Sheet Fujitsu PRIMERGY CX122 S1 Cloud server unit for PRIMERGY CX1000 Datasheet for Red Hat certification PRIMERGY CX1000 is a new product category within the PRIMERGY x86 server family. Its focus

More information

T-Platforms V210 Series Mainboard Specification

T-Platforms V210 Series Mainboard Specification T-Platforms V210 Series Mainboard Specification 2014 Contents 1 Mainboard SKUs... 3 2 Specification... 4 2.1 Mainboard Functional Diagram... 6 2.2 Mainboard Details... 7 Appendix A. Intel Xeon Processor

More information

Communicating with devices

Communicating with devices Introduction to I/O Where does the data for our CPU and memory come from or go to? Computers communicate with the outside world via I/O devices. Input devices supply computers with data to operate on.

More information