Windows 8 SMB 2.2 File Sharing Performance

Abstract

This paper provides a preliminary analysis of the performance capabilities of the Server Message Block (SMB) 2.2 file sharing protocol with 10 gigabit Ethernet interfaces. The Multi-Channel feature, introduced with SMB 2.2 in Windows Developer Preview, enables the use of multiple physical network interfaces in an SMB 2.2 client and server. This paper assumes the reader is familiar with the basics of SMB file sharing, networking technologies, and file system performance measurement with the Iometer tool.

This information applies to the following operating systems:
Windows Developer Preview
Windows Server Developer Preview

The current version of this paper is maintained on the Web at: Windows 8 SMB 2.2 File Sharing Performance

Disclaimer: This document is provided as-is. Information and views expressed in this document, including URL and other Internet website references, may change without notice. Some information relates to a pre-release product which may be substantially modified before it is commercially released. Microsoft makes no warranties, express or implied, with respect to the information provided here. You bear the risk of using it. This document does not provide you with any legal rights to any intellectual property in any Microsoft product. You may copy and use this document for your internal, reference purposes.
Contents

Introduction
Hardware Shifts and Trends
Targeted Workloads
SMB Connection Scaling
  Connection Resiliency
  Network Utilization
  Load Balancing
  Transport Flexibility
Multi-Channel Behaviors
Multi-Channel Performance (Preliminary)
  Test Environment
    Hardware Configuration
    Iometer Configuration
    Non-cached I/O
  Server Baseline
    Server Read Performance
    Server Write Performance
  SMB 2.2 Client Performance
    Server vs. SMB 2.2 Client Throughput Comparison
    Operations Per Second and CPU Consumption Comparison
    SMB 2.2 Client Network Interface Scaling
Summary
Introduction

With versions 2.0 and 2.1 of the Server Message Block (SMB) protocol, the SMB client associates a session with a single TCP connection. The session represents a single authentication context between the SMB client and the server. In these versions of SMB, the association of an SMB session to a single TCP connection is unduly limiting for resource utilization and connection resiliency. The one-to-one mapping of a session to a connection is sufficient for single-user SMB clients that host simple applications with limited demands on networking and system resources. Once the use cases are expanded to include large multiuser or multiserver application environments, the one-to-one session-to-connection limitation represents a bottleneck for CPU, network, and storage utilization.

Issues of resiliency are also present when a single TCP connection is used. Failure of a network interface or intervening network route can result in a loss of connectivity for the SMB client and its applications.

By introducing more than one connection per session, the SMB client gains several advantages:
- If a network interface or network route failure occurs, the SMB client can recover connectivity to the server.
- The SMB client can be more adaptable to the server hardware configuration and provide flexibility for an ever-changing network infrastructure.
- The SMB client and server can scale proportionately with gains in CPU and networking capabilities.

This white paper focuses on the scaling capability that can be used by SMB 2.2 clients and servers. The feature of using multiple TCP or Remote Direct Memory Access (RDMA) connections over one or more physical network interfaces is called SMB 2.2 Multi-Channel. The Multi-Channel feature is just one portion of a larger scope of new capabilities that are provided in Windows Developer Preview.
SMB 2.2 integrates the capabilities of RDMA networking technologies in a way that provides greater performance scale and reduced CPU utilization. These RDMA-based networking technologies include the following:
- InfiniBand
- Internet Wide Area RDMA Protocol (iWARP)
- RDMA over Converged Ethernet (RoCE)

To fully utilize the resiliency capabilities of the Multi-Channel feature, the SMB 2.2 protocol introduces resiliency mechanisms in the SMB 2.2 client and server. These mechanisms allow the client and server to fully recover from network connection faults and server failures.
Note: The results in this paper are based on the performance of Windows Developer Preview. Therefore, the results in this paper are preliminary and do not represent any future capabilities.

Hardware Shifts and Trends

There are many system hardware capabilities that are well established and continue to trend positively. The SMB protocol, as used in client and server implementations, has adapted to a degree. However, there is room for improvement toward full utilization of system features and performance. For example, CPU socket and core counts continue their trend upward. The combination of this CPU capability with virtualization is adding to the requirements for ease of management and greater performance capabilities of the overall storage system. Therefore, the SMB client and server have to offer an adaptable platform for the full range of system capability. The ability to scale "up" must be delivered without burdening smaller systems to accommodate larger ones. For example, the SMB client and server have to account for the full range of networking interfaces, such as the following interface types:
- WLANs
- WWANs
- MANs
- Standard Ethernet (100 megabits per second or greater)
- Hyper-V virtual interfaces
- Receive-side Scaling (RSS) capable interfaces
- Link-aggregated interfaces, such as load balancing and failover (LBFO)
- RDMA-based interfaces (iWARP, InfiniBand, RoCE)

At the SMB file server, individual disks can be aggregated into a single logical unit number (LUN) or volume. This practice has long been established with either hardware-based or software-based RAID solutions. These aggregated LUNs provide the best throughput when handling a reasonable queue of outstanding I/O requests. For a workload like SQL Server, the queued I/O count is best at two per physical device.
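The rule of thumb above can be turned into simple arithmetic relating aggregation, queue depth, and latency. The disk count and IOPS figures below are hypothetical, chosen purely for illustration:

```python
def total_queue_depth(num_disks, per_disk_queue=2):
    """Outstanding I/Os needed to keep an aggregated LUN busy, using the
    rule of thumb of about two queued I/Os per physical device."""
    return num_disks * per_disk_queue

def avg_latency_ms(queue_depth, iops):
    """Little's Law: average latency = outstanding requests / throughput."""
    return queue_depth / iops * 1000.0

# A hypothetical 24-disk aggregated LUN sustaining 12,000 IOPS:
qd = total_queue_depth(24)        # 48 outstanding I/Os
lat = avg_latency_ms(qd, 12_000)  # 4.0 ms average latency
```

The same relation explains the Iometer tuning later in this paper: raising queue depth raises throughput only until the devices saturate, after which it simply raises latency.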
With the introduction of flash-based storage devices, such as PCIe flash devices or solid-state drives (SSDs), file server storage characteristics are changing. For example, flash-based storage devices may have several internal memory channels and exhibit the best throughput when handling concurrent or larger I/O requests. Storage subsystems are being delivered to the market that automatically and transparently tier data between SSDs and traditional spinning disks. These tiered solutions deliver good performance for both random and sequential workloads while delivering the capacity of traditional storage subsystems.

Targeted Workloads

As noted, the SMB client must be effective at handling the wide range of network environments and general system types (client and server). The same is true of
application workloads. The focus on one workload or class of workloads must not come at the expense of others. For Windows Server Developer Preview, one major focus area has been server application workloads, such as SQL Server workloads. In terms of storage I/O requests or transactions per second, the SMB client and server must be able to deliver capabilities comparable to local storage. Virtualization via Hyper-V and its use of SMB 2.2 storage provides another major conduit of server application workloads, such as the following:
- SQL Server running across a set of virtual machines (VMs).
- Virtual desktop environments that are deployed and executed in large numbers on a single system.

By hosting this variety of applications, the SMB 2.2 client will, by definition, service a variety of workload types, ranging from small, random I/O requests to large, sequential I/O requests. Within a virtualized environment, the SMB 2.2 client will not only service a variety of workloads, but will do so simultaneously. While the I/O patterns will vary, there are stringent requirements for storage resiliency in this environment. Server applications require resilient access to storage. As a result, the SMB 2.2 client must adapt to that requirement along with the new workload patterns.

SMB Connection Scaling

Given the capabilities of hardware, changes in storage workloads, and resiliency requirements, the SMB 2.2 client must move beyond its history and adapt to more effective use of network connections. An SMB 2.1 session is an authenticated context established between a client and a server over a single TCP connection for a specific security principal. This is depicted in Figure 1.

Figure 1: SMB 2.1 Session to Connection Association

The one-to-one association between a session and TCP connection works reasonably well when there is a limited need for bandwidth scaling.
A single TCP connection is very effective for a 1 gigabit Ethernet (GbE) network interface or a reasonable portion of a 10 GbE network interface. However, a single TCP connection can be limiting because of CPU loading by other workloads on the system that compete for resources that are needed for TCP processing. If multiple security principals are present, multiple TCP connections can be used to provide reasonable network scaling. Unfortunately, many server application workloads use one or a limited set of security principals. Therefore, the scaling for throughput and I/O request handling is limited in an environment where each session is mapped to a single TCP connection.
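The CPU-side limit of a single connection follows from how RSS-capable hardware distributes work: the NIC hashes each packet's TCP 4-tuple and uses the result to pick a receive core, so one connection always lands on one core. The sketch below substitutes SHA-256 for the Toeplitz hash that real NICs use, and the addresses, port numbers, and core count are hypothetical:

```python
import hashlib

def rss_core(src_ip, src_port, dst_ip, dst_port, num_cores=12):
    # Hash the TCP 4-tuple and index into the core set, as an RSS-capable
    # NIC would (real hardware uses a Toeplitz hash and an indirection table).
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % num_cores

# One connection: every packet of the flow hashes to the same core.
one_flow = {rss_core("10.0.0.1", 50000, "10.0.0.2", 445) for _ in range(1000)}

# Eight connections for one session (eight source ports): receive
# processing can spread across cores.
many_flows = {rss_core("10.0.0.1", p, "10.0.0.2", 445) for p in range(50000, 50008)}
```

A single flow always yields a single-element set of cores; multiple flows can occupy multiple cores, which is exactly the scaling a session pinned to one TCP connection forgoes.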
If many TCP connections are present, RSS can be used effectively to distribute the workload across available CPU resources. With a single TCP connection, CPU scaling is unavailable. With single connections, there is also limited capability to be resilient to network interface or network path failures. With the LBFO feature in Windows Server Developer Preview, a portion of the failure modes are addressed. However, not all potential failure modes are recoverable with LBFO alone. For example, interior network path failures may not be observable at the endpoints.

To provide for effective scaling and resiliency, the SMB 2.2 protocol in Windows Server Developer Preview adapts by allowing for a many-to-many association between SMB 2.2 sessions and network connections, as depicted in Figure 2.

Figure 2: SMB 2.2 Session to Connection Association

With this model, not only can multiple sessions use a single connection to communicate with the server, but multiple connections can be used for a single session. This association is created dynamically and allows for an increase and decrease in session and connection counts. This also allows for a shift from a less capable network interface (such as a WLAN interface) to a network interface that may more closely match the workload (such as a 1 GbE interface).

Connection Resiliency

With the ability to associate multiple connections with a single session, the SMB 2.2 client can be resilient to connection failures. These are most likely associated with failures of a network interface or of an intermediate network component (such as a router or switch). Technologies exist that deal with both types of failures, such as LBFO for network interface failures. The user may choose not to deploy these technologies. As an alternative, another approach to connection resiliency is to provide multiple network paths between the SMB 2.2 client and server.
If the paths are of similar capability, as described later, the SMB 2.2 client will actively manage those paths for resiliency. The client does this by creating multiple connections that span all paths between client and server. If a connection fails or becomes unresponsive, the client will choose a path that remains available.

Network Utilization

Using a single TCP connection, the SMB 2.2 client and server are not assured of fully using the available bandwidth of a 10 GbE network. If multiple connections are used over a network interface that supports RSS, the SMB client and server can easily use all of the bandwidth of a 10 GbE network link. The ability to scale by using multiple connections is important to both client and server. For the client, significant networked data movement comes in the form of
SMB read operations. For the server, the incoming data movement of SMB write operations benefits from multiple connections. The load can be effectively distributed across the available CPU cores.

Load Balancing

Enabling the SMB 2.2 client to adapt its load across a set of TCP connections is important to the overall capabilities of the client. For example, shared network paths may experience uneven loading from other clients at the server interface or from routers or switches that experience congestion. Other conditions that could lead to uneven resource capability are server-side CPU loading or server storage hot spots. Most of these can be overcome or minimized by allowing the client to schedule requests dynamically across a set of connections.

Transport Flexibility

Supporting the association of one session with multiple TCP connections allows for greater flexibility. For example, if a client and server share connectivity through both 1 GbE and 10 GbE network interfaces, the client can start its server interaction over 1 GbE. To handle larger requested workloads, the client can then move the requests to one or more connections associated with the 10 GbE interface. This type of connection adaptation applies as well to shifting requests from an IP over InfiniBand (IPoIB) connection to a native InfiniBand/RDMA connection. By design, the transport types (such as TCP or RDMA) that are used for connections do not need to be homogeneous. This type of transport flexibility allows for gradual and effective deployment of an SMB 2.2 multiconnection or Multi-Channel capable solution. For example, physical network interfaces may be present in a client and server but not initially enabled. The user may choose to delay deployment of network connectivity because of switch utilization or cost, and can add connectivity at a later time. When those network interfaces become active, the SMB 2.2 client and server will dynamically adapt to their presence and use them if appropriate.
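One way to picture the scheduling decision behind this load balancing is as a cost function over the available connections. The policy below (least queued bytes relative to link speed) is a hypothetical illustration, not the algorithm Windows actually implements; the connection names and numbers are invented:

```python
def pick_connection(connections):
    """connections: list of (name, speed_gbps, queued_bytes) tuples.
    Choose the connection whose outstanding work would drain fastest,
    so a lightly loaded fast link is preferred over a busy slow one."""
    return min(connections, key=lambda c: c[2] / c[1])[0]

# A session spanning a 1 GbE and a 10 GbE interface: even with more bytes
# already queued, the 10 GbE link drains its backlog sooner.
conns = [("1GbE-a", 1, 100_000), ("10GbE-a", 10, 600_000)]
best = pick_connection(conns)
```

The same cost function naturally shifts traffic away from a congested or stalled path, since its queued byte count stops draining.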
Multi-Channel Behaviors

The SMB 2.2 Multi-Channel feature accomplishes its design goals by combining the following behaviors:
- Grouping or aggregating similar kinds of physical interfaces and associating connections with each physical interface. This provides for resiliency in the event of hardware or path failures.
- Establishing multiple connections (such as TCP) for a single physical interface. This allows for effective utilization of scalable hardware configurations.

The SMB 2.2 client can use these behaviors individually or in combination. The client decides how to apply these behaviors based on the attributes of the available network interfaces.
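A toy model can make the many-to-many association and its failover behavior concrete. This is an illustrative data structure only, not the actual Windows implementation, and the channel names are invented:

```python
class Session:
    """One SMB 2.2 session spread over several channels (connections)."""

    def __init__(self):
        self.channels = []  # each entry: {"id": ..., "alive": bool}
        self._rr = 0        # round-robin cursor for request distribution

    def add_channel(self, conn_id):
        self.channels.append({"id": conn_id, "alive": True})

    def fail_channel(self, conn_id):
        for ch in self.channels:
            if ch["id"] == conn_id:
                ch["alive"] = False

    def next_channel(self):
        """Pick the next live channel for a request; the session survives
        as long as at least one channel remains."""
        live = [ch for ch in self.channels if ch["alive"]]
        if not live:
            raise ConnectionError("session lost: no live channels")
        ch = live[self._rr % len(live)]
        self._rr += 1
        return ch["id"]

s = Session()
s.add_channel("tcp-10GbE-1")
s.add_channel("tcp-10GbE-2")
first, second = s.next_channel(), s.next_channel()  # requests alternate
s.fail_channel("tcp-10GbE-1")
survivor = s.next_channel()  # requests continue on the remaining channel
```

Note that the session object outlives any individual connection, which is the property that makes both behaviors above possible.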
The SMB 2.2 client is responsible for the policy decision of selecting the network interfaces and the number of connections. The SMB 2.2 server only needs to identify available network interfaces to the client, along with their associated attributes. Upon initial contact with the server, the client uses a new SMB 2.2 operation to enumerate the server's network interfaces and their attributes. If multiple interfaces exist at the client or server (or both) and the network interfaces represent a valid network path between them, the interfaces are placed in an available list. From the available list, the client groups network interfaces of similar kind (such as 1 GbE, 10 GbE, and RDMA). From these groupings, the client selects which interfaces are to be actively used. It does this by ordering the group types and then choosing from the highest-priority group that contains available interfaces. The priority ranking of interfaces is as follows:

1. RDMA-capable network interfaces, such as InfiniBand, iWARP, or RoCE.
2. RSS-capable interfaces. The RSS capability allows the client to identify conditions where connection scaling may improve throughput or responsiveness.
3. LBFO or aggregate interfaces that represent the collection of two or more physical interfaces.
4. Standard interfaces and Hyper-V virtual interfaces. Note: The interface types at this and higher priorities are considered capable of multi-channel operations.
5. Wireless interfaces, which are not considered capable of multi-channel operations.

As mentioned earlier, the SMB client groups interfaces into like kinds. For example, InfiniBand-based interfaces will be placed into the same grouping and will be used together. If the SMB client and server each have access to two InfiniBand interfaces, the client will use both interfaces for SMB 2.2.
If the client and server each have a single 10 GbE interface that is RSS-capable, the client will create multiple TCP connections for the single interface. The client could also have a mismatch in interface speed. For example, the client could have access to four 1 GbE interfaces, while the server has access to a single 10 GbE interface. In this case, the client could create a TCP connection for each of the 1 GbE interfaces. This achieves better performance because the server's overall network capability exceeds that of the client.

The interface enumeration occurs dynamically. If an interface is added or removed after the client has begun interaction with the server, the change in interface availability will be used dynamically by the client. For example, if a user starts an SMB 2.2 file copy over a wireless interface and subsequently plugs in or enables a 1 GbE interface, the SMB 2.2 client will move its requests to the newly available interface because it has a higher priority in the ranking described earlier.
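The enumeration and selection steps described above amount to: group the candidate interfaces by kind, then take the highest-priority non-empty group. A minimal sketch, with the kind labels and interface names chosen here purely for illustration:

```python
from collections import defaultdict

# Highest priority first, matching the ranking in the text.
PRIORITY = ["rdma", "rss", "lbfo", "standard", "wireless"]

def select_interfaces(available):
    """available: list of (interface_name, kind) pairs that form valid
    network paths between client and server. Returns the kind and the
    members of the group the client would actively use."""
    groups = defaultdict(list)
    for name, kind in available:
        groups[kind].append(name)
    for kind in PRIORITY:
        if groups[kind]:
            return kind, sorted(groups[kind])
    return None, []

# A client that sees a wireless link, an RSS-capable 10 GbE link, and two
# InfiniBand links picks the RDMA group and uses both of its members:
kind, chosen = select_interfaces(
    [("wlan0", "wireless"), ("eth10g", "rss"), ("ib0", "rdma"), ("ib1", "rdma")]
)
```

Re-running the selection whenever an interface appears or disappears reproduces the dynamic behavior described above, such as a file copy migrating from wireless to a newly enabled 1 GbE link.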
Multi-Channel Performance (Preliminary)

All data presented in this paper is based on the SMB 2.2 implementation in pre-release versions of Windows Developer Preview and Windows Server Developer Preview. By definition, these results will change before the release of Windows Server Developer Preview.

In the following sections, the performance of the SMB 2.2 Multi-Channel feature is presented. To allow for comparison, the same hardware configuration was used for the SMB 2.2 client and server. Details of their configuration are provided in the following sections. If you use similar hardware for your evaluations, a comparison of local storage access and remote access with SMB 2.2 can be made. Both client and server are installed with Windows Server Developer Preview. Measurements are taken by using the Iometer tool. The configuration of Iometer for the data collection is also described in the following sections.

Test Environment

This section discusses the various configurations that were used in the test environment.

Hardware Configuration

This section describes the hardware configuration used to collect the reported data. As mentioned earlier, the SMB 2.2 client and SMB 2.2 server have the same hardware configuration.

Table 1. Hardware configuration

CPU type (client and server): 2 sockets with 6 cores each at 2.66 GHz (12 cores total)
Memory (client and server): 48 GB
Network (client and server): 2 network interface adapter cards, each with two 10 GbE interfaces (4 x 10 GbE links)
Storage adapter (server only): 2 RAID host bus adapters with 6 Gbps SAS connectivity

All network and storage adapters were installed in PCIe 2.0 multi-lane x8 slots. As mentioned earlier, all network connectivity in the test bed was 10 GbE.
Each system had two network interface adapters, and each adapter had two network interfaces, for a total of 4 x 10 GbE network interfaces. The 10 GbE network switch was configured with a single VLAN. Both client and server had 48 GB of memory. The server had two storage host bus adapters (HBAs). Each HBA was attached to a single JBOD enclosure, for a total of two enclosures. Each JBOD enclosure had two SAS expanders. Each HBA was dual-attached to a single enclosure with 4-lane, 6 Gbit SAS cables, for a total of 8 lanes of 6 Gbit SAS. Each enclosure housed 12 SSDs.
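The cabling described above can be turned into a rough link budget. 6 Gbps SAS uses 8b/10b encoding, so each lane carries about 600 MB/sec of payload; the arithmetic below is an approximation that ignores protocol framing overhead:

```python
def sas_payload_mb_per_sec(lanes, gbps_per_lane=6):
    """Usable bandwidth of a SAS connection: line rate minus the 20%
    consumed by 8b/10b encoding (10 bits on the wire per data byte).
    Result is in MB/sec, with 1 MB = 10^6 bytes."""
    return lanes * gbps_per_lane * 1e9 / 10 / 1e6

per_hba = sas_payload_mb_per_sec(8)  # 8 lanes per HBA: 4800 MB/sec
both_hbas = 2 * per_hba              # 9600 MB/sec of raw SAS bandwidth
```

On this budget the SAS links are not the bottleneck in the test bed; as the read results later in the paper show, each HBA itself tops out near 3000 MB/sec.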
Two LUNs were presented to the server (one LUN from each HBA/JBOD). The SSDs were configured into a single RAID 0 virtual disk via the RAID HBA, with a stripe size of 64 KB. The RAID HBA was configured so that its cache was not used (no read-ahead, and writes were write-through). Each LUN had a formatted capacity of 1.7 terabytes. The following figure shows the configuration of the test hardware.

Figure 3: Test Hardware Configuration

It is recognized that file server configurations typically will not be constructed of SSDs alone. However, because the focus of this study is on the characteristics of the SMB 2.2 client and server, the use of SSDs allows for a focus on all components except the underlying HDD storage devices.

Iometer Configuration

Iometer 1.1 was used for workload generation. Iometer is configured to use a set of target objects/files and workload specifications (such as I/O size, queue depth, sequential or random access, and alignment). For the data presented, Iometer iobw.tst test files were sized at 1 terabyte. Table 2 contains the worker and queue depth settings used for the various I/O sizes in the Iometer runs. These configuration parameters were used throughout the data collection.

Table 2: Iometer Configuration

I/O Size in Bytes | Number of Workers | Queue Depth | Total Queued
These values were chosen to balance the need for a longer queue depth, to obtain the best throughput and I/O requests handled per second, against keeping latency reasonably low. To determine these values for queue depth and worker threads, a set of experiments was run in which each iteration varied the thread count and queue depth. The best values from these experiments are shown in Table 2.

Non-cached I/O

The Iometer benchmark is written so that all of its I/O requests are performed non-cached, or without buffering. Therefore, the throughput results included in this paper represent I/O requests that travel from Iometer (as the client application) to and from the storage medium (the SSDs in this test configuration). This applies to both the local access measured at the server and the over-the-wire access performed by the SMB 2.2 client.

Server Baseline

The following data represents the capability of the file server's storage system as accessed locally. The same Iometer configuration values were used for all results presented in this paper. This consistency in hardware and benchmark configuration allows for direct comparison of local and remote performance.

Server Read Performance

The data in the following figure represents the read performance capabilities of the server when the data is accessed locally.

Figure 4: Server/Local Read Throughput
The results in Figure 4 represent the maximum capabilities of the HBAs. The maximum throughput capability of these HBAs is approximately 3000 MB/sec, so the total for the larger I/O sizes is approximately 6000 MB/sec. For the smaller I/O sizes, the maximum operations-per-second rate of these HBAs has also been reached. Additional performance data is included later in this paper.

Server Write Performance

The data in the following figure represents the server's limited capability to write data (as compared to the read results shown in Figure 4). The SSDs are the limiting factor in these results. Because of this relatively low write capability of the server, additional write results are not included in this paper.

Figure 5: Server/Local Write Throughput

SMB 2.2 Client Performance

With the server baseline established, the data in the following figure represents the full capabilities of the client. Using all four of the 10 GbE network interfaces, the SMB 2.2 Multi-Channel feature creates multiple TCP connections across those physical interfaces, and multiple TCP connections for each physical interface. The SMB 2.2 client then sends read requests across all of the available connections, allowing for full utilization of all of the available resources.
Figure 6: Client (4 x 10 GbE) Read Throughput

With the larger I/O sizes, the maximum throughput for the SMB 2.2 client in this configuration was measured at approximately 4300 MB/sec. Later in this paper, throughput results are presented for a single 10 GbE network interface. For those results, the SMB 2.2 client can achieve about 1150 MB/sec. For a unidirectional 10 GbE network interface, the 1150 MB/sec represents the physical limitations of the network: more bytes are being transferred, but they are consumed by network protocol headers. If the SMB 2.2 client were capable of perfect scaling with four 10 GbE network interfaces, it should be able to achieve 4600 MB/sec. As measured at 4300 MB/sec, the SMB 2.2 client achieves approximately 93.5 percent of the maximum possible.

Server vs. SMB 2.2 Client Throughput Comparison

The data in the following figure represents the direct comparison of the server throughput capability against that of the SMB 2.2 client. Up to the I/O size of bytes, the SMB 2.2 client is capable of the same throughput as measured locally at the server. At this point, the client reaches the limit of network utilization.
Figure 7: Server/Local vs. SMB 2.2 Client Throughput

Operations Per Second and CPU Consumption Comparison

The data in the following figure compares the I/O operations executed per second (IOPS) by the server and the SMB 2.2 client. The results show that the client is capable of an operations-per-second rate equivalent to the server's up to the I/O size.
Figure 8: Server/Local vs. SMB 2.2 Client (IOPS with CPU Utilization Percentage)

The second axis of this graph represents the privileged CPU utilization for the client and server. The privileged CPU measurement is a better approximation of the cost of servicing the Iometer workload, because it excludes the application's own use of CPU during the measurement period. Using this information, it is easy to observe that, while the SMB 2.2 client is capable of the same operations per second, it uses more CPU than the server for smaller I/O sizes.

SMB 2.2 Client Network Interface Scaling

As the previous data shows, the Multi-Channel-enabled SMB 2.2 client is very effective at utilizing available network interfaces. The data in the following figure demonstrates the near-linear scaling capability of the SMB 2.2 client when network interfaces are added. The results represent four Iometer runs, each with an increasing number of network interfaces enabled.
Figure 9: SMB 2.2 Client Interface Scaling Throughput

In Figure 9, the maximum throughput of the 1 x 10 GbE configuration is approximately 1150 MB/sec. The maximum throughput of the 2 x 10 GbE configuration is approximately 2330 MB/sec. The 3 x 10 GbE configuration has a maximum throughput of approximately 3320 MB/sec. Finally, the 4 x 10 GbE configuration demonstrates a maximum throughput of 4300 MB/sec. As mentioned previously, the four-interface result is about 6.5 percent lower than the 4600 MB/sec that perfect scaling would yield.

The data in the following figure represents operations executed per second. This data is derived from the same Iometer executions that were used for Figure 9. Compared to the single 10 GbE data, two network interfaces provide better scaling for small I/O sizes. The case of two interfaces allows the SMB 2.2 client to spread the work over a larger set of CPU cores with the use of RSS.
Figure 10: SMB 2.2 Client Interface Scaling IOPS

Summary

Because this paper focused on SMB 2.2 performance, the manageability aspects of the SMB 2.2 Multi-Channel feature were not fully explored. The SMB 2.2 Multi-Channel client and server dynamically adapt to the available network interfaces. While the main intent is to provide resiliency for failure recovery, the dynamic addition of interfaces is a significant side benefit. Deploying new connectivity to the SMB 2.2 client is easy to achieve. The SMB 2.2 client and server will dynamically adapt to newly added network interfaces and utilize them when they are available. And as the data in this paper demonstrates, the SMB 2.2 client will be able to fully utilize that additional network capability.

As demonstrated, the SMB 2.2 Multi-Channel feature brings significant performance scalability to Windows Developer Preview and Windows Server Developer Preview that has not existed in earlier versions of Windows. The combination of multiple network interfaces and other resiliency improvements in the SMB 2.2 client and server allows for deployment of server applications in a way that has not previously been possible. The SMB 2.2 client will be able to obtain performance capabilities that were previously available only with locally attached storage configurations.
Using Synology SSD Technology to Enhance System Performance Synology Inc. Synology_SSD_Cache_WP_ 20140512 Table of Contents Chapter 1: Enterprise Challenges and SSD Cache as Solution Enterprise Challenges...
Bringing the Public Cloud to Your Data Center Jim Pinkerton Partner Architect Lead 1/20/2015 Microsoft Corporation A Dream Hyper-Scale Cloud efficiency is legendary Reliable, available services using high
Optimizing SQL Server Storage Performance with the PowerEdge R720 Choosing the best storage solution for optimal database performance Luis Acosta Solutions Performance Analysis Group Joe Noyola Advanced
Using Synology SSD Technology to Enhance System Performance Based on DSM 5.2 Table of Contents Chapter 1: Enterprise Challenges and SSD Cache as Solution Enterprise Challenges... 3 SSD Cache as Solution...
Introduction Storage Area Networks dominate today s enterprise data centers. These specialized networks use fibre channel switches and Host Bus Adapters (HBAs) to connect to storage arrays. With software,
8Gb Fibre Channel Adapter of Choice in Microsoft Hyper-V Environments QLogic 8Gb Adapter Outperforms Emulex QLogic Offers Best Performance and Scalability in Hyper-V Environments Key Findings The QLogic
Technical white paper HP Smart Array Controllers and basic RAID performance factors Technology brief Table of contents Abstract 2 Benefits of drive arrays 2 Factors that affect performance 2 HP Smart Array
MS Exchange Server Acceleration Maximizing Users in a Virtualized Environment with Flash-Powered Consolidation Allon Cohen, PhD OCZ Technology Group Introduction Microsoft (MS) Exchange Server is one of
TECHNICAL NOTE VMware Infrastructure 3 SAN Conceptual and Design Basics VMware ESX Server can be used in conjunction with a SAN (storage area network), a specialized high speed network that connects computer
Best Practice of Server Virtualization Using Qsan SAN Storage System F300Q / F400Q / F600Q Series P300Q / P400Q / P500Q / P600Q Series Version 1.0 July 2011 Copyright Copyright@2011, Qsan Technology, Inc.
Virtualizing SQL Server 2008 Using EMC VNX Series and Microsoft Windows Server 2008 R2 Hyper-V Copyright 2011 EMC Corporation. All rights reserved. Published February, 2011 EMC believes the information
WHITE PAPER Solving I/O Bottlenecks to Enable Superior Cloud Efficiency Overview...1 Mellanox I/O Virtualization Features and Benefits...2 Summary...6 Overview We already have 8 or even 16 cores on one
VMware SAN Backup Using VMware vsphere Data Protection Advanced SEPTEMBER 2014 VMware SAN Backup Using VMware vsphere Table of Contents Introduction.... 3 vsphere Architectural Overview... 4 SAN Backup
EMC Unified Storage for Microsoft SQL Server 2008 Enabled by EMC CLARiiON and EMC FAST Cache Reference Copyright 2010 EMC Corporation. All rights reserved. Published October, 2010 EMC believes the information
Transition Guide November 2010 2 Introduction Key points Apple will not be developing a future version of Orders for will be accepted through January 31, 2011 Apple will honor all warranties and extended
I/O Virtualization Using Mellanox InfiniBand And Channel I/O Virtualization (CIOV) Technology Reduce I/O cost and power by 40 50% Reduce I/O real estate needs in blade servers through consolidation Maintain
Reference Architecture Brocade Solution Blueprint Brocade Solution for EMC VSPEX Server Virtualization Microsoft Hyper-V for 50 & 100 Virtual Machines Enabled by Microsoft Hyper-V, Brocade ICX series switch,
Best Practices for Deploying SSDs in a Microsoft SQL Server 2008 OLTP Environment with Dell EqualLogic PS-Series Arrays Database Solutions Engineering By Murali Krishnan.K Dell Product Group October 2009
WHITE PAPER July 2014 Achieving Real-Time Business Solutions Using Graph Database Technology and High Performance Networks Contents Executive Summary...2 Background...3 InfiniteGraph...3 High Performance
A Diablo Technologies Whitepaper Diablo and VMware TM powering SQL Server TM in Virtual SAN TM May 2015 Ricky Trigalo, Director for Virtualization Solutions Architecture, Diablo Technologies Daniel Beveridge,
Performance evaluation sponsored by NetApp, Inc. Introduction Ethernet storage is advancing towards a converged storage network, supporting the traditional NFS, CIFS and iscsi storage protocols and adding
Microsoft SMB 2.2 - Running Over RDMA in Windows Server 8 Tom Talpey, Architect Microsoft March 27, 2012 1 SMB2 Background The primary Windows filesharing protocol Initially shipped in Vista and Server
High Availability (HA) Aidan Finn About Aidan Finn Technical Sales Lead at MicroWarehouse (Dublin) Working in IT since 1996 MVP (Virtual Machine) Experienced with Windows Server/Desktop, System Center,
Cost Efficient VDI XenDesktop 7 on Commodity Hardware 1 Introduction An increasing number of enterprises are looking towards desktop virtualization to help them respond to rising IT costs, security concerns,
White Paper I/O Performance of Cisco UCS M-Series Modular Servers with Cisco UCS M142 Compute Cartridges October 2015 2015 Cisco and/or its affiliates. All rights reserved. This document is Cisco Public.
WHITE PAPER Guide to 50% Faster VMs No Hardware Required WP_v03_20140618 Visit us at Condusiv.com GUIDE TO 50% FASTER VMS NO HARDWARE REQUIRED 2 Executive Summary As much as everyone has bought into the
Linux NIC and iscsi Performance over 4GbE Chelsio T8-CR vs. Intel Fortville XL71 Executive Summary This paper presents NIC and iscsi performance results comparing Chelsio s T8-CR and Intel s latest XL71
Evaluation Report: Database Acceleration with HP 3PAR StoreServ 7450 All-flash Storage Evaluation report prepared under contract with HP Executive Summary Solid state storage is transforming the entire
The Transition to PCI Express* for Client SSDs Amber Huffman Senior Principal Engineer Intel Santa Clara, CA 1 *Other names and brands may be claimed as the property of others. Legal Notices and Disclaimers
White Paper Dell Microsoft Business Intelligence and Data Warehousing Reference Configuration Performance Results Phase III Performance of Microsoft SQL Server 2008 BI and D/W Solutions on Dell PowerEdge
Evaluating Storage Technologies for Virtual Server Environments Russ Fellows June, 2010 Technology Insight Series Evaluator Group Copyright 2010 Evaluator Group, Inc. All rights reserved Executive Summary
Analysis of VDI Storage Performance During Bootstorm Introduction Virtual desktops are gaining popularity as a more cost effective and more easily serviceable solution. The most resource-dependent process
Measuring Interface Latencies for SAS, Fibre Channel and iscsi Dennis Martin Demartek President Santa Clara, CA 1 Demartek Company Overview Industry analysis with on-site test lab Lab includes servers,
Virtual SAN Design and Deployment Guide TECHNICAL MARKETING DOCUMENTATION VERSION 1.3 - November 2014 Copyright 2014 DataCore Software All Rights Reserved Table of Contents INTRODUCTION... 3 1.1 DataCore
Optimizing Large Arrays with StoneFly Storage Concentrators All trademark names are the property of their respective companies. This publication contains opinions of which are subject to change from time
Nutanix Tech Note Configuration Best Practices for Nutanix Storage with VMware vsphere Nutanix Virtual Computing Platform is engineered from the ground up to provide enterprise-grade availability for critical
Evaluation Report: HP Blade Server and HP MSA 16GFC Storage Evaluation Evaluation report prepared under contract with HP Executive Summary The computing industry is experiencing an increasing demand for
What DBAs Should Know About Windows Server 2012 [DBA-208] Victor Isakov Database Architect Trainer SQL Server Solutions November 6-9, Seattle, WA Victor Isakov Victor Isakov is a Database Architect / Trainer
Can High-Performance Interconnects Benefit Memcached and Hadoop? D. K. Panda and Sayantan Sur Network-Based Computing Laboratory Department of Computer Science and Engineering The Ohio State University,
High Availability with Windows Server 2012 Release Candidate Windows Server 2012 Release Candidate (RC) delivers innovative new capabilities that enable you to build dynamic storage and availability solutions
WHITE PAPER Optimizing Virtual Platform Disk Performance Think Faster. Visit us at Condusiv.com Optimizing Virtual Platform Disk Performance 1 The intensified demand for IT network efficiency and lower
D1. Network Load Balancing Ronald van der Pol, Freek Dijkstra, Igor Idziejczak, and Mark Meijerink SARA Computing and Networking Services, Science Park 11, 9 XG Amsterdam, The Netherlands June email@example.com,firstname.lastname@example.org,
1 WWW.FUSIONIO.COM WHITE PAPER WHITE PAPER Executive Summary Fusion iovdi is the first desktop- aware solution to virtual desktop infrastructure. Its software- defined approach uniquely combines the economics
Test Validation Comparison of Hybrid Flash Storage System Performance Author: Russ Fellows March 23, 2015 Enabling you to make the best technology decisions 2015 Evaluator Group, Inc. All rights reserved.
VMware Virtual SAN Hardware Guidance TECHNICAL MARKETING DOCUMENTATION v 1.0 Table of Contents Introduction.... 3 Server Form Factors... 3 Rackmount.... 3 Blade.........................................................................3
Scaling from Datacenter to Client KeunSoo Jo Sr. Manager Memory Product Planning Samsung Semiconductor Audio-Visual Sponsor Outline SSD Market Overview & Trends - Enterprise What brought us to NVMe Technology
Microsoft Windows Server Hyper-V in a Flash Combine Violin s enterprise- class all- flash storage arrays with the ease and capabilities of Windows Storage Server in an integrated solution to achieve higher
Offline Operations Traffic ManagerLarge Memory SKU SQL, SharePoint, BizTalk Images HDInsight Windows Phone Support Per Minute Billing HTML 5/CORS Android Support Custom Mobile API AutoScale BizTalk Services
The Performance Impact of NVMe and NVMe over Fabrics PRESENTATION TITLE GOES HERE Live: November 13, 2014 Presented by experts from Cisco, EMC and Intel Webcast Presenters! J Metz, R&D Engineer for the
White Paper Broadcom Ethernet Network Controller Enhanced Virtualization Functionality Advancements in VMware virtualization technology coupled with the increasing processing capability of hardware platforms
Windows Server 2012 2,500-user pooled VDI deployment guide Microsoft Corporation Published: August 2013 Abstract Microsoft Virtual Desktop Infrastructure (VDI) is a centralized desktop delivery solution
White Paper Intel PRO Network Adapters Network Performance Network Connectivity Express* Ethernet Networking Express*, a new third-generation input/output (I/O) standard, allows enhanced Ethernet network
Performance Report Modular RAID for PRIMERGY Version 1.1 March 2008 Pages 15 Abstract This technical documentation is designed for persons, who deal with the selection of RAID technologies and RAID controllers