Reference Design: Scalable Object Storage with Seagate Kinetic, Supermicro, and SwiftStack
May 2015
Table of Contents

INTRODUCTION
  OpenStack Swift
  SwiftStack
  Seagate Kinetic
SOLUTION COMPONENTS
  Kinetic Drives and Chassis
  Swift PACO Nodes
  Swift-Kinetic Plugin
  Networking
  DHCP Server
  Load Balancer(s)
CONFIGURATION EXAMPLES
  0.5PB Raw
  1.0PB Raw
NETWORK CONFIGURATION
  Client Systems
  Rack 1
  Rack 2
  Rack 3
  Installation
EARLY BENCHMARK RESULTS
  Tests Performed
  Using ssbench
  Simulating Backup, File Sync-and-Share, and Web Applications
  Interpretation of Results: Meeting Real-World Requirements
  Initial Result Data: Writes
  Initial Result Data: Reads
COST: KINETIC VS. CONVENTIONAL DRIVES
CONCLUSION
INTRODUCTION

OpenStack Swift

OpenStack Object Storage (code-named Swift) is an open source engine for object storage, available for use under the Apache 2 open source license through the OpenStack Foundation. Swift is the engine that runs the world's largest storage clouds and is designed to operate on industry-standard x86 server hardware and scale out to store billions of files and hundreds of petabytes without any single point of failure.

SwiftStack

SwiftStack is built on OpenStack Swift and was designed to empower enterprises to harness the power of Swift in their own data centers. SwiftStack enables enterprise users to manage, deploy, scale, upgrade, and monitor single- and multi-site Swift clusters. Additional capabilities for enterprise users include LDAP and Active Directory integration, CIFS/NFS access with the SwiftStack Filesystem Gateway, and 24x7 enterprise support.

The SwiftStack Controller decouples the control, management, and configuration of the Swift cluster nodes from the physical hardware, which makes it an ideal fit with the Kinetic architecture. The SwiftStack Object Storage System can span multiple geographically distributed data centers with a single namespace, providing built-in disaster recovery and flexibility for data storage. Administrators can use the SwiftStack Controller, which provides a unified view across all clusters, to dynamically tune and optimize performance and non-disruptively upgrade the entire storage infrastructure.
Support for managing the Kinetic architecture using the SwiftStack Controller is planned for availability in mid-2015.

Seagate Kinetic

Seagate's Kinetic drives, introduced in late 2014, are the first hard drives specifically designed to address the challenges of hyperscale cloud data growth. Kinetic drives are similar to conventional SAS or SATA drives with two additional features. The first is an additional chip that turns the drives into native key/value stores. The second is native Ethernet connectivity, with two 1-Gbps Ethernet connections per drive. Using the Kinetic storage architecture, any node in the network can communicate with any Kinetic drive in the cluster. The result is a storage model that provides a new level of flexibility in scaling.

For OpenStack Swift, this means that all Swift services, such as the proxy, account, container, and object (PACO) services along with the replicators and auditors, can run in combined PACO nodes, while the actual drives storing data can be scaled out independently. In other words, administrators can now build and operate large object storage clusters with less server hardware and lower total cost of ownership while still achieving performance comparable to conventional architectures.

Benefits of the Kinetic-based architecture:

- Fully fault-tolerant storage system: By decoupling compute resources from storage resources, failure domains are smaller and storage responsibilities are more distributed.
- Improved compute utilization: By creating a pool of compute nodes, capacity can be scaled to manage storage resources more efficiently.
- Lower TCO: Total cost can be reduced in terms of both lower capex and reduced operational complexity, power, and cooling costs.

This guide provides an example configuration built with Seagate Kinetic drives and SwiftStack software, running on servers available from Supermicro. Hardware details, configuration steps, and benchmark results are included.
SOLUTION COMPONENTS

Kinetic Drives and Chassis

On the surface, Kinetic drives look identical to conventional 3.5-inch drives, with what looks like a SAS connector in the standard location. But under the hood, Kinetic drives are fundamentally different. Instead of a regular SAS connection, Kinetic drives present two 1-Gbps Ethernet ports over SGMII (for redundancy and performance) on a connector that is mechanically identical to the standard SAS drive connector. Instead of SCSI, Kinetic drives expose a key/value interface. Storage services such as Swift speak directly to the Kinetic key/value interface via Ethernet.

Kinetic drives are deployed in integrated JBOD ("just a bunch of disks") chassis that include the drives, power supplies, and network fabric. Unlike traditional JBODs, Kinetic JBODs provide redundant Ethernet fabrics that connect directly to the disk drives, replacing the SAS expanders commonly found in traditional JBODs with dual Ethernet switches. The reference architecture in this paper leverages the 1U 12-drive Supermicro SSG-K1048-RT server.

Swift PACO Nodes

At a high level, Swift consists of the following primary services:

- Proxy services: Communicate with external clients and coordinate read/write requests
- Account services: Provide metadata for individual accounts and a list of the containers within an account
- Container services: Provide metadata for individual containers and a list of the objects within a container
- Object services: Provide a blob storage service that can store, retrieve, and delete objects on the node's drives
- Consistency services: Find and correct errors caused by data corruption or hardware failures

For a Swift cluster with Kinetic drives, the Swift proxy, account, container, and object services all run on each PACO node. PACO nodes require a fair amount of CPU and RAM and are network I/O intensive. Typically, PACO nodes are 1U systems with a minimum of 48 GB of RAM. Because the PACO nodes field each incoming API request and also access the Kinetic disk shelves on the back end, it is best practice to provision them with two high-throughput (10GbE) interfaces: one for front-end incoming requests and the other for back-end access to the Kinetic JBOD shelves to store and retrieve data.
This approach enables operators to take advantage of all of Swift's current functionality without the storage node server hardware that would otherwise be deployed in non-Kinetic deployments. And with Swift's storage policies, an operator can add Kinetic drives to an existing Swift cluster already running storage nodes with standard hard drives.

Recommended specifications for PACO nodes using Kinetic drives:

- CPU: 64-bit x86 CPU (Intel/AMD), quad-core or greater, running at least 2GHz
- RAM: 48 GB of RAM
- Network: 2 x 10 GbE

Swift-Kinetic Plugin

The Swift DiskFile abstraction defines how Swift communicates with a storage volume that is not a conventional hard disk drive. This abstraction enables Swift to speak to storage media other than traditional file systems via definitions specified in a DiskFile plugin. The Swift-Kinetic plugin (https://github.com/swiftstack/kinetic-swift) is a DiskFile plugin that enables Swift PACO servers to communicate with Kinetic drives. Through the plugin, the Swift object replicator, auditors, and object services communicate directly with the Kinetic API. Storage policies in Swift also make it feasible for operators to mix and match Kinetic drives and nodes with conventional hardware in the same cluster.

Networking

A typical Swift deployment has an outward-facing network and an internal cluster-facing network. When designing the network capacity for a Swift deployment using the default three-replica storage policy, keep in mind that writes fan out in triplicate on the storage network. Since there are three copies of each object, an incoming write is sent to three Kinetic drives. Therefore, network capacity for writes needs to be considered in proportion to the overall workload (a rough sizing sketch follows below).

Starting from the client's perspective, the client-facing IP is the IP address (or DNS record) that clients connect to. Typically, that would be a WAN IP on a load balancer or on a firewall. The "outward-facing" IP(s) are the IP(s) on the proxy node(s). It is important to note that outward-facing is not the same as a public or WAN IP; these outward-facing IPs are simply the IP addresses of the proxy nodes facing out towards the load balancer. Thus, the outward-facing IPs are the IPs that should be included in the load balancing pool.
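Because writes fan out in triplicate, the cluster-facing network must carry roughly three times the client write bandwidth plus some headroom for replication and consistency traffic. The following is a minimal sizing sketch only; the helper name and the overhead factor are assumptions for illustration and are not part of Swift or SwiftStack.

# Rough cluster-facing bandwidth estimate for a replicated Swift write workload.
# Illustrative only: the function name and overhead factor are assumptions.

def cluster_facing_write_bandwidth(client_write_mbps: float,
                                   replicas: int = 3,
                                   overhead_factor: float = 1.1) -> float:
    """Estimate MB/s the cluster-facing network must carry for incoming writes.

    Each client write is sent to `replicas` Kinetic drives; `overhead_factor`
    leaves headroom for replication, auditing, and other consistency traffic.
    """
    return client_write_mbps * replicas * overhead_factor

if __name__ == "__main__":
    # Example: clients sustain 1,000 MB/s of writes (the ~1 GB/s target cited later).
    needed = cluster_facing_write_bandwidth(1000)
    print(f"Cluster-facing write traffic: ~{needed:.0f} MB/s "
          f"(~{needed * 8 / 1000:.1f} Gbps)")

For a sustained 1 GB/s of client writes, this works out to roughly 26 Gbps of cluster-facing traffic, which is why the back-end interfaces deserve as much attention as the client-facing ones.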
The PACO nodes communicate with the Kinetic drives through the "cluster-facing" IP(s), and these IP(s) are also used by the PACO nodes to communicate with each other. In summary, while all networking in a Swift cluster is done via Layer 3, a Swift-Kinetic cluster will have these network segments:

- Outward-facing network: Used for client traffic (i.e., API access). If an external load balancer is used, it will exist in the outward-facing network.
- Cluster-facing network: Used for Swift PACO node communication with the Kinetic JBOD shelves and for communication between the Swift PACO nodes.
- Management network: Used for IPMI, iLO, etc., for hardware management.

DHCP Server

Swift needs to be able to address each Kinetic drive by its IP address, which requires the Kinetic drives to be assigned static IPs. However, it is not possible to set static IPs on Kinetic drives themselves, so a DHCP server is required to gather the MAC addresses of the Kinetic network interfaces and then statically map them to IP addresses on the cluster-facing (Kinetic) network. The statically mapped IP addresses of the Kinetic drives are used to build the Swift rings that enable Swift to deterministically place data in the cluster.

Load Balancer(s)

For a Swift-Kinetic cluster, a dedicated load balancing solution can use any method from simple round-robin DNS to Layer-7 load balancing. Open source load balancers such as HAProxy or commercial solutions from companies like F5 and A10 can be used. For Swift, load balancing works the same whether using Kinetic drives or conventional drives. Consequently, you will need to load-balance across your Swift PACO nodes to divide the client request load between your proxy servers. The outward-facing IPs of the PACO nodes need to be included in the load balancing pool, and the load balancer must be assigned a virtual IP address (VIP).

Load balancers should be set up to run health checks against the proxy nodes in their pool so that a proxy is automatically excluded if it becomes unresponsive. To remove a failed proxy node from the load balancing pool, configure the load balancer to check the proxy node's health check URL. A healthy Swift proxy node should respond on http://<outward-ip-address>/healthcheck, such as http://192.168.11.81/healthcheck.

CONFIGURATION EXAMPLES

The following two examples were designed for approximately one half petabyte and one petabyte of raw storage capacity, respectively; a quick capacity check follows the two tables below. The diagrams depict the rack space used when deployed in three standard 42-RU datacenter racks, which would be common in a multi-region deployment or even in a single-datacenter deployment when it is preferred to minimize physical failure domains and
provide room for expansion.

0.5PB Raw

Component | Qty. | Description
Data Center Rack | 3 |
10Gb Top-of-Rack Switch | 3 | One per rack
PACO-to-Kinetic Switch | 3 | One per rack
PACO Node | 6 | 2x E5-2630 v3 CPUs, 64GB DRAM, 50GB SATADOM, 2x 120GB SSD, 1x dual-port 10Gb NIC
Kinetic Node | 12 | 12x 4TB Kinetic drives
1.0PB Raw

Component | Qty. | Description
Data Center Rack | 3 |
10Gb Top-of-Rack Switch | 3 | One per rack
PACO-to-Kinetic Switch | 3 | One per rack
PACO Node | 9 | 2x E5-2630 v3 CPUs, 64GB DRAM, 50GB SATADOM, 2x 120GB SSD, 1x dual-port 10Gb NIC
Kinetic Node | 21 | 12x 4TB Kinetic drives
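As a quick sanity check on the two configurations above, raw capacity is simply shelves x drives x drive size, and usable capacity under the default three-replica policy is roughly one third of raw. The helper below is a hypothetical convenience, not SwiftStack tooling; the node and drive counts come from the tables above.

# Sanity-check raw and usable capacity for the example configurations.
# Hypothetical helper; inputs come from the tables above.

def capacity_tb(kinetic_nodes: int, drives_per_node: int = 12,
                drive_tb: int = 4, replicas: int = 3):
    raw = kinetic_nodes * drives_per_node * drive_tb
    usable = raw / replicas  # default three-replica storage policy
    return raw, usable

for label, nodes in [("0.5PB example", 12), ("1.0PB example", 21)]:
    raw, usable = capacity_tb(nodes)
    print(f"{label}: {raw} TB raw, ~{usable:.0f} TB usable with 3 replicas")

This gives 576 TB raw (about 0.5PB) for the 12-shelf example and 1,008 TB raw (about 1PB) for the 21-shelf example.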
NETWORK CONFIGURATION

Using the example subnets in the diagram above, the components might be given IP addresses like these; note that physical management interfaces (e.g., IPMI) are necessary but not listed here:

Client Systems

Description | Hostname | IP Address
Benchmark Node 1 | BM01 | 172.27.23.21
Benchmark Node 2 | BM02 | 172.27.23.22
Benchmark Node 3 | BM03 | 172.27.23.23
Benchmark Node 4 | BM04 | 172.27.23.24
Rack 1

Description | Hostname | Outward-Facing | Cluster-Facing | Drives
Top-of-Rack Switch | ToR-1 | VLAN 1: 172.27.23.0/24 | VLAN 10: 10.0.10.0/21 | VLAN 10: 10.0.11.0/21
PACO Node 1 | PACO-1-1 | 172.27.23.201 | 10.0.8.11/21 | N/A
PACO Node 2 | PACO-1-2 | 172.27.23.202 | 10.0.8.12/21 | N/A
Kinetic Node 1 | KIN-1-1 | N/A | N/A | NIC 0: 10.0.10.11-22; NIC 1: 10.0.11.11-22
Kinetic Node 2 | KIN-1-2 | N/A | N/A | NIC 0: 10.0.10.23-34; NIC 1: 10.0.11.23-34
Kinetic Node 3 | KIN-1-3 | N/A | N/A | NIC 0: 10.0.10.35-46; NIC 1: 10.0.11.35-46
Kinetic Node 4 | KIN-1-4 | N/A | N/A | NIC 0: 10.0.10.47-58; NIC 1: 10.0.11.47-58
Kinetic Node 5 | KIN-1-5 | N/A | N/A | NIC 0: 10.0.10.59-70; NIC 1: 10.0.11.59-70
Kinetic Node 6 | KIN-1-6 | N/A | N/A | NIC 0: 10.0.10.71-82; NIC 1: 10.0.11.71-82
Kinetic Node 7 | KIN-1-7 | N/A | N/A | NIC 0: 10.0.10.83-94; NIC 1: 10.0.11.83-94

Rack 2

Description | Hostname | Outward-Facing | Cluster-Facing | Drives
Top-of-Rack Switch | ToR-2 | VLAN 1: 172.27.23.0/24 | VLAN 10: 10.0.12.0/21 | VLAN 10: 10.0.13.0/21
PACO Node 1 | PACO-2-1 | 172.27.23.211 | 10.0.8.21/21 | N/A
PACO Node 2 | PACO-2-2 | 172.27.23.212 | 10.0.8.22/21 | N/A
Kinetic Node 1 | KIN-2-1 | N/A | N/A | NIC 0: 10.0.12.11-22; NIC 1: 10.0.13.11-22
Kinetic Node 2 | KIN-2-2 | N/A | N/A | NIC 0: 10.0.12.23-34; NIC 1: 10.0.13.23-34
Kinetic Node 3 | KIN-2-3 | N/A | N/A | NIC 0: 10.0.12.35-46; NIC 1: 10.0.13.35-46
Kinetic Node 4 | KIN-2-4 | N/A | N/A | NIC 0: 10.0.12.47-58; NIC 1: 10.0.13.47-58
Kinetic Node 5 | KIN-2-5 | N/A | N/A | NIC 0: 10.0.12.59-70; NIC 1: 10.0.13.59-70
Kinetic Node 6 | KIN-2-6 | N/A | N/A | NIC 0: 10.0.12.71-82; NIC 1: 10.0.13.71-82
Kinetic Node 7 | KIN-2-7 | N/A | N/A | NIC 0: 10.0.12.83-94; NIC 1: 10.0.13.83-94
Rack 3

Description | Hostname | Outward-Facing | Cluster-Facing | Drives
Top-of-Rack Switch | ToR-3 | VLAN 1: 172.27.23.0/24 | VLAN 10: 10.0.14.0/21 | VLAN 10: 10.0.15.0/21
PACO Node 1 | PACO-3-1 | 172.27.23.221 | 10.0.8.31/21 | N/A
PACO Node 2 | PACO-3-2 | 172.27.23.222 | 10.0.8.32/21 | N/A
Kinetic Node 1 | KIN-3-1 | N/A | N/A | NIC 0: 10.0.14.11-22; NIC 1: 10.0.15.11-22
Kinetic Node 2 | KIN-3-2 | N/A | N/A | NIC 0: 10.0.14.23-34; NIC 1: 10.0.15.23-34
Kinetic Node 3 | KIN-3-3 | N/A | N/A | NIC 0: 10.0.14.35-46; NIC 1: 10.0.15.35-46
Kinetic Node 4 | KIN-3-4 | N/A | N/A | NIC 0: 10.0.14.47-58; NIC 1: 10.0.15.47-58
Kinetic Node 5 | KIN-3-5 | N/A | N/A | NIC 0: 10.0.14.59-70; NIC 1: 10.0.15.59-70
Kinetic Node 6 | KIN-3-6 | N/A | N/A | NIC 0: 10.0.14.71-82; NIC 1: 10.0.15.71-82
Kinetic Node 7 | KIN-3-7 | N/A | N/A | NIC 0: 10.0.14.83-94; NIC 1: 10.0.15.83-94

Installation

The following high-level steps are involved in deploying these reference architectures:

1. Install a SwiftStack Controller. This can be done by leveraging the hosted SwiftStack Management Service or by deploying an on-premises instance of the controller. To sign up for an account on the SwiftStack Management Service, visit https://swiftstack.com/try-it-now/; for an on-premises instance, contact sales@swiftstack.com.
2. Install hardware, including PACO nodes, Kinetic JBODs, network switches, and a load balancer (if needed).
3. Ensure Kinetic drives are upgraded to the latest available firmware. For details, contact kinetic.support@seagate.com.
4. Assign static IP addresses to Kinetic drives. Kinetic drives leverage DHCP by default but need to be assigned static IP addresses for use in a SwiftStack deployment (see the sketch following these steps). For assistance with this step, contact support@swiftstack.com.
5. Configure SwiftStack nodes. As with a cluster using conventional drives, the SwiftStack management software greatly simplifies the process of preparing and allocating servers and drives for use. Details can be found in SwiftStack's Quick-Start Guide: https://swiftstack.com/docs/install/index.html
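Step 4 above amounts to creating a static DHCP reservation for every Kinetic drive interface on the cluster-facing network. One common approach with an ISC dhcpd server is to generate one host block per drive NIC from an inventory of MAC addresses. The sketch below illustrates that approach only; the MAC addresses, hostnames, addresses, and output path are placeholders, and the exact DHCP configuration will vary by site (SwiftStack support can assist, as noted above).

# Generate ISC dhcpd "host" reservations for Kinetic drive NICs.
# Placeholder inventory: replace with the MACs gathered from your drives.

drive_nics = [
    # (hostname, mac, fixed ip) -- placeholder values for illustration
    ("kin-1-1-d01-nic0", "00:00:00:00:00:01", "10.0.10.11"),
    ("kin-1-1-d01-nic1", "00:00:00:00:00:02", "10.0.11.11"),
]

def dhcpd_host_block(hostname: str, mac: str, ip: str) -> str:
    return (f"host {hostname} {{\n"
            f"  hardware ethernet {mac};\n"
            f"  fixed-address {ip};\n"
            f"}}\n")

with open("kinetic-reservations.conf", "w") as f:
    for hostname, mac, ip in drive_nics:
        f.write(dhcpd_host_block(hostname, mac, ip))

The resulting reservations give each drive a stable IP, which is what the Swift ring builder needs in order to place data deterministically.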
EARLY BENCHMARK RESULTS

It is often helpful to understand the capabilities of a reference architecture when placed under simulated client load. Supermicro, Seagate, and SwiftStack performed initial benchmark testing to simulate a relatively common backup and archive use case. For the purposes of this benchmark, four client servers were used to execute the ssbench tests described below against the 1PB reference configuration described in this document.

Tests Performed

Using ssbench

SwiftStack has enhanced ssbench, a quick and effective tool for benchmarking Swift clusters, which is freely available at https://github.com/swiftstack/ssbench. Prior to each test execution, a predetermined number of containers and objects were pre-populated in the cluster; the tests then performed a maximum number of PUTs to the cluster or GETs from the cluster across a range of object sizes with varying levels of concurrent client connections, each for a 180-second period. Two examples of the scenario files used to determine how a benchmark test will execute are provided here.

Example: Testing reads of 1kB objects

{
  "name": "1KB-GET test scenario",
  "sizes": [{
    "name": "1KB-GET",
    "size_min": 1024,
    "size_max": 1024
  }],
  "initial_files": { "1KB-GET": 1000 },
  "run_seconds": 180,
  "crud_profile": [0, 1, 0, 0],
  "user_count": 100,
  "container_base": "ssbench-1kb",
  "container_count": 100,
  "container_concurrency": 10
}

Example: Testing writes of 1kB objects

{
  "name": "1KB-PUT test scenario",
  "sizes": [{
    "name": "1KB-PUT",
    "size_min": 1024,
    "size_max": 1024
  }],
  "initial_files": { "1KB-PUT": 100 },
  "run_seconds": 180,
  "crud_profile": [1, 0, 0, 0],
  "user_count": 100,
  "container_base": "ssbench-1kb",
  "container_count": 100,
  "container_concurrency": 10
}
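Because the benchmark sweeps many object sizes, it can be convenient to generate scenario files like the ones above programmatically rather than editing them by hand. The following sketch writes one PUT scenario per object size using the same schema shown above; it is a convenience script and an assumption of this paper's authors' workflow, not part of ssbench itself.

# Generate ssbench PUT scenario files across a range of object sizes,
# following the scenario schema shown above. Convenience script only.
import json

SIZES = {"1KB": 1024, "96KB": 96 * 1024, "256KB": 256 * 1024,
         "1MB": 1024 ** 2, "10MB": 10 * 1024 ** 2}

for label, size in SIZES.items():
    name = f"{label}-PUT"
    scenario = {
        "name": f"{name} test scenario",
        "sizes": [{"name": name, "size_min": size, "size_max": size}],
        "initial_files": {name: 100},
        "run_seconds": 180,
        "crud_profile": [1, 0, 0, 0],   # 100% PUTs
        "user_count": 100,
        "container_base": f"ssbench-{label.lower()}",
        "container_count": 100,
        "container_concurrency": 10,
    }
    with open(f"{name}.scenario", "w") as f:
        json.dump(scenario, f, indent=2)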
Simulating Backup, File Sync-and-Share, and Web Applications

Specifically, for this benchmark, PUTs and GETs of objects with sizes ranging from 1KB to 10MB were made using between 100 and 4,000 concurrent clients. Combinations of tens to hundreds of clients with large object sizes would be typical of many backup and archive use cases, while thousands of clients with smaller object sizes would be more typical of web applications or file sync-and-share use cases. Results captured include the total number of objects written or read, latency, throughput, and more.

Interpretation of Results: Meeting Real-World Requirements

It is often best to interpret performance numbers in the context of real-world requirements. As an example, this is a joint Seagate/Supermicro/SwiftStack customer's stated requirement: "One of our big customers, who is into backup and archive, has a target capacity of ~1PB. At 1PB, they expect a daily change rate (8-hour window) of ~1.5-3%, which means they need an object store that can handle between 1.8TB and 3.6TB per hour of sustained writes, i.e., around 1GB/s."

As seen in the charts below, in the first benchmark runs, client systems were able to write nearly 4,000 objects per second for over 1.5 GBytes/sec of write throughput and read nearly 18,000 objects per second for over 3.5 GBytes/sec of read throughput to and from the cluster, which is more than sufficient for this real-world customer. When analyzing performance, it is always important to understand which factor is currently the performance bottleneck; in this case, analysis showed that the single 10 Gb network links between PACO nodes were saturated during these tests. These performance numbers are expected to improve by bonding multiple physical network interfaces to expand bandwidth at this point in the architecture, and this white paper will be updated with results when that testing is complete, in conjunction with support for Kinetic in the SwiftStack Controller.
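The customer requirement quoted above reduces to simple arithmetic: capacity multiplied by the daily change rate, spread over an 8-hour backup window. A small worked check follows; the helper is hypothetical and exists only to mirror that arithmetic.

# Convert a daily change rate on ~1 PB into required sustained write throughput.
# Mirrors the arithmetic in the requirement quoted above.

def required_write_gbps(capacity_tb: float, daily_change: float,
                        window_hours: float = 8.0) -> float:
    """Return sustained GB/s needed to ingest the daily change within the window."""
    changed_tb = capacity_tb * daily_change
    return changed_tb * 1000 / (window_hours * 3600)  # TB -> GB, hours -> seconds

for rate in (0.015, 0.03):
    print(f"{rate:.1%} of 1 PB in 8 h -> {required_write_gbps(1000, rate):.2f} GB/s")

This yields roughly 0.5 to 1.0 GB/s of sustained writes, which the measured write throughput of over 1.5 GB/s comfortably exceeds.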
Initial Result Data: Writes

Object Size | Concurrent Clients | Total Requests Per Second | Total Throughput from Client (MB/s)
1KB | 2,000 | 3,721 | 3.6
96KB | 2,000 | 3,391 | 318
256KB | 2,000 | 3,078 | 769
1MB | 2,000 | 1,671 | 1,671
10MB | 2,000 | 156 | 1,563
100MB | 2,000 | 16 | 1,597
512MB | 2,000 | 3 | 1,545

Initial Result Data: Reads

Object Size | Concurrent Clients | Total Requests Per Second | Total Throughput to Client (MB/s)
1KB | 2,000 | 18,623 | 18.2
96KB | 2,000 | 16,642 | 1,560
256KB | 2,000 | 13,468 | 3,367
1MB | 2,000 | 3,598 | 3,599
10MB | 2,000 | 354 | 3,542
100MB | 2,000 | N/A* | N/A*
512MB | 2,000 | N/A* | N/A*

* Benchmark clients ran out of memory and were unable to perform highly concurrent large-object read tests.

COST: KINETIC VS. CONVENTIONAL DRIVES

To compare the costs of a Swift-on-Kinetic cluster with 1PB of raw capacity to a Swift cluster of the same raw capacity using conventional storage hardware, such as SAS or SATA drives, a model was developed to estimate the annual hardware, power, and cooling costs for two comparable configurations. The hardware assumptions in this hardware TCO model were as follows:
Configuration with conventional drives:

- (Qty. 3) Data Center Rack
- (Qty. 3) 10Gb Top-of-Rack Switch
- (Qty. 3) 10Gb Proxy-to-Object Switch
- (Qty. 21) Swift Account/Container/Object Nodes: 2U 12-drive chassis, E5-2630 v3 CPU, (Qty. 12) Enterprise 4TB SATA Drives, Dual 10GbE NIC, (Qty. 4) 8GB DRAM
- (Qty. 3) Swift Proxy Nodes: 1U chassis, 64GB SATA DOM, Dual-port 10GbE Intel NIC, (Qty. 2) E5-2630 v3 CPUs, (Qty. 4) 16GB DRAM, (Qty. 2) 120GB SSD

Per-rack configuration:

- (Qty. 1) Top-of-Rack Switch
- (Qty. 1) 10Gb Proxy-to-Object Switch
- (Qty. 1) Proxy Node
- (Qty. 7) Account/Container/Object Nodes

Configuration with Kinetic drives:

- (Qty. 3) Data Center Rack
- (Qty. 3) 10Gb Top-of-Rack Switch
- (Qty. 3) 10Gb PACO-to-Kinetic Switch
- (Qty. 9) Swift Proxy/Account/Container/Object (PACO) Nodes: 1U chassis, 64GB SATA DOM, Dual-port 10GbE Intel NIC, (Qty. 2) E5-2630 v3 CPUs, (Qty. 4) 16GB DRAM, (Qty. 2) 120GB SSD
- (Qty. 21) Kinetic JBOD: 1U 12-drive Supermicro chassis with 4TB Kinetic drives, (Qty. 2) Dual 10GbE RJ45 NIC

Per-rack configuration:

- (Qty. 1) 10Gb Top-of-Rack Switch
- (Qty. 1) 10Gb PACO-to-Kinetic Switch
- (Qty. 3) Swift Proxy/Account/Container/Object Nodes
- (Qty. 7) Kinetic JBOD Shelves

Note: The configurations in this comparison use chassis alternatives available from Supermicro and are intended to be similar in terms of architecture. One difference is in the NICs: for each of the ACO nodes in the conventional configuration, the TCO analysis assumes only one (1) dual-port 10GbE RJ45 NIC, whereas in the Kinetic configuration, every Kinetic JBOD has two (2) dual-port 10GbE RJ45 NICs.
For the comparison above, the conventional configuration uses a total of 17U per rack, whereas the Kinetic configuration uses only 12U per rack. This is possible because the Kinetic JBODs do not require the storage server controllers used by the object nodes in the conventional case. Although not accounted for in this TCO analysis, the additional rack space savings in the Kinetic configuration provide potential for further TCO improvement at larger scale.

Based on these assumptions, hardware costs were amortized over 48 months and combined with the annual costs for datacenter operations, including power and cooling. A 3% annual drive failure rate was also assumed.
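The mechanics of this hardware TCO model can be sketched as follows. The actual component prices and power figures used in the study are not published in this paper, so the arguments below are placeholders and the helper is illustrative rather than the model actually used.

# Illustrative sketch of the hardware TCO model described above:
# capex amortized over 48 months, plus annual power/cooling,
# plus replacement of 3% of drives per year. All inputs are placeholders.

def annual_hw_tco(capex_usd: float,
                  annual_power_cooling_usd: float,
                  drive_count: int,
                  drive_price_usd: float,
                  amortization_months: int = 48,
                  annual_drive_failure_rate: float = 0.03) -> float:
    amortized_capex = capex_usd * 12 / amortization_months
    drive_replacement = drive_count * annual_drive_failure_rate * drive_price_usd
    return amortized_capex + annual_power_cooling_usd + drive_replacement

# Example call with made-up numbers, purely to show the shape of the calculation:
# annual_hw_tco(capex_usd=500_000, annual_power_cooling_usd=30_000,
#               drive_count=252, drive_price_usd=200)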
The results of the analysis are summarized below. The total annual hardware TCO for the Swift cluster with Kinetic drives showed a 32% decrease compared to a similar cluster configured with conventional SAS or SATA drives:

- On a per-TB basis, this decreased the hardware TCO for each usable TB from $85 per year to $58 per year.
- On a per-GB-per-month basis, hardware TCO decreased from 0.71 cents per month to 0.48 cents per month.

While these figures do not take into account SwiftStack licenses and the cost of operational staff, which a production deployment will require, SwiftStack with Kinetic provides an extremely attractive TCO for private cloud storage.

CONCLUSION

In conclusion, Seagate Kinetic drive technology enables a flexible and scalable SwiftStack architecture that leverages direct Ethernet connectivity from storage nodes to drives in Supermicro Kinetic JBODs. This paper described the component technologies, provided details of the 0.5PB and 1PB reference architectures, captured early benchmark results, and analyzed the total cost of ownership of Kinetic versus conventional architectures.
Compared to a conventional SAS-based storage hardware architecture, TCO savings of more than 30% are possible while maintaining performance and capacity capabilities that meet today's real-world customer requirements and prepare for scaling to the needs of tomorrow's workloads as well.

More information can be found from Seagate, Supermicro, and SwiftStack at their respective websites:

www.seagate.com
www.supermicro.com
www.swiftstack.com

Copyright 2015 SwiftStack, Inc. swiftstack.com