Overview of SimpliVity's OmniCube: Cloud Economics with Enterprise Performance, Protection and Functionality

Abstract

SimpliVity OmniCube is the only hyperconverged infrastructure platform that combines x86 cloud economics with uncompromised Enterprise capabilities: Data Protection, Data Efficiency, Performance, and Global Unified Management. SimpliVity's Data Virtualization Platform, a novel global data architecture, is the key to simplifying IT.

August 2014
Contents

1. SimpliVity Company Overview
2. The Data Problem
3. The Legacy Stack Does Not Solve the Data Problem
4. IT Is Turning To the Cloud
5. SimpliVity OmniCube Solves the Data Problem
6. SimpliVity's Three Core Innovations
7. Data Virtualization Platform
   7.1 Technology Overview
   7.2 Deduplication, Compression, and Optimization Today
   7.3 SimpliVity Data Virtualization Platform
       The Starting Point: Real-time Deduplication, Compression and Optimization without Impact to Performance
       OmniCube Accelerator Card
       Enhancing the Value through Optimization
8. Global Federated Management
9. Path to Hyper Convergence (v 3.0)
Summary
SimpliVity and OmniCube are trademarks of SimpliVity Corporation. All trademarks and registered trademarks mentioned herein are the property of their respective owners. Information in this document is subject to change without notice. Reproduction in any manner whatsoever without the written permission of SimpliVity is strictly forbidden. © SimpliVity Corporation 2014. Publication Date: 08/09/2014
1. SimpliVity Company Overview

SimpliVity was founded in 2009 with a mission to simplify IT. Specifically, SimpliVity's intention has been to deliver to customers the best of both worlds:

x86 cloud economics
Enterprise functionality, protection and performance

These dual goals required three and a half years of development in stealth before OmniCube was made generally available in April 2013, given that the core technologies could not be added after the fact.

Figure 1 - SimpliVity Overview

SimpliVity's go-to-market is 100% through the channel. On July 1st, SimpliVity formally announced the SimpliVity PartnerAdvantage Program, underscoring the priority and focus that SimpliVity devotes to its world-class partners. SimpliVity has raised a total of $101M in three rounds of funding from top-tier venture capital firms including Accel Partners, Charles River Ventures, DFJ, Kleiner Perkins Caufield & Byers (KPCB) Growth and Meritech Capital Partners. These venture firms have backed successful companies such as Amazon, Google, and Facebook. Since launching its first product, OmniCube, in early 2013, SimpliVity has experienced rapid customer adoption and world-class channel development. We already have hundreds of customers deployed in production and hundreds of channel partners globally, and we've won awards that are more typical of larger companies, such as VMworld Gold in 2013 for best product in storage and backup.
Figure 2 illustrates our view that what Enterprise customers need is more than what cloud or web companies deliver. Enterprise customers need more functionality, protection and performance than is currently delivered by the web companies. Enterprise customers want the best of both worlds: x86 cloud economics with Enterprise functionality, protection and performance.

Figure 2: Can Facebook and Google offer everything?

2. The Data Problem

IT organizations are asked by their business constituents to maintain service level agreements and provide new innovation, while budgets remain flat. It's the proverbial "do more with less." Not only is data growing exponentially (to 40 zettabytes by 2020, according to IDC), but we must do more with this data than ever before:

Ensure high performance for mission-critical applications.
Provide mobility across data centers and remote offices.
Integrate data protection.
Offer streamlined management to reduce operational expense.

These growing demands, and the simultaneous business goals to reduce cost and increase agility and flexibility, have led IT organizations to move quickly toward both virtualization and the cloud. Virtualization is the new default in the data center, with more virtual than physical servers today. [1] According to research conducted by Vanson Bourne, IT organizations, on average, expect 59% of their

[1] IDC. Worldwide Cloud and Virtualization System Software, Dec Presentation.
server workloads to be virtualized, [2] and Gartner supports this, saying that almost two-thirds of x86 architecture workloads have been virtualized on servers. [3] Virtualization starts to solve the problem at the server level, but IT organizations are still handcuffed by the Data Problem.

Figure 3 - A Picture of Today's Data Center

A snapshot of today's IT environment is one of complexity, cost and inflexibility that inhibits IT staff from effectively supporting the business. Several challenges are listed below and illustrated in Figure 4.

1. Inability to Innovate: An estimated 70% of the time, IT employees are just "keeping the lights on" by conducting maintenance, upgrades, patches, etc.; only 30% of the time are they building new innovation or engaging in new projects that will push the business forward.

2. Complexity and Decreasing Employee Productivity: The typical datacenter faces the complex challenge of assimilating many different IT stacks, including primary storage, servers, backup deduplication appliances, WAN optimization appliances, SSD acceleration arrays, public cloud gateways, backup applications, replication applications, and other special-purpose appliances and software applications. IT staff must somehow cobble them together, which inevitably results in poor utilization, idle resources, and high labor costs.

3. Multiple points of management: Many modern infrastructures require dedicated staff with specialized training to manage the interface of each stand-alone appliance.

[3] Gartner. Magic Quadrant for x86 Server Virtualization Infrastructure. June 2013.
4. Limited data mobility: As organizations move to virtualization, they are presented with the benefits of VM mobility. VMs can be shifted from server to server or data center to data center using tools like VMware vMotion. But in today's IT infrastructure, the data associated with the VM is still limited in its mobility.

5. Inflexible Scaling Up and Down: Predicting infrastructure requirements three years into the future is neither practical nor efficient. Datacenter managers need a solution that can scale out with growing demand without increased complexity. Similarly, quickly scaling down infrastructure or rebalancing workloads is time-consuming and difficult.

6. Poor IT and Business Agility: The complexities of legacy infrastructure place a burden on IT teams in day-to-day management. The inherently inflexible nature of these technologies also burdens IT teams, and therefore the business, in their ability to quickly roll out new applications or build new capabilities that the business requires. More technically, there are also restrictions on legacy infrastructure's ability to restore, replicate, and clone data, both locally and to remote datacenters, in an efficient manner at scale. This imposes economic limitations on desired data management and protection practices.

7. Cost: Highly functional, high-performance data storage depends on an expensive stack of technologies, from storage area networks (SAN) or network-attached storage (NAS), to target backup devices, to WAN optimization appliances, to traditional standalone servers, both in terms of capital expense (CAPEX) and operational expense (OPEX).

Figure 4 - Today's Complex Web of Data Center Technologies
3. The Legacy Stack Does Not Solve the Data Problem

Approximately 12 disparate products are required to deliver Enterprise functionality, protection and performance. Each of those products is purchased from a different vendor, each requires training of IT professionals, and each is managed from a separate management console. Over the last 15 years, there was a marked proliferation of appliances and point solutions, each addressing only a singular problem. Such appliances all address aspects of the Data Problem. They fall into three sub-categories: data efficiency, data protection, and performance.

1. Data Efficiency appliances: The cost of the WAN was one significant problem, so some companies offered an appliance to optimize traffic on the WAN. Data protection was another problem, so another company proposed a different appliance to optimize local and remote data backup. These two different technologies each addressed merely a subset of the Data Problem, but not all of it. When the public cloud came into play, another product, from yet another vendor, was necessary to deduplicate and compress data for cloud on-ramp purposes.

2. Data Performance appliances and sub-systems: Efficient SSD arrays then became a point of contention. Why the need? Drive density had increased dramatically, about 300-fold during the past decade, yet RPMs increased only 1.5x, giving rise to a significant discrepancy between HDD density and IOPS.

Figure 5 - The I/O Gap

Therefore, IOPS has become one of the most expensive resources in the data center. SSD arrays, SSD caching, SSD drives in servers and SSD drives in storage arrays have all been added to the data center to address the IOPS problem. Most of these SSD arrays and sub-systems are accompanied by deduplication or compression technologies. Again, each data-efficiency technology is optimized to address one phase in the life cycle of the data.

3. Data Protection appliances: As requirements for protecting data increased over the past decade, a slew of data-protection and restoration products emerged. In some data centers, we find numerous data-protection products from various vendors.
Over time, IT organizations have invested in these different point technologies to address aspects or symptoms of the Data Problem. These are bought from and supported by many different vendors, and managed from as many different management consoles. With these multiple points of management came the requirement of dedicated staff with specialized training to maintain the interface of each stand-alone appliance, and that IT reality demanded great amounts of CAPEX and OPEX.

Figure 6: The Legacy Stack

Figure 6 illustrates how previous attempts at solving the Data Problem have resulted in a very large, inflexible, complex infrastructure stack.

4. IT Is Turning To the Cloud

By cloud, we mean three core attributes:

1. Automation, orchestration and self-provisioning of IT resources across the organization
2. Elastic infrastructure: grow up, out or in based on consumption
3. A business model supporting #1 and #2

Cloud enables the quick provisioning of IT resources (compute, memory, storage, application services, data protection services, etc.) from a centralized pool of resources, automated and orchestrated around the needs of the business. It is fundamentally a new way of thinking about the delivery of IT services.
The reality is that many organizations are turning to the cloud whether IT knows it or not. This is categorized as Shadow IT.

Figure 7: Shadow IT

As IT has been struggling to meet demands and maintain SLAs, the business can no longer wait. With the growing trend of the Consumerization of IT, employees are used to a certain speed and flexibility. This expectation leads them to pull out credit cards for Amazon Web Services or sign up for a free account on Dropbox, bypassing IT standards and controls.

So what is an IT organization to do? Should it just turn everything over to Amazon or Google? The big cloud players today do not offer a comprehensive solution to the data problem, which introduces a dilemma: how can we bring the cloud technology that Amazon and Google are supposedly implementing into our datacenters when they don't address some of these problems? There is a disconnect between the big players and the design and implementation of the solution to the Data Problem.

The ideal datacenter would combine primary storage, servers, backup deduplication appliances, WAN optimization appliances, SSD acceleration arrays, public cloud gateways, backup applications, replication applications, and other special-purpose appliances and software applications so that they all run as a unified stack atop a single shared resource pool. If this Data Problem is truly addressed, atomic IT building blocks can be offered that deliver cloud economics in your data centers at enterprise scale.
5. SimpliVity OmniCube Solves the Data Problem

SimpliVity's solution is the revolutionary hyperconverged OmniCube: a scalable, economical, 2U building block using an x86 server platform that offers all the functionality of traditional IT infrastructures in one device. The OmniCube is a combined, all-in-one IT infrastructure platform that includes storage, compute, networking, hypervisor, and real-time deduplication, compression, and optimization, along with powerful data management, data protection, and disaster recovery capabilities.

The OmniCube is an elegant 2U building block based on x86 industry-standard systems containing compute, memory, SSDs, HDDs, and 10GbE interfaces that can be clustered in an efficient scale-out manner to deliver performance, capacity, availability, and functionality. The benefits delivered by this framework include performance acceleration by eliminating redundant IOPS, capacity optimization, and WAN optimization through the elimination of redundant data sent between data centers and remote offices. The solution delivers cloud economics with Enterprise-class functionality (performance, reliability, availability, security, data protection, and disaster recovery). SimpliVity refers to this level of integration as Hyperconvergence (3.0). The solution is designed for high availability with no single point of failure. By combining these powerful capabilities in a scalable IT building block and leveraging the pool of resources, SimpliVity provides dramatic improvements in economics and IT simplification when compared to legacy solutions.

Figure 8: SimpliVity's Simplified Solution

As data is written at inception, the OmniCube deduplicates, compresses, and optimizes it, inline, once and forever, everywhere. This "everywhere" is challenging because a global file system and a global namespace are needed. Most systems deduplicate at one phase of the data life cycle and incur a rehydration tax across the rest of the lifecycle (primary, backup, archive, WAN, cloud).
SimpliVity deduplicates, compresses and optimizes just once, and it persists forever, everywhere. To achieve this quickly, before the data ever hits the disk (something no other company does), we developed the OmniCube Accelerator Card, a PCIe card with an FPGA and NVRAM,
protected with super capacitors. This architecture allows data processing at near-wire speeds, delivering enterprise-class performance and reducing latency because of high-speed, high-availability NVRAM. The architecture is also extremely efficient because we have our own FPGAs. This means we are not only not slowing the data, we're accelerating it: we deduplicate IOPS and data as it is written, therefore we write less data and speed up the process. There is no longer a need to have separate devices for WAN optimization, backup deduplication, or cloud gateways. The OmniCube can securely connect to Amazon and also back up and restore to the Amazon cloud using just our system. OmniCube also requires fewer SSDs in the system than legacy devices, as writes will have already been deduplicated. There are many added protection capabilities, and additional OmniCubes can be implemented for even higher efficiency and availability.

So now we have combined, or hyperconverged, all the functionalities that are associated with storage, data, and data movement. VMware ESXi currently runs on the OmniCube system; however, KVM and Hyper-V will be added in the future. Each OmniCube is operated with a SimpliVity controller, with VM workloads running on the platform.

OmniCube includes a simple policy-based framework to manage all the backups in the system. The backup policy for a virtual machine specifies how frequently backups are taken, how long they are kept, and in which data center they are stored (either local or a remote data center in the Federation). The public cloud is simply another destination option. All the data that moves is compressed and optimized, once and forever, everywhere, achieving effortless scalability and mobility.

6. SimpliVity's Three Core Innovations

Figure 9 below illustrates how OmniCube encompasses three core innovations that fundamentally solve the Data Problem in today's datacenters and enterprises.

Figure 9: SimpliVity's Core Innovations
1. Data Virtualization Platform: The core technology that performs inline data deduplication, compression, and optimization* on all data at inception, across all phases of the data lifecycle (primary, backup, WAN, archive, and the cloud) and across all tiers within a system (DRAM, Flash/SSD, and HDD), all handled with fine data granularity of just 4KB-8KB.
a. Reduces IOPS to SSD/flash or HDD
b. Reduces capacity and associated space and power
c. Enables global mobility of VMs and data, at a fraction of the time and cost
*Optimization: technology that strips the data of overhead injected by the operating system and the virtualization stack (for example, the vswap file), thus contributing to the efficiency of IOPS, storage and WAN transfer.

2. Hyperconvergence: A single software stack that combines the functionality of up to 12 different products in one, running efficiently atop a single shared x86 resource pool and leveraging a commodity server platform to deliver Enterprise IT. The solution delivers Enterprise functionality, protection and performance on x86 commodity servers. Our customers are benefiting from 3x TCO savings based on acquisition cost of IT infrastructure, cost of labor, space, and power. Additionally, a low-cost 10GbE network is sufficient to run high-performance, high-functionality IT.

3. Global Federated Management: An intelligent network of collaborative systems that provides massive scale-out capabilities as well as VM-centric management through a single unified interface for the entire global infrastructure. A key differentiator of the OmniCube GUI is that the management interface is fully integrated with VMware vCenter as a plug-in. A single administrator can manage all aspects of the OmniCube from within vCenter.

Figure 10 - Global Federated Architecture

Figure 10 also shows three OmniCube systems hosting multiple VMs, along with a SimpliVity Cloud instance for efficient, secure backups in the public cloud.
Figure 10 demonstrates that OmniCube allows customers
to leverage their existing investment in servers for hosting VMs and applications while taking advantage of the rich functionality of OmniCube. When more resources are needed, more OmniCube nodes can be seamlessly added to the Federation, thereby dynamically expanding the shared resource pool. Similarly, if resources need to be consolidated within the Federation, customers can easily move VMs using vMotion, and SimpliVity handles the task of dynamically and efficiently migrating the data across the consolidated resource pool.

The result of SimpliVity's three innovations is the market's most efficient infrastructure for the modern, agile datacenter: a globally federated hyperconverged IT platform that enables VM-centric global management of all VMs, their data, and the underlying infrastructure.

Figure 11 - Before and After with SimpliVity
7. Data Virtualization Platform

As stated in Section 6 above, the Data Virtualization Platform is the core technology that performs inline data deduplication, compression, and optimization on all data at inception, across all phases of the data lifecycle (primary, backup, WAN, archive, and the cloud) and across all tiers within a system (DRAM, Flash/SSD, and HDD), all handled with fine data granularity of just 4KB-8KB. Here we'll go into more technical detail on the need for, and ultimately the benefit provided by, the SimpliVity Data Virtualization Platform.

7.1 Technology Overview

The need for a lighter data architecture, one that fosters mobility rather than inhibits it, has been clear for some time. Many have seen great promise in data deduplication and compression and have recognized that, if done well, these technologies can facilitate lighter-weight, mobile data structures. Optimization holds further promise as a means of intelligently managing data based on its anticipated usage by the applications it serves. Following are brief definitions of these technologies:

A. Deduplication: the process of finding and eliminating redundant data within a given data set in reference to the whole available repository. It holds great promise in delivering a light-weight, mobile data structure and is therefore seen as a key to solving the complexity crisis by addressing the root cause.

B. Compression: the process of finding and eliminating redundant data within a given data set, in relation to other data within the same dataset. It is a simpler problem, but provides complementary value.

C. Optimization: the intelligent treatment of data based on its anticipated use by an application. Systems that can identify file types and make real-time decisions about whether and where to store that data can achieve overall improved storage efficiency, performance, and bandwidth usage.
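To make the distinction between deduplication and compression concrete, the following is a minimal, hypothetical Python sketch (not SimpliVity's implementation, which runs in purpose-built hardware): blocks are deduplicated against a global index spanning the whole repository, and each unique block is then compressed individually.

```python
import hashlib
import zlib

BLOCK_SIZE = 4096  # fine-grained 4KB blocks, as described above

def dedupe_and_compress(data: bytes, index: dict) -> list:
    """Split data into 4KB blocks; store each unique block once (compressed).

    `index` maps a block's SHA-256 digest to its compressed bytes and acts
    as the global repository against which redundancy is detected.
    Returns the list of digests ("pointers") that reconstructs the data.
    """
    pointers = []
    for off in range(0, len(data), BLOCK_SIZE):
        block = data[off:off + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in index:                   # deduplication: global scope
            index[digest] = zlib.compress(block)  # compression: within the block
        pointers.append(digest)
    return pointers

index = {}
data = b"A" * 8192 + b"B" * 4096       # two identical 4KB blocks, one unique
ptrs = dedupe_and_compress(data, index)
print(len(ptrs), len(index))           # 3 logical blocks, 2 unique blocks stored
```

The point of the sketch is the division of labor: deduplication works across the whole dataset (the shared `index`), while compression works only within each block it is handed.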
Specifically, deduplication, compression and optimization have several key benefits that address the core requirements of today's data center:

More efficient use of the SSD storage cache. A deduplication process that operates at the right point in the data stream can reduce the footprint on the cache, improving overall system-wide performance.

Dramatic bandwidth reduction on replication between sites. Twenty years ago, the IT organization was dedicated to a single primary data center, but today almost all IT teams manage multiple sites. A fundamental requirement of the infrastructure, then, is fostering efficient data transfer among sites. Deduplicating data before it is sent to a remote site makes the transfer itself more efficient and saves significant bandwidth resources.

Enhanced data mobility. A fundamental principle of server virtualization is the mobility of the VMs, but coarse-grained data structures significantly block mobility in a traditional infrastructure environment. When the data is deduplicated, it is easier to move VMs from one server to another, and it is easier to move data in and out of the cloud for the same reason.

Efficient storage utilization. Required capacity can be reduced 2-3x in standard primary use cases based on the effective use of deduplication, compression, and optimization.
Enhanced performance, given that less actual data needs to be written to or read from disk. This is amplified in application environments such as Virtual Desktop Infrastructure (VDI), where a boot storm can generate multiple GB of random reads from disk. With deduplication, compression, and optimization, that can be reduced to tens of MB.

Enhanced time-to-data. Achieve faster access to data when performing migrations or when recovering from a remote site or the cloud.

The above list enumerates the great potential value of deduplication, compression, and optimization across a number of areas. This may be counter-intuitive given that deduplication technologies have historically been designed to optimize for HDD capacity.

7.2 Deduplication, Compression, and Optimization Today

When introduced to the market in the mid-2000s, deduplication was intended entirely for backup. In this use case, optimizing for capacity is crucial, given the massive redundancy of data and the ever-increasing volume of data to be backed up and retained. Deduplication then spread to other isolated phases of the data lifecycle. It has been implemented as resource-intensive operations in different products, by different vendors, each addressing a single specific problem: deduplication of backup data, or deduplication of data across the WAN, or deduplication of long-term archives.

Despite the maturity of deduplication, and the great capacity and performance benefits therein, no vendor has thus far comprehensively solved the deduplication challenge in primary data. Some products apply deduplication only within the SSD tier, and therefore offer only limited benefits in terms of overall efficiency. Others apply compression technology and incorrectly use the term deduplication. In primary storage systems, optimizing for disk capacity is a relatively lower priority: hard disk IOPS are a much more expensive system resource than HDD capacity.
As a result of the latency that deduplication may impose, many have deployed it as a post-process, which severely limits other operations such as replication and backup. Most of these sub-optimal implementations are a result of adding deduplication to an existing legacy architecture, rather than developing it as the foundation of an overall 21st-century architecture. The various fragmented work-arounds that vendors have delivered have varying levels of value, but fall short of solving the underlying problem; they ultimately do not deliver a truly fine-grained and mobile data infrastructure. IT teams can be left with higher acquisition costs and even more complexity as they manage partial deduplication amidst their other infrastructure burdens.

All of this points in one direction: 21st-century data has to be deduplicated, compressed, and optimized at the primary storage level, and no later. When data is deduplicated across all tiers right from the point of inception, it has significant resource-saving ramifications downstream, and opens up the advanced functionality required for today's virtualized world.

7.3 SimpliVity Data Virtualization Platform

Rather than taking an existing data architecture and trying to build in deduplication, compression and optimization, SimpliVity took the inverse approach. As a first step, it designed the core technology that
performs deduplication and compression on primary data in real time, without impact to performance or latency (see below, the OmniCube Accelerator Card), and built an entire globally federated data architecture around that foundation, one that manages the resulting fine-grained data elements across a Global Federation of systems. In doing so, it addressed all of the core requirements for truly effective deduplication, compression and optimization for the primary production infrastructure system and beyond:

Real-time
Once and forever (no need for a second pass, or hydration/dehydration inefficiencies)
Across all tiers of data within a system
Across all datasets
Across all locations, including the public cloud
Without impacting performance

In delivering the Data Virtualization Platform, SimpliVity is realizing the potential of well-implemented deduplication, compression, and optimization of primary data. In addition to disk capacity, the Data Virtualization Platform optimizes HDD IOPS, flash capacity, DRAM capacity, and WAN capacity. In so doing, SimpliVity's technology goes far beyond capacity efficiency. Though it may at first seem counterintuitive, the Data Virtualization Platform actually improves system performance: with SimpliVity, deduplication, compression, and optimization occur before data is written to the HDD, thus preserving precious HDD IOPS.

The Starting Point: Real-time Deduplication, Compression and Optimization without Impact to Performance

The Data Virtualization Platform performs deduplication, compression and optimization in real time, as the data is first written into the OmniCube datastore. This contrasts with a more prevalent approach called post-process deduplication, which allows data to be written first without deduplication and, at some later stage, performs the deduplication process. The big problem with post-process deduplication is that it introduces a lag where there was none before.
Businesses are presented with the choice of replicating data before deduplicating it, or waiting to replicate until the deduplication process is complete. Neither option is sufficient: replicating before deduplicating defeats the purpose of deduplicating at all, and waiting to replicate can create RPO (Recovery Point Objective) issues.

Given the clear superiority (and elegance) of performing deduplication in real time, why is it unusual? In a word: performance. Deduplication is a resource-intensive process. As data enters the system, it must be scanned, analyzed, compared to an index or table that has cataloged all existing blocks in the data set, and then acted upon (either discarded if redundant, or written if new). Pointers and indexes need to be updated in real time so that the system can keep track of all data elements in all their locations, while maintaining an understanding of the full data sets (pre-deduplication) that have been stored in the system. The challenge is compounded if we wish to maximize data efficiency by focusing the architecture on granular 4KB or 8KB data elements (the original size at which data is written by the application). A system managing 4KB blocks and ingesting data at 400MB/s needs to perform 100,000 such operations per second.
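The in-band write path described above can be modeled in a few lines. This is an illustrative Python sketch only (the actual system performs these steps in dedicated hardware); it shows why every 4KB write costs a hash, an index lookup, and a metadata update before it is acknowledged, and how redundant writes are turned into pointer updates:

```python
import hashlib

BLOCK = 4096  # 4KB granularity; 400 MB/s / 4 KB ~= 100,000 operations/second

class InlineDedupStore:
    """Toy model of an in-band (real-time) deduplicating write path.

    Every incoming block is hashed and checked against the index *before*
    the write is acknowledged, so redundant blocks never reach the disk.
    """
    def __init__(self):
        self.index = {}     # digest -> block (stands in for backing storage)
        self.writes = 0     # blocks physically written
        self.dedup_hits = 0 # redundant blocks eliminated inline

    def write(self, block: bytes) -> str:
        digest = hashlib.sha256(block).hexdigest()
        if digest in self.index:
            self.dedup_hits += 1       # redundant: only a pointer is recorded
        else:
            self.index[digest] = block
            self.writes += 1           # new data: one physical write
        return digest                  # caller stores this fine-grained pointer

store = InlineDedupStore()
for i in [1, 2, 1, 3, 1]:              # five logical writes, three unique blocks
    store.write(bytes([i]) * BLOCK)
print(store.writes, store.dedup_hits)  # 3 physical writes, 2 IOPS eliminated
```

A post-process design would acknowledge all five writes to disk first and reclaim the duplicates later; the in-band model above never writes them at all, which is the source of both the IOPS savings and the performance challenge the text describes.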
Given the challenge, it is understandable that many vendors have opted to conduct this operation out-of-band, so as not to impact performance. This is a challenge that SimpliVity addressed head-on and resolved.

OmniCube Accelerator Card

SimpliVity's real-time deduplication breakthrough is the OmniCube Accelerator Card (OAC), a specially architected SimpliVity PCIe module that processes all writes and manages the compute-intensive tasks of deduplication and compression. All data that is written to the OmniCube datastore first passes through the OAC at inception, as it is created. The practical effect of real-time deduplication is that the Data Virtualization Platform processes data elements that are between 4KB and 8KB in size, compared to the 10-20MB of traditional architectures, i.e. roughly 2,000 times finer-grained. The data is thus born to be mobile from the beginning, and remains so throughout its lifecycle within the OmniCube Global Federation.

Within a given OmniCube system, deduplication makes each storage media tier (DRAM, Flash, SSD, and HDD) more efficient, thereby dramatically lowering the cost of the system compared to traditional offerings. While deduplication within a single OmniCube system provides great efficiencies and cost savings, the additional groundbreaking value of OmniCube lies in the Global Federation: the network of connected OmniCube systems that provides High Availability (HA), resource sharing, simplified scale-out, and replication for VM movement and Disaster Recovery (DR). Additionally, with deduplication at the core, the Data Virtualization Platform has been designed and optimized for managing a very large set of fine-grained data elements across a Federation of systems that are both local (within the same data center) and remote (dispersed data centers), including the public cloud.
Designing the overall data architecture around the deduplication, compression and optimization engine has ensured that the value of deduplication pervades all media, all tiers (primary, backup, and archive), and all locations.

Enhancing the Value through Optimization

While deduplication is the fundamental core, the Data Virtualization Platform further enhances the CAPEX and OPEX savings enabled by OmniCube, delivering remarkable efficiencies through operating-system-aware and virtualization-aware optimizations. The optimizations within OmniCube deliver effects similar to deduplication, in a different way: they identify data that need not be copied or replicated, and take data-specific actions to improve the overall efficiency of the system. Given that OmniCube today is optimized for the VMware environment, most such optimizations stem from awareness of VMware-specific content or commands. For example, .vswp files, though important to the functionality of each individual VM, do not need to be backed up or replicated across sites. Thus, when preparing to back up or replicate a given VM from one site to another, the Data Virtualization Platform recognizes the .vswp file associated with the VM and eliminates that data from the transfer, saving time, bandwidth and capacity. Other optimizations are similar in nature, leveraging the Data Virtualization Platform's ability to find and make real-time decisions on common data types within a VMware
19 environment. 8. Global Federated Management Beyond the global enhancements provided by the Data Virtualization Platform as described in Section 7 above whereby data is deduplicated, compressed, and optimized across all sites and all stages of the data lifecycle, the SimpliVity global federation also provides extensive operational benefits. The SimpliVity solution includes a robust, comprehensive management framework. The design is to simplify IT and make the solution easy to manage within and across data centers and remote offices. The design focuses on the global federated deployment, and administrators can easily traverse the OmniCube Federation from within VMware vcenter. With OmniCube, administrators can easily view and manage applications as well as VMs using simple operations. All analysis, reporting, actions, and management tasks in the SimpliVity OmniCube are VMcentric to eliminate the complexity that exists between vsphere and traditional storage arrays and storage area networks. This means all storage related policies, actions, and monitoring are accomplished on a per- VM basis across the multi-site federated network. One user can manage the entire global infrastructure spanning one or multiple sites through one, simple interface. Policy and automation capabilities in the management layer enable dramatic improvements in operational efficiency, productivity gains, and the simplification of IT. Examples of the easy-to-use interface, familiar to a VMware Administrator, are shown below. The Federation View shows a representation of each data center in the federation along with the connections between data centers. Note that there are several private vsphere data centers and there is one instance of an Amazon AWS public cloud that hosts the SimpliVity OmniCube for Amazon. This illustrates how customers can have a hybrid cloud deployment within the federation whereby low cost backups can be archived securely and efficiently in cloud infrastructure.
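The swap-file optimization described above amounts to a content-aware filter applied before data leaves the site. The sketch below shows the general idea only; the file list and the helper name are illustrative assumptions, and skipping .vswp files is the one behavior taken from the text.

```python
# File suffixes that are VM-local and need not travel with a backup or a
# replica. ".vswp" (the VM's swap file) is the example from the text;
# extending the tuple with other suffixes would be a deployment decision.
SKIP_SUFFIXES = (".vswp",)

def files_to_replicate(vm_files):
    """Return only the files worth sending to the remote site, dropping
    per-VM swap files, which vSphere recreates when the VM powers on."""
    return [f for f in vm_files if not f.endswith(SKIP_SUFFIXES)]

vm = ["web01.vmx", "web01.vmdk", "web01-flat.vmdk", "web01.vswp"]
print(files_to_replicate(vm))  # the swap file is excluded from the transfer
```

The saving is proportional to configured VM memory: a VM with 16GB of RAM and no memory reservation carries a 16GB .vswp file that never needs to cross the WAN.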
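The per-VM policy model described above can be contrasted with per-array management in a small sketch. This is an illustration of the design idea, not SimpliVity's API; every field and value below is a hypothetical example.

```python
from dataclasses import dataclass

@dataclass
class VMPolicy:
    """A data-management policy attached to a VM rather than to a LUN or
    array. All field names and values here are illustrative assumptions."""
    vm_name: str
    backup_every_minutes: int
    retain_days: int
    destination: str  # another federation data center, or a public-cloud site

federation_policies = [
    VMPolicy("erp-db", backup_every_minutes=15, retain_days=30, destination="dc-west"),
    VMPolicy("web01", backup_every_minutes=240, retain_days=7, destination="aws-archive"),
]

# One list, one loop: the policy follows the VM across every site in the
# federation instead of being re-declared per storage array or per site.
for p in federation_policies:
    print(f"{p.vm_name}: backup every {p.backup_every_minutes} min -> {p.destination}")
```

Because the policy object names a VM and a destination rather than a device, the same declaration is meaningful whether the destination is a private data center or a public-cloud archive, which is what makes the hybrid deployment in the Federation View manageable from one interface.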
Figure 12 - The Federation View

9. Path to Hyper Convergence (v3.0)

SimpliVity delivers hyperconverged infrastructure for the Software Defined Data Center. We see the converged-infrastructure evolution as having traced the following progression. Convergence 1.0 efforts bundled servers, storage, and a switch with VMware, without data-protection or data-efficiency appliances. The benefit of 1.0 is reduced labor cost in managing the product; however, the IOPS remain very costly.

Figure 13 - Integrated Systems 1.0
Convergence 2.0 provides servers, storage, and switch, but with all of the resources virtualized. There is now the benefit of a single shared resource pool that enhances efficiency; however, the appliances for protection and efficiency remain outside this scope, and virtual storage appliances run on a server with a clustered file system. The Data Problem is therefore still not addressed.

Figure 14 - Partial Convergence 2.0

SimpliVity proposes Convergence 3.0: the whole of the legacy stack in one box, including servers, storage, switch, deduplication, backup, and WAN functionality on x86 resources with global scalability.

Figure 15 - Hyperconvergence 3.0

The final destination in the evolution is one that delivers true hyperconverged infrastructure for the Software Defined Data Center, and SimpliVity is the first and only vendor executing this vision and delivering the total solution with OmniCube. SimpliVity refers to this as "Hyperconvergence 3.0."
Figure 16 - The Path to Hyperconvergence and SDDC

Interestingly, and not at all surprisingly, the leading players of the prior convergence phases 1.0 and 2.0, VCE and Nutanix respectively, each invested far less time in their technologies before they became Generally Available (GA). VCE took about 8 months to launch its Vblock; Nutanix took approximately 18 months (founded in September 2009, first VDI shipment announced in April 2011). SimpliVity, on the other hand, invested 42 months before delivering its platform on a GA basis:

Figure 17 - The History of Convergence
10. Summary

Enterprise data centers require a great deal of functionality in order to deliver IT services. In addition to basic servers, storage, server virtualization, and networking, numerous appliances and applications have been added to address Data Protection, Data Efficiency (typically 3-5 different deduplication/compression products for different phases of the data lifecycle), Performance, and Global Unified Management. This has caused significant complexity and cost: each technology requires support, maintenance, licensing, power, and cooling, not to mention a set of dedicated resources capable of administering and maintaining the elements. CIOs want simplicity without compromising capabilities. What is required is a combination of x86-based Cloud/Web economics and uncompromised Enterprise capabilities: Data Protection, Data Efficiency, Performance, and Global Unified Management. SimpliVity is the first and only company to deliver the Best of Both Worlds: Cloud Economics and Enterprise Capabilities, enabled by SimpliVity's Data Virtualization Platform. The net benefits of the SimpliVity OmniCube solution include the following:

Simplified IT and 3x TCO savings.
Enterprise performance, reliability, and availability running on x86 commodity resources of your choice, under the virtualization and management of your choice.
Global unified management.
Flexibility in terms of form factor and deployment options.

Through infrastructure consolidation, increased effectiveness of both physical and human resources, and decreased complexity, SimpliVity can help organizations take on the challenge of maximizing efficiency while reducing costs.