WHITE PAPER

Better Object Storage With Hitachi Content Platform
The Fundamentals of Hitachi Content Platform

By Michael Ratner
November 2014
Contents

Executive Summary
Introduction
Main Concepts and Features
  Object-Based Storage
  Distributed Design
  Open Architecture
  Multitenancy
  Object Versioning
  Search
  Adaptive Cloud Tiering
  Spin-Down Capability
  Replication and Global Access Topology
Common Use Cases
  Cloud-Enabled Storage
  Backup-Free Data Protection and Content Preservation
  Fixed-Content Archiving
  Compliance, E-Discovery and Metadata Analysis
System Fundamentals
  Hardware Overview
  Software Overview
  System Organization
Namespaces and Tenants
  Main Concepts
  User and Group Accounts
  System and Tenant Management
  Object Policies
  Content Management Services
Conclusion
Executive Summary

One of IT's greatest challenges today is the explosive, uncontrolled growth of unstructured data. Continual growth of documents, video, Web pages, presentations, medical images and the like increases both complexity and risk. These difficulties are seen particularly in distributed IT environments, such as cloud service providers and organizations with branch or remote office sites. The vast quantity of data being created, the difficulties in management and proper handling of unstructured content, and the complexity of supporting more users and applications pose significant challenges to IT departments. Organizations often end up with sprawling storage silos for a multitude of applications and workloads, with few resources available to manage, govern, protect and search the data.

Hitachi Data Systems provides an alternative solution to these challenges through Hitachi Content Platform (HCP). This single object storage platform can be divided into virtual storage systems, each configured for the desired level of service. The great scale and rich features of this solution help IT organizations in both private enterprises and cloud service providers. HCP assists with management of distributed IT environments and control of the flood of storage requirements for unstructured content, and it addresses a variety of workloads.

The Hitachi Content Platform portfolio products integrate tightly with HCP to deliver powerful file sync and share capability, and elastic, backup-free file services for remote and branch offices.

Built from end to end by Hitachi Data Systems, Hitachi Content Platform Anywhere (HCP Anywhere) provides safe, secure file sharing, collaboration and synchronization. End users simply save a file to HCP Anywhere and it synchronizes across their devices. These files and folders can then be shared via hyperlinks. Because HCP Anywhere stores data in HCP, it is protected, compressed, single-instanced, encrypted, replicated and access-controlled.

Hitachi Data Ingestor (HDI) combines with HCP to deliver elastic and backup-free file services beyond the data center. When a file is written to HDI, it is automatically replicated to HCP. From there, it can be used by another HDI for efficient content distribution and in support of roaming home directories, where users' permissions follow them to any HDI site. Files stay in the HDI file system until free space is needed. Then, HDI reduces any inactive files to pointers referencing the object on HCP. HDI drastically simplifies deployment, provisioning and management by eliminating the need to constantly manage capacity, utilization, protection, recovery and performance of the system.

One infrastructure is far easier to manage than disparate silos of technology for each application or set of users. By integrating many key technologies in a single storage platform, Hitachi Data Systems object storage solutions provide a path to short-term return on investment and significant long-term efficiency improvements. They help IT evolve to meet new challenges, stay agile over the long term, and address future change and growth.
Introduction

Hitachi Content Platform (HCP) is a multipurpose distributed object-based storage system designed to support large-scale repositories of unstructured data. HCP enables IT organizations and cloud service providers to store, protect, preserve and retrieve unstructured content with a single storage platform. It supports multiple levels of service and readily evolves with technology and scale changes. With a vast array of data protection and content preservation technologies, the system can significantly reduce or even eliminate tape-based backups of itself or of edge devices connected to the platform.

HCP obviates the need for a siloed approach to storing unstructured content. Massive scale, multiple storage tiers, Hitachi reliability, nondisruptive hardware and software updates, multitenancy and configurable attributes for each tenant allow the platform to support a wide range of applications on a single physical HCP instance. By dividing the physical system into multiple, uniquely configured tenants, administrators create "virtual content platforms" that can be further subdivided into namespaces for further organization of content, policies and access. With support for thousands of tenants, tens of thousands of namespaces, and petabytes of capacity in one system, HCP is truly cloud-ready (see Figure 1).

Figure 1. A single Hitachi Content Platform supports a wide range of applications.

Main Concepts and Features

Object-Based Storage

Hitachi Content Platform, as a general-purpose object store, allows unstructured data files to be stored as objects. An object is essentially a container that includes both file data and associated metadata that describes the data. The objects are stored in a repository, and each object is treated within HCP as a single unit for all intents and purposes. The metadata is used to define the structure and administration of the data. HCP can also leverage object metadata to apply specific management functions, such as storage tiering, to each object. The objects have intelligence that enables them to automatically take advantage of advanced storage and data management features to ensure proper placement and distribution of content.

HCP architecture isolates stored data from the hardware layer. Internally, ingested files are represented as objects that encapsulate both the data and metadata required to support applications. Externally, HCP presents each object either as a set of files in a standard directory structure or as a uniform resource locator (URL) accessible by users and applications via HTTP or HTTPS.
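To make this URL-based access model concrete, the minimal sketch below stores and retrieves one object through the HCP RESTful interface. The domain, namespace, path and credentials are hypothetical; the authorization token format (base64-encoded username paired with an MD5-hashed password) and the X-HCP-Hash response header follow commonly documented HCP conventions but should be verified against your HCP release.

```python
import base64
import hashlib
import requests  # third-party HTTP client

# Hypothetical namespace URL of the form <namespace>.<tenant>.<hcp-domain>
BASE = "https://ns1.tenant1.hcp.example.com/rest"

# HCP-style authentication token: base64(username) + ":" + md5(password)
user, password = "demo", "secret"  # hypothetical credentials
token = base64.b64encode(user.encode()).decode() + ":" + hashlib.md5(password.encode()).hexdigest()
headers = {"Authorization": "HCP " + token}

# Store a file as an object; the URL path becomes the object's name and location.
payload = b"hello, object storage"
r = requests.put(BASE + "/docs/hello.txt", data=payload, headers=headers, verify=False)
r.raise_for_status()

# Retrieve it again; system metadata comes back in X-HCP-* response headers.
r = requests.get(BASE + "/docs/hello.txt", headers=headers, verify=False)
print(r.content)                    # the fixed-content data
print(r.headers.get("X-HCP-Hash"))  # e.g. "SHA-256 <digest>" (header name assumed)
```

The returned hash can be compared against a locally computed digest to confirm end-to-end integrity, mirroring the content verification behavior described later in this paper.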
Object Structure

An HCP repository object is composed of fixed-content data and the associated metadata, which in turn consists of system metadata and, optionally, custom metadata and an access control list (ACL). The structure of the object is shown in Figure 2.

Fixed-content data is an exact digital copy of the actual file contents at the time of its ingestion. It becomes immutable after the file is successfully stored in the repository. If the object is under retention, it cannot be deleted before the expiration of its retention period, except by using a special privileged operation. If versioning is enabled, multiple versions of a file can be retained. If appendable objects are enabled, data can be appended to an object (with the CIFS or NFS protocols) without modifying the original fixed-content data.

Figure 2. HCP Object

Metadata is system- or user-generated data that describes the fixed-content data of an object and defines the object's properties.

System metadata, the system-managed properties of the object, includes HCP-specific metadata and POSIX metadata. HCP-specific metadata includes the date and time the object was added to the namespace (ingest time), the date and time the object was last changed (change time), the cryptographic hash value of the object along with the namespace hash algorithm used to generate that value, and the protocol through which the object was ingested. It also includes the object's policy settings, such as DPL, retention, shredding, indexing and versioning. POSIX metadata includes a user ID and group ID, a POSIX permissions value and POSIX time attributes.

Custom metadata is optional, user-supplied descriptive information about a data object that is usually provided as well-formed XML. It is typically intended for a more detailed description of the object. This metadata can also be used by future users and applications to understand and repurpose the object content. HCP supports multiple custom metadata fields for each object.
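Because custom metadata is ordinarily well-formed XML, attaching an annotation is just another REST call. The hedged sketch below invents an object URL and annotation schema for illustration; the type=custom-metadata query parameter reflects documented HCP usage but should be confirmed for your version.

```python
import requests

headers = {"Authorization": "HCP <token>"}  # token built as in the previous sketch

# User-defined, well-formed XML describing the object (schema is up to the application).
annotation = """<?xml version="1.0" encoding="UTF-8"?>
<patient-scan>
  <modality>MRI</modality>
  <study-date>2014-11-03</study-date>
  <department>radiology</department>
</patient-scan>"""

# Attach the XML as custom metadata to an already stored object
# (query-parameter name assumed from HCP documentation).
url = "https://ns1.tenant1.hcp.example.com/rest/docs/scan-0001.dcm"
r = requests.put(url, params={"type": "custom-metadata"},
                 data=annotation.encode(), headers=headers, verify=False)
r.raise_for_status()

# Read the annotation back the same way.
r = requests.get(url, params={"type": "custom-metadata"}, headers=headers, verify=False)
print(r.text)
```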
ACL is optional, user-provided metadata containing a set of permissions granted to users or user groups to perform operations on an object. ACLs control data access at an individual object level and are the most granular data access mechanism.

In addition to data objects, HCP also stores directories and symbolic links in the repository. Only POSIX metadata is maintained for directories and symbolic links; they have no fixed-content data, custom metadata or ACLs.

All the metadata for an object is viewable; only some of it can be modified. The way metadata can be viewed and modified depends on the namespace configuration, the data access protocol and the type of metadata.

Object Representation

HCP presents objects to a user or application in 2 different ways, depending on the namespace access interface.

With the RESTful HTTP protocols (HCP REST, Amazon S3), HCP presents each object as a URL. Both data and metadata are accessed through the REST interface. Metadata is handled by using URL query parameters and HTTP headers: clients specify metadata values by including HCP-specific parameters in the request URL, and HCP returns system metadata in HTTP response headers.

For non-RESTful namespace protocols (WebDAV, CIFS and NFS), HCP includes the HCP file system, a standard POSIX file system that allows users and applications to view stored objects as regular files, directories and symbolic links. The HCP file system allows data to be handled in familiar ways using existing methods. It presents each object as a set of files in 2 hierarchical directory structures that hold the components of the object: one for the object's data and another for the object's metadata. For a data object (an object other than a directory or symbolic link), one of these files contains the fixed-content data. The name of this file is identical to the object's name, and its content is the same as the originally stored file. The other files contain object metadata. These files, which are either plain text, XML or JSON, are called metafiles. Directories that contain metafiles are called metadirectories.

HCP File System

The HCP file system represents a single file system across a given namespace. Each HCP namespace that has any non-RESTful access protocol enabled exposes a separate HCP file system instance to clients. The HCP file system maintains a directory structure with separate branches for data files and metafiles.

The data top-level directory is a traditional file system view that includes fixed-content data files for all objects in the namespace. This directory hierarchy is created by a user adding files and directories to the namespace. Each data file and directory in this structure has the same name as the object or directory it represents.

The metadata top-level directory contains all the metafiles and metadirectories for objects and directories. This structure parallels that of data, excluding symbolic links, and is created by the HCP file system automatically as data and directories are added to the namespace by an end user.

HCP metafiles provide a means of viewing and manipulating object metadata through a traditional file system interface. Clients can view and retrieve metafiles through the WebDAV, CIFS and NFS protocols. These protocols can also be used to change metadata by overwriting metafiles that contain the HCP-specific metadata (that can be changed). A sample HCP file system data and metadata structure, as seen through the CIFS, NFS and WebDAV protocols, is shown in Figure 3.
Figure 3. HCP File System Data and Metadata Structure

Distributed Design

A single Hitachi Content Platform consists of both hardware and software. It is composed of many different components that are connected together to form a robust, scalable architecture for object-based storage. HCP runs on an array of servers, or nodes, that are networked together to form a single physical instance. Each node stores data objects and can also store the search index. All runtime operations and physical storage, including data, metadata and index, are distributed among the system nodes. All objects in the repository are distributed across all available storage space but are still presented as files in a standard directory structure. Objects that are physically stored on any particular node are available from all other nodes.

Open Architecture

Hitachi Content Platform has an open architecture that insulates stored data from technology changes and from changes in HCP itself due to product enhancements. This open architecture ensures that users will have access to the data long after it has been added to the repository.

HCP acts as a repository that can store customer data and as an online portal. As a portal, it enables access to that data by means of several industry-standard interfaces, as well as through an integrated search facility and Hitachi Data Discovery Suite (HDDS). The industry-standard HTTP REST, Amazon S3, WebDAV, CIFS and NFS protocols support various operations. These operations include storing data, creating and viewing directories, viewing and retrieving objects and their metadata, modifying object metadata, and deleting objects. Objects that were added using any protocol are immediately accessible through any other supported protocol. These protocols can be used to access the data with a Web browser, the HCP client tools, 3rd-party applications, Microsoft Windows Explorer, or native Windows or Unix tools. HCP also allows special-purpose access to the repository through the SMTP protocol in order to support journaling.
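As a rough illustration of this protocol interchangeability, the sketch below writes an object over REST and reads the same content through an NFS mount of the namespace. The mount point and on-disk path layout are assumptions, not a documented layout; actual paths depend on how the namespace share is exported and mounted.

```python
import requests

headers = {"Authorization": "HCP <token>"}  # token built as in the earlier sketch

# Write over the REST interface.
requests.put("https://ns1.tenant1.hcp.example.com/rest/reports/q3.csv",
             data=b"region,revenue\nwest,1200\n", headers=headers, verify=False)

# On a client where the same namespace is NFS-mounted, the object is
# immediately visible as a regular file (mount point and layout assumed).
with open("/mnt/hcp-ns1/reports/q3.csv") as f:
    print(f.read())
```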
HCP provides a number of HTTP-based RESTful open APIs for easy integration with customer applications. In addition to the HCP REST and Amazon S3-compatible HS3 interfaces that are used for namespace content access, HCP supports the metadata query API for searching for objects in a namespace and the management API (MAPI) for tenant- and namespace-level administration.

HCP implements the open, standards-based Internet Protocol version 6 (IPv6), the latest version of the Internet Protocol (IP). This protocol allows HCP to be deployed in very large-scale networks and ensures compliance with the mandates of government agencies where IPv6 is required. HCP provides IPv6 dual-stack capability that enables coexistence of IPv4 and IPv6 protocols and corresponding applications. HCP can be configured in native IPv4, native IPv6, or dual IPv4 and IPv6 modes, where each virtual network supports either or both IP versions. The IPv4 and IPv6 dual-stack feature is indispensable in heterogeneous environments during the transition to IPv6 infrastructure. Any network mode can be enabled when desired, and existing IPv4 applications can be upgraded to IPv6 independently and with minimal disruption in service. All standard networking protocols and existing HCP access interfaces are supported and can use IPv4 and/or IPv6 addresses based on the enabled network mode, which allows seamless integration with existing data center environments.

Multitenancy

Multitenancy support allows the repository in a single physical Hitachi Content Platform instance to be partitioned into multiple namespaces. A namespace is a logical partition that contains a collection of objects particular to one or more applications. Each namespace is a private object store that is represented by a separate directory structure and has a set of independently configured attributes. Namespaces provide segregation of data, while tenants, or groupings of namespaces, provide segregation of management. An HCP system can have up to 1,000 tenants and 10,000 namespaces. Each tenant and its set of namespaces constitute a virtual HCP system that can be accessed and managed independently by users and applications. This HCP feature is essential in enterprise, cloud and service-provider environments.

Data access to HCP namespaces can be either authenticated or nonauthenticated, depending on the type and configuration of the access protocol. Authentication can be performed using HCP local accounts or Microsoft Active Directory groups.

Object Versioning

Hitachi Content Platform supports object versioning, which is the capability of a namespace to create, store and manage multiple versions of objects in the HCP repository. This ability provides a history of how the data has changed over time. Versioning facilitates storage and replication of evolving content, thereby creating new opportunities for HCP in markets such as content depots and workflow applications.

Versioning is available in HCP namespaces and is configured at the namespace level. Versioning is supported only with the HTTP REST protocol; other protocols cannot be enabled if versioning is enabled for the namespace. Versioning applies only to objects, not to directories or symbolic links. A new version of an object is created when an object with the same name and location as an existing object is added to the namespace. A special type of version, called a deleted version, is created when an object is deleted. This helps protect the content against accidental deletes.
Updates to the object metadata affect only the current version of an object and do not create new versions. Previous versions of objects that are older than a specified amount of time can be automatically deleted, or pruned. It is not possible to delete specific historical versions of an object; however, a user or application with appropriate permissions can purge the object to delete all its versions, including the current one.
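A minimal sketch of this behavior through the REST interface follows. Writing twice to the same URL in a versioning-enabled namespace creates a second version; the version=list query parameter used to enumerate versions is assumed from HCP documentation and should be verified against your release.

```python
import requests

headers = {"Authorization": "HCP <token>"}  # token built as in the earlier sketch
url = "https://ns1.tenant1.hcp.example.com/rest/docs/policy.pdf"

# Storing to the same name in a versioning-enabled namespace creates a new version.
requests.put(url, data=b"draft 1", headers=headers, verify=False)
requests.put(url, data=b"draft 2", headers=headers, verify=False)

# Enumerate the object's versions (query-parameter form assumed).
r = requests.get(url, params={"version": "list"}, headers=headers, verify=False)
print(r.text)  # listing of versions; a plain GET returns only the current version
```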
Search

Hitachi Content Platform includes comprehensive search capabilities that enable users to search for objects in namespaces, analyze namespace contents, and manipulate groups of objects. To satisfy government requirements, HCP supports e-discovery for audits and litigation.

HCP supports 2 search facilities and includes a Web application portal, called the search console, that provides an interactive interface to these search facilities. HCP provides the only integrated metadata query engine (MQE) on the market. The MQE search facility is integrated with HCP and is always available in any HCP system. The HDDS search facility interacts with Hitachi Data Discovery Suite, a separate HDS product that enables federated search across multiple HCP and other supported systems. HDDS performs the search and returns results to the HCP search console; it must be installed separately and configured in the HCP search console. MQE can index and search only object metadata. The HDDS search facility indexes both content and metadata and allows full content search of objects in a namespace. MQE is also used by the metadata query API, a programmatic interface for querying namespaces.

Adaptive Cloud Tiering

Adaptive cloud tiering expands Hitachi Content Platform capacity to any storage device or cloud service. It enables hybrid cloud configurations to scale and share resources between public and private clouds. It also allows HCP to be used to build custom, evolving service level agreements (SLAs) for specific data sets using enhanced service plans.

HCP provides comprehensive storage-tiering capabilities as part of the long-term goal of supporting information lifecycle management (ILM) and intelligent objects. HCP supports a range of storage components that are grouped into storage pools. Storage pools virtualize access to one or more logically grouped storage components with similar price/performance characteristics. The storage components can be either primary storage (HCP storage) or extended storage. Primary storage includes direct attached storage (DAS) and SAN storage; internal DAS storage is always running, while SAN storage may be running or spin-down-capable. Extended storage includes non-HCP external storage devices (NFS and S3-compatible) and public cloud storage services (Amazon S3, Microsoft Azure, Google Cloud Storage and Hitachi Cloud Services). The topology of adaptive cloud tiering is shown in Figure 4.

Objects are stored in storage pools and are managed by object life-cycle policies, which are defined in service plans. Service plans determine content life cycle from ingest to obsolescence or disposition and implement protection strategies at each tier; they effectively represent customer SLAs. Service plans can be offered to a tenant administrator so they can be applied to individual namespaces. Storage tiering functionality is implemented as an HCP service: the storage tiering service applies service plans and moves objects between tiers of storage, and flexible service plans allow storage tiering to adapt to changes.

Spin-Down Capability

HCP spin-down-capable storage takes advantage of the power savings feature of Hitachi midrange storage systems and is one of the core elements of the storage tiering functionality and adaptive cloud tiering. According to the storage tiering strategy that an organization specifies, the storage tiering service identifies objects that are eligible to reside on spin-down storage and moves them to and from the spin-down storage as needed.
Tiering selected content to spin-down-enabled storage lowers overall cost by reducing energy consumption for large-scale unstructured data storage, such as deep archives and disaster recovery sites. Storage tiering can be used very effectively with customer-identified "dark data" (rarely accessed data) or with data replicated for disaster recovery, by moving that data to spin-down storage some time after ingestion or replication.
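To illustrate the idea of a service plan as a set of age-based tiering rules, here is a purely conceptual model. It is not HCP's actual service-plan format (service plans are configured through the management consoles); the pool names and thresholds are invented for illustration.

```python
# Conceptual model only: tiering rules that move objects between storage
# pools as they age, mirroring what an HCP service plan expresses.
from dataclasses import dataclass

@dataclass
class TierRule:
    pool: str        # target storage pool (names hypothetical)
    after_days: int  # move objects once they are at least this old
    copies: int      # protection copies to keep on that tier

service_plan = [
    TierRule(pool="primary-das", after_days=0, copies=2),    # ingest tier
    TierRule(pool="san-spindown", after_days=90, copies=1),  # rarely read "dark data"
    TierRule(pool="amazon-s3", after_days=365, copies=1),    # public-cloud tier
]

def target_tier(age_days: int) -> TierRule:
    """Pick the deepest tier whose age threshold the object has passed."""
    eligible = [r for r in service_plan if age_days >= r.after_days]
    return max(eligible, key=lambda r: r.after_days)

print(target_tier(120).pool)  # -> san-spindown
```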
Figure 4. Adaptive Cloud Tiering

Replication and Global Access Topology

Replication, an add-on feature to HCP, is the process that keeps selected tenants and namespaces in 2 or more HCP systems in sync with each other. The replication service copies one or more tenants or namespaces from one HCP system to another, propagating object creations, object deletions and metadata changes. HCP also replicates tenant and namespace configuration, tenant-level user accounts, compliance and tenant log messages, and retention classes. The replication process is object-based and asynchronous.

The HCP system in which the objects are initially created is called the primary system. The second system is called the replica. Typically, the primary system and the replica are in separate geographic locations and connected by a high-speed wide area network.

HCP supports advanced traditional replication topologies, including many-to-one and chain configurations, as well as a revolutionary global access topology in which globally distributed HCP systems are synchronized in a way that allows users and applications to access data from the closest HCP site for improved collaboration, performance and availability. Global access topology is based on bidirectional, active-active replication links that allow read-and-write access to the same namespace on all participating HCP systems. The content is synchronized between systems (or locations) in both directions. This enables read-and-write access to data in any namespace and from any location across the entire replication topology, essentially creating a global content point-of-presence network.
Common Use Cases

Cloud-Enabled Storage

The powerful, industry-leading capabilities of Hitachi Content Platform make it well suited to the cloud storage space. An HCP-based infrastructure solution is sufficiently flexible to accommodate any cloud deployment model (public, private or hybrid) and simplify the migration to the cloud for both service providers and subscribers. HCP provides edge-to-core, secure multitenancy and robust management capabilities, and a host of features to optimize cloud storage operations. HCP, in its role as an online data repository, is truly ready for a cloud-enabled market.

While numerous HCP features were already discussed earlier in this paper, the purpose of this section is to summarize those that contribute the most to HCP cloud capabilities. They include:

- Large-scale multitenancy.
  - Management segregation. HCP supports up to 1,000 tenants, each of which can be uniquely configured for use by a separate cloud service subscriber.
  - Data segregation. HCP supports up to 10,000 namespaces, each of which can be uniquely configured for a particular application or workload.
- Massive scale.
  - A petabyte-scale repository offers 80PB of storage, 80 nodes, 64 billion user objects and 30 million files per directory, all on a single physical system.
  - The best node density in the object storage industry supports 500TB and 800 million objects per node. With fewer nodes, HCP requires less power, less cooling and less floor space.
  - Unparalleled expandability allows organizations to "start small" and expand according to demand. Nodes and/or storage can be added to expand an HCP system's storage and throughput capacity, without disruptions. Multiple storage systems are supported by a single HCP system.
  - Easy tenant and storage provisioning.
- Geographical dispersal and global accessibility.
  - Global access topology that enables creation of a global content point-of-presence network.
  - WAN-friendly REST interface for namespace data access and replication.
  - WAN-optimized, high-throughput data transfer.
- High availability.
  - Fully redundant hardware.
  - Automatic routing of client requests around hardware failures.
  - Load balancing across all available hardware.
- Adaptive cloud tiering. This enables hybrid cloud configurations where resources can be easily scaled and shared between public and private clouds. Specific data sets can be migrated on demand across various cloud services and local storage, and new cloud storage can be easily integrated and existing storage retired.
- Multiple REST interfaces. These include the HCP REST and Amazon S3-compatible REST APIs for namespace data access, the management API, and the metadata query API. REST is a technology of choice for cloud enablers and consumers. Some of the reasons for its popularity include high efficiency and low overhead, caching at both the client and the server, and API uniformity. In addition, its stateless nature accommodates the latencies of Internet access and potentially complex firewall configurations.
- Secure, granular access to tenants, namespaces and objects, which is crucial in any cloud environment. This access is facilitated by the HCP multilayer, flexible permission mechanism, including object-level ACLs.
- Usage metering. HCP has built-in chargeback capabilities, indispensable for cloud use, to facilitate provider-subscriber transactions. HCP also provides tools for 3rd-party vendors and customers to write to the API for easy integration with the HDS solution for billing and reporting.
- Low-touch operation. The system is self-monitoring, self-managing and self-healing. HCP features advanced monitoring, audit and reporting capabilities, and HCP services can automatically repair issues if they arise.
- Support for multiple levels of service. This support is provided through HCP policies, service plans and quotas that can be configured for each tenant. It helps enforce SLAs and allows the platform to accommodate a wide range of subscriber use cases and business models on a single physical system.
- Edge-to-core solution. HCP, working in tandem with Hitachi Data Ingestor, provides an integrated edge-to-core solution for cloud storage deployments. HCP serves as the "engine" at the core of the HDS cloud architecture. HDI resides at the edge of the storage cloud (for instance, at a remote office or subscriber site) and serves as the "onramp" for application data to enter the cloud infrastructure. HDI acts as a local storage cache while migrating data into HCP and maintaining links to stored content for later retrieval. Users and applications interact with HDI at the edge of the cloud but perceive bottomless, backup-free storage provided by HCP at the core.
- File-sync-and-share solution. HCP, working in tandem with Hitachi Content Platform Anywhere (HCP Anywhere), provides a secure file and folder synchronization and sharing solution for workforce mobility. HCP again serves as the "engine" at the core of the HDS cloud architecture. HCP Anywhere servers are deployed in conjunction with HCP, and client applications are installed on user devices including laptops, desktops and mobile devices. End users simply save a file to their HCP Anywhere folder and it automatically synchronizes to all of their registered devices and becomes available via popular Web browsers. Once saved to the HCP Anywhere folder, the file is protected, compressed, single-instanced, encrypted, replicated and access-controlled by the well-proven Hitachi Content Platform. Individual files or entire folders can then be shared with a simple hyperlink.

Backup-Free Data Protection and Content Preservation

Hitachi Content Platform is a truly backup-free platform: HCP protects content without the need for backup. It uses sophisticated data preservation technologies, such as configurable data and metadata protection levels, object versioning and change tracking, multisite replication with seamless application failover, and many others.
HCP includes a variety of features designed to protect the integrity, provide the privacy, and ensure the availability and security of stored data. Below is a summary of the key HCP data protection features:

- Content immutability. This intrinsic feature of the HCP "write-once, read-many" (WORM) storage design protects the integrity of the data in the repository.
- Content verification. The content verification service maintains data integrity and protects against data corruption or tampering by ensuring that the data of each object matches its cryptographic hash value. Any violation is repaired in a self-healing fashion.
- Scavenging. The scavenging service ensures that all objects in the repository have valid metadata. In case metadata is lost or corrupted, the service tries to reconstruct it by using the secondary, or scavenging, metadata (a copy of the metadata stored with each copy of the object data).
- Data encryption. HCP supports encryption-at-rest capability that allows seamless encryption of data on the physical volumes of the repository. This ensures data privacy by preventing unauthorized access to the stored data. The encryption and decryption are handled automatically and transparently to users and applications.
- Versioning. HCP uses versioning to protect against accidental deletes and the storing of wrong copies of objects.
- Data availability.
  - RAID protection. RAID storage technology provides efficient protection from simple disk failures. SAN-based HCP systems typically use RAID-6 erasure coding protection to guard against dual drive failures.
  - Multipathing and zero-copy failover. These features provide data availability in SAN-based HCP systems.
  - Data protection level (DPL) and protection service. In addition to using RAID and SAN technologies to provide data integrity and availability, HCP can use software mirroring to store the data for each object in multiple locations on different nodes. HCP groups system nodes into protection sets with the same number of nodes in each set. It tries to store all the copies of the data for an object in a single protection set, where each copy is stored on a different node. The protection service enforces the required level of data redundancy by checking and repairing protection sets. In case of violation, it creates additional copies or deletes extra copies of an object to bring the object into compliance. If replication is enabled, the protection service can use an object copy from a replica system if the copy on the primary system is unavailable.
  - Metadata redundancy. In addition to the data redundancy specified by DPL, HCP creates multiple copies of the metadata for an object on different nodes. The metadata protection level (MDPL) is a system-wide setting that specifies the number of copies of the metadata that the HCP system must maintain (normally 2 copies, MDPL2). Management of MDPL redundancy is independent of the management of data copies for DPL.
  - Nondisruptive software and hardware upgrades. HCP employs a number of techniques that minimize or eliminate any disruption of normal system functions during software and hardware upgrades. Nondisruptive software upgrade (NDSU) is one of these techniques. It includes greatly enhanced online upgrade support, nondisruptive patch management, and online upgrade performance improvements. HCP supports media-free and remote upgrades, HTTP or REST drain mode, and parallel operating system (OS) installation. It also supports automatic online upgrade commit, offline upgrade duration estimates, enhanced monitoring and alerts, and other features. Nodes can be added to an HCP system without causing any downtime. HCP also supports nondisruptive storage upgrades that allow online storage addition to SAN-based HCP systems without any data outage.
  - Seamless application failover. This feature is supported by HCP systems in a replicated topology. It includes the seamless failover routing feature, which enables direct integration with customer-owned load balancers by allowing HTTP requests to be serviced by any HCP system in a replication topology.
  - Seamless domain name system (DNS) failover. This is a built-in HCP multisite load-balancing and high-availability technology that is ideal for cost-efficient, best-effort customer environments.
  - Replication. If enabled, this feature provides a multitude of mechanisms to ensure data availability. The replica system can be used both as a source for disaster recovery and to maintain data availability by providing good object copies for the protection and content verification services. If an object cannot be read from the primary system, HCP can try to read the object from the replica, provided the read-from-replica feature is enabled.
- Data security.
  - Authentication of management and data access.
  - Granular, multilayer data access permission scheme.
  - IP filtering technology and protocol-specific allow or deny lists.
  - Secure Sockets Layer (SSL) support for HTTP and WebDAV data access, management access and replication.
  - Node login prevention.
  - Shredding policy and service.
- Autonomic technology refresh. This feature is implemented as the HCP migration service. It enables organizations to maintain continuously operating content stores, which allows them to preserve their digital content assets for the long term.

Fixed-Content Archiving

Hitachi Content Platform is optimized for fixed-content data archiving. Fixed-content data is information that does not change but must be kept available for future reference and be easily accessible when needed. A fixed-content storage system is one in which the data cannot be modified. HCP uses WORM storage technology and a variety of policies and services (such as retention, content verification and protection) to ensure the integrity of data in the repository. WORM storage means that data, once ingested into the repository, cannot be updated or modified; that is, the data is guaranteed to remain unchanged from when it was originally stored. If the versioning feature is enabled within the HCP system, different versions of the data can be stored and retrieved, in which case each version is WORM.

Compliance, E-Discovery and Metadata Analysis

Custom metadata brings structure to unstructured content. It enables building massive unstructured data stores by providing the means for faster and more accurate access to content. Custom metadata gives storage managers the meaningful information they need to efficiently and intelligently process data and apply the right object policies to meet all business, compliance and protection requirements. Structured custom metadata (content properties) and multiple custom metadata annotations take this capability to the next level by helping yield better analytic results and facilitating content sharing among applications.

Regulatory compliance features include namespace retention mode (compliance and enterprise), retention classes, retention hold, automated content disposition, and privileged delete and purge. HCP search capabilities include support for e-discovery for litigation or audit purposes, and allow direct 3rd-party integration through built-in open APIs.

The search console offers a structured environment for creating and executing queries (sets of criteria that each object in the search results must satisfy). End users can apply various selection criteria, such as objects stored before a certain date or larger than a specified size. Queries return metadata for objects included in the search result; this metadata can be used to retrieve the object. From the search console, end users can open objects, perform bulk operations on objects (hold, release, delete, purge, privileged delete and purge, change owner, set ACL), and export search results in standard file formats for use as input to other applications.

Search is enabled at both the tenant and namespace levels. Indexing is enabled on a per-namespace basis. Settings at the system and namespace levels determine whether custom metadata is indexed in addition to system metadata and ACLs. If indexing of custom metadata is disabled, the MQE index does not include custom metadata. If a namespace is not indexed at all, searches do not return any results for objects in that namespace.
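REST clients can run equivalent searches programmatically through the metadata query API, which the next section describes in more detail. Below is a hedged sketch of an object-based query; the endpoint path, criteria syntax and JSON body shape are assumed from documented examples and should be verified against your HCP release.

```python
import requests

# Query the metadata query engine for a tenant's namespace (endpoint and
# body shape assumed; HCP also accepts XML request and response formats).
url = "https://ns1.tenant1.hcp.example.com/query"
body = {
    "object": {
        "query": "+(namespace:ns1.tenant1) +(size:>1048576)",  # criteria syntax assumed
        "count": 10,
        "objectProperties": "urlName,size,ingestTime,retention",
    }
}
headers = {"Authorization": "HCP <token>", "Accept": "application/json"}

# The response is metadata for matching objects, which a client can then
# use to retrieve the objects themselves.
r = requests.post(url, json=body, headers=headers, verify=False)
print(r.json())
```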
MQE indexes system metadata, custom metadata (optionally), and ACLs of objects in each search-enabled and index-enabled namespace. In namespaces with versioning enabled, it indexes only the current version of an object. Each object has an index setting that affects indexing of custom metadata by the metadata query engine. If indexing is enabled for a namespace, MQE always indexes system metadata and ACLs, regardless of the index setting for an object. If the index setting is set to true, MQE also indexes custom metadata for the object. The MQE index resides on designated logical volumes on the HCP nodes, sharing or not sharing the space on these volumes with the object data, depending on the type of system and volume configuration. The Hitachi Data Discovery Suite search facility creates and maintains its own index, which resides separately in HDDS.

REST clients can search HCP programmatically using the metadata query API. As with the search console, the response to a query is metadata for the objects that meet the query criteria, in XML or JSON format. Two types of queries are supported:

- Object-based query locates objects that currently exist in the repository based on their metadata, including system metadata, custom metadata and ACLs, as well as object location (namespace or directory). Multiple, robust metadata criteria can be specified in object-based queries. Objects must be indexed to support this type of query.
- Operation-based query provides time-based retrieval of object transactions. It searches for objects based on operations performed on the objects during specified time periods, retrieving records of object creation, deletion and purge (user-initiated actions) and disposition and pruning (system-initiated actions). Operation-based queries return not only objects currently in the repository but also deleted, disposed, purged or pruned objects. If versioning is enabled, both current and old versions of objects can be returned. The response is retrieved directly from the HCP metadata database and internal logs; thus, no indexing is required to support this type of query. Operation-based queries enable HCP integration with backup servers, search engines (such as HDDS), policy engines and other applications.

System Fundamentals

Hardware Overview

An individual physical Hitachi Content Platform instance, or HCP system, is not a single device; it is a collection of devices that, combined with HCP software, can provide all the features of an online object repository while tolerating node, disk and other component failures. From a hardware perspective, each HCP system consists of the following categories of components:

- Nodes (servers).
- Internal or SAN-attached storage.
- Networking components (switches and cabling).
- Infrastructure components (racks and power distribution units).

System nodes are the vital part of HCP. They store and manage the objects that reside in the physical system storage. The nodes are conventional off-the-shelf servers. Each node can have multiple internal physical drives and/or connect to external Fibre Channel storage (SAN). In addition to using RAID and SAN technologies and a host of other features to protect the data, HCP uses software mirroring to store the data and metadata for each object in multiple locations on different nodes.
For data, this feature is managed by the namespace data protection level (DPL) setting, which specifies the number of copies of each object HCP must maintain in the repository to ensure the required
level of data protection. For metadata, this feature is managed by the metadata protection level (MDPL), which is a system-wide setting.

An HCP system uses private back-end and public front-end networks. The isolated back-end network is used for vital internode communication and coordination. It uses a bonded Ethernet interface in each node, 2 Ethernet switches, and 2 sets of cables connecting the nodes to the switches, thereby making it fully redundant. The front-end network is used for customer interaction with the system and also uses a bonded Ethernet interface in each node. The recommended setup includes 2 independent switches that connect these ports to the front-end (corporate) network.

HCP runs on a redundant array of independent nodes (RAIN) or a SAN-attached array of independent nodes (SAIN). RAIN systems use the internal storage in each node. SAIN systems use external SAN storage. HCP is offered as 2 products: HCP 300 (based on the RAIN configuration) and HCP 500 (based on the SAIN configuration).

Hitachi Content Platform RAIN (HCP 300)

The nodes in an HCP 300 system are Hitachi Compute Rack 210H (CR 210H) servers. RAIN nodes contain internal storage: a RAID controller and disks. All nodes use hardware RAID-5 data protection. In an HCP RAIN system, the physical disks in each node form a single RAID group, normally RAID-5 (5D+1P) (see Figure 5). This configuration helps ensure the integrity of the data stored on each node.

Figure 5. HCP 300 Hardware Architecture
An HCP 300 (RAIN) system must have a minimum of 4 nodes. Additional nodes are added in 4-node increments, up to a maximum of 20 nodes. HCP 300 systems are normally configured with a DPL setting of 2 (DPL2), which, coupled with hardware RAID-5, yields an effective RAID-5+1 total protection level.

Hitachi Content Platform SAIN (HCP 500/500XL)

The nodes in an HCP 500 system are either Hitachi Compute Rack 210H (CR 210H) or Hitachi Compute Rack 220S (CR 220S) servers. The HCP 500 nodes contain Fibre Channel host bus adapters (HBAs) and use external Fibre Channel SAN storage; they are diskless servers that boot from the SAN-attached storage. HCP 500 may use Fibre Channel switches or have nodes directly connected to external storage. An HCP 500 system using direct connect is shown in Figure 6.

The nodes in a SAIN system can have internal storage in addition to being connected to external storage. These nodes, called HCP 500XL nodes, are an alternative to the standard HCP 500 nodes and have the same hardware configuration except for the addition of a RAID controller and internal hard disk drives. A typical 500XL node internal storage configuration includes six 500GB 7200RPM SATA II drives in a single RAID-5 (5D+1P) RAID group, with 2 LUNs: 31GB (operating system) and 2.24TB (database). In HCP 500XL nodes the system metadata database resides on the local disks, which leads to more efficient and faster database operations. As a result, the system can better support larger capacity and higher object counts per node and address higher performance requirements. HCP 500XL nodes are usually considered when the system configuration exceeds 4 standard nodes.

Figure 6. HCP 500 Hardware Architecture (Direct Connect)
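The interplay of RAID overhead and DPL software copies determines how much raw disk ends up as usable capacity. The following back-of-the-envelope arithmetic is illustrative only; real sizing must also account for metadata, the MQE index, spare capacity and file system overhead, and the RAID layouts shown are examples rather than fixed configurations.

```python
# Illustrative capacity arithmetic, not a sizing tool.
def usable_fraction(raid_data_disks: int, raid_total_disks: int, dpl: int) -> float:
    """Fraction of raw capacity left after RAID parity and DPL software copies."""
    return (raid_data_disks / raid_total_disks) / dpl

# HCP 300 default: RAID-5 (5D+1P) per node with DPL2
print(usable_fraction(5, 6, 2))   # ~0.417 of raw capacity holds unique data

# Example SAIN layout: RAID-6 (6D+2P) with a single array-protected copy (DPL1)
print(usable_fraction(6, 8, 1))   # 0.75
```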
Typically, the external SAN-attached storage uses RAID-6. The best protection and high availability for an HCP 500 system are achieved by giving each node its own RAID group or Hitachi Dynamic Provisioning (HDP) pool containing one RAID group. SAIN systems support multiple storage arrays in a single system, or even for a single node.

HCP 500 and 500XL systems are supported with a minimum of 4 nodes. With a SAIN system, additional nodes are added in pairs, so the system always has an even number of nodes. A SAIN system can have a maximum of 80 nodes. Both RAIN and SAIN systems can have a DPL as high as 4, which affords maximum data availability but greatly sacrifices storage utilization.

SAIN systems introduce a number of SAN-specific features that help maintain the organization's data availability. They include multipathing, cross-mapping and zero-copy failover. In a SAN environment, multiple physical paths may be configured between an HCP node and any given LUN that maps to it. Multipathing facilitates uninterrupted read and write access to the system, protecting it against storage array controller, Fibre Channel switch, fiber optic cable and HBA port failures.

The process of one node automatically taking over management of storage previously managed by another, failed node is called zero-copy failover. To support zero-copy failover, each LUN that stores object data or the MQE index must map to 2 different nodes. The pair of nodes forms a set such that the LUNs that map to one of the nodes also map to the other; this is called cross-mapping. In a cross-mapped pair of nodes, the LUNs on a node that are managed by this node during normal operation are called primary LUNs; the LUNs from the other node that will be managed by this node after failover are called standby LUNs. Cross-mapping of LUNs from one node to another node in the system allows instantaneous access to data from failed nodes.
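The cross-mapping scheme just described can be modeled conceptually as follows. This is not HCP code, merely a sketch of why failover requires no data copying: the surviving node simply promotes the standby LUNs it can already see.

```python
# Conceptual model of a cross-mapped node pair: every data LUN is mapped to
# two nodes, primary on one and standby on the other (names hypothetical).
cross_mapped_pair = {
    "node1": {"primary": ["lun-a", "lun-b"], "standby": ["lun-c", "lun-d"]},
    "node2": {"primary": ["lun-c", "lun-d"], "standby": ["lun-a", "lun-b"]},
}

def fail_over(pair: dict, failed: str) -> dict:
    """Promote the survivor's standby LUNs; no object data moves."""
    survivor = next(n for n in pair if n != failed)
    pair[survivor]["primary"] += pair[survivor]["standby"]
    pair[survivor]["standby"] = []
    return pair

print(fail_over(cross_mapped_pair, "node2")["node1"]["primary"])
# -> ['lun-a', 'lun-b', 'lun-c', 'lun-d']
```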
Software Overview

Hitachi Content Platform system software consists of an operating system and core software. The Linux-based HCP operating system is called the appliance operating system. The core software includes components that:

- Enable access to the object repository through the industry-standard HTTP or HTTPS, WebDAV, CIFS, NFS, SMTP and NDMP protocols.
- Ingest fixed-content data, convert it into HCP objects, and manage the objects' data and metadata over time.
- Maintain the integrity, stability, availability and security of stored data by enforcing repository policies and executing system services.
- Enable configuration, monitoring and management of the HCP system through a human-readable interface.
- Support searching the repository through an interactive Web interface (the search console) and a programmatic interface (the metadata query API).

System Organization

HCP is a fully symmetric, distributed application that stores and manages objects (see Figure 7). An HCP object encapsulates the raw fixed-content data that is written by a client application, and its associated system and custom metadata. Each node in an HCP system is a Linux-based server that runs a complete HCP instance. The HCP system can withstand multiple simultaneous node failures, and acts automatically to ensure that all object and namespace policies are valid.

External system communication is managed by the DNS manager, a distributed network component that balances client requests across all nodes to ensure maximum system throughput and availability. The DNS manager works in conjunction with a corporate DNS server to allow clients to access the system as a single entity, even though the system is made up of multiple independent nodes. The HCP system is configured as a subdomain of an existing corporate domain. Clients access the system using predefined protocol-specific or namespace-specific names.

Figure 7. The High-Level Structure of an HCP System

While not required, using DNS is important in ensuring balanced and problem-free client access to an HCP system, especially for REST HTTP clients.

Each node in the HCP system runs a complete software stack made up of the appliance operating system and the HCP core software. All nodes have an identical software image to ensure maximum reliability and fully symmetrical operation of the system. An HCP system node can serve as both an object repository and an access point for client applications and is capable of taking over the functions of other nodes in the event of node failure.

All intranode and internode communication is based on scalable performance-oriented cluster communication (SPOCC). This efficient, reliable and easily expandable message-based middleware runs over TCP/IP. It functions as a unified message bus for distributed applications, forming the backbone of the back-end network where all node interaction occurs. SPOCC supports multicast and point-to-point connections and is designed to deal gracefully with network and hardware failures.

An HCP system is inherently a distributed system. Many of its core components, including the database, have a distributed nature. To process incoming client requests, software components on a particular node need to interact
with the components on other nodes across the system by means of the SPOCC-powered system backbone. All runtime operations are distributed among the system nodes. Each node bears equal responsibility for processing requests, storing data and sustaining the overall health of the system. No single node becomes a bottleneck: all nodes are equally capable of handling any client request, ensuring reliability and performance. Because HCP uses a distributed processing scheme, the system can scale linearly as the repository grows in size and in the number of clients accessing it. When a new node is added to the HCP system, the system automatically integrates that node into the overall workflow without manual intervention.

Namespaces and Tenants

Main Concepts

A Hitachi Content Platform repository is partitioned into namespaces. A namespace is a logical repository as viewed by an application. Each namespace consists of a distinct logical grouping of objects with its own directory structure, such that the objects in one namespace are not visible in any other namespace. Access to one namespace does not grant a user access to any other namespace. To the user of a namespace, the namespace is the repository. Namespaces are not associated with any preallocated storage; they share the same underlying physical storage.

Namespaces provide a mechanism for separating the data stored for different applications, business units or customers. For example, there may be one namespace for accounts receivable and another for accounts payable. While a single namespace can host one or more applications, it typically hosts only one application. Namespaces also enable operations to work against selected subsets of repository objects. For example, a search could target the accounts receivable and accounts payable namespaces but not the employees namespace.

Namespaces are owned and managed by tenants. Tenants are administrative entities that provide segregation of management, while namespaces offer segregation of data. A tenant typically represents an actual organization, such as a company or a department within a company, that uses a portion of a repository. A tenant can also correspond to an individual person. Namespace administration is done at the owning tenant level.

Clients can access HCP namespaces through the HTTP or HTTPS, WebDAV, CIFS, NFS and SMTP protocols. These protocols can support authenticated and/or anonymous types of access. HCP namespaces are owned by HCP tenants. An HCP system can have multiple HCP tenants, each of which can own multiple namespaces. The number of namespaces each HCP tenant can own can be limited by an administrator. Figure 8 shows the logical structure of an HCP system with respect to its multitenancy features.
Figure 8. HCP System Logical Layout: Namespaces and Tenants

User and Group Accounts

User and group accounts control access to the various Hitachi Content Platform interfaces and give users permission to perform administrative tasks and access namespace content.

An HCP user account is defined in HCP; it has a set of credentials, username and password, which is stored locally in the system. The HCP system uses these credentials to authenticate a user, performing local authentication.

An HCP group account is a representation of an Active Directory (AD) group. To create group accounts, HCP must be configured to support Active Directory. The group account enables AD users in the AD group to access one or more HCP interfaces.

Like HCP user accounts, HCP group accounts are defined separately at the system and tenant levels. Different tenants have different user and group accounts; these accounts cannot be shared across tenants. Group membership is different at the system and tenant levels. HCP administrative roles can be associated with both system-level and tenant-level user and group accounts. Data access permissions can be associated with only tenant-level user and group accounts. Consequently, system-level
local and AD users can only be administrative users, while tenant-level local and AD users can both be administrative users and have data access permissions. Tenant-level users can have only administrative roles without namespace data permissions, only namespace data permissions without administrative roles, or any combination of administrative roles and namespace data permissions.

System and Tenant Management

The implementation of segregation of management in the Hitachi Content Platform system is illustrated in Figure 8. An HCP system has both system-level and tenant-level administrators:

- System-level administrative accounts are used for configuring system-wide features, monitoring system hardware and software and overall repository usage, and managing system-level users. The system administrator user interface, the system management console, provides the functionality needed by the maintainer of the physical HCP system. For example, it allows the maintainer to shut down the system, see information about nodes, manage policies and services, and create HCP tenants. System administrators have a view of the system as a whole, including all HCP software and hardware that make up the system, and can perform all of the administration for actions that have system scope.
- Tenant-level administrative accounts are used for creating HCP namespaces and configuring individual tenants and namespaces. They can monitor namespace usage at the tenant and namespace level, manage tenant-level users, and control access to namespaces. The required functionality is provided by the tenant administrator user interface, the tenant management console. This interface is intended for use by the maintainer of the virtual HCP system (an individual tenant with the set of namespaces it owns). The tenant-level administration feature facilitates segregation of management, which is essential in cloud environments.

An HCP tenant can optionally grant system-level users administrative access to itself. In this case, system-level users with the monitor, administrator, security or compliance role can log into the tenant management console or use the HCP management API for that tenant. System-level users with the monitor or administrator role can also access the tenant management console directly from the system management console. This effectively enables a system administrator to function as a tenant administrator, as shown in Figure 8. System-level users can perform all the activities allowed by the tenant-level roles that correspond to their system-level roles. An AD user may belong to AD groups for which the corresponding HCP group accounts exist at both the system and tenant levels. Such a user has the roles associated with both the applicable system-level group accounts and the applicable tenant-level group accounts.
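These administrative tasks can also be scripted against the HCP management API. In the hedged sketch below, the port (9090), the /mapi paths and the payload field names are taken from commonly documented examples and should be checked against your HCP release; the domain names and token are hypothetical.

```python
import requests

headers = {"Authorization": "HCP <token>", "Accept": "application/json"}

# List the tenants defined on the system (system-level MAPI endpoint assumed).
r = requests.get("https://admin.hcp.example.com:9090/mapi/tenants",
                 headers=headers, verify=False)
print(r.json())

# Create a namespace under a tenant (tenant-level endpoint and field names assumed).
ns = {"name": "ns2", "hardQuota": "1 TB", "versioningSettings": {"enabled": True}}
r = requests.post("https://tenant1.hcp.example.com:9090/mapi/tenants/tenant1/namespaces",
                  json=ns, headers=headers, verify=False)
print(r.status_code)
```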
Object Policies

Objects in a namespace have a variety of properties, such as retention and index settings, which are defined for each object by the object's system metadata. Objects can also be affected by certain namespace properties, such as the default metadata settings inherited by new objects stored in the namespace, or the versioning setting. Both the namespace-level settings and the properties that are part of object metadata serve as parameters for the Hitachi Content Platform system's transactions and services, and they determine an object's behavior during its life cycle within the repository. These settings are called policies. An HCP policy is one or more settings that influence how transactions and internal processes (services) affect objects in a namespace. Policies ensure that objects behave in expected ways. The HCP policies are described in Table 1.
Table 1. HCP Policies

DPL
  Components: system DPL setting; namespace DPL setting.
  Influences: object creation; protection service.

Retention
  Components: default retention setting; object retention setting; hold setting; system and custom metadata options for objects under retention.
  Influences: object creation and deletion; system and custom metadata handling; disposition and garbage collection services.

Shredding
  Components: default shred setting; object shred setting.
  Influences: object deletion; shredding service.

Indexing
  Components: default index setting; object index setting.
  Influences: metadata query engine.

Versioning
  Components: versioning setting; pruning setting.
  Influences: object creation and deletion; garbage collection service.

Custom Metadata Validation
  Components: XML syntax validation.
  Influences: add/replace custom metadata operations.

Each policy may consist of one or more settings, which can differ in scope and in method of configuration. Policy settings are defined at the object and namespace levels. While all policies affect objects, only the object-level policy settings are included in an object's metadata and affect individual objects; the namespace-level settings affect all objects in the namespace and are part of the namespace configuration. Table 2 lists all policy settings sorted according to their scope and method of configuration.
Table 2. Hitachi Content Platform Policy Settings: Scope and Configuration

Data Protection Level
  System DPL (1-4): system scope; configured via the system UI.
  Namespace DPL (1-4 or dynamic): namespace scope; configured via the tenant UI or MAPI.

Retention
  Default retention setting (fixed date, offset, special value or retention class): namespace scope; configured via the tenant UI or MAPI.
  Retention setting (fixed date, offset, special value or retention class): object scope; configured via the REST API or retention.txt.
  Hold setting (true or false): object scope; configured via the REST API.
  Ownership and POSIX permission changes under retention (true or false): namespace scope; configured via the tenant UI or MAPI.
  Custom metadata operations allowed under retention: namespace scope; configured via the tenant UI or MAPI.

Indexing
  Index setting (true or false, 1/0): object scope; configured via the REST API or index.txt.
  Default index setting (true or false): namespace scope; configured via the tenant UI or MAPI.

Shredding
  Shred setting (true or false, 1/0): object scope; configured via the REST API or shred.txt.
  Default shred setting (true or false): namespace scope; configured via the tenant UI or MAPI.

Custom Metadata Validation
  XML validation (true or false): namespace scope; configured via the tenant UI or MAPI.

Versioning
  Versioning setting (true or false): namespace scope; configured via the tenant UI or MAPI.
  Pruning setting (true or false, with a number of days for the primary system or replica): namespace scope; configured via the tenant UI or MAPI.
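To make the object-level settings concrete, the sketch below stores an object with its shred and index settings supplied as query parameters, then applies a retention offset and a hold. The parameter names and the "A+5y" offset expression (5 years after the object is added) mirror HCP's REST conventions, but their exact spellings and accepted values are assumptions to confirm against the REST API reference for the release in use; the host, path and credentials are placeholders.

# Sketch: applying object-level policy settings via the namespace REST API.
# Parameter names and values are assumptions; verify against the HCP REST docs.
import base64
import hashlib
import requests

def hcp_token(user: str, password: str) -> str:
    return (base64.b64encode(user.encode()).decode()
            + ":" + hashlib.md5(password.encode()).hexdigest())

base = "https://ns1.tenant1.hcp.example.com/rest/contracts/msa.pdf"
auth = {"Authorization": "HCP " + hcp_token("alice", "secret")}

# Ingest the object with shredding enabled and search indexing disabled.
with open("msa.pdf", "rb") as f:
    requests.put(base, params={"shred": "true", "index": "false"},
                 data=f, headers=auth, verify=False)

# Tighten retention (keep for 5 years from ingest) and place a hold.
requests.post(base, params={"retention": "A+5y"}, headers=auth, verify=False)
requests.post(base, params={"hold": "true"}, headers=auth, verify=False)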
25 WHITE PAPER 25 runs independently of other services. Multiple services can be executing at the same time, although some services take precedence over others. Services work by detecting and repairing conditions that do not conform to their requirements, while iterating over objects in the background. They work on the repository as a whole, across all namespaces, with the exception of disposition and replication that can be enabled or disabled at the namespace level. HCP implements 12 services: protection, content verification, scavenging, garbage collection, duplicate elimination, shredding, disposition, compression, capacity balancing, storage tiering, migration and replication. The HCP services are briefly described in Table 3.
Table 3. Hitachi Content Platform Services

Protection: Enforces compliance with the DPL policy by ensuring that the proper number of copies of each object exists in the system and that damaged or lost objects can be recovered. Any policy violation invokes a repair process. Both scheduled and event driven: events trigger a full service run, even if the service is disabled, after a configurable amount of time (90 minutes after a node shutdown, 1 minute after a logical volume failure, 10 minutes after a node removal).

Content Verification: Guarantees the data integrity of repository objects by ensuring that the content of a file matches its digital signature, repairing the object if the hash does not match. Also detects and repairs discrepancies between primary and secondary metadata. The SHA-256 hash algorithm is used by default, and checksums are computed on both external and internal files. Computationally intensive and time-consuming; runs according to the active service schedule.

Scavenging: Ensures that all objects in the repository have valid metadata, and reconstructs metadata that is lost or corrupted while the data files still exist. The service verifies that the primary metadata for each data object and the copies of that metadata stored with the object data (secondary metadata) are complete, valid and in sync with each other. Computationally intensive and time-consuming; scheduled.

Garbage Collection: Reclaims storage space by purging hidden data and metadata for objects that are marked for deletion or left behind by incomplete transactions. It also deletes old versions of objects that are eligible for pruning. When applicable, the deletion triggers the shredding service. Scheduled, not event driven.

Duplicate Elimination: Identifies redundant objects in the repository and merges duplicate data to free space. The hash signatures of external file representations are used to select candidate objects, which are then compared byte for byte to ensure that the data contents are indeed identical. Scheduled.

Shredding: For security reasons, overwrites the storage locations where copies of a deleted object were stored so that none of its data or metadata can be reconstructed; also called secure deletion. The default HCP shredding algorithm uses 3 passes to overwrite an object and complies with the DoD 5220.22-M standard. The algorithm is selected at install time. Event driven only, not scheduled: it is triggered by the deletion of an object marked for shredding.

Disposition: Automatically cleans up expired objects. All HCP namespaces can be configured to delete objects automatically after their retention periods expire. Can be enabled or disabled at both the system and namespace levels; enabling disposition for a namespace has no effect if the service is disabled at the system level. Disposition deletes only the current versions of versioned objects. Scheduled.

Compression: Compresses object data to make more efficient use of system storage space; the space reclaimed by compression can be used for additional storage. A number of configurable parameters are provided via the system management console. Scheduled.

Capacity Balancing: Attempts to keep usable storage capacity roughly equivalent across all storage nodes in the system. If storage utilization differs by a wide margin across nodes, the service moves objects to bring the nodes closer to a balanced state. Runs only when started manually; additions and deletions of objects do not trigger it. Typically, an authorized HCP service provider starts this service after adding new storage nodes to the system. Separately from the service, during normal operation new objects tend to spread naturally among all nodes in fairly even proportion, owing to the storage manager selection algorithm and the resource monitoring of the administrative engine.

Storage Tiering: Determines which storage tiering strategy applies to an object, evaluates where the copies of the object should reside based on the rules in the applied service plan, and moves objects between running and spin-down storage as needed. Active only in spin-down-capable HCP SAIN systems. Scheduled.

Migration: Migrates data off selected nodes in an HCP RAIN system, or selected storage arrays in an HCP SAIN system, so that these devices can be retired. Can only be run manually.

Replication: Copies one or more tenants from one HCP system to another to ensure data availability and enable disaster recovery. An ongoing service: once set up, it runs continually in the background. Users can configure, monitor and control its activity. Replication is an optional feature.
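Several of these services act on per-object system metadata, such as the stored hash used by content verification and the retention setting honored by disposition. A client can read this metadata back with a HEAD request, as in the sketch below; the X-HCP-* header names shown are assumptions to check against the REST reference for the installed release, and the host, path and credentials are placeholders.

# Sketch: inspecting object system metadata with a HEAD request.
# The X-HCP-* header names are assumptions; confirm in the HCP REST docs.
import base64
import hashlib
import requests

def hcp_token(user: str, password: str) -> str:
    return (base64.b64encode(user.encode()).decode()
            + ":" + hashlib.md5(password.encode()).hexdigest())

url = "https://ns1.tenant1.hcp.example.com/rest/contracts/msa.pdf"
resp = requests.head(url, verify=False,
                     headers={"Authorization": "HCP " + hcp_token("alice", "secret")})
for name in ("X-HCP-Hash", "X-HCP-Retention", "X-HCP-IngestTime", "X-HCP-Size"):
    print(name, "=", resp.headers.get(name))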
Conclusion

Hitachi Data Systems object storage solutions avoid the limitations of traditional storage systems by intelligently storing content in far larger quantities and in a much more efficient manner. They meet the new demands imposed by the explosion of unstructured data and its growing importance to organizations, their partners, customers, governments and shareholders.

Hitachi Content Platform, the Hitachi Data Systems object storage platform, treats file data, file metadata and custom metadata as a single object that is tracked and stored across a variety of storage tiers. With secure multitenancy and configurable attributes for each logical partition, the HCP object repository can be divided into a number of smaller virtual object stores, each presenting configurable attributes to support different service levels. This allows the object store to support a wide range of workloads, such as content preservation, data protection, content distribution and even cloud, from a single physical infrastructure. HCP is also part of a larger portfolio of solutions that includes Hitachi Data Ingestor for elastic, backup-free file services and Hitachi Content Platform Anywhere for synchronization and sharing of files and folders across a wide range of user devices.

One infrastructure is far easier to manage than disparate silos of technology for each application or set of users. By integrating many key technologies in a single storage platform, Hitachi Data Systems object storage solutions provide a path to short-term return on investment and significant long-term efficiency improvements. They help IT evolve to meet new challenges, stay agile over the long term, and address future change and growth.