XOR MEDIA CLOUD AQUA

Big Data and Traditional Storage

The era of big data imposes new challenges on the storage technology industry. As companies accumulate massive amounts of data from video, sound, database, and text files, the demand to manage, transfer, share, and utilize that data intelligently grows accordingly. Traditional storage simply falls short of this requirement. Current storage infrastructures are typically designed for local deployment, making it difficult to share content across geographically dispersed locations. Where a multi-site environment is present, storage is often used inefficiently: multiple manual file transfers are employed, resulting in islands of storage that inhibit productivity.

Diagram 1: Islands of storage across a digital broadcast workflow

Multi-user, multi-system, and multi-site access presents multiple complexities in managing data using on-hand database management tools. The challenges of managing big data, whose size ranges from terabytes to exabytes, include data capture, curation, storage, search, sharing, analysis, and visualization throughout its life cycle. This is true for IT infrastructures in fields like research, medicine, the military, and media, where data sets are too large and complex for traditional storage hardware and software to handle. Merely to keep up with these challenges, companies are forced to spend more on managing the overall infrastructure, from the outset and throughout the life of the company itself. This includes the need for a separate technology or strategy specifically for redundancy and disaster recovery.

Big Data and the Cloud

In recent years, developments in cloud computing and cloud storage have provided solutions and benefits for the management and utilization of big data.

XOR Media on Big Data and the Cloud

XOR Media has decades of experience in big data storage and a strong background in enterprise software involving large object counts, high transaction volumes, complex workflows, and mission-critical availability. Formerly SeaChange Broadcast, the group specializes in high-performance, open IT storage for big data such as media applications running in large media enterprises and private cloud data centers. XOR offers cloud-capable, media-optimized technologies used by broadcasters and content providers that together broadcast 15,000 channels. XOR Media's expertise and experience put it in a good position to develop a cloud solution that applies to various vertical markets, including digital media, digital libraries, healthcare, education, and enterprise data centers.

What is Cloud Aqua?

Cloud Aqua is a private cloud, object-based storage system optimized for big data. It provides real-time sharing and intelligent management of big data files across networks or geographically dispersed locations.
Cloud Aqua features include:

- Virtualized storage resources across networks or geographically dispersed locations, providing a massively scalable global storage pool
- On-demand allocation of storage and network resources, improving storage utilization and efficiency
- An object-based architecture providing intelligence at the storage level
- Automated processing through intelligent, policy-based content and metadata processing
- Built-in availability and data protection
- Cross-platform data sharing and application transparency through open standard interfaces
- WAN-optimized data transfers and content distribution
- Flexible resource allocation and cloud services through multi-tenancy

Let us explore these features in more detail.

Scalable Global Storage Pool

Cloud Aqua consolidates storage by providing a virtual file system overlay, pooling available storage resources into a scalable global storage pool. Cloud Aqua provides a global namespace through which applications access content transparently, regardless of where the content is physically located. Cloud Aqua employs a federated architecture in which the underlying storage systems or hardware can sit on different networks or in geographically dispersed locations. This also makes Cloud Aqua globally scalable to petabyte or exabyte levels without disrupting existing connections or access: adding capacity to an existing storage node, or adding a new storage node, scales the entire cloud object store.

Diagram: Cloud Aqua distributed infrastructure, spanning sites (Sites 1-4) across networks (Networks 1-3) and locations

Through virtualization, Cloud Aqua facilitates comprehensive management of storage resources (from multi-user management to access control and quota settings, to authentication via integration with LDAP or Active Directory), offering strong performance, scalability, manageability, and concurrent high-speed access to heterogeneous clients and applications, all across a single global namespace.

Multi-tenancy Infrastructure

Cloud Aqua allows flexible resource allocation and cloud services through its tenant-based infrastructure. A global storage pool can be provisioned to multiple groups of users (departments within a company, sub-organizations within an umbrella organization, individual companies within a conglomerate, corporations subscribed to a cloud storage service, etc.), providing not only flexibility in resource allocation, but also potential cost savings from consolidating resources and streamlined management of resource provisioning.

Cloud Aqua provisions storage (from its entire global storage pool) in a leasing model to multiple tenants that manage and share data files within their own domains. A tenant specifies a primary region in its domain, and may set up one or more secondary regions where replicas of data files are kept.
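As a minimal sketch of the leasing model described above, the following Python fragment models tenants, domains, and primary/secondary regions. The class and field names (and the quota field) are illustrative assumptions, not Cloud Aqua's actual data model:

```python
from dataclasses import dataclass, field

@dataclass
class Region:
    """A named region inside a tenant's domain; the primary holds the
    authoritative copy, secondary regions hold replicas."""
    site: str
    role: str  # "primary" or "secondary"

@dataclass
class Domain:
    name: str
    regions: list = field(default_factory=list)

    def primary(self):
        return next(r for r in self.regions if r.role == "primary")

    def secondaries(self):
        return [r for r in self.regions if r.role == "secondary"]

@dataclass
class Tenant:
    name: str
    quota_tb: int  # capacity leased from the global pool (hypothetical)
    domains: list = field(default_factory=list)

# A tenant leases capacity and lays out one primary and two secondary regions.
tenant = Tenant("BroadcasterA", quota_tb=500, domains=[
    Domain("news", regions=[
        Region("Site 1", "primary"),
        Region("Site 2", "secondary"),
        Region("Site 5", "secondary"),
    ]),
])

domain = tenant.domains[0]
print(domain.primary().site)                   # Site 1
print([r.site for r in domain.secondaries()])  # ['Site 2', 'Site 5']
```

Because regions are plain entries in the domain, adding, removing, or re-roling a region is a metadata change rather than a data move, which matches the "removed or changed at will" behavior described next.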

A tenant determines where to set up its primary region and secondary region(s) in order to facilitate data access and transfers; regions can be removed or changed at will. All storage and network resources are allocated on a per-tenant basis, and different domains under the same tenant share the same resources.

Diagram: Two tenants (Tenant 1: Domain A, Tenant 2: Domain B), each with a primary region and secondary regions distributed across Sites 1-5

On-demand Storage and Network Resource Allocation

Cloud Aqua allocates storage space and network resources on demand, improving storage utilization and efficiency. For example, a quota can be allocated to a user or application and adjusted at any time. Cloud Aqua also supports multi-user and multi-region administration and resource provisioning, allowing flexibility in how storage and network resources are managed within a multi-user, multi-tenant environment.

Policy-based Storage Selection for Object Creation

Storage clusters are defined in Cloud Aqua, and flexible policies place files automatically in specific clusters or cluster groups according to the user's account domain, storage geography and physical racks, load-balancing strategies, storage tier definitions, and various object metadata collections. This allows administrators to control where files are placed depending on specific requirements. Big data files, for example, are placed in storage clusters with good read/write performance, while small files are placed in storage clusters with higher storage capacity. Media files for play-to-air go to higher-stability storage clusters, while media files for VOD go to clusters with high network throughput. Real-time media files are stored in cluster groups with higher read/write performance, while archive files are stored in cluster groups with higher density.
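A policy-based placement step of this kind can be sketched as a first-match rule table over object metadata. The cluster names, metadata fields, and size threshold below are purely illustrative assumptions, not Cloud Aqua's actual policy schema:

```python
# Hypothetical sketch of policy-based cluster selection at object creation.
def select_cluster(obj_meta: dict) -> str:
    """Pick a storage cluster from object metadata; first matching rule wins."""
    policies = [
        # (predicate over object metadata, target cluster)
        (lambda m: m.get("workflow") == "play-to-air", "high-stability"),
        (lambda m: m.get("workflow") == "vod",         "high-throughput"),
        (lambda m: m.get("tier") == "archive",         "high-density"),
        (lambda m: m.get("size_bytes", 0) >= 1 << 30,  "high-performance"),
    ]
    for predicate, cluster in policies:
        if predicate(obj_meta):
            return cluster
    return "high-capacity"  # default: small files go to capacity-oriented clusters

print(select_cluster({"workflow": "play-to-air"}))    # high-stability
print(select_cluster({"size_bytes": 4 * (1 << 30)}))  # high-performance
print(select_cluster({"size_bytes": 2048}))           # high-capacity
```

Ordering the rules matters: the workflow rules are checked before the generic size rule, so a play-to-air file lands on the high-stability cluster regardless of its size.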

Load-balanced Storage Selection for Object Retrieval

Objects are read from storage clusters according to real-time status, including the state of object replicas, versions, and frontend and storage workload. This ensures retrieval of the latest, correct object data, and automatic I/O load balancing across multiple frontends and storage systems.

Thin Provisioning

Cloud Aqua uses a thin provisioning mechanism that allows space to be easily allocated for big objects (e.g. virtualized server images) on a just-enough, just-in-time basis, giving the appearance of more storage space than is actually allocated. Thin provisioning can defer storage capacity upgrades in line with actual business usage and save the operating costs associated with keeping unused disk capacity spinning.

Object-based Architecture

Cloud Aqua employs an object-based architecture: data or content is encapsulated together with metadata (attributes that define or describe the associated content) into a single object. Unlike files, which have only standard attributes (creation date and time, file type, etc.), objects are self-describing: they carry user-defined metadata attributes that allow applications to further describe or define the data in an object. This is what brings intelligence to the storage level. Unlike typical file systems, Cloud Aqua stores objects in a flat structure throughout the entire cloud, with each object having a unique Object ID across the entire storage pool. This enables decentralized indexing of objects for better scale-out ability, faster searches through metadata, and easier management.

Diagram: An object encapsulates data, file system metadata, and user-defined metadata

Objects are also self-contained. Unlike traditional storage, where a file is stored separately from its metadata or descriptive attributes, metadata is essentially part of the data, not a separate implementation.
Management and storage of data are therefore all based on managing an object and its metadata. Cloud Aqua manages objects across all sites and regions, and external applications access objects in the cloud object store transparently.
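The object model just described can be sketched in a few lines: data plus system and user-defined metadata travel as one self-describing unit, stored in a flat namespace keyed by a unique Object ID, with any "folder" living purely in metadata. The class names and fields are illustrative, not Cloud Aqua's actual schema:

```python
import uuid
from dataclasses import dataclass, field

@dataclass
class StorageObject:
    data: bytes
    system_meta: dict = field(default_factory=dict)  # size, timestamps, ...
    user_meta: dict = field(default_factory=dict)    # application-defined
    object_id: str = field(default_factory=lambda: uuid.uuid4().hex)

class FlatObjectStore:
    """Flat (non-hierarchical) store; hierarchy is metadata, not structure."""
    def __init__(self):
        self._objects = {}

    def put(self, data: bytes, **user_meta) -> str:
        obj = StorageObject(data, {"size": len(data)}, user_meta)
        self._objects[obj.object_id] = obj
        return obj.object_id

    def get(self, object_id: str) -> StorageObject:
        return self._objects[object_id]

    def query(self, **meta) -> list:
        """Find objects whose user metadata matches all given pairs."""
        return [o for o in self._objects.values()
                if all(o.user_meta.get(k) == v for k, v in meta.items())]

store = FlatObjectStore()
oid = store.put(b"...mpeg payload...", folder="/news/2014", codec="h264")
store.put(b"...doc payload...", folder="/docs", codec=None)

print(store.get(oid).user_meta["folder"])  # /news/2014
print(len(store.query(codec="h264")))      # 1
```

Because `query` matches on metadata rather than walking a directory tree, searches like "all h264 objects" need no central index, which is the scale-out property the text attributes to the flat Object ID namespace.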

Management operations on an object include reading, writing, and updating data files, while management operations on an object's metadata include maintaining, searching, updating, and customizing metadata.

Shifting to a purely object-based architecture, however, would normally mean using special interfaces to access the object store, and may even mean giving up the hierarchical structure of folders and sub-folders common to most file systems. This is not the case with Cloud Aqua, which preserves a folder-based hierarchical structure by implementing the folder structure as part of an object's metadata. Applications and users can thus virtually see objects in a typical hierarchical structure and handle them as they normally would. This allows adoption of Cloud Aqua with very little (if any) change to the way users or applications handle data.

Intelligent and Automated Data Processing

In Cloud Aqua's object-based architecture, intelligence is provided at the storage level. The storage understands what is being stored: what the content is and what the metadata is. Cloud Aqua is aware of both content and its attributes, opening many possibilities for optimization, searching, and automated actions at the storage level. Cloud Aqua provides a way for applications to communicate more efficiently with the storage systems. To illustrate, applications can query Cloud Aqua for a list of objects with specific metadata, or even combinations of metadata, and storage nodes can quickly return results because they understand the objects.

Automated processing is another feature of Cloud Aqua enabled through an object's metadata. Based on user-predefined policies, Cloud Aqua can automatically take actions on an object depending on the object's metadata (e.g. folder, tag, creation date/time) or on an event (such as object creation or update).

Examples of actions that can be automatically triggered by predefined policies include:

- Creating data file replica(s) in different regions or storage clusters
- Transcoding a data file into different formats
- Compressing or deduplicating a data file
- Other processes such as sniffing, checksums, code validation, etc.

Processed objects are saved as different versions, and Cloud Aqua maintains their metadata to track the relationship between versions. Such policy-driven automated actions can be especially useful in content distribution, media file processing, backup creation, versioning, storage optimization, content caching (on-demand local copy creation), on-demand automated content placement in Hierarchical Storage Management (HSM), data migration, and many other uses that enable higher levels of efficiency.
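Event-driven policy dispatch of this kind can be sketched as a small rule engine: when an object is created or updated, every policy whose condition matches the object's metadata fires its action. The policy names, conditions, and actions below are illustrative assumptions:

```python
from typing import Callable

class PolicyEngine:
    def __init__(self):
        self._policies = []  # list of (event, condition, action)

    def on(self, event: str, condition: Callable, action: Callable):
        self._policies.append((event, condition, action))

    def fire(self, event: str, obj_meta: dict) -> list:
        """Run all matching actions; return the names of actions taken."""
        taken = []
        for ev, cond, action in self._policies:
            if ev == event and cond(obj_meta):
                action(obj_meta)
                taken.append(action.__name__)
        return taken

def replicate_to_secondary(meta):
    meta.setdefault("replicas", []).append("Site 2")

def transcode_proxy(meta):
    meta["proxy"] = "low-bitrate"

engine = PolicyEngine()
engine.on("create", lambda m: m.get("folder", "").startswith("/news"),
          replicate_to_secondary)
engine.on("create", lambda m: m.get("type") == "video", transcode_proxy)

meta = {"folder": "/news/2014", "type": "video"}
print(engine.fire("create", meta))
# ['replicate_to_secondary', 'transcode_proxy']
print(meta["replicas"], meta["proxy"])  # ['Site 2'] low-bitrate
```

The key point the sketch captures is that the triggers key off metadata the storage already understands, so no application code has to orchestrate the replication or transcoding steps.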

Cloud Aqua also has a built-in media workflow engine that builds on these automated processing abilities and allows one to set up workflows incorporating a set of Cloud Aqua processing functions specifically for media files. These actions include transcoding, encryption, low-bitrate proxy generation, and content replication/distribution, and they may be triggered when a media file is created, updated, or transferred (metadata updated). Based on predefined policies, these actions can improve efficiency in different media workflows. Given these capabilities, Cloud Aqua integrates with various video/media systems outside Cloud Aqua: play-to-air systems, VOD, CDN, digital media archiving, production, and more. The intelligent data processing capabilities of Cloud Aqua help put together efficient workflows: applications can offload some of their indexing, searching, and even processing tasks, because these are now implemented at the storage level in Cloud Aqua.

Built-in Availability and Data Protection

Cloud Aqua provides higher levels of resiliency through built-in availability and data protection features, such as replication, snapshots, remote streaming, and failover/failback. These provide an inherent ability to address availability and disaster recovery within Cloud Aqua, removing the need to design and implement a disaster recovery strategy separate from the storage infrastructure itself.

Replication

Replication is a basic protection and availability feature in Cloud Aqua that creates replicas of objects in predefined locations using predefined replication methods. Object replicas are copies of objects (there may be multiple replicas) that provide redundancy protection, transparent to users and applications, by locating a replica in a different site or domain. Object replicas also improve access to an object located on another network or site by creating local replicas.

There are three replication methods in Cloud Aqua:

- Asynchronous Replication creates, updates, and deletes replicas asynchronously. This provides high write speeds and short access latency to the replica(s) in other region(s).
- Synchronous Replication creates, updates, and deletes replicas synchronously. This keeps replication sites/domains current in real time, but does not provide the same write speeds as asynchronous replication.
- Cached Replication creates a local replica of an object when there is a large volume of read/write requests from a region without a local replica, improving local read/write speed to the object.

Diagram: Asynchronous, synchronous, and cached replication of a domain's regions (A, B, C) across Sites 1-5

To ensure the integrity of replication, Cloud Aqua incorporates a version control mechanism that guarantees users always access the latest version of an object or replica, and implements checksum functions to ensure data integrity during transfers.

Quick Snapshots

Cloud Aqua supports multiple layers of object snapshots, allowing users or applications to create quick, point-in-time copies of an object. Snapshots in Cloud Aqua employ a copy-on-write strategy: actual physical copies are created only when stored data changes, i.e. whenever new data is entered or existing data is updated. Cloud Aqua also makes use of available replica(s) when creating snapshots. Copy-on-write and the use of replicas optimize storage utilization by reducing the physical storage required, while at the same time allowing rapid creation of and access to snapshots. Snapshots can also be initiated while files are still being written. This is especially useful in minimizing wait times, for example while media files are still being ingested into the system, further improving productivity. Cloud Aqua also gives users the flexibility to manage snapshots, including creating, viewing, rolling back to, and deleting snapshots.

Stream-through

Stream-through enables a user to read a file at high speed from a region without a local object replica. This enables a media broadcasting or video-on-demand system integrated with Cloud Aqua to provide continuous, uninterrupted service: when the integrated broadcasting or VOD system is unable to read files from a region due to a regional crash or other failure, it can read files via stream-through from another region.
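The copy-on-write strategy can be sketched over a simple block map: taking a snapshot copies only the map, and a physical block is duplicated the first time it is overwritten after the snapshot. This is a generic copy-on-write illustration, not Cloud Aqua's internal layout:

```python
class CowObject:
    def __init__(self, blocks):
        self.store = dict(enumerate(blocks))       # physical block id -> bytes
        self.block_map = list(range(len(blocks)))  # logical -> physical
        self.snapshots = {}
        self._next_id = len(blocks)

    def snapshot(self, name):
        # Copies only the block map; no data is duplicated yet.
        self.snapshots[name] = list(self.block_map)

    def write(self, logical_idx, data):
        phys = self.block_map[logical_idx]
        if any(phys in snap for snap in self.snapshots.values()):
            # Block is pinned by a snapshot: allocate a fresh physical block.
            phys = self._next_id
            self._next_id += 1
            self.block_map[logical_idx] = phys
        self.store[phys] = data

    def read(self, logical_idx, snapshot=None):
        mapping = self.snapshots[snapshot] if snapshot else self.block_map
        return self.store[mapping[logical_idx]]

obj = CowObject([b"A", b"B", b"C"])
obj.snapshot("s1")
obj.write(1, b"B2")                 # first write after the snapshot copies block 1
print(obj.read(1))                  # b'B2'  (live view)
print(obj.read(1, snapshot="s1"))   # b'B'   (snapshot still sees old data)
print(len(obj.store))               # 4: only one extra physical block exists
```

Because `snapshot` is just a map copy, it is near-instant regardless of object size, which is why snapshots can be taken even while a file is still being ingested.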

Fault Detection, Rebuild, and Rebalance

Cloud Aqua has embedded fault detection that automatically detects faults in storage units, storage nodes, frontend nodes, metadata servers, object replicas, metadata replicas, and so on. Thanks to federated data and metadata replicas, Cloud Aqua can still provide object access and correct data even with a partial failure. Adding or removing a storage node (permanently), or enabling or disabling one (temporarily), triggers data rebuilding among the nodes in Cloud Aqua. The rebuild procedure includes object replica validation, creation of missing replicas, and deletion of extra replicas. The rebalance feature redistributes data when storage usage or object access rates become uneven, to achieve better load balancing.

Cross-platform Data Sharing and Universal Access

Cloud Aqua provides several open standard interfaces for applications accessing the global object store, including a RESTful API, Windows and Linux plug-ins, and a standard NAS interface. Cloud Aqua conforms to the Cloud Data Management Interface (CDMI) standard (ISO/IEC 17826:2012), a standard method for accessing and managing cloud data via RESTful access, and RESTful APIs are provided for applications to access and manage data in Cloud Aqua. Cloud Aqua provides client access SDKs in several languages (Java, Python, and C/C++), facilitating integration with applications and service providers' operations. To accommodate traditional file systems, as well as many existing applications and services, Cloud Aqua also provides Windows and Linux OS plug-ins that support standard file system I/O and standard NAS access to the object store. Applications can directly access content through standard NFS, SAMBA, or FTP interfaces, since virtual folder structures are implemented in Cloud Aqua through an object's metadata.
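As a sketch of what CDMI-style RESTful access looks like, the fragment below builds (but does not send) a data-object creation request. The host name and container path are placeholders, and while the header and media-type values follow the CDMI convention, the actual Cloud Aqua endpoints should be taken from its API documentation:

```python
import json
import urllib.request

CDMI_VERSION = "1.0.2"

def build_create_request(base_url, container, name, value, metadata):
    """Build (but do not send) a CDMI-style data-object creation request."""
    body = json.dumps({
        "mimetype": "application/octet-stream",
        "metadata": metadata,  # user-defined metadata rides along with the data
        "value": value,
    }).encode()
    return urllib.request.Request(
        f"{base_url}/{container}/{name}",
        data=body,
        method="PUT",
        headers={
            "Content-Type": "application/cdmi-object",
            "Accept": "application/cdmi-object",
            "X-cdmi-specification-version": CDMI_VERSION,
        },
    )

req = build_create_request(
    "http://cloud-aqua.example.com/cdmi",  # placeholder endpoint
    "media/news", "clip0042.mxf", "aGVsbG8=",
    {"workflow": "play-to-air", "codec": "h264"},
)
print(req.get_method())  # PUT
```

Note that the object's user-defined metadata travels in the same request body as the data itself, matching the self-describing object model described earlier.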
These various ways to access the object store allow customers to choose or combine access methods according to application requirements and existing network infrastructures. This provides application transparency and universal access across different OS platforms and access methods (whether via low-cost NAS or high-performance SAN), without requiring significant changes to current infrastructures or access methods when Cloud Aqua is introduced to replace isolated local storage islands.

WAN-optimized Transfers and Content Distribution

The features and capabilities discussed earlier (such as replication and content distribution) suggest the need for fast transfer speeds in order to be used effectively and efficiently. Cloud Aqua is an optimized infrastructure even when individual storage systems and devices sit on different networks and/or in different physical locations connected over wide area networks. It implements several methods to optimize transfers and streaming over long distances:

1) User Datagram Protocol (UDP). UDP-based data transfer is a high-performance protocol designed for moving large volumes of data over WAN, and it can outperform standard TCP by a wide margin on high-latency links. This is the same class of transfer technology used by file transport solutions such as Signiant and Aspera, here natively implemented at the storage level in Cloud Aqua.

2) P2P (Peer-to-Peer). Access to content or objects in Cloud Aqua allows simultaneous transfers/streams from an object and its replicas whenever possible. This increased parallelism improves access latency and the speed at which content can be delivered or streamed to one location.

3) Optimized Sequential I/O. Cloud Aqua is optimized for big data and media files. Caching on the storage server heads of storage nodes accelerates write operations and read operations (the latter through automatic pre-fetching when large sequential reads are detected). A standard 512 KB stripe size accelerates write operations, while support for big chunk sizes improves big data read and write performance.
An adaptive file layout also provides both space efficiency and performance for large files: a file is first written with a space-efficient layout, and the system switches to a performance-optimized layout when a large file is detected.

4) QoS Bandwidth Reservation. Cloud Aqua allows administrators to reserve network bandwidth based on sessions, users, class of service, etc.

Graphic and Command-line Monitoring and Statistics

The Monitor and Configuration Center (MCC) provides a graphical user interface for monitoring Cloud Aqua storage status, usage, frontend status, total active throughput, total active sessions, CRDU statistics, metadata status, and so on. It allows administrators to monitor and configure Cloud Aqua through easy-to-understand charts, diagrams, and graphics.

Details such as historical throughput and request counts are presented clearly, so that administrators can easily grasp what is happening in the system and spot problem areas. A command-line tool, AquaStat, is also provided to gather Cloud Aqua DynamicMBean attributes at a specified interval.

Cloud Aqua Benefits and Conclusion

Cloud Aqua provides features that open up many possibilities in managing, handling, and transferring big data in a storage cloud. It provides the following benefits:

- A scalable global storage pool through virtualized storage resources
- Improved storage utilization and efficiency through on-demand allocation of storage and network resources
- Flexible resource allocation and cloud services through a tenant-based infrastructure
- Cross-platform data sharing and application transparency through open standard interfaces
- Intelligence at the storage platform/device level through an object-based infrastructure
- Automated processing through intelligent, policy-based content and metadata processing
- Higher levels of resiliency through built-in availability and data protection
- Optimized big data performance through WAN optimization and QoS
- Monitoring and configuration through an easy-to-understand graphical user interface

With these features and benefits, Cloud Aqua facilitates multi-user/multi-tenant resource allocation, application storage tiering, creation and operation of centralized content libraries, content transformation and distribution, content and metadata processing workflows, and more. These features make Cloud Aqua ideal for many applications, from digital media libraries to video surveillance, network-based education, cloud services, and others.

Updated on 10.20.2014