Evaluating Apache Cassandra as a Cloud Database WHITE PAPER
By DataStax Corporation
November 2011
Contents

Introduction
Why Move to a Cloud Database?
The Cloud Promises Transparent Elasticity
The Cloud Promises Transparent Scalability
The Cloud Promises High Availability
The Cloud Promises Easy Data Distribution
The Cloud Promises Redundancy
The Cloud Promises Support for All Data Types
The Cloud Promises Easier Manageability
The Cloud Promises Lower Cost
Evaluating Apache Cassandra as a Cloud Database
What Is Apache Cassandra?
Why Cassandra?
Netflix: An Example of Succeeding in the Cloud with Cassandra
DataStax Enterprise: Real-Time Datastore and Analytics in the Cloud
Solving the Cloud Mixed-Workload Problem
Visual Database Management
Enterprise Production Support and Services
Conclusion
About DataStax
Introduction

The amount of information that currently resides only in the cloud is small, but that's about to change. A recent study by IT industry analyst group IDC estimates that cloud computing accounts for less than 2 percent of IT spending today, but that by 2015 nearly 20 percent of all information will be touched (stored or processed) in a cloud. [1] Moreover, IDC predicts that by that same year, as much as 10 percent of all data will be maintained in a cloud. [2]

[1] Extracting Value from Chaos, by John Gantz and David Reinsel, IDC, June 2011.
[2] Ibid.

Despite this growing movement toward cloud computing, some IT professionals remain standoffish toward the idea of porting a company's data onto a public cloud computing platform such as Amazon, Rackspace, and others. This position is understandable, given the current confusion over whether running a database in a cloud environment actually delivers tangible benefits, technical and otherwise, over keeping that same data on-premise.

Whether deciding to move a small or significant amount of data to a cloud database, today's IT decision-makers need to understand whether the solution they're considering is designed and/or implemented in a way that utilizes all the benefits and promises of cloud computing. This paper examines those key characteristics and discusses how Apache Cassandra stacks up from an evaluation perspective.

Why Move to a Cloud Database?

First, it should be understood that a cloud database is more than simply taking traditional relational database management system (RDBMS) software and running an instance of it on a cloud platform such as Amazon. Such a deployment in no way maximizes the capabilities of a cloud computing environment.

But what constitutes a cloud-ready database? What features and functionalities must the database have to deliver on the potential that cloud computing offers? What follows are some of the key promises of the cloud and what features a database should have to supply real benefits in a cloud environment.
The Cloud Promises Transparent Elasticity

The first promise of cloud computing worth noting is transparent elasticity. This equates to being able to add and subtract nodes (defined as actual physical machines or virtual machines) when the underlying application and business demand it. This first attribute can be somewhat difficult for legacy RDBMSs, as well as some NoSQL solutions, as almost all are built on some master-slave architecture, which makes elastic expansion and contraction tricky and complex to manage. Further, this capability should allow for online operations so that no downtime is experienced during expansion and contraction. In addition, a clustered database configuration should allow for some sort of easy load balancing so that an even distribution of the total workload is experienced.

The Cloud Promises Transparent Scalability

The primary reason for wanting elasticity in a cloud database is to scale properly to meet what businesses hope will be increasing customer traffic and accompanying workload. When talking about scale, there are normally two goals. First, scaling out in the cloud carries with it an expectation that adding nodes will increase the performance of the database. The hope is that linear scalability will be experienced; for example, if two nodes are able to handle a throughput of 200,000 transactions, then four nodes should be able to manage 400,000.
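To make the linear-scaling expectation concrete, here is a small worked sketch of the same arithmetic in Python. The throughput figures and node counts are illustrative assumptions carried over from the example above, not measurements.

```python
import math

def nodes_needed(target_tps: float, measured_tps: float, measured_nodes: int) -> int:
    """Estimate cluster size under the linear-scaling assumption that
    throughput grows in proportion to node count."""
    per_node_tps = measured_tps / measured_nodes  # e.g., 200,000 tps / 2 nodes = 100,000 tps per node
    return math.ceil(target_tps / per_node_tps)

# If 2 nodes handle 200,000 transactions per second, linear scaling implies:
print(nodes_needed(400_000, 200_000, 2))    # 4 nodes for 400,000 tps
print(nodes_needed(1_000_000, 200_000, 2))  # 10 nodes for 1,000,000 tps
```

Real clusters rarely scale perfectly linearly, so an estimate like this is a starting point for capacity planning rather than a guarantee.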
Scale and speed are of the utmost concern to modern businesses, and for good reason. A brief look at the following facts [3] from some of today's premier software and services providers demonstrates how performance impacts their business:

- Amazon says every 100 millisecond (ms) delay has the potential to cost the company 1 percent of sales (in 2009, that translated to US$245 million)
- Microsoft discovered that an additional 500ms of delay on its web page loads resulted in the loss of about 1.2 percent in revenue per user
- Mozilla shaved 2.2 seconds of load time off its landing pages and correspondingly increased download conversions by 15.4 percent, which translated into an additional 60 million downloads each year
- Shopzilla reduced its page load time by five seconds and saw an increase of 25 percent in page views and a 7 to 12 percent increase in revenue
- Experts estimate that just a 10ms latency could result in a 10 percent reduction in revenue for U.S. brokerage houses

[3] For individual references for each statistic, go to .

Second, scaling out also equates to being able to tackle big data [4] while maintaining response time service level agreements (SLAs). This means that no matter the data volume size, end user requests should be serviced every bit as fast as if the data volume size were much less.

[4] For more information on big data management, see Big Data: Beyond the Hype, Why Big Data Matters to You, DataStax, October 2011 (BigData.pdf).

The Cloud Promises High Availability

Another key promise of cloud computing is increased uptime. Like performance and scalability, availability is critical to successful businesses. The cost of database downtime can vary widely depending on the industry; however, average downtime costs can range anywhere from about US$90,000 per hour in the media industry to about US$6.48 million per hour for large online brokerages. [5]

A database implemented in the cloud should have features that piggyback off of a cloud provider's infrastructure where high availability is concerned. Additionally, that database's degree of availability often is affected by the next two cloud computing promises: easy data distribution and redundancy.

[5] How Much Does Downtime Really Cost?, by Henry Martinez, InfoManagement Direct, August 6, 2009.

The Cloud Promises Easy Data Distribution

Cloud providers typically promise the ability to distribute compute resources and data across different geographies or zones that the cloud provider makes available. Where the underlying database of a cloud application is concerned, this usually equates to a couple of desirable features. First is the ability to read and write from any node that makes up the cloud database. Again, master-slave architectures usually have a difficult time meeting this requirement, especially where writes are concerned. Reads may be served up across various slave servers, but writes are a different matter.

Multi-geographic data distribution also supplies the benefits of high availability if a particular region goes down, and simplified disaster recovery so the data is protected if a physical disaster befalls one of the cloud provider's data centers. Of course, this assumes the database actually contains redundant copies of data spread across different data centers of the cloud provider, which brings us to the next point.
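As a concrete illustration of how a database can express this kind of geographic distribution, the sketch below creates a keyspace that keeps replicas in two data centers using Cassandra's NetworkTopologyStrategy. It is a minimal example only: the data center names, replica counts, and contact points are assumptions, the CQL shown uses the modern syntax rather than the dialect available when this paper was written, and the DataStax Python driver used to run it post-dates the paper.

```python
# Minimal sketch: a keyspace replicated across two data centers.
# Assumes the (later) DataStax Python driver: pip install cassandra-driver
from cassandra.cluster import Cluster

# Contact points are illustrative; any node in the cluster can accept the request.
cluster = Cluster(["10.0.1.10", "10.0.2.10"])
session = cluster.connect()

# NetworkTopologyStrategy keeps the requested number of replicas in each named
# data center. The names here are assumptions and must match the data center
# names reported by the cluster's snitch.
session.execute("""
    CREATE KEYSPACE IF NOT EXISTS product_catalog
    WITH replication = {
        'class': 'NetworkTopologyStrategy',
        'us_east': 3,
        'eu_west': 3
    }
""")
```

Because every node can accept reads and writes, an application in either region can work against its local replicas while Cassandra propagates changes to the remote data center in the background.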
The Cloud Promises Redundancy

One of the main draws of the cloud is redundancy for both compute resources and data stored in the cloud. Where the database is concerned, redundancy means keeping more than one copy of the underlying data, so if the primary copy is destroyed, another copy is available for use.

How such redundancy is managed can be very important. As stated in the prior section, multi-data center support in different geographies is important if a large-scale disaster occurs at a particular site. However, machine outages within a single data center can also pose high availability issues for a database, whether a complete outage or response-time problems. Good redundancy support can mitigate such problems through data center rack awareness capabilities. This equates to the database being smart enough to store multiple copies of data on different physical racks, so if one rack experiences a machine failure, the other rack can continue to serve data. Of course, the database should be able to both support multi-rack awareness within a single data center and store redundant copies of data at other, remote data centers.

The Cloud Promises Support for All Data Types

Cloud computing, in general, attempts to welcome all data types and formats: structured, semi-structured, and unstructured. However, doing so is more easily said than done. The database being implemented in the cloud is really the key determiner of whether the cloud provider can easily keep this promise. This is where today's NoSQL databases shine over legacy RDBMSs, because they typically offer a flexible and dynamic schema that accepts all key data formats more straightforwardly.

The Cloud Promises Easier Manageability

All cloud providers tout their ease of use, but the reality is that managing a distributed and geographically dispersed database is normally anything but easy. Web-based interfaces supplied by the cloud provider often can help with management duties. But for specific monitoring and management of the cloud database's particular feature set, the database vendor should provide a tool or set of tools that helps carry out routine administrative operations.
The Cloud Promises Lower Cost

Many IT professionals who begin looking at the cloud as an alternative to typical on-premise architectures assume they will experience cheaper software costs in a cloud implementation. However, they often have a rude awakening when they fail to experience cost benefits from managing data in the cloud. The cold reality is that some traditional RDBMS providers are every bit as expensive in a cloud implementation as they are in a standard on-premise implementation.

When looking to implement a database in the cloud, IT professionals should seek a cost structure that is friendly to scaling out horizontally, regardless of machine size or the data volume being managed. Otherwise, there is a risk of unpleasant cost increases when the underlying business becomes very successful and more nodes are needed to manage ballooning data volumes and increased concurrent users.

Evaluating Apache Cassandra as a Cloud Database

So what is Apache Cassandra, and how does it stack up against the criteria for cloud databases previously discussed? Following is an overview of Cassandra, including a description of the key technology differentiators that make it a stand-out cloud database. Also discussed is how DataStax Enterprise, a smart data platform powered by Cassandra, provides the best possible cloud database option for those needing to manage both real-time and analytic data in a cloud environment.

What Is Apache Cassandra?

Apache Cassandra is a highly scalable and high-performance distributed database management system that excels at being a real-time datastore (i.e., the "system of record") for online/transactional applications that need extremely fast read and write operations. Cassandra can manage the distribution of data across multiple data centers and offers incremental scalability with no single points of failure.

Cassandra was originally incubated at Facebook and is based upon Google's BigTable and Amazon's Dynamo software. The end result is an extremely scalable and fault-tolerant data infrastructure that solves both small and big data problems, handles write-intensive user traffic, delivers sub-millisecond caching-layer reads, and supports demanding workloads involving petabytes of data.
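To give a feel for what using Cassandra as a real-time datastore looks like in practice, the following is a minimal sketch of a write and a read through the DataStax Python driver. The driver post-dates this paper, the CQL uses modern syntax, and the table and sample values are assumptions for illustration; the point is simply that data is modeled in tables (column families) and accessed with short, fast statements.

```python
# Minimal sketch of Cassandra as a real-time datastore (illustrative only).
# Assumes the DataStax Python driver: pip install cassandra-driver
from cassandra.cluster import Cluster
import uuid

cluster = Cluster(["127.0.0.1"])               # any node in the cluster will do
session = cluster.connect("product_catalog")   # keyspace assumed to exist (see earlier sketch)

session.execute("""
    CREATE TABLE IF NOT EXISTS users (
        user_id uuid PRIMARY KEY,
        name    text,
        email   text
    )
""")

user_id = uuid.uuid4()
# Writes can be sent to any node and are replicated automatically.
session.execute(
    "INSERT INTO users (user_id, name, email) VALUES (%s, %s, %s)",
    (user_id, "Ada", "ada@example.com"),
)

# Reads are keyed lookups served directly from the replicas.
row = session.execute(
    "SELECT name, email FROM users WHERE user_id = %s", (user_id,)
).one()
print(row.name, row.email)
```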
Why Cassandra?

Key technical differentiators that make Cassandra a winning choice in a cloud computing environment include the following:

- A built-for-scale architecture that can handle petabytes of information and thousands of concurrent users/operations per second as easily as it can manage much smaller amounts of data and user traffic
- A peer-to-peer design that offers no single point of failure for any database process or function; every node is the same, so there is no concept of a master node or anything similar
- Online capacity additions that deliver linear performance gains for both read and write operations
- Read/write-anywhere capabilities that equate to a true network-independent method of storing and accessing data
- Guaranteed data safety that ensures no loss of data, no matter which node is written to in a cluster
- Tunable data consistency that allows Cassandra to offer data durability and protection like an RDBMS, with the flexibility to relax consistency when application use cases allow (a brief sketch follows this list)
- Flexible/dynamic schema design that accommodates all formats of big data applications, including structured, semi-structured, and unstructured data; data is represented in Cassandra via column families that are dynamic in nature and accommodate all modifications online
- Simplified replication that provides data redundancy and is capable of spanning multiple data centers and the cloud
- Data compression that reduces the footprint of raw big data by over 80 percent in some use cases
- A SQL-like language (CQL, the Cassandra Query Language) that lessens the learning curve for developers and administrators coming from the RDBMS world
- Support for key developer languages (e.g., Java, Python) and operating systems
- No requirement for any special equipment; Cassandra runs on commodity hardware
- Very easy installation in cloud environments, including Amazon Machine Images (AMIs) that enable a user to be up and running with a multiple-node cluster in minutes
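As a hedged illustration of the tunable-consistency point above, the sketch below writes at a stronger consistency level and reads at a weaker one, reusing the illustrative users table from the earlier sketch. The specific consistency choices and the use of the DataStax Python driver (which post-dates this paper) are assumptions; the per-request consistency levels themselves (ONE, QUORUM, LOCAL_QUORUM, and so on) are the mechanism the bullet refers to.

```python
# Illustrative sketch of tunable consistency; names and values are assumptions.
from cassandra.cluster import Cluster
from cassandra.query import SimpleStatement
from cassandra import ConsistencyLevel

session = Cluster(["127.0.0.1"]).connect("product_catalog")

# Durable write: wait for a quorum of replicas in the local data center.
write = SimpleStatement(
    "INSERT INTO users (user_id, name, email) VALUES (uuid(), %s, %s)",
    consistency_level=ConsistencyLevel.LOCAL_QUORUM,
)
session.execute(write, ("Grace", "grace@example.com"))

# Latency-sensitive read: a single replica's answer is acceptable here.
read = SimpleStatement(
    "SELECT name, email FROM users LIMIT 10",
    consistency_level=ConsistencyLevel.ONE,
)
for row in session.execute(read):
    print(row.name, row.email)
```

Choosing QUORUM for both reads and writes gives RDBMS-like consistency at the cost of some latency; relaxing the read side to ONE trades a window of potential staleness for speed, which is exactly the flexibility the bullet describes.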
Cassandra is built with the assumption that failures can and will occur in a data center or cloud infrastructure. Therefore, data redundancy to protect against hardware failure and other data loss scenarios is built into and managed transparently by Cassandra. Furthermore, this capability can be configured so that big data applications can use a single large database distributed across multiple, geographically dispersed data centers, between different physical racks in a data center, and between public cloud providers and on-premise managed data centers.

Figure 1: Cassandra multi-data center capabilities

These and other capabilities make Cassandra and DataStax Enterprise the smart choice for modern businesses with big data management needs that have outgrown traditional RDBMS software.

Netflix: An Example of Succeeding in the Cloud with Cassandra

With more than 25 million members worldwide, Netflix, Inc. (Nasdaq: NFLX) is the world's leading Internet subscription service for enjoying movies and TV shows. Netflix allows its members to instantly watch unlimited movies and TV episodes streaming over the Internet to computers and TVs.

Cassandra and DataStax are a key part of Netflix's database infrastructure, with everything being hosted in the cloud. Recently, Netflix gave a presentation at the 2011 High Performance Transaction Systems workshop that demonstrated both the ease of use and the linear performance
capabilities of using Cassandra in the cloud. An excerpt from a Netflix blog post summarizing the presentation states the following:

"The automated tooling that Netflix has developed lets us quickly deploy large scale Cassandra clusters, in this case a few clicks on a web page and about an hour to go from nothing to a very large Cassandra cluster consisting of 288 medium sized instances, with 96 instances in each of three EC2 availability zones in the US-East region. Using an additional 60 instances as clients running the stress program we ran a workload of 1.1 million client writes per second. Data was automatically replicated across all three zones making a total of 3.3 million writes per second across the cluster." [6]

Figure 2: Performance results from Netflix's benchmark of Cassandra in the cloud

The linear performance capabilities are illustrated well in the Netflix benchmark, delivering a very impressive 1.1 million writes per second. The ease with which Cassandra nodes can be configured and implemented in the cloud is also clear.

[6] Benchmarking Cassandra Scalability on AWS: Over a million writes per second, by Adrian Cockcroft and Denis Sheahan, The Netflix Tech Blog, November 2, 2011.
DataStax Enterprise: Real-Time Datastore and Analytics in the Cloud

DataStax is the leading provider of enterprise NoSQL software products and services based on Apache Cassandra. Through its offerings, DataStax supports businesses that need a progressive data management system that can serve as a real-time datastore for critical production applications and that delivers built-in analytic capabilities for analyzing that data once it is in Cassandra.

DataStax Enterprise inherits all of Cassandra's powerful feature set (described above) for servicing modern real-time applications and merges it with a fault-tolerant analytics platform that provides Hadoop MapReduce, Hive, and Pig support for business intelligence systems. A key differentiator of DataStax Enterprise is that real-time and analytic operations are smartly separated across a DataStax Enterprise cluster so that no competition for underlying compute resources or data is encountered.

Solving the Cloud Mixed-Workload Problem

A primary benefit of DataStax Enterprise is its ability to service both real-time and analytic data operations in the same database cluster without either load impacting the other. The key to making this possible is Cassandra's underlying architecture. Part of DataStax Enterprise is an enhanced Hadoop and Hive distribution that utilizes Cassandra for many of its core services. DataStax Enterprise provides integrated Hadoop MapReduce, Hive, and job/task tracking capabilities, while providing an HDFS-compatible storage layer (CassandraFS) powered by Cassandra. The end result is a single integrated solution that provides increased reliability, simpler deployment, and lower total cost of ownership (TCO) than a traditional Hadoop solution. DataStax Enterprise is also fully compatible with existing HDFS, Hadoop, and Hive tools and utilities.
Figure 3: DataStax Enterprise real-time and analytic data in one cloud database

Another benefit of DataStax Enterprise is that it eliminates the complexity and single points of failure of the typical Hadoop HDFS layer. From an operational standpoint, there is no need to set up a Hadoop name node, secondary name node, ZooKeeper, and so on. Instead, DataStax Enterprise provides a single layer in which every node is a peer of the others and automatically knows its position in the cluster. On startup, all DataStax Enterprise nodes automatically start a Hadoop task tracker, and one of the nodes is elected to be the job tracker. If the job tracker node fails, the job tracker is automatically restarted on a different node. DataStax Enterprise utilizes full data locality awareness for Hadoop task assignment.

Another key benefit of DataStax Enterprise is the tight feedback loop it provides between a real-time application and the analytics that naturally follow. Traditionally, users would be forced to move data between cloud systems via complex ETL processes, or perform both functions on the same system with the risk of one impacting the other. In big data environments, this can be a time-consuming and burdensome process. With DataStax Enterprise, both real-time and analytic big data operations take place in the same distributed system, but users have the ability to dedicate certain nodes solely to analytics so that workload doesn't slow down real-time processing. Users simply define one or more replica groups and configure the role of each: one or more Cassandra, Hadoop, or HDFS (i.e., HDFS without job/task tracker) nodes. Writes are instantly replicated between both real-time and analytic nodes.
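To make the replica-group idea more concrete, the sketch below shows one common way such separation is expressed: a keyspace whose replicas span both a real-time group and an analytics group, so that Hadoop/Hive jobs read their own copies while the online application keeps its latency. The group names, replica counts, and use of the DataStax Python driver are illustrative assumptions rather than a prescription from this paper; the names must match how the cluster's nodes are actually grouped.

```python
# Illustrative sketch: replicate a keyspace to both a real-time replica group
# and an analytics replica group, so analytic jobs never compete with the
# online application's replicas. The group names ('Cassandra', 'Analytics')
# and replica counts are assumptions for this example.
from cassandra.cluster import Cluster

session = Cluster(["10.0.1.10"]).connect()

session.execute("""
    CREATE KEYSPACE IF NOT EXISTS clickstream
    WITH replication = {
        'class': 'NetworkTopologyStrategy',
        'Cassandra': 2,
        'Analytics': 1
    }
""")
```

Because writes to such a keyspace are replicated to both groups automatically, no ETL step is needed to make freshly written data visible to the analytic side.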
With DataStax Enterprise, users truly have the best of both worlds for big data management. They have all the power of Cassandra serving their highest-volume and high-velocity real-time applications, and the power of Hadoop, Hive, and Pig working directly against the same data for analytics. The result is smart workload isolation for cloud applications, which is much simpler to manage and more reliable than any of the alternatives.

Visual Database Management

DataStax Enterprise includes a visual, browser-based management solution named OpsCenter Enterprise. OpsCenter Enterprise allows a developer or administrator to manage and monitor the health of cloud databases from a centralized web console.

Figure 4: OpsCenter Enterprise database cluster ring view

OpsCenter Enterprise uses an agent-based architecture to monitor and carry out tasks on each node in a DataStax Enterprise cluster. Through a graphical and intuitive point-and-click interface, a user can understand the state of a cluster, which nodes are up and down, and what type of performance users are experiencing. Key events are reported into a centralized dashboard and displayed along with other vital statistics.
Figure 5: OpsCenter dashboard

Analytic operations also can be monitored and controlled from within OpsCenter Enterprise.

Figure 6: OpsCenter analytic operations monitoring
Enterprise Production Support and Services

Cloud implementations often require fast access to skilled expertise. DataStax Enterprise includes experienced production support and consultative services from Cassandra experts who know cloud environments. IT professionals can choose the right production support package for their business needs, including rapid-response SLAs and consultative help. DataStax also provides certified quarterly service pack updates for DataStax Enterprise, as well as other benefits such as emergency hot fixes (for production outages) and bug escalation privileges. Additionally, DataStax offers professional training on Cassandra and Hadoop, with classes offered in many major cities as well as on-site for corporations that need many staff members trained at once.
Conclusion

Moving to a cloud-based infrastructure necessitates choosing a database that is capable of fully utilizing all the benefits the cloud provides. The following table illustrates how Apache Cassandra meets all of the key attributes required of a cloud database, and how DataStax Enterprise, which is powered by Cassandra, delivers a unique smart data platform ready for the cloud and capable of managing both real-time and analytic data in the same database cluster.

Cloud Database Requirement | Met? | Additional Information
Transparent elasticity | Yes | Peer-to-peer architecture makes scaling out in an online fashion easy
Transparent scalability | Yes | Linear performance gains delivered through node additions; big data-capable
High availability | Yes | No single point of failure
Easy data distribution | Yes | Simple replication among one or more data centers, geographies, and the cloud; read/write-anywhere design
Data redundancy | Yes | Data is easily replicated across many regions, with rack awareness also supported
Support for all data formats | Yes | Structured, semi-structured, and unstructured data is supported in the data model
Simple manageability | Yes | Easy AMI installs for the cloud; web-based tools are also provided
Low cost | Yes | Apache Cassandra is free/open source; DataStax Enterprise is 80 to 90 percent less expensive than legacy RDBMS software

To find out more about DataStax Enterprise and obtain the software, please visit the DataStax website or contact [email protected].
About DataStax

DataStax, the commercial leader in Apache Cassandra, offers products and services that make it easy for customers to build, deploy, and operate elastically scalable and cloud-optimized applications and data services. Over 100 customers use DataStax today, including leaders such as Netflix, Cisco, Rackspace, and Constant Contact, across industries including web, financial services, telecommunications, logistics, and government. DataStax is backed by industry-leading investors, including Lightspeed Venture Partners and Crosslink, and is based in Burlingame, CA, with offices in Austin, TX, and Stamford, CT. For more information, visit the DataStax website.