Why Migrate from MySQL to Cassandra?
White Paper by DataStax Corporation, June 2012
Table of Contents

- Abstract
- Introduction
- Why Stay with MySQL
- Why Migrate from MySQL?
  - Architectural Limitations
  - Data Model Limitations
  - Scalability and Performance Limitations
- Why Migrate to Cassandra and DataStax Enterprise?
  - A Technical Overview of Cassandra
  - Cassandra vs. Other NoSQL Solutions
  - Who's Using Cassandra?
  - A Quick Look at DataStax
  - DataStax Enterprise: The Choice for Production Big Data Deployments
  - What About Cost?
- How to Migrate from MySQL to Cassandra
  - Using Sqoop to Migrate from MySQL
  - Using Pentaho Kettle to Migrate from MySQL
- Examples of Customers Who Have Switched from MySQL
  - Mahalo
  - Pantheon Systems
- Conclusion
- About DataStax
- Appendix A: FAQ on Switching from MySQL to DataStax Enterprise/Cassandra
- Appendix B: Comparing MySQL and DataStax Enterprise/Cassandra
  - General Comparisons
  - Datatype Comparisons
Abstract

For more than 15 years, Oracle's MySQL has been a de facto infrastructure piece in web applications, enjoying wide adoption. This is for good reason: MySQL provides a solid relational database that enables companies to build systems that perform well in many use cases. Yet even its strongest supporters admit that it is not architected to tackle the new wave of big data applications. Modern businesses that need to manage big data use cases are helping to forge a different set of technologies to replace MySQL. This paper examines the whys and hows of migrating from Oracle's MySQL to these new big data technologies.

Introduction

Founded in 1995, MySQL has been one of the de facto infrastructure pieces in applications that target the Web. As the database component of the Internet LAMP stack, MySQL has enjoyed wide adoption. This is for good reason: MySQL provides solid relational database management system (RDBMS) capabilities in an open source package that enables companies to build systems that perform well from a database perspective in many general-purpose use cases.

MySQL was acquired by Sun Microsystems in 2008 and then formally acquired by Oracle (via its acquisition of Sun Microsystems) in January 2010. Now part of Oracle's stable of database products, MySQL continues to be promoted and sold through Oracle. In addition to Oracle, a number of other vendors have either forked MySQL to create another database offering or are using MySQL as part of a specialized service offering. Examples include Percona, Monty Program AB, Infobright, Calpont, and Amazon RDS.

While Oracle's MySQL remains a good RDBMS that performs well for the use cases it was designed for, even its strongest supporters admit that it is not architected to tackle the new wave of big data applications being developed today. In fact, in the same way that the needs of late 20th-century web companies helped give birth to MySQL and drive its success, modern businesses that need to manage big data use cases are helping to forge a different set of technologies that are replacing MySQL in many situations.

This paper examines the whys and hows of migrating from Oracle's MySQL to these new big data technologies, such as Apache Cassandra and Apache Hadoop. It also looks at the benefits of moving from MySQL to a fully integrated data stack that combines those technologies and others into a single big data platform like that found in DataStax Enterprise.
Why Stay with MySQL

Before continuing with a discussion of why it might be necessary to either develop new applications on or migrate existing MySQL systems to another database platform, it first makes sense to understand why staying with MySQL as a database service may be the right choice. Because database migrations in particular can be resource-intensive, an IT professional should ensure they are making the right decision before making such a move. Use cases that remain well served by MySQL typically have the following characteristics:

- ACID-compliant transactions, with nested transactions, commits/rollbacks, and full referential integrity required.
- A highly normalized data model that is well served by Codd-Date relational design, and one where join operations cannot be avoided.
- Data that is primarily structured, with little to no unstructured or semi-structured data present.
- Low to moderate data volumes that can be handled easily by the MySQL optimizer.
- Telco applications that require main-memory solutions and whose data is primarily accessed via primary keys.
- Scale-out architectures that are primarily read-oriented, with no need to write to multiple masters or to servers that exist in many different cloud zones or geographies.
- No requirement for a single database/cluster to span many different data centers, where high-availability requirements can be met via a synchronous replication architecture maintained at a single data center.

If an application presents these and similar requirements, then Oracle's MySQL may indeed be a good fit as a database platform.

Why Migrate from MySQL?

One primary reason why various IT organizations are either already migrating away from Oracle's MySQL or planning to do so is the very visible rise of big data applications, and specifically big data online transaction processing (OLTP) applications. These companies either have existing systems morphing into big data systems, or they are planning new applications that are big data in nature and need something more than MySQL as their database platform.

Although the top industry IT analyst groups may disagree on various technical and marketplace trends, they are, remarkably, in agreement on the following three things: (1) the upsurge of big data applications; (2) the definition of what constitutes big data; and (3) the need for new technology to deal with big data.
The momentum and growth of big data applications is unmistakable. Underscoring this is a recent survey of 600 IT professionals that revealed nearly 70 percent of organizations are now considering, planning, or running big data projects.[1]

As for a definition of big data, it is nearly universally agreed that big data involves one or all of the following:

1. Velocity: data coming in at extreme rates of speed.
2. Variety: the types of data needing to be captured (structured, semi-structured, and unstructured).
3. Volume: sizes that potentially involve terabytes to petabytes of data.
4. Complexity: everything from moving operational data into big data platforms, to maintaining various data silos, to the difficulty of managing data across multiple sites and geographies.

Although they may phrase it differently, top analyst groups and market observers agree that, to tackle big data applications with the above characteristics, something more than standard RDBMSs like Oracle's MySQL is needed. For example, David Kellogg says big data is "too big to be reasonably handled by current/traditional technologies."[2] Consulting and research firm McKinsey & Company agrees with Kellogg's concept of big data and defines it as "datasets whose size is beyond the ability of typical database software tools to capture, store, manage, and analyze."[3] IDC says: "Big data technologies describe a new generation of technologies and architectures, designed to economically extract value from very large volumes of a wide variety of data, by enabling high-velocity capture, discovery, and/or analysis."[4] And finally, O'Reilly defines big data in this way: "Big data is data that exceeds the processing capacity of conventional database systems. The data is too big, moves too fast, or doesn't fit the strictures of your database architectures. To gain value from this data, you must choose an alternative way to process it."[5]

In particular, big data OLTP applications are nudging MySQL aside in favor of other options. While MySQL initially gained its popularity through use of the MyISAM storage engine, the InnoDB transactional engine is arguably the most used today. But InnoDB isn't designed to handle the types of big data requirements discussed above. What types of limitations, bottlenecks, and issues are MySQL users experiencing? Although the exact situations vary, a few of the most prevalent reasons for a move away from MySQL are as follows.

Architectural Limitations

One reason modern businesses are switching from Oracle's MySQL to big data platforms is that the underlying architecture does not support key big data use cases. This is true regardless of which MySQL products are being considered: MySQL Community/Enterprise, MySQL Cluster, or database services such as Amazon RDS.

[1] "Survey: 70 Percent of Organizations Have Big Plans for Big Data," by Pedro Hernandez, EnterpriseAppsToday.com, May 14, 2012.
[2] "Big data has jumped the shark," DBMS2, September 11, 2011.
[3] "Big Data: The next frontier for innovation, competition, and productivity," McKinsey Global Institute, May 2011, p. 11.
[4] "Extracting Value from Chaos," by John Gantz and David Reinsel, IDC, June 2011.
[5] "What Is Big Data? An Introduction to the Big Data Landscape," by Edd Dumbill, O'Reilly Radar, January 11, 2012.
Some of the architectural issues that arise when MySQL is thrown into big data situations include:

- The traditional master-slave architecture of MySQL (one write master with 1-n slaves) prohibits the location-independent, read/write-anywhere use cases that are very common in big data environments, where a database cluster is spread across many different geographies and data centers (and the cloud), with each node needing to support both reads and writes.
- The need to manually shard (i.e., partition) general MySQL systems to overcome various performance shortcomings makes them very time-consuming, error-prone, and expensive to support. It also places a heavy burden on development staff, who must maintain sharding logic in the application.
- Failover and failback situations tend to require manual intervention with generally replicated MySQL systems. Failback can be especially challenging.
- Although it provides automatic sharding and supports simple geographic replication, MySQL Cluster's dependence on synchronous replication can cause latency and transactional response time issues. Further, its geographic replication does not support multiple (i.e., more than two) data centers in a way that either performs well or is easy to manage.
- Database services such as Amazon's RDS suffer from the same shortfalls, as Amazon supports only either a simple standby server maintained in a different availability zone in Amazon's cloud, or a series of read replicas provisioned to help service increased query (not write) traffic.

Data Model Limitations

A big reason why many businesses are moving to NoSQL-based solutions is that the legacy RDBMS data model is not flexible enough to handle big data use cases that contain a mixture of structured, semi-structured, and unstructured data. While MySQL has good datatype support for traditional RDBMS situations that deal with structured data, it lacks the dynamic data model necessary to tackle high-velocity data coming in from machine-generated systems or time series applications, as well as cases needing to manage semi-structured and unstructured data.

Recently, Oracle announced that it had introduced a NoSQL-style key/value interface into its MySQL Cluster product. While certainly helpful in some situations, such a design still falls short in key big data use cases like time series applications that require inserting data into structures that support tens of thousands of columns.

Scalability and Performance Limitations

Oracle's MySQL has long been touted as a scale-out database. However, those who know and use MySQL admit it has limitations that negate its use in big data situations where scalability is required. For example:

- More servers can be added to a general MySQL Community/Enterprise cluster to help service more reads, but writes are still bottlenecked at the single write master server. Moreover, if many read slave servers are required, latency issues can arise simply in getting the data from the master server to all the slaves.
- Consumption of high-velocity data can be challenging, especially if the InnoDB storage engine is used, as its index-organized structure often does not handle high insert rates well. Third-party columnar storage engines typically cannot help in this case, as they rely on their own proprietary high-speed loaders to load data quickly into a database.
- Data volumes over half a terabyte become a real challenge for the MySQL optimizer. To overcome this, a third-party storage engine such as Calpont or Infobright must be used, but these vendors have limitations in their SQL support, their MPP capabilities, or both.
Why Migrate to Cassandra and DataStax Enterprise?

While a move away from Oracle's MySQL may be necessary because of its inability to handle key big data use cases, why should that move involve a switch to Apache Cassandra and DataStax Enterprise? The sections that follow describe why a move to Cassandra and DataStax Enterprise makes both technical and business sense for MySQL users seeking alternatives.

A Technical Overview of Cassandra

Apache Cassandra, an Apache Software Foundation project, is an open source NoSQL distributed database management system. Cassandra is designed to handle big data OLTP workloads across multiple data centers with no single point of failure, providing enterprises with continuous availability without compromising performance.

In selecting an alternative to Oracle's MySQL, IT professionals will find that Apache Cassandra stands out among NoSQL offerings for the following technical reasons:

- Massively scalable architecture: Cassandra's masterless, peer-to-peer architecture overcomes the limitations of master-slave designs and allows for both high availability and massive scalability. Cassandra is the acknowledged NoSQL leader when it comes to comfortably scaling to terabytes or petabytes of data while maintaining industry-leading write and read performance.
- Linear scale performance: Nodes added to a Cassandra cluster (all done online) increase the throughput of the database in a predictable, linear fashion for both read and write operations, even in the cloud, where such predictability can be difficult to ensure.
- Continuous availability: Data is replicated to multiple nodes in a Cassandra database cluster to protect it from loss during node failure and to provide continuous availability with no downtime.
- Transparent fault detection and recovery: Cassandra clusters can grow into the hundreds or thousands of nodes. Because Cassandra was designed for commodity servers, machine failure is expected. Cassandra uses gossip protocols to detect machine failure and to recover when a machine is brought back into the cluster, all without the application noticing.
- Flexible, dynamic schema data modeling: Cassandra offers the organization of a traditional RDBMS table layout combined with the flexibility and power of no stringent structure requirements. This allows data to be stored dynamically as needed without a performance penalty for changes that occur. In addition, Cassandra can store structured, semi-structured, and unstructured data.
- Guaranteed data safety: Cassandra far exceeds other systems on write performance due to its append-only commit log while always ensuring durability. Users no longer must trade off durability to keep up with immense write streams. Data is absolutely safe in Cassandra; data loss is not possible.
- Distributed, location-independent design: Cassandra's architecture avoids the hot spots and read/write issues found in master-slave designs. Users can have a highly distributed database (e.g., multi-geography, multi-data center) and read from or write to any node in a cluster without concern over which node is being accessed.
- Tunable data consistency: Cassandra offers flexible data consistency at the cluster, data center, or individual I/O operation level. Very strong or eventual data consistency among all participating nodes can be set globally and also controlled on a per-operation basis (e.g., per INSERT, per UPDATE).
- Multi-data center replication: Whether it's keeping data in multiple locations for disaster recovery or locating data physically near its end users for fast performance, Cassandra offers support for multiple data centers. Administrators simply configure how many copies of the data they want in each data center, and Cassandra handles the rest, replicating the data automatically. Cassandra is also rack-aware and can keep replicas of data stored on different physical racks, which helps ensure uptime in the case of single-rack failures.
- Cloud-enabled: Cassandra's architecture maximizes the benefits of running in the cloud. Cassandra also allows for hybrid data distribution, where some data is kept on-premise and some in the cloud.
- Data compression: Cassandra supplies built-in data compression, with up to an 80 percent reduction in raw data footprint. More importantly, Cassandra's compression results in no performance penalty, with some use cases showing actual read/write operations speeding up due to less physical I/O being required.
- CQL (Cassandra Query Language): Cassandra provides a SQL-like language called CQL that mirrors SQL's DDL, DML, and SELECT syntax. CQL greatly decreases the learning curve for those coming from RDBMS systems because they can use familiar syntax for all object creation and data access operations.
- No caching layer required: Cassandra offers caching on each of its nodes. Coupled with Cassandra's scalability characteristics, nodes can be added incrementally to the cluster to keep as much data in memory as needed. The result is that there is no need for a separate caching layer.
- No special hardware needed: Cassandra runs on commodity machines and requires no expensive or special hardware.
- Incremental and elastic expansion: The Cassandra ring allows online node additions. Because of Cassandra's fully distributed architecture, every node is of the same type, which means clusters can grow as needed without any complex architecture decisions.
- Simple install and setup: Cassandra can be downloaded and installed in minutes, even for multi-cluster installs.
- Ready for developers: Cassandra has drivers and client libraries for all the popular development languages (e.g., Java, Python).

Given these technical features and benefits, the following are typical big data use cases handled well by Cassandra in the enterprise:

- Big data OLTP situations
- Time series data management
- High-velocity device data ingestion and analysis
- Healthcare system input and analysis
- Media streaming management (e.g., music, movies)
- Social media (i.e., unstructured data) input and analysis
- Online web retail (e.g., shopping carts, user transactions)
- Real-time data analytics
- Online gaming (e.g., real-time messaging)
- Software as a Service (SaaS) applications that utilize web services
- Write-intensive systems

Cassandra vs. Other NoSQL Solutions
What does the performance of Cassandra look like compared to other NoSQL options? While each use case is different, external benchmarks such as the YCSB test[6] show Cassandra outperforming its rivals in a number of situations. One such benchmark shows Cassandra delivering nearly 4x the write performance, 2x the read performance, and better than 12x the overall performance in a mixed-workload use case over another leading NoSQL provider.

Who's Using Cassandra?

One benefit MySQL users have enjoyed is a large community of users who have deployed the database in many production environments. Cassandra, likewise, is used in many industries for modern applications that need scale, fast performance, data flexibility, and easy data distribution. Many companies and organizations run Cassandra in production. (Note: there are many other household-name companies with production implementations that cannot be published due to NDA restrictions.)

[6] "NoSQL Benchmarking," CUBRID.
A Quick Look at DataStax

A viable company behind an open source offering is vital for enterprises that want to enjoy the benefits of open source software but also need the professional support associated with proprietary software offerings. DataStax is the leading provider of modern enterprise database software products and services based on Apache Cassandra. It employs the Apache Cassandra project chair as well as most of the project's committers. At the time of this writing, DataStax has nearly 50 employees and over 170 customers, and both of these numbers are growing rapidly. DataStax provides free and open source NoSQL products as well as commercial solutions aimed at production big data environments, with its flagship solution being DataStax Enterprise.

DataStax Enterprise: The Choice for Production Big Data Deployments

Apache Cassandra can be likened to MySQL's community server in that both are free and open source. By contrast, DataStax Enterprise is more akin to MySQL Enterprise or MySQL Cluster Carrier Grade Edition in that it is designed for production deployments that power key business systems.

DataStax Enterprise is tailor-made to manage big data effectively. The solution inherits all of Cassandra's powerful feature set for servicing modern big data OLTP applications and smartly integrates a fault-tolerant analytics platform that provides Hadoop MapReduce, Hive, Pig, Mahout, and Sqoop support for business intelligence systems. It also includes enterprise search capabilities via Apache Solr, the most popular open source search software in use today. A key differentiator of DataStax Enterprise over other big data providers is that real-time, analytic, and search workloads are intelligently isolated across a distributed DataStax Enterprise database cluster, so that no competition for underlying compute resources or data occurs.

DataStax Enterprise comprises three components:

- The DataStax Enterprise Server: built on Apache Cassandra, the server manages real-time data with Cassandra, analytic data with Hadoop, and enterprise search data with Apache Solr.
- OpsCenter Enterprise: a visual, browser-based solution for managing and monitoring Cassandra and the DataStax Enterprise server.
- Production Support: full 24x7x365 support from the big data experts at DataStax.

With Hadoop and Solr, the types of use cases that can be tackled with DataStax Enterprise grow well beyond those previously covered by Cassandra alone and include:

- Social media input and analysis
- Web clickstream analysis
- Buyer event and behavior analytics
- Fraud detection and analysis
- Risk analysis and management
- Supply chain analytics
- Web product searches
- Internal document search (e.g., law firms)
- Real estate/property searches
- Social media matchups
- Web and application log management/analysis
What About Cost?

There are many technical benefits to moving from Oracle's MySQL to Cassandra and DataStax Enterprise, but what about cost? How does DataStax compare with Oracle in that regard? As of May 2012, the list prices for Oracle's MySQL products were subscription-based, priced per server/socket, as follows[7]:

- MySQL Standard Edition Subscription (1-4 socket server): $2,000
- MySQL Standard Edition Subscription (5+ socket server): $4,000
- MySQL Enterprise Edition Subscription (1-4 socket server): $5,000
- MySQL Enterprise Edition Subscription (5+ socket server): $10,000
- MySQL Cluster Carrier Grade Edition Subscription (1-4 socket server): $10,000
- MySQL Cluster Carrier Grade Edition Subscription (5+ socket server): $20,000

For feature differences among the MySQL products listed above, see Oracle's MySQL product comparison documentation.

[7] "MySQL Global Price List," Software Investment Guide, Oracle, November 1, 2010.

Like the MySQL Community Edition, DataStax provides a free edition of Apache Cassandra (the DataStax Community Edition), which comes with the latest version of Cassandra, a free version of DataStax's OpsCenter management and monitoring tool, and quick-start developer aids. From a production, enterprise perspective, a subscription to DataStax Enterprise Edition may be compared to the MySQL Enterprise and MySQL Cluster Carrier Grade Edition subscriptions. DataStax does not currently price by socket, only by server, with pricing either on par with MySQL Enterprise for small boxes or, for beefier machines, substantially less (e.g., 50 to 70 percent less).

How to Migrate from MySQL to Cassandra

The first step in migrating from MySQL to Cassandra is to understand that data modeling is handled differently in NoSQL solutions than in RDBMSs. In traditional databases such as MySQL, data is modeled in a standard third normal form design without the need to know in advance what questions will be asked of the data. By contrast, in NoSQL, the questions asked of the data drive the data model design, and the data is highly denormalized (see the sketch at the end of this subsection).

If a developer is simply interested in porting a MySQL schema and data over into a Cassandra keyspace (analogous to a database in MySQL), just for testing or preliminary development purposes, there are two primary options:

1. Use the Sqoop interface to move MySQL tables and data.
2. Use ETL (extract-transform-load) tools such as Pentaho's Kettle to move schema and data.
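To make the data modeling difference concrete, the following sketch contrasts a small normalized MySQL schema with one possible denormalized Cassandra design. The tables, columns, and CQL 3-style syntax are illustrative assumptions only; they do not come from any particular customer schema, and older CQL versions use slightly different DDL.

-- MySQL: normalized third normal form; the answer is assembled at query time with a join
CREATE TABLE users  (user_id INT PRIMARY KEY, user_name VARCHAR(50));
CREATE TABLE orders (order_id INT PRIMARY KEY, user_id INT,
                     order_total DECIMAL(10,2),
                     FOREIGN KEY (user_id) REFERENCES users(user_id));
SELECT u.user_name, o.order_id, o.order_total
FROM   users u JOIN orders o ON o.user_id = u.user_id
WHERE  u.user_id = 42;

-- Cassandra (CQL 3-style syntax): denormalized around the question
-- "show me the orders for a given user"; no join is needed or possible
CREATE TABLE orders_by_user (
    user_id     int,
    user_name   text,
    order_id    int,
    order_total decimal,
    PRIMARY KEY (user_id, order_id)
);
SELECT user_name, order_id, order_total
FROM   orders_by_user
WHERE  user_id = 42;

In the MySQL version, the join produces the per-user order list on demand; in the Cassandra version, the table itself is shaped around that question, so the same answer is a single partition read.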
Using Sqoop to Migrate from MySQL

DataStax Enterprise supports Sqoop, a utility designed to transfer data between an RDBMS and Hadoop. Because DataStax Enterprise combines Cassandra, Hadoop, and Solr into one big data platform, a developer can use Sqoop to move data not only into a Hadoop system but also into Cassandra. The DataStax Enterprise installation package includes a sample/demo of how to move MySQL schema and data into Cassandra.

Because Sqoop works via Java Database Connectivity (JDBC), the only prerequisite is that the JDBC driver for MySQL must be downloaded from the MySQL website and placed in a directory where Sqoop has access to it (the /sqoop subdirectory of the main DataStax Enterprise installation is recommended).

Each MySQL table is mapped to a Cassandra column family. Column families are a Google Bigtable-style structure, with rows and columns like MySQL but much more dynamic and flexible.

The migration is done via a command line utility that accepts a number of different parameters. For example, the following command migrates a table called npa_nxx contained in the dev database on a MySQL server (host IP addresses are shown as placeholders here). It connects to MySQL with the root ID and no password, moves the table into a Cassandra keyspace named dev and a column family named npa_nxx_cf, identifies the MySQL column npa_nxx_key as the primary key, names the Cassandra server's host IP, and, lastly, asks for the schema to be created before the data is imported:

./dse sqoop import --connect jdbc:mysql://<mysql_host_ip>/dev \
  --username root \
  --table npa_nxx \
  --cassandra-keyspace dev \
  --cassandra-column-family npa_nxx_cf \
  --cassandra-row-key npa_nxx_key \
  --cassandra-thrift-host <cassandra_host_ip> \
  --cassandra-create-schema

Figure 1: Using Sqoop to move data from MySQL to DataStax Enterprise
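Once the import finishes, a quick spot-check of the migrated rows can be run from the CQL shell bundled with DataStax Enterprise. This is only a sketch; it assumes the dev keyspace and npa_nxx_cf column family created by the example above are visible to CQL in the version being used.

-- Switch to the keyspace created by the Sqoop import and sample a few rows
USE dev;
SELECT * FROM npa_nxx_cf LIMIT 5;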
Using Pentaho Kettle to Migrate from MySQL

Another way to migrate MySQL tables and data to Cassandra is to use one of the ETL tools on the market, such as Pentaho's Data Integration product, also known as Kettle. Pentaho makes two editions of its ETL tool available: a free community edition and a paid enterprise edition. For core ETL tasks such as moving MySQL schema and data to Cassandra, the community edition should provide everything that is needed.

Figure 2: Kettle's visual interface for performing ETL operations

Pentaho's Kettle product provides an easy-to-use graphical user interface (GUI) that allows a developer to visually design MySQL migration tasks. Unlike the Sqoop utility, which simply does extract-load, Pentaho's product allows a developer to create simple to sophisticated transformation routines that customize how a MySQL schema and data are moved to Cassandra. In addition, the data movement engine of Kettle is quite efficient, so medium to fairly large data volumes can be moved in a high-performance manner. More information about Kettle, along with free downloads, can be found on Pentaho's website.

Examples of Customers Who Have Switched from MySQL

Whether it's by using Sqoop, ETL tools, or other in-house-developed options, a move from Oracle's MySQL to big data platforms like Cassandra and DataStax Enterprise is not difficult at all. DataStax has many customers who have made just such a switch but do not discuss it from an external-facing standpoint. However, the following are a few customers who were kind enough to let us share their use cases publicly.
Mahalo

Mahalo is a social media learning business with a top-200 web ranking and 12 million visits per month. Mahalo needed a database to manage the activity log for every customer interaction as well as to store information for its Q&A topic database. Mahalo started with Oracle's MySQL, but performance and availability issues necessitated a move to a database that could scale and handle its heavy write workload. Further, the company needed a more dynamic data model to store the variety of data that was coming in. Mahalo chose Cassandra and DataStax as its MySQL replacement. Mahalo's chief technology officer (CTO), Jason Burch, says: "With the Cassandra conversion completed and running smoothly, we're now free to focus on our primary mission, knowing that we'll be able to deliver the excellent responsiveness and capabilities that our user community has come to expect."

Pantheon Systems

Pantheon Systems provides a cloud-based web development platform for websites built with Drupal. Pantheon needed a primary database platform to hold all of the metadata that supports its main application platform as well as all of its media storage. Pantheon began with MySQL but found it unable to handle the requirement of managing both structured and unstructured data. Moreover, MySQL could not scale or support Pantheon's need for continuous availability across multiple data centers. Pantheon switched to Cassandra and DataStax, which met all of its specific requirements. David Strauss, Pantheon's CTO, describes the company's use of Cassandra this way: "All the actual platform data in Pantheon is persisted primarily to Cassandra. We could wipe out pretty much everything on Pantheon, but as long as the Cassandra store is there, we have our data."
Conclusion

There is no argument that Oracle's MySQL is a good RDBMS and one that serves well the use cases for which it was originally designed. But for IT professionals who are either planning new big data applications or have existing MySQL systems that have begun to break down under big data workloads, a move to DataStax Enterprise and Cassandra makes both business and technical sense. Switching to a modern big data platform like DataStax Enterprise helps future-proof an application and provides confidence that the system will scale and perform well both now and into a demanding future.

For more information on DataStax Enterprise and Cassandra, or for downloads of DataStax Enterprise, which may be freely used for development purposes, visit the DataStax website.

About DataStax

DataStax powers the big data applications that transform business for more than 300 customers, including startups and 20 of the Fortune 100. DataStax delivers a massively scalable, flexible, and continuously available big data platform built on Apache Cassandra. DataStax integrates enterprise-ready Cassandra, Apache Hadoop for analytics, and Apache Solr for search across multiple data centers and in the cloud. Companies such as Adobe, Healthcare Anytime, eBay, and Netflix rely on DataStax to transform their businesses. Based in San Mateo, Calif., DataStax is backed by industry-leading investors Lightspeed Venture Partners, Crosslink Capital, and Meritech Capital Partners. For more information, visit the DataStax website or follow the company on Twitter.
Appendix A: FAQ on Switching from MySQL to DataStax Enterprise/Cassandra

This appendix supplies answers to frequently asked questions about migrating from Oracle's MySQL to DataStax Enterprise/Cassandra.

Do I lose transaction support when moving from MySQL to Cassandra?

MySQL's InnoDB supplies ACID transaction support, whereas Cassandra provides AID transaction support. The "C," or consistency, part of transaction support does not apply to Cassandra, as there is no concept of referential integrity or foreign keys in a NoSQL database. There is also no concept of commit/rollback in Cassandra. Batch operations are supported in Cassandra via the BATCH option in CQL (see the sketch at the end of this group of questions).

What type of data consistency does DataStax Enterprise/Cassandra support?

DataStax Enterprise and Cassandra support tunable data consistency. This type of consistency is the kind represented by the "C" in the CAP theorem, which concerns distributed systems. Cassandra extends the concept of eventual consistency in NoSQL databases by offering tunable consistency. For any given read or write operation, the client application decides how consistent the requested data should be. Consistency levels can be set on any read or write query, which allows application developers to tune consistency on a per-query/operation basis depending on their requirements for response time versus data accuracy. Cassandra offers a number of consistency levels for both reads and writes.

What parts of my MySQL database cannot be migrated to DataStax Enterprise/Cassandra?

Schema, data, and general indexes may be migrated, but objects that currently cannot be migrated include:

- Stored procedures
- Views
- Triggers
- Functions
- Security privileges
- Referential integrity constraints
- Rules
- Partitioned table definitions

Do I need to use a MySQL caching layer (like memcached) with Cassandra?

No. Cassandra negates the need for extra software caching layers like memcached through its distributed architecture, fast write throughput, and internal memory caching structures. When more memory cache is needed for a cluster, nodes can simply be added and Cassandra handles the rest.
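As a concrete, hedged illustration of the BATCH construct and per-operation consistency mentioned above, the sketch below uses a hypothetical user_activity table. The CONSISTENCY command shown is a cqlsh convenience; older CQL versions instead accepted a USING CONSISTENCY clause directly on the statement.

-- Group several writes so they are sent to the cluster together
BEGIN BATCH
    INSERT INTO user_activity (user_id, event_time, event)
        VALUES (42, '2012-06-01 10:00:00', 'login');
    INSERT INTO user_activity (user_id, event_time, event)
        VALUES (42, '2012-06-01 10:05:00', 'page_view');
APPLY BATCH;

-- Request QUORUM consistency for subsequent operations issued from cqlsh
CONSISTENCY QUORUM;
SELECT * FROM user_activity WHERE user_id = 42;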
Is data absolutely safe in Cassandra?

Yes. First, data durability is fully supported in Cassandra, so any data written to a database cluster is first written to a commit log, in the same fashion as nearly every popular RDBMS. Second, Cassandra offers tunable data consistency, which means a developer or administrator can choose how strong they wish consistency across nodes to be. The strongest form of consistency mandates that any data modification be made to all nodes, with any unsuccessful attempt on a node resulting in a failed data operation. Used this way, Cassandra provides consistency in the CAP sense, in that all readers will see the same values.

How is data written and stored in Cassandra?

Cassandra has been architected to consume large amounts of data as fast as possible. To accomplish this, Cassandra first writes new data to a commit log to ensure it is safe. After that, the data is written to an in-memory structure called a memtable. Cassandra deems the write successful once it is stored in both the commit log and a memtable, which provides the durability required for mission-critical systems.

Once a memtable's memory limit is reached, its contents are written to disk in the form of an SSTable (sorted strings table). An SSTable is immutable, meaning it is never written to again. If the data contained in an SSTable is modified, the new data is written to Cassandra in an upsert fashion and the previous data is automatically removed. Because SSTables are immutable and are only written once the corresponding memtable is full, Cassandra avoids random seeks and instead performs only sequential I/O in large batches, resulting in high write throughput. A related factor is that Cassandra doesn't have to do a read as part of a write (i.e., check an index to see where the current data is). This means that insert performance remains high as data size grows, while with b-tree-based engines (e.g., MongoDB) it deteriorates.

What kind of query language is provided in Cassandra? Is it like SQL in MySQL?

Cassandra supplies the Cassandra Query Language (CQL), which is very SQL-like. Queries are done via the standard SELECT command, while DML operations are accomplished via the familiar INSERT, UPDATE, DELETE, and TRUNCATE commands. DDL commands such as CREATE are used to create new keyspaces and column families. Although CQL has many similarities to SQL, it does not change the underlying Cassandra data model; there is no support for JOINs, for example.
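As a brief, hedged illustration of how familiar CQL feels coming from SQL, the sketch below creates a keyspace and table and runs the basic DML statements. The object names are illustrative only, and the exact keyspace replication syntax varies with the Cassandra/CQL version in use.

-- Define a keyspace (the replication clause shown is the newer map-style syntax)
CREATE KEYSPACE shop
    WITH REPLICATION = { 'class': 'SimpleStrategy', 'replication_factor': 3 };

USE shop;

-- Create a table (column family) and exercise the familiar DML commands
CREATE TABLE products (
    product_id int PRIMARY KEY,
    name       text,
    price      decimal
);

INSERT INTO products (product_id, name, price) VALUES (1, 'widget', 9.99);
UPDATE products SET price = 8.99 WHERE product_id = 1;
SELECT name, price FROM products WHERE product_id = 1;
DELETE FROM products WHERE product_id = 1;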
Appendix B: Comparing MySQL and DataStax Enterprise/Cassandra

This technical appendix provides brief comparisons between Oracle's MySQL and DataStax Enterprise/Cassandra.

General Comparisons

Feature/Function | MySQL | DataStax Enterprise/Cassandra
Platform support | Linux, Windows, Solaris, Unix, Mac | Linux, Windows, Mac
Data model | Relational/tabular | Google Bigtable
Primary data object | Table | Column family
Data variety support | Primarily structured | Structured, semi-structured, unstructured
Data partitioning/sharding model | Manual in general MySQL; automatic in MySQL Cluster | Automatic
Logical database container | Database | Keyspace
Indexes | Primary, secondary, clustered, full-text | Primary, secondary
Distribution architecture | Master/slave, or synchronous replication with MySQL Cluster | Peer-to-peer with replication
CAP consistency model | Synchronous with MySQL Cluster | Tunable consistency per operation
Multi-data center support | Basic | Multi-data center and cloud, with rack awareness
Transaction support | ACID | AID (no C, as there are no foreign keys)
Memory usage model | General caches, query cache, main memory option with MySQL Cluster | Distributed object/row caches across all nodes in a cluster
Core language | SQL | CQL
Primary query utilities | mysql command line client | CQL shell; CLI
Development language support | Many (e.g., Java, Python) | Many (e.g., Java, Python)
Large data volume support (TBs+) | Low TBs, done with third-party storage engines | Native, TB-PB support
Data compression | Built into some storage engines | Built in
Analytic support | Some analytic functions | Done via Hadoop (MapReduce, Hive, Pig, Mahout)
Search support | Full-text indexes | Done via Solr integration
Geospatial support | Spatial extensions | Done via Solr integration
Logging (e.g., web, application) data support | Nothing built in | Handled via log4j
Mixed workload support | Must separate/ETL data between OLTP, analytic, and search systems | All handled in one cluster with built-in workload isolation
Backup/recovery | Online, point-in-time restore | Online, point-in-time restore
Enterprise management/monitoring | MySQL Enterprise Monitor | DataStax OpsCenter
Datatype Comparisons

CQL datatype | MySQL datatype | Description
blob | blob | Arbitrary hexadecimal bytes (no validation)
ascii | char | US-ASCII character string
text, varchar | text, varchar | UTF-8 encoded string
varint | numeric | Arbitrary-precision integer
int, bigint | int, bigint | 8-byte long
uuid | (none) | Type 1 or type 4 UUID
timestamp | timestamp | Date plus time, encoded as 8 bytes since epoch
boolean | bit | true or false
float | float | 4-byte floating point
double | double | 8-byte floating point
decimal | decimal | Variable-precision decimal
counter | (none) | Distributed counter value (8-byte long)
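As a hedged illustration of how these datatypes line up in practice, the sketch below pairs a hypothetical MySQL table definition with one possible CQL equivalent; the table and column names are assumptions for illustration only.

-- MySQL definition of a hypothetical sensor readings table
CREATE TABLE sensor_readings (
    device_name   VARCHAR(100),
    reading_time  TIMESTAMP,
    temperature   FLOAT,
    is_ok         BIT(1),
    payload       BLOB,
    PRIMARY KEY (device_name, reading_time)
);

-- One possible CQL equivalent, using the mappings from the table above
CREATE TABLE sensor_readings (
    device_name  text,
    reading_time timestamp,
    temperature  float,
    is_ok        boolean,
    payload      blob,
    PRIMARY KEY (device_name, reading_time)
);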
WHITE PAPER Oracle NoSQL Database and SanDisk Offer Cost-Effective Extreme Performance for Big Data 951 SanDisk Drive, Milpitas, CA 95035 www.sandisk.com Table of Contents Abstract... 3 What Is Big Data?...
On- Prem MongoDB- as- a- Service Powered by the CumuLogic DBaaS Platform
On- Prem MongoDB- as- a- Service Powered by the CumuLogic DBaaS Platform Page 1 of 16 Table of Contents Table of Contents... 2 Introduction... 3 NoSQL Databases... 3 CumuLogic NoSQL Database Service...
Oracle Database 12c Plug In. Switch On. Get SMART.
Oracle Database 12c Plug In. Switch On. Get SMART. Duncan Harvey Head of Core Technology, Oracle EMEA March 2015 Safe Harbor Statement The following is intended to outline our general product direction.
Benchmarking Cassandra on Violin
Technical White Paper Report Technical Report Benchmarking Cassandra on Violin Accelerating Cassandra Performance and Reducing Read Latency With Violin Memory Flash-based Storage Arrays Version 1.0 Abstract
Constructing a Data Lake: Hadoop and Oracle Database United!
Constructing a Data Lake: Hadoop and Oracle Database United! Sharon Sophia Stephen Big Data PreSales Consultant February 21, 2015 Safe Harbor The following is intended to outline our general product direction.
Replicating to everything
Replicating to everything Featuring Tungsten Replicator A Giuseppe Maxia, QA Architect Vmware About me Giuseppe Maxia, a.k.a. "The Data Charmer" QA Architect at VMware Previously at AB / Sun / 3 times
NoSQL and Hadoop Technologies On Oracle Cloud
NoSQL and Hadoop Technologies On Oracle Cloud Vatika Sharma 1, Meenu Dave 2 1 M.Tech. Scholar, Department of CSE, Jagan Nath University, Jaipur, India 2 Assistant Professor, Department of CSE, Jagan Nath
Making Sense ofnosql A GUIDE FOR MANAGERS AND THE REST OF US DAN MCCREARY MANNING ANN KELLY. Shelter Island
Making Sense ofnosql A GUIDE FOR MANAGERS AND THE REST OF US DAN MCCREARY ANN KELLY II MANNING Shelter Island contents foreword preface xvii xix acknowledgments xxi about this book xxii Part 1 Introduction
W H I T E P A P E R. Deriving Intelligence from Large Data Using Hadoop and Applying Analytics. Abstract
W H I T E P A P E R Deriving Intelligence from Large Data Using Hadoop and Applying Analytics Abstract This white paper is focused on discussing the challenges facing large scale data processing and the
HADOOP SOLUTION USING EMC ISILON AND CLOUDERA ENTERPRISE Efficient, Flexible In-Place Hadoop Analytics
HADOOP SOLUTION USING EMC ISILON AND CLOUDERA ENTERPRISE Efficient, Flexible In-Place Hadoop Analytics ESSENTIALS EMC ISILON Use the industry's first and only scale-out NAS solution with native Hadoop
Enabling Database-as-a-Service (DBaaS) within Enterprises or Cloud Offerings
Solution Brief Enabling Database-as-a-Service (DBaaS) within Enterprises or Cloud Offerings Introduction Accelerating time to market, increasing IT agility to enable business strategies, and improving
Database Management System Choices. Introduction To Database Systems CSE 373 Spring 2013
Database Management System Choices Introduction To Database Systems CSE 373 Spring 2013 Outline Introduction PostgreSQL MySQL Microsoft SQL Server Choosing A DBMS NoSQL Introduction There a lot of options
Oracle Database - Engineered for Innovation. Sedat Zencirci Teknoloji Satış Danışmanlığı Direktörü Türkiye ve Orta Asya
Oracle Database - Engineered for Innovation Sedat Zencirci Teknoloji Satış Danışmanlığı Direktörü Türkiye ve Orta Asya Oracle Database 11g Release 2 Shipping since September 2009 11.2.0.3 Patch Set now
Cassandra vs MySQL. SQL vs NoSQL database comparison
Cassandra vs MySQL SQL vs NoSQL database comparison 19 th of November, 2015 Maxim Zakharenkov Maxim Zakharenkov Riga, Latvia Java Developer/Architect Company Goals Explore some differences of SQL and NoSQL
NoSQL Databases. Nikos Parlavantzas
!!!! NoSQL Databases Nikos Parlavantzas Lecture overview 2 Objective! Present the main concepts necessary for understanding NoSQL databases! Provide an overview of current NoSQL technologies Outline 3!
