Distributed Lucene: A distributed free text index for Hadoop

Mark H. Butler and James Rutherford
HP Laboratories
HPL-2008-64

Keyword(s): distributed, high availability, free text, parallel, search

Abstract: This technical report describes a parallel, distributed free text index written at HP Labs called Distributed Lucene. Distributed Lucene is based on two Apache open source projects, Lucene and Hadoop. It was written to gain a better understanding of the Apache Hadoop architecture, which is derived from work at Google on creating large, distributed, high availability systems from commodity components.

External Posting Date: June 7, 2008 [Fulltext] - Approved for External Publication
Internal Posting Date: June 7, 2008 [Fulltext]

Copyright 2008 Hewlett-Packard Development Company, L.P.

Distributed Lucene: A distributed free text index for Hadoop

Mark H. Butler, Enterprise Informatics Laboratory
James Rutherford, Web Services and Systems Laboratory

May 27, 2008

Abstract

This technical report describes a parallel, distributed free text index written at HP Labs called Distributed Lucene. Distributed Lucene is based on two Apache open source projects, Lucene and Hadoop. It was written to gain a better understanding of the Apache Hadoop architecture, which is derived from work at Google on creating large, distributed, high availability systems from commodity components.

Keywords: distributed, high availability, free text, parallel, search

1 Introduction

Hadoop is an open source Apache Software Foundation project, sponsored by Yahoo! [had] and led by Doug Cutting. It attempts to replicate the proprietary software infrastructure that Google has developed to support applications requiring high scalability, such as web search. It provides components to support MapReduce parallel processing [DG], a distributed file system called HDFS inspired by the Google File System [GGL03], and a distributed database called HBase based on a Google database called BigTable [CDG+06].

Given the origins of Hadoop, it is very natural that it should be used as the basis of web search engines. It is currently used in Apache Nutch [nut], an open source web crawler that creates the data set for a search engine. Nutch is often used with Apache Lucene, which provides a free text index [luc]. Doug Cutting is the lead on all three projects. Despite the link between Hadoop and Lucene, at the time of writing there is no easy, off the shelf way to use Hadoop to implement a parallel search engine with an architecture similar to that of the Google search engine [BDH03]. However, in 2006 Doug Cutting came up with an initial design for creating a distributed free text index using Hadoop and Lucene [Cut]. This technical report describes work at HP Labs to implement a distributed free text index based on this design. The work was undertaken in order to better understand the architectural style used in Hadoop.

Since this work commenced, two other external open source projects have started work on the same problem. Katta [kat] is a distributed free text index using Hadoop, with contributions from 101tec. Bailey [bai] does not use Hadoop, but it is led by Doug Cutting, with contributions from Yahoo! and IBM, so it is clearly influenced by Hadoop.

1.1 Hadoop architectural style

A good starting point for understanding some important aspects of the architectural design used in Hadoop is the Hadoop Distributed File System, HDFS [Bor08]. An HDFS cluster consists of a name node and one or more racks of data nodes. HDFS is designed to store very large files by splitting them into a sequence of blocks, typically 64 MB in size. The blocks are distributed across the cluster and replicated for fault tolerance. Typically a replication factor of three is used, with replicas distributed between two racks in order to guard against rack failure as well as data node failure. HDFS was also designed to target a specific type of application: one that writes data once but reads it many times, at streaming speeds.

The data nodes store HDFS blocks in files in their local file systems. A data node has no knowledge of HDFS files, as this information is held on the name node. When a data node starts up, it scans its local file system and generates a list of the HDFS data blocks that correspond to these local files. This information is called a BlockReport. The name node has an in-memory data structure called FsImage that contains the entire file system namespace and maps files onto blocks. It also keeps a log file called EditLog on disc that records all the transactions since FsImage was last written to disc. At startup, changes from EditLog are incorporated into FsImage and the updated version is written back to disc. The name node also makes all decisions regarding the replication of blocks: it tracks which blocks need to be replicated and initiates replication whenever necessary.

HDFS is designed in such a way that user data never flows through the name node, by using a client library that caches metadata from the master and then performs as much computation as possible on the client. The data nodes send regular heartbeats to the name node so that the name node can detect data node failure. If the name node does not receive heartbeats from a data node for a predetermined period, it marks the data node as dead and does not forward any new read, write or replication requests to it. The heartbeat message includes the BlockReport from the data node. By design, the name node never initiates any remote procedure calls (RPCs). Instead, it only responds to RPC requests issued by data nodes or clients, and it replies to heartbeats with replication requests for the specific data node. Hadoop uses its own RPC protocol, which requires custom serialization and deserialization code to be written for objects. This approach allows the serialization format to be very efficient in comparison to standard Java serialization.
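To make the last point concrete, the following sketch shows what custom serialization looks like under Hadoop's Writable interface. The BlockInfo class and its fields are purely illustrative and are not part of HDFS; only the Writable interface itself comes from Hadoop.

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Writable;

// Hypothetical example of an object serialized with Hadoop's Writable
// interface rather than standard Java serialization.
public class BlockInfo implements Writable {
  private long blockId;
  private long length;

  public BlockInfo() {}                       // no-arg constructor needed for deserialization

  public BlockInfo(long blockId, long length) {
    this.blockId = blockId;
    this.length = length;
  }

  public void write(DataOutput out) throws IOException {
    // Fields are written in a fixed order; no class metadata is sent,
    // which keeps the wire format compact.
    out.writeLong(blockId);
    out.writeLong(length);
  }

  public void readFields(DataInput in) throws IOException {
    // Fields must be read back in exactly the same order.
    blockId = in.readLong();
    length = in.readLong();
  }
}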

1.2 Distributed Lucene

Although the code for distributed Lucene is heavily influenced by HDFS and was written by examining the HDFS code, it does not use HDFS directly. It does reuse the Hadoop IPC layer and some code to model network topology. There are four reasons for the decision not to use HDFS. First, in HDFS it is not possible for multiple clients to write to the same index; here it is desirable for multiple clients to be able to access the same index, in order to parallelize index creation. Second, Lucene indexes generally contain a number of different files, some of which may be smaller than the 64 MB block size of HDFS, so storing them in HDFS may not be efficient. Third, creating an implementation specifically for Lucene gives the author a better understanding of the Hadoop architectural style. Fourth, in the future it might be desirable to have an abstraction layer in Hadoop that can be used to implement different storage and parallel processing services; by implementing distributed Lucene separately from HDFS, it is possible to start to understand what such a layer might look like and which software components could be shared by these services.

Also, on first examination, the fact that HDFS is write once, read many times would appear to be a problem, as any modification to an index requires a new copy of the index. However, because the design specified by Doug Cutting [Cut] uses a versioning mechanism that requires a new version of an index to be made when changes are applied, it would be possible to implement this design on a write-once store.

One of the design goals of distributed Lucene was to avoid storing metadata on the name node. In HDFS, the name node stores FsImage and EditLog, which means that when the name node fails, switching over to another name node is more complicated because the new name node requires these files. Here, the aim is to keep primary metadata on the data nodes, not the name node, so that name node metadata can be recovered automatically from the data nodes.

1.2.1 Basic design

Each data node contains information about all the versions of an index it stores. These versions are represented by an identifier known as an IndexVersion, which consists of an index name and a version number. Because there may be several replicas of a specific IndexVersion, there is another identifier known as an IndexLocation, which also contains the socket address of the data node that stores the replica and indicates the state of the index, i.e. whether it is uncommitted, replicating or live.

To understand the design, we will start with the basic client API. In order to simplify the implementation, this API does not implement sharding; instead sharding is done by a higher level client API, and the mechanism for this will be explained later. As shown in Figure 1, a client can call the name node to get metadata about the cluster or to get an identifier for a random data node. A client can call a data node, as shown in Figure 2, to create new indexes, add or remove documents from indexes, commit indexes, search indexes, add one index to another, and determine the number of documents in an index. When a client adds documents to or removes documents from an index, this creates a new uncommitted IndexVersion, which needs to be explicitly committed in order to be searched. Only one uncommitted IndexVersion can be open for any index at once.

When a new index version is created, or a data node fails, this changes the replication state of the cluster. This will be detected by the name node at the next heartbeat, and it will schedule replication tasks in order to maintain the required level of replication. At startup, all the data nodes send heartbeats to the name node. The heartbeat contains information about the status of the data node, the IndexLocations of all the indexes it holds, and any leases it holds. The name node stores this information in order to manage the cluster.

Lucene makes it easy to compare two different versions of the same index and determine what has changed, because it adds files to an index to store changes. This means that when indexes are replicated, if the receiving node has an older copy of the index it is not necessary to transmit the entire index between data nodes, because it just needs the changes that have happened since the latest version it holds.
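As a rough illustration, the two identifiers introduced above might be modelled along the following lines. The field names, constructors and the State enumeration are assumptions made for this sketch; only the concepts of an IndexVersion and an IndexLocation come from the design.

import java.net.InetSocketAddress;

// Minimal sketch of the two identifiers described in Section 1.2.1.
class IndexVersion {
  final String indexName;
  final int version;

  IndexVersion(String indexName, int version) {
    this.indexName = indexName;
    this.version = version;
  }

  IndexVersion nextVersion() {
    // Committing an update produces a new, higher version of the same index.
    return new IndexVersion(indexName, version + 1);
  }
}

class IndexLocation {
  enum State { UNCOMMITTED, REPLICATING, LIVE }

  final IndexVersion indexVersion;
  final InetSocketAddress dataNode;   // which data node holds this replica
  final State state;

  IndexLocation(IndexVersion indexVersion, InetSocketAddress dataNode, State state) {
    this.indexVersion = indexVersion;
    this.dataNode = dataNode;
    this.state = state;
  }
}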

/** Interface between client and namenode. */
public interface ClientToNameNodeProtocol extends VersionedProtocol {

  /** The version of the protocol. */
  long VERSION_ID = 1L;

  /**
   * Get the location of all indexes that can be searched.
   * @return All searchable indexes managed by this namenode.
   */
  IndexLocation[] getSearchableIndexes();

  /** @return Get a new datanode at random. */
  String getDataNode();
}

Figure 1: ClientToNameNodeProtocol

1.2.2 Leases

In order for multiple clients to update a file concurrently, it is important that all changes go to a single replica. [GGL03] describes a mechanism called leases, used in the Google File System to solve this problem. When a client contacts a data node to update an index, the data node tries to acquire a lease from the name node. If it fails, it returns an error message to the client indicating that it cannot obtain the lease, and the client then needs to try other replicas until it finds the one holding the lease. If it succeeds, the client can modify the index, because the data node has the lease. Data nodes have to apply for lease extensions as part of heartbeating, so that if a data node fails its lease becomes available again for other nodes. Consequently, any data that was written to the index prior to the failure will be lost, so the client needs to take responsibility for uncommitted data.

Unfortunately, because the name node holds the leases, this breaks the design goal of not having any state on the name node, making name node failover more difficult. However, on closer examination there is a simple solution: the new name node can simply wait for all leases to expire, which causes all currently running transactions to fail, so clients would need to recover any metadata. As clients need to guard against failed transactions anyway, this is not a significant problem. Katta [kat] uses a framework developed at Yahoo! called ZooKeeper [zoo] rather than a home-grown approach to leases. ZooKeeper is inspired by work at Google on a system called Chubby that acts as a lock server and shared namespace [Bur06, CGR07].
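From the client's point of view, this scheme amounts to a simple retry loop, as in the sketch below. ClientToDataNodeProtocol and WDocument are the types from Figure 2, whereas LeaseDeniedException and the list of replicas are assumptions made purely for illustration.

import java.io.IOException;
import java.util.List;

// Sketch of the client-side retry described above: try each replica in turn
// until the data node holding the lease is found.
class UpdateWithLeaseRetry {

  /** Assumed error type raised when a data node cannot obtain the lease. */
  static class LeaseDeniedException extends IOException {}

  static void addDocument(List<ClientToDataNodeProtocol> replicas,
                          String index, WDocument doc) throws IOException {
    for (ClientToDataNodeProtocol dataNode : replicas) {
      try {
        dataNode.addDocument(index, doc);    // succeeds only on the lease holder
        return;
      } catch (LeaseDeniedException denied) {
        // This replica could not acquire the lease; try the next one.
      }
    }
    throw new IOException("No replica of " + index + " holds the lease");
  }
}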

/** Interface between client and datanode. */
public interface ClientToDataNodeProtocol extends VersionedProtocol {

  /** The version of the protocol. */
  long VERSION_ID = 1L;

  /**
   * Add a document to a particular index.
   * @param index The index.
   * @param doc The document.
   * @throws IOException
   */
  void addDocument(String index, WDocument doc) throws IOException;

  /**
   * Remove documents that match a specific term from an index.
   * @param index The index.
   * @param term The term.
   * @return The number of documents removed.
   * @throws IOException
   */
  int removeDocuments(String index, WTerm term) throws IOException;

  /**
   * Commit a specific index.
   * @param index The index.
   * @return The IndexVersion of the committed index.
   * @throws IOException
   */
  IndexVersion commitVersion(String index) throws IOException;

  /**
   * Create a new index.
   * @param index The index.
   * @return The IndexVersion of the new index.
   * @throws IOException
   */
  IndexVersion createIndex(String index) throws IOException;

  /**
   * Add the contents of an index to another index.
   * @param index The index.
   * @param indexToAdd The location of the index to add.
   * @throws IOException
   */
  void addIndex(String index, IndexLocation indexToAdd) throws IOException;

  /**
   * Search a specific index, returning the top n hits ordered by sort.
   * @param i The index to search.
   * @param query The query.
   * @param sort The sort to apply to results.
   * @param n The maximum number of hits to return.
   * @return The results.
   * @throws IOException
   */
  SearchResults search(IndexVersion i, WQuery query, WSort sort, int n) throws IOException;

  /**
   * The number of documents in an index.
   * @param index The index.
   * @return The size.
   * @throws IOException
   */
  int size(String index) throws IOException;
}

Figure 2: ClientToDataNodeProtocol

1.2.3 Data nodes and name nodes

Next, we will consider the API used by data nodes and name nodes. As already noted, name nodes never initiate RPC communication, so there is an API for data nodes to call data nodes, and one for data nodes to call name nodes. There are three methods that data nodes can use to call name nodes, as shown in Figure 3: heartbeating, requests for leases, and requests to relinquish leases when transactions complete.

/** Datanode to namenode protocol. */
public interface DataNodeToNameNodeProtocol extends VersionedProtocol {

  /** The version of the protocol. */
  long VERSION_ID = 1L;

  /**
   * Send a heartbeat message to the namenode.
   * @param status The status of the datanode.
   * @param searchableIndexes The indexes on the datanode.
   * @param leases The leases owned by the datanode.
   * @return The indexes to replicate.
   * @throws IOException
   */
  HeartbeatResponse heartbeat(DataNodeStatus status, IndexLocation[] searchableIndexes,
      Lease[] leases) throws IOException;

  /**
   * Get a lease to be the primary replica for a specific index.
   * @param index The index requiring the lease.
   * @return The lease.
   * @throws LeaseException
   */
  Lease getLease(IndexLocation index) throws IOException;

  /**
   * Relinquish a lease.
   * @param lease The lease.
   * @return Was the operation successful.
   */
  public boolean relinquishLease(Lease lease) throws IOException;
}

Figure 3: DataNodeToNameNodeProtocol

The data node to data node API has two methods, as shown in Figure 4: one to find out which files are associated with a particular IndexVersion, and the other to retrieve a specific file associated with an IndexVersion. These methods support replication.

Data nodes have three types of threads: one to service requests, one to send heartbeats to the master to inform it that the worker is alive, and one to process replication tasks. Name nodes have two types of threads: one to service requests, and the other to perform failure detection and compute a replication plan. A subset of this plan is then sent back to each data node in response to its heartbeat.

1.2.4 Sharding

Like leases, on first examination sharding seems to require metadata to be stored on the name node, hence making name node failover more complicated. To avoid this, the decision was taken to perform sharding in the client library. This is done by adopting a simple naming convention for shards: when an index is sharded, a hyphen and a number are appended to the name, so for example the index myindex might have the following shards:

myindex-1
myindex-2
myindex-3
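A client library following this convention only needs string manipulation to work with shards. The helper below is a sketch of what that might look like; both the helper and its round-robin policy are assumptions for illustration, not part of the distributed Lucene code.

import java.util.ArrayList;
import java.util.List;

// Sketch of client-side shard name handling under the "index-N" convention.
class ShardNames {

  /** Return the shard names for a base index, e.g. "myindex" with 3 shards. */
  static List<String> shardsOf(String index, int shardCount) {
    List<String> shards = new ArrayList<String>();
    for (int i = 1; i <= shardCount; i++) {
      shards.add(index + "-" + i);          // myindex-1, myindex-2, myindex-3
    }
    return shards;
  }

  /** Pick a shard for the n-th document added, spreading documents round-robin. */
  static String shardForDocument(String index, int shardCount, long docSequence) {
    return index + "-" + (1 + (docSequence % shardCount));
  }
}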

/** Datanode to datanode protocol. */
public interface DataNodeToDataNodeProtocol extends VersionedProtocol {

  /** The version of the protocol. */
  long VERSION_ID = 1L;

  /**
   * Get the list of files used by this index.
   * @param indexVersion The index version of the index.
   * @return A list of files used in that index.
   * @throws IOException
   */
  String[] getFileSet(IndexVersion indexVersion) throws IOException;

  /**
   * Get a particular file used in a Lucene index.
   * @param indexVersion The index.
   * @param file The file.
   * @return The file content.
   * @throws IOException
   */
  byte[] getFileContent(IndexVersion indexVersion, String file) throws IOException;
}

Figure 4: DataNodeToDataNodeProtocol

This way the cluster knows nothing about sharding, as it is all handled by the client library, which hides the underlying sharding from users: a sharded index looks like a single index to the user, although shards have to be added to an index manually. Performing sharding in the client library considerably simplifies the implementation.

In order to better understand how this works, it is useful to look at the CachedClient API. CachedClient, shown in Figure 5, has five methods: one to create an index and indicate whether it is sharded (more shards can be added to an index by calling create multiple times), one to get an IndexUpdater to modify an index, one to determine the number of documents in an index, one to query an index, and one to get a list of all indexes available on the cluster. The IndexUpdater API is shown in Figure 6 and has methods to add documents, remove documents and commit an update.

1.2.5 Details

The distributed Lucene client library needs to use threading to query shards, as querying each shard individually would be very slow. Fortunately, org.apache.hadoop.ipc.RPC provides a method that makes a threaded, parallel call to a set of nodes, as shown in Figure 7. It takes the following arguments: the method to call, the calling parameters as an array, the socket addresses as an array, and the Hadoop configuration. This method is very useful and simplified the client library considerably.
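As an illustration of how the client library might use this facility, the sketch below searches every shard in parallel using the signature shown in Figure 7. It assumes that each entry in the parameter array is the argument list for the corresponding address, and that the shard addresses and index versions are already known; the results would still need to be cast to SearchResults and merged.

import java.lang.reflect.Method;
import java.net.InetSocketAddress;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.ipc.RPC;

// Sketch of a parallel search over all shards using the call method of Figure 7.
class ParallelSearchSketch {
  static Object[] searchAll(InetSocketAddress[] shardAddrs, IndexVersion[] shardVersions,
                            WQuery query, WSort sort, int n, Configuration conf)
      throws Exception {
    Object[][] params = new Object[shardAddrs.length][];
    for (int i = 0; i < shardAddrs.length; i++) {
      // Each shard is asked for its own IndexVersion with the same query.
      params[i] = new Object[] { shardVersions[i], query, sort, Integer.valueOf(n) };
    }
    Method search = ClientToDataNodeProtocol.class.getMethod(
        "search", IndexVersion.class, WQuery.class, WSort.class, int.class);
    // One call per address, made in parallel threads; results come back in the same order.
    return RPC.call(search, params, shardAddrs, conf);
  }
}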

public interface ICachedClient {

  /**
   * Create an index or add a new shard to an index.
   * @param index The index name.
   * @param sharded Is the index sharded.
   * @throws IOException
   */
  public void createIndex(String index, boolean sharded) throws IOException;

  /**
   * Get an IndexUpdater to write to indexes.
   * @param index The index name.
   * @return An IndexUpdater object.
   */
  public IIndexUpdater getIndexUpdater(String index);

  /**
   * Get the size of an index.
   * @param index The index name.
   * @return The size of the index.
   * @throws IOException
   */
  public int size(String index) throws IOException;

  /**
   * Search an index.
   * @param index The index.
   * @param query The query.
   * @param sort The order of results.
   * @param n Number of results.
   * @return The results of the query.
   * @throws IOException
   */
  public SearchResults search(String index, Query query, Sort sort, int n) throws IOException;

  /** @return All the index names. */
  public String[] getIndexes();
}

Figure 5: CachedClient

public interface IIndexUpdater {

  /**
   * Add a document to an index.
   * @param doc The document.
   * @throws IOException
   */
  public void addDocument(Document doc) throws IOException;

  /**
   * Remove documents from an index.
   * @param term The search term.
   * @return The number of documents removed.
   * @throws IOException
   */
  public int removeDocuments(Term term) throws IOException;

  /**
   * Commit changes to an index.
   * @throws IOException
   */
  public void commit() throws IOException;
}

Figure 6: IndexUpdater

/** Expert: Make multiple, parallel calls to a set of servers. */
public static Object[] call(Method method, Object[][] params,
    InetSocketAddress[] addrs, Configuration conf)

Figure 7: Call method in org.apache.hadoop.ipc.RPC
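Putting Figures 5 and 6 together, a client session might look like the following sketch. How the ICachedClient instance is obtained is not specified here and is assumed; the Document, Field, Term, Query and Sort classes are used as in Lucene 2.x.

import java.io.IOException;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.Term;
import org.apache.lucene.search.Sort;
import org.apache.lucene.search.TermQuery;

// Hypothetical session against the CachedClient and IndexUpdater APIs.
class ClientSessionSketch {
  static void run(ICachedClient client) throws IOException {
    client.createIndex("myindex", true);          // create a sharded index
    client.createIndex("myindex", true);          // calling create again adds a shard

    IIndexUpdater updater = client.getIndexUpdater("myindex");
    Document doc = new Document();
    doc.add(new Field("title", "Distributed Lucene",
        Field.Store.YES, Field.Index.TOKENIZED));
    updater.addDocument(doc);                     // goes into an uncommitted IndexVersion
    updater.commit();                             // make the new version searchable

    SearchResults results = client.search("myindex",
        new TermQuery(new Term("title", "lucene")), new Sort(), 10);
    int size = client.size("myindex");            // number of documents across all shards
  }
}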

1.2.6 Current limitations

Currently the code for distributed Lucene is alpha quality, and at the time of writing a number of items of functionality are missing. First, sorting does not work on sharded indexes, as the client library should sort the results obtained from the data nodes; since each result set from a data node will already be sorted, an n-way merge sort would be an efficient algorithm (a sketch of such a merge is given at the end of Section 1.2.7). Second, there is no thread that deletes old versions of indexes; this should be done after a predetermined time, as in HDFS. Third, although the code can use Lucene's RAM based indexes for testing, this is of no use for a real cluster as the indexes are no longer persistent; clearly it would be useful to cache frequently queried indexes in RAM. Fourth, HDFS provides a throttler to prevent a single client using all available bandwidth; there is no equivalent of a throttler in distributed Lucene. Fifth, although replication will use older index versions to reduce replication data transfer, the replica assignment algorithm should take advantage of this by scheduling replicas on nodes that already hold some of the index data; currently assignment is purely random. Sixth, there are no benchmarks on index performance. Finally, HDFS uses a chained approach to data replication: when a client wants to write a block, it retrieves from the name node a list of data nodes that will host replicas of that block, then sends the data block to the first data node, which saves the data and forwards it to the second replica, and the same pipelining is used for the third replica. Distributed Lucene does not use pipelining.

1.2.7 Comparison with other projects

Bailey [bai] is an open source project creating a scalable, distributed document database, led by Doug Cutting. The design seems to be strongly influenced by Amazon Dynamo [Vog, dyn]. A hash of each document is calculated and mapped onto a document identifier hash space that is arranged as a ring. To achieve three way replication, the document is also stored on the node after and the node before its position on the ring; that way, if the node fails, or if a new node is added, the document is still available. Replication then occurs to balance the location of documents in the cluster. Like distributed Lucene, it performs replication using differences to minimize network traffic.

Katta [kat] is an open source project created by 101tec to provide a distributed Lucene index. At the time of writing it seems to be the most mature of the three projects, but unlike distributed Lucene or Bailey it does not yet allow on-line updates to the underlying Lucene indexes. In contrast to distributed Lucene, it stores the Lucene indexes in HDFS and uses ZooKeeper [zoo] to perform locking and to distribute metadata about the cluster, rather than using a heartbeating mechanism.

However, it must be noted that all these projects are at a very early stage of development, and are likely to change substantially in the future.
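Returning to the first limitation in Section 1.2.6, the n-way merge of sorted per-shard result sets could be implemented with a priority queue, as in the following sketch. The generic element type and comparator are assumptions; the report's SearchResults type is not used here.

import java.util.ArrayList;
import java.util.Comparator;
import java.util.Iterator;
import java.util.List;
import java.util.PriorityQueue;

// Sketch of an n-way merge of already-sorted per-shard result lists.
class ResultMerger {

  /** One queue entry: the next value from one shard's result list. */
  private static class Head<T> {
    final T value;
    final Iterator<T> rest;
    Head(T value, Iterator<T> rest) { this.value = value; this.rest = rest; }
  }

  /** Merge sorted lists, returning the first n results overall. */
  static <T> List<T> merge(List<List<T>> sortedLists, final Comparator<T> order, int n) {
    PriorityQueue<Head<T>> queue = new PriorityQueue<Head<T>>(
        Math.max(1, sortedLists.size()), new Comparator<Head<T>>() {
          public int compare(Head<T> a, Head<T> b) {
            return order.compare(a.value, b.value);
          }
        });
    for (List<T> list : sortedLists) {
      Iterator<T> it = list.iterator();
      if (it.hasNext()) {
        queue.add(new Head<T>(it.next(), it));
      }
    }
    List<T> merged = new ArrayList<T>(n);
    while (merged.size() < n && !queue.isEmpty()) {
      Head<T> smallest = queue.poll();            // overall next result
      merged.add(smallest.value);
      if (smallest.rest.hasNext()) {              // refill from the same shard
        queue.add(new Head<T>(smallest.rest.next(), smallest.rest));
      }
    }
    return merged;
  }
}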

1.2.8 Conclusions

This report describes work implementing a distributed free text index using Hadoop and Lucene. There is still work to be done in order to reach production quality code. In addition, there are two other open source projects working on the same problem. This shows that there is clearly interest in this area, although it is the authors' belief that splitting community effort between three different projects is undesirable, so one goal should be to see whether there are any possibilities for collaboration between these projects.

References

[bai] Bailey. http://www.sourceforge.net/projects/bailey.

[BDH03] L. A. Barroso, Jeffrey Dean, and U. Hölzle. Web search for a planet: The Google cluster architecture. IEEE Micro, pages 22-28, March-April 2003.

[Bor08] Dhruba Borthakur. The Hadoop Distributed File System: Architecture and design. Document on the Hadoop Wiki, 2008.

[Bur06] Mike Burrows. The Chubby lock service for loosely-coupled distributed systems. In OSDI '06: Proceedings of the 7th USENIX Symposium on Operating Systems Design and Implementation, pages 24-24, Berkeley, CA, USA, 2006. USENIX Association.

[CDG+06] Fay Chang, Jeffrey Dean, Sanjay Ghemawat, Wilson C. Hsieh, Deborah A. Wallach, Mike Burrows, Tushar Chandra, Andrew Fikes, and Robert E. Gruber. Bigtable: A distributed storage system for structured data. In OSDI '06: Proceedings of the 7th Symposium on Operating Systems Design and Implementation, pages 205-218, Berkeley, CA, USA, 2006. USENIX Association.

[CGR07] Tushar D. Chandra, Robert Griesemer, and Joshua Redstone. Paxos made live: An engineering perspective. In PODC '07: Proceedings of the Twenty-Sixth Annual ACM Symposium on Principles of Distributed Computing, pages 398-407, New York, NY, USA, 2007. ACM Press.

[Cut] Doug Cutting. Proposal: index server project. Email message on the Lucene-General mailing list.

[DG] Jeffrey Dean and Sanjay Ghemawat. MapReduce: Simplified data processing on large clusters. Pages 137-150.

[dyn] Amazon Dynamo. http://www.allthingsdistributed.com/2007/10/amazons_dynamo.html.

[GGL03] Sanjay Ghemawat, Howard Gobioff, and Shun-Tak Leung. The Google file system. In SOSP '03: Proceedings of the Nineteenth ACM Symposium on Operating Systems Principles, pages 29-43, New York, NY, USA, 2003. ACM Press.

[had] Hadoop. http://hadoop.apache.org/core/.

[kat] Katta. http://www.sourceforge.net/projects/katta/.

[luc] Lucene. http://lucene.apache.org/java/docs/index.html.

[nut] Nutch. http://lucene.apache.org/nutch/.

[Vog] Werner Vogels. Eventually Consistent. http://www.allthingsdistributed.com/2007/12/eventually_consistent.html.

[zoo] ZooKeeper. http://www.sourceforge.net/projects/zookeeper/.