An Immediate Approach to Balancing Nodes of Binary Search Trees




Chung-Chih Li
Dept. of Computer Science, Lamar University, Beaumont, Texas, USA

Abstract

We present an immediate approach in the hope of bridging the gap between the difficulties of learning ordinary binary search trees and their height-balanced variants in the typical data structures class. Instead of balancing the heights of the two children, we balance the numbers of their nodes. The concepts and programming techniques required for the node-balanced tree are much easier than those for the AVL tree or any other height-balanced alternative. We also provide a very easy analysis showing that the height of a node-balanced tree is bounded by c log n with c → 1. Pedagogically, as simplicity is our main concern, our approach is worthwhile to introduce in order to smooth students' learning process. Moreover, our experimental results show that node-balanced binary search trees may also be considered for practical use under some reasonable conditions.

1 Introduction

In a typical Data Structures syllabus for undergraduates (CS3), the binary search tree seems to be the first real challenge for students; it is introduced after students have become familiar with linked lists and have a preliminary understanding of recursion. About two weeks (equivalently, six class hours) are then needed for the general concepts of binary search trees, including the basic data structures, searching, traversals, insertion/deletion, and some preliminary analysis. It certainly is not easy to manage all these concepts and the necessary programming techniques in six hours of lecturing, especially when students coming out of CS1 and CS2 are not yet well trained to manage complicated algorithms under today's programming paradigm for beginners. Unfortunately, this is not yet enough. It makes only little sense to learn binary search trees without knowing how to balance them, given the fact that a completely random data set is a strongly theoretical assumption. There are many alternatives to the ordinary binary search tree.
Many textbooks (from old textbooks [10, 6] to more recent ones like [5, 9]) introduce the AVL tree 1 as a supplement to binary search trees. This is indeed a natural choice, as students realize that the advantage of binary search trees over linked lists disappears when the height grows out of control. To keep a binary search tree balanced, there are two problems to solve: (i) how to determine which node is out of balance, and (ii) how to reconstruct the tree when needed. The standard solution suggested by the AVL tree is to use the four rotations (RR, LL, RL, and LR) in accordance with the balance factors (the difference between the heights of a node's left child and right child) along the path of insertion/deletion. While the standard solution is not too difficult for an average student to understand, the implementation is involved. However, detailed implementations are often omitted from most textbooks or by many instructors due to time constraints. Students are on their own to program their AVL trees. This may be reasonable because, presumably, students should have sufficient programming skill from CS1 and CS2 to handle the details. But this may not be the case in reality. Pedagogically, we should be aware of the potential difficulties students may face. In their implementations, students may use very inefficient code to determine the balance factor, e.g., recursively computing the heights of the two children, which forces the program to traverse the entire tree. Students may introduce back-links in order to travel back to the parent to determine the type of rotation to perform, but the back-links make it more difficult to write correct code for the rotations. Deleting a node from an AVL tree is even worse; many textbooks simply omit this important operation from their texts, e.g., [9].

1 The AVL tree is a classic method for constructing height-balanced binary search trees, and it is the method most often discussed in the classroom.
There are certainly many other alternatives, e.g., the Red-Black tree. Our arguments apply equally well to those alternatives.

Apart from the implementations, some analyses are necessary to help students appreciate binary search trees and AVL trees; but these are not straightforward either. Many analyses are out of the scope of the course. For example, the expected height of a random binary search tree is asymptotically O(log n). The proof requires some nontrivial mathematical background that is more suitable for the algorithm analysis class (see [2] for the detailed proof). Moreover, the exact bound on the expected height of a random binary search tree is much more complicated; it has been an active research topic with some fundamental questions still open (e.g., see [4, 7]). With these frustrations, as we have observed, balancing binary search trees is unfortunately the turning point at which students begin dropping the course and changing their majors to ones that demand less technical content. This usually happens in the sixth week 2 of the semester, when students are just beginning to appreciate linked lists and binary search trees. 3 To smooth this learning difficulty in the classroom, an easier way to adjust unbalanced binary search trees is needed. Two papers with this aim appeared in SIGCSE 2005. In [8] Okasaki suggests a simple alternative to Red-Black trees, but his method cannot be extended to deletion. In [1] Barnes et al. present Balanced-Sample Binary Search Trees, hoping to improve the balance of binary search trees without demanding too many extra techniques. While their method is indeed simple and straightforward, the experimental results do not show significant improvement; only about 12% improvement in heights on random data. Before our investigation began, we had made the following observation: when asked what to do about an unbalanced binary search tree, students usually respond intuitively that we should move some nodes from the bigger child (the one with more nodes in it) to the smaller one.
If we approve this why-not idea and give them some time, it is not hard for students to figure out how to implement it. In fact, moving a node from one child to the other is the easiest operation for reconstructing binary search trees. Pedagogically, we should let students practice this operation before rushing them into more advanced techniques. Therefore, we polish and analyze this intuitive idea and present what we call the Node-balanced Binary Search Tree in this paper.

2 Node-balanced Binary Search Trees

We first fix some notations. For every node α in a binary tree, we simply use α to denote the subtree rooted at α. Let α_n and α_h denote the number of nodes in α and the height of α, respectively. Let α-l and α-r denote the left child and right child of α. Similarly, let α-l_n and α-r_n denote the numbers of nodes in α-l and α-r, and α-l_h and α-r_h their heights. The null node is denoted by ∅, which also represents the empty tree. For α ≠ ∅, we use α.data to refer to the data in α. Let α ← t mean to assign t to α.data, and let α ↔ t mean to swap the values of α.data and t.

Definition 1 For k ≥ 1, we say that a binary search tree is node-balanced with degree k if, for every node α in the tree, we have

    −α_n/k ≤ α-l_n − α-r_n ≤ α_n/k.    (1)

For convenience, we use NB_k to denote a node-balanced binary search tree with degree k. Clearly, when k = 1, NB_1 is simply the ordinary binary search tree. We use BST to denote the ordinary binary search tree for the time being. To maintain an NB_k, we need to keep track of the numbers of nodes in the left child and right child. Thus, we add an integer field to every node; this is the only extra information needed. To adjust an unbalanced tree, the operation for NB_k is simply shifting a node from the bigger child to the smaller one. The function shift_to_left(α), which shifts a node from α-r to α-l, is shown in Figure 1.
The symmetric counterpart, shift_to_right(α), can be obtained easily by taking t to be the maximum data in α-l and exchanging α-l and α-r in the two function calls. Along the path from the root travelled during an insertion/deletion operation on an NB_k, whenever imbalance is found at a node α, an appropriate shifting operation is taken immediately. This is different from the AVL tree, where the rotation must be performed at the last unbalanced node in the path. Thus, there is no need to keep back-links in the nodes or to maintain a stack of nodes for travelling backward through the insertion/deletion path. Consequently, the algorithms for insertion and deletion are straightforward. If we find that inserting a node into a subtree would give the subtree too many nodes, we shift out one node first. Similarly,

2 The timing is based on the assumption that the topic of sorting techniques is not included or is introduced in the second half of the semester after the topic of trees is finished.
3 Most students can get by Stacks and Queues with a little help.
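As a concrete illustration, the augmented node and the insertion procedure can be sketched in C++, the language used for the experiments later in the paper. This is our own sketch, not the paper's code: the degree k is fixed as a constant K, the names (Node, sz, shiftToLeft, etc.) are ours, and the shifting helpers remove only the minimum/maximum of a subtree without the rebalancing pass that a full delete() would perform.

```cpp
#include <algorithm>
#include <cassert>

static const int K = 2;          // balance degree k of NB_k (assumed fixed)

struct Node {
    int data;                    // α.data
    int n;                       // α_n, the size of this subtree
    Node *l, *r;                 // α-l and α-r
    explicit Node(int d) : data(d), n(1), l(nullptr), r(nullptr) {}
};

int sz(Node* a) { return a ? a->n : 0; }

Node* insert(Node* a, int t);    // forward declaration (used by the shifts)

int minData(Node* a) { while (a->l) a = a->l; return a->data; }
int maxData(Node* a) { while (a->r) a = a->r; return a->data; }

// Remove the minimum (resp. maximum) node of a non-empty subtree,
// updating the size counts on the way back up.
Node* deleteMin(Node* a) {
    if (!a->l) { Node* r = a->r; delete a; return r; }
    a->l = deleteMin(a->l);
    a->n--;
    return a;
}
Node* deleteMax(Node* a) {
    if (!a->r) { Node* l = a->l; delete a; return l; }
    a->r = deleteMax(a->r);
    a->n--;
    return a;
}

// shift_to_left(α) of Figure 1: move one node from α-r to α-l.
void shiftToLeft(Node* a) {
    int t = minData(a->r);        // 1. t := minimum data in α-r
    a->l = insert(a->l, a->data); // 2. push α.data down into α-l
    a->data = t;                  // 3. α ← t
    a->r = deleteMin(a->r);       // 4. remove t from α-r
}
void shiftToRight(Node* a) {      // the symmetric counterpart
    int t = maxData(a->l);
    a->r = insert(a->r, a->data);
    a->data = t;
    a->l = deleteMax(a->l);
}

// insert(α, t): rebalance immediately on the way down, then recurse.
// The null checks on the source child are our guard for tiny subtrees
// (relevant only when K exceeds the subtree size).
Node* insert(Node* a, int t) {
    if (!a) return new Node(t);
    a->n++;                       // t will end up somewhere in this subtree
    if (t < a->data) {            // t goes to α-l
        if (a->l && sz(a->l) - sz(a->r) + 1 > a->n / K) {
            shiftToRight(a);
            if (t > a->data) std::swap(a->data, t);  // α ↔ t
        }
        a->l = insert(a->l, t);
    } else {                      // t goes to α-r
        if (a->r && sz(a->r) - sz(a->l) + 1 > a->n / K) {
            shiftToLeft(a);
            if (t < a->data) std::swap(a->data, t);  // α ↔ t
        }
        a->r = insert(a->r, t);
    }
    return a;
}

int height(Node* a) {             // number of nodes on the longest path
    return a ? 1 + std::max(height(a->l), height(a->r)) : 0;
}
```

Even on fully sorted input, where an ordinary BST degenerates into a list of height n, this sketch keeps the height logarithmic while touching only nodes on the insertion path.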

shift_to_left(α):
1. Find t, the minimum data in α-r;
2. insert(α-l, α.data);              // Insert α.data into α-l
3. α ← t;
4. delete(α-r, t).                   // Delete t from α-r

insert(α, t):
1. If (α = ∅) then
   1) new α;
   2) α ← t;
   3) return α.
2. If (t < α.data) then              // t goes to α-l
   1) If (α-l_n − α-r_n + 1 > α_n/k) then
      (1) shift_to_right(α);
      (2) If (t > α.data) then α ↔ t;
   2) α-l = insert(α-l, t);
   else                              // t goes to α-r
   1) If (α-r_n − α-l_n + 1 > α_n/k) then
      (1) shift_to_left(α);
      (2) If (t < α.data) then α ↔ t;
   2) α-r = insert(α-r, t);
3. return α.

delete(α, t):
1. If (α = ∅) then return α.
2. If (t < α.data) then              // Search α-l for t
   1) α-l = delete(α-l, t);
   2) If (α-r_n − α-l_n + 1 > α_n/k) then shift_to_left(α);
   3) return α.
3. If (t > α.data) then              // Search α-r for t
   1) α-r = delete(α-r, t);
   2) If (α-l_n − α-r_n + 1 > α_n/k) then shift_to_right(α);
   3) return α.
4. If (α_n = 1) then                 // t found at α
   delete α and return.              // α is a leaf
5. If (α-l_n > α-r_n) then
   1) Find β, the maximum node in α-l;
   2) α ↔ β.data and delete(α-l, β.data);
   else
   1) Find β, the minimum node in α-r;
   2) α ↔ β.data and delete(α-r, β.data);
6. return α.

Figure 1: Algorithms for insertion and deletion

if we find that deleting a node from a subtree would leave the subtree with too few nodes, we shift in one node. In Figure 1, insert(α, t) is a function that inserts data t into subtree α, and delete(α, t) deletes a node with data t, if any, from subtree α. Both functions return the resulting tree. We implement the two functions in a recursive fashion. For deletion, since we do not know whether the data to be deleted actually exists in the tree, the balance is checked after the deletion. See Figure 1 for details.

3 Analysis and Experiments

The analysis of NB_k is very easy too; no particularly advanced mathematical background is needed. Suppose α is a node-balanced tree with balance degree k and α_n = n. Let a be the number of nodes in the smaller child. According to (1) in the definition of NB_k, in the worst case we have a + (a + n/k) = n − 1 ≈ n.
Thus, a = (k − 1)n/(2k). In other words, the number of nodes in the bigger child of α is bounded by a + n/k = n(k + 1)/(2k). Let h be the depth of a tree (the root is at depth 0). In the worst case, the upper bound on the number of nodes in a subtree at depth h is n((k + 1)/(2k))^h. Clearly, when

    ((k + 1)/(2k))^h < 1/n    (2)

there will be no node left. From (2), we have h(log(k + 1) − log(2k)) < −log n.

In other words, when

    h > log n / (log(2k) − log(k + 1))    (3)

there will be no nodes left for a further subtree. Therefore, the height of an NB_k with n nodes is bounded by c log n, where c → 1 when k is sufficiently large. For the BST, it is clear that inserting n nodes in the worst case will result in a tree of height n. For the average case (i.e., the random binary search tree, meaning the data to be inserted arrive in random order), the expected height is O(log n). The proof of this asymptotic result is comprehensible for upper-level undergraduates, but obtaining the precise upper bound turns out to be an academic research topic. Here we display some results for comparison. Devroye and Reed [3] show that the expected height of a random binary search tree is bounded by

    α log n + O(log log n), where α = 4.31107...    (4)

For the AVL tree, the analysis is easier. By solving a recurrence relation, we obtain the following result:

    log(n + 1) ≤ h < α log(n + 2) − β, where α = 1.4402, β = 0.32824,    (5)

where h is the height of an AVL tree of n nodes (a detailed proof can be found in [5]). Comparing (3), (4), and (5), it is clear that we can construct a better binary search tree with NB_k. Also note that, since our analysis is based on the worst case, NB_k is robust to permutations of the data, i.e., the order of the data's arrival does not substantially affect the height of an NB_k. Even for small k, the improvement is still significant. For example, when k = 2, we have 1/(log(2k) − log(k + 1)) = 2.4094, which is about a 44% improvement over the BST on random data. When k = 4, the result is about the same as the AVL tree's. In the next subsection, we show the experimental results of constructing NB_k trees on some semi-random data.

Homework Exercises

There are some immediate questions suitable for homework. We list some of them in the following without detailed discussion, due to space constraints.
These questions are not trivial, yet they are not too difficult for students to solve on their own.

1. For an NB_k with n nodes in total, how many nodes will be visited to shift a node at the root, on average (or in the best/worst case)?
2. On average (or in the worst case), how many shift operations will be taken to insert/delete a node to/from an NB_k with n nodes in total?
3. Establish the relation between k in NB_k and l in AVL_l, where l is the maximum difference between the heights of the two children in the tree.

3.1 Experimental Results

To assess the computational cost of constructing NB_k trees, we implemented a class for NB_k with the necessary methods in MS Visual C++ on a PC with a 2.66 GHz Pentium(R) 4 CPU and 1 GB of RAM under MS Windows XP. A node is dynamically allocated (using new) for each newly arrived data item. The necessary destructors are implemented to destroy trees that have been tested and are out of scope, to avoid swapping memory during another test. Also, we use 100% CPU power to run our programs. Each data item is a struct with three integer fields for three key values; the three keys determine the order of the data. Since whether or not the data set is completely random is not our real concern, we simply use rand() to generate pseudo-random keys. Unless we manipulate the data set on purpose, the chance of having duplicate data among 1.2 million nodes is slim.

Data Random and Unique: Data_RU

The set Data_RU contains 1.2 million distinct records, uniformly distributed, in random order. Our first experiment constructs BST, AVL, NB_2, and NB_8 trees using Data_RU. The results are compared in Figure 2. Using the heights of the BST as the standard, we observe that NB_2 improves them by about 35%, while the AVL tree improves them by about 50%. 4 Increasing the balance degree k to 4 (not shown in Figure 2 for clarity), the heights come close to the AVL tree's. When k = 8, the heights are slightly better than the AVL tree's and very close to optimal.
Regarding the computational cost, our results show that NB_2 and the BST on Data_RU are about the same, which is about 20% faster than the AVL tree. NB_8 runs slower, but not by much, and NB_8 trees have heights that are almost optimal.

4 Comparing (4) and (5), we observe that the rate of improvement is not as good as the theoretical value. This is because our BST is already well balanced due to the use of rand().
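The constants compared in (3), (4), and (5) are easy to verify numerically. The following small check is our addition (the helper heightConstant is a name we introduce, not from the paper); it evaluates the coefficient c(k) = 1/(log₂(2k) − log₂(k+1)) from bound (3) for a few values of k.

```cpp
#include <cmath>

// c(k) from bound (3): the height of an NB_k stays below c(k) * log2(n).
// As k grows, c(k) -> 1.
double heightConstant(int k) {
    return 1.0 / (std::log2(2.0 * k) - std::log2(k + 1.0));
}
// c(2)   ~ 2.4094   (the paper's 44% improvement over the random BST)
// c(4)   ~ 1.4748   (about the AVL constant 1.4402)
// c(100) ~ 1.0146
```

These values match the paper's discussion: k = 2 already beats the random-BST constant 4.31107, and k = 4 is roughly on par with the AVL bound.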

[Figure 2: Using Data_RU — heights and construction times (seconds) of BST, AVL, NB_2, and NB_8 trees versus the number of nodes, 0.1 to 1.2 million; all keys are distinct.]

[Figure 3: Using Data_RD — heights (log scale) and construction times (seconds, log scale) versus the number of nodes, 0.1 to 1.2 million; each key has n/100 duplicates.]

Data Random and Duplicate: Data_RD

In applications, there may be cases where we do not have enough key space to assign a distinct key to each record. For example, there is no need to distinguish among postal mails sent to the same region with the same zip code in a processing center. To reflect this situation, we create another data set, Data_RD, with 1.2 million records in which each key has 12,000 duplicates distributed uniformly in the data set. In other words, each key in a set of n records drawn from Data_RD is expected to have n/100 duplicates. In this situation, the BST is totally out of control (see Figure 3). NB_k performs much better than the BST even when k is as small as 2.
While NB_k can be constructed much faster than the BST (the time needed is only about 6% of the time needed for constructing the BST), it is about 5 times slower to construct than the AVL tree.

Data Sorted in Batches: Data_SB

It may not be realistic to assume that the data to be inserted are uniformly distributed and completely random. There is another common situation in real applications, in which data arrive in batches. The batches may arrive randomly, but each contains some sorted records; instead of processing whole batches, we are asked to insert each record as an independent node into a binary search tree. One can imagine a postal mail processing center receiving mails batch by batch, each batch coming from one zip code region; moreover, the mails in each batch are sorted by their regional office. To simulate this situation, we generate a data set named Data_SB, also containing 1.2 million records. The size of each batch is fixed at 32. To generate a batch, we first generate a record (with 3 keys) as before and set its third key to 0. Then we generate another 31 records with the first two keys identical to the first record's and the third keys set to 1, 2, ..., 31 in order.
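The batch generation just described can be sketched as follows. This is our reconstruction under stated assumptions: the struct name Record, the helper makeDataSB, and the key fields k1, k2, k3 are hypothetical names; the paper only specifies three integer keys per record, rand() as the key source, and batches of 32 with the third key running 0..31.

```cpp
#include <cstdlib>
#include <vector>

struct Record { int k1, k2, k3; };

// Generate `batches` batches of 32 records each: the first two keys are
// pseudo-random and shared by the whole batch; the third key runs 0..31,
// so every batch arrives pre-sorted.
std::vector<Record> makeDataSB(int batches) {
    std::vector<Record> data;
    data.reserve(batches * 32);
    for (int b = 0; b < batches; ++b) {
        int k1 = std::rand();             // keys shared by the whole batch
        int k2 = std::rand();
        for (int i = 0; i < 32; ++i)
            data.push_back({k1, k2, i});  // third key sorts the batch
    }
    return data;                          // batches * 32 records in order
}
```

Inserting the returned records one by one into a binary search tree then reproduces the batched, locally sorted arrival pattern of Data_SB.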

[Figure 4: Using Data_SB — heights (log scale) and construction times (seconds) versus the number of nodes, 0.1 to 1.2 million; data arrives in batches of size 32.]

Figure 4 shows the results. The speed of constructing the BST is acceptable, but the heights of the resulting trees are out of control. As we expected, NB_2 and NB_8 can perfectly manage the heights, but their construction times become slow; only NB_2 runs within an acceptable range.

4 Conclusion

Based on our experimental results, NB_k can be used under some controlled conditions (like the data in Data_RU and Data_RD), but in general it is no replacement for classic balancing techniques such as the AVL tree. Our main purpose in introducing NB_k is pedagogical. Compared to any other balancing method found in the literature, our node-balanced binary search tree is the easiest. Thus, we believe that our approach is an immediate step to take after students have learned binary search trees, and that it should be considered a missing intermediate step towards a mature yet complicated technique for balancing binary search trees.

References

[1] G. M. Barnes, J. Noga, P. D. Smith, and J. Wiegley. Experiments with balanced-sample binary trees. SIGCSE 2005, 37(1):166-170, March 2005.
[2] T. H. Cormen, C. E. Leiserson, and R. L. Rivest. Introduction to Algorithms. The MIT Press, Cambridge, Massachusetts, 1989.
[3] L. Devroye and B. Reed. On the variance of the height of random binary search trees. SIAM J. Comput., 24(6):1157-1162, 1995.
[4] M. Drmota. An analytic approach to the height of binary search trees. Algorithmica, 29(1):89-119, 2001.
[5] A. Drozdek. Data Structures and Algorithms in C++. Thomson Course Technology, third edition, 2005.
[6] E. Horowitz and S. Sahni. Fundamentals of Data Structures in Pascal. Computer Science Press, Inc., 1984.
[7] B. Manthey and R. Reischuk. Smoothed analysis of the height of binary search trees. Electronic Colloquium on Computational Complexity, (60), 2005.
[8] C. Okasaki. Alternatives to two classic data structures. SIGCSE 2005, 37(1):162-165, March 2005.
[9] M. A. Weiss. Data Structures and Problem Solving Using Java. Addison Wesley, third edition, 2006.
[10] N. Wirth. Algorithms + Data Structures = Programs. Prentice-Hall, Inc., 1976.