Learning-Based Summarisation of XML Documents


Massih R. Amini, Nicolas Usunier
University Pierre and Marie Curie, 8, rue du capitaine Scott, 75015 Paris, France

Anastasios Tombros, Mounia Lalmas
Queen Mary, University of London, Department of Computer Science, London E1 4NS, United Kingdom

Abstract. Documents formatted in eXtensible Markup Language (XML) are available in collections of various document types. In this paper, we present an approach for the summarisation of XML documents. The novelty of this approach lies in that it is based on features not only from the content of documents, but also from their logical structure. We follow a machine learning, sentence extraction-based summarisation technique. To find which features are more effective for producing summaries, this approach views sentence extraction as an ordering task. We evaluate our summarisation model using the INEX and SUMMAC datasets. The results demonstrate that the inclusion of features from the logical structure of documents increases the effectiveness of the summariser, and that the learnable system is also effective and well-suited to the task of summarisation in the context of XML documents. Our approach is generic, and is therefore applicable, apart from entire documents, to elements of varying granularity within the XML tree. We view these results as a step towards the intelligent summarisation of XML documents.

1 Introduction

With the growing availability of online text resources, it has become necessary to provide users with systems that obtain answers to queries in a manner which is both efficient and effective. In various information retrieval (IR) tasks, single document text summarisation (SDS) systems are designed to help users to quickly find the needed information [9, 24]. For example, SDS can be coupled with conventional search engines and help users to evaluate the relevance of documents [34] for providing answers to their queries.
The original problem of summarisation requires the ability to understand and synthesise a document in order to generate its abstract. However, different attempts to produce human-quality summaries have shown that this process of abstraction is highly complex, since it needs to borrow elements from fields such as linguistics, discourse understanding and language generation [23, 6]. Instead, most studies consider the task of text summarisation as the extraction of text spans (typically sentences) from the original document; scores are assigned to text units and the best-scoring spans are presented in the summary. These approaches transform the problem of abstraction into a simpler problem of ranking spans from an original text according to their relevance to be part of the document summary. This kind of summarisation is related to the task of document
retrieval, where the goal is to rank documents from a text collection with respect to a given query in order to retrieve the best matches. Although such an extractive approach does not perform an in-depth analysis of the source text, it can produce summaries that have proven to be effective [9, 24, 34]. To compute sentence scores, most previous studies adopt a linear weighting model which combines statistical or linguistic features characterising each sentence in a text [2]. In many systems, the set of feature weights is tuned manually; this may not be tractable in practice, as the importance of different features can vary for different text genres [4]. Machine Learning (ML) approaches within the classification framework have been shown to be a promising way to automatically combine sentence features [7, 32, 5, 2]. In such approaches, a classifier is trained to distinguish between two classes of sentences: summary and non-summary ones. The classifier is learnt by comparing its output to a desired output reflecting global class information. This framework is limited in that it makes the assumption that all sentences from different documents are comparable with respect to this class information. Here we explore an ML approach for SDS based on ranking. The main rationale of this approach is to learn how to best combine sentence features such that, within each document, summary sentences get higher scores than non-summary ones. This ordering criterion corresponds exactly to what the learnt function is used for, i.e. ordering sentences. The statistical features that we consider in this work are partly from the state of the art, and they include cue-phrases and positional indicators [2, 9], and title-keyword similarity [9]. In addition, we propose a new contextual approach based on topic identification to extract meaningful features from sentences. In this paper, we apply the ML approach for summarisation to XML documents.
The XML format is becoming increasingly popular [26], and this has caused a considerable interest in the content-based retrieval of XML documents, mainly through the INEX initiative [3]. In XML retrieval, document components, rather than entire documents, are retrieved. As the number of XML components is typically large (much larger than that of documents), it is essential to provide users of XML IR systems with overviews of the contents of the retrieved elements. The element summaries can then be used by searchers in an interactive environment. In traditional (i.e. non-XML) interactive information retrieval, a summary is usually associated with each document; in interactive XML retrieval, a summary can be associated with each retrieved XML component. Because of the nature of XML documents, users can also browse within the XML document containing that element. One method to facilitate browsing is to display the logical structure of the document containing the retrieved elements (e.g. in a Table of Contents format). In this way, summaries can also be associated with the other elements forming the document, in addition to the retrieved elements themselves [30]. The choice of the meaningful granularity of elements to be summarised is also currently being investigated [3], as some retrieved elements may simply be too short to be summarised. The summarisation of XML documents is also beginning to draw attention from researchers [, 20, 26, 30]. In our experiments we have considered sentences for extractive summarisation, so from now on, we will refer to sentences as the basic text-units to be extracted.
A major aim of this paper is to investigate the effectiveness of an XML summarisation approach that combines structural and content features to extract sentences for summaries. More specifically, a further novel feature of our work is that we make use of the logical structure of documents to enhance sentence characterisation. In XML documents, a tree-like structure, which corresponds to the logical structure of the source document, is encoded. For example, an article can be seen as the root of the tree, and sections, subsections and paragraphs can be arranged in branches and leaves of the tree. We select a number of features from this logical structure, and learn which features are the best predictors of summary-worthy sentences. The contributions of this work are therefore twofold: first, we propose and justify the effectiveness of a ranking algorithm, instead of the mostly used classification error criterion in ML approaches for SDS, and second, we investigate the summarisation of XML documents by taking into account features relating both to the content and the logical structure of the documents. The ultimate aim of our approach is to generate summaries for components of XML documents at any level in the logical structure hierarchy. Since at present the evaluation of such summaries is hard (due to the lack of appropriate resources), we consider an XML article to be an XML element, and we use its content and structure to learn how we can best summarise it. Our approach is sufficiently generic to be applied to a component at any level of the logical structure of an XML document. In the remainder of the paper, we first discuss, in Section 2, related work on ML approaches based on the classification framework and outline our ML approach for summarisation. In Section 3 we present the structural and content features that we use to represent sentences for this task. In Section 4 we outline our evaluation methodology.
In Section 5 we present the results of our evaluation using two datasets from the INitiative for the Evaluation of XML retrieval (INEX) [3] and the Computation and Language collection (cmp-lg) of TIPSTER SUMMAC [28]. Finally, in Section 6 we discuss the outcomes of this study and we also draw some pointers for the continuation of this research.

2 Trainable text summarisers

The purpose of this section is to present evidence that, for SDS, a ranking framework is better suited for the learning of a scoring function than a classification framework. To this end, we define two trainable text summarisers learnt using a classification and a ranking criterion, and show from the choice of these learning criteria why our proposition holds. In both cases, we aim to learn a scoring function h : R^n -> R which represents the best linear combination of sentence features according to the learning criterion in use under the supervised setting. We chose to use a simple linear combination of sentence features for two reasons. First, under the classification framework, it has been shown that simple linear classifiers like the Naive Bayes model [7] or a Support Vector Machine [5] perform as well as more complex non-linear classifiers [5]. Secondly, in order to compare fairly between the ranking and classification approaches, we fix the class of the scoring function (linear in our case) and consider two different
learning criteria developed under these two frameworks. The choice of the best ranking function class for SDS is beyond the scope of the paper. In the following, we first present the notations used in the rest of the paper and give a brief review of the classification framework for text summarisation, and then present the main motivation for using an alternative ML approach based on ordering criteria for this task.

2.1 Notations

We denote by D the collection of documents in the training set and assume that each document d in D is composed of a set of sentences², d = (s_k), k in {1, ..., |d|}, where |d| is the length of document d in terms of the number of sentences composing it. Each sentence s = (s_i), i in {1, ..., n}, is characterised by a set of n structural and statistical features that we present in Section 3. Without loss of generality, we assume that every feature is a positive real value for any sentence. Under the supervised setting, we suppose that a binary relevance judgment vector y = (y_k), y_k in {-1, 1}, k <= |d|, is associated to each document d; y_k indicates whether the sentence s_k in d belongs, or not, to the summary.

2.2 Text summarisation as a classification task

In this section, we present the classification framework for SDS, which is the most used learning scheme for this task in the literature. We first present a classification learning criterion related to the minimisation of the misclassification error, and then present a logistic classifier that we prove to be adequate for this optimisation.
Misclassification error rate. The working principle of classification approaches to SDS is to associate class label 1 to summary (or relevant) sentences, and class label -1 to non-summary (or irrelevant) ones, and to use a learning algorithm to discover for each sentence s the best combination weights of its features h(s), with the goal of minimising the error rate of the classifier (or its classification loss, denoted by L_C), that is, the expectation that a sentence is incorrectly classified by the output classifier:

    L_C(h) = E([[y h(s) < 0]])                                            (1)

where [[pr]] is equal to 1 if predicate pr holds and 0 otherwise. The computation of this expected error rate depends on the probability distribution from which each pair (sentence, class) is supposed to be drawn identically and independently. In practice, since this distribution is unknown, the true error rate cannot be computed exactly, and it is estimated over a labelled training set by the empirical error rate L̂_C given by

    L̂_C(h, S) = (1/|S|) Σ_{s in S} [[y h(s) < 0]]                          (2)

² Recall that in extractive summarisation, the summary of a document is made of a subset of its sentences.
where S represents the set of all sentences appearing in D. We notice here that sentences from different documents are comparable with respect to a global class information. A direct optimisation of the empirical error rate (equation 2) is not tractable, as this function is not differentiable. Schapire and Singer [25] motivated e^{-yh(s)} as a differentiable upper bound to [[yh(s) < 0]]. This follows because, for all x, e^{-x} >= [[x < 0]]. Figure 1 shows the graphs of these two misclassification error functions, as well as the log-likelihood loss function introduced below, with respect to yh; negative (positive) values of yh imply incorrect (correct) classification. The exponential and log-likelihood criteria are differentiable upper bounds of the misclassification error rate. These functions are also convex, so standard optimisation algorithms can be used to minimise them. Friedman et al. have shown in [2] that the function h minimising E(e^{-yh(s)}) is a logistic classifier whose output estimates p(y = 1 | s), the posterior probability of the class relevant given a sentence s.

Fig. 1. Misclassification, exponential and log-likelihood loss functions with respect to yh.

In many ML approaches, the optimisation criterion used to train a logistic classifier is the binomial log-likelihood function E log(1 + e^{-2yh(s)}). The reason is that, from a statistical point of view, e^{-yh(s)} does not correspond to the log of any probability mass function on ±1, as log(1 + e^{-2yh(s)}) does. Nevertheless, Friedman et al. have shown that the optimisation of both criteria is effective and that the population minimisers of E log(1 + e^{-2yh(s)}) and E(e^{-yh(s)}) coincide [2].
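As a concrete illustration of the quantities above, the empirical error rate of equation (2) and its exponential upper bound can be evaluated side by side. This is a minimal sketch: the linear scorer, its weights and the toy labelled sentences are invented for the example and are not from the paper.

```python
import math

def empirical_error(h, samples):
    """Fraction of (sentence, label) pairs misclassified by the scorer h.

    `samples` is a list of (feature_vector, label) pairs with labels in {-1, +1};
    a pair counts as an error when y * h(s) < 0, as in equation (2).
    """
    return sum(1 for s, y in samples if y * h(s) < 0) / len(samples)

def exponential_bound(h, samples):
    """Average of the differentiable upper bound e^{-y h(s)} on the 0/1 loss.

    Since e^{-x} >= [[x < 0]] pointwise, this is always >= the empirical error.
    """
    return sum(math.exp(-y * h(s)) for s, y in samples) / len(samples)

# Toy linear scorer over 2 features; weights are illustrative only.
h = lambda s: 0.5 * s[0] - 0.25 * s[1]
samples = [((1.0, 0.0), 1), ((0.0, 1.0), -1), ((0.2, 0.8), 1)]
```

On these toy samples the third sentence is misclassified, so the empirical error is 1/3, and the exponential bound dominates it, as the text states.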
6 6 Massih R. Amini Anastasios Tombros Nicolas Usunier Mounia Lalmas For the ranking case, we will aopt a similar logistic moel an show that the minimisation of the exponential loss has a real avantage over the logbinomial in terms of computational complexities (see Section 2.3). Logistic moel for classification For the classification case, we propose to learn the parameters Λ = (λ,..., λ n ) of the feature combination h(s) = n i= λ is i by training a logistic classifier whose output estimates p(relevant s) = in orer to +e 2h(s) minimise the empirical exponential boun estimate on the training set: L c exp(s; Λ) = S e y y {,} s S y n i= λisi (3) where S an S are respectively the set of relevant an irrelevant sentences in the training set S an S is the number of sentences in S. For the minimisation of L c exp, we employ an iterative scaling algorithm [7]. This proceure is shown in Algorithm. Starting from some arbitrary set of parameters Λ = (λ,..., λ n ), the algorithm fins iteratively a new set of parameters Λ + = (λ + δ,..., λ n + δ n ) that yiel a moel of lower L c exp. At every iteration t, the upate of each λ i in this algorithm is to take λ (t+) i where each δ (t) i, i {,..., n} satisfies δ (t) i λ (t) i + δ (t) i s i e h(s,λ = 2 log s S s i e h(s,λ s S We erive this upate rule in Appenix A. After convergence, sentences of a new ocument are ranke with respect to the output of the classifier, an those with the highest scores are extracte to form the summary of the ocument. An avantage of Algorithm is that its complexity is linear in the number of examples, times the total number of iterations ( S t). This is interesting, since the number of sentences in the training set is generally large. In the following, we introuce our ranking framework for SDS. (t) ) (t) ) 2.3 Text summarisation as an orering task The classification framework for SDS has several rawbacks. 
First, the assumption that all sentences from different documents are comparable with respect to a class information is not correct. Indeed, text summaries depend more on the content of their respective documents than on a global class information. Furthermore, due to the high number of irrelevant sentences, a classifier will typically achieve a low misclassification rate if, independently of where relevant sentences are ranked, it always assigns the class
Algorithm 1: Classification-Based Trainable Extractive Summariser

    Input: S = S_1 ∪ S_{-1}
    Initialise:
        Normalise each sentence vector s in S such that Σ_i s_i = 1
        Set the feature weights Λ^0 = (λ^0_1, ..., λ^0_n) to some arbitrary values
        t <- 0
    repeat
        for i <- 1 to n do
            λ_i^(t+1) <- λ_i^(t) + (1/2) log [ Σ_{s in S_1} s_i e^{-h(s,Λ^(t))} / Σ_{s in S_{-1}} s_i e^{h(s,Λ^(t))} ]
        end
        t <- t + 1
    until convergence of L^c_exp(S; Λ)
    Output: Λ^F. Create a summary for each new document d by taking the first sentences of d ranked by the output of the linear combination of sentence features with Λ^F

irrelevant to every sentence in the collection. Therefore, it is important to compare the relevance of each sentence with respect to the others within every document in the training set; in other words, to learn a ranking function that assigns higher scores to the relevant sentences of a document than to its irrelevant ones.

A framework for learning a ranking function for SDS. The problem of learning a trainable summariser based on ranking can be formalised as follows. For each document d in D, we denote by S^1_d and S^{-1}_d respectively the sets of relevant and irrelevant sentences appearing in d with respect to its summary. The ranking function can be represented by a function h that reflects the partial ordering of relevant sentences over irrelevant ones for each document in the training set. For a given document d, if we consider two sentences s and s' such that s is preferred over s' (s in S^1_d and s' in S^{-1}_d), then h ranks s higher than s':

    for all d in D, for all (s, s') in S^1_d × S^{-1}_d:  h(s) > h(s')

Finally, in order to learn the ranking function we need a relevance judgment describing which sentence is preferred to which one. This information is given by the binary judgments provided for the documents in the training set. For these documents, sentences belonging (or not) to the summary are labelled +1 (or -1).
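Algorithm 1 above can be sketched compactly in Python. This is an illustrative implementation, not the authors' code: convergence checking is simplified to a fixed number of iterations, feature vectors are assumed normalised so that their components sum to 1, and the toy data below is invented.

```python
import math

def linear_score(s, lam):
    """h(s, Λ) = Σ_i λ_i s_i, the linear combination of sentence features."""
    return sum(l * x for l, x in zip(lam, s))

def train_logistic_classifier(S_pos, S_neg, n_iters=20):
    """Iterative-scaling sketch of Algorithm 1 (classification criterion).

    S_pos / S_neg are the relevant (S_1) and irrelevant (S_{-1}) sentence
    vectors, normalised so that sum_i s_i = 1. Each pass applies the update
      λ_i += (1/2) log( Σ_{s in S_1} s_i e^{-h(s)} / Σ_{s in S_{-1}} s_i e^{h(s)} )
    which lowers the exponential bound L^c_exp of equation (3).
    """
    n = len(S_pos[0])
    lam = [0.0] * n
    for _ in range(n_iters):
        for i in range(n):
            num = sum(s[i] * math.exp(-linear_score(s, lam)) for s in S_pos)
            den = sum(s[i] * math.exp(linear_score(s, lam)) for s in S_neg)
            if num > 0 and den > 0:  # guard against zero feature mass
                lam[i] += 0.5 * math.log(num / den)
    return lam
```

On separable toy data the learnt weights score a summary-like sentence above a non-summary one, which is all the summariser needs at extraction time.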
Following [], we can define the goal of learning a ranking function h as the minimisation of the ranking loss L_R, defined as the average number of relevant sentences scored below irrelevant ones in every document in D:
    L_R(h, D) = (1/|D|) Σ_{d in D} (1/(|S^1_d| |S^{-1}_d|)) Σ_{s in S^1_d} Σ_{s' in S^{-1}_d} [[h(s) <= h(s')]]          (4)

Note that this formulation is similar to the misclassification error rate. The main difference is that, instead of classifying sentences as relevant/irrelevant for the summary, a ranking algorithm classifies pairs of sentences. More specifically, it considers the pairs of sentences (s, s') from the same document such that one of the two sentences is relevant. Learning a scoring function h which gives a higher score to the relevant sentence than to the irrelevant one is then equivalent to learning a classifier which correctly classifies the pair.

The Ranking Logistic Algorithm. Here we are interested in the design of an algorithm which allows (a) to find efficiently a function h in the family of linear ranking functions minimising equation (4), and (b) that this function generalises well on a given test set. In this paper we address the first problem, and provide empirical evidence for the performance of our ranking algorithm on different test sets. There exist several ranking algorithms in the ML literature, based on the perceptron [27] or AdaBoost (called RankBoost []). For the SDS task, as the total number of sentences in the collection may be high, we need a simple and efficient ranking algorithm. Perceptron-based ranking algorithms would lead to quadratic complexity in the number of examples, whereas the RankBoost algorithm in its standard setting does not search for a linear combination of the input features. In this paper, we consider the class of linear ranking functions

    for all d in D, for all s in d:  h(s, B) = Σ_{i=1}^n β_i s_i          (5)

where B = (β_1, ..., β_n) is the vector of weights of the ranking function that we aim to learn.
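The ranking loss of equation (4) is straightforward to compute directly. The sketch below is illustrative (one-dimensional toy sentences, an arbitrary scorer), mirroring the per-document normalisation of the equation.

```python
def ranking_loss(h, docs):
    """Equation (4): the fraction of mis-ordered (relevant, irrelevant) sentence
    pairs, averaged per document and then over the collection.

    `docs` is a list of (S_pos, S_neg) pairs: the summary and non-summary
    sentences of one document, each given as a list of feature vectors.
    """
    total = 0.0
    for S_pos, S_neg in docs:
        # count pairs where the irrelevant sentence scores at least as high
        miss = sum(1 for s in S_pos for sp in S_neg if h(s) <= h(sp))
        total += miss / (len(S_pos) * len(S_neg))
    return total / len(docs)
```

For instance, a collection where one document is perfectly ordered and another is fully inverted has a ranking loss of 0.5.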
Similar to the explanation given in Section 2.2, a logistic model adapted to ranking³:

    p(relevant | (s, s')) = 1 / (1 + e^{-2 Σ_{i=1}^n β_i (s_i - s'_i)})          (6)

is well suited for learning the parameters of the combination B by minimising an exponential upper bound on the ranking loss L_R (equation 4):

    L^r_exp(D; B) = (1/|D|) Σ_{d in D} (1/(|S^1_d| |S^{-1}_d|)) Σ_{(s,s') in S^1_d × S^{-1}_d} e^{-Σ_{i=1}^n β_i (s_i - s'_i)}          (7)

The interesting property of this exponential loss for ranking functions is that it can be computed in time linear in the number of examples, simply by rewriting equation (7) as follows:

    L^r_exp(D; B) = (1/|D|) Σ_{d in D} (1/(|S^1_d| |S^{-1}_d|)) ( Σ_{s' in S^{-1}_d} e^{Σ_{i=1}^n β_i s'_i} )( Σ_{s in S^1_d} e^{-Σ_{i=1}^n β_i s_i} )          (8)

³ The choice of linear ranking functions, in our case, makes it convenient to represent a pair of sentences (s, s') by the difference of their representative vectors, (s_1 - s'_1, ..., s_n - s'_n), as h(s) - h(s') becomes Σ_{i=1}^n β_i (s_i - s'_i).
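The factorisation of equation (8) can be checked numerically: the double sum over pairs splits exactly into a product of two single sums. A minimal sketch for one document (weights and sentence vectors are invented for the example):

```python
import math

def exp_rank_loss_pairs(B, S_pos, S_neg):
    """Naive O(|S^1| * |S^-1|) evaluation of the exponential ranking loss (eq. 7)
    for a single document."""
    h = lambda s: sum(b * x for b, x in zip(B, s))
    pairs = sum(math.exp(-(h(s) - h(sp))) for s in S_pos for sp in S_neg)
    return pairs / (len(S_pos) * len(S_neg))

def exp_rank_loss_factored(B, S_pos, S_neg):
    """Same quantity via equation (8): e^{-h(s)+h(s')} factors as
    e^{-h(s)} * e^{h(s')}, so the pair sum becomes a product of two single
    sums and the cost is linear in the number of sentences."""
    h = lambda s: sum(b * x for b, x in zip(B, s))
    pos = sum(math.exp(-h(s)) for s in S_pos)
    neg = sum(math.exp(h(sp)) for sp in S_neg)
    return pos * neg / (len(S_pos) * len(S_neg))
```

Both functions return the same value for any B, which is precisely why the exponential loss, unlike the log-binomial of equation (9), admits a linear-time evaluation.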
For the ranking case, this property makes it convenient to optimise the exponential loss rather than the corresponding binomial log-likelihood

    L^r_b(D; B) = (1/|D|) Σ_{d in D} (1/(|S^1_d| |S^{-1}_d|)) Σ_{(s,s') in S^1_d × S^{-1}_d} log(1 + e^{-2 Σ_{i=1}^n β_i (s_i - s'_i)})          (9)

Indeed, the computation of the maximum likelihood of equation (9) requires considering all the pairs of sentences, and leads to a complexity quadratic in the number of examples. Thus, although ranking algorithms consider pairs of examples, in the special case of SDS the proposed algorithm has a complexity linear in the number of examples, through the use of the exponential loss. For the optimisation of equation (8) we have employed the same iterative scaling procedure as in the classification case. We call our algorithm LinearRank; its pseudocode is shown in Algorithm 2, and its update rule (B^(t+1) <- B^(t) + Σ^(t)) is derived in Appendix B.

Algorithm 2: Ranking-Based Trainable Extractive Summariser (LinearRank)

    Input: {(S^1_d, S^{-1}_d)}_{d in D}
    Initialise:
        Normalise each sentence vector s such that Σ_i s_i = 1, i.e. for all i, s_i in [0, 1]
        Set the feature weights B^0 = (β^0_1, ..., β^0_n) to some arbitrary values
        t <- 0
    repeat
        for i <- 1 to n do
            β_i^(t+1) <- β_i^(t) + (1/2) log [ N_i^(t) / D_i^(t) ],  where
                N_i^(t) = Σ_{d in D} (1/(|S^1_d| |S^{-1}_d|)) Σ_{s in S^1_d} Σ_{s' in S^{-1}_d} e^{-h(s,B^(t))} e^{h(s',B^(t))} (1 + s_i - s'_i)
                D_i^(t) = Σ_{d in D} (1/(|S^1_d| |S^{-1}_d|)) Σ_{s in S^1_d} Σ_{s' in S^{-1}_d} e^{-h(s,B^(t))} e^{h(s',B^(t))} (1 - s_i + s'_i)
        end
        t <- t + 1
    until convergence of L^r_exp(D; B)
    Output: B^F. Create a summary for each new document d by taking the first sentences of d ranked by the linear combination of sentence features with B^F

The most similar work to ours is that of Freund et al. [], who proposed the RankBoost algorithm. In both cases the parameters of the combination are learnt by minimising a convex function. However, the main difference is that we propose here to learn a linear combination of the features by directly optimising equation (8), while RankBoost iteratively learns a non-linear combination of the features by adaptively resampling the training data.
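The LinearRank update of Algorithm 2 can be sketched as follows. This is an illustrative implementation, not the authors' code: for readability the pair sums are computed directly (the paper factorises them as in equation (8), so the real cost is linear in the number of sentences), convergence is replaced by a fixed iteration count, and the toy documents below are invented.

```python
import math

def train_linear_rank(docs, n_iters=20):
    """Iterative-scaling sketch of the LinearRank update (Algorithm 2).

    `docs` is a list of (S_pos, S_neg) per-document sentence sets, with
    feature vectors normalised so that their components sum to 1. Each
    update multiplies the pair weight e^{-h(s)} e^{h(s')} by (1 ± (s_i - s'_i))
    in the numerator and denominator, as in the update rule of Algorithm 2.
    """
    n = len(docs[0][0][0])
    B = [0.0] * n
    h = lambda s: sum(b * x for b, x in zip(B, s))  # closure over B
    for _ in range(n_iters):
        for i in range(n):
            num = den = 0.0
            for S_pos, S_neg in docs:
                w = 1.0 / (len(S_pos) * len(S_neg))  # per-document normalisation
                for s in S_pos:
                    for sp in S_neg:
                        pair = math.exp(-h(s)) * math.exp(h(sp))
                        num += w * pair * (1.0 + s[i] - sp[i])
                        den += w * pair * (1.0 - s[i] + sp[i])
            if num > 0 and den > 0:
                B[i] += 0.5 * math.log(num / den)
    return B
```

On toy data where relevant sentences load on the first feature, the learnt weights order every relevant sentence above its document's irrelevant ones, i.e. the ranking loss of equation (4) drops to zero.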
3 Summarising XML documents

In the following, we introduce the sentence features that we use as the input of the trainable summarisers defined in the previous section. Here, we take the logical structure of documents into account when producing summaries, as well as the content, and we learn an effective combination of features for summarisation. Although for evaluation purposes we use the INEX and SUMMAC collections, which contain scientific articles, our approach could apply to any documents formatted in XML where the logical structure is available. The summarisation of scientific texts through sentence extraction has been extensively studied in the past [33]. In our approach, we do not explicitly take advantage of the idiosyncratic nature of scientific articles, but rather propose a generic approach that is, in essence, genre-independent. In the next section, we present the specific details of our approach.

3.1 Document features for summarisation

In this section we outline the features of XML documents that we employed in our summarisation model.

Structural features. Past work on SDS (e.g. [9, 7]) has implicitly tried to take the structure of certain document types into account when extracting sentences. In [7], for example, the leading and trailing paragraphs in a document are considered important, and the position of sentences within these paragraphs is also recorded, and used, as a feature for summarisation. In our work, we move to an explicit use of structural features by taking into account the logical structure of XML documents. Our aim here is to investigate more precisely from which component of a document the summary is more likely to be generated. The structural features we use in our approach are:

1. The depth of the element in which the sentence is contained (e.g. section, subsection, subsubsection, etc.).
2. The sibling number of the element in which the sentence is contained (e.g. 1st, middle, last).
3.
The number of sibling elements of the element in which the sentence is contained.
4. The position, within the element, of the paragraph in which the sentence is contained (e.g. first, or not).

These features are generic, and can be applied to an entire document, or to components at any level of the XML tree that can be meaningfully summarised (i.e. components not too small to be summarised). These are just some of the features that can be used for modelling structural information; many of them have been considered, for example, in XML retrieval approaches (see [3]).

Content features. Terms contained in the title of a document have long been recognised as effective features for automatic summarisation [9]. Our basic content-only query (COQ) comprises the terms in the title of the document (Title query), as well as the title keywords augmented by the most frequent terms in the document (up to 10 such terms)
(Title-MFT query). The rationale of these approaches is that these terms should appear in sentences that are worthwhile including in summaries. The importance of title terms for SDS can also be extended to components of finer granularity (e.g. sections, subsections, etc.), by using the title of the document to find relevant sentences within any component, or, where appropriate, by using meaningful titles of components. Since the Title query may be very short, sentences similar to the title which do not contain title keyword terms will have a null similarity measure with the Title query. To overcome this problem, we have employed query-expansion techniques such as Local Context Analysis (LCA) [37] or thesaurus expansion methods (i.e. WordNet [10]), as well as a learning-based expansion technique. These three expansion techniques are described next.

Expansion via WordNet and LCA. From the Title query, we formed two other queries, reflecting local links between the title keywords and other words in the corresponding document:

- Title-LCA query: includes the keywords in the title of a document and the words that occur most frequently in the sentences that are most similar to the Title query according to the cosine measure.
- Title-WN query: includes the expanded title keywords and all their first-order synonyms using WordNet.

We use the cosine measure in order to compute a preliminary score between any sentence of a document and these four queries (Title, Title-MFT, Title-LCA, Title-WN). The scoring measure doubles the cosine score of sentences containing acronyms (e.g. HMM (Hidden Markov Models), NLP (Natural Language Processing)) or cue-terms (e.g. "in this paper", "in conclusion", etc.). The use of acronyms and cue phrases in summarisation has been emphasised in the past by [9, 7].

Learning-based expansion technique. We also include two queries formed from word clusters in the document collection. This is another source of information about the relevance of sentences to summaries.
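The preliminary cosine scoring with acronym/cue-term doubling can be sketched as follows. The cue-phrase list, the acronym heuristic (a run of two or more capitals) and the bag-of-words tokenisation are illustrative assumptions; the paper does not specify these details.

```python
import math
import re
from collections import Counter

CUE_TERMS = {"in this paper", "in conclusion"}  # illustrative cue phrases

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    num = sum(a[w] * b[w] for w in a)  # Counter returns 0 for missing keys
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def sentence_score(sentence, query_terms):
    """Cosine of the sentence against a title-based query, doubled when the
    sentence contains an acronym or a cue phrase (Section 3.1)."""
    score = cosine(Counter(sentence.lower().split()), Counter(query_terms))
    has_acronym = bool(re.search(r"\b[A-Z]{2,}\b", sentence))
    has_cue = any(cue in sentence.lower() for cue in CUE_TERMS)
    return 2 * score if (has_acronym or has_cue) else score
```

With this doubling, a sentence opening with "In this paper" can outscore a shorter sentence whose raw cosine with the Title query is higher.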
It is a more contextual approach compared to the title-based queries, as it seeks to take advantage of the co-occurrence of terms within sentences all over the corpus, as opposed to the local information provided by the title-based queries. We form different term-clusters based on the co-occurrence of words in the documents of the collection. For discovering these term-clusters, each word w in the vocabulary V is first characterised as a vector w = <n(w, d)>_{d in D}, representing the number of occurrences of w in each document d in D [4]. Under this representation, word clustering is performed using the Naive Bayes clustering algorithm maximising the Classification Maximum Likelihood criterion [3, 29]. We have arbitrarily fixed the number of clusters to |V|/100. From these clusters, we first expand the title query by adding words which are in the same word-clusters as the title keywords. We denote this novel query by "Extended concepts with word clusters" query. Second, we represent each sentence in a document, as well as the document title, in the space of word-clusters, as vectors containing the
number of occurrences of words from each word-cluster in that sentence, or document title. We refer to this vector representation of document titles as "Projected concepts on word clusters" queries. The first approach (Extended concepts with word clusters) is a query expansion technique similar to those described above using WordNet or LCA. The second approach is a projection technique, closely related to Latent Semantic Analysis [8]. Table 1 shows some word-clusters found for the SUMMAC data collection; it can be seen from this example that each cluster can be associated with a general concept.

    Word-Clusters
    Cluster i: transduction language grammar set word information model number words rules rule lexical
    Cluster j: tag processing speech recognition morphological korean morpheme

Table 1. An example of term clusters found for the SUMMAC data collection.

3.2 Related Work

Few researchers have investigated the summarisation of information available in XML format. In [], the work focuses on retaining the structure of the source document in the summary. A textual summary of a document is created by using lexical chains. The textual summary is then combined with the overall structure of the document, with the aim of preserving the structure of the original document and of superimposing the summary on that structure. In [26], the idea of generating semantic thumbnails (essentially summaries) of documents in XML format is suggested. The authors propose to utilise the ontologies embedded in XML and RDF documents in order to develop the semantic thumbnails. Litkowski [20] has used some discourse analysis of XML documents for summarisation. In some other work [6], the tree representation of XML documents is used to generate tree structural summaries; these are summaries that focus on the structural properties of trees and do not correspond to summaries in the conventional sense of the term as used in IR research.
Operations such as nesting and repetition reduction in the XML trees are used. In the above approaches, features pertaining to the logical structure of XML documents are not taken into account when producing summaries. Structural clues are used by work on the summarisation of other document types, e.g. e-mails [8], or technical documents [36]. In these summarisation approaches, known features of the structure of the documents are exploited in order to produce summaries (e.g. the presence of a FAQ, or a question/answer section in technical documents).
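Returning to the structural features of Section 3.1, they can be read directly off the XML tree. The sketch below uses Python's standard `xml.etree` (the paper does not specify an implementation); the element names, the 0-based positions, and the simplification of treating feature (4) as "first child or not" are illustrative assumptions.

```python
import xml.etree.ElementTree as ET

def structural_features(root, target):
    """Approximate the four structural features of Section 3.1 for the element
    containing a sentence: (1) depth of the element, (2) its sibling number,
    (3) its number of siblings, (4) whether it comes first in its parent
    (a simplification of the paragraph-position feature)."""
    # ElementTree stores no parent pointers, so build a child -> parent map.
    parent_of = {c: p for p in root.iter() for c in p}
    depth, node = 0, target
    while node in parent_of:           # (1) count ancestors up to the root
        node = parent_of[node]
        depth += 1
    parent = parent_of.get(target)
    siblings = list(parent) if parent is not None else [target]
    position = siblings.index(target)  # (2) sibling number (0-based)
    n_siblings = len(siblings) - 1     # (3) number of sibling elements
    first = position == 0              # (4) first in its parent?
    return depth, position, n_siblings, first

doc = ET.fromstring(
    "<article><sec><p>first</p><p>second</p></sec><sec><p>third</p></sec></article>")
```

For the second paragraph of the first section of the toy document, this yields depth 2, sibling position 1, one sibling, and "not first".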
4 Experiments

In our experiments we used two data sets: the INEX [3] and SUMMAC [28] test collections. For each dataset, we carried out evaluation experiments for testing (a) the query expansion effect, (b) the learning effect and the best learning scheme for SDS between classification and ranking, and (c) the effect of structure features. For point (b), we tested the performance of a linear scoring function learnt with a ranking and with a classification criterion. The combination weights of the scoring function are learnt via the logistic model optimising the ranking criterion (8) by the LinearRank algorithm (Algorithm 2), and the classification criterion (3) using Algorithm 1. Furthermore, in order to evaluate the effectiveness of learning a linear combination of sentence features for SDS under the ranking framework, we compared the performance of the LinearRank algorithm with the RankBoost algorithm [], which learns a non-linear combination of features. To measure the effect of structure features, we trained the best learning algorithm using COQ features alone, and using COQ features together with the structure features.

4.1 Datasets

We used version 1.4 of the INEX document collection. This version consists of 12,107 articles of the IEEE Computer Society's publications, from 1995 to 2002, totaling 494 megabytes. It contains over 8.2 million element nodes of varying granularity, where the average depth of a node is 6.9 (taking an article as the root of the tree). The overall structure of a typical article consists of a front matter (containing e.g. title, author, publication information and abstract), a body (consisting of e.g. sections, subsections, subsubsections, paragraphs, tables, figures, lists, citations) and a back matter (including bibliography and author information). The SUMMAC corpus consists of 183 articles. Documents in this collection are scientific papers which appeared in ACL (Association for Computational Linguistics) sponsored conferences.
The collection has been marked up in XML by automatically converting the LaTeX version of the papers to XML. In this dataset the markup includes tags covering information such as title, authors or inventors, etc., as well as basic structure such as abstract, body, sections, lists, etc. We have removed documents from the INEX dataset that do not possess title keywords or an abstract. From the SUMMAC dataset, we removed documents whose title contained non-informative words, such as a list of proper names. From each dataset, we also removed documents having extractive summaries (as found by Marcu's algorithm, see Section 4.2) composed of one sentence only, arguing that a single sentence is not sufficient to summarise a scientific article. In our experiments, we used in total 161 documents from SUMMAC and 4,446 documents from the INEX collection. We extracted the logical structure of XML documents using freely available structure parsers. Documents were tokenised by removing words in a stop list, and sentence boundaries within each document were found using the morpho-syntactic tree tagger program [35]. In Table 2, we show some statistics about the two document collections used, about the abstracts provided with the two collections, and about the extracts that were created using Marcu's algorithm, as well as the training/test splits for each dataset (in
all experiments the size of the training and test sets are kept fixed). Both datasets have roughly the same characteristics of sentence distribution in the articles and summaries. The summary length, in number of sentences, is approximately 9 and 6 on average for the SUMMAC and INEX collections respectively.

Data set comparison                      SUMMAC       INEX
Number of docs                           161 (183)    4446 (12107)
Training/Test splits                     80/81        1000/3446
Total # of sentences in the collection   -            -
Average # of sentences per doc.          -            -
Maximum # of sentences per doc.          -            -
Minimum # of sentences per doc.          6            2
Average # of words per sentence          -            -
Size of the vocabulary                   -            -
Average extract size (in # of sentences) -            -
Maximum # of sentences per extract       -            -
Minimum # of sentences per extract       2            3
Average abstract size (in # of sentences) -           -
Maximum # of sentences per abstract      -            -
Minimum # of sentences per abstract      4            5

Table 2. Data set properties. (Some numeric values were lost in conversion and are marked "-".)

4.2 Experimental Setup

We assume that for each document, summaries will only include sentences between the introduction and the conclusion of the document. A compression ratio must be specified for extractive summaries. For both datasets we followed the SUMMAC evaluation by using a 10% compression ratio [28]. To obtain sentence-based extract summaries for all articles in both datasets, for training and evaluation purposes, we need gold summaries. The human extraction of such reference summaries is not possible in the case of large datasets. To overcome this restriction we used in our experiments the author-supplied abstracts that are available with the original articles, and applied an algorithm proposed by Marcu [22] in order to generate extracts from the abstracts. This algorithm has shown a high degree of correlation with sentence extracts produced by humans. We therefore evaluate the effectiveness of our learning algorithm on the basis of how well it matches the automatic extracts. The learning algorithms take as input the set of features defined in Section 3.1.
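At scoring time, each sentence's feature vector is combined with the learnt weights by a dot product, and sentences are ranked by score. The sketch below illustrates this; the feature values and weights are invented for illustration, not learnt values from the paper.

```python
# Linear scoring of sentences, as in the LinearRank model: score = w . x.
# Feature values and weights below are illustrative, not learnt values.
features = {
    "s1": [0.9, 0.2, 1.0],   # e.g. [title overlap, element depth, first-paragraph flag]
    "s2": [0.1, 0.8, 0.0],
    "s3": [0.5, 0.5, 1.0],
}
weights = [0.6, 0.1, 0.3]

def score(x, w):
    """Dot product of a sentence's feature vector with the learnt weights."""
    return sum(xi * wi for xi, wi in zip(x, w))

# Rank sentences by decreasing score; the extract keeps the top fraction
# given by the compression ratio (10% in the experiments).
ranked = sorted(features, key=lambda s: score(features[s], weights), reverse=True)
```

The extract is then the prefix of `ranked` whose length matches the compression ratio.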
Each sentence in the training set is represented as a feature vector, and the algorithms are learnt based on this input representation and the extracted summaries found by Marcu's algorithm [22], which were used as desired outputs. For all the algorithms, on each dataset, we have generated precision and recall curves to measure the query expansion and learning effects. Precision and recall are
Fig. 2. Precision-Recall curves at 10% compression ratio for the COQ features on the INEX (top) and SUMMAC (bottom) datasets. Each point represents the mean performance over 10 cross-validation folds; the bars show standard deviations for the estimated performance. The curves shown are: extended concepts with word clusters, projected concepts on word clusters, Title-LCA, Title, Title-WN and Title-MFT.
computed as follows:

    Precision = (# of sentences in the extract and also in the gold standard) / (total # of sentences in the extract)

    Recall = (# of sentences in the extract and also in the gold standard) / (total # of sentences in the gold standard)

Precision and recall values are averaged over 10 random splits of the training/test sets. We have also measured the breakeven point at 10% compression ratio for the 3 learning algorithms and the best COQ feature (Table 3).

5 Analysis of Results

We examine the results from three viewpoints: in Section 5.1 we present the effectiveness of each of the content-only queries (COQ) alone, as well as the query expansion effect; in Section 5.2 we examine the performance of the three learning algorithms; and in Section 5.3 we look into the effectiveness of our summarisation approach for XML documents.

5.1 Query expansion effects

In Figure 2, we present the precision and recall graphs showing the effectiveness of content-only features for SDS without the learning effect (i.e. by using each content feature individually to rank the sentences). The order of effectiveness of the features seems to be consistent across the two datasets: extended concepts with word clusters are the most effective, followed by projected concepts on word clusters and title with local context analysis. Title with the most frequent terms in the document is the least effective feature in both cases. The high effectiveness obtained with word clusters (extended and projected concepts with word clusters) demonstrates that the contextual approach investigated here is effective and should be further exploited for SDS.

5.2 Learning algorithms

In Figure 3, we present the precision and recall graphs obtained through the combination of content and structure features for the two datasets when using the three learning algorithms.
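The precision, recall and breakeven computations described in Section 4.2 can be sketched as follows; the sentence ids used in the example are invented for illustration.

```python
def precision_recall(extract, gold):
    """Precision and recall of an extracted sentence set against the gold standard."""
    extract, gold = set(extract), set(gold)
    overlap = len(extract & gold)
    return overlap / len(extract), overlap / len(gold)

def breakeven(ranked, gold):
    """Precision at the cutoff where precision equals recall.

    With an extract of exactly |gold| sentences, precision and recall
    coincide, so the breakeven point is precision at that cutoff.
    """
    k = len(gold)
    hits = len(set(ranked[:k]) & set(gold))
    return hits / k

# Toy example: sentence ids of an extract and of the gold standard.
p, r = precision_recall(extract={1, 4, 7, 9}, gold={1, 2, 7})
bep = breakeven(ranked=[3, 1, 5, 2, 4], gold={1, 2, 3})
```

In the evaluation these quantities are averaged over the random training/test splits.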
For comparison, we display the Precision-Recall curves obtained for the best COQ feature (extended concepts) together with those obtained from the learning algorithms. A first result is that the combination of features by learning outperforms each feature alone. The results also show that the two ordering algorithms are more effective on both datasets than the logistic classifier. This finding corroborates the justification given in Section 2.3. When comparing the two ordering algorithms, we see that Algorithm 2 (LinearRank) slightly outperforms the RankBoost algorithm for low recall values. Since both
Fig. 3. Precision-Recall curves at 10% compression ratio for the learning effects on the INEX (top) and SUMMAC (bottom) datasets. The curves shown are: COQ and structure features (SF) combined with LinearRank, with RankBoost and with the logistic classifier; COQ features combined with LinearRank; and the extended concepts feature alone.
ordering algorithms optimise the same criterion (equation 8), the difference in performance can be explained by the class of functions that each algorithm learns. The RankBoost algorithm outputs a nonlinear combination of the features, while with the LinearRank algorithm we obtain a linear combination of these features. As the space of features is small, the nonlinear RankBoost model has low bias and high variance and hence tends to overfit the data. We have noticed this effect in both test collections by comparing precision and recall curves for RankBoost on the test and the training sets. Our experimental results suggest that a ranking criterion is better suited to the SDS task than a classification criterion. Moreover, a simple logistic model performs better than a nonlinear algorithm and, depending on the implementation, can be significantly faster to train than RankBoost. This leads to the conclusion that such a linear model, i.e. one optimising equation (8), can be a good choice for learning a summariser, in particular when considering structural features.

5.3 Summarisation effectiveness

By looking at the data in Figure 3 from the point of view of comparing the effectiveness of the summariser with different features, one can note that the combination of content and structure features yields greater effectiveness than the use of content features alone. This result seems to hold equally for both document sets at most recall points. In terms of breakeven points (Table 3), the increase in effectiveness is approximately 3% for the RankBoost and LinearRank algorithms on both data sets⁴. This provides evidence that the use of structural features improves the effectiveness of the task of SDS. It is to be noted that, as the structural features we considered here are discrete, the ordering of sentences with respect to different structural components was not possible.
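A ranking criterion of the kind discussed here compares pairs of summary and non-summary sentences, penalising the model whenever a non-summary sentence is scored close to, or above, a summary sentence. The sketch below shows a generic pairwise logistic loss of this flavour; it is an illustration of the idea, not a reproduction of equation (8), which is not included in this excerpt.

```python
import math

def pairwise_logistic_loss(scores_pos, scores_neg):
    """Mean logistic loss over (summary, non-summary) sentence score pairs.

    Each pair contributes log(1 + exp(-(s_pos - s_neg))): small when the
    summary sentence is scored well above the non-summary one, large when
    the ordering is violated.
    """
    losses = [
        math.log(1.0 + math.exp(-(sp - sn)))
        for sp in scores_pos
        for sn in scores_neg
    ]
    return sum(losses) / len(losses)

# Illustrative scores: two summary sentences and one non-summary sentence.
loss = pairwise_logistic_loss([2.0, 1.5], [0.5])
```

Minimising such a pairwise objective directly optimises the ordering of sentences, whereas a classification loss only optimises per-sentence label agreement.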
Training the learning models using only these features did not provide significant results either (we chose not to display these results as they were not informative). The fact that structural features increase the performance of the learning models when they are added to COQ features is, in our opinion, due to the fact that structural features provide non-redundant information compared to COQ features.

Breakeven points (%)
Data sets   Best COQ   Classifier        RankBoost         LinearRank
                       COQ     COQ+SF    COQ     COQ+SF    COQ     COQ+SF
SUMMAC      -          -       -         -       -         -       -
INEX        -          -       -         -       -         -       -

Table 3. Breakeven points at 10% compression ratio for the learning algorithms and the best COQ feature: extended title keywords with word clusters. Each value represents the mean performance over 10 cross-validation folds. (The numeric values were lost in conversion and are marked "-".)

From the set of structure features used in our experiments (Section 3.1), the depth of the sentence's component and the position of the paragraph containing summary sentences

⁴ The same performance increase is also obtained with the classifier.
within the component (i.e. whether or not it is in the first paragraph of a component) got the highest weights with both ranking algorithms. Any sentence in the first paragraph of the first sections of a document, containing relevant COQ features, thus got a high score. In our experiments, these two structural features were the most effective for SDS. It is well known that, in scientific articles, sentences in the first parts of sections such as Introduction and Conclusions are useful for summarisation purposes [9, 7]. Our results agree with this, as the increased weight for the paragraph's position in a component suggests. The features corresponding to the position of elements with respect to their siblings are less effective than depth and paragraph position, but features indicating the position of an element as the first or the last sibling have a higher impact than when the element is the middle sibling. We should also note that the feature corresponding to the number of siblings of an element was the least conclusive in all of our experiments; its utility seemed to depend highly on the dataset. For the specific case of scientific text, from the set of structure features used, a set of features which is known to be effective was weighted higher by our summarisation method. One way to view this result is that our method correctly identified features that are known to be effective for this document genre, and therefore has the potential to perform equally well in other document genres. This, in turn, can be seen as an indication that the use of structure features could be applied to document collections of different genres. The availability of suitable document collections containing different document types will be necessary in order to test this assertion. By looking at the data in Table 3 (and Figures 2 and 3), one can note that effectiveness when using the INEX collection is always lower than when using the SUMMAC collection.
This difference in effectiveness can be attributed to the different characteristics of the two datasets. The INEX collection contains many more documents than SUMMAC, and is also a more heterogeneous dataset. In addition, the logical structure of INEX documents is more complex than that of the SUMMAC collection. These factors are likely to cause the small difference in effectiveness between the two collections.

6 Discussion and conclusions

The results presented in the previous section are encouraging in relation to our two main motivations: a novel learning algorithm for SDS, and the inclusion of structure features, in addition to content features, for the summarisation of XML documents. In terms of the algorithms, it was shown that using the same logistic model, but choosing a ranking criterion instead of a classification one, leads to a notable performance increase. Moreover, compared to RankBoost, the LinearRank algorithm performs better and also has the potential to be implemented in a simpler manner. This property may make the latter algorithm an effective and efficient choice for the task of SDS. In terms of the summarisation of XML documents by using content and structure features, the results demonstrate that for both datasets the inclusion of structural features improves the effectiveness of learning algorithms for SDS. The improvements are not dramatic, but they are consistent across both datasets and across most recall points. This consistency suggests that the inclusion of features from the logical structure of XML documents is effective.
The ultimate aim of our approach for the summarisation of XML documents is to produce summaries for components at any level of granularity (e.g. section, subsection, etc.). The content and structure features that we presented in Section 3.1 can be applied to any level of granularity. For example, the depth of an element, the sibling number of the element in which a sentence is contained, the number of sibling elements of the element in which the sentence is contained, and the position within the element of the paragraph in which the sentence is contained (i.e. the structure features in Section 3.1) can be applied to entire documents, sections, subsections, etc. Essentially, they can be applied to any XML element that can be meaningfully summarised, i.e. one that is informative and long enough to make its summarisation meaningful [3]. In particular, the most effective content features (expanded concepts with word clusters and projected concepts on word clusters) and structure features (depth of element and position of paragraph in the element) can be applied to various granularity levels within an XML tree. The effectiveness of such an approach, however, cannot be tested until datasets with human-produced summaries, or summary extracts, at component level become available. We should also note that we focused on generic (rather than query-biased) summaries for evaluation purposes, but the proposed model can be applied to both types of summarisation. In Section 5.3 we mentioned that the results provide us with some indication that the use of structural features can also be effective for summarising XML documents from datasets containing documents other than scientific articles. One possible direction for future research would therefore be to examine this issue in more detail, and to identify appropriate datasets of non-scientific XML data for summarisation. The list of structural features that we used in this study is short, so a larger variety of features could be investigated.
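The structural features named above (element depth, sibling position, number of siblings, and whether the sentence's paragraph is the first in its element) can be read directly off the XML tree for a component at any granularity. The sketch below shows one way to do this with Python's xml.etree; the tag names are illustrative and the exact feature encoding in the paper may differ.

```python
import xml.etree.ElementTree as ET

def structure_features(elem, parent_map, root):
    """Depth, sibling index, number of siblings, and first-paragraph flag
    for one element (tag names and encoding are illustrative)."""
    # Depth: number of ancestors up to and including the root.
    depth, node = 1, elem
    while node is not root:
        node = parent_map[node]
        depth += 1
    parent = parent_map.get(elem)
    siblings = list(parent) if parent is not None else [elem]
    idx = siblings.index(elem)
    first_para = int(elem.tag == "p" and idx == 0)
    return {"depth": depth, "sibling_index": idx,
            "n_siblings": len(siblings), "first_paragraph": first_para}

doc = ET.fromstring("<article><sec><p>a</p><p>b</p></sec></article>")
# ElementTree has no parent pointers, so build a child-to-parent map.
parent_map = {c: p for p in doc.iter() for c in p}
first_p = doc.find("sec")[0]
feats = structure_features(first_p, parent_map, doc)
```

Because the function takes any element as input, the same features can be computed for a whole article, a section, or a subsection, which is what makes the approach applicable at any granularity level.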
When moving to document collections of different types, it will be worthwhile to investigate whether useful structural features can be derived automatically, e.g. by looking at a collection's DTD. Some further interesting issues that arise when considering summarisation at any structural level relate to the choice of the appropriate components to be summarised. For example, it may be unrealistic to provide summaries of very small components, or of components that are not informative enough. One of the main research issues in XML retrieval is to define and understand what a meaningful retrieval unit is [3]. One direction to follow would be to conduct a user study observing what kinds of XML elements searchers would prefer to see in summarised form after the initial retrieval. Some initial investigations can be found in [30, 3], where results indicate a positive correlation between an element's probability of relevance, its length, and user preference to see summary information. Further research in this direction is currently underway. By looking at the results of this study as a whole, we can say that the work presented here achieved its main aim: to effectively summarise XML documents by combining content and structure features through novel machine learning approaches. Both datasets that we used contain scientific articles, which have some inherent characteristics that may simplify the task of SDS. This work, however, has a greater impact, as we believe that it can be applied to datasets containing documents of other types. The availability of XML data will continue to increase as, for example, XML is becoming the W3C standard for representing documents (e.g. in digital libraries where content
FAST JOINING AND REPAIRING OF SANDWICH MATERIALS WITH DETACHABLE MECHANICAL CONNECTION TECHNOLOGY Jörg Felhusen an Sivakumara K. Krishnamoorthy RWTH Aachen University, Chair an Insitute for Engineering
More informationOption Pricing for Inventory Management and Control
Option Pricing for Inventory Management an Control Bryant Angelos, McKay Heasley, an Jeffrey Humpherys Abstract We explore the use of option contracts as a means of managing an controlling inventories
More informationWrites of Passage: Writing an Empirical Journal Article
LYNN WHITE University of Nebraska Lincoln Writes of Passage: Writing an Empirical Journal Article This article provies avice about preparing research reports for submission to professional journals in
More informationMeanValue Theorem (Several Variables)
MeanValue Theorem (Several Variables) 1 MeanValue Theorem (Several Variables) THEOREM THE MEANVALUE THEOREM (SEVERAL VARIABLES) If f is ifferentiable at each point of the line segment ab, then there
More informationA Comparison of Performance Measures for Online Algorithms
A Comparison of Performance Measures for Online Algorithms Joan Boyar 1, Sany Irani 2, an Kim S. Larsen 1 1 Department of Mathematics an Computer Science, University of Southern Denmark, Campusvej 55,
More informationTraffic Delay Studies at Signalized Intersections with Global Positioning System Devices
Traffic Delay Stuies at Signalize Intersections with Global Positioning System Devices THIS FEATURE PRESENTS METHODS FOR APPLYING GPS DEVICES TO STUDY TRAFFIC DELAYS AT SIGNALIZED INTERSECTIONS. MOST OF
More informationProfessional Level Options Module, Paper P4(SGP)
Answers Professional Level Options Moule, Paper P4(SGP) Avance Financial Management (Singapore) December 2007 Answers Tutorial note: These moel answers are consierably longer an more etaile than woul be
More informationA Universal Sensor Control Architecture Considering Robot Dynamics
International Conference on Multisensor Fusion an Integration for Intelligent Systems (MFI2001) BaenBaen, Germany, August 2001 A Universal Sensor Control Architecture Consiering Robot Dynamics Frierich
More informationTowards a Framework for Enterprise Architecture Frameworks Comparison and Selection
Towars a Framework for Enterprise Frameworks Comparison an Selection Saber Aballah Faculty of Computers an Information, Cairo University Saber_aballah@hotmail.com Abstract A number of Enterprise Frameworks
More informationJitter effects on Analog to Digital and Digital to Analog Converters
Jitter effects on Analog to Digital an Digital to Analog Converters Jitter effects copyright 1999, 2000 Troisi Design Limite Jitter One of the significant problems in igital auio is clock jitter an its
More informationDETERMINING OPTIMAL STOCK LEVEL IN MULTIECHELON SUPPLY CHAINS
HUNGARIAN JOURNA OF INDUSTRIA CHEMISTRY VESZPRÉM Vol. 39(1) pp. 107112 (2011) DETERMINING OPTIMA STOCK EVE IN MUTIECHEON SUPPY CHAINS A. KIRÁY 1, G. BEVÁRDI 2, J. ABONYI 1 1 University of Pannonia, Department
More informationImproving Direct Marketing Profitability with Neural Networks
Volume 9 o.5, September 011 Improving Direct Marketing Profitability with eural etworks Zaiyong Tang Salem State University Salem, MA 01970 ABSTRACT Data mining in irect marketing aims at ientifying the
More informationPROBLEMS. A.1 Implement the COINCIDENCE function in sumofproducts form, where COINCIDENCE = XOR.
724 APPENDIX A LOGIC CIRCUITS (Corrispone al cap. 2  Elementi i logica) PROBLEMS A. Implement the COINCIDENCE function in sumofproucts form, where COINCIDENCE = XOR. A.2 Prove the following ientities
More informationUnsteady Flow Visualization by Animating EvenlySpaced Streamlines
EUROGRAPHICS 2000 / M. Gross an F.R.A. Hopgoo Volume 19, (2000), Number 3 (Guest Eitors) Unsteay Flow Visualization by Animating EvenlySpace Bruno Jobar an Wilfri Lefer Université u Littoral Côte Opale,
More informationUsing Stein s Method to Show Poisson and Normal Limit Laws for Fringe Subtrees
AofA 2014, Paris, France DMTCS proc. (subm., by the authors, 1 12 Using Stein s Metho to Show Poisson an Normal Limit Laws for Fringe Subtrees Cecilia Holmgren 1 an Svante Janson 2 1 Department of Mathematics,
More informationPerformance of 2D versus 3D Topographic Representations for Different Task Types
Performance of versus Topographic Representations for Different s Debra MacIvor Savage, Eric N. Wiebe an Hugh A. Devine North Carolina State University Raleigh, North Carolina In this stuy, a performance
More informationA Case Study of Applying SOM in Market Segmentation of Automobile Insurance Customers
International Journal of Database Theory an Application, pp.2536 http://x.oi.org/10.14257/ijta.2014.7.1.03 A Case Stuy of Applying SOM in Market Segmentation of Automobile Insurance Customers Vahi Golmah
More informationHeatAndMass Transfer Relationship to Determine Shear Stress in Tubular Membrane Systems Ratkovich, Nicolas Rios; Nopens, Ingmar
Aalborg Universitet HeatAnMass Transfer Relationship to Determine Shear Stress in Tubular Membrane Systems Ratkovich, Nicolas Rios; Nopens, Ingmar Publishe in: International Journal of Heat an Mass Transfer
More informationPerformance And Analysis Of Risk Assessment Methodologies In Information Security
International Journal of Computer Trens an Technology (IJCTT) volume 4 Issue 10 October 2013 Performance An Analysis Of Risk Assessment ologies In Information Security K.V.D.Kiran #1, Saikrishna Mukkamala
More informationRUNESTONE, an International Student Collaboration Project
RUNESTONE, an International Stuent Collaboration Project Mats Daniels 1, Marian Petre 2, Vicki Almstrum 3, Lars Asplun 1, Christina Björkman 1, Carl Erickson 4, Bruce Klein 4, an Mary Last 4 1 Department
More informationA Generalization of Sauer s Lemma to Classes of LargeMargin Functions
A Generalization of Sauer s Lemma to Classes of LargeMargin Functions Joel Ratsaby University College Lonon Gower Street, Lonon WC1E 6BT, Unite Kingom J.Ratsaby@cs.ucl.ac.uk, WWW home page: http://www.cs.ucl.ac.uk/staff/j.ratsaby/
More informationFactoring Dickson polynomials over finite fields
Factoring Dickson polynomials over finite fiels Manjul Bhargava Department of Mathematics, Princeton University. Princeton NJ 08544 manjul@math.princeton.eu Michael Zieve Department of Mathematics, University
More informationSensitivity Analysis of Nonlinear Performance with Probability Distortion
Preprints of the 19th Worl Congress The International Feeration of Automatic Control Cape Town, South Africa. August 2429, 214 Sensitivity Analysis of Nonlinear Performance with Probability Distortion
More informationRisk Adjustment for Poker Players
Risk Ajustment for Poker Players William Chin DePaul University, Chicago, Illinois Marc Ingenoso Conger Asset Management LLC, Chicago, Illinois September, 2006 Introuction In this article we consier risk
More informationINFLUENCE OF GPS TECHNOLOGY ON COST CONTROL AND MAINTENANCE OF VEHICLES
1 st Logistics International Conference Belgrae, Serbia 2830 November 2013 INFLUENCE OF GPS TECHNOLOGY ON COST CONTROL AND MAINTENANCE OF VEHICLES Goran N. Raoičić * University of Niš, Faculty of Mechanical
More information(We assume that x 2 IR n with n > m f g are twice continuously ierentiable functions with Lipschitz secon erivatives. The Lagrangian function `(x y) i
An Analysis of Newton's Metho for Equivalent Karush{Kuhn{Tucker Systems Lus N. Vicente January 25, 999 Abstract In this paper we analyze the application of Newton's metho to the solution of systems of
More informationEmergence of heterogeneity in acute leukemias
Stiehl et al. Biology Direct (2016) 11:51 DOI 10.1186/s1306201601541 RESEARCH Open Access Emergence of heterogeneity in acute leukemias Thomas Stiehl 1,2,3*, Christoph Lutz 4 an Anna MarciniakCzochra
More informationStudying the Behavior of Active Mass Drivers during an Earthquake Using Discrete Instantaneous Optimal Control Method
Stuying the Behavior of Active Mass Drivers uring an Earthquae Using Discrete Instantaneous Optimal Control Metho O. Bahar 1, M. R. Banan 2, an M. Mahzoon 3 1. Structural Engineering Research Center, International
More informationChapter 9 AIRPORT SYSTEM PLANNING
Chapter 9 AIRPORT SYSTEM PLANNING. Photo creit Dorn McGrath, Jr Contents Page The Planning Process................................................... 189 Airport Master Planning..............................................
More informationUsing research evidence in mental health: userrating and focus group study of clinicians preferences for a new clinical questionanswering service
DOI: 10.1111/j.14711842.2008.00833.x Using research evience in mental health: userrating an focus group stuy of clinicians preferences for a new clinical questionanswering service Elizabeth A. Barley*,
More informationIntroduction to Integration Part 1: AntiDifferentiation
Mathematics Learning Centre Introuction to Integration Part : AntiDifferentiation Mary Barnes c 999 University of Syney Contents For Reference. Table of erivatives......2 New notation.... 2 Introuction
More informationParameterized Algorithms for dhitting Set: the Weighted Case Henning Fernau. Univ. Trier, FB 4 Abteilung Informatik 54286 Trier, Germany
Parameterize Algorithms for Hitting Set: the Weighte Case Henning Fernau Trierer Forschungsberichte; Trier: Technical Reports Informatik / Mathematik No. 086, July 2008 Univ. Trier, FB 4 Abteilung Informatik
More informationSection 3.3. Differentiation of Polynomials and Rational Functions. Difference Equations to Differential Equations
Difference Equations to Differential Equations Section 3.3 Differentiation of Polynomials an Rational Functions In tis section we begin te task of iscovering rules for ifferentiating various classes of
More informationModeling and Predicting Popularity Dynamics via Reinforced Poisson Processes
Proceeings of the TwentyEighth AAAI Conference on Artificial Intelligence Moeling an Preicting Popularity Dynamics via Reinforce Poisson Processes Huawei Shen 1, Dashun Wang 2, Chaoming Song 3, AlbertLászló
More informationA New Pricing Model for Competitive Telecommunications Services Using Congestion Discounts
A New Pricing Moel for Competitive Telecommunications Services Using Congestion Discounts N. Keon an G. Ananalingam Department of Systems Engineering University of Pennsylvania Philaelphia, PA 191046315
More informationStress Concentration Factors of Various Adjacent Holes Configurations in a Spherical Pressure Vessel
5 th Australasian Congress on Applie Mechanics, ACAM 2007 1012 December 2007, Brisbane, Australia Stress Concentration Factors of Various Ajacent Holes Configurations in a Spherical Pressure Vessel Kh.
More informationImproved division by invariant integers
1 Improve ivision by invariant integers Niels Möller an Torbjörn Granlun Abstract This paper consiers the problem of iviing a twowor integer by a singlewor integer, together with a few extensions an
More informationEvaluation of Life Cycle Cost Analysis Methodologies
www.corporateenvstrategy.com Life Cycle Cost Analysis Evaluation of Life Cycle Cost Analysis Methoologies Senthil Kumaran Durairaj, S.K. Ong, A.Y.C. Nee an R.B.H. Tan* After the emergence of Life Cycle
More informationImage compression predicated on recurrent iterated function systems **
1 Image compression preicate on recurrent iterate function systems ** W. Metzler a, *, C.H. Yun b, M. Barski a a Faculty of Mathematics University of Kassel, Kassel, F. R. Germany b Faculty of Mathematics
More informationDEVELOPMENT OF A BRAKING MODEL FOR SPEED SUPERVISION SYSTEMS
DEVELOPMENT OF A BRAKING MODEL FOR SPEED SUPERVISION SYSTEMS Paolo Presciani*, Monica Malvezzi #, Giuseppe Luigi Bonacci +, Monica Balli + * FS Trenitalia Unità Tecnologie Materiale Rotabile Direzione
More informationISSN: 22773754 ISO 9001:2008 Certified International Journal of Engineering and Innovative Technology (IJEIT) Volume 3, Issue 12, June 2014
ISSN: 77754 ISO 900:008 Certifie International Journal of Engineering an Innovative echnology (IJEI) Volume, Issue, June 04 Manufacturing process with isruption uner Quaratic Deman for Deteriorating Inventory
More information