Sensor Network Localization from Local Connectivity: Performance Analysis for the MDS-MAP Algorithm
Sewoong Oh and Andrea Montanari
Electrical Engineering and Statistics Department, Stanford University, Stanford, CA
swoh@stanford.edu, montanari@stanford.edu

Amin Karbasi
LTHC and LCAV, EPFL, Lausanne, Switzerland
amin.karbasi@epfl.ch

Abstract—Sensor localization from only connectivity information is a highly challenging problem. To this end, our result for the first time establishes an analytic bound on the performance of the popular MDS-MAP algorithm based on multidimensional scaling. For a network consisting of n sensors positioned randomly on a unit square and a given radio range r = o(1), we show that the resulting error is bounded, decreasing at a rate that is inversely proportional to r, when only connectivity information is given. The same bound holds for the range-based model, when we have approximate measurements for the distances, and the same algorithm can be applied without any modification.

I. INTRODUCTION

Localization plays an important role in wireless sensor networks when the positions of the nodes are not provided in advance. One way to acquire the positions is by equipping all the sensors with a global positioning system. Clearly, this adds considerable cost to the system. As an alternative, many research efforts have focused on creating algorithms that can derive positions of sensors based on basic local information such as proximity (which nodes are within communication range of each other) or local distances (pairwise distances between neighboring sensors). The problem is solvable, meaning that it has a unique set of coordinates satisfying the given local information, only if there are enough constraints. Note that, based on local information, any solution is unique only up to rigid transformations (rotations, reflections and translations). The simplest of such algorithms assumes that all pairwise distances are known.
However, it is well known that in practical scenarios such information is unavailable and far-away sensors (the ones outside the communication range) cannot obtain their pairwise distances. Many algorithms have been proposed to resolve this issue by using heuristic approximations to the missing distances, but their success has mostly been measured experimentally. Of the same interest, and complementary in nature, are the theoretical guarantees associated with the performance of the existing methods. Such analytical bounds on the performance of localization algorithms can provide answers to practical questions: for example, how large should the radio range be in order to keep the error within a threshold? With this motivation in mind, our work takes a step forward in this direction. In particular, our analysis focuses on providing a bound on the performance of the popular MDS-MAP [1] algorithm when applied to sensor localization from only connectivity information, which is a highly challenging problem. We prove for the first time that, using MDS-MAP, we are able to localize sensors up to a bounded error in a connected network where most of the pairwise distances are missing and only local connectivity information is given.

A. Related work

A great deal of work over the past few years has focused on the sensor localization problem; a detailed survey of this area is provided in [2]. In the case when all pairwise distances are known, the coordinates can be derived by using a classical method known as multidimensional scaling (MDS) [3], [4]. The underlying principle of MDS is to convert distances into an inner product matrix, whose eigenvectors yield the unknown coordinates. In practice, MDS tolerates errors gracefully due to the overdetermined nature of the solution. However, when most pairwise distances are missing, the problem of finding the unknown coordinates becomes more challenging. Two types of practical solutions to this problem have been proposed.
The first group consists of algorithms that first estimate the missing entries of the distance matrix and then apply MDS to the reconstructed distance matrix to find the coordinates of the sensors. MDS-MAP, introduced in [1], is a well-known example of this class: it computes the shortest paths between all pairs of nodes in order to approximate the missing entries of the distance matrix. The algorithms in the second group treat sensor localization as a non-convex optimization problem and directly estimate the coordinates of the sensors. A famous example of this type is a relaxation to semidefinite programming (SDP) [5]. The performance of these practical algorithms is invariably measured through simulations, and little theoretical analysis supports their results. One exception is the work done in [6], which uses matrix completion methods as a means to reconstruct the distance matrix. The
main contribution of their paper is that they are able to provably localize the sensors up to a bounded error. However, their analysis is based on a number of strong assumptions. First, they assume that even far-away sensors have a non-zero probability of detecting their distances. Second, the algorithm explicitly requires the knowledge of the detection probabilities between all pairs. Third, their theorem only works when the average degree of the network (i.e., the average number of nodes detected by each sensor) grows linearly with the number of sensors in the network. Our work has a similar flavor to [6], namely we provide a theoretical guarantee that backs up experimental results. We use shortest paths (the same way they are used in MDS-MAP) as our primary guess for the missing entries in the distance matrix and apply MDS to find the topology of the network. In contrast to [6], we require weaker assumptions for our result. More specifically, we assume that only neighboring sensors have information about each other and that only connectivity information is known. Furthermore, the knowledge of detection probabilities plays no role in our analysis or the algorithm. And last, in our analysis we assume that the average degree grows logarithmically with the number of sensors, not linearly, which results in needing far fewer revealed entries in the distance matrix. In particular, the last condition is quite realistic: if the average degree grows any slower, the network is not even connected. Since the shortest paths algorithm works for both the range-free and range-aware cases, our analysis includes both and provides the first error bounds on the performance of MDS-MAP. The organization of this paper is as follows. Section II describes the state-of-the-art algorithm known as MDS-MAP and states the main result on the performance of MDS-MAP. In Section III, we provide a detailed proof of the main theorem.

II.
LOCALIZATION USING MULTIDIMENSIONAL SCALING

In this section, we first define the basic mathematical model for the sensors and the measurements, and state preliminary facts on MDS, a technique to derive a sensor configuration from pairwise distances [1]. We will then describe the algorithm and give a bound on the error between X and X̂, the estimate returned by the algorithm.

A. Model definition

For the sensor localization problem, we consider the random geometric graph model G(n, r) = (V, E, P), where V is a set of n vertices that correspond to n sensor nodes which are distributed uniformly at random within the d-dimensional hypercube [0, 1]^d, E ⊆ V × V is a set of undirected edges that connect pairs of sensors which are close to each other, and P : E → R_+ is a non-negative real-valued function. We consider the function P as a mapping from a pair of connected nodes (i, j) to the approximate measurement of the distance between i and j, which we call the proximity measurement. Let ‖·‖ denote the Euclidean norm on R^d and let r be some positive parameter. Given a set of random positions of n sensors X = {x_1, ..., x_n}, where x_i is the position of sensor i in d-dimensional space, a common model for the random geometric graph is the disc model, where nodes i and j are connected if the Euclidean distance d_{i,j} = ‖x_i − x_j‖ is less than r. More formally,

(i, j) ∈ E  ⟺  d_{i,j} ≤ r.

To each edge (i, j) ∈ E, we associate the proximity measurement P_{i,j} between sensors i and j, which is a function of the distance d_{i,j} and random noise. In the ideal case when our measurements are exact, we have P_{i,j} = d_{i,j}. On the other extreme, when we are given only network connectivity information and no distance information, we have P_{i,j} = r for all (i, j) ∈ E. Sensor localization algorithms can be classified into two different categories. For the connectivity-based model, which is alternatively also known as the range-free model, only the connectivity information is available. Formally, P_{i,j} = r if (i, j) ∈ E.
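As a concrete illustration of the disc model and the range-free measurements (this sketch is illustrative and not part of the paper; the function name and parameters are our own), the following Python snippet samples n sensors uniformly in [0, 1]^d and builds the proximity matrix with P_{i,j} = r on every edge:

```python
import numpy as np

def random_geometric_graph(n, r, d=2, seed=None):
    """Sample n sensor positions uniformly in [0, 1]^d and connect the
    pairs whose Euclidean distance is at most the radio range r.
    Returns the positions X and the proximity matrix P of the
    connectivity-based (range-free) model: P[i, j] = r on every edge,
    and 0 marks a missing measurement."""
    rng = np.random.default_rng(seed)
    X = rng.random((n, d))
    # Pairwise Euclidean distances d_ij = ||x_i - x_j||.
    diff = X[:, None, :] - X[None, :, :]
    dist = np.sqrt((diff ** 2).sum(axis=2))
    adjacency = (dist <= r) & ~np.eye(n, dtype=bool)
    P = np.where(adjacency, r, 0.0)  # range-free: every edge only reports r
    return X, P

X, P = random_geometric_graph(200, r=0.2, seed=0)
```

The range-based model would differ only in the last step, storing a noisy version of `dist` on the edges instead of the constant r.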
In the range-based model, which is also known as the range-aware model, the distance measurement is available but may be corrupted by noise or have limited accuracy:

P_{i,j} = [d_{i,j} + z_{i,j}]_+  if (i, j) ∈ E,

where z_{i,j} models the measurement noise (in the noiseless case z_{i,j} = 0), possibly a function of the distance d_{i,j}, and [a]_+ ≡ max{0, a}. Common examples are the additive Gaussian noise model, where the z_{i,j}'s are i.i.d. Gaussian random variables, and the multiplicative noise model, where P_{i,j} = [(1 + N_{i,j}) d_{i,j}]_+ for independent Gaussian random variables N_{i,j}. In the following, let D denote the n × n squared distance matrix, where D_{ij} = d_{i,j}^2, and let X denote the n × d position matrix, where the ith row corresponds to x_i, the d-dimensional position vector of sensor i. Given the graph G(n, r) = (V, E, P) with an associated proximity measurement for each edge in E, the goal of sensor network localization is to find an embedding of the nodes that best fits the measured proximities between all the pairs in E. The notion of embedding, or configuration, will be made clear in the following sections. The main contribution of this paper is that we provide, for the first time, a performance bound for the MDS-MAP algorithm when only the connectivity information is available, as in the connectivity-based model. However, the algorithm can be readily applied to the range-based model without any modification. Further, since we are given extra information about the approximate distance measurements in the range-based model, the performance is at least as good as if we only had the connectivity information. Note that given G(V, E, P) from the range-based model, we can easily produce G′(V, E, P′) where P′_{i,j} = r for all (i, j) ∈ E.

B. Multidimensional scaling (MDS)

MDS is a technique used to find a configuration of objects in a low-dimensional space such that the measured pairwise distances are preserved. If all the pairwise distances
are measured without error, then a naive application of MDS exactly recovers the configuration of sensors [1].

Algorithm: Classical Metric MDS [1]
Input: dimension d, estimated squared distance matrix M
1: Compute (−1/2) L M L, where L = I_n − (1/n) 1_n 1_n^T;
2: Compute the best rank-d approximation U_d Σ_d U_d^T of (−1/2) L M L;
3: Return MDS_d(M) ≡ U_d Σ_d^{1/2}.

There are many types of MDS techniques but, throughout this paper, whenever we say MDS we refer to the classical metric MDS, which is defined as follows. Let L be the n × n symmetric matrix L = I_n − (1/n) 1_n 1_n^T, where 1_n ∈ R^n is the all-ones vector and I_n is the n × n identity matrix. Let MDS_d(D) denote the n × d matrix returned by MDS when applied to the squared distance matrix D. Then, in formula, given the singular value decomposition (SVD) of the symmetric and positive semidefinite matrix (−1/2) L D L as (−1/2) L D L = U Σ U^T, we set MDS_d(D) ≡ U_d Σ_d^{1/2}, where U_d denotes the n × d left singular matrix corresponding to the d largest singular values and Σ_d denotes the d × d diagonal matrix with the d largest singular values on the diagonal. This is also known as the MDSLOCALIZE algorithm in [6]. Note that since the columns of U_d are orthogonal to 1_n by construction, it follows that L MDS_d(D) = MDS_d(D). It can easily be shown that when MDS is applied to the correct squared distance matrix without noise, the configuration of sensors is exactly recovered [6]. This follows from the equality

−(1/2) L D L = L X X^T L.  (1)

Note that we only get the configuration and not the absolute positions, in the sense that MDS_d(D) is one of infinitely many solutions that match the distance measurements D. Intuitively, it is clear that the pairwise distances are invariant under a rigid transformation (a combination of rotation, reflection and translation) of the positions X, and therefore there are multiple instances of X that result in the same D. For future use, we introduce a formal definition of rigid transformation and related terms. Denote by O(d) the orthogonal group of d × d matrices.
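Before formalizing rigid transformations, the classical metric MDS step above can be sketched in a few lines of Python (an illustration under our own naming; an eigendecomposition replaces the SVD, which is equivalent here because (−1/2)LML is symmetric). Applied to the exact squared distance matrix D, it reproduces the centered Gram matrix L X X^T L, consistent with Eq. (1):

```python
import numpy as np

def classical_mds(M, d):
    """Classical metric MDS: double-center the squared-distance matrix M
    and keep the top-d spectral components of (-1/2) L M L."""
    n = M.shape[0]
    L = np.eye(n) - np.ones((n, n)) / n     # centering matrix
    B = -0.5 * L @ M @ L                    # equals L X X^T L when M = D
    w, U = np.linalg.eigh(B)                # B symmetric, so eigh matches the SVD
    idx = np.argsort(w)[::-1][:d]           # indices of the d largest eigenvalues
    return U[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Exact-recovery check: with the true squared distances, the estimated
# configuration reproduces the centered Gram matrix of Eq. (1).
rng = np.random.default_rng(1)
X = rng.random((50, 2))
D = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)  # D_ij = ||x_i - x_j||^2
X_hat = classical_mds(D, 2)
L = np.eye(50) - np.ones((50, 50)) / 50
```

Here `X_hat` equals `X` only up to a rigid transformation, but `X_hat @ X_hat.T` matches `L @ X @ X.T @ L` exactly, which is the sense of "recovery" used throughout the paper.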
A set of sensor positions Y ∈ R^{n×d} is a rigid transform of X if there exist a d-dimensional shift vector s and an orthogonal matrix Q ∈ O(d) such that Y = XQ + 1_n s^T. Y should be interpreted as the result of first rotating (and/or reflecting) the sensors in position X by Q and then adding a shift by s. Similarly, when we say two position matrices X and Y are equal up to a rigid transformation, we mean that there exist a rotation Q and a shift s such that Y = XQ + 1_n s^T. Also, we say a function f(X) is invariant under rigid transformations if and only if for all X and Y that are equal up to a rigid transformation we have f(X) = f(Y). Under these definitions, it is clear that D is invariant under rigid transformations, since for all (i, j),

D_{ij} = ‖x_i − x_j‖^2 = ‖(x_i Q + s^T) − (x_j Q + s^T)‖^2,

for any Q ∈ O(d) and s ∈ R^d. Note that when solving a localization problem, there are two possible outputs: a relative map and an absolute map. The task of finding a relative map is to find a configuration of sensors that has the same neighbor relationships as the underlying graph G. In the following we use the terms configuration, embedding, and relative map interchangeably. The task of finding an absolute map is to determine the absolute geographic coordinates of all sensors. In this paper, we are interested in presenting an algorithm, MDS-MAP, that finds a configuration that best fits the proximity measurements, and in providing an analytical bound on the error between the estimated configuration and the correct configuration. While MDS works perfectly when D is available, in practice not all proximity measurements are available because of the limited radio range r. The next section describes the MDS-MAP algorithm introduced in [1], where we apply the shortest paths algorithm on the graph G = (V, E, P) to get a good estimate of the unknown proximity measurements.

C.
Algorithm

Based on MDS, MDS-MAP consists of two steps:

Algorithm: MDS-MAP [1]
Input: dimension d, graph G = (V, E, P)
1: Compute the shortest paths, and let D̂ be the squared shortest paths matrix;
2: Apply MDS_d to D̂, and let X̂ be the output.

The original MDS-MAP algorithm introduced in [1] has an additional post-processing step that fits the computed configuration to an absolute map using a small number (typically d + 1) of special nodes with known positions, which are called anchors. However, the focus of this paper is to give a bound on the error between the relative map found by the algorithm and the correct configuration. Hence, the last step, which does not improve the performance, is omitted here.

Shortest paths. The shortest path between nodes i and j in the graph G = (V, E, P) is defined as a path between the two nodes such that the sum of the proximity measurements of its constituent edges is minimized. Let d̂_{i,j} be the computed shortest path distance between nodes i and j. Then, the squared shortest paths matrix D̂ ∈ R^{n×n} is defined as D̂_{ij} = d̂_{i,j}^2 for i ≠ j, and 0 for i = j. Note that D̂ is well defined only if the graph G is connected. If G is not connected, there are multiple configurations resulting in the same observed proximity measurements and global localization is not possible. In the unit square, assuming sensor positions are drawn uniformly at random as defined in the previous section, the graph is connected, with high probability, if πr² > (log n + c_n)/n for c_n → ∞ [7]. A similar condition can be derived for generic d dimensions as C_d r^d > (log n + c_n)/n, where C_d is a constant that depends on d (for d = 2, C_2 = π). Hence, we focus on the regime where r = (α log n/n)^{1/d} for some positive constant α.
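The two steps above can be sketched compactly in Python (an illustrative implementation, not the paper's; SciPy's Dijkstra routine is an added dependency of our own choosing for step 1):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import shortest_path

def mds_map(P, d):
    """MDS-MAP sketch: fill in the missing distances with shortest paths
    over the proximity graph, then run classical MDS on the squared
    shortest-path matrix."""
    n = P.shape[0]
    # Step 1: all-pairs shortest paths (zeros in P are treated as missing edges).
    dist_hat = shortest_path(csr_matrix(P), method="D", directed=False)
    if np.isinf(dist_hat).any():
        raise ValueError("graph is disconnected; global localization is impossible")
    D_hat = dist_hat ** 2
    # Step 2: classical metric MDS applied to D_hat.
    L = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * L @ D_hat @ L
    w, U = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:d]
    return U[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Connectivity-based model: every edge only reports the radio range r.
rng = np.random.default_rng(0)
n, r = 300, 0.15
X = rng.random((n, 2))
dist = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2))
P = np.where((dist <= r) & (dist > 0), r, 0.0)
X_hat = mds_map(P, 2)
```

With n = 300 and r = 0.15 the graph is connected with overwhelming probability, and the relative map `X_hat` recovers the network topology despite every edge reporting only the constant r.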
As will be shown in Lemma III.1, the key observation for the shortest paths step is that the estimate is guaranteed to be arbitrarily close to the correct distance for a large enough radio range r and large enough n. Moreover, the all-pairs shortest paths problem has an efficient algorithm whose complexity is O(n² log n + n|E|) [8]. For r = (α log n/n)^{1/d} with constant α, the graph is sparse with |E| = O(n log n), whence the complexity is O(n² log n).

Multidimensional scaling. In step 2, we apply MDS to D̂ to get a good estimate of X. As explained in Section II-B, we compute X̂ = MDS_d(D̂). The main step in MDS is the singular value decomposition of the dense matrix D̂, which has complexity O(n³).

D. Main results

Our main result establishes that MDS-MAP achieves an arbitrarily small error from a radio range r = (α log n/n)^{1/d} with a large enough constant α, when we have only the connectivity information, as in the connectivity-based model. The same bound holds immediately for the range-based model, when we have approximate measurements for the distances, and the same algorithm can be applied without any modification. Let X̂ denote an n × d estimation of X, with the estimated position of node i in the ith row. Then, we need to define a metric for the distance between the original position matrix X and the estimate X̂ which is invariant under rigid transformations of X or X̂. Define L ≡ I_n − (1/n) 1_n 1_n^T as in the MDS algorithm. L is an n × n, rank n − 1, symmetric matrix which eliminates the contribution of translations, in the sense that LX = L(X + 1_n s^T) for all s ∈ R^d. Note that L has the following nice properties: (1) L X X^T L is invariant under rigid transformations; (2) L X X^T L = L X̂ X̂^T L implies that X and X̂ are equal up to a rigid transformation. This naturally defines the following distance between X and X̂:

d_1(X, X̂) ≡ (1/(n√d)) ‖ L X X^T L − L X̂ X̂^T L ‖_F ,  (2)

where ‖·‖_F denotes the Frobenius norm.
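The metric of Eq. (2) is straightforward to compute; the sketch below (illustrative code with a hypothetical rigid transform of our own construction) checks numerically that d_1 vanishes between configurations differing only by rotation, reflection, and translation:

```python
import numpy as np

def d1(X, X_hat):
    """Distance of Eq. (2): normalized Frobenius gap between the
    centered Gram matrices, invariant under rigid transformations."""
    n, d = X.shape
    L = np.eye(n) - np.ones((n, n)) / n
    G = L @ X @ X.T @ L
    G_hat = L @ X_hat @ X_hat.T @ L
    return np.linalg.norm(G - G_hat) / (n * np.sqrt(d))

rng = np.random.default_rng(2)
X = rng.random((100, 2))
theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])              # rotation in O(2)
Y = (X @ Q) * np.array([1.0, -1.0]) + np.array([3.0, -1.0])  # rotate, reflect, shift
```

The column flip by diag(1, −1) is also orthogonal, so Y is a rigid transform of X and d_1(X, Y) = 0 up to floating-point error, while an unrelated configuration gives a strictly positive value.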
Notice that the factor 1/n corresponds to the usual normalization by the number of entries in the summation, and the factor 1/√d corresponds to the maximum distance between any two nodes in the [0, 1]^d hypercube. Indeed, this distance is invariant under rigid transformations of X and X̂. Furthermore, d_1(X, X̂) = 0 implies that X and X̂ are equal up to a rigid transformation. Define

r_0 ≡ 10√d ( log n / n )^{1/d} .  (3)

Theorem II.1. Assume n sensors distributed uniformly at random in the [0, 1]^d hypercube, for a bounded dimension d = O(1). For a given radio range r, a corresponding graph G(V, E, P) is defined according to the connectivity-based model. Then, with high probability, the distance between the estimate X̂ given by MDS-MAP and the correct position matrix X is bounded by

d_1(X, X̂) ≤ √(r_0 / r) + O(r) ,  (4)

for r > r_0, where d_1(·) is defined in Eq. (2) and r_0 in Eq. (3).

The proof is provided in Section III. As described in the previous section, we are interested in the regime where r = (α log n/n)^{1/d} for some constant α. Given a small positive constant δ, this implies that MDS-MAP is guaranteed to produce estimated positions that satisfy d_1(X, X̂) ≤ δ for a large enough constant α and large enough n. Perhaps a more interesting and challenging model to look at is a probabilistic connectivity-based model, where a sensor detects another sensor within distance r with a fixed probability p. This model, which is a natural generalization of the model considered in this paper, is a subject for future research, and we believe that a bound similar to Theorem II.1 can be proved to hold with a larger constant in the definition of r_0 that depends on the value of p. In the following, whenever we write that a property A holds with high probability (w.h.p.), we mean that P(A) approaches 1 as the number of sensors n goes to infinity. Given a matrix A ∈ R^{m×n}, the spectral norm of A is denoted by ‖A‖_2, and the Frobenius norm by ‖A‖_F. For a vector a ∈ R^n, ‖a‖ denotes the Euclidean norm. Finally, we use C to denote generic constants that do not depend on n.

III.
PROOF OF THEOREM II.1

We start by bounding the distance d_1(X, X̂), as defined in Eq. (2), in terms of D and D̂. Since L(XX^T − X̂X̂^T)L has rank at most 2d, it follows that

‖L(XX^T − X̂X̂^T)L‖_F ≤ √(2d) ‖L(XX^T − X̂X̂^T)L‖_2 ,  (5)

where we used the fact that, for any rank-2d matrix A with singular values {σ_1, ..., σ_{2d}}, we have ‖A‖_F = √(Σ_{i=1}^{2d} σ_i²) ≤ √(2d) ‖A‖_2. The spectral norm can be bounded in terms of D and D̂. Let M = −(1/2) L D̂ L. Then,

‖L(XX^T − X̂X̂^T)L‖_2 ≤ ‖L X X^T L − M‖_2 + ‖M − X̂ X̂^T‖_2
  ≤ (1/2) ‖L(D̂ − D)L‖_2 + (1/2) ‖L(D̂ − D)L‖_2
  ≤ ‖D̂ − D‖_2 ,  (6)

where in the first inequality we used the triangle inequality and the fact that X̂ = L X̂, as shown in Section II-B. In the second inequality we used Eq. (1) and the fact that

‖M − X̂ X̂^T‖_2 ≤ ‖M − A‖_2 ,  (7)

for any rank-d matrix A. Since the rank of −(1/2) L D L is d, the second term follows by setting A = −(1/2) L D L. The last inequality in (6) follows trivially from the observation that ‖L‖_2 = 1.
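Both facts used above are easy to check numerically. The sketch below (an illustration on random matrices, not part of the proof) verifies the rank-2d norm inequality of Eq. (5) and the Eckart–Young property behind Eq. (7), namely that the rank-d SVD truncation is a best rank-d approximation in spectral norm:

```python
import numpy as np

rng = np.random.default_rng(3)
n, d = 60, 2

# Eq. (5): a matrix of rank at most 2d has at most 2d nonzero singular
# values, hence ||A||_F = sqrt(sum_i sigma_i^2) <= sqrt(2d) * ||A||_2.
A = rng.standard_normal((n, 2 * d)) @ rng.standard_normal((2 * d, n))
fro = np.linalg.norm(A)        # Frobenius norm
spec = np.linalg.norm(A, 2)    # spectral norm (largest singular value)

# Eq. (7): the rank-d SVD truncation of M minimizes ||M - A||_2 over all
# rank-d matrices A (Eckart-Young); its error equals the (d+1)-th singular value.
M = rng.standard_normal((n, n))
M = (M + M.T) / 2              # symmetric, like -(1/2) L D_hat L
U, s, Vt = np.linalg.svd(M)
best = U[:, :d] @ np.diag(s[:d]) @ Vt[:d, :]
competitor = rng.standard_normal((n, d)) @ rng.standard_normal((d, n))
```

Any random rank-d `competitor` incurs a spectral-norm error at least as large as the truncation's, which is exactly the property invoked when setting A = −(1/2)LDL.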
Eq. (7) can be proved as follows. From the definition of X̂ = MDS_d(D̂) in Section II-B, we know that X̂ X̂^T is the best rank-d approximation to M. Hence, X̂ X̂^T minimizes ‖M − A‖_2 over all rank-d matrices A, and this proves Eq. (7).

To bound ‖D̂ − D‖_2, we use the following key result on the shortest paths. The main idea is that, for sensors with uniformly random positions, shortest paths provide arbitrarily close estimates of the correct distances for a large enough radio range r. Define

r_0(β) ≡ 24√d ( (1 + β) log n / n )^{1/d} .  (8)

For simplicity we denote r_0(0) by r_0.

Lemma III.1 (Bound on the shortest paths). Under the hypothesis of Theorem II.1, w.h.p., the shortest paths between all pairs of nodes in the graph G(V, E, P) are uniformly bounded by

d̂_{i,j}^2 ≤ ( 1 + √(r_0/r) ) d_{i,j}^2 + O(r) ,

for r > r_0, where r_0 is defined as in Eq. (8). The proof of this lemma is provided in a longer version of this paper.

Define the error matrix Z = D̂ − D. Then, by Lemma III.1, Z is element-wise bounded by 0 ≤ Z ≤ √(r_0/r) D + O(r). Since all the entries are non-negative, we can bound the spectral norm of Z. Let u and v be the left and right singular vectors of Z, respectively. Then, by the Perron–Frobenius theorem, u and v are non-negative. It follows that

‖D̂ − D‖_2 = u^T Z v ≤ √(r_0/r) u^T D v + O(rn) ≤ √(r_0/r) ‖D‖_2 + O(rn) .  (9)

The first inequality follows from the element-wise bound on Z and the non-negativity of u and v. It is simple to show that ‖D‖_2 = Θ(n). Typically, we are interested in the regime where r = o(1), which implies that the first term in Eq. (9) dominates the error. We are now left with the task of bounding ‖D‖_2, using the following lemma, whose proof is provided in a longer version of this paper.

Lemma III.2. ‖D‖_2 ≤ (d/√20) n + o(n), w.h.p.

Applying Lemma III.2 to Eq. (9) and substituting ‖D̂ − D‖_2 in Eq. (6), we get the desired bound, and this finishes the proof of Theorem II.1.

ACKNOWLEDGMENT

We thank Rüdiger Urbanke for stimulating discussions on the subject of this paper.

REFERENCES

[1] Y. Shang, W. Ruml, Y. Zhang, and M. P. J. Fromherz, "Localization from mere connectivity," in MobiHoc '03: Proceedings of the 4th ACM International Symposium on Mobile Ad Hoc Networking & Computing. New York, NY, USA: ACM, 2003.
[2] J. Hightower and G. Borriello, "Location systems for ubiquitous computing," IEEE Computer, vol. 34, pp. 57–66, 2001.
[3] Y. Shang, W. Ruml, Y. Zhang, and M. P. J. Fromherz, "Localization from connectivity in sensor networks," IEEE Trans. Parallel Distrib. Syst., vol. 15, no. 11, 2004.
[4] X. Ji and H. Zha, "Sensor positioning in wireless ad-hoc sensor networks using multidimensional scaling," in Proc. IEEE INFOCOM, March 2004.
[5] P. Biswas and Y. Ye, "Semidefinite programming for ad hoc wireless sensor network localization," in IPSN '04: Proceedings of the 3rd International Symposium on Information Processing in Sensor Networks. New York, NY, USA: ACM, 2004.
[6] P. Drineas, A. Javed, M. Magdon-Ismail, G. Pandurangan, R. Virrankoski, and A. Savvides, "Distance matrix reconstruction from incomplete distance information for sensor network localization," in Proceedings of the Sensor and Ad Hoc Communications and Networks Conference (SECON), vol. 2, Sept. 2006.
[7] P. Gupta and P. R. Kumar, "Critical power for asymptotic connectivity," in Proceedings of the 37th IEEE Conference on Decision and Control, vol. 1, 1998.
[8] D. B. Johnson, "Efficient algorithms for shortest paths in sparse networks," J. ACM, vol. 24, no. 1, pp. 1–13, 1977.
MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS 1. SYSTEMS OF EQUATIONS AND MATRICES 1.1. Representation of a linear system. The general system of m equations in n unknowns can be written a 11 x 1 + a 12 x 2 +
More informationLeast-Squares Intersection of Lines
Least-Squares Intersection of Lines Johannes Traa - UIUC 2013 This write-up derives the least-squares solution for the intersection of lines. In the general case, a set of lines will not intersect at a
More informationBV has the bounded approximation property
The Journal of Geometric Analysis volume 15 (2005), number 1, pp. 1-7 [version, April 14, 2005] BV has the boune approximation property G. Alberti, M. Csörnyei, A. Pe lczyński, D. Preiss Abstract: We prove
More informationParameterized Algorithms for d-hitting Set: the Weighted Case Henning Fernau. Univ. Trier, FB 4 Abteilung Informatik 54286 Trier, Germany
Parameterize Algorithms for -Hitting Set: the Weighte Case Henning Fernau Trierer Forschungsberichte; Trier: Technical Reports Informatik / Mathematik No. 08-6, July 2008 Univ. Trier, FB 4 Abteilung Informatik
More informationBindel, Spring 2012 Intro to Scientific Computing (CS 3220) Week 3: Wednesday, Feb 8
Spaces and bases Week 3: Wednesday, Feb 8 I have two favorite vector spaces 1 : R n and the space P d of polynomials of degree at most d. For R n, we have a canonical basis: R n = span{e 1, e 2,..., e
More informationMetric Spaces. Chapter 7. 7.1. Metrics
Chapter 7 Metric Spaces A metric space is a set X that has a notion of the distance d(x, y) between every pair of points x, y X. The purpose of this chapter is to introduce metric spaces and give some
More informationDifferentiability of Exponential Functions
Differentiability of Exponential Functions Philip M. Anselone an John W. Lee Philip Anselone (panselone@actionnet.net) receive his Ph.D. from Oregon State in 1957. After a few years at Johns Hopkins an
More informationOptimal Control Policy of a Production and Inventory System for multi-product in Segmented Market
RATIO MATHEMATICA 25 (2013), 29 46 ISSN:1592-7415 Optimal Control Policy of a Prouction an Inventory System for multi-prouct in Segmente Market Kuleep Chauhary, Yogener Singh, P. C. Jha Department of Operational
More informationGame Theoretic Modeling of Cooperation among Service Providers in Mobile Cloud Computing Environments
2012 IEEE Wireless Communications an Networking Conference: Services, Applications, an Business Game Theoretic Moeling of Cooperation among Service Proviers in Mobile Clou Computing Environments Dusit
More informationHull, Chapter 11 + Sections 17.1 and 17.2 Additional reference: John Cox and Mark Rubinstein, Options Markets, Chapter 5
Binomial Moel Hull, Chapter 11 + ections 17.1 an 17.2 Aitional reference: John Cox an Mark Rubinstein, Options Markets, Chapter 5 1. One-Perio Binomial Moel Creating synthetic options (replicating options)
More informationWeb Appendices to Selling to Overcon dent Consumers
Web Appenices to Selling to Overcon ent Consumers Michael D. Grubb MIT Sloan School of Management Cambrige, MA 02142 mgrubbmit.eu www.mit.eu/~mgrubb May 2, 2008 B Option Pricing Intuition This appenix
More informationMATRIX ALGEBRA AND SYSTEMS OF EQUATIONS
MATRIX ALGEBRA AND SYSTEMS OF EQUATIONS Systems of Equations and Matrices Representation of a linear system The general system of m equations in n unknowns can be written a x + a 2 x 2 + + a n x n b a
More informationPythagorean Triples Over Gaussian Integers
International Journal of Algebra, Vol. 6, 01, no., 55-64 Pythagorean Triples Over Gaussian Integers Cheranoot Somboonkulavui 1 Department of Mathematics, Faculty of Science Chulalongkorn University Bangkok
More informationReading: Ryden chs. 3 & 4, Shu chs. 15 & 16. For the enthusiasts, Shu chs. 13 & 14.
7 Shocks Reaing: Ryen chs 3 & 4, Shu chs 5 & 6 For the enthusiasts, Shu chs 3 & 4 A goo article for further reaing: Shull & Draine, The physics of interstellar shock waves, in Interstellar processes; Proceeings
More informationJON HOLTAN. if P&C Insurance Ltd., Oslo, Norway ABSTRACT
OPTIMAL INSURANCE COVERAGE UNDER BONUS-MALUS CONTRACTS BY JON HOLTAN if P&C Insurance Lt., Oslo, Norway ABSTRACT The paper analyses the questions: Shoul or shoul not an iniviual buy insurance? An if so,
More informationAnswers to the Practice Problems for Test 2
Answers to the Practice Problems for Test 2 Davi Murphy. Fin f (x) if it is known that x [f(2x)] = x2. By the chain rule, x [f(2x)] = f (2x) 2, so 2f (2x) = x 2. Hence f (2x) = x 2 /2, but the lefthan
More informationSimilarity and Diagonalization. Similar Matrices
MATH022 Linear Algebra Brief lecture notes 48 Similarity and Diagonalization Similar Matrices Let A and B be n n matrices. We say that A is similar to B if there is an invertible n n matrix P such that
More informationSensitivity Analysis of Non-linear Performance with Probability Distortion
Preprints of the 19th Worl Congress The International Feeration of Automatic Control Cape Town, South Africa. August 24-29, 214 Sensitivity Analysis of Non-linear Performance with Probability Distortion
More informationSection 6.1 - Inner Products and Norms
Section 6.1 - Inner Products and Norms Definition. Let V be a vector space over F {R, C}. An inner product on V is a function that assigns, to every ordered pair of vectors x and y in V, a scalar in F,
More informationExponential Functions: Differentiation and Integration. The Natural Exponential Function
46_54.q //4 :59 PM Page 5 5 CHAPTER 5 Logarithmic, Eponential, an Other Transcenental Functions Section 5.4 f () = e f() = ln The inverse function of the natural logarithmic function is the natural eponential
More informationWeb Appendices of Selling to Overcon dent Consumers
Web Appenices of Selling to Overcon ent Consumers Michael D. Grubb A Option Pricing Intuition This appenix provies aitional intuition base on option pricing for the result in Proposition 2. Consier the
More informationMODELLING OF TWO STRATEGIES IN INVENTORY CONTROL SYSTEM WITH RANDOM LEAD TIME AND DEMAND
art I. robobabilystic Moels Computer Moelling an New echnologies 27 Vol. No. 2-3 ransport an elecommunication Institute omonosova iga V-9 atvia MOEING OF WO AEGIE IN INVENOY CONO YEM WIH ANOM EA IME AN
More informationf(x) = a x, h(5) = ( 1) 5 1 = 2 2 1
Exponential Functions an their Derivatives Exponential functions are functions of the form f(x) = a x, where a is a positive constant referre to as the base. The functions f(x) = x, g(x) = e x, an h(x)
More informationCost Efficient Datacenter Selection for Cloud Services
Cost Efficient Datacenter Selection for Clou Services Hong u, Baochun Li henryxu, bli@eecg.toronto.eu Department of Electrical an Computer Engineering University of Toronto Abstract Many clou services
More informationOverview of Violations of the Basic Assumptions in the Classical Normal Linear Regression Model
Overview of Violations of the Basic Assumptions in the Classical Normal Linear Regression Model 1 September 004 A. Introduction and assumptions The classical normal linear regression model can be written
More informationScalar : Vector : Equal vectors : Negative vectors : Proper vector : Null Vector (Zero Vector): Parallel vectors : Antiparallel vectors :
ELEMENTS OF VECTOS 1 Scalar : physical quantity having only magnitue but not associate with any irection is calle a scalar eg: time, mass, istance, spee, work, energy, power, pressure, temperature, electric
More informationCross-Over Analysis Using T-Tests
Chapter 35 Cross-Over Analysis Using -ests Introuction his proceure analyzes ata from a two-treatment, two-perio (x) cross-over esign. he response is assume to be a continuous ranom variable that follows
More informationNotes on Determinant
ENGG2012B Advanced Engineering Mathematics Notes on Determinant Lecturer: Kenneth Shum Lecture 9-18/02/2013 The determinant of a system of linear equations determines whether the solution is unique, without
More informationSearch Advertising Based Promotion Strategies for Online Retailers
Search Avertising Base Promotion Strategies for Online Retailers Amit Mehra The Inian School of Business yeraba, Inia Amit Mehra@isb.eu ABSTRACT Web site aresses of small on line retailers are often unknown
More informationIntroduction to Integration Part 1: Anti-Differentiation
Mathematics Learning Centre Introuction to Integration Part : Anti-Differentiation Mary Barnes c 999 University of Syney Contents For Reference. Table of erivatives......2 New notation.... 2 Introuction
More informationarxiv:1309.1857v3 [gr-qc] 7 Mar 2014
Generalize holographic equipartition for Friemann-Robertson-Walker universes Wen-Yuan Ai, Hua Chen, Xian-Ru Hu, an Jian-Bo Deng Institute of Theoretical Physics, LanZhou University, Lanzhou 730000, P.
More informationMathematics Review for Economists
Mathematics Review for Economists by John E. Floy University of Toronto May 9, 2013 This ocument presents a review of very basic mathematics for use by stuents who plan to stuy economics in grauate school
More informationNumerical Analysis Lecture Notes
Numerical Analysis Lecture Notes Peter J. Olver 5. Inner Products and Norms The norm of a vector is a measure of its size. Besides the familiar Euclidean norm based on the dot product, there are a number
More informationA Blame-Based Approach to Generating Proposals for Handling Inconsistency in Software Requirements
International Journal of nowlege an Systems Science, 3(), -7, January-March 0 A lame-ase Approach to Generating Proposals for Hanling Inconsistency in Software Requirements eian Mu, Peking University,
More informationExploratory Optimal Latin Hypercube Designs for Computer Simulated Experiments
Thailan Statistician July 0; 9() : 7-93 http://statassoc.or.th Contribute paper Exploratory Optimal Latin Hypercube Designs for Computer Simulate Experiments Rachaaporn Timun [a,b] Anamai Na-uom* [a,b]
More informationA Data Placement Strategy in Scientific Cloud Workflows
A Data Placement Strategy in Scientific Clou Workflows Dong Yuan, Yun Yang, Xiao Liu, Jinjun Chen Faculty of Information an Communication Technologies, Swinburne University of Technology Hawthorn, Melbourne,
More informationGiven three vectors A, B, andc. We list three products with formula (A B) C = B(A C) A(B C); A (B C) =B(A C) C(A B);
1.1.4. Prouct of three vectors. Given three vectors A, B, anc. We list three proucts with formula (A B) C = B(A C) A(B C); A (B C) =B(A C) C(A B); a 1 a 2 a 3 (A B) C = b 1 b 2 b 3 c 1 c 2 c 3 where the
More informationDi usion on Social Networks. Current Version: June 6, 2006 Appeared in: Économie Publique, Numéro 16, pp 3-16, 2005/1.
Di usion on Social Networks Matthew O. Jackson y Caltech Leeat Yariv z Caltech Current Version: June 6, 2006 Appeare in: Économie Publique, Numéro 16, pp 3-16, 2005/1. Abstract. We analyze a moel of i
More informationLecture Topic: Low-Rank Approximations
Lecture Topic: Low-Rank Approximations Low-Rank Approximations We have seen principal component analysis. The extraction of the first principle eigenvalue could be seen as an approximation of the original
More informationFAST JOINING AND REPAIRING OF SANDWICH MATERIALS WITH DETACHABLE MECHANICAL CONNECTION TECHNOLOGY
FAST JOINING AND REPAIRING OF SANDWICH MATERIALS WITH DETACHABLE MECHANICAL CONNECTION TECHNOLOGY Jörg Felhusen an Sivakumara K. Krishnamoorthy RWTH Aachen University, Chair an Insitute for Engineering
More informationNotes on Symmetric Matrices
CPSC 536N: Randomized Algorithms 2011-12 Term 2 Notes on Symmetric Matrices Prof. Nick Harvey University of British Columbia 1 Symmetric Matrices We review some basic results concerning symmetric matrices.
More information17. Inner product spaces Definition 17.1. Let V be a real vector space. An inner product on V is a function
17. Inner product spaces Definition 17.1. Let V be a real vector space. An inner product on V is a function, : V V R, which is symmetric, that is u, v = v, u. bilinear, that is linear (in both factors):
More informationExample Optimization Problems selected from Section 4.7
Example Optimization Problems selecte from Section 4.7 19) We are aske to fin the points ( X, Y ) on the ellipse 4x 2 + y 2 = 4 that are farthest away from the point ( 1, 0 ) ; as it happens, this point
More informationLow-Complexity and Distributed Energy Minimization in Multi-hop Wireless Networks
Low-Complexity an Distribute Energy inimization in ulti-hop Wireless Networks Longbi Lin, Xiaojun Lin, an Ness B. Shroff Center for Wireless Systems an Applications (CWSA) School of Electrical an Computer
More informationWhat is Linear Programming?
Chapter 1 What is Linear Programming? An optimization problem usually has three essential ingredients: a variable vector x consisting of a set of unknowns to be determined, an objective function of x to
More informationRules for Finding Derivatives
3 Rules for Fining Derivatives It is teious to compute a limit every time we nee to know the erivative of a function. Fortunately, we can evelop a small collection of examples an rules that allow us to
More informationPatch Complexity, Finite Pixel Correlations and Optimal Denoising
Patch Complexity, Finite Pixel Correlations an Optimal Denoising Anat Levin Boaz Naler Freo Duran William T. Freeman Weizmann Institute MIT CSAIL Abstract. Image restoration tasks are ill-pose problems,
More informationThe mean-field computation in a supermarket model with server multiple vacations
DOI.7/s66-3-7-5 The mean-fiel computation in a supermaret moel with server multiple vacations Quan-Lin Li Guirong Dai John C. S. Lui Yang Wang Receive: November / Accepte: 8 October 3 SpringerScienceBusinessMeiaNewYor3
More informationWhich Networks Are Least Susceptible to Cascading Failures?
Which Networks Are Least Susceptible to Cascaing Failures? Larry Blume Davi Easley Jon Kleinberg Robert Kleinberg Éva Taros July 011 Abstract. The resilience of networks to various types of failures is
More informationCoalitional Game Theoretic Approach for Cooperative Transmission in Vehicular Networks
Coalitional Game Theoretic Approach for Cooperative Transmission in Vehicular Networks arxiv:.795v [cs.gt] 8 Feb Tian Zhang, Wei Chen, Zhu Han, an Zhigang Cao State Key Laboratory on Microwave an Digital
More informationSection 4.4 Inner Product Spaces
Section 4.4 Inner Product Spaces In our discussion of vector spaces the specific nature of F as a field, other than the fact that it is a field, has played virtually no role. In this section we no longer
More informationDetecting Possibly Fraudulent or Error-Prone Survey Data Using Benford s Law
Detecting Possibly Frauulent or Error-Prone Survey Data Using Benfor s Law Davi Swanson, Moon Jung Cho, John Eltinge U.S. Bureau of Labor Statistics 2 Massachusetts Ave., NE, Room 3650, Washington, DC
More informationCalculating Viscous Flow: Velocity Profiles in Rivers and Pipes
previous inex next Calculating Viscous Flow: Velocity Profiles in Rivers an Pipes Michael Fowler, UVa 9/8/1 Introuction In this lecture, we ll erive the velocity istribution for two examples of laminar
More informationData Center Power System Reliability Beyond the 9 s: A Practical Approach
Data Center Power System Reliability Beyon the 9 s: A Practical Approach Bill Brown, P.E., Square D Critical Power Competency Center. Abstract Reliability has always been the focus of mission-critical
More informationInner Product Spaces and Orthogonality
Inner Product Spaces and Orthogonality week 3-4 Fall 2006 Dot product of R n The inner product or dot product of R n is a function, defined by u, v a b + a 2 b 2 + + a n b n for u a, a 2,, a n T, v b,
More informationTracking Control of a Class of Hamiltonian Mechanical Systems with Disturbances
Proceeings of Australasian Conference on Robotics an Automation, -4 Dec 4, The University of Melbourne, Melbourne, Australia Tracking Control of a Class of Hamiltonian Mechanical Systems with Disturbances
More informationA New Pricing Model for Competitive Telecommunications Services Using Congestion Discounts
A New Pricing Moel for Competitive Telecommunications Services Using Congestion Discounts N. Keon an G. Ananalingam Department of Systems Engineering University of Pennsylvania Philaelphia, PA 19104-6315
More information1 Solving LPs: The Simplex Algorithm of George Dantzig
Solving LPs: The Simplex Algorithm of George Dantzig. Simplex Pivoting: Dictionary Format We illustrate a general solution procedure, called the simplex algorithm, by implementing it on a very simple example.
More informationFluid Pressure and Fluid Force
0_0707.q //0 : PM Page 07 SECTION 7.7 Section 7.7 Flui Pressure an Flui Force 07 Flui Pressure an Flui Force Fin flui pressure an flui force. Flui Pressure an Flui Force Swimmers know that the eeper an
More informationOption Pricing for Inventory Management and Control
Option Pricing for Inventory Management an Control Bryant Angelos, McKay Heasley, an Jeffrey Humpherys Abstract We explore the use of option contracts as a means of managing an controlling inventories
More informationAn Introduction to Event-triggered and Self-triggered Control
An Introuction to Event-triggere an Self-triggere Control W.P.M.H. Heemels K.H. Johansson P. Tabuaa Abstract Recent evelopments in computer an communication technologies have le to a new type of large-scale
More informationSupporting Adaptive Workflows in Advanced Application Environments
Supporting aptive Workflows in vance pplication Environments Manfre Reichert, lemens Hensinger, Peter Daam Department Databases an Information Systems University of Ulm, D-89069 Ulm, Germany Email: {reichert,
More informationStock Market Value Prediction Using Neural Networks
Stock Market Value Preiction Using Neural Networks Mahi Pakaman Naeini IT & Computer Engineering Department Islamic Aza University Paran Branch e-mail: m.pakaman@ece.ut.ac.ir Hamireza Taremian Engineering
More informationLECTURE 5: DUALITY AND SENSITIVITY ANALYSIS. 1. Dual linear program 2. Duality theory 3. Sensitivity analysis 4. Dual simplex method
LECTURE 5: DUALITY AND SENSITIVITY ANALYSIS 1. Dual linear program 2. Duality theory 3. Sensitivity analysis 4. Dual simplex method Introduction to dual linear program Given a constraint matrix A, right
More informationA Universal Sensor Control Architecture Considering Robot Dynamics
International Conference on Multisensor Fusion an Integration for Intelligent Systems (MFI2001) Baen-Baen, Germany, August 2001 A Universal Sensor Control Architecture Consiering Robot Dynamics Frierich
More information
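The classical multidimensional scaling step at the heart of MDS-MAP, described in the introduction, recovers coordinates from pairwise distances up to a rigid transformation. The following is a minimal sketch of that step, assuming exact pairwise distances are available; the helper name `classical_mds` and the synthetic unit-square example are illustrative, not part of the paper.

```python
import numpy as np

def classical_mds(D, dim=2):
    """Recover coordinates (up to rotation, reflection, and translation)
    from an n x n matrix of pairwise Euclidean distances D."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (D ** 2) @ J                  # double-centered Gram matrix
    w, V = np.linalg.eigh(B)                     # eigenvalues in ascending order
    idx = np.argsort(w)[::-1][:dim]              # keep the top `dim` eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0.0))

# Illustrative example: n sensors placed uniformly at random on the unit
# square, with exact pairwise distances (the idealized full-information case).
rng = np.random.default_rng(0)
X = rng.random((50, 2))
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
Y = classical_mds(D)
# With exact distances the Gram matrix has rank 2, so the embedding
# reproduces all pairwise distances (up to a rigid transform of X).
D_hat = np.linalg.norm(Y[:, None, :] - Y[None, :, :], axis=-1)
print(np.allclose(D, D_hat, atol=1e-6))
```

In MDS-MAP itself the exact distance matrix is not available; it is replaced by shortest-path distances computed from connectivity or local range measurements, and the same embedding step above is then applied unchanged.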