Multimedia Databases. Wolf-Tilo Balke, Philipp Wille. Institut für Informationssysteme, Technische Universität Braunschweig. http://www.ifis.cs.tu-bs.de


Previous lecture: 13 Indexes for Multimedia Data (13.1 R-Trees, 13.2 M-Trees)

14 Indexes for Multimedia Data 2: 14.1 Curse of Dimensionality, 14.2 Dimension Reduction, 14.3 GEMINI Indexing

14.1 Curse of Dimensionality: Why are traditional index structures useless in high-dimensional spaces? For (approximately) uniformly distributed data, all known index trees start failing at about 15-20 dimensions; their use then costs more than a linear scan. Is it possible to create an efficient high-dimensional tree structure? What structure do high-dimensional spaces have anyway?

14.1 Basic Geometry: Relationship between high-dimensional cubes and spheres in R^d. (Hyper-)cube: edge length 1, center c = (1/2, 1/2, ..., 1/2). (Hyper-)sphere: radius 1, center o = (0, 0, ..., 0). Is the center of the cube always inside the sphere?

14.1 Basic Geometry: The Euclidean distance between the two centers is ||c − o|| = sqrt(d · (1/2)²) = sqrt(d)/2. For d = 4 the center of the cube lies exactly on the surface of the sphere; for d ≥ 5 it lies outside the sphere.
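A quick sanity check of this distance, as a minimal Python sketch:

```python
import math

def center_distance(d):
    """Euclidean distance from the cube center (1/2, ..., 1/2) to the origin."""
    return math.sqrt(d * 0.25)  # equals sqrt(d)/2

# d = 4: distance is exactly 1, i.e., on the surface of the unit sphere
# d >= 5: distance exceeds 1, i.e., the cube center lies outside the sphere
```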

14.1 Basic Geometry: Where are points in high-dimensional space located, inside or near the surface of a cube? Consider the cube A = [0,1]^d and cut out a smaller, centered cube B with edge length (1 − 2ε). The volume of A is 1; the volume of B is (1 − 2ε)^d.

14.1 Basic Geometry: In high-dimensional spaces, almost all points lie near the surface of A. Volume (1 − 2ε)^d of the inner cube:

ε \ d     2       50          100         500          1000
0.10      0.64    1.4·10^-5   2·10^-10    3.5·10^-49   1.2·10^-97
0.05      0.81    5.2·10^-3   2.7·10^-5   1.3·10^-23   1.8·10^-46
0.01      0.96    0.36        0.13        4.1·10^-5    1.7·10^-9

If a point is placed uniformly at random in the outer cube, then for large d it lies in the inner cube only with very low probability.
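The table values follow directly from the volume formula; a minimal sketch to reproduce them:

```python
def inner_cube_volume(eps, d):
    """Volume of the centered cube with edge length (1 - 2*eps) inside [0,1]^d."""
    return (1 - 2 * eps) ** d

# Almost all volume migrates to a thin shell near the surface as d grows:
# even a 10% margin (eps = 0.1) leaves a vanishing inner volume at d = 100.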

14.1 Basic Geometry: How big is the volume of spheres inscribed in cubes? Again take the cube A = [0,1]^d and the inscribed sphere S with radius 1/2. For even d, S has volume vol(S) = π^(d/2) / ((d/2)! · 2^d).

14.1 Basic Geometry: How big is the volume of S, and how many randomly distributed points in the cube are needed to ensure that, on average, at least one lies within the sphere? The number of points grows exponentially in d:

d      Volume        No. of points
2      0.79          2
4      0.31          4
10     0.002         402
20     2.46·10^-8    4.06·10^7
40     3.28·10^-21   3.05·10^20
100    1.87·10^-70   5.35·10^69
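These two columns can be reproduced from the volume formula above (the expected number of uniform points needed is the reciprocal of the sphere's volume):

```python
import math

def inscribed_sphere_volume(d):
    """Volume of the sphere with radius 1/2 inscribed in [0,1]^d (even d only)."""
    assert d % 2 == 0
    return math.pi ** (d / 2) / (math.factorial(d // 2) * 2 ** d)

def expected_points_needed(d):
    """Uniform points needed so that, on average, one falls inside the sphere."""
    return 1 / inscribed_sphere_volume(d)
```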

14.1 Basic Geometry: How far away is the nearest neighbor from the center of the sphere? Even with 1,000,000 uniformly distributed points in the cube, the closest neighbors are far away. The maximum possible distance is sqrt(d) (the length of the cube's diagonal).

14.1 Basic Geometry: How many points lie at exactly distance s from the center? As d increases, the variance of the distances decreases: for large d, almost all points have nearly the same distance from the query point (Beyer et al., 1998).

14.1 Conclusion: High-dimensional spaces are different. In high-dimensional spaces, a sequential scan through the objects is often better than using an index structure. On the other hand, this analysis assumed uniformly distributed points in Euclidean space; real-world data may have a lower intrinsic dimensionality. Example: the dimensions price and maximum speed in a vehicle database.

14.1 Speeding up Sequential Search: Vector Approximation Files (Weber et al., 1998). Partition each dimension into intervals: dimension i is divided into 2^b_i intervals, represented by b_i bits. E.g., a dimension split into 4 intervals is represented by 00, 01, 10, and 11. The i-th coordinate of each data point can thus be approximately represented by b_i bits, and each point can be approximated by b = b_1 + b_2 + ... + b_d bits.

14.1 VA Files: Points and their encoding in a VA-File (figure).

14.1 VA Files: Advantages of VA Files. If b is large enough, there are significantly more partitions of the space into hyper-cubes than there are data points. Thus, collisions are nearly impossible, and every bit vector represents just one point. It is much faster to perform bit-wise operations on fixed-length bit vectors than to perform calculations on the original representation of the data.

14.1 VA Files: Query processing works by filter & refine, e.g., for region queries: sequentially scan over all partitions; for partitions that intersect the search region, check all contained points using their exact coordinates. (Figure: actual results, false drops, excluded points.)
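The filter & refine loop can be sketched as follows. This is a minimal illustration with made-up names (`encode`, `region_query`), not the original VA-file implementation; coordinates are assumed to lie in [0, 1):

```python
from typing import List, Tuple

def encode(point: List[float], bits_per_dim: int) -> Tuple[int, ...]:
    """Approximate each coordinate in [0, 1) by its interval index."""
    cells = 1 << bits_per_dim
    return tuple(min(int(x * cells), cells - 1) for x in point)

def region_query(points, center, radius, bits_per_dim=2):
    cells = 1 << bits_per_dim
    width = 1.0 / cells
    results = []
    for p in points:
        cell = encode(p, bits_per_dim)
        # Filter: minimal possible squared distance from the query to the cell
        min_dist_sq = 0.0
        for c, q in zip(cell, center):
            lo, hi = c * width, (c + 1) * width
            gap = max(lo - q, q - hi, 0.0)
            min_dist_sq += gap * gap
        if min_dist_sq > radius * radius:
            continue  # the cell cannot intersect the search region: pruned
        # Refine: exact check on the original coordinates (rejects false drops)
        if sum((x - q) ** 2 for x, q in zip(p, center)) <= radius * radius:
            results.append(p)
    return results
```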

14.2 Dimensionality Reduction: Indexing high-dimensional data is problematic, but does every dimension contain essential information? Strongly correlated dimensions can be combined, e.g., price in Euros and price in Dollars. If the objects differ more in some dimensions, then these dimensions also carry more information than others.

14.2 Principal Component Analysis: Principal Component Analysis (PCA), also known as the Karhunen-Loève transform, detects linear dependencies between features along so-called axes. The most pronounced axis is called the main axis. The correlation is always subject to a certain variation.

14.2 Principal Component Analysis: Linear dependence is a sign of redundancy: a dimension may be representable as a linear combination of other dimensions. Idea: rotate (and shift) the axes such that there is no linear dependence between them, then remove all axes with low variance to keep the error introduced by the omitted information minimal.

14.2 Principal Component Analysis: Example: rotating the basis of a coordinate system (figure).

14.2 Principal Component Analysis: The covariance matrix determines linear dependencies between the different data dimensions. Let X = (X_1, ..., X_d) be a random vector that is uniformly distributed on the set of (d-dimensional) data points. Center the coordinate system around the mean: X̄ = X − m, with m = E[X].

14.2 Principal Component Analysis: The covariance between X_i and X_j is Cov(X_i, X_j) = E[(X_i − E[X_i])(X_j − E[X_j])]. The covariance is positive if X_j tends to have large values whenever X_i does (and vice versa), and negative if X_j tends to have large values whenever X_i has small values (and vice versa).

14.2 Principal Component Analysis: The covariance matrix contains all pairwise covariances between the dimensions. Dimensions with nonzero covariance are interdependent and thus carry redundant information. Idea: rotate the centered data around the origin such that all covariances become 0. The dimensions are then linearly independent of each other, and distances between data points are preserved, since it is only a rotation.

14.2 Principal Component Analysis: Linear algebra: any symmetric matrix A can be diagonalized, i.e., there are matrices Q and D with A = Q D Q^-1, where Q is orthogonal (hence Q^T = Q^-1) and D is diagonal, i.e., it contains only 0s outside the main diagonal. The linear mappings belonging to orthogonal matrices are always just reflections and rotations.

14.2 Principal Component Analysis: The covariance matrix is symmetric. If the covariance matrix is diagonalized and the transformation corresponding to matrix Q is applied to the data, then the covariance between the new data dimensions is always 0: the properties of the transformation carry over to the covariance.

14.2 Principal Component Analysis: The diagonal matrix D contains the eigenvalues of the covariance matrix (eigenvalue decomposition). Dimensions along which the data has low variance (small corresponding eigenvalues) can be removed after the transformation; the resulting error is small and can generally be neglected.
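The whole pipeline (center, diagonalize the covariance matrix, rotate, drop low-variance axes) fits in a few lines. A minimal sketch with illustrative random data, using numpy's `eigh` for the symmetric eigenvalue decomposition:

```python
import numpy as np

rng = np.random.default_rng(0)
# Illustrative 2-D data with strongly correlated dimensions
x = rng.normal(size=200)
data = np.column_stack([x, 2 * x + 0.1 * rng.normal(size=200)])

m = data.mean(axis=0)                  # center around the mean
centered = data - m
cov = np.cov(centered, rowvar=False)   # symmetric covariance matrix

eigvals, q = np.linalg.eigh(cov)       # cov = Q D Q^T
transformed = centered @ q             # rotation: new dimensions are uncorrelated

new_cov = np.cov(transformed, rowvar=False)   # numerically diagonal
# Keep only the axis with the largest eigenvalue -> 1-D reduction
reduced = transformed[:, [np.argmax(eigvals)]]
```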

14.2 Example: Data points, their center m, and the resulting covariance matrix (shown on the slide).

14.2 Example: Transformation of the data (figure).

14.2 Example: The x-axis is the main axis, since it has the largest eigenvalue. The variance on the y-axis is small compared with the variance on the x-axis (0.8 vs. 2.83), so the y-axis can be omitted without much loss of information (dimensionality reduction).

14.2 Latent Semantic Indexing: A method similar to principal component analysis is Latent Semantic Indexing (LSI). In principal component analysis the covariance matrix is decomposed, while in LSI the feature matrix F is decomposed. The feature matrix contains the feature vectors as columns; note that this matrix is usually neither square nor symmetric. In this case F = U D V^T, with a diagonal matrix D and matrices U and V with orthonormal column vectors.

14.2 Latent Semantic Indexing: The decomposition of the feature matrix is a transformation onto minimal concepts, which are scaled by the weights in D. Each concept is an artificial dimension of a vector space that expresses the latent semantics of the feature vectors. LSI is used in information retrieval to represent synonyms by a common concept and to split ambiguous terms into different concepts.

14.2 Latent Semantic Indexing: Through the decomposition of the feature matrix, the semantics of the feature vectors (the columns of the feature matrix) is contained in matrix V^T, and the semantics of the feature dimensions (the rows of the feature matrix) is contained in matrix U. They can be used as in principal component analysis for extracting dependencies.

14.2 Latent Semantic Indexing: As in the eigenvalue decomposition, D indicates the importance of the new dimensions. Irrelevant dimensions (particularly those with weight 0) can be deleted with minimal error.
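A small sketch of this decomposition with an illustrative feature matrix (numpy's `svd` computes F = U D V^T; the matrix below is invented for demonstration):

```python
import numpy as np

# Illustrative feature matrix with feature vectors as columns;
# the two identical rows encode the same latent concept (redundancy).
F = np.array([[1.0, 0.0, 1.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 0.0]])

u, d, vt = np.linalg.svd(F, full_matrices=False)
# d holds the concept weights in decreasing order; near-zero entries mark
# latent dimensions that can be dropped with minimal error.
k = int(np.sum(d > 1e-10))                      # number of relevant concepts
F_approx = u[:, :k] @ np.diag(d[:k]) @ vt[:k, :]
```

Here the rank-deficient F is reconstructed exactly from only k = 2 concepts, which is precisely the dimension reduction LSI exploits.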

14.2 Latent Semantic Indexing: The decomposition of the feature matrix for the previous example yields two latent dimensions with weights 7.95 and 1.8; the weights of these dimensions in point 1 are 0.17 and -0.14. U and V transform the original axes and the data points, respectively, into the new space. (Matrices shown on the slide.)

14.2 Dimension Reduction: Principal component analysis and LSI, however, are difficult to use when the data changes frequently. Data changes can drastically affect the feature values and their respective variances and covariances. For this reason, both are usually computed only on a representative portion of the data and then applied to all objects.

14.3 GEMINI Indexing: GEMINI (GEneric Multimedia object INdexIng, Faloutsos, 1996) can be used for distance functions between objects that are hard to calculate (e.g., edit distances). It is only useful if there are no directly usable efficient search algorithms.

14.3 GEMINI Indexing: Idea: use a distance function that is easy to compute as an estimate for the complex distance function, and immediately prune some objects with this quick-and-dirty test. E.g., use the distance between average colors to exclude certain histograms from the comparison: a histogram with average color red will be more similar to a histogram with average color orange than to one with average color blue.

14.3 GEMINI Indexing: Assume we have to calculate a complex distance function d(A, B) on the set of objects O. Choose a function (transformation) f on O and a simple distance function δ with the property: for all A, B in O: δ(f(A), f(B)) ≤ d(A, B) (the lower bounding property).

14.3 Example: Comparison of time series, e.g., stock quotes. Comparing two curves with the Euclidean distance d on a discrete grid (e.g., closing prices) is very expensive. Transforming each curve with the DFT and comparing the coefficients yields the same result (Parseval's theorem), but is just as expensive.

14.3 Example: But comparing only the first few coefficients corresponds to a dimension reduction (f). The Euclidean distance over only the first coefficients is faster to compute (δ). Since the Euclidean distance is a sum of non-negative (squared) terms, the distance δ over the first terms always underestimates the distance d over all terms (lower bounding).

14.3 Algorithm: Preparation: transform all database objects with the function f. Pruning: to answer a region query (Q, r), exclude all objects A with δ(f(A), f(Q)) > r from the search. Query processing: compute d(B, Q) for all remaining objects B, exclude all objects with d(B, Q) > r, and return all remaining elements.
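The steps above can be sketched for the time-series example, using the first DFT coefficients as f. This is an illustrative sketch (the function names and the choice of numpy's FFT normalization are mine), not the original implementation:

```python
import numpy as np

def f(series, k=2):
    """Dimension reduction: keep the first k normalized DFT coefficients."""
    x = np.asarray(series, dtype=float)
    return np.fft.fft(x)[:k] / np.sqrt(len(x))

def delta(fa, fb):
    """Cheap distance in the reduced space; lower-bounds d by Parseval."""
    return float(np.linalg.norm(fa - fb))

def d(a, b):
    """The 'expensive' original distance: Euclidean distance of the series."""
    return float(np.linalg.norm(np.asarray(a, float) - np.asarray(b, float)))

def region_query(database, q, r, k=2):
    fq = f(q, k)
    candidates = [a for a in database if delta(f(a, k), fq) <= r]  # pruning
    return [a for a in candidates if d(a, q) <= r]                 # refinement
```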

14.3 Algorithm: By using such estimates, false positives may occur in the filter step, but never false negatives.

14.3 Nearest Neighbors: Nearest-neighbor queries can also be performed using the index. Attention: the nearest neighbor according to δ is not always the nearest neighbor according to d.

14.3 Nearest Neighbors: However, if A is the nearest neighbor of Q regarding δ, then the nearest neighbor of Q regarding d can be at most at distance d(A, Q). Therefore: find the nearest neighbor A of Q regarding δ, determine d(A, Q), and issue a region query with radius d(A, Q). The nearest neighbor of Q regarding d is guaranteed to be among the results.

14.3 Fast Map: How can complicated distance functions be simplified while respecting the lower bounding property? FastMap (Faloutsos and Lin, 1995) represents all database objects as points in R^k, on the basis of an arbitrary distance function d. The Euclidean distance then approximates the original distance.

14.3 Fast Map: FastMap represents the objects in k-dimensional space, where k is a pre-selected constant. The higher k, the more accurate the approximation; the lower k, the more efficient the search. Solution: find a suitable compromise depending on the application.

14.3 Fast Map: Requirements: efficient mapping (if possible, only linear effort); good approximation of the original distances; efficient mapping of new objects, independent of all other objects.

14.3 Fast Map: Idea: consider the objects as if they were already in k-dimensional Euclidean space, but without knowing their coordinates. Find k orthogonal coordinate axes based on the distances between the points, project the points onto the axes, and compute the feature values in Euclidean space.

14.3 Fast Map: Projection of a point C onto an axis given by two pivot points A and B. By the law of cosines, the coordinate of C on this axis is x(C) = (d(A,C)² + d(A,B)² − d(B,C)²) / (2 · d(A,B)).

14.3 Fast Map: In this way, each point C can be projected onto the coordinate axis given by A and B. No coordinates of the involved points are required, only their pairwise distances. On this axis, C has the coordinate value d(A, X), where X is the projection of C onto the axis.

14.3 Fast Map: After a new axis has been added, project all data points onto the (k−1)-dimensional hyperplane whose normal vector is the new axis. We again have a data set with a total of k dimensions, including the newly created one; the coordinates of the remaining (k−1) dimensions are unknown, but we can calculate the distances of the points within the hyperplane: arbitrary points A and B in the hyperplane have distance d'(A, B) with d'(A, B)² = d(A, B)² − (x(A) − x(B))², where x(A) and x(B) are the coordinates of A and B on the newly added axis.

14.3 Fast Map: Repeat these steps k times. In this way, we create k new (orthogonal) dimensions that describe the data. This allows us to approximate objects with a complex distance function by points in standard Euclidean space; only the distance matrix is needed. Afterwards, standard algorithms can be applied.

14.3 Fast Map: Problem: which objects should be chosen as pivot elements? The two objects that are farthest apart provide the least loss of accuracy. They can be found at quadratic cost, or with some heuristics even in linear time.
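Putting the pieces together (law-of-cosines projection, hyperplane residual distances, linear-time pivot heuristic), a minimal FastMap sketch might look like this; the function names are mine, and `dist` is assumed to be a (pseudo-)metric:

```python
import math

def fastmap(objects, dist, k):
    """Embed objects into R^k using only their pairwise distances."""
    coords = {o: [] for o in objects}

    def d2(a, b):
        # Residual squared distance after projecting out the axes found so far
        base = dist(a, b) ** 2
        proj = sum((xa - xb) ** 2 for xa, xb in zip(coords[a], coords[b]))
        return max(base - proj, 0.0)

    for _ in range(k):
        # Pivot heuristic: start anywhere, then twice take the farthest object
        a = objects[0]
        b = max(objects, key=lambda o: d2(a, o))
        a = max(objects, key=lambda o: d2(b, o))
        dab2 = d2(a, b)
        for o in objects:
            if dab2 == 0:
                x = 0.0
            else:
                # Law of cosines: coordinate of o on the axis through a and b
                x = (d2(a, o) + dab2 - d2(b, o)) / (2 * math.sqrt(dab2))
            coords[o].append(x)
    return coords
```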

14.3 Example: Calculate the pairwise dissimilarities in a set of words based on the edit distance. To allow for indexing, all words are mapped to multidimensional points in Euclidean space; the Euclidean distance can then be used instead of the more complicated edit distance.

14.3 Example: Edit distance between the words O = {Medium, Database, Multimedia, System, Object} =: {w_1, ..., w_5} (distance matrix shown on the slide).
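The pairwise distance matrix for these words can be computed with the standard dynamic-programming edit distance, e.g.:

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via dynamic programming (two rows of the table)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

words = ["Medium", "Database", "Multimedia", "System", "Object"]
matrix = [[edit_distance(a, b) for b in words] for a in words]
```

This matrix is exactly the input FastMap needs: no coordinates, only pairwise distances.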

14.3 Fast Map: Mapping onto 4-dimensional points (original distance matrix and resulting coordinates shown on the slide).

14 Next Semester: Lectures: Relational Database Systems II; Data Warehousing and Data Mining Techniques; Distributed Database Systems and Peer-to-Peer Data Management. Seminar: Master Seminar Linked Open Data.

Data Warehousing: featuring new and exciting ways to store your data, state enormous queries, mine your data in seconds, and learn stuff which is important for industry.

Data Warehousing: Combining several data sources! (Figure: data sources (operational systems, flat files) feed a staging area, which loads the warehouse (raw data, summary data, metadata); data marts for purchasing, sales, and inventory serve users for analysis, reporting, and mining.)

Data Warehousing: Exciting new OLAP operations! (Figure: slice and dice on a cube with dimensions Product (Laptops, Cell Phones), Time (13.11.2008, 18.12.2008), and Geography.)

Data Warehousing: Classification and clustering! (figure)

Distributed Data Management: featuring distributed query/transaction processing, unstructured P2P networks, distributed hash tables, and concepts of cloud storage (Amazon Dynamo, Hadoop).

Distributed Data Management: Different distributed architectures! (Figure: different configurations of CPUs, memory, and disks connected by a network.)

Distributed Data Management: Distributed data storage! (Figure: a ring of numbered node IDs, as in a distributed hash table.)

Distributed Data Management: Everything as a service! (Figure: individuals, corporations, and non-commercial users access a cloud; cloud middleware handles storage, OS, network, and service/app provisioning plus SLA monitoring, security, billing, and payment over the resources: services, storage, network, OS.)

Relational Databases 2: featuring the architecture of a DBMS, storing data on hard disks, indexing, query evaluation and optimization, transactions and ACID, and recovery.

Relational Databases 2: Data structures for indexes! (figure)

Relational Databases 2: Query optimization! (figure)

Relational Databases 2: Implementing transactions! (Figure: transaction manager, scheduler, storage manager.)

Master Seminar Linked Open Data (LOD): named entity recognition and disambiguation, querying Linked Data & federated query processing, entity reconciliation, ontology alignment, RDF stores, data quality in LOD, relation mining, programming with LOD.