Data Models & Processing and Statistics
1 Data Models & Processing and Statistics
Lecture BigData Analytics 2015
Julian M. Kunkel
University of Hamburg / German Climate Computing Center (DKRZ)
2 Outline
1. Data: Terminology
2. Data Models & Processing
3. Technology
4. Descriptive Statistics
5. Inductive Statistics
6. Summary
3 Basic Considerations About Storing Big Data
- New data arrives constantly (the velocity of big data)
  - How can we update our derived data (and conclusions)? Incremental updates vs. (partial) re-computation algorithms
  - How can we ingest the data? Storage and data management techniques are needed
  - How can we diagnose causes of problems with the data (e.g. inaccuracies)?
- Efficient processing of data is key for analysis
- Management of data
  - Idea: store facts (truth) and never change or delete them
  - Data value may degrade over time; garbage-collect old data
  - Raw data is usually considered immutable, which implies that updating (raw) data is unnecessary
  - Create a model for representing the data
4 Terminology: Data [1, 10]
- Raw data: collected information that is not derived from other data
- Derived data: data produced by some computation/function
- View: presents derived data to answer specific questions
  - Convenient for users (you only see what you need) and faster than re-computation
  - Convenient for administration (e.g. managing permissions)
  - Data access can be optimized
- Dealing with unstructured data
  - We need to extract information from raw unstructured data, e.g. perform text processing using techniques from computational linguistics
  - Semantic normalization is the process of reshaping free-form information into a structured form of data [11]
  - Store the raw data when your processing algorithm improves over time
5 Terminology for Managing Data [1, 10]
- Data life cycle: creation, distribution, use, maintenance & disposition
- Information lifecycle management (ILM): business term; the practices, tools and policies to manage the data life cycle in a cost-effective way
- Data governance: control that ensures that data entry meets precise standards such as business rules, data definitions and data integrity constraints in the data model [10]
- Data provenance: the documentation of inputs, transformations of data and involved systems to support analysis, tracing and reproducibility
- Data lineage (Datenherkunft): forensics; allows identifying the source data used to generate data products (part of data provenance)
- Service level agreements (SLAs): contracts defining quality (e.g. performance/reliability) and responsibilities between service user and provider
6 Data Cleaning and Ingestion
- Importing raw data into a big data system is an important process
- Wrong data leads to wrong conclusions: garbage in, garbage out
- Data wrangling: the process and procedures to clean and convert data from one format to another [1]
  - Data extraction: identify relevant data sets and extract raw data
  - Data munging: clean raw data and convert it to a format suitable for consumption
- Extract, Transform, Load (ETL process): data warehouse term for importing data (from databases) into a data warehouse

Necessary steps:
- Define and document data governance policies to ensure data quality
- Identify and deal with duplicates; synchronize time(stamps)
- Handle missing values (keep NULL or replace them with default values)
- Document the conducted transformations (for data provenance)
- Data sources: conversions of data types, complex transformations
- Extract information from unstructured data (semantic normalization)
- Implement the procedures for bulk loading and cleaning of data
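The deduplication and missing-value handling steps above can be sketched in a few lines of Python. This is a minimal illustration, not a production pipeline; the CSV data, the key column and the default value are all assumed for the example.

```python
import csv
import io

# Hypothetical raw CSV with a duplicate record and a missing value
raw = """id,name,temp
1,alice,21.5
1,alice,21.5
2,bob,
3,carol,19.0
"""

DEFAULT_TEMP = 20.0  # assumed, documented default for missing readings

seen = set()
cleaned = []
for row in csv.DictReader(io.StringIO(raw)):
    key = row["id"]          # deduplicate on the record key
    if key in seen:
        continue
    seen.add(key)
    # replace a missing value with the documented default
    row["temp"] = float(row["temp"]) if row["temp"] else DEFAULT_TEMP
    cleaned.append(row)

print(len(cleaned))  # 3 unique records remain
```

In a real ingestion pipeline each of these decisions (dedup key, default values, type conversions) would be recorded as part of the data provenance.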
7 Data Warehousing: ETL Process
- Extract: read data from source databases
- Transform
  - Perform quality control
  - Improve quality: treat errors and uncertainty
  - Change the layout to fit the data warehouse
- Load: integrate the data into the data warehouse
  - Restructure data to fit the needs of business users
- Relies on batch integration of large quantities of data
8 Outline: Data Models & Processing
1. Data: Terminology
2. Data Models & Processing: Data Model, Process Model, Domain-specific Language, Overview of Data Models, Semantics, Columnar Model, Key-Value Store, Document Model
3. Technology
4. Descriptive Statistics
5. Inductive Statistics
6. Summary
9 Data Models and their Instances [12]
- A data model describes how information is organized in a system
- It is a tool to specify, access and process information
- A model provides operations for accessing and manipulating data that follow certain semantics
- Typical information is some kind of entity (virtual object), e.g. a car or an article
- Logical model: abstraction expressing objects and operations
- Physical model: maps logical structures onto hardware resources (e.g. files, bytes)
- Data model theory: formal methods for describing data models, with tool support
- Applying the theory creates a data model instance for a specific application
(Figure: Source: [12])
Note: the term "data model" is often used ambivalently for a data (meta) model concept/theory or an instance.
10 Process Models [13]
- Models describing processes
- Process: "a series of events to produce a result, especially as contrasted to product" [15]
- Qualities of descriptions
  - Descriptive: describe the events that occur during the process
  - Prescriptive: define the intended process and how it is executed; rules and guidelines steering the process
  - Explanatory: provide rationales for the process, describe requirements, establish links between processes
11 Programming Paradigms [14]
- Programming paradigms are process models for computation
- Fundamental style and abstraction level for computer programming
  - Imperative (e.g. procedural)
  - Declarative (e.g. functional, dataflow, logic)
  - Data-driven programming (describe patterns and transformations)
  - Multi-paradigm: supporting several paradigms (e.g. SQL, Scala)
- Many paradigms are available with tool support
- Parallelism is an important aspect for processing large data
  - In HPC, there are language extensions and libraries to specify parallelism: PGAS, message passing, OpenMP, data flow (e.g. OmpSs), ...
  - In big data analytics, libraries and domain-specific languages: MapReduce, SQL, data flow, streaming and data-driven approaches
12 Domain-specific Language (DSL)
- A programming language specialized for an application domain
  - Mathematics, e.g. statistics, modelling
  - Description of graphs, e.g. graphviz (dot)
  - Processing of big data
- A contrast to general-purpose languages (GPLs)
- Standalone vs. embedded
  - Embedding into a GPL (e.g. regex, SQL) with library support
  - A standalone DSL requires its own toolchain (e.g. compiler); source-to-source compilation (DSL to GPL) is an alternative
- High or low level of abstraction
  - Low-level: includes technical details (e.g. about hardware)
13 Selection of Theory (Concepts) for Data Models
- I/O middleware: NetCDF, HDF5, ADIOS
- Relational model (tuples and tables), e.g. can be physically stored in a CSV file or a database
- Relational model + raster data: operations for N-dimensional data (e.g. pictures, scientific data)
- NoSQL data models: "Not only SQL" (sometimes also read as "No SQL"); lack features of databases
  - Column
  - Document
  - Key-value
  - Named graph
- Fact-based: built on top of atomic facts; well-suited for BI [11]
- Data modeling [10]: the software-engineering process of creating a data model instance for an information system
14 Semantics
Semantics describe operations and their behavior:
- Application programming interface (API)
- Concurrency: behavior of simultaneously executed operations
  - Atomicity: are partial modifications visible to other clients?
  - Visibility: when do changes become visible to other clients?
  - Isolation: do operations influence other ongoing operations?
- Availability: readiness to serve operations; robustness of the system against typical (hardware and software) errors
- (Scalability: availability and performance behavior as the number of requests grows)
- Partition tolerance: continue to operate even if the network breaks partially
- Durability: modifications should be stored on persistent storage
- Consistency: any operation leaves a consistent system

CAP theorem: in a distributed system it is not possible to fulfill all three of
- Consistency (here: immediate visibility of changes among all clients)
- Availability (we'll receive a response for every request)
- Partition tolerance (the system operates despite network failures)
15 Example Semantics: POSIX I/O, ACID, BASE
- POSIX I/O: atomicity and isolation for individual operations; locking possible
- ACID: atomicity, consistency, isolation and durability for transactions; strict semantics for database systems to prevent data loss
- BASE: a typical semantics for big data due to the CAP theorem
  - Basically Available, replicated, Soft state with Eventual consistency [26]
  - Availability: always serve, but a failure may be returned and a retry may be needed
  - Soft state: the state of the system may change over time even without requests, due to eventual consistency
  - Eventual consistency: if no further updates are made, the last state usually becomes visible to all clients
- Big data solutions often exploit the immutability of data
16 Columnar Model
- Data is stored in rows and columns (possibly tables)
- A column is a tuple (name, value and timestamp)
- Each row can contain different columns
- Columns can store complex objects, e.g. collections
- Examples: HBase, Cassandra, Accumulo

Row  name               matrikel  lectures  lecture name
1    "Max Mustermann"   4711      [3]       -
2    "Nina Musterfrau"  4712      [3,4]     -
3    -                  -         -         "Big Data Analytics"
4    -                  -         -         "Hochleistungsrechnen"

Table: Example columnar model for the students; each value has its own timestamp (not shown). Note that lectures and students should be modeled with two tables.
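The row/column/timestamp layout described above can be sketched with nested dictionaries. This is only an illustration of the data model, not a real wide-column store API; the `put` helper and the row keys are invented for the example.

```python
import time

# Sketch of a wide-column layout: table[row_key][column_name] -> (value, timestamp)
table = {}

def put(row, column, value):
    """Store a (value, timestamp) tuple under row/column (hypothetical helper)."""
    table.setdefault(row, {})[column] = (value, time.time())

put("1", "name", "Max Mustermann")
put("1", "matrikel", 4711)
put("2", "name", "Nina Musterfrau")
put("2", "lectures", [3, 4])   # rows may carry different columns

value, ts = table["2"]["lectures"]
print(value)  # [3, 4]
```

Note how row "1" has no "lectures" column at all: rows need not share a schema, which is the defining property of the columnar model on this slide.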
17 Key-Value Store
- Data is stored as values addressed by keys
- A value can be a complex object, e.g. JSON or a collection
- Keys can be forged to simplify lookup, possibly emulating tables with name prefixes
- Examples: CouchDB, BerkeleyDB, Memcached, BigTable

Key        Value
stud/4711  <name>Max Mustermann</name><attended><id>1</id></attended>
stud/4712  <name>Nina Musterfrau</name><attended><id>1</id><id>2</id></attended>
lec/1      <name>Big Data Analytics</name>
lec/2      <name>Hochleistungsrechnen</name>

Table: Example key-value model for the students with embedded XML
18 Document Model
- Documents contain semi-structured data (JSON, XML)
- Each document can contain data with a different structure
- The addressing used to look up documents is implementation specific, e.g. bucket/document key, (sub)collections, hierarchical namespace
- References between documents are possible
- Examples: MongoDB, Couchbase, DocumentDB

  <students>
    <student><name>Max Mustermann</name><matrikel>4711</matrikel>
      <lecturesAttended><id>1</id></lecturesAttended>
    </student>
    <student><name>Nina Musterfrau</name><matrikel>4712</matrikel>
      <lecturesAttended><id>1</id><id>2</id></lecturesAttended>
    </student>
  </students>

Table: Example XML document storing students. Using a bucket/key namespace, the document could be addressed with the key uni/stud in the bucket app1.
19 Graph
- Entities are stored as nodes and relations as edges in the graph
- Properties/attributes provide additional information as key/value pairs
- Examples: Neo4J, InfiniteGraph

Figure: Graph representing the students (most attributes are not shown): "Nina Musterfrau" (age: 25) and "Max Mustermann" (age: 22, matrikel: 4711) are connected to the lectures "Big Data Analytics" and "Hochleistungsrechnen".
20 Relational Model [10]
- Database model based on first-order predicate logic
- Theoretic foundations: relational algebra and relational calculus
- Data is represented as tuples
  - Relation/table: groups tuples with similar semantics
  - A table consists of rows and named columns (attributes)
  - No duplicates of complete rows are allowed
  - In its pure form, no support for collections in tuples
- Schema: specifies the structure of the tables
  - Data types (domain of attributes)
  - Consistency via constraints
  - Organization and optimizations
(Figure: Relational model concepts. Source: [11])
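The student/lecture schema on the next slide can be tried out with SQLite from the Python standard library. A minimal sketch; the table and column names follow the slides, the sample rows are assumed.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE student (matrikel INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE lecture (id INTEGER PRIMARY KEY, name TEXT);
    -- the attends table represents the relation between the two
    CREATE TABLE attends (matrikel INTEGER REFERENCES student,
                          lectureid INTEGER REFERENCES lecture);
""")
con.execute("INSERT INTO student VALUES (4711, 'Max Mustermann')")
con.execute("INSERT INTO lecture VALUES (1, 'Big Data Analytics')")
con.execute("INSERT INTO attends VALUES (4711, 1)")

# Join across the relation table to recombine the tuples
rows = con.execute("""
    SELECT s.name, l.name FROM student s
    JOIN attends a ON a.matrikel = s.matrikel
    JOIN lecture l ON l.id = a.lectureid
""").fetchall()
print(rows)  # [('Max Mustermann', 'Big Data Analytics')]
```

The PRIMARY KEY constraints enforce the "no duplicate rows" property of the relational model for the key attributes.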
21 Example Relational Model for Students Data

Matrikel  Name        Birthday
242       Hans Fritz
Table: Student table

ID  Name
1   Big Data Analytics
2   Hochleistungsrechnen
Table: Lecture table

Matrikel  LectureID
Table: Attends table representing a relation
22 Fact-Based Model [11]
- Store raw data as timestamped atomic facts
- Never delete true facts: immutable data
- Make individual facts unique to prevent duplicates
- Example: social web page
  - Record all changes to user profiles as facts
  - Benefits: allows reconstruction of the profile state at any time; can be queried at any time (if the profile changed recently, a query may return an old state)
- Example: purchases
  - Record each item purchase as a fact together with location, time, ...
Note: the definitions in the data warehousing (OLAP) and big data [11] domains differ slightly.
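The profile-reconstruction benefit can be sketched directly: append timestamped facts to an immutable log and derive the state at any point in time on read. The entity name `user/1` and the attribute values are assumptions for the example.

```python
from datetime import datetime, timezone

# Append-only log of timestamped atomic facts; facts are never updated or deleted
facts = []

def record(entity, attribute, value, ts):
    facts.append({"entity": entity, "attr": attribute, "value": value, "ts": ts})

t = datetime(2015, 1, 1, tzinfo=timezone.utc)
record("user/1", "city", "Hamburg", t)
record("user/1", "city", "Berlin", t.replace(month=6))  # a later change, not an overwrite

def state_at(entity, when):
    """Reconstruct the entity state: latest fact per attribute not newer than `when`."""
    relevant = [f for f in facts if f["entity"] == entity and f["ts"] <= when]
    state = {}
    for f in sorted(relevant, key=lambda f: f["ts"]):
        state[f["attr"]] = f["value"]
    return state

print(state_at("user/1", t.replace(month=3)))   # {'city': 'Hamburg'}
print(state_at("user/1", t.replace(month=12)))  # {'city': 'Berlin'}
```

Because the old fact survives, the March query still sees "Hamburg" while the December query sees "Berlin", exactly the any-point-in-time reconstruction the slide describes.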
23 Outline: Technology
1. Data: Terminology
2. Data Models & Processing
3. Technology: Requirements, Technology
4. Descriptive Statistics
5. Inductive Statistics
6. Summary
24 Wishlist for Big Data Technology [11]
- High availability, fault tolerance
- (Linear) scalability, i.e. 2n servers handle 2n the data volume in the same processing time
- Real-time data processing capabilities (interactive)
- Up-to-date data
- Extensible, i.e. easy to introduce new features and data
- Simple programming models
- Debuggability
- (Cheap & ready for the cloud: the technology works with TCP/IP)
25 Components for Big Data Analytics
Required components for a big data system: servers, storage, processing capabilities, user interfaces.
- Storage
  - NoSQL databases are non-relational, distributed and scale out
  - Hadoop Distributed File System (HDFS)
  - Cassandra, CouchDB, BigTable, MongoDB (see ... for a big list)
  - Data warehouses are useful for well-known and repeated analysis
- Processing capabilities
  - Interactive processing is difficult
  - Available technology offers batch processing and real-time processing (seconds to minutes turnaround)
26 Alternative Processing Technology
Figure: Source: Forrester Webinar. Big Data: Gold Rush Or Illusion? [4]
27 The Hadoop Ecosystem (of the Hortonworks Distribution)
Figure: Source: [20]
28 The Lambda Architecture [11]
- Goal: interactive processing
- Batch layer pre-processes data
  - The master dataset is immutable/never changed
  - Operations are performed periodically
- Serving layer offers performance-optimized (batch) views
- Speed layer serves the deltas between the batch views and recent activity as realtime views
- Robust: errors/inaccuracies of the realtime views are corrected in the batch views
- A query (e.g. "How many ...") merges a batch view with a realtime view

Figure: New data flows into both the batch layer (master dataset) and the speed layer (realtime views); the serving layer provides batch views for queries. Redrawn figure. Source: [11], Fig. 2.1
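The query-time merge of batch and speed layer can be sketched in a few lines. A toy illustration of the architecture's read path, assuming precomputed page-view counts; the page names and counts are invented for the example.

```python
# Batch view: precomputed over the immutable master dataset (assumed counts)
batch_view = {"pageA": 1000, "pageB": 250}
# Speed layer: counts for data that arrived after the last batch run
realtime_view = {"pageA": 7, "pageC": 3}

def query(page):
    """Merge the batch view with the realtime delta at query time."""
    return batch_view.get(page, 0) + realtime_view.get(page, 0)

print(query("pageA"))  # 1007: batch result plus recent delta
print(query("pageC"))  # 3: only the speed layer has seen this page so far
```

After the next batch run the recent data is absorbed into the batch view and the realtime view is discarded, which is how inaccuracies in the speed layer get corrected.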
29 Outline: Descriptive Statistics
1. Data: Terminology
2. Data Models & Processing
3. Technology
4. Descriptive Statistics: Overview, Example Dataset, Distribution of Values, Correlation
5. Inductive Statistics
6. Summary
30 Statistics: Overview
- Statistics is "the study of the collection, analysis, interpretation, presentation, and organization of data" [21]
- It either describes properties of a sample or infers properties of a population
Important terms [10]:
- Unit of observation: the entity described by the data
- Unit of analysis: the major entity that is being analyzed
  - Example: observe the income of each person, analyze the differences between countries
- Statistical population: complete set of items that share at least one property that is subject to analysis
  - Subpopulations share additional properties
- Sample: (sub)set of data collected and/or selected from a population
  - If chosen properly, a sample can represent the population
  - There are many sampling methods; we can never capture ALL items
- Independence: one observation does not affect another
  - Independent example: select two people living in Germany randomly
  - Dependent example: select one household and pick a married couple
31 Statistics: Variables
- Dependent variable: represents the output/effect
  - Example: word count of a Wikipedia article; income of people
- Independent variable: assumed input/cause/explanation
  - Example: number of sentences; age, educational level
- Univariate analysis looks at a single variable
- Bivariate analysis describes/analyzes relationships between two variables
- Multivariate statistics: analyze/observe multiple dependent variables
  - Example: chemicals in the blood stream of people and the chance for cancers; the independent variables are personal information/habits
32 Descriptive Statistics [10]
- The discipline of quantitatively describing the main features of sampled data
- Summarizes observations/selected samples
- Exploratory data analysis (EDA): approach for inspecting data using different chart types, e.g. box plots, histograms, scatter plots
- Methods for univariate analysis
  - Distribution of values, e.g. mean, variance, quantiles
  - Probability distribution and density
  - t-test (e.g. check if data is t-distributed)
- Methods for bivariate analysis
  - Correlation coefficient (Pearson's product-moment coefficient) describes a linear relationship
  - Rank correlation (by Spearman or Kendall): the extent to which one variable increases with another variable
- Methods for multivariate analysis
  - Principal component analysis (PCA): converts correlated variables into linearly uncorrelated variables called principal components
33 Example Dataset: Iris Flower Data Set
- Contains information about iris flowers
- Three species: Iris setosa, Iris virginica, Iris versicolor
- Data: Sepal.Length, Sepal.Width, Petal.Length, Petal.Width

R example:
  > data(iris) # load iris data
  > summary(iris)
    Sepal.Length    Sepal.Width     Petal.Length    Petal.Width       Species
   Min.   :4.300   Min.   :2.000   Min.   :1.000   Min.   :0.100   setosa    :50
   1st Qu.:5.100   1st Qu.:2.800   1st Qu.:1.600   1st Qu.:0.300   versicolor:50
   Median :5.800   Median :3.000   Median :4.350   Median :1.300   virginica :50
   Mean   :5.843   Mean   :3.057   Mean   :3.758   Mean   :1.199
   3rd Qu.:6.400   3rd Qu.:3.300   3rd Qu.:5.100   3rd Qu.:1.800
   Max.   :7.900   Max.   :4.400   Max.   :6.900   Max.   :2.500
  # Draw a matrix of all variables
  > plot(iris[,1:4], col=iris$Species)

Figure: R scatter-plot matrix of the iris data (Sepal.Length, Sepal.Width, Petal.Length, Petal.Width)
34 Distribution of Values: Histograms [10]
- Distribution: frequency of outcomes (values) in a sample
  - Example: species in the iris data set: setosa: 50, versicolor: 50, virginica: 50
- Histogram: graphical representation of the distribution
  - Partition the observed values into bins
  - Count the number of occurrences in each bin
  - It is an estimate for the probability distribution

R example:
  # nclass specifies the number of bins
  # by default, hist uses equidistant bins
  hist(iris$Petal.Length, nclass=10, main="")
  hist(iris$Petal.Length, nclass=25, main="")

Figure: Histograms of Petal.Length (frequency over iris$Petal.Length) with 10 and 25 bins
35 Distribution of Values: Density [10]
- Probability density function (density): likelihood for a continuous variable to take on a given value
- Kernel density estimation (KDE) approximates the density

R example:
  # The kernel density estimator moves a function (kernel) in a window across the samples
  # With bw="SJ" or "nrd" it automatically determines the bandwidth, i.e. the window size
  d = density(iris$Petal.Length, bw="SJ", kernel="gaussian")
  plot(d, main="")

Figure: Density estimation of Petal.Length (N = 150)
36 Distribution of Values: Quantiles [10]
- Percentile: value below which a given percentage of observations falls
- q-quantiles: values that partition a ranked set into q equal-sized subsets
- Quartiles: three data points that split a ranked set into four equal parts
  - Q1=P(25), Q2=median=P(50), Q3=P(75); interquartile range iqr=Q3-Q1
- Boxplot: shows the quartiles (Q1, Q2, Q3) and whiskers
  - Whiskers extend to values up to 1.5 iqr from Q1 and Q3
  - Outliers lie outside the whiskers

R example:
  > boxplot(iris, range=1.5) # 1.5 interquartile range
  > d = iris$Sepal.Width
  > quantile(d)
    0%  25%  50%  75% 100%
   2.0  2.8  3.0  3.3  4.4
  > q3 = quantile(d, 0.75) # pick the value below which 75% fall
  > q1 = quantile(d, 0.25)
  > irq = (q3 - q1)
  # identify all outliers based on the interquartile range
  > mask = d < (q1 - 1.5*irq) | d > (q3 + 1.5*irq)
  # pick the outlier selection from the full data set
  > o = iris[mask,]
  # draw the species names into the boxplot
  > text(rep(1.5, nrow(o)), o$Sepal.Width, o$Species, col=as.numeric(o$Species))

Figure: Boxplot of the iris variables; the Sepal.Width outliers are labeled with their species (setosa, setosa, versicolor, ...)
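The same quartile and outlier logic can be reproduced with the Python standard library. A small sketch on assumed sample data; `method="inclusive"` mirrors the linear interpolation that R's default `quantile()` type uses.

```python
import statistics

# Assumed sample data (roughly in the range of the Sepal.Width column)
data = [2.0, 2.8, 3.0, 3.1, 3.3, 3.5, 4.4, 2.9, 3.0, 3.2]

# Three quartiles Q1, Q2 (median), Q3
q1, q2, q3 = statistics.quantiles(data, n=4, method="inclusive")
iqr = q3 - q1

# Tukey's rule: points beyond 1.5*IQR from the quartiles count as outliers
outliers = [x for x in data if x < q1 - 1.5 * iqr or x > q3 + 1.5 * iqr]
print(q2, iqr, outliers)  # outliers: [2.0, 4.4]
```

The same 1.5-IQR rule is what draws the whiskers in the boxplot on this slide.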
37 Density Plot Including Summary
  d = density(iris$Petal.Length, bw="SJ", kernel="gaussian")
  # add space for two axes
  par(mar=c(5, 4, 4, 6) + 0.1)
  plot(d, main="")
  # draw lines for min, Q1, median, Q3, max and the mean
  q = quantile(iris$Petal.Length)
  q = c(q, mean(iris$Petal.Length))
  abline(v=q[1], lty=2, col="green", lwd=2)
  abline(v=q[2], lty=3, col="blue", lwd=2)
  abline(v=q[3], lty=3, col="red", lwd=3)
  abline(v=q[4], lty=3, col="blue", lwd=2)
  abline(v=q[5], lty=2, col="green", lwd=2)
  abline(v=q[6], lty=4, col="black", lwd=2)
  # add titles
  text(q, rep(-0.01, 6), c("min", "Q1", "median", "Q3", "max", "mean"))

  # identify x limits
  xlim = par("usr")[1:2]
  par(new=TRUE)

  # empirical cumulative distribution function
  e = ecdf(iris$Petal.Length)
  plot(e, col="blue", axes=FALSE, xlim=xlim, ylab="", xlab="", main="")
  axis(4, ylim=c(0,1.0), col="blue")
  mtext("cumulative distribution function", side=4, line=2.5)

Figure: Density estimation with the 5-number summary and the cumulative distribution function (N = 150)
38 Correlation Coefficients
- Measure the (linear) correlation between two variables
- Value between -1 and +1
  - >0.7: strong positive correlation
  - >0.2: weak positive correlation
  - 0: no correlation; <0: negative correlation

R example:
  library(corrplot)
  d = iris
  d$Species = as.numeric(d$Species)
  corrplot(cor(d), method = "circle") # linear correlation

  mplot = function(x, y, name){
    pdf(name, width=5, height=5) # plot into a PDF
    p = cor(x, y, method="pearson") # compute correlation
    k = cor(x, y, method="spearman")
    plot(x, y, xlab=sprintf("x\n cor. coeff: %.2f rank coef.: %.2f", p, k))
    dev.off()
  }

  mplot(iris$Petal.Length, iris$Petal.Width, "iris-corr.pdf")
  # cor. coeff: 0.96 rank coef.: ...
  x = 1:10; y = c(1,3,2,5,4,7,6,9,8,10)
  mplot(x, y, "linear.pdf") # cor. coeff: 0.95 rank coef.: ...
  mplot(x, x*x*x, "x3.pdf") # cor. coeff: 0.93 rank coef.: 1

Figure: Corrplot of the iris variables (Sepal.Length, Sepal.Width, Petal.Length, Petal.Width, Species)
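Pearson's coefficient can also be computed directly from its definition, which makes the R output above easy to verify. This sketch uses the same toy data as the R example; the `pearson` helper is written out by hand rather than taken from a library.

```python
import math

def pearson(x, y):
    # Pearson product-moment correlation: covariance over the product
    # of the standard deviations (scale factors cancel out)
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

x = list(range(1, 11))
y = [1, 3, 2, 5, 4, 7, 6, 9, 8, 10]   # same toy data as the R example
print(round(pearson(x, y), 2))        # 0.95: strong positive linear correlation

cubic = [v ** 3 for v in x]
print(round(pearson(x, cubic), 2))    # 0.93: monotone but not perfectly linear
```

The cubic case shows why the slide also introduces rank correlation: Spearman's coefficient is exactly 1 there, because the relationship is perfectly monotone even though it is not linear.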
39 Example Correlations for X,Y Plots
Figure: Correlations for x,y plots. Source: [22]
40 Outline: Inductive Statistics
1. Data: Terminology
2. Data Models & Processing
3. Technology
4. Descriptive Statistics
5. Inductive Statistics: Overview, Linear Models, Time Series
6. Summary
41 Inductive Statistics: Some Terminology [10]
- Statistical inference is the process of deducing properties of a population by analyzing samples
  - Build a statistical model and test the hypothesis that it applies
  - Allows deducing propositions (statements about data properties)
- Statistical hypothesis: a hypothesis that is testable on a process modeled via a set of random variables
- Statistical model: embodies a set of assumptions concerning the generation of the observed data, and similar data from a larger population; a model represents, often in considerably idealized form, the data-generating process
- Validation: process to verify that a model/hypothesis is likely to represent the observation/population
- Significance: a significant finding is one that is determined (statistically) to be very unlikely to have happened by chance
- Residual: difference between an observation and the estimated/predicted value
42 Statistics: Inductive Statistics [10]
Testing process:
1. Formulate the default (null) hypothesis (the one we try to reject/nullify) and the alternative hypothesis
2. Formulate statistical assumptions, e.g. independence of variables
3. Decide which statistical tests can be applied to disprove the null hypothesis
4. Choose the significance level α for wrongly rejecting the null hypothesis
5. Compute the test results, especially the p-value (the probability of obtaining a result equal to or more extreme than the observed one)
6. If p-value < α, then reject the null hypothesis and go for the alternative
Be careful: a p-value ≥ α does not imply that the null hypothesis is true, though it may be.

Example hypotheses:
- The Petal.Width of each iris flower species follows a normal distribution
- The waiting time of a supermarket checkout queue is gamma distributed
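The six-step process above can be walked through with a permutation test, which needs nothing beyond the standard library. This is a sketch on assumed sample data, not the Shapiro-Wilk test used on the next slide: H0 is "both groups have the same mean", and the p-value is estimated by shuffling the group labels.

```python
import random

random.seed(0)  # make the sketch reproducible
a = [5.1, 4.9, 5.4, 5.0, 5.2, 5.3]   # assumed measurements, group A
b = [6.0, 6.2, 5.9, 6.1, 6.3, 5.8]   # assumed measurements, group B

observed = abs(sum(a) / len(a) - sum(b) / len(b))

# Under H0 the labels are exchangeable: shuffle and count how often a
# difference at least as extreme as the observed one appears by chance
pool = a + b
trials, extreme = 10000, 0
for _ in range(trials):
    random.shuffle(pool)
    pa, pb = pool[:len(a)], pool[len(a):]
    if abs(sum(pa) / len(pa) - sum(pb) / len(pb)) >= observed:
        extreme += 1

p_value = extreme / trials
alpha = 0.05
print(p_value < alpha)  # True: reject H0, the mean difference is significant
```

Note step 6 in action: rejecting H0 only says the observed difference is very unlikely under H0 at level α, not that the alternative is proven.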
43 Checking if Petal.Width is Normally Distributed
R example:
  # The Shapiro-Wilk test allows testing if a population represented by a sample is normally distributed
  # The null hypothesis claims that the data is normally distributed

  # Let us check for the full population
  > shapiro.test(iris$Petal.Width)
  # W = ..., p-value = 1.68e-08
  # The p-value is almost 0, thus reject the null hypothesis =>
  # In the full population, Petal.Width is not normally distributed

  # Maybe Petal.Width is normally distributed for the individual species?
  for (spec in levels(iris$Species)){
    print(spec)
    y = iris[iris$Species==spec,]
    # The Shapiro-Wilk test checks if the data is normally distributed
    print(shapiro.test(y$Petal.Width))
  }

  [1] "virginica"
  W = ..., p-value = 0.087
  # A small p-value means a low chance this happens under the null hypothesis, here about 8.7%
  # With the typical significance level of 0.05 we cannot reject normality
  # For simplicity, we may now assume Petal.Width is normally distributed for this species

  [1] "setosa"
  W = ..., p-value = 8.659e-07 # it is not normally distributed

  [1] "versicolor"
  W = ..., p-value = ... # still too unlikely to be normally distributed
44 Linear Models (for Regression) [10]
- Linear regression: modeling the relationship between a dependent variable Y and explanatory variables X_i
- Assume n samples are observed with their values in the tuples (Y_i, X_i1, ..., X_ip)
  - Y_i is the dependent variable (label)
  - X_ij are the independent variables
- Assumption for linear models: normally distributed variables
- A linear regression model fits Y_i = c_0 + c_1 f_1(X_i1) + ... + c_p f_p(X_ip) + ε_i
  - Determine the coefficients c_0 to c_p to minimize the error term ε
  - The functions f_i can be non-linear

R example:
  # R allows defining equations; here Petal.Width is our dependent variable
  m = lm("Petal.Width ~ Petal.Length + Sepal.Width", data=iris)

  print(m) # print the coefficients
  # (Intercept) Petal.Length  Sepal.Width
  #     ...         ...           ...
  # So Petal.Width = ... + ... * Petal.Length + ... * Sepal.Width
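For a single explanatory variable the least-squares coefficients have a closed form, which makes the fitting step above easy to see. A sketch on assumed data points scattered around y = 2x; in practice one would call a library such as R's lm() as on the slide.

```python
# Simple linear regression y = c0 + c1*x via the closed-form least-squares
# solution: c1 = cov(x, y) / var(x), c0 = mean(y) - c1 * mean(x)
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 7.8, 10.1]   # assumed noisy samples around y = 2x

n = len(x)
mx, my = sum(x) / n, sum(y) / n
c1 = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
c0 = my - c1 * mx

# The residuals are the epsilon_i terms the fit minimizes (in squared sum)
residuals = [b - (c0 + c1 * a) for a, b in zip(x, y)]
print(round(c1, 2))  # 1.99, close to the true slope 2
```

A useful sanity check: with an intercept in the model, the residuals of a least-squares fit always sum to (numerically) zero.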
45 Compare Prediction with Observation
  # Predict Petal.Width for a given Petal.Length and Sepal.Width
  d = predict(m, iris)

  # Add the prediction to our data frame
  iris$Prediction = d

  # Plot the differences
  plot(iris$Petal.Width, col="black")
  points(iris$Prediction, col=rgb(1,0,0,alpha=0.8))

  # Sort the observations
  d = iris[sort(iris$Petal.Width, index.return=TRUE)$ix,]
  plot(d$Petal.Width, col="black")
  points(d$Prediction, col=rgb(1,0,0,alpha=0.8))

Figure: Iris linear model, observed vs. predicted Petal.Width over the index; unsorted and with sorted data
46 Analysing Model Accuracy [23]
- Std. error of the estimate: variability of c_i; should be lower than c_i
- t-value: measures how useful a variable is for the model
- Pr(>|t|), the two-sided p-value: probability that the variable is not significant
- Degrees of freedom: number of independent samples (avoid overfitting!)
- R-squared: fraction of the variance explained by the model; 1 is optimal
- F-statistic: the F-test analyses the model goodness; a high value is good

  summary(m) # Provide detailed information about the model
  # Residuals:
  #     Min      1Q  Median      3Q     Max
  #     ...     ...     ...     ...     ...
  #
  # Coefficients:
  #              Estimate Std.Error t value Pr(>|t|)
  # (Intercept)      ...       ...     ...  ...e-06 ***
  # Petal.Length     ...       ...     ...  < 2e-16 ***
  # Sepal.Width      ...       ...     ...  ...     *
  # ---
  # Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
  #
  # Residual standard error: ... on 147 degrees of freedom
  # Multiple R-squared: ..., Adjusted R-squared: ...
  # F-statistic: ... on 2 and 147 DF, p-value: < 2.2e-16

Akaike's Information Criterion (AIC)
- Idea: prefer accurate models with a smaller number of parameters
- Test various models to reduce the AIC; improve good candidates
- The AIC allows checking which models can be excluded
47 Time Series
- A time series is a sequence of observations, e.g. temperature or a stock price over time
- Prediction of the future behavior is of high interest
- An observation may depend on any previous observation
  - Trend: tendency in the data
  - Seasonality: periodic variation
- Prediction models
  - Autoregressive models AR(p): depend linearly on the last p values (+ white noise)
  - Moving-average models MA(q): depend linearly on the last q white-noise terms, the "random shocks" (+ white noise)
  - Autoregressive moving-average (ARMA) models: combine AR and MA models
  - Autoregressive integrated moving average ARIMA(p, d, q): combines AR, MA and differencing (seasonal) models
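The AR(p) idea can be sketched for p = 1 without any statistics library: generate a synthetic series x_t = c + phi * x_{t-1} + noise and recover phi by regressing x_t on x_{t-1}. The true parameters and the synthetic data are assumptions of the example; real analyses would use a package like R's forecast, as on the next slide.

```python
import random

random.seed(42)
phi_true, c_true = 0.8, 1.0   # assumed true AR(1) parameters

# Generate a synthetic AR(1) series x_t = c + phi * x_{t-1} + white noise
x = [5.0]
for _ in range(1, 2000):
    x.append(c_true + phi_true * x[-1] + random.gauss(0.0, 1.0))

# Fit AR(1) by least squares: regress x_t on x_{t-1}
xs, ys = x[:-1], x[1:]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
phi = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / sum((a - mx) ** 2 for a in xs)
c = my - phi * mx

forecast = c + phi * x[-1]   # one-step-ahead prediction
print(round(phi, 2))         # estimate should be close to the true 0.8
```

Higher-order AR(p), MA and ARIMA fits follow the same "regress on the past" idea but with more terms and more involved estimation.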
48 Example Time Series
- Temperature in Hamburg every day at 12:00
- Three years of data (1980, 1996, 2014)

  d = read.csv("temp-hamburg.csv", header=TRUE)
  d$lat = NULL; d$lon = NULL
  colnames(d) = c("h", "t")
  d$t = d$t - 273.15 # convert degrees Kelvin to Celsius
  plot(d$t, xlab="day", ylab="Temperature in C")

  pdf("hamburg-temp-models.pdf", width=5, height=5)
  plot(d$t, xlab="day", ylab="Temperature in C")

  # Enumerate the values
  d$index = 1:nrow(d)

  # General trend
  m = lm("t ~ index", data=d)
  points(predict(m, d), col="green")

  # Summer/winter model per day of the year
  d$day = c(rep(c(1:183, 182:1), 3), 0)
  m = lm("t ~ day + index", data=d)
  points(predict(m, d), col="blue"); dev.off()

  library(forecast)
  # Convert the data to a time series
  ts = ts(d$t, frequency = 365, start = c(1980, 1))
  # Apply a model for non-seasonal data on seasonally adjusted data
  tsmod = stlm(ts, modelfunction=ar)
  plot(forecast(tsmod, h=365))

Figure: Temperature time series
49 Example Time Series Models
Figure: Linear models for the trend and the winter/summer cycle (temperature in C per day)
Figure: Forecasts from STL + AR(14), using the forecast package/stlm()
50 Summary
- Data cleaning and ingestion are key to successful modeling
- Big data can be considered immutable
- Data models describe how information is organized
  - I/O middleware, relational; NoSQL: column, document, key-value, graphs
- Semantics describe operations and behavior, e.g. POSIX, ACID, BASE
- Process models and programming paradigms describe how to transform and analyze data
- The Hadoop ecosystem offers means for batch and real-time processing
- The lambda architecture is a concept for optimizing real-time processing
- Descriptive statistics helps analyzing samples
- Inductive statistics provides concepts for inferring knowledge
51 Diagrams with Python

  import numpy as np
  import matplotlib
  # Do not use the X11 backend
  matplotlib.use('Agg')
  import matplotlib.pyplot as plt
  plt.style.use('ggplot')  # draw like R's ggplot

  # Create 4 subplots
  (fig, axes) = plt.subplots(ncols=1, nrows=4)
  fig.set_size_inches(15, 15)  # x, y
  matplotlib.rcParams.update({'font.size': 22})
  (ax1, ax2, ax3, ax4) = axes.ravel()

  # Create a 2D scatter plot with random data
  (x, y) = np.random.normal(size=(2, 10))
  ax1.plot(x, y, 'o')

  # Create a bar graph
  width = 0.25  # bar width (value garbled in the transcription)
  x = np.arange(4)  # create a vector with 0 to 3
  (y1, y2, y3) = np.random.randint(1, 10, size=(3, 4))
  ax2.bar(x, y1, width)
  # Color schemes provide fancier output
  ax2.bar(x + width, y2, width, color=plt.rcParams['axes.color_cycle'][2])
  ax2.bar(x + 2 * width, y3, width, color=plt.rcParams['axes.color_cycle'][3])
  ax2.set_xticklabels(['1', '2', '3', '4'])
  ax2.set_xticks(x + width)

  # Draw a line plot
  ax3.plot([0, 1, 2, 3, 4, 5], [1, 0, 2, 1, 0, 1])
  # Draw a second line
  ax3.plot(range(0, 5), [0, 2, 1, 0, 1], color="blue")

  # Draw a boxplot including Q1, Q2, Q3; whiskers at 0.5*IQR
  ax4.boxplot((x, y), labels=["A", "B"], whis=0.5)

  # Store to file or visualize with plt.show()
  fig.savefig('matplotlib-demo.pdf', dpi=100)

Figure: Output of the sample code
See ... for further example code.
52 Bibliography

4 Holger Kisker, Martha Bennet. Big Data: Gold Rush Or Illusion? Forrester Big Data Webinar.
10 Wikipedia
11 Book: N. Marz, J. Warren. Big Data: Principles and best practices of scalable real-time data systems
Book: Y. Dodge. The Oxford Dictionary of Statistical Terms. ISBN ...
Forecasting: Principles and Practice
25 Hypothesis testing
26 Overcoming CAP with Consistent Soft-State Replication