Maximum Likelihood vs. Least Squares


1 Maximum Likelihood vs. Least Squares

                           Maximum Likelihood                Least Squares
  Precondition:            pdf exactly known                 Mean and variance known
  Basis:                   Height of pdf                     Deviation from mean
  Efficiency:              Maximal                           Maximal among linear estimators
  Complexity:              Complicated, mostly non-linear    For linear models exactly solvable
  Robustness:              No (tail modelling)               No (tails)
  Correlated measurements: Difficult                         Easy, with covariance matrix
  Special case:            Identical for Gaussian errors

Statistical Methods BND 2011, Einführung in die Datenanalyse, Michael Feindt, Maria Laach 2004

2 Monte Carlo

3 Quasirandom numbers / Pseudorandom numbers / Regular grid

4 NeuroBayes in and outside the Ivory Tower of High Energy Physics
Particle Physics Seminar, University Bonn, January 13, 2011
Michael Feindt, IEKP, KCETA, Karlsruhe Institute of Technology KIT; Scientific Advisor, Phi-T GmbH
KIT: University of the State of Baden-Wuerttemberg and National Research Center of the Helmholtz Association

5 Agenda of this talk:
- What is NeuroBayes? Where does it come from?
- How to use NeuroBayes classification
- Robustness, speed, generalisation ability, ease of use
- NeuroBayes output is a Bayesian posterior probability
- How to find data-Monte Carlo disagreements with NeuroBayes
- How to train NeuroBayes from data when no good MC model is available
- Examples of successful NeuroBayes applications in physics
- Full B reconstruction at B factories
- Examples from industry applications

6 History of NeuroBayes
- M.F. & co-workers: experience with NNs in DELPHI, development of many packages: ELEPHANT, MAMMOTH, BSAURUS etc.
- Invention of the NeuroBayes algorithm
- 1997-now: extensive use of NeuroBayes in CDF II
- NeuroBayes specialisation for economy at the University of Karlsruhe, supported by BMBF
- 2002: Phi-T GmbH founded; industrial projects, further developments
- 2008: foundation of sub-company Phi-T products & services, second office in Hamburg
- 2008-now: extensive use of NeuroBayes in Belle
- 2010: LHCb decides to use NeuroBayes massively to optimise reconstruction code
- 2011: relaunch as Blue Yonder
- Phi-T owns exclusive rights for NeuroBayes
- Staff (currently 50), almost all physicists (mainly from HEP)
- Continuous further development of NeuroBayes

7 Successful in competition with other data-mining methods
World's largest student competition: Data-Mining-Cup
- 2005: fraud detection in internet trading
- 2006: price prediction in eBay auctions
- 2007: coupon redemption prediction
- 2008: lottery customer behaviour prediction

8 Since 2009, new rules: only up to 2 teams per university.
2009 Task: prognosis of the turnover of 8 books in 2500 book stores. Winner: Uni Karlsruhe II, with help of NeuroBayes.
Task: optimisation of individual customer care measures in an online shop. Winner: Uni Karlsruhe II, with help of NeuroBayes.


14 NeuroBayes task 1: Classifications
Classification: binary targets. Each single outcome will be yes or no.
NeuroBayes output is the Bayesian posterior probability that the answer is yes (given that inclusive rates are the same in the training and test sample; otherwise a simple transformation is necessary).
Examples:
> This elementary particle is a K meson.
> This jet is a b-jet.
> This three-particle combination is a D+.
> This event is real data and not Monte Carlo.
> This neutral B meson was a particle and not an antiparticle at production time.
> Customer Meier will cancel his contract next year.

15 NeuroBayes task 2: Conditional probability densities
Probability density f(t|x) for real-valued targets: for each possible (real) value t, a probability (density) is given. From that, all statistical quantities can be deduced: expectation value, standard deviation (volatility), median, mode, deviations from the normal distribution (e.g. crash probability), etc.
Examples:
> Energy of an elementary particle (e.g. a semileptonically decaying B meson with missing neutrino)
> Q value (invariant mass) of a decay
> Lifetime of a decay
> Phi direction of an inclusively reconstructed B meson in a jet
> Turnover of an article next year (very important in industrial applications)
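The summary statistics listed above can be recovered numerically once the density is available on a grid. A minimal sketch (the grid, density values and function names are illustrative assumptions, not NeuroBayes API):

```cpp
#include <vector>
#include <cmath>
#include <cassert>

// Given a discretized density f(t) on a grid t, recover the summary
// statistics mentioned on the slide: mean, standard deviation and mode.
struct DensitySummary { double mean, stddev, mode; };

DensitySummary summarize(const std::vector<double>& t,
                         const std::vector<double>& f) {
    double norm = 0.0, mean = 0.0, m2 = 0.0;
    double mode = t[0], fmax = f[0];
    for (size_t i = 0; i < t.size(); ++i) {
        norm += f[i];
        mean += f[i] * t[i];
        if (f[i] > fmax) { fmax = f[i]; mode = t[i]; }  // highest density point
    }
    mean /= norm;
    for (size_t i = 0; i < t.size(); ++i)
        m2 += f[i] * (t[i] - mean) * (t[i] - mean);     // weighted variance
    return { mean, std::sqrt(m2 / norm), mode };
}
```

For a symmetric density the mean and mode coincide; for skewed densities (the "crash probability" case above) they separate, which is exactly why the full density is more informative than a single point estimate.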

16 One way to construct a one-dimensional test statistic from multidimensional input (an MVA method): neural networks. Self-learning procedures, copied from nature.
[Brain diagram: Frontal Lobe, Motor Cortex, Parietal Cortex, Temporal Lobe, Brain Stem, Occipital Lobe, Cerebellum]

17 Neural networks
The NeuroBayes classification core is based on a simple feed-forward neural network. The information (the knowledge, the expertise) is coded in the connections between the neurons. Each neuron performs fuzzy decisions. A neural network can learn from examples.
Human brain: about 100 billion (10^11) neurons, about 100 trillion (10^14) connections.
NeuroBayes: 10 to a few 100 neurons.
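A minimal sketch of such a feed-forward pass, with the "knowledge" in the weight matrices and each neuron making a fuzzy (sigmoid) decision. The weights here are arbitrary placeholders, not a trained NeuroBayes expertise:

```cpp
#include <vector>
#include <cmath>
#include <cassert>

// Smooth "fuzzy decision" of a single neuron.
double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

// One hidden layer; each row of wHidden holds the weights of one hidden
// neuron (last entry = bias), wOut holds the output-neuron weights.
double feedForward(const std::vector<double>& input,
                   const std::vector<std::vector<double>>& wHidden,
                   const std::vector<double>& wOut) {
    std::vector<double> hidden;
    for (const auto& w : wHidden) {
        double sum = w.back();                              // bias term
        for (size_t i = 0; i < input.size(); ++i) sum += w[i] * input[i];
        hidden.push_back(sigmoid(sum));                     // fuzzy decision
    }
    double out = wOut.back();
    for (size_t j = 0; j < hidden.size(); ++j) out += wOut[j] * hidden[j];
    return sigmoid(out);                                    // output in (0,1)
}
```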

18 Neural Network basic functions

19 Neural network transfer functions

20 NeuroBayes classifications
Chain: Input → Preprocessing → Significance control → Postprocessing → Output.
NeuroBayes Teacher: learning of complex relationships from existing data bases (e.g. Monte Carlo).
NeuroBayes Expert: prognosis for unknown data.

21 How it works: training and application
Training: historic or simulated data. Data set: a = ..., b = ..., c = ..., t = ! → NeuroBayes Teacher → expert system (expertise).
Application: actual (new real) data. Data set: a = ..., b = ..., c = ..., t = ? → NeuroBayes Expert → probability that the hypothesis is correct (classification), or probability density f(t) for variable t.

22 Neural network training
Backpropagation (Rumelhart et al. 1986): calculate the gradient backwards by applying the chain rule, then optimise using the gradient descent method. Step size?
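The chain rule and the gradient-descent update can be spelled out on the smallest possible case, a single sigmoid neuron (network, data and step size are toy choices for illustration, not the NeuroBayes training code):

```cpp
#include <cmath>
#include <cassert>

double sigmoid(double x) { return 1.0 / (1.0 + std::exp(-x)); }

// One gradient-descent step for y = sigmoid(w*x + b) with squared-error
// loss E = 0.5*(y - t)^2.  The chain rule, applied backwards, gives
//   dE/dw = (y - t) * y*(1 - y) * x,   dE/db = (y - t) * y*(1 - y).
void trainStep(double x, double t, double& w, double& b, double eta) {
    double y = sigmoid(w * x + b);
    double delta = (y - t) * y * (1.0 - y);   // backpropagated error
    w -= eta * delta * x;                     // gradient-descent update
    b -= eta * delta;                         // eta = step size
}
```

The open question on the slide, the step size eta, is exactly what makes plain gradient descent slow; this is why later slides mention second-order (BFGS) methods instead.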

23 Neural network training
Difficulty: find the global minimum of a highly non-linear function in a high-dimensional (~ >100 dimensions) space. Imagine the task of finding the deepest valley in the Alps (just 2 dimensions): it is easy to find the next local minimum, but globally... impossible!
→ needs good preconditioning

24 NeuroBayes strengths
NeuroBayes is a very powerful algorithm:
- excellent generalisability (does not overtrain)
- robust: always finds a good solution, even with erratic input data
- fast
- automatically selects significant variables
- output interpretable as a Bayesian a posteriori probability
- can train with weights and background subtraction
NeuroBayes is easy to use:
- examples and documentation available
- good default values for all options: fast start!
- direct interface to TMVA available
- introduction into ROOT planned

25 <phi-t> NeuroBayes
> is based on 2nd generation neural network algorithms, Bayesian regularisation, optimised preprocessing with non-linear transformations and decorrelation of input variables, and linear correlation to the output.
> learns extremely fast due to 2nd order BFGS methods, and even faster with 0-iteration mode.
> produces small expertise files.
> is extremely robust against outliers in input data.
> is immune against learning statistical noise by heart.
> tells you if there is nothing relevant to be learned.
> delivers sensible prognoses already with small statistics.
> can handle weighted events, even negative weights.
> has advanced boost and cross-validation features.
> is steadily developed further professionally.

26 Bayes' Theorem
P(T|D) ≠ P(D|T), but

  P(T|D) = P(D|T) · P(T) / P(D)
  (posterior = likelihood × prior / evidence)

NeuroBayes internally uses Bayesian arguments for regularisation.
NeuroBayes automatically makes Bayesian posterior statements.
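Numerically, the theorem is one line once the evidence is expanded over both hypotheses. A minimal sketch with illustrative numbers (function name is an assumption):

```cpp
#include <cmath>
#include <cassert>

// Bayes' theorem: posterior = likelihood * prior / evidence,
// with the evidence P(D) summed over hypothesis T and its complement.
double posterior(double likelihoodT, double prior, double likelihoodNotT) {
    double evidence = likelihoodT * prior + likelihoodNotT * (1.0 - prior);
    return likelihoodT * prior / evidence;   // P(T|D)
}
```

Note that uninformative data (equal likelihoods under both hypotheses) returns exactly the prior, which is the sanity check one should always make.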


28 Teacher code fragment (1)

  #include "NeuroBayesTeacher.hh"

  // create NeuroBayes instance
  NeuroBayesTeacher* nb = NeuroBayesTeacher::Instance();

  const int nvar = 14;          // number of input variables
  nb->NB_DEF_NODE1(nvar+1);     // nodes in input layer
  nb->NB_DEF_NODE2(nvar);       // nodes in hidden layer
  nb->NB_DEF_NODE3(1);          // nodes in output layer
  nb->NB_DEF_TASK("CLA");       // binomial classification
  nb->NB_DEF_ITER(10);          // number of training iterations

  nb->SetOutputFile("BsDsPiKSK_expert.nb");    // expertise file name
  nb->SetRootFile("BsDsPiKSK_expert.root");    // histogram file name

29 Teacher code fragment (2)

  // in training event loop
  nb->SetWeight(1.0);      // set weight of event
  nb->SetTarget(0.0);      // set target: this event is BACKGROUND, else set to 1.

  InputArray[0] = GetValue(back,"BsPi.Pt");              // define input variables
  InputArray[1] = TMath::Abs(GetValue(back,"Bs.D0"));
  ...
  nb->SetNextInput(nvar,InputArray);

  // end of event loop
  nb->TrainNet();          // perform training

Many options exist, but this simple code usually already gives very good results.

30 Expert code fragment

  #include "Expert.hh"
  ...
  Expert* nb = new Expert("../train/BsDsPiKSK_expert.nb",-2);
  ...
  InputArray[0] = GetValue(signal,"BsPi.Pt");
  InputArray[1] = TMath::Abs(GetValue(signal,"Bs.D0"));
  ...
  Netout = nb->nb_expert(InputArray);

31 Input variables ordered by relevance (standard deviations of additional information)


33 NeuroBayes training output (analysis file)
NeuroBayes output distribution: red = signal, black = background.
Signal purity S/(S+B) in bins of NeuroBayes output: if on the diagonal, then P = (NBout+1)/2 is the probability that the event actually is signal. This proves that NeuroBayes is always well calibrated in the training.

34 NeuroBayes training output (analysis file)
Purity vs. signal efficiency for different NeuroBayes output cuts: should be as far in the upper right corner as possible. The lower curve comes from cutting the wrong way round.
Signal efficiency vs. total efficiency when cutting at different NeuroBayes outputs (lift chart): the area between the blue curve and the diagonal should be large. Physical region: white. Right diagonal: events randomly sorted, no individualisation. Left diagonal border: completely correctly sorted, first all signal events, then all background.
Gini index: classification quality measure; the larger, the better.

35 NeuroBayes training output (analysis file)
Correlation matrix of input variables. First row/column: training target.

36 NeuroBayes training output (analysis file)
Most important input variable: significance 78 standard deviations; accepted for the training.
Probability-integral-transformed input variable distribution: signal, background (this is a binary variable!).
Signal purity as a function of the input variable (this case: unordered classes).
Mean-0, width-1 transformation of the signal purity of the transformed input variable.
Purity-efficiency plot of this variable compared to that of the complete NeuroBayes.

37 NeuroBayes training output (analysis file)
Second most important input variable: alone 67 standard deviations, but added after the most important variable is taken into account, only 11 sigma.
Probability-integral-transformed input variable distribution: signal, background (this is a largely continuous variable!).
Signal purity as a function of the input variable (this case: spline fit).
Mean-0, width-1 transformation of the (fitted) signal purity of the input variable.
Purity-efficiency plot of this variable compared to that of the complete NeuroBayes.

38 NeuroBayes training output (analysis file)
39th most important input variable: alone 17 standard deviations, but only 0.6 sigma added after the more significant variables; ignored for the training.
Probability-integral-transformed input variable distribution: signal, background. For 3339 events this input was not available (delta function).
Signal purity as a function of the input variable (this case: spline fit + delta).
Mean-0, width-1 transformation of the (fitted) signal purity of the input variable. Due to the preprocessing, the delta is mapped to 0, not to its purity.

39 NeuroBayes output NB is a linear measure of the Bayesian posterior signal probability:

  P_T(S) = (NB + 1) / 2

Signal-to-background ratio in the training set: r_T = S_T / B_T; in the expert set: r_E = S_E / B_E. If the training was performed with a different S/B than is actually present in the expert data set, one can transform the signal probability:

  P_E(S) = 1 / ( 1 + (1/P_T(S) - 1) · r_T / r_E )
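Both formulas above are direct one-liners in code; a minimal sketch (function names are assumptions for illustration):

```cpp
#include <cmath>
#include <cassert>

// Network output NB in [-1, 1] maps linearly to the training-set
// signal probability P_T(S) = (NB + 1) / 2.
double probabilityFromOutput(double nb) { return (nb + 1.0) / 2.0; }

// Transform the probability to a different signal-to-background ratio:
// r_T = S_T/B_T in the training set, r_E = S_E/B_E in the expert set.
//   P_E(S) = 1 / (1 + (1/P_T(S) - 1) * r_T / r_E)
double transformProbability(double pT, double rT, double rE) {
    return 1.0 / (1.0 + (1.0 / pT - 1.0) * rT / rE);
}
```

Sanity checks: with r_T = r_E the transformation is the identity, and a higher S/B in the expert set (r_E > r_T) pushes the probability up, as it should.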

40 Hunting data-MC disagreements with NeuroBayes
1. Use data as signal, Monte Carlo as background.
2. Train a NeuroBayes classification network. If the MC model describes the data well, nothing should be learned!
3. Look at the most significant variables of this training. These give a hint where the MC is not good; could e.g. be in the pt spectrum or the invariant mass spectrum (width).
4. Decide whether the effects are due to physics modelling or detector resolution/efficiency.
5. Reweight the MC by w = (1 + NBout) / (1 - NBout), or produce a more realistic MC → go to 1.
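The reweighting in step 5 can be sketched as follows (the function name is an assumption; the formula is the slide's):

```cpp
#include <cmath>
#include <cassert>

// Step 5 of the recipe: reweight each Monte Carlo event by
// w = (1 + NBout) / (1 - NBout).  Since P = (NBout+1)/2 is the
// posterior probability to be data, this is the data/MC odds ratio
// P / (1 - P) implied by the network output for that event.
double mcWeight(double nbOut) {
    return (1.0 + nbOut) / (1.0 - nbOut);
}
```

At NBout = 0 (network cannot separate data and MC) the weight is 1, i.e. a perfectly modelled event is left untouched.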

41 Scenario: MC for signal available, but not for backgrounds
Idea: take the background from sidebands in data.
Check that the network cannot learn the mass (by training left sideband vs. right sideband: remove input variables until this net cannot learn anything more).
Works well if the data-MC agreement is quite good.

42 Scenario: neither reliable signal nor background Monte Carlo available
Idea: training with background subtraction.
Signal: peak region with weight 1, sideband region with weight -1 (statistical subtraction).
Background: sideband region with weight 1.
Works very well! Also for Y(2S) and Y(3S), although trained just on Y(1S).

43 Example for data-only training (on the first resonance)

44 NeuroBayes B_s → J/ψ Φ selection without MC (2-stage background subtraction training process)
All data → soft preselection, input to first NeuroBayes training → soft cut on net 1, input to second NeuroBayes training → cut on net 2.

45 Exploiting S/B information more efficiently: the sPlot method
Fit signal and background in one data distribution (e.g. mass). Compute sPlot weights w_S for signal (may be <0 or >1) as a function of mass from the fit.
Train the NeuroBayes network with each event treated both as signal with weight w_S and as background with weight 1-w_S.
A soft cut on the output enriches S/B considerably. Make sure the network cannot learn the mass! (Paper in preparation.)
[Figure: Mass(p K+) [GeV/c^2] distribution, candidates per 0.5 MeV/c^2, CDF Run II preliminary, with signal (S) and background (B) components.]
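The double-counting trick described above can be sketched as building a weighted training sample in which every candidate enters twice. The event structure and function names are hypothetical, for illustration only:

```cpp
#include <vector>
#include <cmath>
#include <cassert>

// One weighted training example: input vector, target (1 = signal,
// 0 = background) and event weight.
struct TrainingEvent {
    std::vector<double> inputs;
    double target;
    double weight;
};

// Each candidate enters the training twice: once as signal with its
// sPlot weight w_S, once as background with weight 1 - w_S.
std::vector<TrainingEvent> makeSPlotSample(
        const std::vector<std::vector<double>>& candidates,
        const std::vector<double>& wSignal) {
    std::vector<TrainingEvent> sample;
    for (size_t i = 0; i < candidates.size(); ++i) {
        sample.push_back({candidates[i], 1.0, wSignal[i]});        // signal copy
        sample.push_back({candidates[i], 0.0, 1.0 - wSignal[i]});  // background copy
    }
    return sample;
}
```

This relies on the trainer handling arbitrary (even negative) weights, which the earlier slide lists among the NeuroBayes features.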

46 More than 60 diploma and Ph.D. theses and many publications from the experiments DELPHI, CDF II, AMS, CMS and Belle have used NeuroBayes or its predecessors very successfully. ATLAS and LHCb applications are also starting.
Talks about NeuroBayes and applications can be found at www-ekp.physik.uni-karlsruhe.de/~feindt → Forschung

47 Some NeuroBayes highlights:
- Bs oscillations
- Discovery of excited Bs states
- X(3872) properties
- Single top quark production discovery
- High-mass Higgs exclusion

48 Just a few examples: NeuroBayes soft electron identification for CDF II
Thesis U. Kerzel, on the basis of the Soft Electron Collection: much more efficient than cut selection or JetNet with the same inputs. Only after clever preprocessing by hand and careful choice of the learning parameters could these also be as good as NeuroBayes.

49 Just a few examples: NeuroBayes selection

50 Just a few examples: first observation of B_s1 and most precise measurement of B_s2*, selection using NeuroBayes

51 The Belle B factory ran very successfully. KIT joined the Belle Collaboration in 2008 and introduced NeuroBayes:
- Continuum subtraction
- Flavour tagging
- Particle ID
- S/B selection optimisation
- Full B reconstruction
NeuroBayes enhances the efficiency of the flavour tagging calibration reaction B → D* l ν by 71% at the same purity.

52 Physics at a B factory (asymmetric e+e- collider at the Y(4S))
The Y(4S) decays into 2 B mesons, almost at rest in the CMS. The decay products of the 2 Bs are not easily distinguishable, and there are many 1000 exclusive decay chains per B.
Reconstruct as many Bs as possible exclusively (tag side). Then all other reconstructed particles belong to the other B (signal side), and the kinematics of the signal side are uniquely determined, which allows missing mass reconstruction.

53 Hierarchical Full Reconstruction

54 Example D0 signals of very different purity with/without NB cut

55 Optimal combination of decay channels of very different purity using NeuroBayes outputs. Precuts chosen such that the number of additional background events per additional signal event is constant.

56 Full reconstruction of B mesons in 1042 decay chains. Hierarchical probabilistic reconstruction system with 71 NeuroBayes networks, fully automatic (NeuroBayes Factory). B+ efficiency increased by ~104% at the same (total) purity compared to the classical algorithm (corresponds to many years of additional data taking).

57 Alternatively one can make the sample cleaner, e.g. to the same background level: B+ efficiency increased by 88% at the same background level. (Real data plots, about 8% of the full data set; signal and background, NeuroBayes vs. classical algorithm.)

58 Alternatively one can make the sample much cleaner, e.g. at the same signal efficiency as the classical algorithm: B+ background suppression by a factor of 17! (Background and signal, NeuroBayes vs. classical algorithm.)

59 First application test on real data: select B0 → D*+ l ν on the signal side, fully reconstructed B on the tag side. Calculate the missing mass squared on the signal side. A peak at 0 from the missing neutrino is expected and seen. Efficiency more than doubled with the new algorithm!

60 Flexibility: working with NeuroBayes allows a continuous choice of the working point in purity-efficiency. NIM paper in preparation.

61 Customers & projects
Very successful projects for, among others:
- BGV and VKB (car insurances)
- Lupus Alpha (asset management)
- Otto Versand (mail order business)
- Thyssen Krupp (steel industry)
- AXA and Central (health insurances)
- dm-drogerie markt (drugstore chain)
- Libri (book wholesale)
... expanding

62 Individual risk prognoses for car insurances: accident probability, cost probability distribution, large damage prognosis, contract cancellation probability. Very successful.

63 Correlation among input variables, target colour coded: Ramler II plot

64 Contract cancellations in a large financial institute. Real cancellation rate as a function of the cancellation rate predicted by NeuroBayes: very good performance within statistical errors.


66 Near future: turnover predictions for chain stores
1. Time series modelling
2. Correction and error estimate using NeuroBayes

67 Turnover prognosis for mail order business

68 Typical test results (NeuroBayes always very successful). Colour codes: better / same / worse than classical methods. Training seasons and test seasons.


70 Prognosis of individual health costs
Pilot project for a large private health insurance: prognosis of the costs in the following year for each person insured, with confidence intervals. 4 years of training, test on the following year.
Results: probability density for each customer/tariff combination (example: customer N, male, 44, tariff XYZ123, insured for about 17 years). Very good test results! Has potential for a real and objective cost reduction in health management.

71 Prognosis of financial markets (VDI-Nachrichten)
NeuroBayes-based risk-averse, market-neutral funds for institutional investors. Fully automatic trading (…: 20 Mio; since 2010: 130 Mio). Lupus Alpha NeuroBayes Short Term Trading Fund. (Börsenzeitung)

72 Licenses
NeuroBayes is commercial software; all rights belong to Phi-T GmbH. It is not open source. CERN, Fermilab and KEK have licenses for use in high energy physics research.
The Expert runs without a license (it can run on the grid!); a license is only needed for training networks. For purchasing additional teacher licenses (for computers outside CERN), please contact Phi-T.
Bindings to many programming languages exist. A code generator for easy usage exists.

73 Prognosis of sports events from historical data: NeuroNetz. Results: probabilities for home - tie - guest.

74 Documentation
Basics:
- M. Feindt, A Neural Bayesian Estimator for Conditional Probability Densities, e-preprint archive physics
- M. Feindt, U. Kerzel, The NeuroBayes Neural Network Package, NIM A 559 (2006) 190
Web sites:
- Research and commercial companies' websites (German and English)
- www-ekp.physik.uni-karlsruhe.de/~feindt (some NeuroBayes talks can be found here under → Forschung)
- English site on physics results with NeuroBayes, all diploma and PhD theses using NeuroBayes, and a discussion forum and FAQ for usage in physics. Please use this and also post your results there!

75 The <phi-t> mouse game, or: even your "free will" is predictable

Spin-Off from Physics Research to Business

Spin-Off from Physics Research to Business From Delphi to Phi-T Spin-Off from Physics Research to Business Prof. Dr. Michael Feindt KCETA - Centrum für Elementarteilchen- und Astroteilchenphysik IEKP, Universität Karlsruhe, Karlsruhe Institute

More information

Neural networks in data analysis

Neural networks in data analysis ISAPP Summer Institute 2009 Neural networks in data analysis Michal Kreps ISAPP Summer Institute 2009 M. Kreps, KIT Neural networks in data analysis p. 1/38 Outline What are the neural networks 1 Basic

More information

The Best from Two Worlds The Blue Yonder View on Data Analytics

The Best from Two Worlds The Blue Yonder View on Data Analytics The Best from Two Worlds The Blue Yonder View on Data Analytics Prof. Dr. Michael Feindt IEKP, Karlsruhe Institute of Technology Founder, Phi-T GmbH Founder & Chief Scientific Advisor, Blue Yonder GmbH

More information

NeuroBayes Big Data Predictive Analytics for High Energy Physics & "Real Life

NeuroBayes Big Data Predictive Analytics for High Energy Physics & Real Life NeuroBayes Big Data Predictive Analytics for High Energy Physics & "Real Life Prof. Dr. Michael Feindt Karlsruhe Institute of Technology Founder & Chief Scientific Advisor, Blue Yonder GmbH&Co KG Blue

More information

Blue Yonder Research Papers

Blue Yonder Research Papers Blue Yonder Research Papers Why cutting edge technology matters for Blue Yonder solutions Prof. Dr. Michael Feindt, Chief Scientific Advisor Abstract This article gives an overview of the stack of predictive

More information

Example: Credit card default, we may be more interested in predicting the probabilty of a default than classifying individuals as default or not.

Example: Credit card default, we may be more interested in predicting the probabilty of a default than classifying individuals as default or not. Statistical Learning: Chapter 4 Classification 4.1 Introduction Supervised learning with a categorical (Qualitative) response Notation: - Feature vector X, - qualitative response Y, taking values in C

More information

Data Mining Algorithms Part 1. Dejan Sarka

Data Mining Algorithms Part 1. Dejan Sarka Data Mining Algorithms Part 1 Dejan Sarka Join the conversation on Twitter: @DevWeek #DW2015 Instructor Bio Dejan Sarka ([email protected]) 30 years of experience SQL Server MVP, MCT, 13 books 7+ courses

More information

Top rediscovery at ATLAS and CMS

Top rediscovery at ATLAS and CMS Top rediscovery at ATLAS and CMS on behalf of ATLAS and CMS collaborations CNRS/IN2P3 & UJF/ENSPG, LPSC, Grenoble, France E-mail: [email protected] We describe the plans and strategies of the

More information

Measurement of the Mass of the Top Quark in the l+ Jets Channel Using the Matrix Element Method

Measurement of the Mass of the Top Quark in the l+ Jets Channel Using the Matrix Element Method Measurement of the Mass of the Top Quark in the l+ Jets Channel Using the Matrix Element Method Carlos Garcia University of Rochester For the DØ Collaboration APS Meeting 2007 Outline Introduction Top

More information

Theory versus Experiment. Prof. Jorgen D Hondt Vrije Universiteit Brussel [email protected]

Theory versus Experiment. Prof. Jorgen D Hondt Vrije Universiteit Brussel jodhondt@vub.ac.be Theory versus Experiment Prof. Jorgen D Hondt Vrije Universiteit Brussel [email protected] Theory versus Experiment Pag. 2 Dangerous cocktail!!! Pag. 3 The basics in these lectures Part1 : Theory meets

More information

STA 4273H: Statistical Machine Learning

STA 4273H: Statistical Machine Learning STA 4273H: Statistical Machine Learning Russ Salakhutdinov Department of Statistics! [email protected]! http://www.cs.toronto.edu/~rsalakhu/ Lecture 6 Three Approaches to Classification Construct

More information

Study of the B D* ℓ ν with the Partial Reconstruction Technique

Study of the B D* ℓ ν with the Partial Reconstruction Technique Study of the B D* ℓ ν with the Partial Reconstruction Technique + University of Ferrara / INFN Ferrara Dottorato di Ricerca in Fisica Ciclo XVII Mirco Andreotti 4 March 25 Measurement of B(B D*ℓν) from

More information

How To Teach Physics At The Lhc

How To Teach Physics At The Lhc LHC discoveries and Particle Physics Concepts for Education Farid Ould- Saada, University of Oslo On behalf of IPPOG EPS- HEP, Vienna, 25.07.2015 A successful program LHC data are successfully deployed

More information

Bounding the Higgs width at the LHC

Bounding the Higgs width at the LHC Bounding the Higgs width at the LHC Higgs XSWG workshop, June 2014 John Campbell, Fermilab with K. Ellis, C. Williams 1107.5569, 1311.3589, 1312.1628 Reminder of the method This is the essence of the original

More information

Gamma Distribution Fitting

Gamma Distribution Fitting Chapter 552 Gamma Distribution Fitting Introduction This module fits the gamma probability distributions to a complete or censored set of individual or grouped data values. It outputs various statistics

More information

Java Modules for Time Series Analysis

Java Modules for Time Series Analysis Java Modules for Time Series Analysis Agenda Clustering Non-normal distributions Multifactor modeling Implied ratings Time series prediction 1. Clustering + Cluster 1 Synthetic Clustering + Time series

More information

Lecture 9: Introduction to Pattern Analysis

Lecture 9: Introduction to Pattern Analysis Lecture 9: Introduction to Pattern Analysis g Features, patterns and classifiers g Components of a PR system g An example g Probability definitions g Bayes Theorem g Gaussian densities Features, patterns

More information

Introduction to Machine Learning and Data Mining. Prof. Dr. Igor Trajkovski [email protected]

Introduction to Machine Learning and Data Mining. Prof. Dr. Igor Trajkovski trajkovski@nyus.edu.mk Introduction to Machine Learning and Data Mining Prof. Dr. Igor Trakovski [email protected] Neural Networks 2 Neural Networks Analogy to biological neural systems, the most robust learning systems

More information

Data Mining mit der JMSL Numerical Library for Java Applications

Data Mining mit der JMSL Numerical Library for Java Applications Data Mining mit der JMSL Numerical Library for Java Applications Stefan Sineux 8. Java Forum Stuttgart 07.07.2005 Agenda Visual Numerics JMSL TM Numerical Library Neuronale Netze (Hintergrund) Demos Neuronale

More information

An Introduction to Machine Learning

An Introduction to Machine Learning An Introduction to Machine Learning L5: Novelty Detection and Regression Alexander J. Smola Statistical Machine Learning Program Canberra, ACT 0200 Australia [email protected] Tata Institute, Pune,

More information

Modelling, Extraction and Description of Intrinsic Cues of High Resolution Satellite Images: Independent Component Analysis based approaches

Modelling, Extraction and Description of Intrinsic Cues of High Resolution Satellite Images: Independent Component Analysis based approaches Modelling, Extraction and Description of Intrinsic Cues of High Resolution Satellite Images: Independent Component Analysis based approaches PhD Thesis by Payam Birjandi Director: Prof. Mihai Datcu Problematic

More information

Calculating VaR. Capital Market Risk Advisors CMRA

Calculating VaR. Capital Market Risk Advisors CMRA Calculating VaR Capital Market Risk Advisors How is VAR Calculated? Sensitivity Estimate Models - use sensitivity factors such as duration to estimate the change in value of the portfolio to changes in

More information

Bayesian Machine Learning (ML): Modeling And Inference in Big Data. Zhuhua Cai Google, Rice University [email protected]

Bayesian Machine Learning (ML): Modeling And Inference in Big Data. Zhuhua Cai Google, Rice University caizhua@gmail.com Bayesian Machine Learning (ML): Modeling And Inference in Big Data Zhuhua Cai Google Rice University [email protected] 1 Syllabus Bayesian ML Concepts (Today) Bayesian ML on MapReduce (Next morning) Bayesian

More information

MACHINE LEARNING IN HIGH ENERGY PHYSICS

MACHINE LEARNING IN HIGH ENERGY PHYSICS MACHINE LEARNING IN HIGH ENERGY PHYSICS LECTURE #1 Alex Rogozhnikov, 2015 INTRO NOTES 4 days two lectures, two practice seminars every day this is introductory track to machine learning kaggle competition!

More information

Monday Morning Data Mining

Monday Morning Data Mining Monday Morning Data Mining Tim Ruhe Statistische Methoden der Datenanalyse Outline: - data mining - IceCube - Data mining in IceCube Computer Scientists are different... Fakultät Physik Fakultät Physik

More information

4. Continuous Random Variables, the Pareto and Normal Distributions

4. Continuous Random Variables, the Pareto and Normal Distributions 4. Continuous Random Variables, the Pareto and Normal Distributions A continuous random variable X can take any value in a given range (e.g. height, weight, age). The distribution of a continuous random

More information

Introduction to Support Vector Machines. Colin Campbell, Bristol University

Introduction to Support Vector Machines. Colin Campbell, Bristol University Introduction to Support Vector Machines Colin Campbell, Bristol University 1 Outline of talk. Part 1. An Introduction to SVMs 1.1. SVMs for binary classification. 1.2. Soft margins and multi-class classification.

More information

Analecta Vol. 8, No. 2 ISSN 2064-7964

Analecta Vol. 8, No. 2 ISSN 2064-7964 EXPERIMENTAL APPLICATIONS OF ARTIFICIAL NEURAL NETWORKS IN ENGINEERING PROCESSING SYSTEM S. Dadvandipour Institute of Information Engineering, University of Miskolc, Egyetemváros, 3515, Miskolc, Hungary,

More information

Measurement of Neutralino Mass Differences with CMS in Dilepton Final States at the Benchmark Point LM9
