BACK CALCULATION PROCEDURE FOR THE STIFFNESS MODULUS OF CEMENT TREATED BASE LAYERS USING COMPUTATIONAL INTELLIGENCE BASED MODELS


Maryam Miradi, André A. A. Molenaar*, Martin F. C. van de Ven
All members of the Faculty of Civil Engineering and Geosciences, Delft University of Technology, P.O. Box 5048, 2600 GA Delft, the Netherlands
* corresponding author, tel: , fax:
Word count: figures + 3 tables = 2500 equivalent words. Total number of words: 7458

ABSTRACT
In the Netherlands, there is a need for a procedure that allows accurate estimation of the stiffness of cement bound base courses using deflection measurements, avoiding the need to take a large number of cores. Such a procedure is needed to assure clients that the pavement is built by the contractor as agreed upon in the contract. This paper describes the development of such a procedure. The procedure was developed by applying Computational Intelligence (CI) techniques, in particular Artificial Neural Networks (ANN) and Support Vector Machines (SVM), to a data set consisting of over 2000 deflection profiles calculated for a large number of three-layer pavement structures using the BISAR PC software. The ANN and SVM models use falling weight deflectometer (FWD) deflection bowl parameters and the total pavement thickness as input. The total pavement thickness can be determined with radar measurements. The model proved capable of predicting the cement treated base course modulus with a high degree of accuracy and is a quick and powerful tool for scanning the stiffness of cement bound base courses.

INTRODUCTION
Design, build, finance and maintain (DBFM) contracts, and especially DB contracts, are gaining popularity in the Netherlands. In these contracts it is agreed that the contractor guarantees that the pavement condition stays above a certain minimum acceptance level throughout the contract period; in the Netherlands this period is normally 7 years. These contracts give the contractors a lot of freedom to select the types of materials and structures to be used, but on the other hand they also imply that significant risks are taken by both the contractor and the client. One of the risks for the client is that the performance of the pavement after the contractual period might be far less than anticipated. Because of the materials used, the structure as built might show undesired types of failure which only show up after a significant period of time and can result in significant maintenance needs. An example of such a problem is the use of certain types of slag, which might result in unevenness because of slow chemical reactions taking place in the slag. Another example is leaching of certain chemicals, resulting in environmental problems. Since in DB and DBFM contracts the client has, in principle, no say about the materials to be used or about quality control in terms of gradation etc., clients are often concerned about the structural quality of the pavement as constructed. A question often asked is whether the pavement really will have the performance predicted by the contractor or, in other words, whether the pavement layers really have the stiffness and thickness assumed by the contractor in his design analyses. Because of less positive experiences, clients are especially concerned when a cement treated base course is proposed by the contractor. Such base courses, however, are gaining popularity because cement stabilization allows a wide variety of recycled materials to be used.
Such materials might be mixtures of recycled concrete and masonry, harbor dredging sludge, all kinds of slag, etc. Lower quality materials, which in principle are not suited for use in base courses, can relatively easily be upgraded to higher quality material usable for base courses, but the question always is what the long term performance will be. To safeguard against this, a protocol has been developed which requires the contractor to prove that the thickness and stiffness of the pavement layers are according to the design he proposed. If not, the contractor is given a penalty and/or has to upgrade the structure such that it will give the pavement life agreed upon in the contract. An important part of that protocol is deflection testing by means of FWD testing and coring to determine the thickness of the layers. Using these values, the stiffness of the various layers is back calculated using a back analysis program based on multilayer linear elastic theory. The protocol prescribes that in the back calculation analysis a three layer system is assumed, consisting of the total asphalt thickness, the base course and the subgrade. A three layer system has to be assumed because the majority of the designs are based on three layer analyses. One of the disadvantages of the protocol is the need to take quite a number of cores for layer thickness evaluation. Experience has shown that the protocol might give undesired results. Especially when the second layer, the base course, has a higher stiffness and a greater thickness than the top layer, the back calculated stiffness values are not always realistic. This is especially the case when the top layer is relatively thin (less than 75 mm). In those cases it is quite often observed that the stiffness of the top layer is overestimated while the stiffness of the base layer is underestimated.
If, because of this, the back calculated stiffness of the base course is lower than assumed in the design analyses, the contractor is penalized because he didn't build the pavement as designed. It is clear that in this case the penalty is given wrongfully, leading to all kinds of unnecessary disputes and even court cases. The situation described above has resulted in a need for a procedure that allows a rapid and accurate evaluation of the base course stiffness without the need to take a large number of cores. Such a procedure may be provided by artificial neural networks or support vector machines, two powerful modeling tools within the field of Computational Intelligence (CI), a sub-field of Artificial Intelligence (AI). AI is the science of making computers do things that require intelligence if done by human beings (1). Since it was believed that such a procedure could be developed using artificial intelligence techniques like ANN and SVM, these techniques were applied in this study to a database consisting of deflection profiles calculated for a large number of structures. The remainder of this paper is organized as follows. First, a short introduction is given on ANN and SVM. It is then discussed how the data were simulated using the BISAR PC software. Next, the modeling results using ANN regression, ANN classification, SVM classification, and SVM regression are discussed. After

that, the validation of the ANN regression models is tested using a new database. The paper ends with conclusions.

ARTIFICIAL NEURAL NETWORKS AND SUPPORT VECTOR MACHINES
Artificial neural networks (ANNs) are data processing systems. They are mathematical models of human neural systems that try to mimic human intelligence. ANNs have a network structure similar to that of the human brain, consisting of many neurons connected to each other. These neurons are non-linear calculation units, and each connection between neurons has a weight. For modeling, the input data records are fed into the network. During the modeling process, the connection weights are updated in each iteration, adapting the network to the input data. The final model is capable of predicting the output accurately even for unseen new data. This ability is called generalization, meaning that through the modeling process the ANN finds the general pattern (hidden relation) between input and output using only the data, without the need for any prior knowledge about the problem. This important characteristic makes ANNs suitable modeling techniques for complicated problems about which little or no prior knowledge is available. For a detailed explanation of how ANNs work, the reader is referred to Haykin (2). Traditional neural network approaches have suffered difficulties with generalization, producing models that can over-fit the data. This is a consequence of the optimization algorithms used for parameter selection and the statistical measures used to select the best model. These problems have largely been solved by another, more recent CI-based modeling technique: support vector machines (SVM). The foundations of SVM were developed by Vapnik (3), and SVMs are gaining popularity due to many attractive features and promising empirical performance.
Support vector machines are intelligent systems that use a hypothesis space of linear functions in a multidimensional feature space, trained with a learning algorithm from optimization theory that implements a learning bias derived from statistical learning theory. This learning strategy is a very powerful technique that, in the few years since its introduction, has already outperformed most other systems in a wide variety of applications (4). The formulation embodies the Structural Risk Minimization (SRM) principle, which has been shown to be superior (5) to the traditional Empirical Risk Minimization (ERM) principle employed by conventional neural networks. SRM minimizes an upper bound on the expected risk, whereas ERM minimizes the error on the training data. It is this difference which equips SVM with a greater ability to generalize, which is the goal in statistical learning. SVMs were developed to solve classification problems, but they have since been extended to the domain of regression problems (6). A further explanation of SVM can be found in references (3, 5, 6).

DATA BASE
Table 1 shows the structures for which the deflection bowl was calculated. The FWD load used in the analyses was a 50 kN load, and the deflections were calculated at distances of 0, 300, 600, 900, 1200, 1500 and 1800 mm from the load centre.

TABLE 1 Overview of the combinations of the thickness and stiffness of the pavement layers for which the deflection bowls were calculated.

Variable | Value | Unit
Number of layers | 3 | -
Elastic modulus of asphalt layer (E1) | 4000, 6000, 8000, | MPa
Elastic modulus of cement treated base (E2) | 1500, 3000, 4500, 6000, 7500, 9000 | MPa
Elastic modulus of subgrade (E3) | 50, 100, 150, 200 | MPa
Poisson's ratio of asphalt layer (ν1) | | -
Poisson's ratio of cement treated base layer (ν2) | | -
Poisson's ratio of subgrade layer (ν3) | | -
Asphalt layer thickness (h1) | 100, 150, 200, 250, 300 | mm
Cement treated base thickness (h2) | 150, 200, 250, 300, 350, 400 | mm
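The size of the factorial design in Table 1 can be checked with a short sketch. With 6 levels of E2, 4 of E3, 5 of h1 and 6 of h2, the reported total of 2880 structures implies 4 levels of E1 (the E1 row of the table lists three values plus one more). The level counts below are read off the table, not taken from any code accompanying the paper.

```python
# Sanity check of the factorial design in Table 1: the number of structures
# equals the product of the number of levels of each varied parameter.
from itertools import product

levels = {
    "E1": 4,  # elastic modulus of asphalt layer (MPa)
    "E2": 6,  # elastic modulus of cement treated base (MPa)
    "E3": 4,  # elastic modulus of subgrade (MPa)
    "h1": 5,  # asphalt layer thickness (mm)
    "h2": 6,  # base thickness (mm)
}
grid = list(product(*(range(n) for n in levels.values())))
print(len(grid))  # -> 2880
```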

In total the deflection bowls of 2880 structures were calculated. Figure 1 gives an example of how similar the deflection profiles of two different structures can be, indicating that back calculation of stiffness moduli using standard routines might not always be an easy task. The pavement with a base stiffness of 4500 MPa had a total thickness of 700 mm (the total thickness is the thickness of the asphalt layer plus the thickness of the base layer), while the pavement with a base course stiffness of 6000 MPa had a total thickness of 650 mm.

FIGURE 1 Two different structures can have very similar deflection profiles.

CI MODELS TO PREDICT THE STIFFNESS OF CEMENT TREATED BASE COURSES
In generalized form, the dependency of the base stiffness on the deflection bowl and the total thickness of the pavement can be written as:

Eb = f(ht, d0, d300, d600, d900, d1200, d1500, d1800)   (1)

Where:
Eb = elastic modulus of the cement treated base,
dx = deflection at x mm from the loading centre,
ht = the total pavement thickness (thickness of asphalt layer + thickness of base layer).

However, to improve the quality of model performance, some pre-investigation was done to decrease the number of input parameters. These investigations resulted in a decrease from eight input parameters in Equation (1) to five in Equation (2):

Eb = f(d0, SCI, BDI, BCI, ht)   (2)

Where:
SCI = d0 - d300,
BDI = d300 - d600,
BCI = d600 - d900.
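As an illustration, the reduced input vector of Equation (2) can be assembled from a measured deflection bowl in a few lines. This is a sketch, not the authors' implementation, and the example deflection values are invented.

```python
# Build the five inputs of Equation (2) from a deflection bowl.
# d holds deflections at 0, 300, 600, 900, 1200, 1500 and 1800 mm from the
# load centre; h_t is the total pavement thickness (asphalt + base, in mm).
def model_inputs(d, h_t):
    d0, d300, d600, d900 = d[0], d[1], d[2], d[3]
    sci = d0 - d300    # Surface Curvature Index
    bdi = d300 - d600  # Base Damage Index
    bci = d600 - d900  # Base Curvature Index
    return [d0, sci, bdi, bci, h_t]

# Hypothetical bowl (micrometres) for a 400 mm thick pavement:
print(model_inputs([520, 410, 320, 250, 195, 155, 125], 400))
# -> [520, 110, 90, 70, 400]
```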

Given the discrete nature of Eb (see Table 1), both classification and regression modeling techniques were possible for the back-calculation of Eb. In this section the application of ANN and SVM to predict (back-calculate) Eb using both regression and classification is discussed.

Artificial neural network classification models
In an ANN model, the process of fitting a model to the data is called training, because the model learns from the observations (data points) how the input parameters and the output parameter relate. The first stage of modeling was to partition the dataset (2880 data points) into a training set (1999 data points), a validation set (441 data points), and a test set (440 data points). The training set is used to train (fit) the model; the validation set is used to validate the training process and avoid over-fitting, which means that the model starts to fit each single data point instead of finding a general pattern in the data. Finally, the test set is used to test the performance of the model after training. One of the important parameters in ANN modeling is the activation function. Earlier experiments by this team have shown that the hyperbolic tangent activation function has excellent performance (7, 8). Trying different activation functions for this problem showed again that the hyperbolic tangent gives the highest model performance. Determining the number of hidden layers and hidden neurons is a crucial step in ANN modeling. According to the universal approximation theorem (9), one hidden layer is enough to model almost all problems; the detailed mathematical description of this theorem is given by Haykin (2). Concerning the optimal number of hidden neurons, an approach explained by Haykin (2) was used. Following this approach, the network was trained with different numbers of hidden neurons in one hidden layer (from 1 to 30) and was tested on the validation set each time.
The number of hidden neurons which results in the lowest validation error is the optimal number of hidden neurons. This is shown in Figure 2. For this model, the optimal number of hidden neurons was 10.

FIGURE 2 Finding the optimal number of hidden neurons (training and validation error versus the number of hidden neurons).

One step before training the model is to determine the optimal training algorithm. To do so, many types of training algorithms were tried, including Quasi-Newton backpropagation, conjugate gradient backpropagation, Levenberg-Marquardt backpropagation, one-step secant backpropagation, random order incremental update, and scaled conjugate gradient backpropagation. Quasi-Newton backpropagation and Levenberg-Marquardt backpropagation showed the lowest error. The classification with Quasi-Newton backpropagation was performed using the Alyuda NeuroIntelligence software. Quasi-Newton is one of the most popular algorithms in nonlinear optimization, with a reputation for fast convergence. It works by exploiting the observation that, on a quadratic (i.e. parabolic) error surface, one can step directly to the minimum using the Newton step, a calculation involving the Hessian matrix (the matrix of second partial derivatives of the error surface).
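The hidden-neuron search described above can be sketched as follows: train a one-hidden-layer network for each candidate size and keep the size with the lowest validation error. Here `train_and_validate` is a hypothetical stand-in for the actual training routine (Quasi-Newton backpropagation in the paper), and the toy error curve merely mimics the U-shape of Figure 2.

```python
# Sweep the number of hidden neurons and select the one with the lowest
# validation error, as in Haykin's approach used in the paper.
def select_hidden_neurons(train_and_validate, max_neurons=30):
    errors = {n: train_and_validate(n) for n in range(1, max_neurons + 1)}
    best = min(errors, key=errors.get)  # lowest validation error wins
    return best, errors[best]

# Illustration with an invented U-shaped validation error curve whose
# minimum lies at 10 neurons, matching the result reported above:
toy_error = lambda n: (n - 10) ** 2 / 100 + 0.25
best, err = select_hidden_neurons(toy_error)
print(best)  # -> 10
```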

Artificial neural network-based prediction of the stiffness of the cement treated base course
The network was constructed with one hidden layer containing 10 hidden neurons, and the error function of the output was cross entropy. The Quasi-Newton algorithm converged after 108 iterations; the correct classification rate (CCR) on the training set was 84%, while 79% of the validation set was classified correctly. Figure 3 shows the relative importance of the input variables after the model was trained. The figure shows that for ANN classification, the total thickness (h1 + h2) and BDI (BDI = d300 - d600, where dx is the deflection measured at a distance of x mm from the load centre) are the most influential parameters.
The question might arise why the total pavement thickness was used as an input parameter instead of the thicknesses of the individual layers. The reason is as follows. As mentioned before, one of the objectives was to develop a method that needs as few cores as possible. This can be achieved when the layer thickness is estimated by means of Ground Penetrating Radar (GPR) measurements; in that case only a limited number of cores is needed for calibration purposes. The total thickness was taken as the explanatory variable because it is believed that this thickness can be estimated with a higher degree of accuracy than the thicknesses of the individual layers. This is because, on the radar images, the contrast between the cement treated base course and the underlying subgrade, which in the Netherlands almost always consists of sand, is greater than the contrast between the asphalt layer and the cement treated base course. The trained model was then applied to the test set. The correct classification rate on the test set appeared to be 83%. The result of the prediction using the ANN classifier is presented by means of the confusion matrix shown in Table 2.

FIGURE 3 Relative input importance of the parameters used in the model.

TABLE 2 Confusion matrix for the test set (columns: predicted output; rows: actual output).

A confusion matrix is used for checking the accuracy of a classification. Each column of the matrix represents the predicted output values, while each row represents the actual output values (taken from the dataset). Output values are referred to as output classes because in classification each discrete output value is called a class. One benefit of a confusion matrix is that it is easy to see whether the classifier is confusing two classes. Table 2 shows, for example, that of the 77 data points with an actual output of 1500, 1 has been predicted as 3000 and 13 as some other class. The table also shows that the ANN predicts class 6000 much better than the other classes. For class 4500, 16 data points were predicted as belonging to the non-neighboring class 1500 and 10 data points to a neighboring class. Therefore class 4500, with a total of 26 misclassified data points, is the worst class. In summary, the ANN predicts 82% of class 1500, 80% of class 3000, 64% of class 4500, 97% of class 6000, 88% of class 7500, and 92% of class 9000 correctly.
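The per-class percentages quoted above are simply the row-wise accuracies of the confusion matrix: correct predictions on the diagonal divided by the row total. A minimal sketch follows; the 3x3 matrix is invented for illustration and is not Table 2.

```python
# Per-class accuracy from a confusion matrix (rows: actual, columns: predicted).
def per_class_accuracy(cm):
    # Diagonal entry cm[i][i] counts the correctly classified points of class i.
    return [row[i] / sum(row) for i, row in enumerate(cm)]

cm = [[8, 1, 1],
      [2, 7, 1],
      [0, 1, 9]]
print(per_class_accuracy(cm))  # -> [0.8, 0.7, 0.9]
```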

Artificial neural network regression models
Using the regression capability of ANN, a regression ANN model was developed. Again, the dataset (2880 data points) was partitioned into a training set (1999 data points), a validation set (441 data points), and a test set (440 data points). Experiments with many different training algorithms led to the limited-memory Quasi-Newton backpropagation algorithm. The optimal architecture for the ANN model was one hidden layer with 17 hidden neurons. The root mean square errors (RMSE) of the training and validation sets were 251 and 277, respectively. The RMSE is determined by calculating the differences between the predicted output and the actual output, squaring them, summing these squared values, dividing the sum by the number of data points, and finally taking the square root. The trained model was then tested with the test set; the RMSE of the test set amounted to 258. Figure 4 shows the scatter plot of the comparison between the actual output (from the dataset) and the output predicted by the ANN for the test set. On request a CD containing the program can be made available.

FIGURE 4 Scatter plot of the ANN regression model.

Road engineering experts rated the outcome of the ANN modeling as not good enough. First of all, the scatter in the predicted Eb (Figure 4) was considered too large, and secondly the confusion matrix showed too many wrong classifications. This implies that there are too many cases where the base modulus Eb is predicted either too high (which is beneficial to the contractor) or too low, which is bad for the contractor because it implies that the structure is not approved although it fulfills the requirements.

Support vector machines
Classification models. Because the results obtained by means of ANN were not considered accurate enough, another AI technique, Support Vector Machines (SVM), was applied to the entire dataset.
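The RMSE computation described above can be written out in a few lines. This is a sketch of the error measure only; the actual and predicted moduli below are hypothetical values in MPa, not results from the paper.

```python
import math

# RMSE: square the prediction errors, average them, take the square root.
def rmse(actual, predicted):
    sq = [(a - p) ** 2 for a, p in zip(actual, predicted)]
    return math.sqrt(sum(sq) / len(sq))

print(round(rmse([1500, 3000, 4500, 6000], [1600, 2900, 4500, 6200]), 1))
# -> 122.5
```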
Because it goes beyond the scope of this paper to explain the background of SVM, only the results obtained with this technique are described here. 70% of the data points were used for the training and validation sets and 30% for the test set. The kernel function used was a radial basis function with a power of 20. The training error of the built SVM

model was 1.3%, while the testing error was 1.8%. In other words, the CCR of the SVM on the test set was 98.2%. The quality of the SVM classification for the six classes of the cement treated base is very impressive, as shown by the confusion matrix in Table 3. The table shows that the SVM predicts class 1500 with 95%, 3000 with 98%, 4500 with 100%, 6000 with 99%, 7500 with 100%, and 9000 with 97% accuracy.

TABLE 3 SVM confusion matrix (columns: predicted output; rows: actual output).

Support vector regression models. The support vector regression (SVR) technique has also been applied to the data set. Without going into the background of this technique, the results are presented hereafter. The trained SVR model had a root mean square error (RMSE) of 133; the RMSE of the test set amounted to 181. Figure 5 shows a box-and-whisker plot of the output of the trained SVR model and the actual output (taken from the dataset). The box has lines at the lower quartile, median, and upper quartile values. A quartile is any of the three values which divide the sorted dataset into four equal parts. The whiskers are lines extending from each end of the box to show the extent of the rest of the data. Outliers are data points with values beyond the ends of the whiskers, indicated with +. Figure 5 shows that SVR predicts E2 for values other than 1500 MPa with a very low error. On request a CD containing the program can be made available. Road engineering experts were very pleased with these results because the accuracy of the predictions was higher than that obtained using ANN.

VALIDATION OF THE ANN REGRESSION MODEL
Despite the fact that the ANN regression model was not considered to be the best one, it was decided to test the predictive capabilities of the model using a completely different data set.
As one can observe from Table 1, the original dataset consisted of pavement structures whose layer thicknesses and stiffnesses were varied in a rather systematic way; in fact, only 6 different values for the stiffness of the base were considered. The question was how well the model could predict the stiffness of the base course for pavement structures with layer thickness and stiffness combinations that differ from the combinations shown in Table 1. To determine this, the deflection profiles of 100 additional structures were calculated using the BISAR PC software. Figure 6 shows the variation of the total pavement thickness (h1 + h2) of the additional pavement structures, while Figure 7 shows the deflection profiles of these structures. In Figure 8, the actual E2 values as used in the BISAR calculations are compared to the E2 values predicted by means of the ANN regression model. One can observe that a remarkably good fit between the actual and predicted values was obtained. Based on this result, the conclusion was drawn that although the ANN regression model was initially rated as not good enough, it is still capable of giving very good predictions of the stiffness of the cement treated base course.

FIGURE 5 Scatter plot of the test set using SVR.

FIGURE 6 Total pavement thickness of the additional analyzed structures.

FIGURE 7 Deflection profiles of the additional analyzed pavement structures.

FIGURE 8 Comparison between actual and predicted E2 values.

A similar check of the SV regression model still has to be performed but is expected to give at least similar results.

CONCLUSIONS
From the results of the models, the following conclusions have been drawn:
1. CI techniques have proven to be powerful tools for the development of models to predict pavement layer stiffness using the measured deflection profile and the total pavement thickness as input.
2. Support vector machine regression has proven to produce better results than artificial neural network regression.
3. Extra validation of the ANN regression model showed that, in spite of its lesser performance, even this model is capable of accurately predicting the stiffness of cement treated base courses.

REFERENCES
1. Minsky, M., The Society of Mind. 1986, New York, USA: Simon and Schuster.
2. Haykin, S., Neural Networks: A Comprehensive Foundation. 2nd ed. 1999, New Jersey: Prentice Hall.
3. Vapnik, V.N., The Nature of Statistical Learning Theory. 1995, New York: Springer-Verlag.
4. Cristianini, N., Shawe-Taylor, J., An Introduction to Support Vector Machines and Other Kernel-based Learning Methods. 2000: Cambridge University Press.
5. Gunn, S.R., Support Vector Machines for Classification and Regression. Technical Report ISIS, Department of Electronics and Computer Science, University of Southampton: Southampton.
6. Vapnik, V.N., Golowich, S., Smola, A.J., Support vector method for function approximation, regression estimation, and signal processing. Advances in Neural Information Processing Systems.
7. Miradi, M., Molenaar, A.A.A., Development of artificial neural network (ANN) models for maintenance planning of porous asphalt wearing courses. 2005, Road and Railway Research Laboratory, Delft University of Technology: Delft.
8. Miradi, M., Molenaar, A.A.A., Application of artificial neural network (ANN) to PA lifespan: forecasting models. In IEEE World Congress on Computational Intelligence, Vancouver, Canada: Omnipress.
9.
Hecht-Nielsen, R., Neurocomputing. 1990: Addison-Wesley.
10. Duin, R.P.W., Juszczak, P., De Ridder, D., Paclik, P., Pekalska, E., Tax, D.M.J., PRTools, a Matlab toolbox for pattern recognition. 2004, Delft University of Technology: Delft.


High-Performance Signature Recognition Method using SVM

High-Performance Signature Recognition Method using SVM High-Performance Signature Recognition Method using SVM Saeid Fazli Research Institute of Modern Biological Techniques University of Zanjan Shima Pouyan Electrical Engineering Department University of

More information

Introduction to Machine Learning and Data Mining. Prof. Dr. Igor Trajkovski trajkovski@nyus.edu.mk

Introduction to Machine Learning and Data Mining. Prof. Dr. Igor Trajkovski trajkovski@nyus.edu.mk Introduction to Machine Learning and Data Mining Prof. Dr. Igor Trakovski trakovski@nyus.edu.mk Neural Networks 2 Neural Networks Analogy to biological neural systems, the most robust learning systems

More information

A Simple Introduction to Support Vector Machines

A Simple Introduction to Support Vector Machines A Simple Introduction to Support Vector Machines Martin Law Lecture for CSE 802 Department of Computer Science and Engineering Michigan State University Outline A brief history of SVM Large-margin linear

More information

Multilayer Perceptrons

Multilayer Perceptrons Made wi t h OpenOf f i ce. or g 1 Multilayer Perceptrons 2 nd Order Learning Algorithms Made wi t h OpenOf f i ce. or g 2 Why 2 nd Order? Gradient descent Back-propagation used to obtain first derivatives

More information

Lecture 6. Artificial Neural Networks

Lecture 6. Artificial Neural Networks Lecture 6 Artificial Neural Networks 1 1 Artificial Neural Networks In this note we provide an overview of the key concepts that have led to the emergence of Artificial Neural Networks as a major paradigm

More information

Comparing the Results of Support Vector Machines with Traditional Data Mining Algorithms

Comparing the Results of Support Vector Machines with Traditional Data Mining Algorithms Comparing the Results of Support Vector Machines with Traditional Data Mining Algorithms Scott Pion and Lutz Hamel Abstract This paper presents the results of a series of analyses performed on direct mail

More information

NTC Project: S01-PH10 (formerly I01-P10) 1 Forecasting Women s Apparel Sales Using Mathematical Modeling

NTC Project: S01-PH10 (formerly I01-P10) 1 Forecasting Women s Apparel Sales Using Mathematical Modeling 1 Forecasting Women s Apparel Sales Using Mathematical Modeling Celia Frank* 1, Balaji Vemulapalli 1, Les M. Sztandera 2, Amar Raheja 3 1 School of Textiles and Materials Technology 2 Computer Information

More information

Programming Exercise 3: Multi-class Classification and Neural Networks

Programming Exercise 3: Multi-class Classification and Neural Networks Programming Exercise 3: Multi-class Classification and Neural Networks Machine Learning November 4, 2011 Introduction In this exercise, you will implement one-vs-all logistic regression and neural networks

More information

Neural Networks. Neural network is a network or circuit of neurons. Neurons can be. Biological neurons Artificial neurons

Neural Networks. Neural network is a network or circuit of neurons. Neurons can be. Biological neurons Artificial neurons Neural Networks Neural network is a network or circuit of neurons Neurons can be Biological neurons Artificial neurons Biological neurons Building block of the brain Human brain contains over 10 billion

More information

Data quality in Accounting Information Systems

Data quality in Accounting Information Systems Data quality in Accounting Information Systems Comparing Several Data Mining Techniques Erjon Zoto Department of Statistics and Applied Informatics Faculty of Economy, University of Tirana Tirana, Albania

More information

Predictive Data modeling for health care: Comparative performance study of different prediction models

Predictive Data modeling for health care: Comparative performance study of different prediction models Predictive Data modeling for health care: Comparative performance study of different prediction models Shivanand Hiremath hiremat.nitie@gmail.com National Institute of Industrial Engineering (NITIE) Vihar

More information

Neural Network Applications in Stock Market Predictions - A Methodology Analysis

Neural Network Applications in Stock Market Predictions - A Methodology Analysis Neural Network Applications in Stock Market Predictions - A Methodology Analysis Marijana Zekic, MS University of Josip Juraj Strossmayer in Osijek Faculty of Economics Osijek Gajev trg 7, 31000 Osijek

More information

Lecture 8 February 4

Lecture 8 February 4 ICS273A: Machine Learning Winter 2008 Lecture 8 February 4 Scribe: Carlos Agell (Student) Lecturer: Deva Ramanan 8.1 Neural Nets 8.1.1 Logistic Regression Recall the logistic function: g(x) = 1 1 + e θt

More information

HYBRID PROBABILITY BASED ENSEMBLES FOR BANKRUPTCY PREDICTION

HYBRID PROBABILITY BASED ENSEMBLES FOR BANKRUPTCY PREDICTION HYBRID PROBABILITY BASED ENSEMBLES FOR BANKRUPTCY PREDICTION Chihli Hung 1, Jing Hong Chen 2, Stefan Wermter 3, 1,2 Department of Management Information Systems, Chung Yuan Christian University, Taiwan

More information

EFFICIENT DATA PRE-PROCESSING FOR DATA MINING

EFFICIENT DATA PRE-PROCESSING FOR DATA MINING EFFICIENT DATA PRE-PROCESSING FOR DATA MINING USING NEURAL NETWORKS JothiKumar.R 1, Sivabalan.R.V 2 1 Research scholar, Noorul Islam University, Nagercoil, India Assistant Professor, Adhiparasakthi College

More information

Neural Networks. Introduction to Artificial Intelligence CSE 150 May 29, 2007

Neural Networks. Introduction to Artificial Intelligence CSE 150 May 29, 2007 Neural Networks Introduction to Artificial Intelligence CSE 150 May 29, 2007 Administration Last programming assignment has been posted! Final Exam: Tuesday, June 12, 11:30-2:30 Last Lecture Naïve Bayes

More information

COMPUTATIONAL INTELLIGENCE (INTRODUCTION TO MACHINE LEARNING) SS16. Lecture 2: Linear Regression Gradient Descent Non-linear basis functions

COMPUTATIONAL INTELLIGENCE (INTRODUCTION TO MACHINE LEARNING) SS16. Lecture 2: Linear Regression Gradient Descent Non-linear basis functions COMPUTATIONAL INTELLIGENCE (INTRODUCTION TO MACHINE LEARNING) SS16 Lecture 2: Linear Regression Gradient Descent Non-linear basis functions LINEAR REGRESSION MOTIVATION Why Linear Regression? Regression

More information

CHARACTERISTICS IN FLIGHT DATA ESTIMATION WITH LOGISTIC REGRESSION AND SUPPORT VECTOR MACHINES

CHARACTERISTICS IN FLIGHT DATA ESTIMATION WITH LOGISTIC REGRESSION AND SUPPORT VECTOR MACHINES CHARACTERISTICS IN FLIGHT DATA ESTIMATION WITH LOGISTIC REGRESSION AND SUPPORT VECTOR MACHINES Claus Gwiggner, Ecole Polytechnique, LIX, Palaiseau, France Gert Lanckriet, University of Berkeley, EECS,

More information

Artificial Neural Computation Systems

Artificial Neural Computation Systems Artificial Neural Computation Systems Spring 2003 Technical University of Szczecin Department of Electrical Engineering Lecturer: Prof. Adam Krzyzak,PS 5. Lecture 15.03.2003 147 1. Multilayer Perceptrons............

More information

Keywords: Image complexity, PSNR, Levenberg-Marquardt, Multi-layer neural network.

Keywords: Image complexity, PSNR, Levenberg-Marquardt, Multi-layer neural network. Global Journal of Computer Science and Technology Volume 11 Issue 3 Version 1.0 Type: Double Blind Peer Reviewed International Research Journal Publisher: Global Journals Inc. (USA) Online ISSN: 0975-4172

More information

Data Mining - Evaluation of Classifiers

Data Mining - Evaluation of Classifiers Data Mining - Evaluation of Classifiers Lecturer: JERZY STEFANOWSKI Institute of Computing Sciences Poznan University of Technology Poznan, Poland Lecture 4 SE Master Course 2008/2009 revised for 2010

More information

Drug Store Sales Prediction

Drug Store Sales Prediction Drug Store Sales Prediction Chenghao Wang, Yang Li Abstract - In this paper we tried to apply machine learning algorithm into a real world problem drug store sales forecasting. Given store information,

More information

Data Quality Mining: Employing Classifiers for Assuring consistent Datasets

Data Quality Mining: Employing Classifiers for Assuring consistent Datasets Data Quality Mining: Employing Classifiers for Assuring consistent Datasets Fabian Grüning Carl von Ossietzky Universität Oldenburg, Germany, fabian.gruening@informatik.uni-oldenburg.de Abstract: Independent

More information

PERFORMANCE TESTING OF BITUMINOUS MIXES USING FALLING WEIGHT DEFLECTOMETER

PERFORMANCE TESTING OF BITUMINOUS MIXES USING FALLING WEIGHT DEFLECTOMETER ABSTRACT NO. 6 PERFORMANCE TESTING OF BITUMINOUS MIXES USING FALLING WEIGHT DEFLECTOMETER Prof Praveen Kumar Dr G D Ransinchung Lt. Col. Mayank Mehta Nikhil Saboo IIT Roorkee IIT Roorkee IIT Roorkee IIT

More information

Impact of Feature Selection on the Performance of Wireless Intrusion Detection Systems

Impact of Feature Selection on the Performance of Wireless Intrusion Detection Systems 2009 International Conference on Computer Engineering and Applications IPCSIT vol.2 (2011) (2011) IACSIT Press, Singapore Impact of Feature Selection on the Performance of ireless Intrusion Detection Systems

More information

Prediction of Stock Performance Using Analytical Techniques

Prediction of Stock Performance Using Analytical Techniques 136 JOURNAL OF EMERGING TECHNOLOGIES IN WEB INTELLIGENCE, VOL. 5, NO. 2, MAY 2013 Prediction of Stock Performance Using Analytical Techniques Carol Hargreaves Institute of Systems Science National University

More information

A Comparative Study of the Pickup Method and its Variations Using a Simulated Hotel Reservation Data

A Comparative Study of the Pickup Method and its Variations Using a Simulated Hotel Reservation Data A Comparative Study of the Pickup Method and its Variations Using a Simulated Hotel Reservation Data Athanasius Zakhary, Neamat El Gayar Faculty of Computers and Information Cairo University, Giza, Egypt

More information

FRAUD DETECTION IN ELECTRIC POWER DISTRIBUTION NETWORKS USING AN ANN-BASED KNOWLEDGE-DISCOVERY PROCESS

FRAUD DETECTION IN ELECTRIC POWER DISTRIBUTION NETWORKS USING AN ANN-BASED KNOWLEDGE-DISCOVERY PROCESS FRAUD DETECTION IN ELECTRIC POWER DISTRIBUTION NETWORKS USING AN ANN-BASED KNOWLEDGE-DISCOVERY PROCESS Breno C. Costa, Bruno. L. A. Alberto, André M. Portela, W. Maduro, Esdras O. Eler PDITec, Belo Horizonte,

More information

Horse Racing Prediction Using Artificial Neural Networks

Horse Racing Prediction Using Artificial Neural Networks Horse Racing Prediction Using Artificial Neural Networks ELNAZ DAVOODI, ALI REZA KHANTEYMOORI Mathematics and Computer science Department Institute for Advanced Studies in Basic Sciences (IASBS) Gavazang,

More information

Neural Networks and Support Vector Machines

Neural Networks and Support Vector Machines INF5390 - Kunstig intelligens Neural Networks and Support Vector Machines Roar Fjellheim INF5390-13 Neural Networks and SVM 1 Outline Neural networks Perceptrons Neural networks Support vector machines

More information

Support Vector Machines Explained

Support Vector Machines Explained March 1, 2009 Support Vector Machines Explained Tristan Fletcher www.cs.ucl.ac.uk/staff/t.fletcher/ Introduction This document has been written in an attempt to make the Support Vector Machines (SVM),

More information

Notes on Support Vector Machines

Notes on Support Vector Machines Notes on Support Vector Machines Fernando Mira da Silva Fernando.Silva@inesc.pt Neural Network Group I N E S C November 1998 Abstract This report describes an empirical study of Support Vector Machines

More information

A Content based Spam Filtering Using Optical Back Propagation Technique

A Content based Spam Filtering Using Optical Back Propagation Technique A Content based Spam Filtering Using Optical Back Propagation Technique Sarab M. Hameed 1, Noor Alhuda J. Mohammed 2 Department of Computer Science, College of Science, University of Baghdad - Iraq ABSTRACT

More information

Neural Networks and Back Propagation Algorithm

Neural Networks and Back Propagation Algorithm Neural Networks and Back Propagation Algorithm Mirza Cilimkovic Institute of Technology Blanchardstown Blanchardstown Road North Dublin 15 Ireland mirzac@gmail.com Abstract Neural Networks (NN) are important

More information

Forecasting of Economic Quantities using Fuzzy Autoregressive Model and Fuzzy Neural Network

Forecasting of Economic Quantities using Fuzzy Autoregressive Model and Fuzzy Neural Network Forecasting of Economic Quantities using Fuzzy Autoregressive Model and Fuzzy Neural Network Dušan Marček 1 Abstract Most models for the time series of stock prices have centered on autoregressive (AR)

More information

Applications to Data Smoothing and Image Processing I

Applications to Data Smoothing and Image Processing I Applications to Data Smoothing and Image Processing I MA 348 Kurt Bryan Signals and Images Let t denote time and consider a signal a(t) on some time interval, say t. We ll assume that the signal a(t) is

More information

Neural network software tool development: exploring programming language options

Neural network software tool development: exploring programming language options INEB- PSI Technical Report 2006-1 Neural network software tool development: exploring programming language options Alexandra Oliveira aao@fe.up.pt Supervisor: Professor Joaquim Marques de Sá June 2006

More information

A New Approach For Estimating Software Effort Using RBFN Network

A New Approach For Estimating Software Effort Using RBFN Network IJCSNS International Journal of Computer Science and Network Security, VOL.8 No.7, July 008 37 A New Approach For Estimating Software Using RBFN Network Ch. Satyananda Reddy, P. Sankara Rao, KVSVN Raju,

More information

(Quasi-)Newton methods

(Quasi-)Newton methods (Quasi-)Newton methods 1 Introduction 1.1 Newton method Newton method is a method to find the zeros of a differentiable non-linear function g, x such that g(x) = 0, where g : R n R n. Given a starting

More information

1. Classification problems

1. Classification problems Neural and Evolutionary Computing. Lab 1: Classification problems Machine Learning test data repository Weka data mining platform Introduction Scilab 1. Classification problems The main aim of a classification

More information

Support Vector Machine. Tutorial. (and Statistical Learning Theory)

Support Vector Machine. Tutorial. (and Statistical Learning Theory) Support Vector Machine (and Statistical Learning Theory) Tutorial Jason Weston NEC Labs America 4 Independence Way, Princeton, USA. jasonw@nec-labs.com 1 Support Vector Machines: history SVMs introduced

More information

Chapter 7. Diagnosis and Prognosis of Breast Cancer using Histopathological Data

Chapter 7. Diagnosis and Prognosis of Breast Cancer using Histopathological Data Chapter 7 Diagnosis and Prognosis of Breast Cancer using Histopathological Data In the previous chapter, a method for classification of mammograms using wavelet analysis and adaptive neuro-fuzzy inference

More information

degrees of freedom and are able to adapt to the task they are supposed to do [Gupta].

degrees of freedom and are able to adapt to the task they are supposed to do [Gupta]. 1.3 Neural Networks 19 Neural Networks are large structured systems of equations. These systems have many degrees of freedom and are able to adapt to the task they are supposed to do [Gupta]. Two very

More information

ANN Based Fault Classifier and Fault Locator for Double Circuit Transmission Line

ANN Based Fault Classifier and Fault Locator for Double Circuit Transmission Line International Journal of Computer Sciences and Engineering Open Access Research Paper Volume-4, Special Issue-2, April 2016 E-ISSN: 2347-2693 ANN Based Fault Classifier and Fault Locator for Double Circuit

More information

Knowledge Discovery from patents using KMX Text Analytics

Knowledge Discovery from patents using KMX Text Analytics Knowledge Discovery from patents using KMX Text Analytics Dr. Anton Heijs anton.heijs@treparel.com Treparel Abstract In this white paper we discuss how the KMX technology of Treparel can help searchers

More information

Neural Networks. CAP5610 Machine Learning Instructor: Guo-Jun Qi

Neural Networks. CAP5610 Machine Learning Instructor: Guo-Jun Qi Neural Networks CAP5610 Machine Learning Instructor: Guo-Jun Qi Recap: linear classifier Logistic regression Maximizing the posterior distribution of class Y conditional on the input vector X Support vector

More information

Chapter 6. The stacking ensemble approach

Chapter 6. The stacking ensemble approach 82 This chapter proposes the stacking ensemble approach for combining different data mining classifiers to get better performance. Other combination techniques like voting, bagging etc are also described

More information

Making Sense of the Mayhem: Machine Learning and March Madness

Making Sense of the Mayhem: Machine Learning and March Madness Making Sense of the Mayhem: Machine Learning and March Madness Alex Tran and Adam Ginzberg Stanford University atran3@stanford.edu ginzberg@stanford.edu I. Introduction III. Model The goal of our research

More information

Recurrent Neural Networks

Recurrent Neural Networks Recurrent Neural Networks Neural Computation : Lecture 12 John A. Bullinaria, 2015 1. Recurrent Neural Network Architectures 2. State Space Models and Dynamical Systems 3. Backpropagation Through Time

More information

Analysis of kiva.com Microlending Service! Hoda Eydgahi Julia Ma Andy Bardagjy December 9, 2010 MAS.622j

Analysis of kiva.com Microlending Service! Hoda Eydgahi Julia Ma Andy Bardagjy December 9, 2010 MAS.622j Analysis of kiva.com Microlending Service! Hoda Eydgahi Julia Ma Andy Bardagjy December 9, 2010 MAS.622j What is Kiva? An organization that allows people to lend small amounts of money via the Internet

More information

Machine Learning and Pattern Recognition Logistic Regression

Machine Learning and Pattern Recognition Logistic Regression Machine Learning and Pattern Recognition Logistic Regression Course Lecturer:Amos J Storkey Institute for Adaptive and Neural Computation School of Informatics University of Edinburgh Crichton Street,

More information

Machine Learning in FX Carry Basket Prediction

Machine Learning in FX Carry Basket Prediction Machine Learning in FX Carry Basket Prediction Tristan Fletcher, Fabian Redpath and Joe D Alessandro Abstract Artificial Neural Networks ANN), Support Vector Machines SVM) and Relevance Vector Machines

More information

THE SVM APPROACH FOR BOX JENKINS MODELS

THE SVM APPROACH FOR BOX JENKINS MODELS REVSTAT Statistical Journal Volume 7, Number 1, April 2009, 23 36 THE SVM APPROACH FOR BOX JENKINS MODELS Authors: Saeid Amiri Dep. of Energy and Technology, Swedish Univ. of Agriculture Sciences, P.O.Box

More information

Introduction to Machine Learning. Speaker: Harry Chao Advisor: J.J. Ding Date: 1/27/2011

Introduction to Machine Learning. Speaker: Harry Chao Advisor: J.J. Ding Date: 1/27/2011 Introduction to Machine Learning Speaker: Harry Chao Advisor: J.J. Ding Date: 1/27/2011 1 Outline 1. What is machine learning? 2. The basic of machine learning 3. Principles and effects of machine learning

More information

International Journal of Electronics and Computer Science Engineering 1449

International Journal of Electronics and Computer Science Engineering 1449 International Journal of Electronics and Computer Science Engineering 1449 Available Online at www.ijecse.org ISSN- 2277-1956 Neural Networks in Data Mining Priyanka Gaur Department of Information and

More information

Chapter 12 Discovering New Knowledge Data Mining

Chapter 12 Discovering New Knowledge Data Mining Chapter 12 Discovering New Knowledge Data Mining Becerra-Fernandez, et al. -- Knowledge Management 1/e -- 2004 Prentice Hall Additional material 2007 Dekai Wu Chapter Objectives Introduce the student to

More information

Data Mining Algorithms Part 1. Dejan Sarka

Data Mining Algorithms Part 1. Dejan Sarka Data Mining Algorithms Part 1 Dejan Sarka Join the conversation on Twitter: @DevWeek #DW2015 Instructor Bio Dejan Sarka (dsarka@solidq.com) 30 years of experience SQL Server MVP, MCT, 13 books 7+ courses

More information

Chapter 4: Artificial Neural Networks

Chapter 4: Artificial Neural Networks Chapter 4: Artificial Neural Networks CS 536: Machine Learning Littman (Wu, TA) Administration icml-03: instructional Conference on Machine Learning http://www.cs.rutgers.edu/~mlittman/courses/ml03/icml03/

More information

A TUTORIAL. BY: Negin Yousefpour PhD Student Civil Engineering Department TEXAS A&M UNIVERSITY

A TUTORIAL. BY: Negin Yousefpour PhD Student Civil Engineering Department TEXAS A&M UNIVERSITY ARTIFICIAL NEURAL NETWORKS: A TUTORIAL BY: Negin Yousefpour PhD Student Civil Engineering Department TEXAS A&M UNIVERSITY Contents Introduction Origin Of Neural Network Biological Neural Networks ANN Overview

More information

SELECTING NEURAL NETWORK ARCHITECTURE FOR INVESTMENT PROFITABILITY PREDICTIONS

SELECTING NEURAL NETWORK ARCHITECTURE FOR INVESTMENT PROFITABILITY PREDICTIONS UDC: 004.8 Original scientific paper SELECTING NEURAL NETWORK ARCHITECTURE FOR INVESTMENT PROFITABILITY PREDICTIONS Tonimir Kišasondi, Alen Lovren i University of Zagreb, Faculty of Organization and Informatics,

More information

New Ensemble Combination Scheme

New Ensemble Combination Scheme New Ensemble Combination Scheme Namhyoung Kim, Youngdoo Son, and Jaewook Lee, Member, IEEE Abstract Recently many statistical learning techniques are successfully developed and used in several areas However,

More information

Neural Networks in Data Mining

Neural Networks in Data Mining IOSR Journal of Engineering (IOSRJEN) ISSN (e): 2250-3021, ISSN (p): 2278-8719 Vol. 04, Issue 03 (March. 2014), V6 PP 01-06 www.iosrjen.org Neural Networks in Data Mining Ripundeep Singh Gill, Ashima Department

More information

CS 2750 Machine Learning. Lecture 1. Machine Learning. http://www.cs.pitt.edu/~milos/courses/cs2750/ CS 2750 Machine Learning.

CS 2750 Machine Learning. Lecture 1. Machine Learning. http://www.cs.pitt.edu/~milos/courses/cs2750/ CS 2750 Machine Learning. Lecture Machine Learning Milos Hauskrecht milos@cs.pitt.edu 539 Sennott Square, x5 http://www.cs.pitt.edu/~milos/courses/cs75/ Administration Instructor: Milos Hauskrecht milos@cs.pitt.edu 539 Sennott

More information

Introduction to Neural Networks

Introduction to Neural Networks Introduction to Neural Networks 2nd Year UG, MSc in Computer Science http://www.cs.bham.ac.uk/~jxb/inn.html Lecturer: Dr. John A. Bullinaria http://www.cs.bham.ac.uk/~jxb John A. Bullinaria, 2004 Module

More information

International Journal of Computer Trends and Technology (IJCTT) volume 4 Issue 8 August 2013

International Journal of Computer Trends and Technology (IJCTT) volume 4 Issue 8 August 2013 A Short-Term Traffic Prediction On A Distributed Network Using Multiple Regression Equation Ms.Sharmi.S 1 Research Scholar, MS University,Thirunelvelli Dr.M.Punithavalli Director, SREC,Coimbatore. Abstract:

More information

Data Mining Approach for Analyzing Call Center Performance

Data Mining Approach for Analyzing Call Center Performance Data Mining Approach for Analyzing Call Center Performance Marcin Paprzycki, Ajith Abraham, Ruiyuan Guo and Srinivas Mukkamala * Computer Science Department, Oklahoma State University, USA Computer Science

More information

Supporting Online Material for

Supporting Online Material for www.sciencemag.org/cgi/content/full/313/5786/504/dc1 Supporting Online Material for Reducing the Dimensionality of Data with Neural Networks G. E. Hinton* and R. R. Salakhutdinov *To whom correspondence

More information

Search Taxonomy. Web Search. Search Engine Optimization. Information Retrieval

Search Taxonomy. Web Search. Search Engine Optimization. Information Retrieval Information Retrieval INFO 4300 / CS 4300! Retrieval models Older models» Boolean retrieval» Vector Space model Probabilistic Models» BM25» Language models Web search» Learning to Rank Search Taxonomy!

More information

Parallel Data Selection Based on Neurodynamic Optimization in the Era of Big Data

Parallel Data Selection Based on Neurodynamic Optimization in the Era of Big Data Parallel Data Selection Based on Neurodynamic Optimization in the Era of Big Data Jun Wang Department of Mechanical and Automation Engineering The Chinese University of Hong Kong Shatin, New Territories,

More information

Optimizing content delivery through machine learning. James Schneider Anton DeFrancesco

Optimizing content delivery through machine learning. James Schneider Anton DeFrancesco Optimizing content delivery through machine learning James Schneider Anton DeFrancesco Obligatory company slide Our Research Areas Machine learning The problem Prioritize import information in low bandwidth

More information

Predictive time series analysis of stock prices using neural network classifier

Predictive time series analysis of stock prices using neural network classifier Predictive time series analysis of stock prices using neural network classifier Abhinav Pathak, National Institute of Technology, Karnataka, Surathkal, India abhi.pat93@gmail.com Abstract The work pertains

More information

Real Stock Trading Using Soft Computing Models

Real Stock Trading Using Soft Computing Models Real Stock Trading Using Soft Computing Models Brent Doeksen 1, Ajith Abraham 2, Johnson Thomas 1 and Marcin Paprzycki 1 1 Computer Science Department, Oklahoma State University, OK 74106, USA, 2 School

More information

EVALUATING THE EFFECT OF DATASET SIZE ON PREDICTIVE MODEL USING SUPERVISED LEARNING TECHNIQUE

EVALUATING THE EFFECT OF DATASET SIZE ON PREDICTIVE MODEL USING SUPERVISED LEARNING TECHNIQUE International Journal of Software Engineering & Computer Sciences (IJSECS) ISSN: 2289-8522,Volume 1, pp. 75-84, February 2015 Universiti Malaysia Pahang DOI: http://dx.doi.org/10.15282/ijsecs.1.2015.6.0006

More information

Iranian J Env Health Sci Eng, 2004, Vol.1, No.2, pp.51-57. Application of Intelligent System for Water Treatment Plant Operation.

Iranian J Env Health Sci Eng, 2004, Vol.1, No.2, pp.51-57. Application of Intelligent System for Water Treatment Plant Operation. Iranian J Env Health Sci Eng, 2004, Vol.1, No.2, pp.51-57 Application of Intelligent System for Water Treatment Plant Operation *A Mirsepassi Dept. of Environmental Health Engineering, School of Public

More information

An Introduction to Neural Networks

An Introduction to Neural Networks An Introduction to Vincent Cheung Kevin Cannons Signal & Data Compression Laboratory Electrical & Computer Engineering University of Manitoba Winnipeg, Manitoba, Canada Advisor: Dr. W. Kinsner May 27,

More information

BREAST CANCER DIAGNOSIS USING STATISTICAL NEURAL NETWORKS

BREAST CANCER DIAGNOSIS USING STATISTICAL NEURAL NETWORKS ISTANBUL UNIVERSITY JOURNAL OF ELECTRICAL & ELECTRONICS ENGINEERING YEAR VOLUME NUMBER : 004 : 4 : (1149-1153) BREAST CANCER DIAGNOSIS USING STATISTICAL NEURAL NETWORKS Tüba KIYAN 1 Tülay YILDIRIM 1, Electronics

More information

Forecasting the U.S. Stock Market via Levenberg-Marquardt and Haken Artificial Neural Networks Using ICA&PCA Pre-Processing Techniques

Forecasting the U.S. Stock Market via Levenberg-Marquardt and Haken Artificial Neural Networks Using ICA&PCA Pre-Processing Techniques Forecasting the U.S. Stock Market via Levenberg-Marquardt and Haken Artificial Neural Networks Using ICA&PCA Pre-Processing Techniques Golovachev Sergey National Research University, Higher School of Economics,

More information

FOREX TRADING PREDICTION USING LINEAR REGRESSION LINE, ARTIFICIAL NEURAL NETWORK AND DYNAMIC TIME WARPING ALGORITHMS

FOREX TRADING PREDICTION USING LINEAR REGRESSION LINE, ARTIFICIAL NEURAL NETWORK AND DYNAMIC TIME WARPING ALGORITHMS FOREX TRADING PREDICTION USING LINEAR REGRESSION LINE, ARTIFICIAL NEURAL NETWORK AND DYNAMIC TIME WARPING ALGORITHMS Leslie C.O. Tiong 1, David C.L. Ngo 2, and Yunli Lee 3 1 Sunway University, Malaysia,

More information

NCSS Statistical Software Principal Components Regression. In ordinary least squares, the regression coefficients are estimated using the formula ( )

NCSS Statistical Software Principal Components Regression. In ordinary least squares, the regression coefficients are estimated using the formula ( ) Chapter 340 Principal Components Regression Introduction is a technique for analyzing multiple regression data that suffer from multicollinearity. When multicollinearity occurs, least squares estimates

More information

Performance Evaluation of Artificial Neural. Networks for Spatial Data Analysis

Performance Evaluation of Artificial Neural. Networks for Spatial Data Analysis Contemporary Engineering Sciences, Vol. 4, 2011, no. 4, 149-163 Performance Evaluation of Artificial Neural Networks for Spatial Data Analysis Akram A. Moustafa Department of Computer Science Al al-bayt

More information

Multiple Kernel Learning on the Limit Order Book

Multiple Kernel Learning on the Limit Order Book JMLR: Workshop and Conference Proceedings 11 (2010) 167 174 Workshop on Applications of Pattern Analysis Multiple Kernel Learning on the Limit Order Book Tristan Fletcher Zakria Hussain John Shawe-Taylor

More information

Supply Chain Forecasting Model Using Computational Intelligence Techniques

Supply Chain Forecasting Model Using Computational Intelligence Techniques CMU.J.Nat.Sci Special Issue on Manufacturing Technology (2011) Vol.10(1) 19 Supply Chain Forecasting Model Using Computational Intelligence Techniques Wimalin S. Laosiritaworn Department of Industrial

More information

Learning to Process Natural Language in Big Data Environment

Learning to Process Natural Language in Big Data Environment CCF ADL 2015 Nanchang Oct 11, 2015 Learning to Process Natural Language in Big Data Environment Hang Li Noah s Ark Lab Huawei Technologies Part 1: Deep Learning - Present and Future Talk Outline Overview

More information