Time Series Prediction with Neural Networks for the Athens Stock Exchange Indicator


European Research Studies, Volume XV, Issue (2), 2012

Time Series Prediction with Neural Networks for the Athens Stock Exchange Indicator

M. Hanias 1, P. Curtis 2, E. Thalassinos 3

Abstract: The main aim of this study is to predict the daily stock exchange price index of the Athens Stock Exchange (ASE) using back propagation neural networks. We construct the neural network based on the minimum embedding dimension of the corresponding strange attractor. Multistep prediction for nine days ahead is achieved with this particular network, indicating the increased possibility of using this technique for immediate forecasts on very short time data sets, mostly daily and weekly.

Key Words: Time Series Forecasting, Neural Networks, Perceptrons, Neuron

JEL Classification: C02, C22, C69

1 TEI Chalkidas, Athens, Greece.
2 TEI Chalkidas, Athens, Greece, email: pcurtis@teihal.gr
3 Professor, Department of Maritime Studies, University of Piraeus, Chair Jean Monnet, email: thalassi@unipi.gr

1. Introduction

The common characteristic of all stock exchange indicators is uncertainty, which concerns their short- and long-term future state. This feature is undesirable for the investor, but it is also unavoidable whenever a stock exchange indicator is selected as an investment tool. Among the best possible alterations the researcher desires to make is to reduce this uncertainty. Stock exchange prediction, or forecasting, is one of the interesting issues in this process, as is also discussed in other works (Helstrom T. & Holmstrom K., 1998; Tsibouris G. & Zeidenberg M., 1996; White H., 1993).

Time series forecasting, or time series prediction, takes an existing series of data $x_{t-n}, \ldots, x_{t-2}, x_{t-1}, x_t$ and forecasts the $x_{t+1}$ data values. The goal is to observe or model the existing data series so that future, unknown data values can be forecasted accurately. Examples of data series include financial data series (stocks, indices, rates, etc.), physically observed data series (sunspots, weather, etc.), and mathematical data series (the Fibonacci sequence, integrals of differential equations, etc.).

A neural network (Tsibouris G. & Zeidenberg M., 1996) is a computational model that is loosely based on the neuron cell structure of the biological nervous system. Given a training set of data, the neural network can learn the data with a learning algorithm; in this research the most common algorithm, back propagation, is used. Through back propagation, the neural network forms a mapping between inputs and desired outputs from the training set by altering the weighted connections within the network.

The origin of neural networks dates back to the 1940s. McCulloch and Pitts (1943) and Hebb (1949) researched networks of simple computing devices that could, respectively, model neurological activity and model learning within these networks.
Later, the work of Rosenblatt (1962) focused on the computational ability of perceptrons, or single-layer feed-forward networks. Proofs showing that perceptrons, trained with the Perceptron Rule on linearly separable pattern class data, could correctly separate the classes generated excitement among researchers and practitioners. This excitement waned with the discouraging analysis of perceptrons presented by Minsky and Papert (1969). The analysis pointed out that perceptrons could not learn the class of linearly inseparable functions. It also stated that these limitations could be resolved if networks contained more than one layer; however, no effective training algorithm for multi-layer networks was available. Rumelhart, Hinton, and Williams (1986) revived interest in neural networks by introducing the generalized delta rule for learning by back propagation, which is today the most commonly used training algorithm for multi-layer networks.

2. Athens Stock Exchange Indicator Time Series

The Athens Stock Exchange Indicator is presented as a signal $x = x(t)$, as shown in Figure 1. It covers data from the period 1998 to 2005. The sampling rate is Δt = 1 day.

Figure 1. Time Series Data for the Athens Stock Exchange Indicator (stock index versus time in days)

3. Neural Network Construction

In order to predict the time series of the Athens Stock Exchange Indicator we construct a back propagation network (Wan A.E., 1990; Widrow B. & Lehr M., 1990; Rumelhart D., McClelland J. et al., 1986) that consists of one input layer, one middle or hidden layer, and one output layer. The input layer has 3 processing elements (PE), or neurons; the middle layer has 2×3+1 = 7 neurons, according to the Kolmogorov theorem (Kolmogorov N.A., 1950; Blackwell D., 1947). We choose the input layer to have 3 inputs because the minimum embedding dimension is 3, so the attractor is embedded in a three-dimensional phase space (Hanias P.M., Curtis G.P., and Thalassinos E.J., 2006). Beginning with the first value of the time series, the first set of input data is x1, x2, x3, and the outputs are x4, x5, x6, x7, x8, x9, x10, x11, x12. Similarly, the second set of inputs is x2, x3, x4, and the outputs we are trying to predict are x5, x6, x7, x8, x9, x10, x11, x12, x13. So we construct the network with an input layer of 3 neurons and an output layer of 9 neurons. In other words, with every 3 input values of the stock market index we can predict 9 steps ahead. The network architecture is shown in Figure 2. There is a full interconnection between each layer.

Figure 2. Network Architecture with one Input Layer, one Hidden Layer and one Output Layer

As shown in Figure 2, this neural network is composed of a layered arrangement of artificial neurons in which each neuron of a given layer feeds the neurons of the next layer. Single neurons extracted from the corresponding layers are represented in Figure 3.

Figure 3. Single Neurons Extracted from the Input (layer l-2), Hidden (layer l-1) and Output (layer l) Layers
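The exemplar construction described above, pairing each window of 3 consecutive index values with the 9 values that immediately follow it, can be sketched as follows; the function name and the toy series are illustrative, not from the paper.

```python
import numpy as np

def make_exemplars(series, n_in=3, n_out=9):
    """Pair each window of n_in consecutive values with the n_out
    values that immediately follow it, sliding one step at a time."""
    X, Y = [], []
    for t in range(len(series) - n_in - n_out + 1):
        X.append(series[t:t + n_in])                  # e.g. x1, x2, x3
        Y.append(series[t + n_in:t + n_in + n_out])   # e.g. x4 ... x12
    return np.array(X), np.array(Y)

series = np.arange(1.0, 16.0)   # x1 ... x15, a stand-in for the ASE index
X, Y = make_exemplars(series)
# X[0] holds x1, x2, x3; Y[0] holds x4 ... x12; X[1] shifts the window by one.
```

With the paper's 2047-point series and this one-step slide, the first 1024 such windows would serve as the N = 1024 training exemplars.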

In Figure 3 there is a neuron p at the input layer l-2, a neuron j at the middle layer l-1, and a neuron i at the output layer l. Let m be the number of neurons in layer l (in our case m = 9), let n be the number of neurons in layer l-1 (in our case n = 7), and let N be the number of examples (in our case N = 1024), while t is an index that points to the corresponding case. We denote by $y_i(t)$ the output of the i-th neuron of layer l and by $y_j^{(l-1)}(t)$ the output of the j-th neuron of the middle layer l-1, while $T_i(t)$ is the desired output. The inputs $y_j^{(l-1)}(t)$ to neuron i are multiplied by variable coefficients $w_{ij}$, called weights, which represent the synaptic connectivity between neuron j in the previous layer and neuron i in layer l. At the output layer l the input to neuron i is

$$I_i(t) = \sum_{j=1}^{n} w_{ij}\, y_j^{(l-1)}(t) \qquad (1)$$

while the output of neuron i is

$$y_i(t) = \tanh(I_i(t)) \qquad (2)$$

where tanh is the hyperbolic tangent function. Simplistically, we can say that the output of a neuron is a tanh function of the weighted sum of its inputs. The error $\delta_i^{(l)}(t)$ for neuron i of the output layer l, for case t, is

$$\delta_i^{(l)}(t) = T_i(t) - y_i(t) \qquad (3)$$

where $y_i(t)$ is the predicted output and $T_i(t)$ is the desired output. Using a back propagation network, we propagate the errors for each output layer neuron backwards to the middle layer, using the same interconnections and weights that the middle layer uses to transmit its outputs to the output layer. Then we can compute a gain error for each neuron in the middle layer, based on its portion of the blame for the output layer's error. The gain error $OE_i(t)$ for neuron i of the output layer l is, as pointed out in Wan A.E. (1994):

$$OE_i(t) = \delta_i^{(l)}(t)\, y_i(t)\,[1 - y_i(t)] \qquad (4)$$

and

$$\frac{\partial E(t)}{\partial w_{ij}} = -OE_i(t)\, y_j^{(l-1)}(t) \qquad (5)$$

and finally

$$w'_{ij} = w_{ij} - \beta\, \frac{\partial E(t)}{\partial w_{ij}} \qquad (6)$$

or

$$w'_{ij} = w_{ij} + \beta\, OE_i(t)\, y_j^{(l-1)}(t) \qquad (7)$$

This is a weight change algorithm called the generalized delta rule (Danilo P., Mandic J., and Chambers A., 2001). The notation $w'_{ij}$, $w_{ij}$ represents the weight after and before it has been adjusted. This rule is also called the Widrow-Hoff training rule, or the Least Mean Squared Rule. What this law attempts to do is ensure that the aggregate statistical Least Mean Square Error is minimized over the network. The constant β is the learning constant. However, we used a more robust learning rule, the Generalized Delta Rule with a momentum term α. The statement of this rule is:

$$w'_{ij} = w_{ij} + \beta\, OE_i(t)\, y_j^{(l-1)}(t) + \alpha\,\big(w_{ij}(t) - w_{ij}(t-1)\big) \qquad (8)$$

The momentum term is simply the constant α times the weight change of this neurode from the previous presentation of the input pattern. So, if the last weight change was in a particular direction, the momentum term tends to push the next weight change, more or less, into the same direction.

Applying the same procedure to the middle layer we have the following. At the middle layer l-1 the input to neuron j is

$$I_j^{(l-1)}(t) = \sum_{p=1}^{3} w_{jp}^{(l-1)}\, y_p^{(l-2)}(t) \qquad (9)$$

while the output of neuron j is

$$y_j^{(l-1)}(t) = \tanh\big(I_j^{(l-1)}(t)\big) \qquad (10)$$

The error $\delta_j^{(l-1)}(t)$ for neuron j of the middle layer l-1, for case t, is

$$\delta_j^{(l-1)}(t) = \sum_{i=1}^{m} w_{ij}\, OE_i(t) \qquad (11)$$

The gain error $ME_j^{(l-1)}(t)$ for neuron j of the middle layer l-1 is:

$$ME_j^{(l-1)}(t) = \delta_j^{(l-1)}(t)\, y_j^{(l-1)}(t)\,\big[1 - y_j^{(l-1)}(t)\big] \qquad (12)$$

and the Delta rule is

$$\frac{\partial E(t)}{\partial w_{jp}^{(l-1)}} = -ME_j^{(l-1)}(t)\, y_p^{(l-2)}(t) \qquad (13)$$

$$w'^{(l-1)}_{jp} = w^{(l-1)}_{jp} - \beta\, \frac{\partial E(t)}{\partial w_{jp}^{(l-1)}} \qquad (14)$$

or

$$w'^{(l-1)}_{jp} = w^{(l-1)}_{jp} + \beta\, ME_j^{(l-1)}(t)\, y_p^{(l-2)}(t) \qquad (15)$$

Therefore the generalized Delta Rule is:

$$w'^{(l-1)}_{jp} = w^{(l-1)}_{jp} + \beta\, ME_j^{(l-1)}(t)\, y_p^{(l-2)}(t) + \alpha\,\big(w^{(l-1)}_{jp}(t) - w^{(l-1)}_{jp}(t-1)\big) \qquad (16)$$

4. Time Series Prediction

In this section we construct the neural network of Figure 2. The input layer has 3 neurons, the hidden layer has 7 neurons, and the output layer has 9 neurons. The time series has 2047 points. We train the network with a training set of 1024 exemplars, a learning rate β = 1, and momentum α = 0.7. The network was trained until the Mean Square Error (MSE) fell below 0.0025. After 1300 epochs the minimum training MSE was 0.0024. Figure 4 presents the dependence of the MSE on the number of epochs; as shown in Figure 4, convergence is rapid.

Figure 4. MSE Versus Epochs on the Training Data Set (the MSE falls from about 0.10 to below 0.0025 within 1400 epochs)

Then we can test the network on the last 991 data values, which the network has never seen before. For each 3 inputs we can predict the next 9 values, as shown in Figure 5.

Figure 5. Actual and Predicted Time Series for 9 Time Steps Ahead (stock index versus time in days)

As shown in Figure 5, there is a very good fit between the actual and the predicted data.

5. Conclusions

In this paper we have shown that a back propagation neural network can predict the Athens Stock Exchange Indicator very well. Using three days of values as inputs, we predicted nine days ahead into the future. The actual and the predicted data are very close, with a minimum Mean Square Error (MSE) of 0.0024, which is an acceptable error for this kind of data.

References

1. Blackwell D., (1947), "Conditional Expectation and Unbiased Sequential Estimation", Ann. Math. Stat., 18, pp. 105-110.
2. Danilo P., Mandic J., and Chambers A., (2001), Recurrent Neural Networks for Prediction, John Wiley & Sons Ltd, London.
3. Hanias P.M., Curtis G.P., Thalassinos E.J., (2006), "Non-Linear Dynamics and Chaos: The Case of the Price Indicator at the Athens Stock Exchange", European Research Studies, Vol. VII, issue (3-4), and Eastern Economic Association Annual Conference, February 2007, New York.
4. Helstrom T., Holmstrom K., (1998), "Predicting the Stock Market", published as Opuscula ISRN HEV-BIB-OP-26-SE.
5. Kolmogorov N.A., (1950), "Unbiased Estimates", Izv. Akad. Nauk SSSR Ser. Mat., 14:4, pp. 303-326 (in Russian).
6. Maddala G.S., (1992), Introduction to Econometrics, New York, Toronto: Macmillan Publishing Company.
7. Pesaran H.M., Timmermann A., (1994), "Forecasting Stock Returns: An Examination of Stock Market Trading in the Presence of Transaction Costs", Journal of Forecasting, Vol. 13, pp. 335-367.
8. Rumelhart D., McClelland J., et al., (1986), Parallel Distributed Processing, Cambridge, MA: MIT Press.
9. Thalassinos E., Thalassinos J., (2004), "European Stock Markets: Integration Analysis", Eastern Economic Association Conference, February 2007, New York.
10. Thalassinos E., Thalassinos P., (2005), "Stock Market Analysis", European Research Studies Journal, Vol. IX, issue (1-2), and International Conference in Finance, Istanbul, 2005.
11. Tsibouris G., Zeidenberg M., (1996), "Testing the Efficient Market Hypothesis with Gradient Descent Algorithms", in Reffenes A.P., Neural Networks in the Capital Markets, England: John Wiley & Sons Ltd, pp. 127-136.
12. Wan A.E., (1990), "Temporal Backpropagation: An Efficient Algorithm for Finite Impulse Response Neural Networks", Proceedings of the 1990 Connectionist Models Summer School, San Mateo, CA.
13. Wan A.E., (1994), "Time Series Prediction by Using a Connectionist Network with Internal Delay Lines", in Time Series Prediction: Forecasting the Future and Understanding the Past, Eds. by
14. White H., (1993), "Economic Prediction Using Neural Networks: The Case of IBM Daily Stock Returns", in Trippi R.R., and Turban E., Neural Networks in Finance and Investment, Chicago, Cambridge, England: Probus Publishing Company, pp. 315-329.
15. Widrow B., Lehr M., (1990), "30 Years of Adaptive Neural Networks: Perceptron, Madaline, and Backpropagation", IEEE Proceedings, Vol. 78, No. 9, pp. 1415-1442.
