Hybrid-Learning Methods for Stock Index Modeling




Chapter IV

Hybrid-Learning Methods for Stock Index Modeling

Yuehui Chen, Jinan University, China
Ajith Abraham, Chung-Ang University, Republic of Korea

Abstract

The use of intelligent systems for stock market prediction has been widely established. In this paper, we investigate how the seemingly chaotic behavior of stock markets could be well represented using several connectionist paradigms and soft-computing techniques. To demonstrate the different techniques, we consider the Nasdaq-100 index of the Nasdaq Stock Market and the S&P CNX NIFTY stock index. We analyze 7-year Nasdaq-100 main-index values and 4-year NIFTY index values. This chapter investigates the development of novel, reliable, and efficient techniques to model the seemingly chaotic behavior of stock markets. We consider the flexible neural tree algorithm, a wavelet neural network, a local linear wavelet neural network, and finally a feed-forward artificial neural network. The particle-swarm-optimization algorithm optimizes the parameters of the different techniques. This paper briefly explains how the different learning paradigms could be formulated using various methods and then investigates whether they can provide the required level of performance; in other words, whether they are sufficiently good and robust so as to provide a reliable forecast model for stock market indices. Experiment results reveal that all the models considered could represent the stock indices' behavior very accurately.

Introduction

Prediction of stocks is generally believed to be a very difficult task; the index behaves like a random-walk process and is time varying. The obvious complexity of the problem paves the way for the importance of intelligent prediction paradigms (Abraham, Nath, & Mahanti, 2001). During the last decade, stocks and futures traders have come to rely upon various types of intelligent systems to make trading decisions (Abraham, Philip, & Saratchandran, 2003; Chan & Liu, 2002; Tay & Cao, 2002; Leigh, Modani, Purvis, & Roberts, 2002; Leigh, Purvis, & Ragusa, 2002; Oh & Kim, 2002; Quah & Srinivasan, 1999; Wang, 2002). Several intelligent systems have in recent years been developed for modeling expertise, decision support, and complicated automation tasks (Berkeley, 1997; Bischi & Valori, 2000; Cios, 2001; Kim & Han, 2000; Koulouriotis, Diakoulakis, & Emiris, 2001; LeBaron, 2001; Palma-dos-Reis & Zahedi, 1999; Wuthrich et al., 1998). In this chapter, we analyze the seemingly chaotic behavior of two well-known stock indices, namely the Nasdaq-100 index of the Nasdaq Stock Market and the S&P CNX NIFTY stock index.

The Nasdaq-100 index reflects Nasdaq's largest companies across major industry groups, including computer hardware and software, telecommunications, retail/wholesale trade, and biotechnology. The Nasdaq-100 index is a modified capitalization-weighted index, designed to limit domination of the index by a few large stocks while generally retaining the capitalization ranking of companies. Through an investment in Nasdaq-100 index tracking stock, investors can participate in the collective performance of many of the Nasdaq stocks that are often in the news or have become household names. Similarly, the S&P CNX NIFTY is a well-diversified 50-stock index accounting for 25 sectors of the economy. It is used for a variety of purposes such as benchmarking fund portfolios, index-based derivatives, and index funds. The CNX indices are computed using the market-capitalization weighted method, wherein the level of the index reflects the total market value of all the stocks in the index relative to a particular base period. The method also takes into account constituent changes in the index and, importantly, corporate actions such as stock splits, rights issues, and so on, without affecting the index value.

Our research investigates the performance of four different connectionist paradigms for modeling the Nasdaq-100 and NIFTY stock market indices. We consider the Flexible Neural Tree (FNT) algorithm (Chen, Yang, & Dong, 2004), a Wavelet Neural Network (WNN), a Local Linear Wavelet Neural Network (LLWNN) (Chen et al., 2005), and finally a feed-forward artificial neural network (ANN) (Chen et al., 2004). The particle-swarm-optimization algorithm optimizes the parameters of the different techniques (Kennedy & Eberhart, 1995). We analyzed the Nasdaq-100 index values from 11 January 1995 to 11 January 2002 and the NIFTY index from 01 January 1998 to 03 December 2001. For both indices, we divided the entire data into roughly two equal halves. No special rules were used to select the training set other than ensuring a reasonable representation of the

parameter space of the problem domain (Abraham et al., 2003). The complexity of the training and test data sets for both indices is depicted in Figure 1.

Figure 1. (a) Training and test data sets for the Nasdaq-100 index and (b) the NIFTY index

In the section entitled Hybrid-Learning Models, we briefly describe the different learning algorithms. This section is followed by the Experiment Setup and Results section. This is, in turn, followed by the Conclusions section.

Particle-Swarm-Optimization (PSO) Algorithm

The PSO conducts searches using a population of particles that correspond to individuals in an Evolutionary Algorithm (EA). Initially, a population of particles is randomly generated. Each particle represents a potential solution and has a position represented by a position vector x_i. A swarm of particles moves through the problem space, with the moving velocity of each particle represented by a velocity vector v_i. At each time step, a function f representing a quality measure is calculated by using x_i as input. Each

particle keeps track of its own best position, which is associated with the best fitness it has achieved so far, in a vector p_i. Furthermore, the best position among all the particles obtained so far in the population is kept track of as p_g. In addition to this global version, another version of PSO keeps track of the best position among all the topological neighbors of a particle. At each time step t, by using the individual best position, p_i(t), and the global best position, p_g(t), a new velocity for particle i is updated by:

v_i(t + 1) = v_i(t) + c_1 φ_1 (p_i(t) − x_i(t)) + c_2 φ_2 (p_g(t) − x_i(t))    (1)

where c_1 and c_2 are positive constants and φ_1 and φ_2 are uniformly distributed random numbers in [0, 1]. The velocity v_i is limited to the range ±V_max (if the velocity violates this limit, it is set to its proper limit). Changing velocity this way enables the particle to search around both its individual best position, p_i, and the global best position, p_g. Based on the updated velocities, each particle changes its position according to:

x_i(t + 1) = x_i(t) + v_i(t + 1)    (2)

The PSO algorithm is employed to optimize the parameter vectors of the FNT, ANN, and WNN.

Hybrid-Learning Models

Flexible Neural Tree Model

In this research, a tree-structured encoding method with a specific instruction set is selected for representing a FNT model (Chen et al., 2005).

Flexible Neuron Instructor and FNT Model

The function set F and terminal instruction set T used for generating a FNT model are described as follows:

S = F ∪ T = {+_2, +_3, ..., +_N} ∪ {x_1, x_2, ..., x_n}    (3)

where +_i (i = 2, 3, ..., N) denote nonleaf node instructions taking i arguments, and x_1, x_2, ..., x_n are leaf node instructions taking no arguments. The output of a nonleaf
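The velocity and position updates of Equations 1 and 2 can be sketched as follows. This is a minimal illustration, not the authors' code: the fitness function, swarm size, search bounds, and stopping rule are placeholder assumptions.

```python
import random

def pso_minimize(f, dim, n_particles=20, c1=2.0, c2=2.0, v_max=0.5, iters=100):
    """Minimize f over [-1, 1]^dim with the basic PSO of Equations 1 and 2."""
    x = [[random.uniform(-1, 1) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    p = [xi[:] for xi in x]                       # individual best positions p_i
    p_fit = [f(xi) for xi in x]
    g = p[min(range(n_particles), key=lambda i: p_fit[i])][:]  # global best p_g
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                phi1, phi2 = random.random(), random.random()
                # Equation 1: velocity update, clamped to +/- v_max
                v[i][d] += c1 * phi1 * (p[i][d] - x[i][d]) + c2 * phi2 * (g[d] - x[i][d])
                v[i][d] = max(-v_max, min(v_max, v[i][d]))
                # Equation 2: position update
                x[i][d] += v[i][d]
            fit = f(x[i])
            if fit < p_fit[i]:                    # track individual best
                p[i], p_fit[i] = x[i][:], fit
                if fit < f(g):                    # track global best
                    g = x[i][:]
    return g
```

Applied to a simple quadratic fitness, the swarm contracts toward the minimum, which is how it later tunes the FNT, ANN, and WNN parameter vectors.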

node is calculated as a flexible neuron model (see Figure 2). From this point of view, the instruction +_i is also called a flexible neuron operator with i inputs. In the construction process of a neural tree, if a nonterminal instruction, that is, +_i (i = 2, 3, ..., N), is selected, i real values are randomly generated and used for representing the connection strength between the node +_i and its children. In addition, two adjustable parameters a_i and b_i are randomly created as flexible activation-function parameters. For developing the FNT model, the following flexible activation function is used:

f(a_i, b_i; x) = exp(−((x − a_i)/b_i)^2)    (4)

The output of a flexible neuron +_n can be calculated as follows. The total excitation of +_n is:

net_n = Σ_{j=1}^{n} w_j x_j    (5)

where x_j (j = 1, 2, ..., n) are the inputs to node +_n. The output of the node +_n is then calculated by:

out_n = f(a_n, b_n; net_n) = exp(−((net_n − a_n)/b_n)^2)    (6)

A typical flexible neuron operator and a neural tree model are illustrated in Figure 2. The overall output of the flexible neural tree can be recursively computed from left to right by the depth-first method.

Figure 2. A flexible neuron operator (left), and a typical representation of the FNT with function instruction set F = {+_2, +_3, ..., +_6} and terminal instruction set T = {x_1, x_2, x_3}
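The flexible neuron of Equations 4 through 6 is easy to sketch in code; the weights and the (a_n, b_n) values below are arbitrary illustrative numbers, not values from the chapter.

```python
import math

def flexible_neuron(inputs, weights, a, b):
    """Output of a flexible neuron +_n."""
    net = sum(w * x for w, x in zip(weights, inputs))  # Equation 5
    return math.exp(-((net - a) / b) ** 2)             # Equation 6

# A two-input flexible neuron; when net equals a, the activation peaks at 1.
out = flexible_neuron([0.5, 0.5], [1.0, 1.0], a=1.0, b=0.5)  # out == 1.0
```

A full FNT evaluation would apply this neuron recursively over the tree, depth-first from left to right, as described above.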

Optimization of the FNT Model

The optimization of the FNT includes both tree-structure and parameter optimization. Finding an optimal or near-optimal neural tree is formulated as a product of evolution. A number of neural tree variation operators are developed as follows:

Mutation

Four different mutation operators were employed to generate offspring from the parents. These mutation operators are as follows:

(1) Changing one terminal node: Randomly select one terminal node in the neural tree and replace it with another terminal node.
(2) Changing all the terminal nodes: Select each and every terminal node in the neural tree and replace it with another terminal node.
(3) Growing: Select a random leaf in the hidden layer of the neural tree and replace it with a newly generated subtree.
(4) Pruning: Randomly select a function node in the neural tree and replace it with a terminal node.

The neural tree operators were applied to each of the parents to generate an offspring using the following steps:

(a) A Poisson random number N, with mean λ, was generated.
(b) N random mutation operators were uniformly selected with replacement from the four-mutation-operator set above.
(c) These N mutation operators were applied in sequence, one after the other, to the parent to get the offspring.

Crossover

Select two neural trees at random, randomly select one nonterminal node in the hidden layer of each neural tree, and then swap the selected subtrees. The crossover operator is implemented with a predefined probability of 0.3 in this study.
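The offspring-generation steps (a)-(c) above can be sketched as below. The flat-list "tree" and the single operator body are simplified stand-ins for the actual FNT structures, and λ = 2 is an assumed value; only the control flow (Poisson count, uniform selection with replacement, sequential application) follows the chapter.

```python
import math
import random

def poisson(lam):
    """Draw a Poisson random number N with mean lam (Knuth's multiplication method)."""
    n, p, limit = 0, 1.0, math.exp(-lam)
    while True:
        p *= random.random()
        if p <= limit:
            return n
        n += 1

def change_one_terminal(tree):
    """Toy stand-in for mutation operator (1): replace one terminal label."""
    t = list(tree)
    t[random.randrange(len(t))] = random.choice(["x1", "x2", "x3"])
    return t

def mutate_offspring(parent, operators, lam=2.0):
    """Steps (a)-(c): apply N ~ Poisson(lam) operators, chosen uniformly
    with replacement, in sequence to a copy of the parent."""
    child = list(parent)
    for _ in range(poisson(lam)):        # (a) N ~ Poisson(lam)
        op = random.choice(operators)    # (b) uniform, with replacement
        child = op(child)                # (c) apply in sequence
    return child
```

In the real algorithm the operator set would contain all four tree mutations, each manipulating an actual neural tree rather than a list of labels.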

Selection

Evolutionary-programming (EP) tournament selection was applied to select the parents for the next generation. Pairwise comparison is conducted over the union of µ parents and µ offspring. For each individual, q opponents are chosen uniformly at random from all the parents and offspring. For each comparison, if the individual's fitness is no smaller than the opponent's, it receives a win. The µ individuals with the most wins are then selected from the parents and offspring to form the next generation.

Parameter Optimization by PSO

Parameter optimization is achieved by the PSO algorithm as described in the Particle-Swarm-Optimization (PSO) Algorithm section. In this stage, the FNT architecture is fixed as the best tree developed by the end of the run of the structure search. The parameters (weights and flexible activation-function parameters) encoded in the best tree formulate a particle. The PSO algorithm works as follows:

(a) An initial population is randomly generated. The learning parameters c_1 and c_2 in PSO should be assigned in advance.
(b) The objective function value is calculated for each particle.
(c) Modification of search point: the current search point of each particle is changed using Equations 1 and 2.
(d) If the maximum number of generations is reached, or no better parameter vector is found for a significantly long time (~100 steps), then stop; otherwise go to step (b).

The Artificial Neural Network (ANN) Model

A neural network classifier trained using the PSO algorithm, with flexible bipolar sigmoid activation functions at the hidden layer, was constructed for the stock data. Before describing the details of the algorithm for training the ANN classifier, the issue of coding needs to be addressed. Coding concerns the way the weights and the flexible activation-function parameters of the ANN are represented by individuals or particles. A floating-point coding scheme is adopted here.

For neural network (NN) coding, suppose there are M nodes in the hidden layer, one node in the output layer, and n input variables. Then the number of total weights is n × M + M, the number of thresholds is M + 1, and the number of flexible activation-function parameters is M + 1; therefore the total number of free parameters in the ANN to be coded is n × M + M + 2(M + 1). These parameters are coded into an individual or particle in order. The simple training algorithm proposed for a neural network is as follows:
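The free-parameter count above (n × M + M weights, M + 1 thresholds, M + 1 activation-function parameters) is straightforward to verify:

```python
def ann_free_parameters(n, M):
    """Particle length for an n-input, M-hidden-node, single-output ANN
    under the chapter's floating-point coding scheme."""
    weights = n * M + M           # input-to-hidden plus hidden-to-output weights
    thresholds = M + 1            # one threshold per hidden node and the output node
    activation_params = M + 1     # flexible activation-function parameters
    return weights + thresholds + activation_params

# The Nasdaq-100 network used later in the chapter (3 inputs, 10 hidden neurons):
print(ann_free_parameters(3, 10))  # 3*10 + 10 + 2*(10 + 1) = 62
```

So each PSO particle for that network is a 62-dimensional floating-point vector.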

Step 1: Initialization. An initial population is generated randomly. The learning parameters c_1 and c_2 in the PSO should be assigned in advance.
Step 2: Evaluation. The objective function value is calculated for each particle.
Step 3: Modification of search point. The current search point of each particle is changed using Equations 1 and 2.
Step 4: If the maximum number of generations is reached, or no better parameter vector is found for a significantly long time (100 steps, say), then stop; otherwise go to Step 2.

The WNN-Prediction Model

In terms of wavelet transformation theory, wavelets in the following form:

ψ = { ψ_i(x) = |a_i|^{-1/2} φ((x − b_i)/a_i) : a_i, b_i ∈ R^n, i ∈ Z },
x = (x_1, x_2, ..., x_n), a_i = (a_{i1}, a_{i2}, ..., a_{in}), b_i = (b_{i1}, b_{i2}, ..., b_{in})    (7)

are a family of functions generated from one single function φ(x) by the operations of dilation and translation. φ(x), which is localized in both the time space and the frequency space, is called a mother wavelet, and the parameters a_i and b_i are named the scale and translation parameters, respectively. In the standard form of a wavelet neural network, the output is given by:

f(x) = Σ_{i=1}^{M} ω_i ψ_i(x) = Σ_{i=1}^{M} ω_i |a_i|^{-1/2} φ((x − b_i)/a_i)    (8)

where ψ_i is the wavelet activation function of the i-th unit of the hidden layer and ω_i is the weight connecting the i-th unit of the hidden layer to the output-layer unit. Note that for the n-dimensional input space, the multivariate wavelet-basis function can be calculated by the tensor product of n single wavelet-basis functions as follows:

φ(x) = Π_{i=1}^{n} φ(x_i)    (9)

Before describing details of the PSO algorithm for training WNNs, the issue of coding needs to be addressed. Coding concerns the way the weights, dilation, and translation
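A sketch of the WNN output of Equations 8 and 9 follows. The Mexican-hat mother wavelet is an assumed choice (the chapter leaves φ generic here), and |a_i| is taken as the product of the per-dimension dilations.

```python
import math

def mexican_hat(t):
    """Assumed mother wavelet phi(t) (Mexican hat)."""
    return (1 - t * t) * math.exp(-t * t / 2)

def wnn_output(x, weights, a, b):
    """WNN output f(x) per Equations 8 and 9.
    a[i][k], b[i][k]: dilation and translation of hidden unit i along input k."""
    y = 0.0
    for w_i, a_i, b_i in zip(weights, a, b):
        # |a_i|^(-1/2), with |a_i| taken as the product of per-dimension dilations
        norm = abs(math.prod(a_i)) ** -0.5
        # Equation 9: tensor product of single wavelet-basis functions
        psi = math.prod(mexican_hat((xk - bk) / ak)
                        for xk, ak, bk in zip(x, a_i, b_i))
        y += w_i * norm * psi  # Equation 8
    return y
```

With unit dilations and x at the translation center, each basis function evaluates to φ(0) = 1, so the output reduces to the sum of the weights.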

parameters of WNNs are represented by individuals or particles. A floating-point coding scheme is adopted here. For WNN coding, suppose there are M nodes in the hidden layer and n input variables; then the total number of parameters to be coded is (2n + 1)M. The coding of a WNN into an individual or particle is as follows:

a_{11} b_{11} ... a_{1n} b_{1n} ω_1, a_{21} b_{21} ... a_{2n} b_{2n} ω_2, ..., a_{M1} b_{M1} ... a_{Mn} b_{Mn} ω_M

The simple training algorithm proposed for a WNN is as follows:

Step 1: An initial population is randomly generated. The learning parameters, such as c_1 and c_2 in PSO, should be assigned in advance.
Step 2: Parameter optimization with the PSO algorithm.
Step 3: If the maximum number of generations is reached, or no better parameter vector is found for a significantly long time (~100 steps), then go to Step 4; otherwise go to Step 2.
Step 4: Parameter optimization with the gradient-descent algorithm.
Step 5: If a satisfactory solution is found, then stop; otherwise go to Step 4.

The Local Linear WNN Prediction Model

An intrinsic feature of basis-function networks is the localized activation of the hidden-layer units, so that the connection weights associated with the units can be viewed as locally accurate piecewise-constant models whose validity for any given input is indicated by the activation functions. Compared to the multilayer perceptron neural network, this local capacity provides some advantages, such as learning efficiency and structure transparency. However, the problem of basis-function networks requires some special attention. Due to the crudeness of the local approximation, a large number of basis-function units have to be employed to approximate a given system. A shortcoming of the wavelet neural network is that for higher-dimensional problems many hidden-layer units are needed. In order to take advantage of the local capacity of the wavelet-basis functions while not having too many hidden units, here we propose an alternative type of WNN. The architecture of the proposed local linear WNN (LLWNN) is shown in Figure 3. Its output in the output layer is given by:

y = Σ_{i=1}^{M} (ω_{i0} + ω_{i1} x_1 + ... + ω_{in} x_n) ψ_i(x)
  = Σ_{i=1}^{M} (ω_{i0} + ω_{i1} x_1 + ... + ω_{in} x_n) |a_i|^{-1/2} φ((x − b_i)/a_i)    (10)

Figure 3. Architecture of a local linear wavelet neural network

where x = (x_1, x_2, ..., x_n). Instead of the straightforward weight ω_i (a piecewise-constant model), a linear model:

v_i = ω_{i0} + ω_{i1} x_1 + ... + ω_{in} x_n    (11)

is introduced. The activities of the linear models v_i (i = 1, 2, ..., M) are determined by the associated locally active wavelet functions ψ_i(x) (i = 1, 2, ..., M); thus v_i is only locally significant. The motivations for introducing local linear models into a WNN are as follows: (1) local linear models have been studied in some neurofuzzy systems (Abraham, 2001) and offer good performance; and (2) local linear models should provide a more parsimonious interpolation in high-dimension spaces when modeling samples are sparse. The scale and translation parameters and the local-linear-model parameters are randomly initialized at the beginning and are optimized by the PSO algorithm.

Experiment Setup and Results

We considered 7 years of stock data for the Nasdaq-100 index and 4 years for the NIFTY index. Our target was to develop efficient forecast models that could predict the index value of the following trading day based on the opening, closing, and maximum values on any given day. The training and test patterns for both indices (scaled values) are illustrated in Figure 1. We used the same training- and test-data sets to evaluate the
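Equation 10 replaces each constant output weight of the WNN with the local linear model of Equation 11. A sketch, again using an assumed Mexican-hat φ since the chapter leaves the mother wavelet generic:

```python
import math

def phi(t):
    """Assumed mother wavelet (Mexican hat)."""
    return (1 - t * t) * math.exp(-t * t / 2)

def llwnn_output(x, omega, a, b):
    """LLWNN output y per Equations 10 and 11.
    omega[i] = [w_i0, w_i1, ..., w_in]: local linear model of hidden unit i."""
    y = 0.0
    for om_i, a_i, b_i in zip(omega, a, b):
        # Equation 11: locally valid linear model v_i instead of a constant weight
        v_i = om_i[0] + sum(w * xk for w, xk in zip(om_i[1:], x))
        norm = abs(math.prod(a_i)) ** -0.5
        psi = math.prod(phi((xk - bk) / ak)
                        for xk, ak, bk in zip(x, a_i, b_i))
        y += v_i * norm * psi  # Equation 10
    return y
```

For M hidden units and n inputs this costs (n + 1)M output parameters instead of M, but each unit covers its local region with a linear rather than constant model, which is why fewer hidden units suffice.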

different connectionist models. More details are reported in the following sections. Experiments were carried out on a Pentium IV, 2.8 GHz machine with 512 MB RAM, and the programs were implemented in C/C++. Test data was presented to the trained connectionist models, and the output from each network was compared with the actual index values in the time series.

The assessment of the prediction performance of the different connectionist paradigms was done by quantifying the prediction obtained on an independent data set. The root-mean-squared error (RMSE), maximum absolute percentage error (MAP), mean absolute percentage error (MAPE), and correlation coefficient (CC) were used to study the performance of the trained forecasting models on the test data. MAP is defined as follows:

MAP = max_i ( |P_{actual,i} − P_{predicted,i}| / P_{predicted,i} × 100 )    (12)

where P_{actual,i} is the actual index value on day i and P_{predicted,i} is the forecast value of the index on that day. Similarly, MAPE is given as:

MAPE = (1/N) Σ_{i=1}^{N} |(P_{actual,i} − P_{predicted,i}) / P_{actual,i}| × 100    (13)

where N represents the total number of days.

FNT Algorithm

We used the instruction set S = {+_2, +_3, ..., +_10, x_0, x_1, x_2} for modeling the Nasdaq-100 index and the instruction set S = {+_2, +_3, ..., +_10, x_0, x_1, x_2, x_3, x_4} for modeling the NIFTY index. We used the flexible activation function of Equation 4 for the hidden neurons. Training was terminated after 80 epochs on each dataset.

NN-PSO Training

A feed-forward neural network with three input nodes and a single hidden layer consisting of 10 neurons was used for modeling the Nasdaq-100 index. A feed-forward neural network with five input nodes and a single hidden layer consisting of 10 neurons was used for modeling the NIFTY index. Training was terminated after 3000 epochs on each dataset.
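The RMSE, MAP (Equation 12), and MAPE (Equation 13) used to score the models can be computed as follows; the two-day series in the comment is an arbitrary illustration.

```python
import math

def rmse(actual, predicted):
    """Root-mean-squared error over paired series."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def map_error(actual, predicted):
    """Maximum absolute percentage error, Equation 12
    (note: the predicted value is in the denominator, as in the chapter)."""
    return max(abs(a - p) / p * 100 for a, p in zip(actual, predicted))

def mape(actual, predicted):
    """Mean absolute percentage error, Equation 13."""
    n = len(actual)
    return sum(abs((a - p) / a) for a, p in zip(actual, predicted)) / n * 100

# e.g. actual [100, 200] vs predicted [90, 210]:
# rmse = 10.0, map_error = 100/9 (about 11.1 percent), mape = 7.5
```

MAP flags the single worst trading day, while MAPE averages the error over all days, which is why both are reported in Table 2.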

WNN-PSO Training

A WNN with three input nodes and a single hidden layer consisting of 10 neurons was used for modeling the Nasdaq-100 index. A WNN with five input nodes and a single hidden layer consisting of 10 neurons was used for modeling the NIFTY index. Training was terminated after 4000 epochs on each dataset.

LLWNN-PSO Training

An LLWNN with three input nodes and a hidden layer consisting of five neurons was used for modeling the Nasdaq-100 index. An LLWNN with five input nodes and a single hidden layer consisting of five neurons was used for modeling the NIFTY index. Training was terminated after 4500 epochs on each dataset.

Figure 4. Test results showing the performance of the different methods for modeling the Nasdaq-100 index

Figure 5. Test results showing the performance of the different methods for modeling the NIFTY index

Table 1. Empirical comparison of RMSE results for the four learning methods

                     FNT        NN-PSO     WNN-PSO    LLWNN-PSO
Training results
Nasdaq-100           0.02598    0.02573    0.02586    0.02551
NIFTY                0.01847    0.01729    0.01829    0.01691
Testing results
Nasdaq-100           0.01882    0.01864    0.01789    0.01968
NIFTY                0.01428    0.01326    0.01426    0.01564

Table 2. Statistical analysis of the four learning methods (test data)

                     FNT        NN-PSO     WNN-PSO    LLWNN-PSO
Nasdaq-100
CC                   0.997579   0.997704   0.997721   0.997623
MAP                  98.107     141.363    152.754    230.514
MAPE                 6.205      6.528      6.570      6.952
NIFTY
CC                   0.996298   0.997079   0.996399   0.996291
MAP                  39.987     27.257     39.671     30.814
MAPE                 3.328      3.092      3.408      4.146

Performance and Results Achieved

Table 1 summarizes the training and test results achieved for the two stock indices using the four different approaches. The statistical analysis of the four learning methods is depicted in Table 2. Figures 4 and 5 depict the test results for the 1-day-ahead prediction of the Nasdaq-100 index and the NIFTY index, respectively.

Conclusion

In this chapter, we have demonstrated how the chaotic behavior of stock indices could be well represented by different hybrid-learning paradigms. Empirical results on the two data sets using four different learning models clearly reveal the efficiency of the proposed techniques. In terms of RMSE values, for the Nasdaq-100 index the WNN performed marginally better than the other models, and for the NIFTY index the NN approach gave the lowest generalization RMSE values. For both data sets, the LLWNN had the lowest training error. For the Nasdaq-100 index (test data), the WNN had the highest CC, but the lowest values of MAPE and MAP were achieved by using the FNT model. The highest CC together with the best MAPE/MAP values for the NIFTY index were achieved using the NN trained with the PSO model. A low MAP value is a crucial indicator for evaluating the stability of a market under unforeseen fluctuations. In the present example, the predictability ensures that the decrease in trade is only a temporary cyclic variation that is perfectly under control.

Our research aimed to predict the share price for the following trading day based on the opening, closing, and maximum values on any given day. Our experimental results indicate that the most prominent parameters that affect share prices are their immediate opening and closing values. The fluctuations in the share market are chaotic in the sense that they heavily depend on the values of their immediate forerunning fluctuations. Long-term trends exist, but they are slow variations, and this information is useful for long-term investment strategies. Our study focused on short-term floor trades, in which the risk is higher. However, the results of our study show that even with seemingly random fluctuations, there is an underlying deterministic feature that is directly enciphered in the opening, closing, and maximum values of the index on any day, making predictability possible.

Empirical results also show that there are various advantages and disadvantages for the different techniques considered. There is little reason to expect that one can find a uniformly best learning algorithm for optimizing the performance for different stock indices. This is in accordance with the no-free-lunch theorem, which explains that for any algorithm, any elevated performance over one class of problems is exactly paid for in performance over another class (Macready & Wolpert, 1997). Our future research will be oriented towards determining the optimal way to combine the different learning paradigms using an ensemble approach (Maqsood, Khan, & Abraham, 2004) so as to complement the advantages and disadvantages of the different methods considered.

References

Abraham, A., Nath, B., & Mahanti, P. K. (2001). Hybrid intelligent systems for stock market analysis. In V. N. Alexandrov et al. (Eds.), Computational Science (pp. 337-345). Germany: Springer-Verlag.

Abraham, A. (2001). Neuro-fuzzy systems: State-of-the-art modeling techniques. In J. Mira & A. Prieto (Eds.), Proceedings of the 7th International Work Conference on Artificial and Natural Neural Networks: Connectionist Models of Neurons, Learning Processes, and Artificial Intelligence, Granada, Spain (pp. 269-276). Germany: Springer-Verlag.

Abraham, A., Philip, N. S., & Saratchandran, P. (2003). Modeling chaotic behavior of stock indices using intelligent paradigms. International Journal of Neural, Parallel & Scientific Computations, 11(1-2), 143-160.

Berkeley, A. R. (1997). Nasdaq's technology floor: Its president takes stock. IEEE Spectrum, 34(2), 66-67.

Bischi, G. I., & Valori, V. (2000). Nonlinear effects in a discrete-time dynamic model of a stock market. Chaos, Solitons & Fractals, 11(13), 2103-2121.

Chan, W. S., & Liu, W. N. (2002). Diagnosing shocks in stock markets of Southeast Asia, Australia, and New Zealand. Mathematics and Computers in Simulation, 59(1-3), 223-232.

Chen, Y., Yang, B., & Dong, J. (2004). Nonlinear system modelling via optimal design of neural trees. International Journal of Neural Systems, 14(2), 125-137.

Chen, Y., Yang, B., & Dong, J. (in press). Time-series prediction using a local linear wavelet neural network. International Journal of Neural Systems.

Chen, Y., Yang, B., Dong, J., & Abraham, A. (in press). Time-series forecasting using flexible neural tree model. Information Sciences.

Cios, K. J. (2001). Data mining in finance: Advances in relational and hybrid methods. Neurocomputing, 36(1-4), 245-246.

Tay, F. E. H., & Cao, L. J. (2002). Modified support vector machines in financial time series forecasting. Neurocomputing, 48(1-4), 847-861.

Kennedy, J., & Eberhart, R. C. (1995). Particle swarm optimization. In Proceedings of the IEEE International Conference on Neural Networks (pp. 1942-1948).

Kim, K. J., & Han, I. (2000). Genetic algorithms approach to feature discretization in artificial neural networks for the prediction of stock price index. Expert Systems with Applications, 19(2), 125-132.

Koulouriotis, D. E., Diakoulakis, I. E., & Emiris, D. M. (2001). A fuzzy cognitive map-based stock market model: Synthesis, analysis and experimental results. In Proceedings of the 10th IEEE International Conference on Fuzzy Systems, Vol. 1 (pp. 465-468).

LeBaron, B. (2001). Empirical regularities from interacting long- and short-memory investors in an agent-based stock market. IEEE Transactions on Evolutionary Computation, 5(5), 442-455.

Leigh, W., Modani, N., Purvis, R., & Roberts, T. (2002). Stock market trading rule discovery using technical charting heuristics. Expert Systems with Applications, 23(2), 155-159.

Leigh, W., Purvis, R., & Ragusa, J. M. (2002). Forecasting the NYSE composite index with technical analysis, pattern recognizer, neural network, and genetic algorithm: A case study in romantic decision support. Decision Support Systems, 32(4), 361-377.

Maqsood, I., Khan, M. R., & Abraham, A. (2004). Neural network ensemble method for weather forecasting. Neural Computing & Applications, 13(2), 112-122.

Macready, W. G., & Wolpert, D. H. (1997). The no free lunch theorems. IEEE Transactions on Evolutionary Computation, 1(1), 67-82.

Nasdaq Stock Market. (n.d.). Retrieved from http://www.nasdaq.com

National Stock Exchange of India Limited. (n.d.). Retrieved from http://www.nse-india.com

Oh, K. J., & Kim, K. J. (2002). Analyzing stock market tick data using piecewise nonlinear model. Expert Systems with Applications, 22(3), 249-255.

Palma-dos-Reis, A., & Zahedi, F. (1999). Designing personalized intelligent financial decision support systems. Decision Support Systems, 26(1), 31-47.

Quah, T. S., & Srinivasan, B. (1999). Improving returns on stock investment through neural network selection. Expert Systems with Applications, 17(4), 295-301.

Wang, Y. F. (2002). Mining stock price using fuzzy rough set system. Expert Systems with Applications, 24(1), 13-23.

Wuthrich, B., Cho, V., Leung, S., Permunetilleke, D., Sankaran, K., & Zhang, J. (1998). Daily stock market forecast from textual web data. In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics, 3, 2720-2725.