Complex Stock Trading Strategy Based on Particle Swarm Optimization

Fei Wang, Philip L.H. Yu and David W. Cheung

Fei Wang is with the Department of Computer Science, The University of Hong Kong, Pokfulam, Hong Kong (email: fwang@cs.hku.hk). Philip L.H. Yu is with the Department of Statistics and Actuarial Science, The University of Hong Kong, Pokfulam, Hong Kong (email: plhyu@hku.hk). David W. Cheung is with the Department of Computer Science, The University of Hong Kong, Pokfulam, Hong Kong (email: dcheung@cs.hku.hk).

Abstract—Trading rules have been utilized in the stock market to make profit for more than a century. However, using a single trading rule may not be sufficient to predict the stock price trend accurately. Although some complex trading strategies combining various classes of trading rules have been proposed in the literature, they often pick only one rule for each class, which may lose valuable information from the other rules in the same class. In this paper, a complex stock trading strategy, namely the weight reward strategy (WRS), is proposed. WRS combines the two most popular classes of trading rules: moving average (MA) and trading range break-out (TRB). For both MA and TRB, WRS includes different combinations of the rule parameters to get a universe of 140 component trading rules in all. Each component rule is assigned a start weight, and a reward/penalty mechanism based on profit is proposed to update these rule weights over time. To determine the best parameter values of WRS, we employ an improved time variant Particle Swarm Optimization (PSO) algorithm with the objective of maximizing the annual net profit generated by WRS. The experiments show that our proposed WRS optimized by PSO outperforms the best moving average and trading range break-out rules.

I. INTRODUCTION

Trading rules are widely used in financial markets as a technical analysis tool for security trading. Typically, they predict the future price trend by analyzing historical price movements and initiate buy/sell signals accordingly. Trading rules have been developed for more than a century, and many empirical studies have provided supporting evidence for the significant profitability of different trading rules [1][2][3][4][5]. Nowadays, trading rules are commonly used by practitioners to make trading decisions in many financial markets.

Pring [6], however, argued that no single trading rule can ever be expected to forecast all price trends, and that it is important to combine these simple rules together to get a complex trading strategy. Hsu and Kuan [7] first examined the profitability of three classes of complex trading strategies: learning strategies (LS), vote strategies (VS) and fractional position strategies (FPS). Their results showed that the three classes of complex trading strategies did not provide significant improvement as compared with simple trading rules. However, these complex trading strategies failed because they are relatively primitive. For example, LS picked the best simple trading rule for each trading decision instead of combining all rules in an appropriate manner. As for VS and FPS, both of them regarded all simple trading rules as equally important without considering their relative performance.

To address the above problems, Subramanian et al. [8] proposed a weighted combination of simple trading rules. In their study, each component rule, associated with a given weight, initiates its own signal, and the signal to be implemented is eventually determined by the sum of these weighted signals. They created this combination by applying a Genetic Algorithm (GA) to optimize the best set of weight vectors.
Thereafter, Briza et al. [9] proposed a similar stock trading strategy whose weight vector was optimized by Particle Swarm Optimization (PSO). Both combined strategies were found to outperform the best component trading rules in terms of profit on the test set. However, they only considered one commonly used rule for each class of trading rules in their studies. This does not guarantee that the trading rules under consideration always perform better than those not considered. Therefore it is important to include as many combinations of parameters for each class of rules as possible, to get a comprehensive coverage of component rules. Note also that the weights of the component rules were held fixed during the whole trading period in their approaches. Given such a complex and dynamic market, a trading strategy with a static choice of component weights can hardly perform well consistently over time. In this regard, an objective of this paper is to consider a dynamic updating scheme for the component weights.

In this paper, we present a complex stock trading strategy called the weight reward strategy (WRS), which combines two classes of the simplest and most popular trading rules: moving average and trading range break-out [2][5]. For moving average, we get 119 rules by considering different values of its two parameters. For trading range break-out, we get 21 rules by taking 21 different values of its single parameter. Therefore there are 140 simple rules in all. All parameter values are selected to represent each class's performance over a wide range [10]. Each component rule is assigned a start weight, and a reward/penalty mechanism based on component rule performance is proposed to update their weights over time. The trading signal of WRS is determined by the weighted sum of the component rule signals and two additional signal threshold parameters. Together with the component rule start weights and five other parameters of WRS to be discussed later, there are altogether 145 parameters for WRS. We use an improved time variant Particle Swarm Optimization (PSO) [11] to optimize the best set of the 145 parameters.
In the literature, GA and PSO are the most popular optimization algorithms used to optimize trading rules [4][8][9][12]. We choose PSO rather than GA because PSO is not only easy to implement, but also has higher computational efficiency and can achieve the same performance as GA [13][14].

The rest of this paper is organized as follows: Section II briefly introduces the PSO algorithm. Section III describes the proposed WRS in detail, and the optimization of WRS is presented in Section IV. Section V discusses the experimental results, and conclusions are given in Section VI.

II. PARTICLE SWARM OPTIMIZATION

Particle Swarm Optimization (PSO) is a stochastic evolutionary algorithm based on swarm intelligence, first introduced by Kennedy and Eberhart in 1995 [15]. Since its inception, PSO has shown great success in solving function optimization problems and has been widely applied in a variety of engineering applications [9][16].

PSO is motivated by the behavior of bird flocks in finding food. Suppose a flock of birds wants to find food, but the birds do not know where the food is before they find it; nevertheless, the flock can always find the food at last. PSO uses a swarm of particles to simulate these birds. Each particle is a possible solution of the optimization problem and has a random initial position X and velocity V. The objective function to be optimized is used to evaluate the fitness of each particle's position; higher fitness means a better position. For each particle, PSO uses pbest to record the best position this particle has reached. For the whole swarm, gbest records the global best position achieved by all particles. At time t, PSO updates each particle's velocity using the following equation:

V_{t+1} = w V_t + c_1 r_1 (pbest − X_t) + c_2 r_2 (gbest − X_t)   (1)

where w is the inertia weight, c_1 and c_2 are the acceleration coefficients, and r_1 and r_2 are two random numbers in the range between 0 and 1 [17]. The first term of equation (1) represents the inertia of a particle wandering in the search space. The second term represents the self-cognition of a particle's past experience, i.e., the particle tends to move toward its past best position. Similarly, the third term indicates that particles have social cognition of the whole swarm and are attracted by the global best position.

For PSO, the inertia weight w controls the influence of the previous velocity on a particle. A large w allows particles to explore more of the search space for the optimal position, while a small w helps the swarm to search a local area for the exact solution. The coefficients c_1 and c_2 control the influence of pbest and gbest on particle movement. A higher value of c_1 means each particle is more likely to be attracted to a different position, so the whole swarm is more widespread in the search space; its effect is similar to that of w. In contrast, a high value of c_2 leads all particles to converge to the current global best position. After updating the velocity, each particle moves to a new position according to:

X_{t+1} = X_t + V_{t+1}   (2)

This particle movement repeats iteratively until all particles converge to the optimal position at last, like the birds finding the food at the end of the search.
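For concreteness, the following is a minimal sketch of the PSO loop defined by equations (1) and (2), written as a fitness-maximizing routine. The function and variable names (`pso`, `fitness`, `lo`, `hi`) are our own illustrative choices, not the authors' implementation:

```python
import numpy as np

def pso(fitness, n_particles, n_dims, lo, hi, w, c1, c2, max_t):
    """Basic PSO per equations (1) and (2); maximizes `fitness`."""
    # Random initial positions inside the search box, zero initial velocity.
    X = np.random.uniform(lo, hi, (n_particles, n_dims))
    V = np.zeros((n_particles, n_dims))
    pbest = X.copy()
    pbest_fit = np.array([fitness(x) for x in X])
    gbest = pbest[pbest_fit.argmax()].copy()
    for t in range(max_t):
        r1 = np.random.rand(n_particles, n_dims)
        r2 = np.random.rand(n_particles, n_dims)
        V = w * V + c1 * r1 * (pbest - X) + c2 * r2 * (gbest - X)  # eq. (1)
        X = X + V                                                  # eq. (2)
        fit = np.array([fitness(x) for x in X])
        improved = fit > pbest_fit            # higher fitness = better position
        pbest[improved] = X[improved]
        pbest_fit[improved] = fit[improved]
        gbest = pbest[pbest_fit.argmax()].copy()
    return gbest
```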
III. WEIGHT REWARD STRATEGY

In this section, we describe how the weight reward strategy (WRS) utilizes different simple trading rules to generate trading signals, and how it rewards and penalizes these rules according to their trading performance.

A. Moving average and trading range break-out

WRS is based on moving average (MA) and trading range break-out (TRB) because they are two of the simplest and most popular simple trading rules [2]. In MA, there are two averages of the stock price over two moving windows of nl days and ns days respectively, where nl > ns, so the former is the long-period average and the latter the short-period average. Both averages are recalculated each trading day. The signal generation of MA is simple: on a trading day, MA initiates a buy (sell) signal if the short-period moving average is above (below) the long-period moving average.

The second trading rule is TRB, which is even simpler than MA. On trading day d, TRB initiates a buy (sell) signal if the current day's stock price is higher (lower) than the highest (lowest) stock price during the past n days. The past n-day prices form a trading range, and the buy or sell signal is invoked when one day's stock price breaks out of the range.

For both MA and TRB, we can get different rules by taking different parameter values. It is important to add as many rules as possible to the WRS rule pool because we cannot guarantee that a selected parameter value is always better than the others. Therefore, we combine 119 MA rules and 21 TRB rules together to get a comprehensive coverage of MA and TRB.
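A compact sketch of the two component signal generators follows, using the conventions above. The function names are ours, and `prices` is assumed to be a chronologically ordered array of daily closing prices with index `d` for the current day:

```python
import numpy as np

def ma_signal(prices, d, ns, nl):
    """MA rule: +1 (buy) if the ns-day average is above the nl-day
    average on day d, -1 (sell) if below, 0 otherwise. Requires d >= nl - 1."""
    short_avg = prices[d - ns + 1 : d + 1].mean()
    long_avg = prices[d - nl + 1 : d + 1].mean()
    return int(np.sign(short_avg - long_avg))

def trb_signal(prices, d, n):
    """TRB rule: +1 if day d's price breaks above the highest price of the
    past n days, -1 if it breaks below the lowest, 0 otherwise. Requires d >= n."""
    window = prices[d - n : d]        # the past n days form the trading range
    if prices[d] > window.max():
        return 1
    if prices[d] < window.min():
        return -1
    return 0
```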
B. Signal generation

In WRS, each component rule r_i is assigned a start weight wt_i, which measures the influence of r_i on the signal generated by WRS. On a trading day d, each component rule initiates a signal s_{i,d}, which takes the value 1, 0 or −1 if the signal is buy, do nothing or sell respectively. The signal of WRS on trading day d is given by:

s_d = \sum_i wt_i s_{i,d}   (3)

where the sum of all the weights (\sum_i wt_i) should be 1 so that s_d lies between −1 and 1. Note that s_d summarizes all the component rules' views on the stock trend. If s_d is close to 1, most of the influential component rules suggest a buy signal; on the contrary, s_d nearing −1 means the more influential rules suggest a sell signal. So we propose that WRS initiates a buy (sell) signal if s_d is greater (smaller) than a positive buy (negative sell) threshold, bth (sth); otherwise WRS does not initiate any signal and the investor does nothing on that day. Higher thresholds make the strategy stricter about buying or selling, and lower thresholds make it more tolerant. The signal generation of WRS is shown in Fig. 1.

[Fig. 1. Signal generation of WRS: the component rules ma_1(wt_1), ..., ma_119(wt_119), trb_120(wt_120), ..., trb_140(wt_140) emit signals s_{1,d}, ..., s_{140,d}, which are combined into s_d = \sum_i wt_i s_{i,d}; WRS buys if s_d > bth and sells if s_d < sth.]

C. Reward and penalty of component rules

In WRS, each component rule r_i is associated with a start weight wt_i, which represents how much we believe in this rule. However, the rules' performance may change during trading, especially over a long time period. It is reasonable to reward a good rule by adding more weight to it and to penalize a bad rule by deducting some weight from it. Good or bad is measured by the profitability of a rule in recent times, and the updating of weights should be conducted at regular intervals. As a result, two time spans, the memory span m and the review span r, which were introduced in the learning strategies (LS), are used here. The memory span is the historical period used for evaluating rule performance. The review span is the time interval at which the weights of the component rules are updated. We set m ≥ r as suggested by [7].

Suppose on trading day d we evaluate all rules' performance and update their weights accordingly. Let profit_i denote the profit of rule r_i from day d − m to day d − 1. For the nonprofitable rules, we deduct their weights by a constant:

wt_i = wt_i − ε / rulenum,   if profit_i < 0   (4)

where rulenum is the total number of component rules and ε is a parameter called the reward factor, which controls the degree of penalty and reward. Note that wt_i should not be negative, so wt_i is set to zero if it is smaller than ε / rulenum. All the weights deducted from the nonprofitable rules are summed to form a temporary variable, reward. Then we increase the weights of the profitable rules using the following equation:

wt_i = wt_i + reward / profitnum,   if profit_i > 0   (5)

where profitnum is the number of profitable rules found in the memory span. Note that the sum of the weights remains unchanged after the penalty and reward.

However, if most of the rules are nonprofitable and only a few are profitable, the above reward/penalty approach may add too much weight to the few profitable rules. Imagine that there are 100 rules of which only one is profitable in the memory span; the weight increment of the only profitable rule is then 99 times the weight decrement of any other rule. The reward may be too much, especially when the rule is profitable over just a short period of time. To avoid a huge reward, we replace equation (4) with:

wt_i = wt_i − (ε / rulenum) · (profitnum / rulenum),   if profit_i < 0   (6)

where the factor profitnum / rulenum guarantees that the penalty weight of any nonprofitable rule and the reward weight of any profitable rule are both capped at ε / rulenum. Note that when all rules are profitable, or all of them are nonprofitable, the reward/penalty mechanism is not triggered, as in these cases there is no need to reward or penalize any rule.
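A sketch of this reward/penalty step, per equations (5) and (6), is given below. It assumes `weights` and `profits` are aligned NumPy arrays and `eps` is the reward factor ε; this is our illustrative reading, not the authors' code:

```python
import numpy as np

def update_weights(weights, profits, eps):
    """Reward/penalty step per equations (5) and (6).

    weights: current rule weights (summing to 1);
    profits: memory-span profit of each rule; eps: reward factor.
    """
    rulenum = len(weights)
    losers = profits < 0
    winners = profits > 0
    profitnum = winners.sum()
    if profitnum == 0 or not losers.any():
        return weights                    # all profitable or all nonprofitable: skip
    penalty = (eps / rulenum) * (profitnum / rulenum)   # eq. (6)
    deducted = np.minimum(weights[losers], penalty)     # keep weights non-negative
    weights[losers] -= deducted
    weights[winners] += deducted.sum() / profitnum      # eq. (5): redistribute
    return weights
```

Because the winners receive exactly the total amount deducted from the losers, the weights still sum to 1 after each update, as the text requires.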
IV. OPTIMIZATION OF WRS

For WRS, there are 140 start weights (wt_1 to wt_140), two time spans (m, r), two thresholds (bth, sth), and a reward factor (ε) to be determined. Identifying such a high dimensional parameter vector is a tough optimization problem. To tackle it, an improved time variant Particle Swarm Optimization (PSO) algorithm is used in this paper.

A. Time variant PSO

In PSO, the tradeoff between the global exploration and local exploitation of the particles is the main factor influencing PSO performance [17]. Generally, exploration should be enhanced at the early stage of searching so that more of the search space can be explored by the particles, while at the later stage the algorithm should focus on exploitation to find the exact and accurate optimum. To achieve these goals, Shi and Eberhart [18] suggested reducing the PSO inertia weight w linearly over the iterations so as to help the particles find the optimal position more efficiently. In later work, Ratnaweera et al. [11] proposed a time-varying acceleration coefficient PSO which linearly reduces the first acceleration coefficient c_1 and increases the second acceleration coefficient c_2 over the iterations. Based on these studies, in this paper we linearly update w, c_1 and c_2 according to the following iterative equations [11][18]:

w_t = (w_F − w_I) · t / max_t + w_I,   (7)

c_{1t} = (c_{1F} − c_{1I}) · t / max_t + c_{1I},   (8)

c_{2t} = (c_{2F} − c_{2I}) · t / max_t + c_{2I},   (9)

where w_I, w_t and w_F denote the initial, current and final values of w respectively (and similarly for c_1 and c_2), t is the current iteration number, and max_t is the maximum number of iterations.
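The linear schedules of equations (7)-(9) are straightforward to implement. The sketch below uses the initial/final values from Table I as defaults and is an illustrative rendering rather than the authors' code:

```python
def tv_coeffs(t, max_t, w_i=0.9, w_f=0.4,
              c1_i=2.5, c1_f=0.5, c2_i=0.5, c2_f=2.5):
    """Time variant PSO coefficients per equations (7)-(9):
    each value moves linearly from its initial to its final setting."""
    frac = t / max_t
    w = (w_f - w_i) * frac + w_i       # eq. (7): 0.9 -> 0.4, exploration to exploitation
    c1 = (c1_f - c1_i) * frac + c1_i   # eq. (8): 2.5 -> 0.5, weaker self-cognition
    c2 = (c2_f - c2_i) * frac + c2_i   # eq. (9): 0.5 -> 2.5, stronger social pull
    return w, c1, c2
```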
B. Objective function

The ultimate goal of any stock trading strategy is to make a profit from the stock market, so the parameter optimization of WRS is guided by this goal. We use the annual net profit generated by WRS as the objective function of the optimization. At first, each stock in the market is assigned the same amount of initial equity. During trading, all the equity available for a stock can only be used to invest in that stock. Because we do not allow short selling, the equity for each stock is always positive. On the last day of trading, we sell all stocks in hand and sum their equities together to obtain the final equity. The annual net profit (ANP) of WRS is then calculated as follows:

ANP = (\sum_k e_{kf} − \sum_k e_{ki}) / (y \sum_k e_{ki})   (10)

where e_{ki} and e_{kf} are the initial and final equity of stock k respectively, and y is the length of the trading period in years. In (10), we have already included transaction costs, so that ANP represents the average annual profit net of transaction costs. For each buy or sell, 0.1% of the total turnover is deducted from the equity as the transaction cost in our study.

C. Start weight optimization

Recall that the sum of the component rules' start weights is restricted to be 1, i.e., \sum_i wt_i = 1. It is difficult to satisfy this constraint if the weights are optimized directly by PSO because of its stochastic nature. In this regard, a new parameter vector α is introduced here, and a one-to-one transformation between wt and α is used in this study:

wt_i = e^{α_i} / \sum_j e^{α_j}   (11)

As (11) guarantees that the sum of the wt_i is 1 regardless of the values of α, the new parameter vector α is optimized to obtain the best set of start weights via (11).
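The transformation (11) is a softmax, which one line of code realizes; the max-subtraction below is a standard numerical-stability detail of ours, not part of the paper:

```python
import numpy as np

def start_weights(alpha):
    """Map the unconstrained vector alpha to weights that sum to 1 (eq. 11)."""
    e = np.exp(alpha - alpha.max())   # shifting by the max avoids overflow
    return e / e.sum()
```

With 140 rules and each α_i restricted to [−1, 1] as in Table I below, each weight then lies roughly in [0.001, 0.05], matching the range quoted in the experiment setup.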
V. EXPERIMENTS

A. Data

The constituent stocks of the NASDAQ-100, which are 100 of the largest domestic and international nonfinancial stocks on the Nasdaq Stock Market, are considered in our study. The daily stock closing price data from 1994 to 2010 are collected from Reuters 3000Xtra. Because not all stocks were issued before 1994, only the 52 stocks having data through the whole period are considered in our experiments. The data from 1995 to 2002 are used for optimizing, in other words training, the WRS. The data from 2003 to 2010 are used for testing the profitability of the WRS and the simple trading rules. Note that some component rules, such as MA with nl = 250, need data over the past 250 days to calculate the current day's long-period moving average. Therefore, the data in 1994 are reserved for data preparation in training.

B. Experiment setup

The swarm size is set to 250 and the maximum number of iterations is set to 500. There is also a stopping criterion: if gbest remains unchanged for at least 50 iterations, the optimization stops. Table I gives the parameter settings of PSO and the search space boundaries of the WRS optimization.

TABLE I
PARAMETER SETTINGS OF PSO AND THE BOUNDARIES OF WRS PARAMETERS

PSO     Value     WRS                 Boundary
w_I     0.9       α_i (i = 1..140)    [−1, 1]
w_F     0.4       m                   [150, 300]
c_1I    2.5       r                   [20, 150]
c_1F    0.5       ε                   [0, 1]
c_2I    0.5       bth                 [0, 0.9]
c_2F    2.5       sth                 [−0.9, 0]

In Table I, the parameter settings of PSO are based on the suggestions of [11][18]. In order to avoid data snooping bias toward any component rule in the training period, the range of each α_i is set to [−1, 1] so that the range of wt_i is approximately [0.001, 0.05]. There are about 21 trading days in a month and about 252 trading days in a year, so the minimum review span r is set to be slightly less than one month, and the maximum memory span m is set to be a little longer than one year, in terms of trading days. Because m ≥ r, the minimum value of m and the maximum value of r are both set to 150 trading days. The reward and penalty for the component rules should not be too big each time, so the maximum value of the reward factor ε is set to 1; the minimum value of 0 means that there is no reward and penalty at all. For the buy and sell thresholds, [−0.9, 0.9] is wide enough to cover the threshold range, so −0.9 and 0.9 are set as the lower bound of sth and the upper bound of bth respectively.

C. Experimental results

After training, WRS is compared with the best MA (ns = 125, nl = 150) and the best TRB (n = 125) in terms of the annual net profit (ANP) in the testing period. In addition to the annual net profit, the average return per trade (Avg.return) is also compared. Suppose an investor buys N shares of a stock at price p_0 and sells them at price p_1, and the transaction cost per buy or sell is c. The investor pays N p_0 (1 + c) to buy the N shares and collects N p_1 (1 − c) by selling them, so the return of this trade is given by:

return = N p_1 (1 − c) / (N p_0 (1 + c)) − 1 = p_1 (1 − c) / (p_0 (1 + c)) − 1   (12)

Therefore the return per trade is independent of the size of the investment.
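As a quick check of equation (12), consider a hypothetical round trip bought at p_0 = 100 and sold at p_1 = 110 with c = 0.1%, the cost used in this study; the net return is about 9.78% rather than the gross 10%:

```python
def trade_return(p0, p1, c=0.001):
    """Return per trade, eq. (12); independent of the number of shares."""
    return p1 * (1 - c) / (p0 * (1 + c)) - 1

print(trade_return(100.0, 110.0))   # ~0.0978: costs shave about 0.22% off the gross 10%
```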
[Fig. 2. Equity curves (equity in $ millions vs. the 2014 trading days from 2003 to 2010) of WRS, MA(125-150) and TRB(125) in the testing period. The initial equity for each stock is $0.01 million, so the initial equity for the market is $0.52 million.]

The results are shown in Table II. The number of trades (No.trades) and the average holding days per trade (Avg.hold days) for each trading rule in the testing period are also given for statistical purposes. From Table II, we find that both the annual net profit and the average return per trade of WRS are much higher than those of the best MA (ns = 125, nl = 150) and the best TRB (n = 125) in the testing period. This means WRS can take advantage of its component rules and outperform all of them. Table III gives more detail about the profits on the 52 stocks. It shows that WRS generates significant profit from the profitable stocks and recovers from the few nonprofitable stocks. To get a more comprehensive understanding of the performance of these three trading strategies, their equity curves are given in Fig. 2. It can be seen that WRS keeps the highest cumulative equity throughout the eight years of the testing period.

TABLE II
PERFORMANCE OF WRS, THE BEST MA(125-150) AND THE BEST TRB(125) IN THE TESTING PERIOD

                WRS       MA(125-150)   TRB(125)
ANP             53.52%    37.86%        28.88%
No.trades       264       532           277
Avg.hold days   320       120           228
Avg.return      48.38%    11.30%        22.22%

TABLE III
STOCK PROFIT SUMMARY OF WRS, THE BEST MA(125-150) AND THE BEST TRB(125) IN THE TESTING PERIOD

Strategy       Summary   Profitable stocks¹   Nonprofitable stocks¹   Total
WRS            Number    35                   17                      52
               ANP       80.49%               -2.00%                  53.52%
MA(125-150)    Number    37                   15                      52
               ANP       54.61%               -3.46%                  37.86%
TRB(125)       Number    41                   11                      52
               ANP       37.37%               -2.77%                  28.88%

¹ Profitable stocks are those whose final equity is more than their initial equity, and vice versa.

In the training period, the MA rule with ns = 200 and nl = 250 generates the highest annual net profit among all of the 119 MA rules, and the TRB rule with n = 200 generates the highest annual net profit among all of the 21 TRB rules. To demonstrate that the performance of simple trading rules may fluctuate over time, we also test the performance of these two rules in the testing period. The results are shown in Table IV and Table V. MA(200-250) makes the highest profit among all MA rules in the training period, but its performance drops a lot in the testing period, even falling below the average performance of the MA rules. Although TRB(200) performs well in both the training and testing periods, it is no longer the best TRB in the testing period. These results support our complex trading strategy with its broad combination of simple trading rules.

TABLE IV
PERFORMANCE OF THE MA RULES IN THE TESTING PERIOD

                MA(200-250)   Worst MA(1-5)   Best MA(125-150)   MA Average
ANP             19.03%        2.78%           37.86%             23.05%
No.trades       305           14331           532                1933
Avg.hold days   210           4               120                79

TABLE V
PERFORMANCE OF THE TRB RULES IN THE TESTING PERIOD

                TRB(200)   Worst TRB(10)   Best TRB(125)   TRB Average
ANP             27.21%     6.02%           28.88%          20.84%
No.trades       186        3434            277             1116
Avg.hold days   344        17              228             137

If there were no reward and penalty for the component rules during trading, WRS would reduce to the Weight Strategy (WS). We also compare WRS and WS in the testing period to see the influence of our reward/penalty mechanism. The results are given in Table VI. Together with Table II, we find that WS also generates a higher annual net profit than any of the simple trading rules. However, both the highest ANP and the highest Avg.return are generated by WRS, indicating that the reward/penalty mechanism is worth conducting.
For WRS, because there are rewards and penalties, the component rule weights may fluctuate according to their recent performance, as evidenced by the profile plots of two selected weights in the testing period shown in Fig. 3.

TABLE VI
PERFORMANCE OF WRS AND WS IN THE TESTING PERIOD

                WRS       WS
ANP             53.52%    39.34%
No.trades       264       350
Avg.hold days   320       239
Avg.return      48.38%    22.56%

[Fig. 3. Profile plots of the weights of MA (ns = 20, nl = 25) and TRB (n = 50) against the number of reward/penalty updates in the testing period. WRS updates the weights 61 times during the 8 years of trading from 2003 to 2010.]

We observe that the weight of MA(20-25) stays low at the beginning and eventually becomes high, while the weight of TRB(50) stays high for a long time and drops a lot at the end of trading.

VI. CONCLUSION

This paper has proposed a complex stock trading strategy, namely the weight reward strategy (WRS), generated from different combinations of moving average and trading range break-out rules with their weights updated by a reward/penalty mechanism. A time variant Particle Swarm Optimization is used to optimize WRS. WRS outperforms the best moving average and trading range break-out rules in the NASDAQ-100 market from 2003 to 2010. For our future research, more simple trading rules could be included in the rule pool of WRS. Besides, it is often necessary to strike a balance between return and risk in investment, so multi-objective optimization in terms of profit and some risk measure, such as the Sharpe ratio or maximum drawdown, could be studied in the future.

VII. APPENDIX

A. The parameter values of moving average

nl (number of days in a long-period moving average) = 5, 10, 15, 20, 25, 30, 40, 50, 75, 100, 125, 150, 200, 250 (14 values); ns (number of days in a short-period moving average) = 1, 2, 5, 10, 15, 20, 25, 30, 40, 50, 75, 100, 125, 150, 200 (15 values). Because ns must be less than nl, the total number of MA rules generated is 119.

B. The parameter values of trading range break-out

n (number of days for a trading range) = 5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 60, 70, 75, 80, 90, 100, 125, 150, 175, 200, 250 (21 values). Because there is only one parameter, the total number of TRB rules generated is 21.
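A quick enumeration confirms the size of the rule universe listed in the appendix; the snippet below is a sketch using the parameter values given above:

```python
NL = [5, 10, 15, 20, 25, 30, 40, 50, 75, 100, 125, 150, 200, 250]
NS = [1, 2, 5, 10, 15, 20, 25, 30, 40, 50, 75, 100, 125, 150, 200]
N_TRB = [5, 10, 15, 20, 25, 30, 35, 40, 45, 50, 60, 70, 75,
         80, 90, 100, 125, 150, 175, 200, 250]

# MA rules: every (ns, nl) pair with ns < nl; TRB rules: one per value of n.
ma_rules = [(ns, nl) for nl in NL for ns in NS if ns < nl]
print(len(ma_rules), len(N_TRB), len(ma_rules) + len(N_TRB))  # 119 21 140
```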
REFERENCES

[1] E. Fama and M. Blume, "Filter rules and stock-market trading," The Journal of Business, vol. 39, no. 1, pp. 226-241, 1966.
[2] W. Brock, J. Lakonishok, and B. LeBaron, "Simple technical trading rules and the stochastic properties of stock returns," Journal of Finance, pp. 1731-1764, 1992.
[3] R. Gencay, "The predictability of security returns with simple technical trading rules," Journal of Empirical Finance, vol. 5, no. 4, pp. 347-359, 1998.
[4] F. Allen and R. Karjalainen, "Using genetic algorithms to find technical trading rules," Journal of Financial Economics, vol. 51, pp. 245-271, 1999.
[5] L. Kestner, Quantitative Trading Strategies: Harnessing the Power of Quantitative Techniques to Create a Winning Trading Program. McGraw-Hill Professional, 2003.
[6] M. Pring, Technical Analysis Explained: The Successful Investor's Guide to Spotting Investment Trends and Turning Points. McGraw-Hill, 1991.
[7] P. Hsu and C. Kuan, "Reexamining the profitability of technical analysis with data snooping checks," Journal of Financial Econometrics, vol. 3, no. 4, pp. 606-628, 2005.
[8] H. Subramanian, S. Ramamoorthy, P. Stone, and B. Kuipers, "Designing safe, profitable automated stock trading agents using evolutionary algorithms," in Proceedings of the 8th Annual Conference on Genetic and Evolutionary Computation. ACM, 2006, pp. 1777-1784.
[9] A. Briza and P. Naval Jr., "Stock trading system based on the multi-objective particle swarm optimization of technical indicators on end-of-day market data," Applied Soft Computing, vol. 11, no. 1, pp. 1191-1201, 2011.
[10] R. Sullivan, A. Timmermann, and H. White, "Data-snooping, technical trading rule performance, and the bootstrap," Journal of Finance, pp. 1647-1691, 1999.
[11] A. Ratnaweera, S. Halgamuge, and H. Watson, "Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 240-255, 2004.
[12] J. Nenortaite and R. Simutis, "Stock trading system based on the particle swarm optimization algorithm," Computational Science - ICCS 2004, pp. 843-850, 2004.
[13] R. Hassan, B. Cohanim, O. De Weck, and G. Venter, "A comparison of particle swarm optimization and the genetic algorithm," in Proceedings of the 1st AIAA Multidisciplinary Design Optimization Specialist Conference, 2005.
[14] J. Lee, S. Lee, S. Chang, and B. Ahn, "A comparison of GA and PSO for excess return evaluation in stock markets," Artificial Intelligence and Knowledge Engineering Applications: A Bioinspired Approach, pp. 45-55, 2005.
[15] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, Piscataway, NJ, vol. 4. IEEE, 1995, pp. 1942-1948.
[16] J. Robinson and Y. Rahmat-Samii, "Particle swarm optimization in electromagnetics," IEEE Transactions on Antennas and Propagation, vol. 52, no. 2, pp. 397-407, 2004.
[17] Y. Shi and R. Eberhart, "A modified particle swarm optimizer," in Proceedings of the 1998 IEEE International Conference on Evolutionary Computation, vol. 1. IEEE, 1998, pp. 69-73.
[18] Y. Shi and R. Eberhart, "Empirical study of particle swarm optimization," in Proceedings of the 1999 IEEE International Conference on Evolutionary Computation, vol. 3. IEEE, 1999, pp. 101-106.