Discrete Hidden Markov Model Training Based on Variable Length Particle Swarm Optimization Algorithm


Xiaobin Li, Jiansheng Qian, Zhikai Zhao
School of Computer Science and Technology, China University of Mining and Technology, Xuzhou, Jiangsu, China
School of Computer Science and Technology, Jiangsu Normal University, Xuzhou, Jiangsu, China

Abstract

Expectation Maximization can be used to train a Discrete Hidden Markov Model (DHMM), but the state number must be specified in advance, and the method may make the parameters of the HMM converge to a local optimum. This paper puts forward a novel DHMM training method based on variable length particle swarm optimization. The method has two advantages: it can optimize the state number, and it can make the model parameters converge to an approximately global optimal solution. Experiments on a synthetic data set verify the efficiency of the method.

Keywords: Discrete Hidden Markov Model, Particle swarm optimization, Expectation maximization

1. Introduction

The Hidden Markov Model (HMM) has demonstrated its predominant capacity in many areas [1, 2]. Training an HMM, i.e. estimating its parameters, is the key problem in applying HMMs to practical applications. The most widely used parameter estimation method is the Baum-Welch (BW) algorithm [3]. However, the BW algorithm converges slowly and tends to converge to a local optimum. Meanwhile, the state number of the HMM must be specified in advance: too many states make the learned HMM overfit, and too few states make it underfit. Thus many scholars have put forward various optimization algorithms to optimize the parameters of HMMs, such as the genetic algorithm [4], simulated annealing [5] and particle swarm optimization [6]. At the same time, the BIC, AIC and AIC3 criteria and cross validation are used to determine the state number of an HMM.
The HMM training problem is thus divided into two subproblems: state number optimization and parameter optimization. These two subproblems influence each other. This paper presents a novel algorithm based on variable length particle swarm optimization for Discrete Hidden Markov Model (DHMM) training. The algorithm borrows ideas from paper [7], which introduces a variable length particle swarm optimization algorithm for social programming. The proposed algorithm has the following two distinct advantages. (1) The state number of the DHMM does not have to be specified in advance; the algorithm determines the optimal state number automatically according to an information criterion. (2) The DHMM parameters, including the transition probabilities and observation probabilities, approximately converge to a global optimal solution under the chosen state number rather than to a local optimum. The rest of the paper is organized as follows. Section 2 briefly reviews previous work on DHMM optimization; Section 3 gives a detailed explanation of the variable length particle swarm algorithm applied to DHMM training; Section 4 reports optimization experiments on a synthetic data set; Section 5 concludes the paper.

2. Related Work

The state number of an HMM must be set before an HMM optimization procedure is carried out. Paper [8] proposes a state number estimation method based on the BIC criterion. Paper [9] shows that for small sample sizes the AIC criterion is more suitable for HMM state number assessment.

International Journal of Digital Content Technology and its Applications (JDCTA), Volume 6, Number 20, November 2012. doi: /jdcta.vol6.issue
Paper [10] compares several state number assessment algorithms and concludes that the AIC3 criterion performs best. Paper [11] puts forward cross validation for HMM state number determination instead of the various information criteria. Algorithms based on EM are prone to converging to a local optimum, so the PSO algorithm has been introduced for HMM training [12-14]. Experimental results in paper [6] show that PSO-based HMM training outperforms the BW algorithm and simulated annealing (SA) in the area of protein multiple sequence alignment (MSA). In paper [15] an immune particle swarm optimization (IPSO) algorithm is proposed for HMM training; for MSA problems, experimental results show that IPSO can not only improve sequence alignment capability but also reduce model training cost. Paper [16] proposes a particle repayment and penalty method for HMM training, and experimental results show that the method outperforms BW. Papers [17, 18] put forward quantum-behaved particle swarm optimization (QPSO) and apply it to HMM parameter estimation in the MSA area; compared with traditional PSO and BW, the algorithm achieves a lower classification error rate on some experimental data sets. Paper [19] introduces an improved particle swarm optimization algorithm (IPSO) to optimize HMM parameters. Paper [20] combines PSO and BW to optimize the HMM state number and probability matrices in two phases. However, all these existing HMM training techniques have one distinct drawback: the HMM state number must be appointed or calculated in advance, and only once the state number is determined can the HMM parameters be optimized by particle swarm optimization. In the classical PSO algorithm the particle length is fixed, while in several application areas the particle length should be variable.
Paper [7] presents a variable length particle swarm algorithm for social programming, where particles of unfixed length represent feasible solutions. During the optimization procedure, non-optimal particle lengths evolve toward the optimal particle length with a certain probability. Paper [21] proposes a PSO based on feasible solution sets, where the conventional particle swarm arithmetic is replaced with logical operations on sets; the algorithm is successfully applied to clustering. In this paper we borrow ideas from papers [7, 21] and modify the proposed algorithm to make it suitable for Discrete Hidden Markov Model (DHMM) training. The resulting variable length particle swarm optimization (VLPSO) method has two obvious advantages: first, the DHMM state number does not need to be specified in advance but is determined by the algorithm according to an information criterion during the dynamic optimization procedure; second, the DHMM parameters converge to an approximately global optimal solution under the chosen state number instead of converging to a local optimum as with the EM algorithm.

3. Variable length particle swarm optimization for DHMM training

3.1. Particle swarm optimization

Particle swarm optimization (PSO) is an evolutionary computing technique mainly inspired by the collective behaviour of bird flocks. In the flock there is no central control mechanism, and the group's behaviour is the combination of the behaviour of all particles in the group. Each particle's action is mainly determined by two factors: its own historical experience and its social relationship with the other particles. PSO has been applied in many areas, such as DNA sequence alignment and various parameter optimization problems.
In the PSO algorithm a particle p is generally represented as a tuple with four elements:

p = (x_ij(t), v_ij(t), p_ij(t), g_ij(t))    (1)

In the tuple, t is the optimization round, i is the particle index and j is the dimension. x_ij(t) is the position of particle i at round t in dimension j; v_ij(t) is the velocity of particle i at round t in dimension j; p_ij(t) is the best position among all the
positions that particle i has traversed in dimension j up to round t; g_ij(t) is the best position in dimension j among all the positions that all particles have traversed up to round t. The particles' movement through the search space is defined by the following two formulas:

v_ij(t+1) = w*v_ij(t) + c1*r1j*(p_ij(t) - x_ij(t)) + c2*r2j*(g_ij(t) - x_ij(t))    (2)

x_ij(t+1) = x_ij(t) + v_ij(t+1)    (3)

where w is the inertia coefficient. In some improved versions of PSO this coefficient is reduced gradually during optimization, i.e. at the beginning PSO has a larger inertia coefficient, which is then gradually reduced. The numbers c1 and c2 are the local and global learning coefficients respectively, while r1j and r2j are random values regenerated in each dimension, subject to the constraint 0 <= r1j, r2j <= 1.

The classical PSO algorithm runs as follows. (1) Initialization: the algorithm produces several particles randomly and assigns them random initial positions and velocities. (2) Assessment: the algorithm calculates the fitness of each particle in the population with a fitness function. (3) Obtain the local optimal particle: the algorithm records the best solution each particle has found so far. (4) Obtain the global optimal particle: the algorithm records the best solution found by any particle so far. (5) Update all the particles' positions and velocities. (6) Return to step (2). The algorithm keeps running until the number of rounds reaches a predetermined threshold or a satisfactory solution is found.

3.2. DHMM training based on VLPSO

For a DHMM with N states and M observed symbols, a particle of length N + N*N + N*M can be constructed.
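The classical PSO loop and the updates of formulas (2) and (3) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the sphere test function and all parameter values (swarm size, inertia, learning factors) are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):
    """Illustrative fitness function (lower is better)."""
    return np.sum(x ** 2, axis=-1)

def pso(n_particles=20, dim=3, rounds=100, w=0.7, c1=1.5, c2=1.5):
    x = rng.uniform(-5, 5, (n_particles, dim))   # step (1): random positions
    v = rng.uniform(-1, 1, (n_particles, dim))   # ... and random velocities
    p_best = x.copy()                            # per-particle best positions
    p_val = sphere(x)
    g_best = p_best[np.argmin(p_val)].copy()     # global best position
    for _ in range(rounds):
        r1 = rng.random((n_particles, dim))      # fresh random factors per dimension
        r2 = rng.random((n_particles, dim))
        # formula (2): inertia + cognitive term + social term
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
        x = x + v                                # formula (3)
        val = sphere(x)                          # step (2): assess
        better = val < p_val                     # step (3): local best update
        p_best[better] = x[better]
        p_val[better] = val[better]
        g_best = p_best[np.argmin(p_val)].copy() # step (4): global best update
    return g_best

best = pso()
```

With these settings the swarm contracts toward the minimum of the sphere function at the origin, showing how the inertia term and the two learning terms balance exploration and exploitation.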
This sum divides a particle into three parts: the first part, of length N, is the initial state probability distribution; the second part is the HMM transition probability distribution A; and the third part is the HMM observation probability distribution B. When the observation sequence is known, the number of observed symbols M is easily identified, but the state number N is difficult to determine. We therefore put forward a VLPSO algorithm to optimize the HMM. First, a five-element tuple is used to represent a particle in the VLPSO algorithm:

p = (x_ij(t), v_ij(t), p_ij(t), p_gj(t), p_aj(t))    (4)

In the tuple, t is the current optimization round, i is the particle index and j is the dimension. x_ij(t) is the position of particle i at round t in dimension j; v_ij(t) is the velocity of particle i at round t in dimension j; p_ij(t) is the best position in dimension j that particle i has visited up to round t; p_gj(t) is the best position in dimension j
that any particle has visited up to round t; p_aj(t) is the best position among all the positions that all particles have visited up to round t, regardless of dimension. The particles' evolution in the search space is described by the following two formulas:

v_ij(t+1) = w*v_ij(t) + c1*r1j*(p_ij(t) - x_ij(t)) + c2*r2j*(p_gj(t) - x_ij(t))    (5)

x_ij(t+1) = x_ij(t) + v_ij(t+1)    (6)

The coefficients have the same meaning as in the classical PSO algorithm. In addition to the velocity and position updates, the VLPSO algorithm contains a dimension alteration step. Assume particle i has state number n_i(t) = State(x_i(t)) in round t, while the global optimal state number is n_g(t) = State(p_a(t)). At round t+1 the state number can move toward the global optimal state number:

n_i(t+1) = n_g(t+1)    (7)

The alteration can be described in more detail as:

n_i(t+1) = n_i(t) + r*(n_g(t) - n_i(t))    (8)

where r is a random number with 0 <= r <= 1. When the global best particle's state number is greater than that of the particle being updated, the particle increases its state number; otherwise it reduces it. Once the new state number is determined, the particle's position and velocity are reinitialized, and at the same time the local optimal particles and the global optimal particle are updated.

The main procedure of the VLPSO algorithm is as follows. (1) Initialization: the algorithm produces several particles randomly and assigns them random initial positions and velocities. (2) Assessment: the algorithm calculates the fitness of each particle in the population with a fitness function. (3) Obtain the local optimal particle: the algorithm records the best solution found so far by each particle of the same dimension. (4) Obtain the global optimal particle: the algorithm records the best solution found by any particle so far.
(5) Update all the particles' positions and velocities. (6) Update all particles' dimensions. (7) Return to step (2). The algorithm keeps running until the number of rounds reaches a predetermined threshold or a satisfactory solution is found.

3.3. Fitness function

The forward procedure can be used to evaluate how well a given sequence fits an HMM. First, a forward variable α_t(i) = P(O1 O2 ... Ot, q_t = S_i | λ) is defined as the probability of the partial observation sequence O1 O2 ... Ot (up to time t) with state S_i at time t, given the model λ. The forward variables are calculated as follows:

(1) Initialization:

α_1(i) = π_i b_i(O_1),  1 <= i <= N    (9)

(2) Induction:

α_{t+1}(j) = [ Σ_{i=1}^{N} α_t(i) a_ij ] b_j(O_{t+1}),  1 <= t <= T-1, 1 <= j <= N    (10)
(3) Termination:

P(O | λ) = Σ_{i=1}^{N} α_T(i)    (11)

As the value of P(O | λ) is extremely small, precision is easily lost when it is used directly as the fitness function. Therefore the log-likelihood Fitness(x_i) = log(P(O | λ)) is often applied to assess the fitness of an HMM. For multiple observed sequences, the likelihood in (12) is used:

Fitness(x_i) = (1/L) Σ_{l=1}^{L} log(P(O^l | λ)),  O^l = [o_1, o_2, ..., o_t, ..., o_T]    (12)

This evaluation function does not consider the effect of the HMM state number. In fact, for the VLPSO algorithm in HMM training the state number must be considered simultaneously, so the state number should be incorporated into the likelihood evaluation function. There are several standard techniques for assessing the fit of a statistical model, such as AIC and BIC. For likelihood evaluation of a DHMM, the number of observed symbols M of a known sequence is constant, so this parameter need not be taken into consideration, while the DHMM state number N is a free variable and is the only one that needs to be considered.

3.4. Particle dimension adjusting probability

In step (6) of the VLPSO algorithm, a particle's dimension is altered with an adjusting probability P_adjust after several rounds with fixed dimension. Three strategies are proposed to determine P_adjust.

(1) VLPSO_FIX, the first strategy, specifies a fixed constant adjusting probability c:

P_adjust(t) = c,  0 < c < 1    (13)
(2) VLPSO_LIN, the second strategy, increases the probability linearly with the algorithm round. Assume the minimum dimension adjusting probability is P_min, the maximum is P_max, R is the total number of rounds designed and t is the current round counted from the round at which adjusting starts:

P_adjust(t) = P_min + ((P_max - P_min) / R) * t    (14)

In this strategy, particles hold their dimension before round t, while after round t their dimension gradually adjusts toward the global optimal particle's dimension.

(3) VLPSO_EXP, the third strategy, varies P_adjust according to the difference between the fitness of the global optimal particles in successive rounds. Suppose the global optimal particles in successive rounds are p_g(t) and p_g(t-1):

P_adjust(t+1) = exp(-1 * (Fitness(p_g(t)) - Fitness(p_g(t-1))))    (15)

Obviously, in this strategy, when the fitness difference between the two successive global optima is large, the adjusting probability P_adjust is small; at the beginning, therefore, particle dimensions have little opportunity to adjust. Later the fitness difference between successive global optima gets smaller, and the dimension
adjusting probability gets larger, i.e. there is more opportunity to adjust particle dimensions.

4. Experiments

4.1. Data set

To evaluate the proposed VLPSO-based DHMM training method accurately, synthetic data sets are generated with Murphy's HMM toolbox. Four data sets (HMM1, HMM2, HMM3, HMM4) are generated, each with 100 sequences of length 100.

The state number of HMM1 is 2 and the observed symbol number is 4, with parameters:
pi = [0.5, 0.5];
A = [0.2,0.8; 0.3,0.7];
B = [0.2,0.4,0.3,0.1; 0.1,0.5,0.2,0.2];

The state number of HMM2 is 4 and the observed symbol number is 6, with parameters:
pi = [0.25,0.25,0.25,0.25];
A = [0.2,0.6,0.1,0.1; 0.3,0.2,0.2,0.3; 0.3,0.2,0.2,0.3; 0.3,0.2,0.2,0.3];
B = [0.2,0.4,0.1,0.1,0.1,0.1; 0.1,0.2,0.2,0.2,0.1,0.2; 0.3,0.2,0.2,0.1,0.1,0.1; 0.3,0.2,0.2,0.1,0.1,0.1];

The state number of HMM3 is 6 and the observed symbol number is 8, with parameters:
pi = [0.25,0.25,0.2,0.1,0.1,0.1];
A = [0.1,0.1,0.1,0.1,0.1,0.5; 0.1,0.1,0.1,0.1,0.5,0.1; 0.1,0.1,0.1,0.5,0.1,0.1; 0.1,0.1,0.5,0.1,0.1,0.1; 0.1,0.5,0.1,0.1,0.1,0.1; 0.5,0.1,0.1,0.1,0.1,0.1];
B = [0.3,0.1,0.1,0.1,0.1,0.1,0.1,0.1; 0.1,0.3,0.1,0.1,0.1,0.1,0.1,0.1; 0.2,0.2,0.1,0.1,0.1,0.1,0.1,0.1; 0.15,0.25,0.1,0.1,0.1,0.1,0.1,0.1; 0.25,0.15,0.1,0.1,0.1,0.1,0.1,0.1; 0.0,0.4,0.1,0.1,0.1,0.1,0.1,0.1];

The state number of HMM4 is 8 and the observed symbol number is 8, with parameters:
pi = [0.25,0.1,0.15,0.1,0.1,0.1,0.1,0.1];
A = [0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.3; 0.1,0.1,0.1,0.1,0.1,0.1,0.3,0.1; 0.1,0.1,0.1,0.1,0.1,0.3,0.1,0.1; 0.1,0.1,0.1,0.1,0.3,0.1,0.1,0.1; 0.1,0.1,0.1,0.3,0.1,0.1,0.1,0.1; 0.1,0.1,0.3,0.1,0.1,0.1,0.1,0.1; 0.1,0.3,0.1,0.1,0.1,0.1,0.1,0.1; 0.3,0.1,0.1,0.1,0.1,0.1,0.1,0.1];
B = [0.3,0.1,0.1,0.1,0.1,0.1,0.1,0.1; 0.1,0.3,0.1,0.1,0.1,0.1,0.1,0.1; 0.1,0.1,0.3,0.1,0.1,0.1,0.1,0.1; 0.1,0.1,0.1,0.3,0.1,0.1,0.1,0.1; 0.1,0.1,0.1,0.1,0.3,0.1,0.1,0.1; 0.1,0.1,0.1,0.1,0.1,0.3,0.1,0.1; 0.1,0.1,0.1,0.1,0.1,0.1,0.3,0.1; 0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.3];

4.2. Algorithm parameters

Four algorithms are applied to train the HMM and compared with each other: BW, VLPSO_FIX, VLPSO_LIN and VLPSO_EXP. In the BW algorithm the main parameters are the state number and the iteration number; for ease of calculation and comparison, the state number in BW is directly assigned the same value as used to generate the synthetic data sets. The parameters of the VLPSO algorithms are listed in Table 1.

Table 1. Algorithm parameters of VLPSO (VLPSO_FIX, VLPSO_LIN, VLPSO_EXP): N, R, R_adjust, w_start, w_end, C1,
C2, P_adjust, P_max and P_min. Among these parameters, N is the particle number, R is the total number of algorithm rounds, R_adjust is the round at which particle dimension adjusting begins, w_start and w_end are the velocity inertia weights at the start and end of the calculation, C1 and C2 are the learning factors in the local and global solution spaces, P_adjust is the particle dimension adjusting probability, and P_max and P_min are the maximum and minimum dimension adjusting probabilities.

4.3. State number recognition

Here we apply the four algorithms to the data sets and summarize how often the state number is recognized correctly. Table 2 lists only the rate of correct state number recognition; VLPSO_LIN and VLPSO_EXP give similar results.

Table 2. Percentage of correctly recognized states (%), for HMM1-HMM4 under the BIC, AIC, AICc, AIC3, CAIC, AICu and HQC criteria

It can be seen from the experimental results that BIC and CAIC can be used effectively for evaluating the likelihood, while as the state number of the sample increases, the percentage of correctly recognized state numbers declines.

4.4. Likelihood comparison

We compare the likelihoods calculated under the four algorithms. Table 3 lists only the likelihood comparison between VLPSO_FIX and BW; for the BW algorithm, the state number used to generate the data sets is applied directly.

Table 3. Log-likelihood under correctly recognized states (10^2), for HMM1-HMM4 under BW and the BIC, AIC, AICc, AIC3, CAIC, AICu and HQC criteria

The experimental results show that, under the same evaluation index, the VLPSO algorithm achieves a higher likelihood than the BW algorithm. The other algorithms, VLPSO_LIN and VLPSO_EXP, lead to similar conclusions.
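The log-likelihoods compared above come from the forward procedure of Section 3.3, formulas (9)-(12). A minimal sketch follows; the per-step scaling is an addition beyond the paper's formulas, guarding against the underflow of P(O | λ) that the paper mentions, and the example model is HMM1 from Section 4.1.

```python
import numpy as np

def log_likelihood(pi, A, B, obs):
    """log P(O | pi, A, B) for one discrete observation sequence."""
    alpha = pi * B[:, obs[0]]                # (9) initialization
    log_p = 0.0
    for o in obs[1:]:
        c = alpha.sum()                      # scale factor against underflow
        log_p += np.log(c)
        alpha = (alpha / c) @ A * B[:, o]    # (10) induction
    log_p += np.log(alpha.sum())             # (11) termination
    return log_p

def fitness(pi, A, B, sequences):
    """(12): average log-likelihood over L observed sequences."""
    return np.mean([log_likelihood(pi, A, B, o) for o in sequences])

# HMM1 parameters from Section 4.1
pi = np.array([0.5, 0.5])
A = np.array([[0.2, 0.8], [0.3, 0.7]])
B = np.array([[0.2, 0.4, 0.3, 0.1], [0.1, 0.5, 0.2, 0.2]])
```

Because each scale factor is folded into the running log, the result equals log P(O | λ) exactly, while the working quantities stay well above the floating-point underflow threshold even for sequences of length 100.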
4.5. Evolution of state number

Figure 1. Number of particles according to state number (four sub-figures, one per data set HMM1-HMM4)

Here we illustrate the evolution of the state number under the BIC index and VLPSO_FIX in Figure 1. The figure contains four sub-figures corresponding to the four data sets. Three lines are plotted in each sub-figure, corresponding to rounds 30, 35 and 40. The other two algorithms give similar results. It can be seen from the experimental results that before dimension adjusting begins (round 30), each particle is optimized with a constant state number; once the dimension adjusting process starts, all particles gradually alter their dimension toward the global optimal particle's dimension.

5. Conclusion

Each iteration of the BW algorithm increases the likelihood on the data set, so the final result may be a local rather than a global maximum. Moreover, an initial state number must be appointed in advance, which is impossible when there is little prior knowledge of the data set. This paper presents the VLPSO algorithm, an improved version of the traditional fixed length particle swarm algorithm, for DHMM training. From the experiments on synthetic data sets we can conclude that the proposed VLPSO algorithm has two distinct advantages: first, the DHMM state number does not need to be specified in advance but is determined during the dynamic optimization; second, the DHMM parameters approximately converge to the global optimal solution under the chosen state number rather than to a local optimum.
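The state-number adjustment that drives this behaviour, formula (8) together with the three P_adjust strategies of Section 3.4 (formulas (13)-(15)), can be sketched as follows. All constants here are illustrative placeholders, not the paper's experimental settings.

```python
import numpy as np

def p_adjust_fix(c=0.3):
    """(13) VLPSO_FIX: a fixed constant adjusting probability."""
    return c

def p_adjust_lin(t, R, p_min=0.1, p_max=0.9):
    """(14) VLPSO_LIN: probability grows linearly with the round t."""
    return p_min + (p_max - p_min) * t / R

def p_adjust_exp(fit_now, fit_prev):
    """(15) VLPSO_EXP: small probability while the global best fitness
    is still improving quickly, approaching 1 as improvement stalls."""
    return np.exp(-(fit_now - fit_prev))

def adjust_state_number(n_i, n_g, rng):
    """(8): move a particle's state number toward the global best's
    by a random fraction r of the gap, rounded to an integer count."""
    r = rng.random()  # 0 <= r < 1
    return round(n_i + r * (n_g - n_i))

rng = np.random.default_rng(0)
```

On each adjusting round, a particle is resized with the chosen probability; the new state number then implies a new particle length N + N*N + N*M, after which position and velocity are reinitialized as the paper describes.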
The main question remaining in the VLPSO algorithm is the selection of the fitness function, which is closely tied to the data set. How to choose a suitable fitness function and make better use of VLPSO to train DHMMs so as to obtain an appropriate model structure is the focus of our next research.

6. Acknowledgement

This work was supported by the Chinese National High Technology Research and Development Program (863) under Grants No. 2008AA062200 and No. 2012AA062103, by the Jiangsu Province Production Research Foundation under Grant No. BY in China, by the Xuzhou Industry Science Program under Grant No. XX10A001, and by the Jiangsu Normal University Foundation under Grant No. 10XLA13. The authors also wish to thank the reviewers of this paper.

7. References

[1] Y. Yang, N. Cheng, M. Zhang, "Research on activity recognition method based on human motion trajectory features", Journal of Convergence Information Technology, vol. 7, no. 1.
[2] Y. Yao, K. Xia, Y. Wu, "Speech word recognizer based on the HMM algorithm", International Journal of Advancements in Computing Technology, vol. 3, no. 10.
[3] L. R. Rabiner, "A tutorial on hidden Markov models and selected applications in speech recognition", Proceedings of the IEEE, vol. 77, no. 2.
[4] Y. X. Li, S. Kwong, Q. H. He, J. He, J. C. Yang, "Genetic algorithm based simultaneous optimization of feature subsets and hidden Markov model parameters for discrimination between speech and non-speech events", International Journal of Speech Technology, vol. 13, no. 2.
[5] Jong-Seok Lee, Cheol Hoon Park, "Hybrid Simulated Annealing and Its Application to Optimization of Hidden Markov Models for Visual Speech Recognition", IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 40, no. 4.
[6] T. K. Rasmussen, T.
Krink, "Improved Hidden Markov Model training for multiple sequence alignment by a particle swarm optimization-evolutionary algorithm hybrid", Biosystems, vol. 72, no. 1-2, pp. 5-17.
[7] N. Nedjah, L. Mourelle, M. O'Neill, F. Leahy, A. Brabazon, "Grammatical Swarm: A Variable Length Particle Swarm Algorithm", in Swarm Intelligent Systems, Studies in Computational Intelligence, Springer Berlin / Heidelberg.
[8] C. Keribin, "Consistent estimation of the order of mixture models", Sankhyā: The Indian Journal of Statistics, Series A.
[9] O. Lukočienė, J. K. Vermunt, "Determining the Number of Components in Mixture Models for Hierarchical Data", in Advances in Data Analysis, Data Handling and Business Intelligence, Studies in Classification, Data Analysis, and Knowledge Organization, Springer Berlin Heidelberg.
[10] J. Dias, "Latent Class Analysis and Model Selection", in From Data and Information Analysis to Knowledge Engineering, Studies in Classification, Data Analysis, and Knowledge Organization, Springer Berlin Heidelberg.
[11] G. Celeux, J. B. Durand, "Selecting hidden Markov model state number with cross-validated likelihood", Computational Statistics, vol. 23, no. 4.
[12] S. Aupetit, N. Monmarché, M. Slimane, "Hidden Markov Models Training Using Population-based Metaheuristics", in Advances in Metaheuristics for Hard Optimization.
[13] S. Phon-Amnuaisuk, "Estimating HMM Parameters Using Particle Swarm Optimisation", in Applications of Evolutionary Computing, Lecture Notes in Computer Science, Springer Berlin / Heidelberg.
[14] J. Meng, S. Xu, X. Wang, Y. Yi, H. Liu, "Swarm-based DHMM Training and Application in Time Sequences Classification", Journal of Computational Information Systems, vol. 6, no. 1.
[15] H.-W. Ge, Y.-C. Liang, "A Hidden Markov Model and Immune Particle Swarm Optimization Based Algorithm for Multiple Sequence Alignment", in AI 2005: Advances in Artificial Intelligence, Lecture Notes in Computer Science, Springer Berlin / Heidelberg.
[16] M. Macaš, D. Novák, L. Lhotská, "Constraints in Particle Swarm Optimization of Hidden Markov Models", in Intelligent Data Engineering and Automated Learning - IDEAL 2006, Lecture Notes in Computer Science, Springer Berlin / Heidelberg.
[17] C. Li, H. Long, Y. Ding, J. Sun, W. Xu, "Multiple Sequence Alignment by Improved Hidden Markov Model Training and Quantum-Behaved Particle Swarm Optimization", in Life System Modeling and Intelligent Computing, Lecture Notes in Computer Science, Springer Berlin / Heidelberg.
[18] J. Sun, X. Wu, W. Fang, Y. Ding, H. Long, W. Xu, "Multiple sequence alignment using the Hidden Markov Model trained by an improved quantum-behaved particle swarm optimization", Information Sciences, vol. 182, no. 1.
[19] C. Wang, D. Duan, X. Wang, "A Improved PSO and HMM Algorithm for Web Information Extraction", Journal of Henan Normal University (Natural Science), vol. 38, no. 5.
[20] J. Zhu, Y. Gao, "Adaptive particle swarm optimization for hidden Markov model training", Computer Engineering and Design, vol. 31, no. 1.
[21] C. Veenhuis, "A Set-Based Particle Swarm Optimization Method", in Parallel Problem Solving from Nature - PPSN X, Lecture Notes in Computer Science, Springer Berlin / Heidelberg.
More informationIntrusion Detection via Machine Learning for SCADA System Protection
Intrusion Detection via Machine Learning for SCADA System Protection S.L.P. Yasakethu Department of Computing, University of Surrey, Guildford, GU2 7XH, UK. s.l.yasakethu@surrey.ac.uk J. Jiang Department
More informationWireless Sensor Networks Coverage Optimization based on Improved AFSA Algorithm
, pp. 99108 http://dx.doi.org/10.1457/ijfgcn.015.8.1.11 Wireless Sensor Networks Coverage Optimization based on Improved AFSA Algorithm Wang DaWei and Wang Changliang Zhejiang Industry Polytechnic College
More information14.10.2014. Overview. Swarms in nature. Fish, birds, ants, termites, Introduction to swarm intelligence principles Particle Swarm Optimization (PSO)
Overview Kyrre Glette kyrrehg@ifi INF3490 Swarm Intelligence Particle Swarm Optimization Introduction to swarm intelligence principles Particle Swarm Optimization (PSO) 3 Swarms in nature Fish, birds,
More informationDynamic Generation of Test Cases with Metaheuristics
Dynamic Generation of Test Cases with Metaheuristics Laura Lanzarini, Juan Pablo La Battaglia IIILIDI (Institute of Research in Computer Science LIDI) Faculty of Computer Sciences. National University
More informationInternational Journal of Advanced Research in Electrical, Electronics and Instrumentation Engineering
DOI: 10.15662/ijareeie.2014.0307061 Economic Dispatch of Power System Optimization with Power Generation Schedule Using Evolutionary Technique Girish Kumar 1, Rameshwar singh 2 PG Student [Control system],
More informationProgramming Risk Assessment Models for Online Security Evaluation Systems
Programming Risk Assessment Models for Online Security Evaluation Systems Ajith Abraham 1, Crina Grosan 12, Vaclav Snasel 13 1 Machine Intelligence Research Labs, MIR Labs, http://www.mirlabs.org 2 BabesBolyai
More informationOptimization of PID parameters with an improved simplex PSO
Li et al. Journal of Inequalities and Applications (2015) 2015:325 DOI 10.1186/s1366001507852 R E S E A R C H Open Access Optimization of PID parameters with an improved simplex PSO Jimin Li 1, YeongCheng
More informationA Load Balancing Algorithm based on the Variation Trend of Entropy in Homogeneous Cluster
, pp.1120 http://dx.doi.org/10.14257/ ijgdc.2014.7.2.02 A Load Balancing Algorithm based on the Variation Trend of Entropy in Homogeneous Cluster Kehe Wu 1, Long Chen 2, Shichao Ye 2 and Yi Li 2 1 Beijing
More informationMetaheuristics in Big Data: An Approach to Railway Engineering
Metaheuristics in Big Data: An Approach to Railway Engineering Silvia Galván Núñez 1,2, and Prof. Nii AttohOkine 1,3 1 Department of Civil and Environmental Engineering University of Delaware, Newark,
More informationInternational Journal of Computer Science Trends and Technology (IJCST) Volume 2 Issue 3, MayJun 2014
RESEARCH ARTICLE OPEN ACCESS A Survey of Data Mining: Concepts with Applications and its Future Scope Dr. Zubair Khan 1, Ashish Kumar 2, Sunny Kumar 3 M.Tech Research Scholar 2. Department of Computer
More informationFault Analysis in Software with the Data Interaction of Classes
, pp.189196 http://dx.doi.org/10.14257/ijsia.2015.9.9.17 Fault Analysis in Software with the Data Interaction of Classes Yan Xiaobo 1 and Wang Yichen 2 1 Science & Technology on Reliability & Environmental
More informationPerformance Analysis of Data Mining Techniques for Improving the Accuracy of Wind Power Forecast Combination
Performance Analysis of Data Mining Techniques for Improving the Accuracy of Wind Power Forecast Combination Ceyda Er Koksoy 1, Mehmet Baris Ozkan 1, Dilek Küçük 1 Abdullah Bestil 1, Sena Sonmez 1, Serkan
More informationInternational Journal of Emerging Technologies in Computational and Applied Sciences (IJETCAS) www.iasir.net
International Association of Scientific Innovation and Research (IASIR) (An Association Unifying the Sciences, Engineering, and Applied Research) International Journal of Emerging Technologies in Computational
More informationPerformance Evaluation of Task Scheduling in Cloud Environment Using Soft Computing Algorithms
387 Performance Evaluation of Task Scheduling in Cloud Environment Using Soft Computing Algorithms 1 R. Jemina Priyadarsini, 2 Dr. L. Arockiam 1 Department of Computer science, St. Joseph s College, Trichirapalli,
More informationPLAANN as a Classification Tool for Customer Intelligence in Banking
PLAANN as a Classification Tool for Customer Intelligence in Banking EUNITE World Competition in domain of Intelligent Technologies The Research Report Ireneusz Czarnowski and Piotr Jedrzejowicz Department
More informationGenetic Algorithm Based Interconnection Network Topology Optimization Analysis
Genetic Algorithm Based Interconnection Network Topology Optimization Analysis 1 WANG Peng, 2 Wang XueFei, 3 Wu YaMing 1,3 College of Information Engineering, Suihua University, Suihua Heilongjiang, 152061
More informationInternational Journal of Software and Web Sciences (IJSWS) www.iasir.net
International Association of Scientific Innovation and Research (IASIR) (An Association Unifying the Sciences, Engineering, and Applied Research) ISSN (Print): 22790063 ISSN (Online): 22790071 International
More informationAutomatic Mining of Internet Translation Reference Knowledge Based on Multiple Search Engines
, 2224 October, 2014, San Francisco, USA Automatic Mining of Internet Translation Reference Knowledge Based on Multiple Search Engines Baosheng Yin, Wei Wang, Ruixue Lu, Yang Yang Abstract With the increasing
More informationU.P.B. Sci. Bull., Series C, Vol. 77, Iss. 1, 2015 ISSN 2286 3540
U.P.B. Sci. Bull., Series C, Vol. 77, Iss. 1, 2015 ISSN 2286 3540 ENTERPRISE FINANCIAL DISTRESS PREDICTION BASED ON BACKWARD PROPAGATION NEURAL NETWORK: AN EMPIRICAL STUDY ON THE CHINESE LISTED EQUIPMENT
More informationAuxiliary Variables in Mixture Modeling: 3Step Approaches Using Mplus
Auxiliary Variables in Mixture Modeling: 3Step Approaches Using Mplus Tihomir Asparouhov and Bengt Muthén Mplus Web Notes: No. 15 Version 8, August 5, 2014 1 Abstract This paper discusses alternatives
More informationA New Natureinspired Algorithm for Load Balancing
A New Natureinspired Algorithm for Load Balancing Xiang Feng East China University of Science and Technology Shanghai, China 200237 Email: xfeng{@ecusteducn, @cshkuhk} Francis CM Lau The University of
More informationQoS Guaranteed Intelligent Routing Using Hybrid PSOGA in Wireless Mesh Networks
BULGARIAN ACADEMY OF SCIENCES CYBERNETICS AND INFORMATION TECHNOLOGIES Volume 15, No 1 Sofia 2015 Print ISSN: 13119702; Online ISSN: 13144081 DOI: 10.1515/cait20150007 QoS Guaranteed Intelligent Routing
More informationKnowledge Acquisition Approach Based on Rough Set in Online Aided Decision System for Food Processing Quality and Safety
, pp. 381388 http://dx.doi.org/10.14257/ijunesst.2014.7.6.33 Knowledge Acquisition Approach Based on Rough Set in Online Aided ecision System for Food Processing Quality and Safety Liu Peng, Liu Wen,
More informationLearning in Abstract Memory Schemes for Dynamic Optimization
Fourth International Conference on Natural Computation Learning in Abstract Memory Schemes for Dynamic Optimization Hendrik Richter HTWK Leipzig, Fachbereich Elektrotechnik und Informationstechnik, Institut
More informationProjects  Neural and Evolutionary Computing
Projects  Neural and Evolutionary Computing 20142015 I. Application oriented topics 1. Task scheduling in distributed systems. The aim is to assign a set of (independent or correlated) tasks to some
More informationAnalysis of Model and Key Technology for P2P Network Route Security Evaluation with 2tuple Linguistic Information
Journal of Computational Information Systems 9: 14 2013 5529 5534 Available at http://www.jofcis.com Analysis of Model and Key Technology for P2P Network Route Security Evaluation with 2tuple Linguistic
More informationA Hybrid Tabu Search Method for Assembly Line Balancing
Proceedings of the 7th WSEAS International Conference on Simulation, Modelling and Optimization, Beijing, China, September 1517, 2007 443 A Hybrid Tabu Search Method for Assembly Line Balancing SUPAPORN
More informationEFFICIENT DATA PREPROCESSING FOR DATA MINING
EFFICIENT DATA PREPROCESSING FOR DATA MINING USING NEURAL NETWORKS JothiKumar.R 1, Sivabalan.R.V 2 1 Research scholar, Noorul Islam University, Nagercoil, India Assistant Professor, Adhiparasakthi College
More informationConstrained Classification of Large Imbalanced Data by Logistic Regression and Genetic Algorithm
Constrained Classification of Large Imbalanced Data by Logistic Regression and Genetic Algorithm Martin Hlosta, Rostislav Stríž, Jan Kupčík, Jaroslav Zendulka, and Tomáš Hruška A. Imbalanced Data Classification
More informationA Health Degree Evaluation Algorithm for Equipment Based on Fuzzy Sets and the Improved SVM
Journal of Computational Information Systems 10: 17 (2014) 7629 7635 Available at http://www.jofcis.com A Health Degree Evaluation Algorithm for Equipment Based on Fuzzy Sets and the Improved SVM Tian
More informationHybrid Algorithm using the advantage of ACO and Cuckoo Search for Job Scheduling
Hybrid Algorithm using the advantage of ACO and Cuckoo Search for Job Scheduling R.G. Babukartik 1, P. Dhavachelvan 1 1 Department of Computer Science, Pondicherry University, Pondicherry, India {r.g.babukarthik,
More informationUsing MixturesofDistributions models to inform farm size selection decisions in representative farm modelling. Philip Kostov and Seamus McErlean
Using MixturesofDistributions models to inform farm size selection decisions in representative farm modelling. by Philip Kostov and Seamus McErlean Working Paper, Agricultural and Food Economics, Queen
More informationA Hybrid Model of Particle Swarm Optimization (PSO) and Artificial Bee Colony (ABC) Algorithm for Test Case Optimization
A Hybrid Model of Particle Swarm Optimization (PSO) and Artificial Bee Colony (ABC) Algorithm for Test Case Optimization Abraham Kiran Joseph a, Dr. G. Radhamani b * a Research Scholar, Dr.G.R Damodaran
More informationHardware Implementation of Probabilistic State Machine for Word Recognition
IJECT Vo l. 4, Is s u e Sp l  5, Ju l y  Se p t 2013 ISSN : 22307109 (Online) ISSN : 22309543 (Print) Hardware Implementation of Probabilistic State Machine for Word Recognition 1 Soorya Asokan, 2
More informationSensors & Transducers 2015 by IFSA Publishing, S. L. http://www.sensorsportal.com
Sensors & Transducers 2015 by IFSA Publishing, S. L. http://www.sensorsportal.com A Dynamic Deployment Policy of Slave Controllers for Software Defined Network Yongqiang Yang and Gang Xu College of Computer
More informationImproved PSObased Task Scheduling Algorithm in Cloud Computing
Journal of Information & Computational Science 9: 13 (2012) 3821 3829 Available at http://www.joics.com Improved PSObased Tas Scheduling Algorithm in Cloud Computing Shaobin Zhan, Hongying Huo Shenzhen
More informationMethod of Fault Detection in Cloud Computing Systems
, pp.205212 http://dx.doi.org/10.14257/ijgdc.2014.7.3.21 Method of Fault Detection in Cloud Computing Systems Ying Jiang, Jie Huang, Jiaman Ding and Yingli Liu Yunnan Key Lab of Computer Technology Application,
More informationSoftware Project Planning and Resource Allocation Using Ant Colony Optimization with Uncertainty Handling
Software Project Planning and Resource Allocation Using Ant Colony Optimization with Uncertainty Handling Vivek Kurien1, Rashmi S Nair2 PG Student, Dept of Computer Science, MCET, Anad, Tvm, Kerala, India
More informationIntroduction to Algorithmic Trading Strategies Lecture 2
Introduction to Algorithmic Trading Strategies Lecture 2 Hidden Markov Trading Model Haksun Li haksun.li@numericalmethod.com www.numericalmethod.com Outline Carry trade Momentum Valuation CAPM Markov chain
More informationANT COLONY OPTIMIZATION ALGORITHM FOR RESOURCE LEVELING PROBLEM OF CONSTRUCTION PROJECT
ANT COLONY OPTIMIZATION ALGORITHM FOR RESOURCE LEVELING PROBLEM OF CONSTRUCTION PROJECT Ying XIONG 1, Ya Ping KUANG 2 1. School of Economics and Management, Being Jiaotong Univ., Being, China. 2. College
More informationA Novel Feature Selection Method Based on an Integrated Data Envelopment Analysis and Entropy Mode
A Novel Feature Selection Method Based on an Integrated Data Envelopment Analysis and Entropy Mode Seyed Mojtaba Hosseini Bamakan, Peyman Gholami RESEARCH CENTRE OF FICTITIOUS ECONOMY & DATA SCIENCE UNIVERSITY
More informationResearch Article Service Composition Optimization Using Differential Evolution and Oppositionbased Learning
Research Journal of Applied Sciences, Engineering and Technology 11(2): 229234, 2015 ISSN: 20407459; eissn: 20407467 2015 Maxwell Scientific Publication Corp. Submitted: May 20, 2015 Accepted: June
More informationHYBRID ACOIWD OPTIMIZATION ALGORITHM FOR MINIMIZING WEIGHTED FLOWTIME IN CLOUDBASED PARAMETER SWEEP EXPERIMENTS
HYBRID ACOIWD OPTIMIZATION ALGORITHM FOR MINIMIZING WEIGHTED FLOWTIME IN CLOUDBASED PARAMETER SWEEP EXPERIMENTS R. Angel Preethima 1, Margret Johnson 2 1 Student, Computer Science and Engineering, Karunya
More informationBinary Ant Colony Evolutionary Algorithm
Weiqing Xiong Liuyi Wang Chenyang Yan School of Information Science and Engineering Ningbo University, Ningbo 35 China Weiqing,xwqdds@tom.com, Liuyi,jameswang@hotmail.com School Information and Electrical
More informationA Binary Model on the Basis of Imperialist Competitive Algorithm in Order to Solve the Problem of Knapsack 10
212 International Conference on System Engineering and Modeling (ICSEM 212) IPCSIT vol. 34 (212) (212) IACSIT Press, Singapore A Binary Model on the Basis of Imperialist Competitive Algorithm in Order
More informationMétaheuristiques pour l optimisation
Métaheuristiques pour l optimisation Differential Evolution (DE) Particle Swarm Optimization (PSO) Alain Dutech Equipe MAIA  LORIA  INRIA Nancy, France Web : http://maia.loria.fr Mail : Alain.Dutech@loria.fr
More informationBig Data Analytics of MultiRelationship Online Social Network Based on MultiSubnet Composited Complex Network
, pp.273284 http://dx.doi.org/10.14257/ijdta.2015.8.5.24 Big Data Analytics of MultiRelationship Online Social Network Based on MultiSubnet Composited Complex Network Gengxin Sun 1, Sheng Bin 2 and
More informationA No el Probability Binary Particle Swarm Optimization Algorithm and Its Application
28 JOURAL OF SOFTWARE, VOL. 3, O. 9, DECEMBER 2008 A o el Probability Binary Particle Swarm Optimization Algorithm and Its Application Ling Wang* School of Mechatronics and Automation, Shanghai University,
More informationSTUDY OF PROJECT SCHEDULING AND RESOURCE ALLOCATION USING ANT COLONY OPTIMIZATION 1
STUDY OF PROJECT SCHEDULING AND RESOURCE ALLOCATION USING ANT COLONY OPTIMIZATION 1 Prajakta Joglekar, 2 Pallavi Jaiswal, 3 Vandana Jagtap Maharashtra Institute of Technology, Pune Email: 1 somanprajakta@gmail.com,
More informationClassspecific Sparse Coding for Learning of Object Representations
Classspecific Sparse Coding for Learning of Object Representations Stephan Hasler, Heiko Wersing, and Edgar Körner Honda Research Institute Europe GmbH CarlLegienStr. 30, 63073 Offenbach am Main, Germany
More informationA New Method for Traffic Forecasting Based on the Data Mining Technology with Artificial Intelligent Algorithms
Research Journal of Applied Sciences, Engineering and Technology 5(12): 34173422, 213 ISSN: 247459; eissn: 247467 Maxwell Scientific Organization, 213 Submitted: October 17, 212 Accepted: November
More informationMaster's projects at ITMO University. Daniil Chivilikhin PhD Student @ ITMO University
Master's projects at ITMO University Daniil Chivilikhin PhD Student @ ITMO University General information Guidance from our lab's researchers Publishable results 2 Research areas Research at ITMO Evolutionary
More informationA hybrid Approach of Genetic Algorithm and Particle Swarm Technique to Software Test Case Generation
A hybrid Approach of Genetic Algorithm and Particle Swarm Technique to Software Test Case Generation Abhishek Singh Department of Information Technology Amity School of Engineering and Technology Amity
More informationDuplicating and its Applications in Batch Scheduling
Duplicating and its Applications in Batch Scheduling Yuzhong Zhang 1 Chunsong Bai 1 Shouyang Wang 2 1 College of Operations Research and Management Sciences Qufu Normal University, Shandong 276826, China
More informationA Novel Web Optimization Technique using Enhanced Particle Swarm Optimization
A Novel Web Optimization Technique using Enhanced Particle Swarm Optimization P.N.Nesarajan Research Scholar, Erode Arts & Science College, Erode M.Venkatachalam, Ph.D Associate Professor & HOD of Electronics,
More informationComplex Network Visualization based on Voronoi Diagram and Smoothedparticle Hydrodynamics
Complex Network Visualization based on Voronoi Diagram and Smoothedparticle Hydrodynamics Zhao Wenbin 1, Zhao Zhengxu 2 1 School of Instrument Science and Engineering, Southeast University, Nanjing, Jiangsu
More informationFinding Liveness Errors with ACO
Hong Kong, June 16, 2008 1 / 24 Finding Liveness Errors with ACO Francisco Chicano and Enrique Alba Motivation Motivation Nowadays software is very complex An error in a software system can imply the
More informationAPPLICATION OF ADVANCED SEARCH METHODS FOR AUTOMOTIVE DATABUS SYSTEM SIGNAL INTEGRITY OPTIMIZATION
APPLICATION OF ADVANCED SEARCH METHODS FOR AUTOMOTIVE DATABUS SYSTEM SIGNAL INTEGRITY OPTIMIZATION Harald Günther 1, Stephan Frei 1, Thomas Wenzel, Wolfgang Mickisch 1 Technische Universität Dortmund,
More informationKnowledge Based Descriptive Neural Networks
Knowledge Based Descriptive Neural Networks J. T. Yao Department of Computer Science, University or Regina Regina, Saskachewan, CANADA S4S 0A2 Email: jtyao@cs.uregina.ca Abstract This paper presents a
More informationHybrid Data Envelopment Analysis and Neural Networks for Suppliers Efficiency Prediction and Ranking
1 st International Conference of Recent Trends in Information and Communication Technologies Hybrid Data Envelopment Analysis and Neural Networks for Suppliers Efficiency Prediction and Ranking Mohammadreza
More informationA New Approach in Software Cost Estimation with Hybrid of Bee Colony and Chaos Optimizations Algorithms
A New Approach in Software Cost Estimation with Hybrid of Bee Colony and Chaos Optimizations Algorithms Farhad Soleimanian Gharehchopogh 1 and Zahra Asheghi Dizaji 2 1 Department of Computer Engineering,
More informationHidden Markov Models in Bioinformatics. By Máthé Zoltán Kőrösi Zoltán 2006
Hidden Markov Models in Bioinformatics By Máthé Zoltán Kőrösi Zoltán 2006 Outline Markov Chain HMM (Hidden Markov Model) Hidden Markov Models in Bioinformatics Gene Finding Gene Finding Model Viterbi algorithm
More informationComparison of Various Particle Swarm Optimization based Algorithms in Cloud Computing
Comparison of Various Particle Swarm Optimization based Algorithms in Cloud Computing Er. Talwinder Kaur M.Tech (CSE) SSIET, Dera Bassi, Punjab, India Email talwinder_2@yahoo.co.in Er. Seema Pahwa Department
More informationManjeet Kaur Bhullar, Kiranbir Kaur Department of CSE, GNDU, Amritsar, Punjab, India
Volume 5, Issue 6, June 2015 ISSN: 2277 128X International Journal of Advanced Research in Computer Science and Software Engineering Research Paper Available online at: www.ijarcsse.com Multiple Pheromone
More informationEricsson T18s Voice Dialing Simulator
Ericsson T18s Voice Dialing Simulator Mauricio Aracena Kovacevic, Anna Dehlbom, Jakob Ekeberg, Guillaume Gariazzo, Eric Lästh and Vanessa Troncoso Dept. of Signals Sensors and Systems Royal Institute of
More informationChapter ML:XI (continued)
Chapter ML:XI (continued) XI. Cluster Analysis Data Mining Overview Cluster Analysis Basics Hierarchical Cluster Analysis Iterative Cluster Analysis DensityBased Cluster Analysis Cluster Evaluation Constrained
More informationOptional Insurance Compensation Rate Selection and Evaluation in Financial Institutions
, pp.233242 http://dx.doi.org/10.14257/ijunesst.2014.7.1.21 Optional Insurance Compensation Rate Selection and Evaluation in Financial Institutions Xu Zhikun 1, Wang Yanwen 2 and Liu Zhaohui 3 1, 2 College
More informationEffect of Using Neural Networks in GABased School Timetabling
Effect of Using Neural Networks in GABased School Timetabling JANIS ZUTERS Department of Computer Science University of Latvia Raina bulv. 19, Riga, LV1050 LATVIA janis.zuters@lu.lv Abstract:  The school
More informationTechnical Analysis on Financial Forecasting
Technical Analysis on Financial Forecasting SGopal Krishna Patro 1, Pragyan Parimita Sahoo 2, Ipsita Panda 3, Kishore Kumar Sahu 4 1,2,3,4 Department of CSE & IT, VSSUT, Burla, Odisha, India sgkpatro2008@gmailcom,
More informationManagement Science Letters
Management Science Letters 4 (2014) 905 912 Contents lists available at GrowingScience Management Science Letters homepage: www.growingscience.com/msl Measuring customer loyalty using an extended RFM and
More informationStudy on the Evaluation for the Knowledge Sharing Efficiency of the Knowledge Service Network System in Agile Supply Chain
Send Orders for Reprints to reprints@benthamscience.ae 384 The Open Cybernetics & Systemics Journal, 2015, 9, 384389 Open Access Study on the Evaluation for the Knowledge Sharing Efficiency of the Knowledge
More informationHidden Markov Models
8.47 Introduction to omputational Molecular Biology Lecture 7: November 4, 2004 Scribe: HanPang hiu Lecturer: Ross Lippert Editor: Russ ox Hidden Markov Models The G island phenomenon The nucleotide frequencies
More informationMultiObjective Supply Chain Model through an Ant Colony Optimization Approach
MultiObjective Supply Chain Model through an Ant Colony Optimization Approach Anamika K. Mittal L. D. College of Engineering, Ahmedabad, India Chirag S. Thaker L. D. College of Engineering, Ahmedabad,
More informationCLOUD DATABASE ROUTE SCHEDULING USING COMBANATION OF PARTICLE SWARM OPTIMIZATION AND GENETIC ALGORITHM
CLOUD DATABASE ROUTE SCHEDULING USING COMBANATION OF PARTICLE SWARM OPTIMIZATION AND GENETIC ALGORITHM *Shabnam Ghasemi 1 and Mohammad Kalantari 2 1 Deparment of Computer Engineering, Islamic Azad University,
More informationDesign call center management system of ecommerce based on BP neural network and multifractal
Available online www.jocpr.com Journal of Chemical and Pharmaceutical Research, 2014, 6(6):951956 Research Article ISSN : 09757384 CODEN(USA) : JCPRC5 Design call center management system of ecommerce
More informationTOWARD BIG DATA ANALYSIS WORKSHOP
TOWARD BIG DATA ANALYSIS WORKSHOP 邁 向 巨 量 資 料 分 析 研 討 會 摘 要 集 2015.06.0506 巨 量 資 料 之 矩 陣 視 覺 化 陳 君 厚 中 央 研 究 院 統 計 科 學 研 究 所 摘 要 視 覺 化 (Visualization) 與 探 索 式 資 料 分 析 (Exploratory Data Analysis, EDA)
More informationModeling of Knowledge Transfer in logistics Supply Chain Based on System Dynamics
, pp.377388 http://dx.doi.org/10.14257/ijunesst.2015.8.12.38 Modeling of Knowledge Transfer in logistics Supply Chain Based on System Dynamics Yang Bo School of Information Management Jiangxi University
More informationOptimize Position and Path Planning of Automated Optical Inspection
Journal of Computational Information Systems 8: 7 (2012) 2957 2963 Available at http://www.jofcis.com Optimize Position and Path Planning of Automated Optical Inspection Hao WU, Yongcong KUANG, Gaofei
More informationAdvanced Ensemble Strategies for Polynomial Models
Advanced Ensemble Strategies for Polynomial Models Pavel Kordík 1, Jan Černý 2 1 Dept. of Computer Science, Faculty of Information Technology, Czech Technical University in Prague, 2 Dept. of Computer
More informationSACOC: A spectralbased ACO clustering algorithm
SACOC: A spectralbased ACO clustering algorithm Héctor D. Menéndez, Fernando E. B. Otero, and David Camacho Abstract The application of ACObased algorithms in data mining is growing over the last few
More informationOptimal Tuning of PID Controller Using Meta Heuristic Approach
International Journal of Electronic and Electrical Engineering. ISSN 09742174, Volume 7, Number 2 (2014), pp. 171176 International Research Publication House http://www.irphouse.com Optimal Tuning of
More information