Discrete Hidden Markov Model Training Based on Variable Length Particle Swarm Optimization Algorithm


Xiaobin Li, Jiansheng Qian, Zhikai Zhao
School of Computer Science and Technology, China University of Mining and Technology, Xuzhou, Jiangsu, China
School of Computer Science and Technology, Jiangsu Normal University, Xuzhou, Jiangsu, China

Abstract

Expectation Maximization (EM) can be used to train a Discrete Hidden Markov Model (DHMM), but the state number must be specified in advance, and the method may cause the HMM parameters to converge to a local optimum. This paper puts forward a novel DHMM training method based on variable length particle swarm optimization. The method has two advantages: it optimizes the state number, and it drives the model parameters toward a global optimum. Experiments on a synthetic data set verify the efficiency of the method.

Keywords: Discrete Hidden Markov Model, Particle swarm optimization, Expectation maximization

1. Introduction

The Hidden Markov Model (HMM) has demonstrated its capacity in many areas [1, 2]. Training, i.e. parameter estimation, is the key problem in applying HMMs in practice. The widely used parameter estimation method is the Baum-Welch (BW) algorithm [3]. However, BW calculates slowly and converges to a local optimum, and the state number of the HMM must be specified in advance: too many states make the learned HMM overfit, while too few make it underfit. Many scholars have therefore put forward optimization algorithms for the HMM parameters, such as the genetic algorithm [4], simulated annealing [5], and particle swarm optimization [6]. At the same time, the BIC, AIC, and AIC3 criteria and cross validation are used to obtain the state number of the HMM.
The HMM training problem thus divides into two questions, state number optimization and parameter optimization, and the two questions influence each other. This paper presents a novel algorithm based on variable length particle swarm optimization for Discrete Hidden Markov Model (DHMM) training. The algorithm borrows ideas from paper [7], which introduces a variable length particle swarm optimization algorithm for social programming. The proposed algorithm has two distinct advantages. (1) The state number of the DHMM does not have to be specified in advance; the algorithm determines the optimal state number automatically according to an information criterion. (2) The DHMM parameters, including the transition and observation probabilities, converge approximately to a global optimum under the chosen state number rather than to a local optimum. The rest of the paper is organized as follows. Section 2 briefly reviews previous work on DHMM optimization; Section 3 explains in detail the variable length particle swarm algorithm applied to DHMM training; Section 4 reports optimization experiments on a synthetic data set; Section 5 draws conclusions.

2. Related Work

The state number of an HMM must be appointed before an HMM optimization procedure is carried out. Paper [8] raises a state number estimation method based on the BIC criterion. Paper [9] shows that for small sample sizes the AIC criterion is more suitable for HMM state number assessment.

International Journal of Digital Content Technology and its Applications (JDCTA), Volume 6, Number 20, November 2012, doi: /jdcta.vol6.issue
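As a concrete illustration of information-criterion-based state selection, the sketch below computes the BIC score for candidate DHMM state numbers. It is a hedged example, not the paper's procedure: the log-likelihood values, symbol count, and sample size are all hypothetical, and the free-parameter count uses the standard DHMM accounting of (k-1) initial, k(k-1) transition, and k(M-1) emission probabilities.

```python
import math

def dhmm_free_params(k, m):
    # Free parameters of a discrete HMM with k states and m symbols:
    # (k-1) initial probs + k*(k-1) transition entries + k*(m-1) emission entries.
    return (k - 1) + k * (k - 1) + k * (m - 1)

def dhmm_bic(log_lik, k, m, n):
    # BIC = -2*logL + p*ln(n); lower is better.
    return -2.0 * log_lik + dhmm_free_params(k, m) * math.log(n)

# Hypothetical log-likelihoods from models trained with k = 1..4 states
# on n = 10000 observations of an m = 4 symbol alphabet.
candidates = {1: -14000.0, 2: -12500.0, 3: -12450.0, 4: -12440.0}
m, n = 4, 10000
best_k = min(candidates, key=lambda k: dhmm_bic(candidates[k], k, m, n))
```

Note how the penalty term rejects k = 4 here even though its raw likelihood is slightly higher, which is exactly the overfitting control the criteria above provide.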

Paper [10] compares several state number assessment algorithms and concludes that the AIC3 criterion performs best. Paper [11] puts forward cross validation for HMM state number determination instead of the various information criteria. EM-based algorithms are prone to converging to a local optimum, so the PSO algorithm has been introduced for HMM training [12-14]. Experimental results in paper [6] show that PSO-based HMM training outperforms the BW algorithm and simulated annealing (SA) for protein multiple sequence alignment (MSA). In paper [15] an immune particle swarm optimization (IPSO) algorithm is proposed for HMM training; experimental results on MSA problems show that IPSO not only improves alignment capability but also reduces training cost. Paper [16] proposes a particle repayment and penalty method for HMM training and shows experimentally that it outperforms BW. Papers [17, 18] put forward quantum-behaved particle swarm optimization (QPSO) and apply it to HMM parameter estimation for MSA; compared with traditional PSO and BW, QPSO achieves a lower classification error rate on some experimental data sets. Paper [19] introduces an improved particle swarm optimization algorithm (IPSO) to optimize the HMM parameters. Paper [20] combines PSO and BW to optimize the HMM state number and probability matrices in two phases. However, all these existing HMM training techniques share one distinct drawback: the HMM state number must be appointed or calculated in advance, and only once it is determined can the model parameters be optimized by particle swarm optimization. In the classical PSO algorithm the particle length is fixed, while in several application areas the particle length should be variable.
Paper [7] presents a variable length particle swarm algorithm for social programming, where particles of unfixed length represent feasible solutions; during optimization, non-optimal particle lengths evolve toward the optimal particle length with a certain probability. Paper [21] proposes a PSO based on feasible solution sets, in which the conventional particle swarm arithmetic is replaced with logic operations on sets; the algorithm is successfully applied to clustering. In this paper we borrow ideas from papers [7, 21] and adapt them to Discrete Hidden Markov Model (DHMM) training, yielding a variable length particle swarm optimization (VLPSO) method with two obvious advantages: first, the DHMM state number does not need to be specified in advance but is determined by the algorithm according to an information criterion during the dynamic optimization procedure; second, the DHMM parameters converge to an approximate global optimum under the chosen state number, instead of the local optimum reached by the EM algorithm.

3. Variable length particle swarm optimization for DHMM training

3.1. Particle swarm optimization

Particle swarm optimization (PSO) is an evolutionary computing technique mainly inspired by the behaviour of bird flocks. There is no central control mechanism in the flock; the group's behaviour emerges from all particles combined. Each particle's action is mainly determined by two factors: its own historical experience and its social relationship with the other particles. PSO has been applied in many areas, such as DNA sequence alignment and various parameter optimization problems.
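The swarm behaviour just described can be sketched as a minimal classical PSO loop. This is an illustrative sketch, not the paper's implementation: the sphere objective, bounds, and all parameter values below are assumptions chosen for the demo.

```python
import random

def pso_minimize(f, dim, n_particles=20, rounds=100, w=0.7, c1=1.5, c2=1.5):
    """Minimal classical PSO: each particle tracks its personal best,
    and the swarm tracks a single global best."""
    random.seed(0)  # deterministic demo
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = pbest[pbest_f.index(min(pbest_f))][:]
    for _ in range(rounds):
        for i in range(n_particles):
            for j in range(dim):
                r1, r2 = random.random(), random.random()
                # velocity update: inertia + cognitive pull + social pull
                vel[i][j] = (w * vel[i][j]
                             + c1 * r1 * (pbest[i][j] - pos[i][j])
                             + c2 * r2 * (g[j] - pos[i][j]))
                pos[i][j] += vel[i][j]
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < f(g):
                    g = pos[i][:]
    return g

sphere = lambda x: sum(v * v for v in x)
best = pso_minimize(sphere, dim=3)
```

With these conventional settings the swarm settles near the sphere minimum at the origin, which is the same cooperative search dynamic the DHMM training below relies on.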
In PSO a particle is generally represented as a tuple with four elements:

p = (x_ij(t), v_ij(t), p_ij(t), g_ij(t))    (1)

In the tuple, t is the optimization round, i the particle index and j the dimension. x_ij(t) is the position of particle i at round t in dimension j; v_ij(t) is its speed; p_ij(t) is the best position in dimension j that particle i has visited up to round t; g_ij(t) is the best position in dimension j that any particle has visited up to round t. The particles evolve through the search space according to the following two formulas:

v_ij(t+1) = w*v_ij(t) + c1*r1j*(p_ij(t) - x_ij(t)) + c2*r2j*(g_ij(t) - x_ij(t))    (2)
x_ij(t+1) = x_ij(t) + v_ij(t+1)    (3)

where w is the inertia coefficient. In some improved versions of PSO this coefficient is reduced gradually during optimization, i.e. a larger inertia coefficient at the beginning and a gradually smaller one later. The numbers c1 and c2 are the local and global learning coefficients respectively, while r1j and r2j are random values redrawn in each dimension, with 0 <= r1j, r2j <= 1.

The classical PSO algorithm runs as follows: (1) Initialization: produce several particles randomly and assign them random initial positions and velocities. (2) Assessment: calculate the fitness of each particle in the population with a fitness function. (3) Obtain the local optimal particle: find the best solution each particle has reached so far. (4) Obtain the global optimal particle: find the best solution among all particles so far. (5) Update all particle positions and velocities. (6) Return to step (2). The algorithm keeps running until the round count reaches a predetermined threshold or a satisfactory solution is found.

3.2. DHMM training based on VLPSO

For a DHMM with N states and M observed symbols, a particle of length N + N*N + N*M can be constructed.
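The N + N*N + N*M layout just described can be sketched as a flat particle vector that is decoded into (pi, A, B) by row normalization. The helper name and the normalization scheme below are illustrative assumptions, not the paper's exact decoding.

```python
import random

def decode_particle(x, n, m):
    """Split a flat particle of length n + n*n + n*m into (pi, A, B),
    normalizing each probability row so it sums to 1."""
    vals = [abs(v) + 1e-12 for v in x]   # keep every entry strictly positive

    def rows(flat, width):
        out = []
        for r in range(0, len(flat), width):
            row = flat[r:r + width]
            s = sum(row)
            out.append([v / s for v in row])
        return out

    pi = rows(vals[:n], n)[0]                 # 1 x n initial distribution
    A = rows(vals[n:n + n * n], n)            # n x n transition matrix
    B = rows(vals[n + n * n:], m)             # n x m observation matrix
    return pi, A, B

random.seed(1)
n_states, n_symbols = 2, 4
particle = [random.random() for _ in range(n_states + n_states**2 + n_states * n_symbols)]
pi, A, B = decode_particle(particle, n_states, n_symbols)
```

Normalizing per row guarantees that any real-valued particle position maps to a valid stochastic model, so the PSO updates never leave the feasible region.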
The additive decomposition divides a particle into three parts: the first part holds the N-state initial probability distribution, the second the HMM transition probability distribution A, and the third the HMM observation probability distribution B. When the sequence is known, the number of observed symbols M is easily identified, but the state number N is difficult to determine. We therefore put forward a VLPSO algorithm to optimize the HMM. First, a five-element tuple represents a particle in VLPSO:

p = (x_ij(t), v_ij(t), p_ij(t), p_gj(t), p_aj(t))    (4)

Here t is the current optimization round, i the particle index and j the dimension. x_ij(t) is the position of particle i at round t in dimension j; v_ij(t) is its speed; p_ij(t) is the best position in dimension j that particle i has visited up to round t; p_gj(t) is the best position in dimension j that any particle has visited up to round t; p_aj(t) is the best position that any particle has visited up to round t regardless of dimension. The particles evolve through the search space according to the following two formulas:

v_ij(t+1) = w*v_ij(t) + c1*r1j*(p_ij(t) - x_ij(t)) + c2*r2j*(p_gj(t) - x_ij(t))    (5)
x_ij(t+1) = x_ij(t) + v_ij(t+1)    (6)

The coefficients have the same meaning as in classical PSO. In addition to the speed and position updates, the VLPSO algorithm contains a dimension alteration step. Assume particle i has state number n_i(t) = State(x_i(t)) at round t, while the global optimal state number is n_g(t) = State(p_a(t)). At round t+1 the state number moves toward the global optimal state number,

n_i(t+1) → n_g(t+1)    (7)

by the following alteration rule:

n_i(t+1) = n_i(t) + r*(n_g(t) - n_i(t))    (8)

where r is a random number with 0 <= r <= 1. When the global best particle's state number is greater than that of the particle being assessed, the particle increases its state number; otherwise it reduces it. Once the new state number is determined, the particle's position and velocity are reinitialized, and the local optimal particles and the global optimal particle are altered accordingly.

The main procedure of the VLPSO algorithm is as follows. (1) Initialization: produce several particles randomly and assign them random initial positions and velocities. (2) Assessment: calculate the fitness of each particle in the population with a fitness function. (3) Obtain the local optimal particle: find the best solution each particle has reached so far among particles of the same dimension. (4) Obtain the global optimal particle: find the best solution among all particles so far.
(5) Update all particle positions and velocities. (6) Update all particle dimensions. (7) Return to step (2). The algorithm keeps running until the round count reaches a predetermined threshold or a satisfactory solution is found.

3.3. Fitness function

The forward process can be applied to evaluate how well a given sequence is explained by an HMM. First a forward variable

α_t(i) = P(O1 O2 ... Ot, q_t = S_i | λ)

is defined as the probability of the partial observation sequence O1 O2 ... Ot (up to time t) with state S_i at time t, given the model. The forward variables are calculated as follows:

(1) Initialization:

α_1(i) = π_i b_i(O1),  1 <= i <= N    (9)

(2) Induction:

α_{t+1}(j) = [ Σ_{i=1}^{N} α_t(i) a_ij ] b_j(O_{t+1}),  1 <= t <= T-1, 1 <= j <= N    (10)

(3) Termination:

P(O | λ) = Σ_{i=1}^{N} α_T(i)    (11)

Because the computed value of P(O | λ) is extremely small, precision is easily lost if it is used directly as the fitness function. The log-likelihood Fitness(x_i) = log(P(O | λ)) is therefore applied to assess the fitness of an HMM. For L multiple observed sequences the averaged log-likelihood is used:

Fitness(x_i) = (1/L) Σ_{l=1}^{L} log(P(O^l | λ)),  O^l = [o_1, o_2, ..., o_t, ..., o_T]    (12)

This evaluation function does not consider the effect of the HMM state number. For VLPSO-based HMM training, however, the state number must be considered simultaneously and fitted into the likelihood evaluation. Several standard criteria measure statistical model fit, such as AIC and BIC. For likelihood evaluation of a DHMM, the observed symbol number M of a known sequence is constant, so it need not be taken into account; the DHMM state number N is the only free variable that has to be considered.

3.4. Particle dimension adjusting probability

In step (6) of the VLPSO algorithm, the particle dimension alters with an adjusting probability P_adjust after several rounds run with fixed dimension. Three strategies are proposed to determine P_adjust.

(1) VLPSO_FIX, the first strategy, specifies a fixed constant adjusting probability c:

P_adjust(t) = c,  0 <= c <= 1    (13)
(2) VLPSO_LIN, the second strategy, increases the probability linearly with the algorithm round. Let P_min be the minimum and P_max the maximum dimension adjusting probability, R the total number of rounds, and t the number of rounds elapsed since adjusting began:

P_adjust(t) = P_min + t * (P_max - P_min) / R    (14)

Under this strategy particles hold their dimension before round t, while afterwards their dimension gradually adjusts toward the global optimal particle dimension.

(3) VLPSO_EXP, the third strategy, varies P_adjust according to the difference between the global optimal particle fitness in successive rounds. Let the global optimal fitness in successive rounds be p_g(t) and p_g(t+1):

P_adjust(t+1) = exp(-(Fitness(p_g(t+1)) - Fitness(p_g(t))))    (15)

Under this strategy, when the fitness difference between successive global optima is large, the adjusting probability P_adjust is small, so at the beginning the particle dimensions have little opportunity to adjust. Later the difference between successive global optima becomes smaller, and the dimension adjusting probability grows, giving more opportunity to adjust particle dimensions.

4. Experiments

4.1. Data set

To evaluate the proposed VLPSO-based DHMM training method accurately, synthetic data sets are generated with Murphy's HMM toolbox. Four data sets (HMM1, HMM2, HMM3, HMM4) are generated, each with 100 sequences of length 100.

HMM1 has 2 states and 4 observed symbols, with parameters:
π = [0.5,0.5];
A = [0.2,0.8; 0.3,0.7];
B = [0.2,0.4,0.3,0.1; 0.1,0.5,0.2,0.2];

HMM2 has 4 states and 6 observed symbols, with parameters:
π = [0.25,0.25,0.25,0.25];
A = [0.2,0.6,0.1,0.1; 0.3,0.2,0.2,0.3; 0.3,0.2,0.2,0.3; 0.3,0.2,0.2,0.3];
B = [0.2,0.4,0.1,0.1,0.1,0.1; 0.1,0.2,0.2,0.2,0.1,0.2; 0.3,0.2,0.2,0.1,0.1,0.1; 0.3,0.2,0.2,0.1,0.1,0.1];

HMM3 has 6 states and 8 observed symbols, with parameters:
π = [0.25,0.25,0.2,0.1,0.1,0.1];
A = [0.1,0.1,0.1,0.1,0.1,0.5; 0.1,0.1,0.1,0.1,0.5,0.1; 0.1,0.1,0.1,0.5,0.1,0.1; 0.1,0.1,0.5,0.1,0.1,0.1; 0.1,0.5,0.1,0.1,0.1,0.1; 0.5,0.1,0.1,0.1,0.1,0.1];
B = [0.3,0.1,0.1,0.1,0.1,0.1,0.1,0.1; 0.1,0.3,0.1,0.1,0.1,0.1,0.1,0.1; 0.2,0.2,0.1,0.1,0.1,0.1,0.1,0.1; 0.15,0.25,0.1,0.1,0.1,0.1,0.1,0.1; 0.25,0.15,0.1,0.1,0.1,0.1,0.1,0.1; 0.0,0.4,0.1,0.1,0.1,0.1,0.1,0.1];

HMM4 has 8 states and 8 observed symbols, with parameters:
π = [0.25,0.1,0.15,0.1,0.1,0.1,0.1,0.1]';
A = [0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.3; 0.1,0.1,0.1,0.1,0.1,0.1,0.3,0.1; 0.1,0.1,0.1,0.1,0.1,0.3,0.1,0.1; 0.1,0.1,0.1,0.1,0.3,0.1,0.1,0.1; 0.1,0.1,0.1,0.3,0.1,0.1,0.1,0.1; 0.1,0.1,0.3,0.1,0.1,0.1,0.1,0.1; 0.1,0.3,0.1,0.1,0.1,0.1,0.1,0.1; 0.3,0.1,0.1,0.1,0.1,0.1,0.1,0.1];
B = [0.3,0.1,0.1,0.1,0.1,0.1,0.1,0.1; 0.1,0.3,0.1,0.1,0.1,0.1,0.1,0.1; 0.1,0.1,0.3,0.1,0.1,0.1,0.1,0.1; 0.1,0.1,0.1,0.3,0.1,0.1,0.1,0.1; 0.1,0.1,0.1,0.1,0.3,0.1,0.1,0.1; 0.1,0.1,0.1,0.1,0.1,0.3,0.1,0.1; 0.1,0.1,0.1,0.1,0.1,0.1,0.3,0.1; 0.1,0.1,0.1,0.1,0.1,0.1,0.1,0.3];

4.2. Algorithm parameters

Four algorithms are applied to train the HMM models and compared with each other: BW, VLPSO_FIX, VLPSO_LIN and VLPSO_EXP. The main parameters of the BW algorithm are the state number and the iteration number; for ease of calculation and comparison, the BW state number is directly assigned the same state number used to generate the synthetic data sets. The parameters of the VLPSO variants are listed in Table 1.

Table 1. Algorithm parameters of VLPSO
           VLPSO_FIX  VLPSO_LIN  VLPSO_EXP
N
R
R_adjust
w_start
w_end
C1

C2
P_adjust
P_max
P_min

Among these parameters, N is the particle number, R the total number of rounds, R_adjust the round at which particle dimensions begin to adjust, w_start and w_end the speed inertia weights at the start and end of the run, C1 and C2 the learning factors in the local and global solution spaces, P_adjust the particle dimension adjusting probability, and P_max and P_min its maximum and minimum values.

4.3. State number recognition

We apply the four algorithms to the data sets and summarize state number recognition. Table 2 lists only the rate of correctly recognized state numbers; VLPSO_LIN and VLPSO_EXP yield similar results.

Table 2. Percentage of correctly recognized states (%)
       BIC  AIC  AICC  AIC3  CAIC  AICu  HQC
HMM1
HMM2
HMM3
HMM4

The experimental results show that BIC and CAIC evaluate the likelihood well, while the percentage of correctly recognized state numbers declines as the state number of the sample increases.

4.4. Likelihood comparison

We compare the likelihoods calculated under the four algorithms. Table 3 lists only the comparison between VLPSO_FIX and BW; for the BW algorithm, the state number used to generate the data sets is applied directly.

Table 3. Log-likelihood under correctly recognized states (10^2)
       BW  BIC  AIC  AICC  AIC3  CAIC  AICu  HQC
HMM1
HMM2
HMM3
HMM4

The experimental results show that, under the same evaluation index, the VLPSO algorithm achieves a higher likelihood than the BW algorithm. The other algorithms, VLPSO_LIN and VLPSO_EXP, lead to similar conclusions.
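The log-likelihoods compared here are computed with the forward procedure of Section 3.3. The sketch below is a minimal scaled-forward implementation (the scaling trick is a standard assumption to avoid the underflow the paper notes); the demo uses the HMM1 parameters from Section 4.1, while the observation sequence itself is illustrative.

```python
import math

def forward_log_likelihood(obs, pi, A, B):
    """Scaled forward pass: returns log P(O | lambda) for a discrete HMM."""
    n = len(pi)
    # Initialization: alpha_1(i) = pi_i * b_i(O_1), then rescale to sum 1
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    c = sum(alpha)
    log_p = math.log(c)
    alpha = [a / c for a in alpha]
    # Induction: alpha_{t+1}(j) = [sum_i alpha_t(i) a_ij] * b_j(O_{t+1})
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
        c = sum(alpha)
        log_p += math.log(c)          # accumulate log of scaling factors
        alpha = [a / c for a in alpha]
    return log_p                       # termination: log P(O | lambda)

# HMM1 parameters from the data-set description in Section 4.1.
pi = [0.5, 0.5]
A = [[0.2, 0.8], [0.3, 0.7]]
B = [[0.2, 0.4, 0.3, 0.1], [0.1, 0.5, 0.2, 0.2]]
ll = forward_log_likelihood([1, 0, 3, 2, 1], pi, A, B)
```

Rescaling alpha at every step keeps the numbers in a safe floating-point range while the accumulated log of the scaling factors recovers the exact log-likelihood, which is what equation (12) averages over sequences.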

4.5. Evolution of state number

Figure 1. Number of particles by state number (four panels: HMM1, HMM2, HMM3, HMM4)

Here we illustrate the evolution of the state number under the BIC index and VLPSO_FIX in Figure 1. The figure contains four sub-figures corresponding to the four data sets. Three lines are plotted in each sub-figure, corresponding to rounds 30, 35 and 40. The other two algorithms give similar results. The experimental results show that before dimension adjusting begins (round 30), each particle is optimized with a constant state number; once the dimension adjusting process starts, all particles gradually alter their dimension toward the global optimal particle's dimension.

5. Conclusion

Each iteration of the BW algorithm increases the likelihood on the data set, so the final result may be a local rather than a global maximum. Moreover, an initial state number must be appointed in advance, which is impossible when there is little prior knowledge of the data set. This paper presents the VLPSO algorithm, an improved version of the traditional fixed length particle swarm algorithm, for DHMM training. From the experiments on synthetic data sets we conclude that the proposed VLPSO algorithm has two distinct advantages: first, the DHMM state number does not need to be specified in advance but is determined by the dynamic optimization; second, the DHMM parameters converge approximately to the global optimum under the chosen state number rather than to a local optimum.

The main question remaining in the VLPSO algorithm is the selection of the fitness function, which is tightly related to the data set. How to choose a suitable fitness function and make better use of VLPSO to train a DHMM with an appropriate structure is the focus of our next research.

6. Acknowledgement

This work was supported by the Chinese National High Technology Research and Development Program 863 under Grant No 2008AA062200 and Grant No 2012AA062103, by the Jiangsu Province Production Research Foundation under Grant No BY in China, by the Xuzhou Industry Science Program under Grant No XX10A001, and by the Jiangsu Normal University Foundation under Grant No 10XLA13. The authors also wish to thank the reviewers of this paper.

7. References

[1] Y. Yang, N. Cheng, M. Zhang, Research on activity recognition method based on human motion trajectory features, Journal of Convergence Information Technology, vol. 7, no. 1.
[2] Y. Yao, K. Xia, Y. Wu, Speech word recognizer based on the HMM algorithm, International Journal of Advancements in Computing Technology, vol. 3, no. 10.
[3] L. R. Rabiner, A tutorial on hidden Markov models and selected applications in speech recognition, Proceedings of the IEEE, vol. 77, no. 2.
[4] Y. X. Li, S. Kwong, Q. H. He, J. He, J. C. Yang, Genetic algorithm based simultaneous optimization of feature subsets and hidden Markov model parameters for discrimination between speech and non-speech events, International Journal of Speech Technology, vol. 13, no. 2.
[5] J.-S. Lee, C. H. Park, Hybrid simulated annealing and its application to optimization of hidden Markov models for visual speech recognition, IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 40, no. 4.
[6] T. K. Rasmussen, T.
Krink, Improved Hidden Markov Model training for multiple sequence alignment by a particle swarm optimization-evolutionary algorithm hybrid, Biosystems, vol. 72, no. 1-2, pp. 5-17.
[7] N. Nedjah, L. Mourelle, M. O'Neill, F. Leahy, A. Brabazon, Grammatical Swarm: a variable-length particle swarm algorithm, in Swarm Intelligent Systems, Studies in Computational Intelligence, Springer Berlin/Heidelberg.
[8] C. Keribin, Consistent estimation of the order of mixture models, Sankhyā: The Indian Journal of Statistics, Series A.
[9] O. Lukočienė, J. K. Vermunt, Determining the number of components in mixture models for hierarchical data, in Advances in Data Analysis, Data Handling and Business Intelligence, Studies in Classification, Data Analysis, and Knowledge Organization, Springer Berlin/Heidelberg.
[10] J. Dias, Latent class analysis and model selection, in From Data and Information Analysis to Knowledge Engineering, Studies in Classification, Data Analysis, and Knowledge Organization, Springer Berlin/Heidelberg.
[11] G. Celeux, J. B. Durand, Selecting hidden Markov model state number with cross-validated likelihood, Computational Statistics, vol. 23, no. 4.
[12] S. Aupetit, N. Monmarché, M. Slimane, Hidden Markov models training using population-based metaheuristics, in Advances in Metaheuristics for Hard Optimization.
[13] S. Phon-Amnuaisuk, Estimating HMM parameters using particle swarm optimisation, in Applications of Evolutionary Computing, Lecture Notes in Computer Science, Springer Berlin/Heidelberg.
[14] J. Meng, S. Xu, X. Wang, Y. Yi, H. Liu, Swarm-based DHMM training and application in time sequences classification, Journal of Computational Information Systems, vol. 6, no. 1.

[15] H.-W. Ge, Y.-C. Liang, A hidden Markov model and immune particle swarm optimization-based algorithm for multiple sequence alignment, in AI 2005: Advances in Artificial Intelligence, Lecture Notes in Computer Science, Springer Berlin/Heidelberg.
[16] M. Macaš, D. Novák, L. Lhotská, Constraints in particle swarm optimization of hidden Markov models, in Intelligent Data Engineering and Automated Learning - IDEAL 2006, Lecture Notes in Computer Science, Springer Berlin/Heidelberg.
[17] C. Li, H. Long, Y. Ding, J. Sun, W. Xu, Multiple sequence alignment by improved hidden Markov model training and quantum-behaved particle swarm optimization, in Life System Modeling and Intelligent Computing, Lecture Notes in Computer Science, Springer Berlin/Heidelberg.
[18] J. Sun, X. Wu, W. Fang, Y. Ding, H. Long, W. Xu, Multiple sequence alignment using the hidden Markov model trained by an improved quantum-behaved particle swarm optimization, Information Sciences, vol. 182, no. 1.
[19] C. Wang, D. Duan, X. Wang, An improved PSO and HMM algorithm for web information extraction, Journal of Henan Normal University (Natural Science), vol. 38, no. 5.
[20] J. Zhu, Y. Gao, Adaptive particle swarm optimization for hidden Markov model training, Computer Engineering and Design, vol. 31, no. 1.
[21] C. Veenhuis, A set-based particle swarm optimization method, in Parallel Problem Solving from Nature - PPSN X, Lecture Notes in Computer Science, Springer Berlin/Heidelberg.


FUZZY CLUSTERING ANALYSIS OF DATA MINING: APPLICATION TO AN ACCIDENT MINING SYSTEM International Journal of Innovative Computing, Information and Control ICIC International c 0 ISSN 34-48 Volume 8, Number 8, August 0 pp. 4 FUZZY CLUSTERING ANALYSIS OF DATA MINING: APPLICATION TO AN ACCIDENT

More information

Data Security Strategy Based on Artificial Immune Algorithm for Cloud Computing

Data Security Strategy Based on Artificial Immune Algorithm for Cloud Computing Appl. Math. Inf. Sci. 7, No. 1L, 149-153 (2013) 149 Applied Mathematics & Information Sciences An International Journal Data Security Strategy Based on Artificial Immune Algorithm for Cloud Computing Chen

More information

Predicting the Risk of Heart Attacks using Neural Network and Decision Tree

Predicting the Risk of Heart Attacks using Neural Network and Decision Tree Predicting the Risk of Heart Attacks using Neural Network and Decision Tree S.Florence 1, N.G.Bhuvaneswari Amma 2, G.Annapoorani 3, K.Malathi 4 PG Scholar, Indian Institute of Information Technology, Srirangam,

More information

An Order-Invariant Time Series Distance Measure [Position on Recent Developments in Time Series Analysis]

An Order-Invariant Time Series Distance Measure [Position on Recent Developments in Time Series Analysis] An Order-Invariant Time Series Distance Measure [Position on Recent Developments in Time Series Analysis] Stephan Spiegel and Sahin Albayrak DAI-Lab, Technische Universität Berlin, Ernst-Reuter-Platz 7,

More information

BMOA: Binary Magnetic Optimization Algorithm

BMOA: Binary Magnetic Optimization Algorithm International Journal of Machine Learning and Computing Vol. 2 No. 3 June 22 BMOA: Binary Magnetic Optimization Algorithm SeyedAli Mirjalili and Siti Zaiton Mohd Hashim Abstract Recently the behavior of

More information

Intrusion Detection via Machine Learning for SCADA System Protection

Intrusion Detection via Machine Learning for SCADA System Protection Intrusion Detection via Machine Learning for SCADA System Protection S.L.P. Yasakethu Department of Computing, University of Surrey, Guildford, GU2 7XH, UK. s.l.yasakethu@surrey.ac.uk J. Jiang Department

More information

Wireless Sensor Networks Coverage Optimization based on Improved AFSA Algorithm

Wireless Sensor Networks Coverage Optimization based on Improved AFSA Algorithm , pp. 99-108 http://dx.doi.org/10.1457/ijfgcn.015.8.1.11 Wireless Sensor Networks Coverage Optimization based on Improved AFSA Algorithm Wang DaWei and Wang Changliang Zhejiang Industry Polytechnic College

More information

14.10.2014. Overview. Swarms in nature. Fish, birds, ants, termites, Introduction to swarm intelligence principles Particle Swarm Optimization (PSO)

14.10.2014. Overview. Swarms in nature. Fish, birds, ants, termites, Introduction to swarm intelligence principles Particle Swarm Optimization (PSO) Overview Kyrre Glette kyrrehg@ifi INF3490 Swarm Intelligence Particle Swarm Optimization Introduction to swarm intelligence principles Particle Swarm Optimization (PSO) 3 Swarms in nature Fish, birds,

More information

Dynamic Generation of Test Cases with Metaheuristics

Dynamic Generation of Test Cases with Metaheuristics Dynamic Generation of Test Cases with Metaheuristics Laura Lanzarini, Juan Pablo La Battaglia III-LIDI (Institute of Research in Computer Science LIDI) Faculty of Computer Sciences. National University

More information

International Journal of Advanced Research in Electrical, Electronics and Instrumentation Engineering

International Journal of Advanced Research in Electrical, Electronics and Instrumentation Engineering DOI: 10.15662/ijareeie.2014.0307061 Economic Dispatch of Power System Optimization with Power Generation Schedule Using Evolutionary Technique Girish Kumar 1, Rameshwar singh 2 PG Student [Control system],

More information

Programming Risk Assessment Models for Online Security Evaluation Systems

Programming Risk Assessment Models for Online Security Evaluation Systems Programming Risk Assessment Models for Online Security Evaluation Systems Ajith Abraham 1, Crina Grosan 12, Vaclav Snasel 13 1 Machine Intelligence Research Labs, MIR Labs, http://www.mirlabs.org 2 Babes-Bolyai

More information

Optimization of PID parameters with an improved simplex PSO

Optimization of PID parameters with an improved simplex PSO Li et al. Journal of Inequalities and Applications (2015) 2015:325 DOI 10.1186/s13660-015-0785-2 R E S E A R C H Open Access Optimization of PID parameters with an improved simplex PSO Ji-min Li 1, Yeong-Cheng

More information

A Load Balancing Algorithm based on the Variation Trend of Entropy in Homogeneous Cluster

A Load Balancing Algorithm based on the Variation Trend of Entropy in Homogeneous Cluster , pp.11-20 http://dx.doi.org/10.14257/ ijgdc.2014.7.2.02 A Load Balancing Algorithm based on the Variation Trend of Entropy in Homogeneous Cluster Kehe Wu 1, Long Chen 2, Shichao Ye 2 and Yi Li 2 1 Beijing

More information

Metaheuristics in Big Data: An Approach to Railway Engineering

Metaheuristics in Big Data: An Approach to Railway Engineering Metaheuristics in Big Data: An Approach to Railway Engineering Silvia Galván Núñez 1,2, and Prof. Nii Attoh-Okine 1,3 1 Department of Civil and Environmental Engineering University of Delaware, Newark,

More information

International Journal of Computer Science Trends and Technology (IJCST) Volume 2 Issue 3, May-Jun 2014

International Journal of Computer Science Trends and Technology (IJCST) Volume 2 Issue 3, May-Jun 2014 RESEARCH ARTICLE OPEN ACCESS A Survey of Data Mining: Concepts with Applications and its Future Scope Dr. Zubair Khan 1, Ashish Kumar 2, Sunny Kumar 3 M.Tech Research Scholar 2. Department of Computer

More information

Fault Analysis in Software with the Data Interaction of Classes

Fault Analysis in Software with the Data Interaction of Classes , pp.189-196 http://dx.doi.org/10.14257/ijsia.2015.9.9.17 Fault Analysis in Software with the Data Interaction of Classes Yan Xiaobo 1 and Wang Yichen 2 1 Science & Technology on Reliability & Environmental

More information

Performance Analysis of Data Mining Techniques for Improving the Accuracy of Wind Power Forecast Combination

Performance Analysis of Data Mining Techniques for Improving the Accuracy of Wind Power Forecast Combination Performance Analysis of Data Mining Techniques for Improving the Accuracy of Wind Power Forecast Combination Ceyda Er Koksoy 1, Mehmet Baris Ozkan 1, Dilek Küçük 1 Abdullah Bestil 1, Sena Sonmez 1, Serkan

More information

International Journal of Emerging Technologies in Computational and Applied Sciences (IJETCAS) www.iasir.net

International Journal of Emerging Technologies in Computational and Applied Sciences (IJETCAS) www.iasir.net International Association of Scientific Innovation and Research (IASIR) (An Association Unifying the Sciences, Engineering, and Applied Research) International Journal of Emerging Technologies in Computational

More information

Performance Evaluation of Task Scheduling in Cloud Environment Using Soft Computing Algorithms

Performance Evaluation of Task Scheduling in Cloud Environment Using Soft Computing Algorithms 387 Performance Evaluation of Task Scheduling in Cloud Environment Using Soft Computing Algorithms 1 R. Jemina Priyadarsini, 2 Dr. L. Arockiam 1 Department of Computer science, St. Joseph s College, Trichirapalli,

More information

PLAANN as a Classification Tool for Customer Intelligence in Banking

PLAANN as a Classification Tool for Customer Intelligence in Banking PLAANN as a Classification Tool for Customer Intelligence in Banking EUNITE World Competition in domain of Intelligent Technologies The Research Report Ireneusz Czarnowski and Piotr Jedrzejowicz Department

More information

Genetic Algorithm Based Interconnection Network Topology Optimization Analysis

Genetic Algorithm Based Interconnection Network Topology Optimization Analysis Genetic Algorithm Based Interconnection Network Topology Optimization Analysis 1 WANG Peng, 2 Wang XueFei, 3 Wu YaMing 1,3 College of Information Engineering, Suihua University, Suihua Heilongjiang, 152061

More information

International Journal of Software and Web Sciences (IJSWS) www.iasir.net

International Journal of Software and Web Sciences (IJSWS) www.iasir.net International Association of Scientific Innovation and Research (IASIR) (An Association Unifying the Sciences, Engineering, and Applied Research) ISSN (Print): 2279-0063 ISSN (Online): 2279-0071 International

More information

Automatic Mining of Internet Translation Reference Knowledge Based on Multiple Search Engines

Automatic Mining of Internet Translation Reference Knowledge Based on Multiple Search Engines , 22-24 October, 2014, San Francisco, USA Automatic Mining of Internet Translation Reference Knowledge Based on Multiple Search Engines Baosheng Yin, Wei Wang, Ruixue Lu, Yang Yang Abstract With the increasing

More information

U.P.B. Sci. Bull., Series C, Vol. 77, Iss. 1, 2015 ISSN 2286 3540

U.P.B. Sci. Bull., Series C, Vol. 77, Iss. 1, 2015 ISSN 2286 3540 U.P.B. Sci. Bull., Series C, Vol. 77, Iss. 1, 2015 ISSN 2286 3540 ENTERPRISE FINANCIAL DISTRESS PREDICTION BASED ON BACKWARD PROPAGATION NEURAL NETWORK: AN EMPIRICAL STUDY ON THE CHINESE LISTED EQUIPMENT

More information

Auxiliary Variables in Mixture Modeling: 3-Step Approaches Using Mplus

Auxiliary Variables in Mixture Modeling: 3-Step Approaches Using Mplus Auxiliary Variables in Mixture Modeling: 3-Step Approaches Using Mplus Tihomir Asparouhov and Bengt Muthén Mplus Web Notes: No. 15 Version 8, August 5, 2014 1 Abstract This paper discusses alternatives

More information

A New Nature-inspired Algorithm for Load Balancing

A New Nature-inspired Algorithm for Load Balancing A New Nature-inspired Algorithm for Load Balancing Xiang Feng East China University of Science and Technology Shanghai, China 200237 Email: xfeng{@ecusteducn, @cshkuhk} Francis CM Lau The University of

More information

QoS Guaranteed Intelligent Routing Using Hybrid PSO-GA in Wireless Mesh Networks

QoS Guaranteed Intelligent Routing Using Hybrid PSO-GA in Wireless Mesh Networks BULGARIAN ACADEMY OF SCIENCES CYBERNETICS AND INFORMATION TECHNOLOGIES Volume 15, No 1 Sofia 2015 Print ISSN: 1311-9702; Online ISSN: 1314-4081 DOI: 10.1515/cait-2015-0007 QoS Guaranteed Intelligent Routing

More information

Knowledge Acquisition Approach Based on Rough Set in Online Aided Decision System for Food Processing Quality and Safety

Knowledge Acquisition Approach Based on Rough Set in Online Aided Decision System for Food Processing Quality and Safety , pp. 381-388 http://dx.doi.org/10.14257/ijunesst.2014.7.6.33 Knowledge Acquisition Approach Based on Rough Set in Online Aided ecision System for Food Processing Quality and Safety Liu Peng, Liu Wen,

More information

Learning in Abstract Memory Schemes for Dynamic Optimization

Learning in Abstract Memory Schemes for Dynamic Optimization Fourth International Conference on Natural Computation Learning in Abstract Memory Schemes for Dynamic Optimization Hendrik Richter HTWK Leipzig, Fachbereich Elektrotechnik und Informationstechnik, Institut

More information

Projects - Neural and Evolutionary Computing

Projects - Neural and Evolutionary Computing Projects - Neural and Evolutionary Computing 2014-2015 I. Application oriented topics 1. Task scheduling in distributed systems. The aim is to assign a set of (independent or correlated) tasks to some

More information

Analysis of Model and Key Technology for P2P Network Route Security Evaluation with 2-tuple Linguistic Information

Analysis of Model and Key Technology for P2P Network Route Security Evaluation with 2-tuple Linguistic Information Journal of Computational Information Systems 9: 14 2013 5529 5534 Available at http://www.jofcis.com Analysis of Model and Key Technology for P2P Network Route Security Evaluation with 2-tuple Linguistic

More information

A Hybrid Tabu Search Method for Assembly Line Balancing

A Hybrid Tabu Search Method for Assembly Line Balancing Proceedings of the 7th WSEAS International Conference on Simulation, Modelling and Optimization, Beijing, China, September 15-17, 2007 443 A Hybrid Tabu Search Method for Assembly Line Balancing SUPAPORN

More information

EFFICIENT DATA PRE-PROCESSING FOR DATA MINING

EFFICIENT DATA PRE-PROCESSING FOR DATA MINING EFFICIENT DATA PRE-PROCESSING FOR DATA MINING USING NEURAL NETWORKS JothiKumar.R 1, Sivabalan.R.V 2 1 Research scholar, Noorul Islam University, Nagercoil, India Assistant Professor, Adhiparasakthi College

More information

Constrained Classification of Large Imbalanced Data by Logistic Regression and Genetic Algorithm

Constrained Classification of Large Imbalanced Data by Logistic Regression and Genetic Algorithm Constrained Classification of Large Imbalanced Data by Logistic Regression and Genetic Algorithm Martin Hlosta, Rostislav Stríž, Jan Kupčík, Jaroslav Zendulka, and Tomáš Hruška A. Imbalanced Data Classification

More information

A Health Degree Evaluation Algorithm for Equipment Based on Fuzzy Sets and the Improved SVM

A Health Degree Evaluation Algorithm for Equipment Based on Fuzzy Sets and the Improved SVM Journal of Computational Information Systems 10: 17 (2014) 7629 7635 Available at http://www.jofcis.com A Health Degree Evaluation Algorithm for Equipment Based on Fuzzy Sets and the Improved SVM Tian

More information

Hybrid Algorithm using the advantage of ACO and Cuckoo Search for Job Scheduling

Hybrid Algorithm using the advantage of ACO and Cuckoo Search for Job Scheduling Hybrid Algorithm using the advantage of ACO and Cuckoo Search for Job Scheduling R.G. Babukartik 1, P. Dhavachelvan 1 1 Department of Computer Science, Pondicherry University, Pondicherry, India {r.g.babukarthik,

More information

Using Mixtures-of-Distributions models to inform farm size selection decisions in representative farm modelling. Philip Kostov and Seamus McErlean

Using Mixtures-of-Distributions models to inform farm size selection decisions in representative farm modelling. Philip Kostov and Seamus McErlean Using Mixtures-of-Distributions models to inform farm size selection decisions in representative farm modelling. by Philip Kostov and Seamus McErlean Working Paper, Agricultural and Food Economics, Queen

More information

A Hybrid Model of Particle Swarm Optimization (PSO) and Artificial Bee Colony (ABC) Algorithm for Test Case Optimization

A Hybrid Model of Particle Swarm Optimization (PSO) and Artificial Bee Colony (ABC) Algorithm for Test Case Optimization A Hybrid Model of Particle Swarm Optimization (PSO) and Artificial Bee Colony (ABC) Algorithm for Test Case Optimization Abraham Kiran Joseph a, Dr. G. Radhamani b * a Research Scholar, Dr.G.R Damodaran

More information

Hardware Implementation of Probabilistic State Machine for Word Recognition

Hardware Implementation of Probabilistic State Machine for Word Recognition IJECT Vo l. 4, Is s u e Sp l - 5, Ju l y - Se p t 2013 ISSN : 2230-7109 (Online) ISSN : 2230-9543 (Print) Hardware Implementation of Probabilistic State Machine for Word Recognition 1 Soorya Asokan, 2

More information

Sensors & Transducers 2015 by IFSA Publishing, S. L. http://www.sensorsportal.com

Sensors & Transducers 2015 by IFSA Publishing, S. L. http://www.sensorsportal.com Sensors & Transducers 2015 by IFSA Publishing, S. L. http://www.sensorsportal.com A Dynamic Deployment Policy of Slave Controllers for Software Defined Network Yongqiang Yang and Gang Xu College of Computer

More information

Improved PSO-based Task Scheduling Algorithm in Cloud Computing

Improved PSO-based Task Scheduling Algorithm in Cloud Computing Journal of Information & Computational Science 9: 13 (2012) 3821 3829 Available at http://www.joics.com Improved PSO-based Tas Scheduling Algorithm in Cloud Computing Shaobin Zhan, Hongying Huo Shenzhen

More information

Method of Fault Detection in Cloud Computing Systems

Method of Fault Detection in Cloud Computing Systems , pp.205-212 http://dx.doi.org/10.14257/ijgdc.2014.7.3.21 Method of Fault Detection in Cloud Computing Systems Ying Jiang, Jie Huang, Jiaman Ding and Yingli Liu Yunnan Key Lab of Computer Technology Application,

More information

Software Project Planning and Resource Allocation Using Ant Colony Optimization with Uncertainty Handling

Software Project Planning and Resource Allocation Using Ant Colony Optimization with Uncertainty Handling Software Project Planning and Resource Allocation Using Ant Colony Optimization with Uncertainty Handling Vivek Kurien1, Rashmi S Nair2 PG Student, Dept of Computer Science, MCET, Anad, Tvm, Kerala, India

More information

Introduction to Algorithmic Trading Strategies Lecture 2

Introduction to Algorithmic Trading Strategies Lecture 2 Introduction to Algorithmic Trading Strategies Lecture 2 Hidden Markov Trading Model Haksun Li haksun.li@numericalmethod.com www.numericalmethod.com Outline Carry trade Momentum Valuation CAPM Markov chain

More information

ANT COLONY OPTIMIZATION ALGORITHM FOR RESOURCE LEVELING PROBLEM OF CONSTRUCTION PROJECT

ANT COLONY OPTIMIZATION ALGORITHM FOR RESOURCE LEVELING PROBLEM OF CONSTRUCTION PROJECT ANT COLONY OPTIMIZATION ALGORITHM FOR RESOURCE LEVELING PROBLEM OF CONSTRUCTION PROJECT Ying XIONG 1, Ya Ping KUANG 2 1. School of Economics and Management, Being Jiaotong Univ., Being, China. 2. College

More information

A Novel Feature Selection Method Based on an Integrated Data Envelopment Analysis and Entropy Mode

A Novel Feature Selection Method Based on an Integrated Data Envelopment Analysis and Entropy Mode A Novel Feature Selection Method Based on an Integrated Data Envelopment Analysis and Entropy Mode Seyed Mojtaba Hosseini Bamakan, Peyman Gholami RESEARCH CENTRE OF FICTITIOUS ECONOMY & DATA SCIENCE UNIVERSITY

More information

Research Article Service Composition Optimization Using Differential Evolution and Opposition-based Learning

Research Article Service Composition Optimization Using Differential Evolution and Opposition-based Learning Research Journal of Applied Sciences, Engineering and Technology 11(2): 229-234, 2015 ISSN: 2040-7459; e-issn: 2040-7467 2015 Maxwell Scientific Publication Corp. Submitted: May 20, 2015 Accepted: June

More information

HYBRID ACO-IWD OPTIMIZATION ALGORITHM FOR MINIMIZING WEIGHTED FLOWTIME IN CLOUD-BASED PARAMETER SWEEP EXPERIMENTS

HYBRID ACO-IWD OPTIMIZATION ALGORITHM FOR MINIMIZING WEIGHTED FLOWTIME IN CLOUD-BASED PARAMETER SWEEP EXPERIMENTS HYBRID ACO-IWD OPTIMIZATION ALGORITHM FOR MINIMIZING WEIGHTED FLOWTIME IN CLOUD-BASED PARAMETER SWEEP EXPERIMENTS R. Angel Preethima 1, Margret Johnson 2 1 Student, Computer Science and Engineering, Karunya

More information

Binary Ant Colony Evolutionary Algorithm

Binary Ant Colony Evolutionary Algorithm Weiqing Xiong Liuyi Wang Chenyang Yan School of Information Science and Engineering Ningbo University, Ningbo 35 China Weiqing,xwqdds@tom.com, Liuyi,jameswang@hotmail.com School Information and Electrical

More information

A Binary Model on the Basis of Imperialist Competitive Algorithm in Order to Solve the Problem of Knapsack 1-0

A Binary Model on the Basis of Imperialist Competitive Algorithm in Order to Solve the Problem of Knapsack 1-0 212 International Conference on System Engineering and Modeling (ICSEM 212) IPCSIT vol. 34 (212) (212) IACSIT Press, Singapore A Binary Model on the Basis of Imperialist Competitive Algorithm in Order

More information

Méta-heuristiques pour l optimisation

Méta-heuristiques pour l optimisation Méta-heuristiques pour l optimisation Differential Evolution (DE) Particle Swarm Optimization (PSO) Alain Dutech Equipe MAIA - LORIA - INRIA Nancy, France Web : http://maia.loria.fr Mail : Alain.Dutech@loria.fr

More information

Big Data Analytics of Multi-Relationship Online Social Network Based on Multi-Subnet Composited Complex Network

Big Data Analytics of Multi-Relationship Online Social Network Based on Multi-Subnet Composited Complex Network , pp.273-284 http://dx.doi.org/10.14257/ijdta.2015.8.5.24 Big Data Analytics of Multi-Relationship Online Social Network Based on Multi-Subnet Composited Complex Network Gengxin Sun 1, Sheng Bin 2 and

More information

A No el Probability Binary Particle Swarm Optimization Algorithm and Its Application

A No el Probability Binary Particle Swarm Optimization Algorithm and Its Application 28 JOURAL OF SOFTWARE, VOL. 3, O. 9, DECEMBER 2008 A o el Probability Binary Particle Swarm Optimization Algorithm and Its Application Ling Wang* School of Mechatronics and Automation, Shanghai University,

More information

STUDY OF PROJECT SCHEDULING AND RESOURCE ALLOCATION USING ANT COLONY OPTIMIZATION 1

STUDY OF PROJECT SCHEDULING AND RESOURCE ALLOCATION USING ANT COLONY OPTIMIZATION 1 STUDY OF PROJECT SCHEDULING AND RESOURCE ALLOCATION USING ANT COLONY OPTIMIZATION 1 Prajakta Joglekar, 2 Pallavi Jaiswal, 3 Vandana Jagtap Maharashtra Institute of Technology, Pune Email: 1 somanprajakta@gmail.com,

More information

Class-specific Sparse Coding for Learning of Object Representations

Class-specific Sparse Coding for Learning of Object Representations Class-specific Sparse Coding for Learning of Object Representations Stephan Hasler, Heiko Wersing, and Edgar Körner Honda Research Institute Europe GmbH Carl-Legien-Str. 30, 63073 Offenbach am Main, Germany

More information

A New Method for Traffic Forecasting Based on the Data Mining Technology with Artificial Intelligent Algorithms

A New Method for Traffic Forecasting Based on the Data Mining Technology with Artificial Intelligent Algorithms Research Journal of Applied Sciences, Engineering and Technology 5(12): 3417-3422, 213 ISSN: 24-7459; e-issn: 24-7467 Maxwell Scientific Organization, 213 Submitted: October 17, 212 Accepted: November

More information

Master's projects at ITMO University. Daniil Chivilikhin PhD Student @ ITMO University

Master's projects at ITMO University. Daniil Chivilikhin PhD Student @ ITMO University Master's projects at ITMO University Daniil Chivilikhin PhD Student @ ITMO University General information Guidance from our lab's researchers Publishable results 2 Research areas Research at ITMO Evolutionary

More information

A hybrid Approach of Genetic Algorithm and Particle Swarm Technique to Software Test Case Generation

A hybrid Approach of Genetic Algorithm and Particle Swarm Technique to Software Test Case Generation A hybrid Approach of Genetic Algorithm and Particle Swarm Technique to Software Test Case Generation Abhishek Singh Department of Information Technology Amity School of Engineering and Technology Amity

More information

Duplicating and its Applications in Batch Scheduling

Duplicating and its Applications in Batch Scheduling Duplicating and its Applications in Batch Scheduling Yuzhong Zhang 1 Chunsong Bai 1 Shouyang Wang 2 1 College of Operations Research and Management Sciences Qufu Normal University, Shandong 276826, China

More information

A Novel Web Optimization Technique using Enhanced Particle Swarm Optimization

A Novel Web Optimization Technique using Enhanced Particle Swarm Optimization A Novel Web Optimization Technique using Enhanced Particle Swarm Optimization P.N.Nesarajan Research Scholar, Erode Arts & Science College, Erode M.Venkatachalam, Ph.D Associate Professor & HOD of Electronics,

More information

Complex Network Visualization based on Voronoi Diagram and Smoothed-particle Hydrodynamics

Complex Network Visualization based on Voronoi Diagram and Smoothed-particle Hydrodynamics Complex Network Visualization based on Voronoi Diagram and Smoothed-particle Hydrodynamics Zhao Wenbin 1, Zhao Zhengxu 2 1 School of Instrument Science and Engineering, Southeast University, Nanjing, Jiangsu

More information

Finding Liveness Errors with ACO

Finding Liveness Errors with ACO Hong Kong, June 1-6, 2008 1 / 24 Finding Liveness Errors with ACO Francisco Chicano and Enrique Alba Motivation Motivation Nowadays software is very complex An error in a software system can imply the

More information

APPLICATION OF ADVANCED SEARCH- METHODS FOR AUTOMOTIVE DATA-BUS SYSTEM SIGNAL INTEGRITY OPTIMIZATION

APPLICATION OF ADVANCED SEARCH- METHODS FOR AUTOMOTIVE DATA-BUS SYSTEM SIGNAL INTEGRITY OPTIMIZATION APPLICATION OF ADVANCED SEARCH- METHODS FOR AUTOMOTIVE DATA-BUS SYSTEM SIGNAL INTEGRITY OPTIMIZATION Harald Günther 1, Stephan Frei 1, Thomas Wenzel, Wolfgang Mickisch 1 Technische Universität Dortmund,

More information

Knowledge Based Descriptive Neural Networks

Knowledge Based Descriptive Neural Networks Knowledge Based Descriptive Neural Networks J. T. Yao Department of Computer Science, University or Regina Regina, Saskachewan, CANADA S4S 0A2 Email: jtyao@cs.uregina.ca Abstract This paper presents a

More information

Hybrid Data Envelopment Analysis and Neural Networks for Suppliers Efficiency Prediction and Ranking

Hybrid Data Envelopment Analysis and Neural Networks for Suppliers Efficiency Prediction and Ranking 1 st International Conference of Recent Trends in Information and Communication Technologies Hybrid Data Envelopment Analysis and Neural Networks for Suppliers Efficiency Prediction and Ranking Mohammadreza

More information

A New Approach in Software Cost Estimation with Hybrid of Bee Colony and Chaos Optimizations Algorithms

A New Approach in Software Cost Estimation with Hybrid of Bee Colony and Chaos Optimizations Algorithms A New Approach in Software Cost Estimation with Hybrid of Bee Colony and Chaos Optimizations Algorithms Farhad Soleimanian Gharehchopogh 1 and Zahra Asheghi Dizaji 2 1 Department of Computer Engineering,

More information

Hidden Markov Models in Bioinformatics. By Máthé Zoltán Kőrösi Zoltán 2006

Hidden Markov Models in Bioinformatics. By Máthé Zoltán Kőrösi Zoltán 2006 Hidden Markov Models in Bioinformatics By Máthé Zoltán Kőrösi Zoltán 2006 Outline Markov Chain HMM (Hidden Markov Model) Hidden Markov Models in Bioinformatics Gene Finding Gene Finding Model Viterbi algorithm

More information

Comparison of Various Particle Swarm Optimization based Algorithms in Cloud Computing

Comparison of Various Particle Swarm Optimization based Algorithms in Cloud Computing Comparison of Various Particle Swarm Optimization based Algorithms in Cloud Computing Er. Talwinder Kaur M.Tech (CSE) SSIET, Dera Bassi, Punjab, India Email- talwinder_2@yahoo.co.in Er. Seema Pahwa Department

More information

Manjeet Kaur Bhullar, Kiranbir Kaur Department of CSE, GNDU, Amritsar, Punjab, India

Manjeet Kaur Bhullar, Kiranbir Kaur Department of CSE, GNDU, Amritsar, Punjab, India Volume 5, Issue 6, June 2015 ISSN: 2277 128X International Journal of Advanced Research in Computer Science and Software Engineering Research Paper Available online at: www.ijarcsse.com Multiple Pheromone

More information

Ericsson T18s Voice Dialing Simulator

Ericsson T18s Voice Dialing Simulator Ericsson T18s Voice Dialing Simulator Mauricio Aracena Kovacevic, Anna Dehlbom, Jakob Ekeberg, Guillaume Gariazzo, Eric Lästh and Vanessa Troncoso Dept. of Signals Sensors and Systems Royal Institute of

More information

Chapter ML:XI (continued)

Chapter ML:XI (continued) Chapter ML:XI (continued) XI. Cluster Analysis Data Mining Overview Cluster Analysis Basics Hierarchical Cluster Analysis Iterative Cluster Analysis Density-Based Cluster Analysis Cluster Evaluation Constrained

More information

Optional Insurance Compensation Rate Selection and Evaluation in Financial Institutions

Optional Insurance Compensation Rate Selection and Evaluation in Financial Institutions , pp.233-242 http://dx.doi.org/10.14257/ijunesst.2014.7.1.21 Optional Insurance Compensation Rate Selection and Evaluation in Financial Institutions Xu Zhikun 1, Wang Yanwen 2 and Liu Zhaohui 3 1, 2 College

More information
