Large Scale Extreme Learning Machine using MapReduce
*1 Li Dong, 2 Pan Zhisong, 3 Deng Zhantao, 4 Zhang Yanyan
Institute of Command Automation, PLA University of Science and Technology, Nanjing, 210007, Jiangsu Province, P.R. China

International Journal of Digital Content Technology and its Applications (JDCTA), Volume 6, Number 20, November 2012, doi:10.4156/jdcta.vol6.issue20.7

Abstract

Extreme learning machine (ELM) is a new method in neural networks. Contrasted with conventional gradient-based algorithms such as BP, it can remarkably shorten the training time, since all the learning is done in a single pass. However, the algorithm cannot deal with large scale datasets because of the memory limitation. Here we implement a large scale ELM based on MapReduce, the new parallel programming model. The foundation of the algorithm is parallel matrix multiplication, which has been discussed elsewhere, but we give the whole computation and I/O cost in detail. Experiments on large scale datasets show the scalability of this method.

Keywords: ELM, MapReduce, Large scale

1. Introduction

We are now facing an explosion of massive information, and people have an emerging demand to deal with big data. Recently, large scale machine learning has received much attention, building on the performance that machine learning has shown on smaller data over the past decades. Many traditional algorithms simply fail on large scale data because they cannot load all the data into memory at once, while they are designed under the hypothesis that all data can be read. There are roughly two classes of approaches which can work: streaming the data to an online learning algorithm, and parallelizing a batch-learning algorithm [1]. This paper focuses on the second approach.

MapReduce is a parallel programming model published by Google in 2004 [2], which lets programs run on large clusters built from commodity computers as well as on multi-core systems. This technology simplifies processing big data by supplying high level interfaces and hiding system related details. Using MapReduce to parallelize machine learning algorithms begins with Chu et al [3], who concluded that algorithms fitting the Statistical Query model can be written in summation form, which is easily parallelized. Qing He et al [4] give several popular parallel classification algorithms based on MapReduce. Cui et al [5] used MapReduce to parallelize an algorithm for finding communities in a mobile social network. There exist a few implementations of MapReduce, but Apache Hadoop [6] has become the de-facto standard version [7] and is widely used in industry [8]. The main experiments in this paper are run on this platform.

2. Extreme learning machine

Extreme learning machine (ELM) was proposed by Guang-Bin Huang in 2004 [9], and the full details and experimental results were published in 2006 [10]. The work shows that ELM has a great advantage in training speed compared with the traditional backpropagation (BP) algorithm, and that ELM has better generalization performance in most cases. Recent research [11] shows that ELM and SVM have the same optimization objective function, while the former has milder constraints. Here we give a brief review of ELM.

Based on the rigorously proved result that the input weights and the hidden layer biases of Single-hidden Layer Feedforward Networks (SLFNs) can be chosen randomly, Huang proposed ELM, which shows how the output weights (linking the hidden layer to the output layer) can be analytically determined.
For N samples (x_j, t_j), where x_j = [x_{j1}, x_{j2}, ..., x_{jn}]^T \in R^n and t_j = [t_{j1}, t_{j2}, ..., t_{jm}]^T \in R^m, an SLFN with \tilde{N} hidden nodes and activation function g(x) can be expressed as

\sum_{i=1}^{\tilde{N}} \beta_i g_i(x_j) = \sum_{i=1}^{\tilde{N}} \beta_i g(w_i \cdot x_j + b_i) = o_j, \quad j = 1, ..., N    (1)

where w_i = [w_{i1}, w_{i2}, ..., w_{in}]^T is the weight vector connecting the input layer to the i-th hidden node, \beta_i = [\beta_{i1}, \beta_{i2}, ..., \beta_{im}]^T is the weight vector connecting the i-th hidden node to the output layer, and b_i is the bias of the i-th hidden node. g(x) can be the sigmoid or RBF function, or even one of many nondifferentiable activation functions. o_j is the output for sample j. The training error is \sum_{j=1}^{N} \| o_j - t_j \|; when the network fits the samples exactly, formula (1) can be written as

H \beta = T    (2)

where

H(w_1, ..., w_{\tilde{N}}, b_1, ..., b_{\tilde{N}}, x_1, ..., x_N) =
\begin{bmatrix} g(w_1 \cdot x_1 + b_1) & \cdots & g(w_{\tilde{N}} \cdot x_1 + b_{\tilde{N}}) \\ \vdots & \ddots & \vdots \\ g(w_1 \cdot x_N + b_1) & \cdots & g(w_{\tilde{N}} \cdot x_N + b_{\tilde{N}}) \end{bmatrix}_{N \times \tilde{N}},
\quad \beta = \begin{bmatrix} \beta_1^T \\ \vdots \\ \beta_{\tilde{N}}^T \end{bmatrix}_{\tilde{N} \times m},
\quad T = \begin{bmatrix} t_1^T \\ \vdots \\ t_N^T \end{bmatrix}_{N \times m}.

H is called the hidden layer output matrix; its i-th column is the output of the i-th hidden node for the inputs x_1, ..., x_N. Now that the weights w_i and biases b_i have been generated randomly, the task is to find a proper \beta. In most cases the linear equation (2) has no exact solution. According to the error-minimization principle

\| H \hat{\beta} - T \| = \min_{\beta} \| H(w_1, ..., w_{\tilde{N}}, b_1, ..., b_{\tilde{N}}, x_1, ..., x_N) \beta - T \|,

the smallest norm least squares solution of the above linear system is

\hat{\beta} = H^{\dagger} T    (3)

where H^{\dagger} is the Moore-Penrose generalized inverse of matrix H, which can be acquired by singular value decomposition (SVD), and can also be computed by the following formulas. If the matrix H^T H is nonsingular,

H^{\dagger} = (H^T H)^{-1} H^T    (4)

or, if the matrix H H^T is nonsingular,

H^{\dagger} = H^T (H H^T)^{-1}    (5)
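Before turning to the regularized and distributed variants below, a minimal single-machine sketch of this training rule may help. It is illustrative only: the sigmoid activation, the Gaussian random weights and the helper names are our assumptions, not the paper's exact setup. The sketch is written in Python with NumPy.

import numpy as np

def elm_train(X, T, n_hidden, rng=np.random.default_rng(0)):
    """Basic ELM: random hidden layer, analytically determined output weights."""
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))   # random input weights w_i
    b = rng.standard_normal(n_hidden)                 # random hidden biases b_i
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))            # hidden layer output matrix, N x n_hidden
    beta = np.linalg.pinv(H) @ T                      # formula (3): beta = H^dagger T, via SVD
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Apply the trained network: recompute the hidden layer and multiply by beta."""
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta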
But if the above two matrices are singular, then according to ridge regression theory, a positive value added to the diagonal of the original matrix helps to obtain a stable solution. So formulas (4) and (5) can be written as

H^{\dagger} = (I/\lambda + H^T H)^{-1} H^T    (6)

or

H^{\dagger} = H^T (I/\lambda + H H^T)^{-1}    (7)

where \lambda > 0 is the added positive value. Huang et al [9] point out that the \beta calculated by the above formulas is consistent with the solution of the following optimization problem:

\min_{\beta} \| H \beta - T \|    (8)

3. Parallel Extreme learning machine

3.1. MapReduce

When the dataset exceeds the memory limitation, the hidden layer output matrix H cannot be loaded at once, and neither the Moore-Penrose generalized inverse of H nor the output weights \beta can be calculated by the above formulas. Liang et al [12] developed an online sequential learning algorithm called OS-ELM, which lets the training data arrive one by one or chunk by chunk. This method partly loosens the memory bottleneck when all data can be stored on one computer. When the data scale up to distributed storage, such a method may have trouble accessing data across machines, and using only one processing unit may be inefficient in this case.

Since all the data is stored on multiple machines, distributed computing is a good choice, and moving the processing to the data, namely locality, is a highlight of MapReduce [13]. Users implementing MapReduce programs should specify two operations: Map and Reduce. Map takes key/value pairs as input and generates intermediate results in the same form; Reduce takes all values which share the same key and processes them further. The data flow and types can be expressed as follows:

Map: (k1, v1) -> list(k2, v2)
Reduce: (k2, list(v2)) -> list(v2)

Note that the data type of Map's output must be the same as that of Reduce's input. All the details of communication and synchronization, as well as the fault-tolerance mechanism, are hidden by the system. Many Maps and Reduces can run simultaneously over the machines.
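To make this data flow concrete, here is a tiny in-memory simulation of the Map/shuffle/Reduce cycle. It is an illustrative sketch only; a real job runs on Hadoop over HDFS, and the names run_mapreduce, mapper and reducer are ours.

from collections import defaultdict

def run_mapreduce(records, mapper, reducer):
    """records: iterable of (k1, v1); mapper yields (k2, v2) pairs; reducer maps (k2, [v2, ...]) to a list."""
    groups = defaultdict(list)
    for k1, v1 in records:                  # Map phase
        for k2, v2 in mapper(k1, v1):
            groups[k2].append(v2)           # shuffle: group intermediate values by key
    out = []
    for k2, values in groups.items():       # Reduce phase
        out.extend(reducer(k2, values))
    return out

# usage example: word count over lines keyed by line number
lines = [(0, "map reduce map"), (1, "reduce")]
counts = run_mapreduce(lines,
                       mapper=lambda k, line: [(w, 1) for w in line.split()],
                       reducer=lambda w, ones: [(w, sum(ones))])
# counts == [('map', 2), ('reduce', 2)]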
3.2 Matrix Multiplication

When the dataset becomes large, for example 10^7 samples, it is difficult to calculate the Moore-Penrose generalized inverse of matrix H through (6) or (7). However, the foundation of both formulas is matrix multiplication. Sun et al [14] summarized three schemes of matrix multiplication on MapReduce in the context of solving matrix factorization; part of them will be used here.

For matrices A \in R^{m \times n} and B \in R^{n \times k}, the basic scheme for the multiplication AB is to divide A into rows and B into columns; each element of the result matrix is the inner product of two vectors:

AB = \begin{bmatrix} a_1 \\ a_2 \\ \vdots \\ a_m \end{bmatrix} \begin{bmatrix} b_1 & b_2 & \cdots & b_k \end{bmatrix} = \begin{bmatrix} a_1 b_1 & \cdots & a_1 b_k \\ \vdots & \ddots & \vdots \\ a_m b_1 & \cdots & a_m b_k \end{bmatrix}_{m \times k}    (9)

where a_i is the i-th row of A and b_j is the j-th column of B. MapReduce can calculate the rows of AB in parallel without any Reduce operation; we call this scheme algorithm-1.

Map
  key is the row id of A, value is the corresponding row of A
  newvalue = value * B
  write (key, newvalue) to HDFS

Figure 1. Algorithm-1 Map

Algorithm-1 works well if m >> n and matrix B can be shared across the machines. The large matrix A and the result are both stored in rows. Or, reversely, one can calculate the columns of AB in parallel and share A across the machines.

Algorithm-1 fails if the two matrices are both large, so that neither can be shared in memory; in that case a different division is needed. Matrix A is divided into columns and B into rows, and the result is a sum of matrices, each of which is the outer product of two vectors. We call this scheme algorithm-2:

AB = \begin{bmatrix} a^1 & a^2 & \cdots & a^n \end{bmatrix} \begin{bmatrix} b^1 \\ b^2 \\ \vdots \\ b^n \end{bmatrix} = \sum_{i=1}^{n} a^i \otimes b^i    (10)

where \otimes denotes the outer product,

a^i \otimes b^i = \begin{bmatrix} a_{1i} b_{i1} & \cdots & a_{1i} b_{ik} \\ \vdots & \ddots & \vdots \\ a_{mi} b_{i1} & \cdots & a_{mi} b_{ik} \end{bmatrix}_{m \times k},

a^i is the i-th column of A and b^i is the i-th row of B. Algorithm-2 works well when n >> m and n >> k. Matrix A is stored in columns and B in rows; the result can be stored in either rows or columns. The outer products and the summation each need a MapReduce job.
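Before the per-phase pseudocode of the figures below, the two schemes can be condensed into a short sketch. It is illustrative only: NumPy arrays stand in for the row and column records stored on HDFS, and the in-memory loops stand in for Map and Reduce tasks.

import numpy as np
from functools import reduce

def algorithm1(A_rows, B):
    # A is stored in rows and B is shared in memory; each map emits one row of AB, no reduce is needed
    return sorted((i, row @ B) for i, row in A_rows)

def algorithm2(A_cols, B_rows):
    # A is stored in columns and B in rows; phase I emits the outer products a^i (outer) b^i, phase II sums them
    partials = [np.outer(A_cols[i], B_rows[i]) for i in sorted(A_cols)]
    return reduce(np.add, partials)

# tiny check that both schemes reproduce A @ B
A = np.arange(6.0).reshape(2, 3)
B = np.arange(12.0).reshape(3, 4)
AB1 = np.vstack([row for _, row in algorithm1(list(enumerate(A)), B)])
AB2 = algorithm2({i: A[:, i] for i in range(3)}, {i: B[i, :] for i in range(3)})
assert np.allclose(AB1, A @ B) and np.allclose(AB2, A @ B)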
The details of the two phases are given in Figures 2 to 5.

Map
  key is the column id of A or the row id of B, value is the corresponding column of A or row of B
  newvalue = column(A) or row(B)
  pass (key, newvalue) to the phase-I Reduce

Figure 2. Algorithm-2 phase-I Map

Reduce
  input is the phase-I Map's output
  newvalue = a^i \otimes b^i
  write (key, newvalue) to HDFS

Figure 3. Algorithm-2 phase-I Reduce

Map
  input is the phase-I Reduce's output
  pass (key, value) to the phase-II Reduce

Figure 4. Algorithm-2 phase-II Map

Reduce
  input is the phase-II Map's output
  newvalue = sum(list[values])
  write (key, newvalue) to HDFS

Figure 5. Algorithm-2 phase-II Reduce

3.3 Parallel ELM

It is critical to choose the matrix multiplication scheme according to the particular demands of the algorithm. In ELM, the hidden layer output matrix H (N \times \tilde{N}) commonly has far more rows than columns: the former is the number of samples and the latter is the number of hidden nodes, which is controllable. A MapReduce job reads files by lines through its Maps, so many Maps can access the file of H in rows. The multiplication H^T H in (6) fits algorithm-2 perfectly, while H H^T in (7) is hard to calculate; what is more, H H^T is of size N \times N, for which it is difficult to obtain the inverse. So it is reasonable to store the large matrix H in rows, which is equivalent to storing H^T in columns. In this case algorithm-2 can be reduced to a single job: each map calculates the outer product of a row of H with itself, and the reduce sums the results.

Denote C = (I/\lambda + H^T H)^{-1}; C is \tilde{N} \times \tilde{N}, and since the number of hidden nodes is controllable, the inverse operation is easy to carry out in memory through existing tools such as LAPACK. Then H^{\dagger} = C H^T; H^T is stored in columns and C can be shared in memory, so this multiplication can be done well using algorithm-1: the maps simultaneously multiply C with each column of H^T. The result H^{\dagger} is stored in columns.
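A sketch of these two training jobs follows. It is illustrative only: lam stands for the ridge constant \lambda, plain Python lists stand for the row records on HDFS, and the loops stand for Map and Reduce tasks.

import numpy as np

def hth_one_job(H_rows):
    """Single-job algorithm-2 for H^T H: each map emits row (outer) row, the reduce sums them."""
    return sum(np.outer(h, h) for h in H_rows)

def h_pinv_columns(H_rows, n_hidden, lam=1e6):
    """Columns of H^dagger = C H^T, where C = (I/lam + H^T H)^{-1} is small enough to keep in memory."""
    HtH = hth_one_job(H_rows)                        # MapReduce job 1
    C = np.linalg.inv(np.eye(n_hidden) / lam + HtH)  # small inverse, done locally (e.g. via LAPACK)
    return [C @ h for h in H_rows]                   # job 2, algorithm-1: one column of H^dagger per sample

The remaining multiplication, \beta = H^{\dagger} T, is described next.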
Next, \beta = H^{\dagger} T, where T (N \times c) holds the original target values and c is the output dimension: c equals 1 for regression and equals the number of class labels for classification. T is stored in rows, so this multiplication fits algorithm-2 perfectly.

In the prediction phase we need to calculate the prediction matrix Y = H \beta. Since \beta (\tilde{N} \times c) can be shared in memory, this multiplication can be done using algorithm-1. In the testing phase, the target matrix T and the prediction Y are both stored in rows; here we need one more MapReduce job to compare the predicted value and the actual value of each sample, and then calculate the final RMSE for regression or the success rate for classification. (A short code sketch of these steps follows the cost analysis below.)

3.4 Cost analysis

Recent research has begun to consider the disk I/O cost and the network cost when dealing with large scale data using MapReduce [15], instead of evaluating only the traditional computation cost. Yu et al [16] point out that the training time actually contains two parts: the time to process the data in memory and the time to access the data on disk. In ELM the former is mainly the number of multiplications. Assuming we have enough memory and use formula (6) as the training rule, the theoretical cost of ELM is a fixed number of multiplications, determined by N, \tilde{N} and c, plus the cost of loading the N samples. If there are k mappers and reducers in the MapReduce system, the computation cost becomes 1/k of that number of multiplications.

However, evaluating the network cost and the I/O cost is a bit more complicated. Not all of the mappers' output is shuffled to the reducers through the network; actually, the MapReduce framework minimizes the network cost by assigning reduce tasks to machines which already store the required data [13]. To simplify the estimate, we assume the ratio of actually shuffled data is a constant r. In computing H^T H, the shuffled data is r times the emitted outer products; in computing H^{\dagger} = C H^T there is no reduce; in computing \beta = H^{\dagger} T, the data shuffled over the two phases is likewise proportional to r. The total network cost is therefore

Network[ (total shuffled size) \cdot r / k ]    (12)

where Network(.) denotes the network cost, which is mainly affected by the transfer speed. All the intermediate results are stored on disk; in most cases the reading is done by the mappers and the writing by the reducers. The I/O cost of the MapReduce jobs in the training phase of parallel ELM is

Read(total size read) / k + Write(total size written) / k    (13)

where Read(.) and Write(.) denote the reading and writing time respectively.
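Before the cost measurements, the remaining jobs of subsection 3.3 can be sketched as follows. This is illustrative only: the sigmoid hidden layer and the helper names are our assumptions, and each Python loop again stands for one MapReduce job.

import numpy as np

def output_weights(H_pinv_cols, T_rows):
    """Algorithm-2 for beta = H^dagger T: sum over samples of (column of H^dagger) outer (target row)."""
    return sum(np.outer(col, t) for col, t in zip(H_pinv_cols, T_rows))

def predict_rows(X_rows, W, b, beta):
    """Algorithm-1 style prediction: each map turns one test sample into one row of Y = H beta."""
    for x in X_rows:
        h = 1.0 / (1.0 + np.exp(-(x @ W + b)))   # hidden layer output of this sample
        yield h @ beta                           # beta is small and shared in memory

def rmse_job(Y_rows, T_rows):
    """Testing job: per-sample squared errors (map), aggregated into the final RMSE (reduce)."""
    sq_err, count = 0.0, 0
    for y, t in zip(Y_rows, T_rows):
        diff = np.asarray(y) - np.asarray(t)
        sq_err += float(np.sum(diff ** 2))
        count += diff.size
    return float(np.sqrt(sq_err / count))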
Table 1 shows these costs for the jobs involved in training. The major cost comes from the computation, but the network and disk I/O are also time-consuming.

[Table 1. Training Cost for Parallel ELM — columns: Shuffle, Read, Write and Time (data sizes in MB, times in seconds); rows: the H^T H, C H^T and H^{\dagger} T jobs and the prediction job.]

Other costs, such as solving the inverse matrix and loading small variables, are negligible when the number of hidden layer nodes is small.

4. Experiments

4.1 Experiments setup

All experiments are conducted on a cluster of 8 common servers; each has a quad-core 2.8GHz CPU, 8GB memory and a 1TB disk, connected by gigabit Ethernet. The operating system is Linux server, with Hadoop 0.20.2 and JDK 1.6 installed. Below we show the capability of this parallel algorithm to handle large scale data from two aspects: regression and classification.

4.2 Regression

The artificial dataset sinc has been used widely in regression problems. All data can be generated by

y(x) = \begin{cases} \sin(x)/x, & x \neq 0 \\ 1, & x = 0 \end{cases}    (14)

A training set and a test set can be generated at any scale, where x is randomly distributed in the interval (-10, 10); noise in the interval (-1, 1) can be added to the training data while the testing data remains clean. The criteria for the parallel algorithm include the RMSE and the training time. Figure 6 shows the training time, broken into the four parts listed in Table 1, as the data scales up from 10^7 to 10^8 samples and the dataset size varies from 335MB to 3.3GB. The total training time increases sub-linearly thanks to the matrix multiplication schemes chosen by the algorithm.

[Figure 6. Training time for regression — time in seconds of the H^T H, C H^T and H^{\dagger} T jobs and the total, versus the number of samples (x 10^7).]
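A minimal generator for this dataset is sketched below. It is illustrative; the helper name is ours, and the noise range simply follows the description above.

import numpy as np

def make_sinc(n_samples, noisy=True, rng=np.random.default_rng(0)):
    """Artificial 'sinc' data of formula (14): y = sin(x)/x with y(0) = 1, x uniform in (-10, 10)."""
    x = rng.uniform(-10.0, 10.0, n_samples)
    safe_x = np.where(x == 0.0, 1.0, x)              # avoid division by zero at x = 0
    y = np.where(x == 0.0, 1.0, np.sin(x) / safe_x)
    if noisy:                                        # noise is added to the training data only
        y = y + rng.uniform(-1.0, 1.0, n_samples)
    return x, y

x_train, y_train = make_sinc(10_000)                 # small example; the experiments scale to 10^7 - 10^8
x_test, y_test = make_sinc(10_000, noisy=False)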
Figure 7 shows the speedup achieved by adding computation nodes. Each node runs 5 reducers and 7 mappers in our system. The highest speedup is 5.7 using 8 nodes in this experiment.

[Figure 7. Speedup for regression — measured speedup versus the ideal linear speedup, plotted against the number of nodes (each node has 5 reducers).]

4.3 Classification

ELM is essentially a supervised learning technique, which requires labeled training data, but it is hard to find a large scale dataset in which every sample is labeled; in fact, automatically labeling unlabeled samples is an important motivation of unsupervised learning. The parallel algorithm above mainly executes the matrix multiplications in parallel rather than improving the original algorithm in theory, so, to demonstrate the capability of handling large scale data, the training data can be acquired by simply duplicating the original data to a specified scale.

Figure 8 shows how the training time increases as the dataset scales up. The original dataset is Image Segmentation, with 19 features and 7 classes, which has been widely used in classification problems. Similar to the regression case, the relation between data scale and training time is sub-linear.

[Figure 8. Training time for classification — training time in seconds of parallel ELM versus the ideal linear growth, plotted against the dataset size (GB).]

5. Conclusion

In this paper we implement a parallel extreme learning machine based on MapReduce. The foundation of this implementation is a proper formula to calculate the parameters of ELM, together with the
corresponding parallel matrix multiplication schemes. Experiments show a near-linear relationship between training time and dataset scale. As the two main methods of dealing with large scale problems, parallel algorithms and online learning algorithms may have something in common; it is an interesting topic to compare the two in the case of ELM, and to discuss the best application area of each in future work.

6. References

[1] John Langford, Lihong Li, Tong Zhang, "Sparse Online Learning via Truncated Gradient", Journal of Machine Learning Research, vol.10, 2009.
[2] J. Dean, S. Ghemawat, "MapReduce: Simplified Data Processing on Large Clusters", In Proceedings of Operating Systems Design and Implementation, pp.137-149, 2004.
[3] Cheng-Tao Chu, Sang Kyun Kim, Yi-An Lin et al, "Map-Reduce for Machine Learning on Multicore", In Proceedings of Advances in Neural Information Processing Systems, pp.281-288, 2006.
[4] Q. He, F.Z. Zhuang, J.C. Li et al, "Parallel Implementation of Classification Algorithms Based on MapReduce", In Proceedings of Rough Sets and Knowledge Technology, pp.655-662, 2010.
[5] Wen Cui, Guoyong Wang, Ke Xu, "Parallel Community Mining in Social Network using MapReduce", IJACT: International Journal of Advancements in Computing Technology, vol.4, no.5, 2012.
[6] Apache Hadoop, http://hadoop.apache.org/
[7] A. Verma, X. Llora, D. E. Goldberg et al, "Scaling Genetic Algorithms using MapReduce", In Proceedings of the International Conference on Intelligent Systems Design and Applications, pp.13-18, 2009.
[8] Lei Lei, "Towards a High Performance Virtual Hadoop Cluster", JCIT: Journal of Convergence Information Technology, vol.7, no.6, 2012.
[9] Guang-Bin Huang, Qin-Yu Zhu, Chee-Kheong Siew, "Extreme Learning Machine: A New Learning Scheme of Feedforward Neural Networks", In Proceedings of the International Joint Conference on Neural Networks, 2004.
[10] Guang-Bin Huang, Qin-Yu Zhu, Chee-Kheong Siew, "Extreme Learning Machine: Theory and Applications", Neurocomputing, vol.70, no.1-3, 2006.
[11] G.-B. Huang, H. Zhou, X. Ding et al, "Extreme Learning Machine for Regression and Multiclass Classification", IEEE Transactions on Systems, Man, and Cybernetics - Part B: Cybernetics, vol.42, no.2, pp.513-529, 2012.
[12] Liang N.-Y., Huang G.-B., Saratchandran P. et al, "A Fast and Accurate Online Sequential Learning Algorithm for Feedforward Networks", IEEE Transactions on Neural Networks, vol.17, no.6, pp.1411-1423, 2006.
[13] J. Lin, C. Dyer, Data-Intensive Text Processing with MapReduce, Morgan & Claypool Publishers, USA, 2010.
[14] Zhengguo Sun, Tao Li, Naphtali Rishe, "Large-Scale Matrix Factorization using MapReduce", In Proceedings of the IEEE International Conference on Data Mining Workshops, pp.1242-1248, 2010.
[15] Robson Leonardo Ferreira Cordeiro, Caetano Traina Jr, Agma Juci Machado Traina et al, "Clustering Very Large Multi-dimensional Datasets with MapReduce", In Proceedings of the ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2011.
[16] Hsiang-Fu Yu, Cho-Jui Hsieh, Kai-Wei Chang et al, "Large Linear Classification When Data Cannot Fit in Memory", In Proceedings of the ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2010.