The Application of Qubit Neural Networks for Time Series Forecasting with Automatic Phase Adjustment Mechanism




The Application of Qubit Neural Networks for Time Series Forecasting with Automatic Phase Adjustment Mechanism

Carlos R. B. Azevedo 1 and Tiago A. E. Ferreira 1

1 Centro de Ciências e Tecnologia, Universidade Católica de Pernambuco (UNICAP), Rua do Príncipe, 526, Boa Vista, 50050-900, Recife PE, Brazil
{azevedo, taef}@dei.unicap.br

Abstract. Quantum computation, quantum information and artificial intelligence have all contributed to the new non-standard learning scheme named Qubit Neural Network (QNN). In this paper, a QNN based on the qubit neuron model is used for a real world time series forecasting problem, where one chaotic series and one stock market series were predicted. Experimental results show evidence that the simulated system is able to preserve the relative phase information of the neurons' quantum states and thus automatically adjust the forecast's time shift.

1. Introduction

Quantum computation has evolved from the theoretical studies of Feynman (1982), Deutsch (1985), and others, into an intensive research field since the discovery by Shor (1994) of a quantum algorithm which can solve the problem of factoring a large integer in polynomial time. Matsui et al. (2000) proposed a Qubit Neuron Model which exhibits quantum learning abilities. This model led to the proposal of a quantum multi-layer feedforward neural network [Kouda et al., 2004] which implements Quantum Mechanics (QM) effects, and whose learning efficiency was demonstrated on non-linear control problems [Kouda et al., 2005].

The main objective of this work is to apply the QNN model to a real world time series forecasting problem, where the influence of QM effects (mainly superposition) is expected to capture the forecast's phase information, raising the model's prediction quality for real world time series.

This paper is structured as follows: Sections 1 to 3 state the problem to be solved and justify the use of QNNs; Section 4 presents concepts of quantum computing; Sections 5 and 6 describe the QNN model and the methodology used in the experiments; Sections 7 and 8 discuss the results and give conclusions, respectively.
2. Time Series Forecasting Problem

From the classical picture, a time series (TS) is the set of the measured properties of a phenomenon (physical or not) ordered chronologically by the observer. Mathematically, a TS Z_t can be simply defined as

Z_t = { z_t : t = 1, 2, ..., n },  (1)

where t is a chronological index, generally considered to be time, and n is the number of observations [Ferreira et al., 2004]. However, a statistical definition is more appropriate, as probabilistic properties of QM will be discussed later on. Hence, a real valued TS (or stochastic process) X is the real mapping

X : T × Ω → R,  (2)

where T = {1, 2, ..., n} is a set of indexes that enumerates the measures sequentially, Ω is the certainty event, Σ is a sigma-algebra over Ω and P is a function defined over Σ which attributes probabilities to the subsets of Ω [Fuller, 1976]. In this sense, for each fixed t, X(t, w) is a random variable in the probability space (Ω, Σ, P), where the w are the elementary events. The TS forecasting problem can finally be stated over a set of random variables of the form

{X(t_1, w_1), X(t_2, w_2), ..., X(t_n, w_n)},  (3)

as the problem of determining the upcoming events X(t_{n+1}, w_{n+1}), ..., X(t_{n+h}, w_{n+h}), where h denotes the forecast horizon.

2.1 The Random Walk Dilemma

A simple linear model for TS forecasting is the random walk model (RW), given by

Z_t = Z_{t-1} + r_t,  (4)

where Z_{t-1} is the immediate time lag of the point Z_t and r_t is a noise term with a Gaussian distribution of null mean and variance σ² (r_t ~ N(0, σ²)).

Controversy arises from the RW model concerning financial TS forecasting: economic theoreticians stated that stock market prices follow a RW model and so cannot be predicted [Malkiel, 1973]. The impossibility of predicting RW series comes from the low level of correlation between the time lags and the points which are to be forecasted. Actually, the RW model is a first approximation to financial TS, as observed in some experiments with Artificial Neural Networks (ANN) [Sitte and Sitte, 2002]. Still, Lo and MacKinlay (2002) argued that there exist predictable components in the stock market, implying superior long-term investment returns through disciplined active investment management. This research aims at adding value to such an argument by applying computation intensive quantum strategies to the financial TS forecasting problem.

3. Phase Adjustment

As a result of the RW dilemma, some forecasts obtained for TS with non-linear dependencies are one step time shifted (or out-of-phase). Ferreira (2006), in his Ph.D. thesis, not only implemented a procedure of phase adjustment based on ANN, but also showed that this one step time shift in the prediction results is due to the asymptotic behavior of the ANN model towards a RW-like model when designing an estimator for such non-linear TS. Let Ẑ_t be the desired estimator for a TS Z_t driven by a RW-like model. The expected value of the distance between the estimator and the real series in an ideal predictor should tend to zero:

E[ |Ẑ_t − Z_t| ] → 0.  (5)
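The RW dilemma around Equations (4) and (5) can be illustrated numerically. The sketch below (plain NumPy, not the authors' code) generates a RW series and evaluates the naive last-value estimator Ẑ_t = Z_{t−1}, the limit towards which an ANN tends on such data: the forecast is just the series reproduced one step out of phase, and it carries no information beyond the RW baseline even though it correlates strongly with the target.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random walk: Z_t = Z_{t-1} + r_t, with r_t ~ N(0, sigma^2)  -- Equation (4)
n, sigma = 1000, 1.0
z = np.cumsum(rng.normal(0.0, sigma, size=n))

# Naive out-of-phase estimator: Zhat_t = Z_{t-1}
z_hat = z[:-1]
z_true = z[1:]

mse = np.mean((z_true - z_hat) ** 2)

# NMSE normalized by the random-walk predictor itself: the out-of-phase
# forecast scores exactly 1, i.e. it is indistinguishable from a RW model.
nmse = mse / np.mean((z_true - z[:-1]) ** 2)

# Yet the shifted copy tracks the series almost perfectly point by point,
# which is what makes out-of-phase forecasts look deceptively accurate.
corr = np.corrcoef(z_true, z_hat)[0, 1]

print(f"MSE  = {mse:.3f}")   # close to sigma^2 = 1
print(f"NMSE = {nmse:.4f}")  # 1.0000
print(f"corr = {corr:.4f}")  # close to 1
```

The high correlation next to an NMSE of 1 is exactly the trap: a one-step-shifted forecast looks visually excellent while adding nothing over the RW model.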

In addition, considering the noise terms r_k, with k ∈ {1, 2, ..., t−1}, as independent components (r_k ⊥ r_j, for each k ≠ j) with E[r_k] = 0, implies

E[Ẑ_t] ≈ E[Z_{t−1}].  (6)

The procedure of phase adjustment is conceived in the sense of transforming out-of-phase predictions into in-phase predictions by fine-tuning the one step time shift. The QNN simulation is expected to supply an automatic phase adjustment, due to QM effects on the QNN learning process, in order to overcome the phase information loss explained in the next subsection.

3.1. The Learning of Complex Number Phase Information

In Quantum Mechanics, reality is better described (and consequently better predicted) through the use of a numerical set that preserves more information than the real set: the complex set. In exponential form, a complex number W can be written as

W = A e^{iφ},  (7)

where A is the amplitude, φ is the phase and i is the imaginary unit. The observable part of W is a real number and, because of that, it does not describe all of the original information. Mathematically, the observable part of a complex number is its squared modulus or, more specifically,

|W|² = W W* = (A e^{iφ})(A e^{−iφ}) = A² e^{i(φ−φ)} = A²,  (8)

where * denotes the complex conjugate. It can be clearly noted that the phase information is destroyed by a measurement. Therefore, the knowledge acquisition achieved by the usage of complex numbers within the QNN model can lead to the ability of learning complex number phase information, improving the quality of the predictor.

4. Quantum Computing

The atomic piece of information of Quantum Computing (QC), analogous to the classical bit, is the quantum bit, or simply qubit. As its classical counterpart, it has two distinct states in which it can be observed, the base states |0⟩ = [(1,0) (0,0)]^T and |1⟩ = [(0,0) (1,0)]^T, with ⟨0|1⟩ = 0. The symbol |·⟩ is part of the Dirac notation. Differently from the classical bit, however, a qubit can lie in a state of superposition of the base states. Theoretically, superposition means that the amount of information that can be stored on a single qubit is infinite. However, when measured, the qubit will collapse into exactly one of the base states with a certain probability. An arbitrary qubit state |ψ⟩ in superposition can be expressed as a linear combination of the states |0⟩ and |1⟩ as

|ψ⟩ = α|0⟩ + β|1⟩,  (9)

where the scalars α and β are the amplitudes of the states |0⟩ and |1⟩, respectively. After a measurement, |ψ⟩ will collapse to |0⟩ with probability |α|² or it will collapse to |1⟩ with probability |β|² [Nielsen and Chuang, 2005]. Naturally,

|α|² + |β|² = 1.  (10)

5. Proposed Qubit Neural Network Model

The implemented qubit neural network model is based on Mitrpanont and Srisuphab (2002), who prepared a complex-valued multilayer backpropagation neural network model to exhibit QM effects. The model is also inspired by [Kouda et al., 2004] and will now be detailed, with adaptations.

5.1. Qubit Neuron Model

A neuron has a quantum state: it is inactive when its state is |0⟩; it fires when its state is |1⟩; and its arbitrary state is given by the superposition state described in Equation (9). All information that flows through the network is complex encoded. Therefore, for presenting real data to the QNN, all points are mapped into complex numbers of the form given in Equation (7) (with |W| = A = 1) through a phase θ ∈ (0, π/2), in order to create the complex inputs x which store the phase information:

x = e^{iθ},  (11)

where θ = (π/2) X_n, with X_n denoting the normalized real data. The normalization of the series' real data is done through a linear transformation onto the interval [0.2, 0.8], for two reasons: to avoid the saturation regions of the logistic sigmoid functions (Equation (13)), improving the learning convergence, and to avoid orthogonal quantum states (as θ interacts with the qubit state), preserving, in this way, the superposition effects.

Figure 1. Schematic diagram for the processing realized by the qubit neuron.

Accordingly, the operation realized by the quantum neuron consists of two distinct steps. The first is the weighted sum of the input complex signals, written as

u = Σ_{ℓ=0}^{N} w_ℓ x_ℓ,  (12)

where N is the number of signals received, the w_ℓ are the weights (with w_0 denoting the bias) and the x_ℓ are the complex input signals. The second step consists of the non-linear activation of the quantum neuron, given by

y_k = f(u) = Sig(Re{u}) + i Sig(Im{u}).  (13)

Equation (13) means that the real and imaginary parts of u in Equation (12) are submitted separately to a real valued logistic sigmoid function (Sig(·)) to compose a complex output. The function f : C → C is continuous and differentiable, which allows its usage in the training algorithm. The superposition effects arise naturally as a result of the network dynamics, due to the complex signal arithmetic which performs quantum state phase rotations. Finally, the real output Y_R of the QNN model is taken as the inverse mapping of Equation (11) over the phase of the complex response (Equation (13)) of the output layer neuron.

5.2. Quantum Training Algorithm

The proposed QNN was trained with the complex backpropagation algorithm [Nitta, 1994], used to learn complex number information. The algorithm performs the gradient-descent minimization of the sum of squared errors function

E = (1/2P) Σ_{p=1}^{P} Σ_{c=1}^{N} (desired_{p,c} − output_{p,c})²,  (14)

where P is the number of input patterns and N is the number of output neurons. The weight adjustment equations are now described: let c be the last layer, whereas the first layer is denoted as layer 1. For the output layer,

w^c(t+1) = w^c(t) + Δw^c,  (15)

Δw^c = η δ^c_p (y^{c−1}_p)*,  (16)

δ^c_p = (d_p − y^c_p) f′(u^c_p),  (17)

where (·)* denotes the complex conjugate, y^{c−1}_p is the output of the neuron of layer c−1 in response to input pattern p, η denotes the learning rate, and d_p denotes the desired output of the neuron in response to input pattern p. For the intermediate processing layers ℓ,

w^ℓ_j(t+1) = w^ℓ_j(t) + Δw^ℓ_j,  (18)

Δw^ℓ_j = η δ^ℓ_{pj} (y^{ℓ−1}_p)*,  (19)

δ^ℓ_{pj} = δ^ℓ_{u,pj} f′(u^ℓ_{pj}),  (20)

δ^ℓ_{u,pj} = Σ_{k=1}^{N_{ℓ+1}} δ^{ℓ+1}_{pk} w^{ℓ+1}_k,  (21)

where N_{ℓ+1} is the number of neurons at layer ℓ+1.

6. Methodology for Experiments and Setup

The number of hidden layers M of the QNN was fixed at M = 1, as was the number of nodes in the output layer, h = 1, which means that the forecast horizon was one step ahead. The number of input nodes equals the number of lags for each TS. Once presented with the lag data, the QNN predicts what would occur next in the future. The number of lags was determined by the lagplot method [Percival and Walden, 1998]. Besides, the number of hidden nodes H ∈ {3, 5, 8} was varied systematically in the experiments. The same was done with the learning rate η ∈ {10^-1, 10^-2, 10^-3}. These parameter values were chosen by preliminary testing experiments. For each parameter combination, 10 random weight initializations were done, resulting in a total of 90 experiments per TS.

Data were then divided into 3 disjoint sets: training (50% of the data), validation and test (each one consisting of 25% of the data) [Prechelt, 1994]. The stop criteria used in the training process were the third class cross-validation [Prechelt, 1998] and a maximum of 300000 training epochs. In analyzing the results, the QNN model is said to be equivalent to other models on a particular statistic if it falls inside the confidence interval constructed in other experiments with ANN and ARIMA models, which can be found in [Ferreira, 2006].

6.1. Statistics and Measures for Analyzing Results

The analysis of the forecasts was based on the following statistics and error measures: the Mean Square Error (MSE), Mean Absolute Percentage Error (MAPE), Normalized Mean Square Error (NMSE), Prediction On Change In Direction (POCID) and Average Relative Variance (ARV). Refer to Ferreira et al. (2005) for equations and details.

7. Results and Discussion

7.1. Sunspot Time Series

This benchmark TS consists of 289 annual observations of the number of dark regions formed on the Sun's surface. It exhibits a chaotic behavior which corresponds to non-linear data dependencies. By the lagplot analysis, the chosen lags were Z_{t-1} to Z_{t-3} (Figure 2), composing 3-H-1 QNN topologies.

Figure 2. Lagplot for normalized Sunspot TS with chosen lags Z_{t-1} to Z_{t-3}.

On average, the best topology in the experiments was 3-3-1 with η = 10^-1, though the best forecast was achieved with a 3-5-1 QNN with η = 10^-1 after 1775 epochs of training (Figure 3).

Figure 3. Sunspot series (darker line) and best forecast (lighter line) obtained by the QNN model for the test set.

Table 1 compares the results of the best prediction achieved with each model (QNN, ANN and ARIMA). Analyzing the measures for the best QNN initialization: since NMSE < 1, the predictor is better than a RW model and the forecast can be considered to be in-phase. Moreover, it is better than a heads-or-tails experiment (POCID > 50%). Finally, it is better than a model which simply predicts the mean of the TS (ARV < 1).

Table 1. Measures of the best predictions obtained for the Sunspot test set.

Measure | QNN      | ANN      | ARIMA(9,0,1) | Best Model
MSE     | 0,0039   | 0,905    | 0,019        | QNN
NMSE    | 0,3835   | 0,3443   | 0,7805       | QNN & ANN
POCID   | 88,4058% | 90,0000% | 75,0000%     | QNN & ANN
ARV     | 0,017    | 0,1418   | 0,4007       | QNN
MAPE    | 11,6140% | ,41%     | 4,35%        | ANN

Table 2. Averages and standard deviations of the measures and statistics obtained in the QNN experiments with the Sunspot test set.

Topology | Learning Rate | MSE             | NMSE            | POCID            | ARV             | MAPE
3-3-1    | 10^-1         | 0,0046 ± 0,0013 | 0,4913 ± 0,1166 | 79,1304 ± 3,603  | 0,0150 ± 0,0073 | 11,3087 ± 1,59
3-3-1    | 10^-2         | 0,005 ± 0,0011  | 863 ± 0,1674    | 78,8406 ± 3,7345 | 0,0159 ± 0,0054 | 11,9553 ± 1,043
3-3-1    | 10^-3         | 0,0053 ± 0,00   | 797 ± 0,1404    | 79,565 ± ,7075   | 0,0148 ± 0,0036 | 11,9300 ± 1,7010
3-5-1    | 10^-1         | 0,0057 ± 0,009  | 07 ± 0,100      | 8,1739 ± 5,064   | 0,018 ± 0,0006  | 11,854 ± 1,5819
3-5-1    | 10^-2         | 0,0077 ± 0,0071 | 650 ± 0,1678    | 78,1159 ± 3,8590 | 0,0133 ± 0,0011 | 1,9514 ± ,8510
3-5-1    | 10^-3         | 0,0049 ± 0,000  | 0,4398 ± 0,081  | 77,1014 ± 4,3863 | 0,015 ± 0,0006  | 11,9547 ± 1,510
3-8-1    | 10^-1         | 0,0156 ± 0,0109 | 0,6458 ± 0,116  | 78,4058 ± 5,7407 | 0,0135 ± 0,0013 | 15,778 ± 4,1987
3-8-1    | 10^-2         | 0,0064 ± 0,0033 | 0,4977 ± 0,1196 | 81,8841 ± 3,903  | 0,0130 ± 0,0006 | 1,775 ± ,0007
3-8-1    | 10^-3         | 0,0048 ± 0,001  | 0,4530 ± 0,0558 | 78,1159 ± 4,4173 | 0,016 ± 0,0007  | 11,615 ± 1,0045

These results show that, although achieving a better MSE and ARV than the ANN model, the QNN learning efficiency was similar to that of the classical ANN for NMSE and POCID, while being better than the ARIMA(p, d, q) model overall. This can be concluded by comparison with the experiments conducted by Ferreira (2006), which used the same methodology and similar setups with equivalent ANN topologies, in terms of degrees of freedom, while applying the Box & Jenkins methodology, obtaining an ARIMA(9,0,1).
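The acceptance criteria used in the discussion above (NMSE < 1 versus a random walk, POCID > 50% versus heads-or-tails, ARV < 1 versus the mean predictor) can be sketched as follows. This is an illustrative implementation following common definitions of these measures; the paper defers the exact equations to Ferreira et al. (2005), so the formulas below are assumptions, not the authors' code.

```python
import numpy as np

def forecast_measures(y_true, y_pred):
    """Error measures for a one-step forecast, judged against naive baselines.

    Assumed definitions: NMSE is normalized by the random-walk (last value)
    predictor, so NMSE < 1 reads "better than a RW model"; ARV is normalized
    by the mean predictor, so ARV < 1 reads "better than predicting the mean";
    POCID > 50% reads "better than heads-or-tails" at guessing the direction.
    """
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)

    mse = np.mean((y_true - y_pred) ** 2)
    mape = 100.0 * np.mean(np.abs((y_true - y_pred) / y_true))

    # Baseline 1: random walk predictor Zhat_t = Z_{t-1}
    nmse = mse / np.mean((y_true[1:] - y_true[:-1]) ** 2)

    # Baseline 2: mean predictor Zhat_t = mean(Z)
    arv = mse / np.mean((y_true - y_true.mean()) ** 2)

    # Fraction of correctly predicted changes in direction. Note that the
    # last-value forecast predicts "no change" and scores POCID = 0 here.
    true_dir = np.sign(np.diff(y_true))
    pred_dir = np.sign(y_pred[1:] - y_true[:-1])
    pocid = 100.0 * np.mean(true_dir == pred_dir)

    return {"MSE": mse, "MAPE": mape, "NMSE": nmse, "POCID": pocid, "ARV": arv}

# An out-of-phase forecast (the series shifted by one step) scores NMSE
# close to 1: numerically accurate, but no better than the RW baseline.
z = 100.0 + np.cumsum(np.random.default_rng(1).normal(size=500))
m = forecast_measures(z[1:], z[:-1])
print(f"NMSE = {m['NMSE']:.3f}, POCID = {m['POCID']:.1f}%")
```

Under these definitions, a forecast only passes all three tests when it tracks the series in phase, which is precisely what the tables above probe.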

7.2. Dow-Jones Industrial Average (DJIA) Time Series

The DJIA is a stock market index consisting of 30 companies. The series used in the experiments is composed of 1400 daily observations, starting in January 1998 and running until August 26, 2003. By the lagplot analysis, the chosen lags were Z_{t-1} to Z_{t-3} (Figure 4).

Figure 4. Lagplot for normalized DJIA time series with chosen lags Z_{t-1} to Z_{t-3}.

Figure 5. DJIA series (darker line) and best forecast (lighter line) obtained by the QNN model for the last 125 points of the test set.

Table 3. Measures of the best predictions obtained for the DJIA test set.

Measure | QNN          | ANN        | ARIMA(1,0,1)  | Best Model
MSE     | 3,49·10^-4   | 0,087      | 5,8033·10^-4  | QNN
NMSE    | 0,7898       | 0,9876     | 1,649         | QNN
POCID   | 47,934%      | 46,7400%   | 46,1000%      | QNN
ARV     | 1,9891·10^-4 | 3,46·10^-3 | 0,039         | QNN
MAPE    | 3,943%       | 0,37%      | 8,300%        | ANN

Table 4. Averages and standard deviations of the measures and statistics obtained in the QNN experiments with the DJIA test set.

Topology | Learning Rate | MSE             | NMSE              | POCID            | ARV             | MAPE
3-3-1    | 10^-1         | 0,0009 ± 0,0006 | 11,5045 ± 18,4157 | 47,0085 ± 0,4767 | 0,0005 ± 0,0004 | 6,5701 ± ,73
3-3-1    | 10^-2         | 0,0003 ± 0,0000 | 1,1616 ± 0,0601   | 46,4957 ± 0,6955 | 0,000 ± 0,0000  | 4,0716 ± 0,0338
3-3-1    | 10^-3         | 0,0005 ± 0,000  | 1,9736 ± 1,0150   | 46,9516 ± 0,490  | 0,0003 ± 0,0001 | 4,683 ± 1,614
3-5-1    | 10^-1         | 0,0006 ± 0,000  | ,1013 ± 0,7436    | 46,7806 ± 065    | 0,0003 ± 0,0001 | 5,03 ± 1,0087
3-5-1    | 10^-2         | 0,0003 ± 0,0000 | 1,005 ± 0,0650    | 46,581 ± 443     | 0,000 ± 0,0000  | 4,0095 ± 0,149
3-5-1    | 10^-3         | 0,0003 ± 0,0001 | 1,1737 ± 0,47     | 46,3818 ± 949    | 0,000 ± 0,0000  | 4,057 ± 0,6448
3-8-1    | 10^-1         | 0,0004 ± 0,0001 | ,3596 ± ,8116     | 46,751 ± 495     | 0,000 ± 0,0001  | 4,1634 ± 0,6810
3-8-1    | 10^-2         | 0,0004 ± 0,0000 | 1,1871 ± 0,093    | 46,6667 ± 0,6084 | 0,000 ± 0,0000  | 4,1738 ± 0,331
3-8-1    | 10^-3         | 0,0003 ± 0,0001 | 1,111 ± 0,475     | 46,695 ± 008     | 0,000 ± 0,0000  | 3,9648 ± 0,3965

The best topology in the experiments was, on average, 3-3-1 with η = 10^-2, though the best forecast was achieved with a 3-5-1 QNN with η = 10^-1 after 51587 training epochs (Figure 5). Analyzing the measures for the best QNN initialization: the predictor is better than a RW model (NMSE < 1) and the forecast is considered to be in-phase. Moreover, it is equivalent to a heads-or-tails experiment (POCID ≈ 50%). Finally, it is better than a model which simply predicts the mean of the TS (ARV < 1).

8. Conclusions

In this paper, experiments were realized on real world TS forecasting problems with the new non-standard QNN learning scheme. The results show that the QNN performance was better than that obtained with the ANN constructed in another work [Ferreira, 2006] for some measures, and better than the linear ARIMA models overall, notably for the DJIA series. This is encouraging, since stock market prediction is a difficult problem. Furthermore, traces of an automatic phase adjustment mechanism could be observed in the DJIA results, especially for the NMSE measure bearing ≈ 0,78, which corresponds to a significant performance improvement over the ANN model (≈ 20% better). This can be due to the quantum phase learning abilities of the QNN model. However, more experiments with the proposed quantum model are needed in order to confirm this behavior on other real world financial TS which tend towards out-of-phase predictions.

This stated, a further investigation of a probabilistic interpretation of the QNN output and of its influence on the quality of the predictor is suggested. Hence, the state |ψ⟩ of a qubit neuron would be represented by its complex-valued output y_k, given by

y_k = e^{iθ_k} = cos θ_k + i sin θ_k,  (22)

which stores the quantum state phase information θ_k, as well as the base amplitudes (α = cos θ_k and β = sin θ_k). The real output Y_R would then be given by

Y_R = β² = sin² θ_k,  (23)

representing the probability of the state |1⟩. In addition, the inverse cumulative distribution function would be responsible for the correct prediction, according to β², supposing a normal distribution for the TS with estimated mean x̄ and estimated standard deviation σ.
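The probabilistic read-out suggested above can be sketched as follows. This is a minimal illustration of the idea (phase → probability of |1⟩ → inverse normal CDF), assuming a normal distribution for the TS as the text proposes; the function name and the use of Python's `statistics.NormalDist` are illustrative choices, not the authors' implementation.

```python
import math
from statistics import NormalDist

def probabilistic_readout(theta_k, mean, std):
    """Sketch of the suggested probabilistic interpretation of a qubit neuron.

    The output y_k = cos(theta_k) + i sin(theta_k) stores the phase theta_k;
    Y_R = sin(theta_k)**2 is read as the probability of the state |1>. The
    prediction is then the quantile of an assumed normal distribution
    N(mean, std**2) for the TS at that probability. theta_k is expected in
    the open interval (0, pi/2), so Y_R stays strictly inside (0, 1).
    """
    y_r = math.sin(theta_k) ** 2                  # Y_R = beta^2 = sin^2(theta_k)
    return NormalDist(mean, std).inv_cdf(y_r)     # inverse cumulative distribution

# With theta_k = pi/4 the two base states are equally likely (Y_R = 1/2),
# so the read-out returns the estimated mean of the series:
print(round(probabilistic_readout(math.pi / 4, 100.0, 10.0), 6))  # 100.0
```

Note that the read-out is monotonic in theta_k over (0, pi/2): a phase closer to |1⟩ maps to a higher quantile of the assumed distribution.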
In this process, a convolution mechanism may be necessary in order to eliminate eventual heteroscedasticity in the TS data set.

References

Deutsch, D. (1985). Quantum theory, the Church-Turing principle and the universal quantum computer. Proceedings of the Royal Society of London, A400:97-117.

Feynman, R. (1982). Simulating physics with computers. International Journal of Theoretical Physics, 21:467-488.

Ferreira, T. A. E., Vasconcelos, G. C. and Adeodato, P. J. L. (2004). A hybrid intelligent system approach for improving the prediction of real world time series. In Proceedings of the Congress on Evolutionary Computation, pages 736-743. IEEE.

Ferreira, T. A. E., Vasconcelos, G. C. and Adeodato, P. J. L. (2005). A new hybrid approach for enhanced time series prediction. In Anais do XXV Congresso da Sociedade Brasileira de Computação, pages 831-840. SBC.

Ferreira, T. A. E. (2006). Uma nova metodologia híbrida inteligente para a previsão de séries temporais. Thesis (Ph.D.), Centro de Informática, UFPE.

Fuller, W. A. (1976). Introduction to Statistical Time Series. John Wiley & Sons, 2nd edition.

Kouda, N., Matsui, N. and Nishimura, H. (2004). A multilayered feedforward network based on qubit neuron model. Systems and Computers in Japan, 35(13):43-51.

Kouda, N., Matsui, N., Nishimura, H. and Peper, F. (2005). An examination of qubit neural network in controlling an inverted pendulum. Neural Processing Letters, 22(3):277-290. Springer Netherlands.

Lo, A. W. and MacKinlay, A. C. (2002). A Non-Random Walk Down Wall Street. Princeton University Press, 6th edition.

Malkiel, B. G. (1973). A Random Walk Down Wall Street. W. W. Norton & Company Inc., 6th edition.

Matsui, N., Takai, M. and Nishimura, H. (2000). A network model based on qubit-like neuron corresponding to quantum circuit. The Institute of Electronics, Information and Communications in Japan (Part III: Fundamental Electronic Science), 83(10):67-73.

Mitrpanont, J. L. and Srisuphab, A. (2002). The realization of quantum complex-valued backpropagation neural network for pattern recognition problems. In Neural Information Processing, 2002. ICONIP '02. Proceedings of the 9th International Conference on, pages 1:462-466.

Nielsen, M. A. and Chuang, I. L. (2005). Computação Quântica e Informação Quântica. Bookman, 1st edition.

Nitta, T. (1994). Structure of learning in the complex numbered back-propagation network. In Neural Networks, 1994. IEEE World Congress on Computational Intelligence, 1994 IEEE International Conference on, pages 1:269-274.

Percival, D. B. and Walden, A. T. (1998). Spectral Analysis for Physical Applications: Multitaper and Conventional Univariate Techniques. Cambridge University Press.

Prechelt, L. (1994). Proben1: a set of neural network benchmark problems and benchmarking rules. Technical Report 21/94, Fakultät für Informatik, Universität Karlsruhe.

Prechelt, L. (1998). Automatic early stopping using cross-validation: quantifying the criteria. Neural Networks, 11(4):761-767.

Shor, P. W. (1994). Algorithms for quantum computation: discrete logarithms and factoring. In Foundations of Computer Science, 1994. Proceedings, 35th Annual Symposium on, (ed. S. Goldwasser), pages 124-134. IEEE Computer Society Press.

Sitte, R. and Sitte, J. (2002). Neural networks approach to the random walk dilemma of financial time series. Applied Intelligence, 16(3):163-171.