ELECTRONICS AND ELECTRICAL ENGINEERING ISSN 1392-1215 2011. No. 7(113) ELEKTRONIKA IR ELEKTROTECHNIKA

ELECTRICAL ENGINEERING T 190 ELEKTROS INŽINERIJA

Application of RBF Network in Rotor Time Constant Adaptation

P. Brandstetter, P. Chlebis, P. Palacky, O. Skuta

Department of Electronics, VSB - Technical University of Ostrava, 17. listopadu 15, 708 00 Ostrava, Czech Republic, phone: +420 597 324 477, e-mail: pavel.brandstetter@vsb.cz

Introduction

Artificial neural networks (ANN) are mainly used in applications where the realization of other methods would be very difficult, expensive or even unreliable. In such applications it is possible to take advantage of the main features of neural networks, namely: the ability to approximate various nonlinear functions, the possibility to set their parameters on the basis of an experimental or learning data set, the speed of information processing, and their robustness. No mathematical or structural description of the problem is necessary; it can be solved as a black-box task with its inputs and outputs [1-8].

Radial Basis Function (RBF) networks emerged as a variant of artificial neural networks in the late 1980s in the work of Broomhead and Lowe, which opened another ANN frontier. The RBF network is a type of ANN for supervised learning problems: regression, classification and time series prediction. Radial basis functions have been applied in the area of neural networks, where they may be used as a replacement for the sigmoid hidden-layer transfer function in multilayer perceptrons. Radial basis functions are powerful techniques built on a distance criterion with respect to a centre. Such networks have three layers: the input layer, the hidden layer with the RBF non-linearity, and the linear output layer. RBF networks have the advantage of not suffering from local minima in the same way as multilayer perceptrons. The most popular choice for the non-linearity is the Gaussian. In regression problems, the output layer is a linear combination of the hidden-layer values representing the mean predicted output [5, 6].
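The three-layer structure just described (inputs, Gaussian hidden units built on a distance to a centre, linear output layer) can be sketched as follows. The function name, the shared Gaussian width and the use of NumPy are illustrative assumptions, not part of the paper:

```python
import numpy as np

def rbf_forward(x, centers, width, weights, bias=0.0):
    """One forward pass of a Gaussian RBF network.

    x       -- input vector, shape (d,)
    centers -- M hidden-unit centres, shape (M, d)
    width   -- common Gaussian radius sigma (assumed shared by all units here)
    weights -- linear output-layer weights, shape (M,)
    """
    # Distance criterion: squared distance of the input to every centre
    dist2 = np.sum((centers - x) ** 2, axis=1)
    # Gaussian non-linearity of the hidden layer
    phi = np.exp(-dist2 / (2.0 * width ** 2))
    # Linear output layer: weighted sum of the hidden activations
    return float(weights @ phi + bias)
```

An input lying exactly on a centre activates that unit fully (phi = 1), and the activation decays smoothly with distance, which gives the network its local character.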
In most cases it offers a higher training speed compared with ANNs based on back-propagation training methods, and easier optimization of performance, since the only parameter that can be used to modify its structure is the number of neurons in the hidden layer.

Rotor time constant adaptation methods are used in the modern control of induction motor drives. The value of the rotor resistance changes depending on the drive load. To improve the drive performance it is necessary to identify this parameter and adjust the controller accordingly [1-3].

The intention of this paper is to show how new types of artificial neural networks can be applied in the control of electrical drives. The procedure is demonstrated on a rotor time constant adaptation method in the vector control of an induction motor.

Vector control of the induction motor

The main problem of vector control in the field coordinates of the induction motor is the separation of the torque and flux control circuits so that they do not influence each other. The torque of the induction motor, and consequently the active power, is controlled by the torque control circuit, while the rotor flux, and consequently the reactive power, is controlled by the rotor flux control circuit. The whole control operates on the principle of decomposing the stator current space vector into two perpendicular components i_sx and i_sy, which can be analysed in the field coordinate system [x, y] with the rotor flux space vector oriented along the x axis (Fig. 1) [2, 4]. Thanks to this separation, the independent quantities, torque and magnetization, can be analysed. By maintaining the amplitude of the rotor flux (ψ_r = K i_m) at a fixed value, there is a linear relationship between the torque t and the torque component i_sy (t = k_t i_sy).

Fig. 1. Structure of the current model

Rotor time constant adaptation

The induction motor with vector control has very good dynamic behaviour and is therefore well suited for high-performance applications. However, vector control
is very sensitive to variations of the rotor time constant. The decoupling between the flux and the torque is lost in indirect rotor field oriented control if there is a mismatch between the rotor time constant set in the controller and the actual time constant of the motor. Adaptation of the rotor time constant is therefore required, and this parameter must be estimated in order to keep it equal to the rated value programmed in the decoupling controller. The adaptation mechanism consists of the evaluation of an adaptation signal and its sequential minimization with the help of a PI controller (3)-(5). Fig. 2 shows the structure of the model reference adaptive system and also the substitution of the adaptation mechanism by the artificial neural network.

Model reference adaptive system method

The block structure of the model reference adaptive system (MRAS) with the rotor time constant adaptation method is shown in Fig. 2. The method is based on the comparison of two estimators. One of them contains the rotor time constant and is called the adaptive model. The other one does not contain the rotor time constant and is the so-called reference model. The error between them is used to derive an adaptation algorithm which produces the estimated value of the rotor time constant for the adaptive model. This value can be used for the adaptation of the rotor time constant in the current model, which is used in the control structure of the induction motor drive.

The adaptive model is based on the application of the current model of the rotor flux. It is often used for the determination of the magnitude and position of the magnetizing current vector or the rotor flux vector. The current model contains the rotor time constant, which is a changing parameter. The adaptive model is described as follows:

\psi_r = \int \left[ \frac{1}{T_r} \left( L_m i_s - \psi_r \right) + j\omega \psi_r \right] dt,   (1)

where T_r = L_r / R_r is the rotor time constant, R_r is the resistance of the rotor winding, L_r is the rotor inductance, ε is the rotor position angle, and ω = dε/dt is the rotor angular speed.

The reference model is based on the application of the voltage model of the rotor flux and is described as follows:

\psi_r = \frac{L_r}{L_m} \left[ \int \left( u_s - R_s i_s \right) dt - \sigma L_s i_s \right], \quad \sigma = 1 - \frac{L_m^2}{L_s L_r},   (2)

Fig. 2.
Structure of the model reference adaptive system

The quantities in (1) and (2) are vectors in the stator reference frame: the rotor flux vector ψ_r = ψ_rα + jψ_rβ, the stator voltage vector u_s = u_sα + ju_sβ and the stator current vector i_s = i_sα + ji_sβ; R_s is the resistance of the stator winding, L_s and L_r are the stator and rotor inductances, and L_m is the magnetizing inductance.

The adaptation algorithm is described by the following equations:

e_\alpha = \psi_{r\alpha}^{ref} - \psi_{r\alpha}, \quad e_\beta = \psi_{r\beta}^{ref} - \psi_{r\beta},   (3)

\varepsilon = e_\beta \psi_{r\alpha} - e_\alpha \psi_{r\beta},   (4)

\frac{1}{\hat{T}_r} = K_1 \varepsilon + K_2 \int \varepsilon \, dt,   (5)

where K_1 > 0, K_2 > 0.

Radial basis function network

The aim of this work was to compare the features of Radial Basis Function networks with different architectures. The paper focuses on different architectures of the RBF network; white noise was also added to the inputs, which is very useful in the application of feed-forward neural networks. A comparative procedure was carried out. First, a common RBF network with the appropriate architecture was realized, i.e. with one feedback, with two feedbacks, or without feedback. Then the field covered by one RBF unit was changed; in fact this means a sparser or denser layout of the RBF units, which is expressed by a lower or higher number of RBF units.

Fig. 3 depicts the acquisition of the training data set for the off-line neural network training. The motor was started without load, and at the time 0.5 s the load was applied. The model was always adjusted according to the actual architecture of the tested RBF network.

Fig. 3. Block structure of the input-output training data acquisition
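A discrete-time sketch of the adaptation mechanism of equations (3)-(5) follows. The cross-product form of the adaptation signal and the explicit Euler integration of the PI term are assumptions made for illustration, and the function name is hypothetical; the paper's own implementation was a Matlab-Simulink model:

```python
def adaptation_step(psi_ref, psi_adp, integ, K1, K2, dt):
    """One step of the MRAS adaptation mechanism (sketch of eqs (3)-(5)).

    psi_ref -- (psi_alpha, psi_beta) from the voltage (reference) model
    psi_adp -- (psi_alpha, psi_beta) from the current (adaptive) model
    integ   -- running integral of the adaptation signal
    Returns the estimate of 1/Tr and the updated integral state.
    """
    # Flux errors between the reference and the adaptive model (eq. (3))
    e_a = psi_ref[0] - psi_adp[0]
    e_b = psi_ref[1] - psi_adp[1]
    # Adaptation signal: cross product of the error and the flux (eq. (4))
    eps = e_b * psi_adp[0] - e_a * psi_adp[1]
    # PI controller with K1 > 0, K2 > 0 (eq. (5)), Euler integration
    integ += eps * dt
    inv_Tr = K1 * eps + K2 * integ
    return inv_Tr, integ
```

When the two models agree, the adaptation signal is zero and the estimate of 1/Tr settles; a persistent mismatch keeps driving the integrator until the adaptive model tracks the reference model.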
Different types of training algorithms were tested and evaluated. Three training algorithms were used to test the main features of RBF neural networks: forward subset selection; ridge regression; regression trees. Of these, only the forward subset selection algorithm turned out to be useful for our purpose. This algorithm was variously modified together with changes of the RBF network (e.g. activation function, radius). The other methods may be useful for other problems. The desired output time behaviour is always depicted in the figures by the red dotted line.

RBF network with one feedback

The first type is the most common RBF network, with one feedback, without scaling and without white noise. Fig. 4 depicts this RBF architecture with the appropriate input variables; the delayed output 1/T_r(k-1) is fed back to the input. There are always three layers: the input layer, the hidden layer with the non-linear activation function, and the linear output layer. The input data of the adaptation mechanism, which were also used as the input training data set for the neural network, are shown in Fig. 5 (i_sα, i_sβ, ψ_rα, ψ_rβ, ω, 1/T_r).

Fig. 4. Architecture of the RBF neural network

Fig. 5. Input training data set

In the first RBF neural network, 97 RBF units were used. The output time behaviour is very good, as can be seen in Fig. 6; the difference between the adaptation mechanism (AM) and the RBF network (RBFN) is negligible (Fig. 7).

Fig. 6. Output signal 1/T_r = f(t) from RBFN and AM

Fig. 7. Difference between the RBFN and AM output signals

The next network used a sparser layout of 33 RBF units, and its response is also quite good (see Fig. 8). The last one used a denser layout of 365 RBF units, and the output was almost the same as with the 97 RBF units. There is then no reason to use this kind of structure, because of the higher memory demand and the higher computation time.

Fig. 8. Output signal 1/T_r = f(t) from RBFN and AM
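Forward subset selection, the one training algorithm found useful here, greedily adds one candidate RBF unit at a time, always the one that most reduces the least-squares training error. The sketch below illustrates the idea only: full implementations (e.g. in Orr [6]) use orthogonal least squares and model-selection criteria such as generalized cross-validation, while this version simply re-solves a plain least-squares problem at each step, and the function name is an assumption:

```python
import numpy as np

def forward_subset_selection(Phi, y, max_units, tol=1e-9):
    """Greedy forward selection of RBF units (simplified sketch).

    Phi -- candidate design matrix; column j holds the activations of
           candidate unit j over the whole training set
    y   -- training targets
    Returns the indices of the selected units and their output weights.
    """
    selected = []
    for _ in range(max_units):
        best_j, best_err = None, np.inf
        for j in range(Phi.shape[1]):
            if j in selected:
                continue
            # Try adding candidate j and measure the residual error
            cols = Phi[:, selected + [j]]
            w, *_ = np.linalg.lstsq(cols, y, rcond=None)
            err = np.linalg.norm(y - cols @ w)
            if err < best_err:
                best_err, best_j = err, j
        selected.append(best_j)
        if best_err < tol:  # stop early once the fit is good enough
            break
    # Final output-layer weights for the selected subset
    w, *_ = np.linalg.lstsq(Phi[:, selected], y, rcond=None)
    return selected, w
```

The number of selected units is exactly what distinguishes the sparser and denser layouts compared throughout this section.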
RBF without feedback connection

The next architecture of the RBF network did not include the feedback; its structure is shown in Fig. 9. In Fig. 10 it can be seen that this output behaviour is not the expected one. The network contains 81 RBF units. In Fig. 11 an improvement of the output curve is obvious, but the price was a higher number (261) of RBF units.

Fig. 9. Architecture of the RBF neural network

Fig. 10. Output signal 1/T_r = f(t) from RBFN and AM

Fig. 11. Output signal 1/T_r = f(t) from RBFN and AM

RBF with two feedback connections

Fig. 12 shows the RBF architecture with two feedback connections, with the delayed outputs 1/T_r(k-1) and 1/T_r(k-2) fed back to the input. As can be seen, the output time behaviour is also very good (Fig. 13), as in the first case. 120 RBF units were used. The next network used a sparser layout of 51 RBF units, and its response is also quite good (Fig. 14). The denser layout of the RBF network uses 261 RBF units, and it has the same problem as the network with one feedback.

Fig. 12. Architecture of the RBF neural network

Fig. 13. Output signal 1/T_r = f(t) from RBFN and AM

Fig. 14. Output signal 1/T_r = f(t) from RBFN and AM

RBF with scaled input variables

The next architecture comes from the idea of the feed-forward architecture, where the input values must be scaled because of the activation function. Fig. 15 depicts the scaled input training data set for the RBF neural network (i_sα, i_sβ, ψ_rα, ψ_rβ, ω, 1/T_r).
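The scaling step itself can be as simple as mapping each input channel into a fixed range. The paper does not state the exact range or method used, so the min-max form, the [-1, 1] default and the function name below are assumptions:

```python
import numpy as np

def scale_inputs(X, lo=-1.0, hi=1.0):
    """Min-max scale each input channel (column) of X into [lo, hi].

    Returns the scaled data and the per-channel (min, max) needed to
    scale new inputs the same way at run time.
    """
    xmin = X.min(axis=0)
    xmax = X.max(axis=0)
    scaled = lo + (X - xmin) * (hi - lo) / (xmax - xmin)
    return scaled, (xmin, xmax)
```

Scaling all channels into a common range keeps the distance criterion of the RBF units from being dominated by the channel with the largest physical magnitude, which is consistent with the smaller radii, centres and weights observed for the scaled networks.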
The classical layout with 116 units can be considered good enough (Fig. 16), but the network with a sparse layout (32 units) has a low-quality output curve (Fig. 17). Only the dense layout (436 units) provides an output curve (Fig. 18) comparable to the unscaled networks. One important difference was in the lower values of the inner network parameters such as the radius, the centres and the weights.

Fig. 15. Input training data set

Fig. 16. Output signal 1/T_r = f(t) from RBFN and AM

Fig. 17. Output signal 1/T_r = f(t) from RBFN and AM

Fig. 18. Output signal 1/T_r = f(t) from RBFN and AM

RBF with white noise

The idea behind adding bounded white noise was to reduce the importance of the weights and centres of the feedback connection. In feed-forward neural networks this leads to a smaller number of hidden units. The input training data set is depicted in Fig. 19 (i_sα, i_sβ, ψ_rα, ψ_rβ, ω, 1/T_r).

Only one type of RBF architecture, with the classic layout of RBF units, was used. The RBF neural network contains 215 activation units, so it was useless to continue with this type; this is discussed in the conclusions. Nevertheless, Fig. 20 depicts almost perfect output curves. It shows that the difference between the reference adaptive model and the RBF network can be neglected.

Fig. 19. Input training data set

Fig. 20. Output signal 1/T_r = f(t) from RBFN and AM

Conclusions

The paper deals with different architectures of the Radial Basis Function neural network. It must be said that the most common architecture of the RBF network, with one feedback connection, presents the best output time behaviour in comparison with the others. The RBF network without feedback connection presents a rather unstable
and inaccurate output. The RBF network with two feedbacks has very good output curves, but it requires a higher number of RBF units and more complicated connections, which speaks against this type. The next architecture, with scaled inputs, did not have better time behaviour either; the only result was lower values of the hidden-layer variables such as the radius, the centres and the weights. The last type, with white noise, reduces the number of hidden units in feed-forward neural networks, but it does not work with the RBF network; that is why other layouts of this type were not tested.

The result of this paper is that other types of RBF architecture, such as those with scaled input variables or without a feedback connection, can be used if necessary for some reason, but the mentioned disadvantages must then be considered. Some of the more interesting theoretical assumptions were verified on a real laboratory model with an induction motor controlled by a digital signal processor, together with a system for the training data acquisition.

Acknowledgements

The paper presents the results of the project 102/08/0755, which was supported by the Czech Science Foundation (GA CR).

References

1. Aghion H., Ursaru O., Lucanu M. Three-phase motor control using modified reference wave // Electronics and Electrical Engineering. Kaunas: Technologija, 2010. No. 3(99). P. 35-38.
2. Brandstetter P. A. C. Control Drives - Modern Control Methods. VSB - Technical University of Ostrava, 1999.
3. Brandstetter P., Simonik P. Signal Processing for Vector Control of AC Drive // Conference Proceedings, 20th International Conference Radioelektronika. Brno, Czech Republic, 2010. P. 93-96.
4. Dobrucky B., Spanik P., Kabasta M. Power electronic two-phase orthogonal system with HF input and variable output // Electronics and Electrical Engineering. Kaunas: Technologija, 2009. No. 1(99). P. 9-14.
5. Haykin S. Neural Networks: a Comprehensive Foundation. New Jersey: Prentice Hall, 1999.
6. Orr M. J. L. Introduction to Radial Basis Function Networks. University of Edinburgh, 1996.
7. Perdukova D., Fedor P.
Fuzzy Model Based Control of a Dynamic System // JEE - Journal of Electrical Engineering. University Politehnica, Romania, 2007. Vol. 7. No. 3. P. 5-11.
8. Vas P. Artificial Intelligence-based Electrical Machines and Drives, 1st ed. Oxford: Oxford University Press, 1999.

Received 2011 11 10

P. Brandstetter, P. Chlebis, P. Palacky, O. Skuta. Application of RBF Network in Rotor Time Constant Adaptation // Electronics and Electrical Engineering. Kaunas: Technologija, 2011. No. 7(113). P. 21-26.

The paper presents the results of a rotor time constant adaptation method with the application of an artificial neural network. The estimation of the rotor time constant for the adaptive model of the MRAS is realized with the help of a PI controller, which is then replaced by the Radial Basis Function network. The estimated rotor time constant is then used in the vector control of the electrical drive. Different architectures of the RBF network are discussed for the adaptation of the rotor time constant parameter. Simulations have been performed in Matlab-Simulink. Ill. 20, bibl. 8 (in English; abstracts in English and Lithuanian).

P. Brandstetter, P. Chlebis, P. Palacky, O. Skuta. Influence of the RBF Network on the Rotor Time Constant // Elektronika ir elektrotechnika. Kaunas: Technologija, 2011. Nr. 7(113). P. 21-26.

A method for the estimation of the rotor time constant using artificial neural networks is presented. Several RBF network structures are presented. Modelling was carried out using the Matlab software package. Ill. 20, bibl. 8 (in English; abstracts in English and Lithuanian).