Distributed Portfolio and Investment Risk Analysis on Global Grids

Rafael Moreno-Vozmediano 2, Krishna Nadiminti 1, Srikumar Venugopal 1, Ana B. Alonso-Conde 3, Hussein Gibbins 1, and Rajkumar Buyya 1

1 Grid Computing and Distributed Systems Lab, Dept. of Computer Science and Software Engineering, The University of Melbourne, VIC 3053, AUSTRALIA
2 Dept. of Computer Architecture, Universidad Complutense de Madrid, 28040 Madrid, SPAIN
3 Dept. of Business Administration (Finance), Universidad Rey Juan Carlos, 28032 Madrid, SPAIN

ABSTRACT

The financial services industry today produces and consumes huge amounts of data, and the processes involved in analysing these data are equally large, especially in terms of their complexity. The need to run these processes, analyse the data in time and obtain meaningful results can be met only to a certain extent by today's computer systems. Most service providers are looking to increase the efficiency and quality of their service offerings by stacking up more hardware and employing better algorithms for data processing. However, there is a limit to the gains achieved through such an approach. One viable alternative is to use emerging disruptive technologies such as the Grid. Grid computing and its application to various domains have been actively studied by many groups for more than a decade now. In this paper we explore the use of the Grid in the financial services domain, an area which we believe has not been adequately looked into.

1. INTRODUCTION

Investments in stocks almost always involve a risk-reward trade-off. To get higher returns on investment, an investor must be prepared to take on a higher level of risk. Investors aim to optimise their investment portfolio in order to minimise the risk and maximise the returns. However, there are many variables involved in portfolio optimization and, therefore, it is a very compute-intensive process. In this paper we explore the use of Grid technology to implement a distributed version of a portfolio optimization method, based on Value-at-Risk (VaR) estimation by means of Monte Carlo simulation.

The computational issues of common finance industry problems, such as option pricing, portfolio optimization and risk analysis, require the use of high-performance computing systems and algorithms. Traditional solutions to these problems involve the use of parallel supercomputers, which exhibits several drawbacks: high cost of the systems, highly qualified personnel for administration and maintenance, difficult programming environments (distributed memory or message passing), etc. In this context, Grid computing [8] is emerging as a promising technology for the next generation of high-performance computing solutions. This technology is based on the efficient sharing and cooperation of heterogeneous, geographically distributed resources, like CPUs, clusters, multiprocessors, storage devices, databases and scientific instruments. Computational Grids have been successfully used for solving grand-challenge problems in science and engineering. However, the use of this technology for computationally demanding applications in economics and finance has not been deeply explored.

A simple scenario for using the Grid in financial markets is shown in Figure 1. As more and more data is produced by stock markets, this data is fed into the Grid and analysed using the various Grid resources. A Grid resource broker acts as an access point to the Grid for the various investors who wish to carry out portfolio analysis to help them optimize their financial portfolios, make better investment decisions and, eventually, reap the benefits.

The rest of the paper is organized as follows. Section 2 describes the portfolio optimization method and the VaR application.
Section 3 presents a brief outline of some efforts to apply distributed computing to finance problems and also related work in the area of applied Grid computing in other fields such as science. Section 4 talks about some background Grid technologies that were used in our experiments. Section 5 deals with how the VaR application was Grid-enabled. In Section 6, we describe the experimental setup we used for evaluating the benefits of Grid-enabling the optimization process. In Section 7, we present the results of the experiments conducted. Finally, Section 8 concludes with a reflection on the whole experiment and the lessons learnt therein.

Figure 1. Simple scenario illustrating the use of the Grid in financial markets for portfolio analysis.
2. APPLICATION DESCRIPTION

2.1 Value-at-Risk based Portfolio Optimization

The aim of this section is to describe the VaR application, identify its computational complexity, and illustrate how it can benefit from being Grid-enabled.

Value-at-Risk (VaR) [5] is an important measure of the exposure of a given portfolio to the different kinds of risk inherent in financial environments, which can be used for portfolio optimization purposes. Given a portfolio P composed of k assets S = {S_1, S_2, ..., S_k}, and w = {w_1, w_2, ..., w_k} the relative weights or positions of the assets in the portfolio, the price of the portfolio at time t is given by:

P(t) = \sum_{i=1}^{k} w_i S_i(t)

The VaR of the portfolio can be defined as the maximum expected loss over a holding period Δt at a given level of confidence c, i.e.,

\mathrm{Prob}\{\Delta P(t) < -VaR\} = 1 - c

where ΔP(t) = P(t+Δt) - P(t) is the change in the value of the portfolio over the time period Δt.

In this context, the portfolio optimization problem can be stated either in terms of wealth maximization or in terms of risk minimization. If we consider the wealth maximization criterion, the optimization problem is to find the portfolio composition vector w which maximizes the expected portfolio yield ΔP(t), subject to a given constraint on VaR:

VaR \le V  and  \sum_{i=1}^{k} w_i = 1

On the other hand, if we consider the risk minimization criterion, the optimization problem is to find the portfolio composition vector w which minimizes the expected portfolio VaR, subject to a given constraint on yield:

\Delta P(t) \ge Y  and  \sum_{i=1}^{k} w_i = 1

Several methods for computing VaR have been proposed:
- Parametric models, like asset-normal VaR, delta-normal VaR, or delta-gamma-normal VaR.
- Non-parametric models, like historical simulation or Monte Carlo (MC) simulation.

The MC approach is based on simulating the changes in the values of the portfolio assets and revaluating the entire portfolio for each simulation experiment. The main advantage of this method is its theoretical flexibility, because it is not restricted to a given risk term distribution and its degree of accuracy can be improved by increasing the number of simulations.

For MC simulation purposes, the evolution of a single asset, S(t), can be modelled as a random walk following a Geometric Brownian Motion:

dS(t) = \mu S(t)\,dt + \sigma S(t)\,dW(t)

where dW(t) is a Wiener process, μ is the instantaneous drift and σ is the volatility of the asset. Assuming a lognormal distribution and using Itô's Lemma, this expression can be transformed into an Arithmetic Brownian Motion:

d(\ln S(t)) = (\mu - \sigma^2/2)\,dt + \sigma\,dW(t)

Integrating the previous expression over a finite time interval δt, we can reach an approximate solution for estimating the price evolution of S(t):

S(t+\delta t) = S(t)\,e^{(\mu - \sigma^2/2)\,\delta t + \sigma \eta \sqrt{\delta t}}

where η is a standard normal random variable.

For a portfolio composed of k assets, S_1(t), S_2(t), ..., S_k(t), the portfolio value evolution can be modelled as k coupled price paths:

S_i(t+\delta t) = S_i(t)\,e^{(\mu_i - \sigma_i^2/2)\,\delta t + \sigma_i Z_i \sqrt{\delta t}}

where the Z_i are correlated random variables with covariance

\mathrm{cov}(Z_i, Z_j) = \mathrm{cov}(S_i, S_j) = \rho_{ij}

To transform a vector of uncorrelated normally distributed random variables η = (η_1, η_2, ..., η_k) into a vector of correlated random variables Z = (Z_1, Z_2, ..., Z_k), we can use the Cholesky decomposition of the covariance matrix R = (ρ_{ij}), i, j = 1, ..., k:

R = A A^T

where R is assumed to be symmetric and positive definite, A is a lower triangular matrix, and A^T is the transpose of A.
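To make the decomposition step concrete, the following is a minimal C sketch of the standard Cholesky-Banachiewicz factorization (illustrative only; it is not taken from the authors' var program, and the function name and signature are assumed for this example). It fills the lower-triangular factor A such that R = A A^T, and reports an error if R is not positive definite:

#include <math.h>

/* Illustrative sketch only -- not the authors' var implementation.
   Cholesky factorization of a k x k symmetric, positive-definite
   covariance matrix r; fills the lower-triangular factor a so that
   r = a * a^T. Returns 0 on success, -1 if r is not positive definite. */
int cholesky(int k, const double r[k][k], double a[k][k])
{
    for (int i = 0; i < k; i++) {
        for (int j = 0; j <= i; j++) {
            double sum = r[i][j];
            for (int p = 0; p < j; p++)
                sum -= a[i][p] * a[j][p];
            if (i == j) {
                if (sum <= 0.0)
                    return -1;              /* not positive definite */
                a[i][i] = sqrt(sum);
            } else {
                a[i][j] = sum / a[j][j];    /* off-diagonal entry */
            }
        }
        for (int j = i + 1; j < k; j++)
            a[i][j] = 0.0;                  /* zero out the upper triangle */
    }
    return 0;
}

In the application described in Section 5, this factor is supplied precomputed through the cholesky.dat input file, so the simulation jobs do not need to repeat the decomposition.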
Applying the matrix A to the vector η then generates the new correlated random variables Z:

Z = A\,\eta

To simulate an individual portfolio price path for a given holding period Δt, using an m-step simulation path, it is necessary to evaluate the price path of all the k assets in the portfolio at each time interval:

S_i(t+\delta t), S_i(t+2\delta t), \ldots, S_i(t+\Delta t) = S_i(t+m\,\delta t), \quad i = 1, 2, \ldots, k

where δt is the basic simulation time-step, δt = Δt/m. For each simulation experiment j, the portfolio value at the target horizon is

P_j(t+\Delta t) = \sum_{i=1}^{k} w_i S_{i,j}(t+\Delta t), \quad j = 1, \ldots, N

where w_i is the relative weight of the asset S_i in the portfolio, and N is the overall number of simulations. The changes in the value of the portfolio are

\Delta P_j(t) = P_j(t+\Delta t) - P_j(t), \quad j = 1, \ldots, N

The portfolio VaR can be measured from the distribution of the N changes in the portfolio value at the target horizon, taking the (1-c)-percentile of this distribution, where c is the level of confidence.

The portfolio optimization problem is computationally demanding, since this MC simulation must be carried out for different portfolio composition vectors, w, in order to find the one which maximizes yield or minimizes risk. There are several techniques for limiting the solution space and shortening the overall simulation time, although they often fall into local minima. So, in practice, it could be necessary to simulate different weight compositions (several thousand scenarios), more complex portfolios (several hundred assets), more price paths (several million), or longer holding periods. However, an increase in the number of parameters also increases the simulation time significantly, and running several scenarios could potentially take several hours or even days on a single computer. Thus, the long turnaround time of the simulations motivates the use of High-Performance Computing (HPC) resources within the domain of portfolio analysis. However, the variable nature of such workloads makes it difficult to provision the right amount of resources for running them. Therefore, on-demand allocation of resources is required to handle expansions and contractions in the workload.

3. RELATED WORK

In recent times, the promise of Grid computing has led researchers and developers to apply the technology on different scales to a wide range of domains such as Bioinformatics [11], High Energy Physics [14], Neuroscience [4], Language Processing [12], Astronomy [18] and Earth Sciences [2]. Many of the groups that have made efforts towards scaling up their applications from clusters to Grids come from the scientific community. In the commercial world, the area of financial services can benefit hugely from distributed computing. Some companies in the finance business have already reaped good benefits from distributing their analysis and other resource-intensive applications across enterprise clusters [15][16][17]. Grids are the next logical step beyond clusters, and provide a better solution for large-scale compute- and data-intensive applications spanning multiple organisations with different policies and varying types of resources. The sharing of such heterogeneous resources in a service-oriented market paradigm will benefit all involved parties, due to the vastly higher potential of the Grid.

One of the many different approaches to achieving performance gains is to actually rewrite an application using the Message Passing Interface (MPI) or similar paradigms to distribute the work across multiple processors. In the context of computational economics and finance, one such work is described in [1]. However, this involves a lot of effort and time, and the application cannot adapt itself well to changing conditions such as are found in Grids.
The approach presented in this paper, of composing the application as a bag of independent tasks and letting a resource broker execute them, not only eliminates the need to rewrite applications but also offloads the parallelization logic onto the broker, thus isolating the application developer from the need to factor in the heterogeneity of Grid environments. Also, the resource broker is capable of allocating resources depending on varying application requirements, thus enhancing the scalability and adaptability of the process.

4. BACKGROUND GRID TECHNOLOGIES

The computational Grid is enabled by the use of software services known as Grid middleware. These services make possible secure and uniform access to heterogeneous resources for executing applications. There are many technology options today for running applications on remote computers that are part of a Grid. These include low-level middleware such as Globus [7], UNICORE (UNiform Interface to COmputing REsources) [19] and Alchemi [13], and user-level middleware or brokers which perform aggregation of Grid services and meta-scheduling, such as the Gridbus broker [14], Nimrod [3], Condor [10] and GRUBER (Grid Resource Usage SLA Broker) [6].

For the purpose of Grid-enabling portfolio optimization, our requirements included a system which automates, or makes it easy to conduct, the process of distributing the application, deploying and running it on Grid nodes, monitoring the progress, handling failures and collating the results of execution. Globus is a good choice of middleware as it is one of the most widely used low-level Grid middleware systems today in both research and commercial areas, and has wide community support and an active development group. The Gridbus broker, a user-level middleware that supports the Globus middleware, was chosen for this application as it provides simple mechanisms for rapidly formulating the application requirements and meets the requirements mentioned previously. A brief description of Globus and the Gridbus broker follows.

4.1 The Globus Toolkit

The open-source Globus Toolkit is a set of software services and libraries for resource monitoring, discovery and management,
plus security and file management. It facilitates the construction of computational Grids and Grid-based applications across corporate, institutional and geographic boundaries. The toolkit is developed and maintained by the Globus Alliance, which includes the Argonne National Laboratory, USA, and others. It allows secure access to remote computers via the GSI (Grid Security Infrastructure) and makes a node part of the Grid while preserving its autonomy, by using locally set policies to decide who can access the services offered and when. The toolkit includes software for security, information infrastructure, resource management, data management, communication, fault detection, and portability. It is packaged as a set of components that can be used either independently or together to develop applications.

<?xml version="1.0" encoding="UTF-8"?>
<xpml xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="xmlinputschema.xsd">
  <parameter name="scenario" type="integer" domain="range">
    <range from="0" to="99" type="step" interval="1"/>
  </parameter>
  <requirement type="node">
    <source location="local" file="cholesky.dat" />
    <destination location="node" file="cholesky.dat" />
    <source location="local" file="volat.dat" />
    <destination location="node" file="volat.dat" />
    <source location="local" file="input.dat"/>
    <destination location="node" file="input.dat"/>
    <source location="local" file="var" />
    <destination location="node" file="var" />
  </requirement>
  <task type="main">
    <source location="local" file="positions_$scenario.dat"/>
    <destination location="node" file="positions_$scenario.dat"/>
    <execute location="node">
      <command value="./var"/>
      <arg value="$scenario"/>
    </execute>
    <source location="node" file="output_$scenario"/>
    <destination location="local" file="output_$scenario"/>
    <source location="node" file="var_$scenario"/>
    <destination location="local" file="var_$scenario"/>
  </task>
</xpml>

Figure 2. Application description in XML.

4.2 The Gridbus Broker

The Gridbus service broker is a flexible, open-source, platform-independent resource brokering system, implemented in Java, which provides brokering services for the distributed execution of applications on various low-level middleware systems including Globus, UNICORE, Alchemi, XGrid [22], and queueing systems such as PBS (Portable Batch System) [20] and SGE (Sun Grid Engine) [21]. It hides the complexity of the Grid by translating bag-of-independent-tasks or parameter-sweep type applications into jobs that can be scheduled for execution on resources, monitoring those jobs and collating the results of the execution when finished. The broker acts as a user agent and makes scheduling decisions on where to place the jobs on the Grid depending on the characteristics of the computational resources (such as availability, capability, and cost), the user's quality-of-service requirements (such as the deadline and budget), and the proximity of the required data, or its replicas, to the computational resources.

5. GRID ENABLING THE VaR OPTIMIZATION APPLICATION

The VaR application is written in the C language, and is a simple program that is not directly aware of the Grid by itself; that is, it was not designed to run as a distributed application. A single run of the VaR application computes the value-at-risk for a portfolio of k assets by simulating N price paths of the stock movements over a holding period Δt, using a basic time step of δt. The assets are defined in a data file, volat.dat, with their volatility and drift information. The cholesky.dat input data file contains the Cholesky matrix of the asset covariances, while the portfolio composition vector w for each scenario is supplied in a separate positions file. The input parameters N, Δt, and δt are contained in another data file, input.dat.
The output it produces is a frequency distribution, which is used to get a measure of the portfolio VaR by taking the (1-c)-percentile of the distribution, where c is the level of confidence.
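To illustrate what a single run involves, the following is a simplified C sketch of the simulation loop and the percentile step (a hypothetical reimplementation written for this description, not the actual var program; names such as simulate_var and gauss, and the argument layout, are assumptions). It presumes the drift and volatility vectors, the Cholesky factor A and the portfolio weights w have already been read from the input files described above:

#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* Illustrative sketch only -- not the authors' var implementation. */

/* Standard normal random variable via the Box-Muller transform. */
static double gauss(void)
{
    const double PI = 3.14159265358979323846;
    double u1 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    double u2 = (rand() + 1.0) / ((double)RAND_MAX + 2.0);
    return sqrt(-2.0 * log(u1)) * cos(2.0 * PI * u2);
}

static int cmp_double(const void *x, const void *y)
{
    double d = *(const double *)x - *(const double *)y;
    return (d > 0) - (d < 0);
}

/* One VaR scenario: k assets, N simulated price paths of m steps of
   length dt, confidence level c. s0, mu, sigma, w are per-asset arrays;
   A is the k x k lower-triangular Cholesky factor in row-major order. */
double simulate_var(int k, long N, int m, double dt, double c,
                    const double *s0, const double *mu, const double *sigma,
                    const double *w, const double *A)
{
    double *dP  = malloc(N * sizeof *dP);   /* change in portfolio value per path */
    double *s   = malloc(k * sizeof *s);
    double *eta = malloc(k * sizeof *eta);

    double p0 = 0.0;
    for (int i = 0; i < k; i++)
        p0 += w[i] * s0[i];                 /* initial portfolio value */

    for (long j = 0; j < N; j++) {
        for (int i = 0; i < k; i++)
            s[i] = s0[i];
        for (int step = 0; step < m; step++) {
            for (int i = 0; i < k; i++)
                eta[i] = gauss();           /* uncorrelated standard normals */
            for (int i = 0; i < k; i++) {
                double z = 0.0;             /* correlated shock Z_i = (A*eta)_i */
                for (int q = 0; q <= i; q++)
                    z += A[i * k + q] * eta[q];
                s[i] *= exp((mu[i] - 0.5 * sigma[i] * sigma[i]) * dt
                            + sigma[i] * z * sqrt(dt));
            }
        }
        double pT = 0.0;
        for (int i = 0; i < k; i++)
            pT += w[i] * s[i];
        dP[j] = pT - p0;
    }

    /* VaR is the (1-c)-percentile of the distribution of changes in value. */
    qsort(dP, N, sizeof *dP, cmp_double);
    double var = -dP[(long)((1.0 - c) * N)];

    free(dP); free(s); free(eta);
    return var;
}

In the Grid setting, each scenario simply corresponds to one such run invoked with a different positions file, which is what makes the application embarrassingly parallel.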
Grid enabling the VaR application involves running the same application over multiple data sets or input parameters, in order to simulate different scenarios of stock movements. As such, this application fits nicely into the parameter-sweep paradigm and is embarrassingly parallel, as each run of VaR is independent of any other run. To run the application on the Grid using the Gridbus broker, we described the application using the declarative XML-based eXtensible Parametric Modelling Language (XPML) provided by the broker, as it offered an easy way to vary the parameters and re-run the application. XPML allows us to specify the inputs, executable files and outputs generated by the VaR application. The XPML file shown in Figure 2 describes the application as consisting of a parameter ranging from 0 to 99 (i.e. 100 scenarios for computing VaR). The task performed by each job in the application is described by a sequence of commands which copy files and execute the VaR program. More details about the specific experiment runs conducted are given in the next section.

6. EXPERIMENTS AND EVALUATION

To evaluate the benefits the Grid brings to this finance application, we conducted three sets of experiments, as shown in Table 1.

Table 1. Description of experiments

Set    Experiments  Description
Set 1  3            Computes VaR on a single computer, running a single scenario with different values for the Δt (holding period) parameter.
Set 2  4            Evaluates application performance in terms of speed, with fixed job size (i.e. using the same parameters) and a varying number of Grid nodes.
Set 3  3            Evaluates application performance with varying job size and the same set of Grid nodes, by computing VaR on a Grid of 5 nodes running 100 different scenarios with different values for Δt (holding period); enables comparison of the outputs with those from experiment Set 1.

For our experiments we varied the input parameters Δt (holding period) and δt (time step), and used k = 76 assets and N = 500,000 price paths along which the stocks could vary. The assets were derived from a real investment product and are companies trading on the Madrid Stock Exchange in Spain.

The first set involved running one scenario on one computer, varying the holding period parameter (Δt), with number of simulations N = 500,000, number of assets k = 76, and a basic time step of δt = 1 day. These experiments aimed to investigate the effect of varying input parameters on the output VaR computed. Table 2(a) shows the input parameters of the three experiments from the first set. These simulations were run on a single computer with an Intel P4 processor at 2.5 GHz, 512 MB RAM, and Linux OS.

Table 2(a). Parameters for simulation experiments 1-3 (Set 1)

Set 1     Assets (k)  Scenarios  Simulations (N)  Holding Period (Δt)  Basic time step (δt)  Time steps (m) = Δt/δt
Exper. 1  76          1          500,000          1 day                1 day                 1
Exper. 2  76          1          500,000          5 days               1 day                 5
Exper. 3  76          1          500,000          10 days              1 day                 10
[Note: Total Investment (USD) = 60.8 million]

Table 2(b). Grid application parameters used for the performance experiment with varying number of Grid nodes (Set 2)

Set 2     Assets (k)  Scenarios  Simulations (N)  Holding Period (Δt)  Basic time step (δt)  Time steps (m) = Δt/δt  Grid nodes
Exper. 1  76          100        100,000          1 day                1 day                 1                       1
Exper. 2  76          100        100,000          1 day                1 day                 1                       2
Exper. 3  76          100        100,000          1 day                1 day                 1                       3
Exper. 4  76          100        100,000          1 day                1 day                 1                       4
[Note: Total Investment (USD) = 60.8 million]

Table 2(c). Parameters for simulation experiments 1-3 (Set 3)

Set 3     Assets (k)  Scenarios  Simulations (N)  Holding Period (Δt)  Basic time step (δt)  Time steps (m) = Δt/δt  Grid nodes
Exper. 1  76          100        500,000          1 day                1 day                 1                       5
Exper. 2  76          100        500,000          5 days               1 day                 5                       5
Exper. 3  76          100        500,000          10 days              1 day                 10                      5
[Note: Total Investment (USD) = 60.8 million]
The second set of experiments aimed simply to confirm that Grid-enabling the VaR application was useful in terms of application performance. Four experiments with a varying number of Grid nodes were done, keeping the application parameters k, N, Δt, and δt constant. The parameter values used in this set of experiments are shown in Table 2(b).

Finally, a third set of experiments, similar to those in the first, was conducted on a Grid of 5 nodes. These involved running 100 different scenarios on Grid nodes by varying the input parameter Δt (holding period). In addition to serving as an indication of application performance with varying simulation parameters, these tests were also useful for obtaining outputs, from distributing the VaR application on the Grid, which could be compared with the outputs obtained by running one scenario on a single computer (Set 1). The application parameters used for Set 3 of the experiments are shown in Table 2(c).

For the Grid experiments (Set 2 and Set 3), the Belle analysis testbed data Grid - which has resources distributed around Australia, including Melbourne, Adelaide and Canberra - was used. These systems are interconnected via GrangeNet (Grid and Next generation Network), a multi-gigabit network supporting Grid and advanced communication services across Australia. The broker was deployed on a PC at the GRIDS Lab (bart.cs.mu.oz.au) at the University of Melbourne, and the agents were dispatched to other resources at runtime by the Gridbus broker. The performance tests aimed to determine the effect of an increasing number of Grid nodes for a fixed job size and number of jobs. The testbed resources are shown in Table 3.

7. RESULTS

Figures 3 (a), (b), and (c) plot the frequency distribution graphs resulting from the simulations of the Set 1 experiments 1, 2, and 3 respectively, and Table 4 summarizes some VaR estimation values for different levels of confidence c, obtained from the frequency graphs. For example, if we hold the portfolio investment for 1 day, the probability of losing more than 5 million dollars is lower than 1% (c = 99%). For 5 days, the probability of losing more than 10 million dollars is around 1% (c = 99%); however, if we hold the portfolio investment for 10 days, the probability of losing more than 10 million dollars is 10% (c = 90%).

The results for the second set of experiments are shown in Figure 5. This shows the performance of distributing the simulation over different Grid nodes. The main parameters of this simulation are summarized in Table 5. In this case we have simulated 100 different scenarios over a holding period (Δt) of 1 day, with a basic time step (δt) of 1 day, and 500,000 price paths per scenario (N). As we can see, the simulation of the 100 scenarios on a single computer takes around 67 minutes. If we distribute these simulations over different Grid nodes, we can obtain a significant time reduction; for example, using 4 computing nodes, the resulting simulation time is halved (33 min.).

The results for experiment Set 3, shown in Figures 4 (a)-(c), plot the frequency distribution graphs resulting from the simulations of the Set 3 experiments 1-3 respectively. These results are similar to those in Set 1, as the application input parameters were varied in the same way, except that the experiment was conducted over 100 scenarios in each case, over a Grid. Table 6 summarizes the VaR estimation values for different levels of confidence (c), obtained from the frequency graphs resulting from experiments 1-3 of Set 3 (running the VaR application on the Grid). The values produced from running the VaR application on the Grid testbed for 100 scenarios are given in Table 6. This was done by computing 100 different frequency distributions (one for each scenario), and obtaining 100 different VaR values (for a given level of confidence).
Then, the lowest (absolute) value of VaR is selected, as the scenario with this value is likely to be the best one, because the loss of money on the investment is likely to be lower. Comparing the values in Table 4 (for 1 scenario) and Table 6, we see that those in the latter are lower than the former. While the values are still probabilistic, they are better estimates of the VaR, as more scenarios were considered in the evaluation.

Table 4. VaR values for the three simulation experiments from Set 1

Set 1         c=90.0%       c=95.0%       c=97.0%       c=99%
Experiment 1  2.8 million   3.4 million   3.8 million   4.4 million
Experiment 2  6.7 million   8.1 million   9.1 million   10.8 million
Experiment 3  10.1 million  12.2 million  13.4 million  15.8 million

Table 5. Application performance results (Set 2)

Set 2     Simulations (N)  Holding Period (Δt)  Basic time step (δt)  Grid nodes  Time taken (minutes)
Exper. 1  500,000          1 day                1 day                 1           67
Exper. 2  500,000          1 day                1 day                 2           59
Exper. 3  500,000          1 day                1 day                 3           46
Exper. 4  500,000          1 day                1 day                 4           33

Figure 5. VaR application performance on a Grid with a varying number of Grid nodes (Set 2). [Chart: time taken (minutes) versus number of compute nodes: 67, 59, 46 and 33 minutes for 1, 2, 3 and 4 nodes respectively.]
Figure 3. Frequency graphs for Set 1, Experiments 1-3 (number of scenarios = 1): (a) holding period = 1 day, (b) holding period = 5 days, (c) holding period = 10 days.

Figure 4. Frequency graphs for Set 3, Experiments 1-3 (number of scenarios = 100): (a) holding period = 1 day, (b) holding period = 5 days, (c) holding period = 10 days.
Table 3. Resources used in the experiments

Server Name                Owner Organisation                            Configuration                                      Grid Middleware
belle.cs.mu.oz.au          GRIDS Lab, The University of Melbourne        IBM eServer with 4 CPUs                            Globus v.2.4
belle.anu.edu.au           Australian National University, Canberra      IBM eServer with 4 CPUs                            Globus v.2.4
belle.physics.usyd.edu.au  School of Physics, The University of Sydney   IBM eServer with 4 CPUs                            Globus v.2.4
lc1.apac.edu.au            APAC, Canberra                                154-node, 156-CPU 2.8 GHz Dell P4 Linux cluster    Globus v.2.4
manjra.cs.mu.oz.au         GRIDS Lab, The University of Melbourne        x86 Linux cluster with 13 nodes                    Globus v.4.0

Figure 6 shows the application performance when run on a Grid of 5 nodes simulating 100 scenarios (constituting 100 Grid jobs), with varying input parameters. The performance results are summarized in Table 7.

Table 6. VaR values for the three simulation experiments from Set 3

Set 3         c=90.0%      c=95.0%      c=97.0%       c=99%
Experiment 1  2.5 million  3.1 million  3.5 million   4.1 million
Experiment 2  5.7 million  6.9 million  7.7 million   9.0 million
Experiment 3  8.1 million  9.8 million  10.9 million  12.7 million

Table 7. Application performance results (Set 3)

Set 3     Simulations (N)  Holding Period (Δt)  Basic time step (δt)  Grid nodes  Time taken (minutes)
Exper. 1  500,000          1 day                1 day                 5           46
Exper. 2  500,000          5 days               1 day                 5           58
Exper. 3  500,000          10 days              1 day                 5           134

Figure 6. Application performance with varying input parameters running on the Grid (Set 3). [Chart: time taken (minutes) versus number of simulation time steps (1, 5, 10): 46, 58 and 134 minutes respectively.]

8. SUMMARY AND CONCLUSION

In this paper, we have explored the application of Grid technologies within the financial services domain by executing a portfolio optimization application that estimates the Value-at-Risk for a given asset portfolio through Monte Carlo simulation. We have utilised readily available Grid technologies and have shown how, with the use of a simple, declarative interface and without rewriting the application, it is possible to execute a sequential, single-machine application on aggregated Grid resources. From the results of our execution, it is evident that running on a Grid reduces the time of execution significantly. Also, a user is able to run the application for more scenarios and receive a better estimation of VaR in a shorter period of time. However, this is only one of the ways in which Grid technologies can be applied in this domain. While, in our evaluation, the asset values have been provided in a static file, it is possible to envisage a service that aggregates information from various stock quote providers and performs VaR analysis for a given portfolio over a Grid. This would be able to make use of the emerging Service-Oriented Architecture (SOA) paradigm that has been realized in Grid computing through Grid services [9].

REFERENCES

[1] A. Abdelkhalek and A. Bilas, Parallelization, Optimization, and Performance Analysis of Portfolio Choice Models, in Proceedings of the 30th International Conference on Parallel Processing (ICPP 2001), Valencia, Spain, September 3-7, 2001.

[2] B. Allcock, I. Foster, V. Nefedova, A. Chervenak, E. Deelman, C. Kesselman, J. Lee, A. Sim, A. Shoshani, B. Drach, and D. Williams, High-performance remote access to climate simulation data: a challenge problem for data grid technologies, in Proceedings of the 2001 ACM/IEEE Conference on Supercomputing (SC '01), Denver, CO, USA, ACM Press, November 2001.

[3] R. Buyya, D. Abramson, and J. Giddy, Nimrod-G Resource Broker for Service-Oriented Grid Computing, IEEE Distributed Systems Online, Volume 2, Number 7, November 2001.

[4] R. Buyya, S. Date, Y. Mizuno-Matsumoto, S. Venugopal, and D. Abramson, Neuroscience Instrumentation and Distributed Analysis of Brain Activity Data: A Case for eScience on Global Grids, Journal of Concurrency and Computation: Practice and Experience, Volume 17, No.
15, Wiley Press, New York, USA, Dec. 2005.

[5] D. Duffie and J. Pan, An Overview of Value at Risk, Journal of Derivatives, Spring 1997, vol. 4, pp. 7-49, Institutional Investor Inc.

[6] C. Dumitrescu and I. Foster, GRUBER: A Grid Resource Usage SLA-based Broker, in Proceedings of Euro-Par 2005, Aug 30 - Sep 2, 2005, Lisbon, Portugal.

[7] I. Foster and C. Kesselman, Globus: A Metacomputing Infrastructure Toolkit, International Journal of Supercomputer Applications, 11(2):115-128, 1997.
[8] I. Foster and C. Kesselman (editors), The Grid: Blueprint for a New Computing Infrastructure, Morgan Kaufmann Publishers, USA, 1999.

[9] I. Foster, C. Kesselman, and S. Tuecke, The Anatomy of the Grid: Enabling Scalable Virtual Organizations, International Journal of High Performance Computing Applications, vol. 15, pp. 200-222, Sage Publishers, London, UK, 2001.

[10] J. Frey, T. Tannenbaum, I. Foster, M. Livny, and S. Tuecke, Condor-G: A Computation Management Agent for Multi-Institutional Grids, in Proceedings of the International Symposium on High Performance Distributed Computing (San Francisco, CA, 2001), pp. 55-67.

[11] H. Gibbins, K. Nadiminti, B. Beeson, R. Chhabra, B. Smith, and R. Buyya, The Australian BioGrid Portal: Empowering the Molecular Docking Research Community, in Proceedings of the 3rd APAC Conference and Exhibition on Advanced Computing, Grid Applications and eResearch (APAC 2005), Sept. 26-30, 2005, Gold Coast, Australia.

[12] B. Hughes and S. Bird, Grid-Enabling Natural Language Engineering By Stealth, in Proceedings of the HLT-NAACL 2003 Workshop on Software Engineering and Architecture of Language Technology Systems (SEALTS), pp. 31-38, Association for Computational Linguistics, 2003.

[13] A. Luther, R. Buyya, R. Ranjan, and S. Venugopal, Alchemi: A .NET-Based Enterprise Grid Computing System, in Proceedings of the 6th International Conference on Internet Computing (ICOMP'05), June 27-30, 2005, Las Vegas, USA.

[14] S. Venugopal, R. Buyya and L. Winton, A Grid Service Broker for Scheduling e-Science Applications on Global Data Grids, Journal of Concurrency and Computation: Practice and Experience, Wiley Press, USA (accepted in Jan. 2005).

[15] An Overview of Grid Computing in Financial Services [Sep 2005], http://www.jayeckles.com/research/grid.pc

[16] What's So Great About Grid?, http://www.banktech.com/features/showarticle.jhtml?articleid=400554&pgno=5

[17] Texas Tech University Performs Stock Price Analysis in Hours Instead of Days [2005 SAS Institute Inc.], http://support.sas.com/rnd/scalability/grid/ttu.html

[18] E. Deelman, C. Kesselman, G. Mehta, L. Meshkat, L. Pearlman, K. Blackburn, P. Ehrens, A. Lazzarini, R. Williams, and S. Koranda, GriPhyN and LIGO: Building a Virtual Data Grid for Gravitational Wave Scientists, in Proceedings of the 11th IEEE International Symposium on High Performance Distributed Computing (HPDC'02), July 24-26, 2002, IEEE Computer Society, Washington, DC, p. 225.

[19] UNICORE Grid middleware, http://www.unicore.org

[20] Portable Batch System, http://www.openpbs.org/

[21] Sun Grid Engine, http://www.sun.com/software/gridware/index.xml

[22] Apple Xgrid, http://www.apple.com/server/macosx/features/xgrid.html