DynamO: Dynamic Objects with Persistent Storage

Jiong Yang, Silvia Nittel, Wei Wang, and Richard Muntz
University of California, Los Angeles
Department of Computer Science
Los Angeles, CA 90095

Abstract

In light of advances in processor and networking technology, especially the emergence of network-attached disks, the traditional client-server architecture becomes suboptimal for many computation/data intensive applications, e.g., data mining, scientific computing, image processing, etc. In this paper, we introduce a revised architecture for this kind of application: the dynamic object server environment (DynamO). The main innovation of this architecture is that the functionality of a persistent storage server is divided into modules which are dynamically migrated to the client on demand. Also, data is transferred directly to the client's cache from network-attached disks, thus avoiding multiple copies from a disk to the server buffer to the network and the client. In this way, a client only places a small load on the server, and also avoids the I/O bottleneck on the server. Furthermore, DynamO employs a distributed cache management allowing several clients to share in-memory data by using the concept of "who uses it, serves it to others". We show via simulation models how this architecture increases the system's adaptability, scalability and cost performance.

1 Introduction

Client-server architectures have been popular over the past decades, and have also gained wide acceptance in the database community. In a client-server architecture, the system code is divided into two portions: one portion, the so-called server, provides basic services such as data I/O, buffer management and concurrency control, and runs on a dedicated machine such as a workstation or SMP; the other portion, the so-called client, provides the API and executes on the application machine. Normally, a server interacts with many clients. In such an architecture, the scalability and performance of the overall system significantly depend on the compute power, aggregate bandwidth, etc. of the server machine, the data I/O rate of clients, and the scalability of the server itself.

Advances in processor and local area network technology make it possible for different software architectures to emerge. In recent years, two major trends in hardware development have impacted the efficiency of the client-server architecture: the emergence of network-attached storage, and the increase of CPU power of server and client machines. At the beginning of the nineties, the typical bandwidth of a "fast" network was in the range of 10 Mbit/sec, while a typical system bus' bandwidth was in the range of tens of MByte/sec, making the network the bottleneck for data intensive computing and disk I/O. Servers used directly attached disks and often processed data first and sent the output data to clients, thus reducing the load on the network. Today, point-to-point connected Fibre Channel is capable of transferring data at 100 MByte/sec, and the industry projection is that its bandwidth will reach 400 MByte/sec soon [FCA], while during the past decade the sustained bandwidth of the typical system bus has increased by only several tens of MByte/sec. Thus, the network is no longer the bottleneck in a LAN environment, and it is feasible to attach storage devices directly to the network instead of to a server machine.

The rate of increase in the bandwidth of a single disk is about 40% a year, and the price per MB drops about 60% per year [Gro96]. By the year 2000, a megabyte of disk will cost about four cents, and each individual disk will sustain a bandwidth on the order of 40 to 50 MB/sec. This trend suggests that systems will have much higher aggregate disk I/O bandwidth in the near future (400 MB/sec point-to-point) if attached to a Fibre Channel based network. As a result, large aggregate disk bandwidth is not a problem until we consider how to get the data to the processor, considering that the sustained bandwidth of a single S-bus or PCI bus is in the range of 20 to 60 MB/sec [ARP97] today, especially if we assume a single server machine that performs the disk I/O for many client machines.

Another relevant hardware trend concerns the CPU power of client and server machines. Ten years ago, server machines were equipped with much more powerful CPUs than the client machines. As a result, servers were designed to perform most of the work within a client-server based system. These facts have changed dramatically during the past ten years. Today, client machines are equipped with powerful CPUs similar to server machines; furthermore, they are normally less utilized than server machines. The average number of CPUs in a server machine is up to thirty (in an SMP machine) compared to two or four CPUs in current client workstations; however, for applications that do not exhibit high parallelism, it does not necessarily increase response time if more work is transferred to the clients. Usually, there are many more client machines than server machines, and the aggregate compute power of the clients is often greater than that of the server machines. (Under these circumstances, moving some of the workload to clients can relieve the server and reduce queueing delays.)
Based on the technology trends discussed above, we introduce a more scalable approach to persistent object systems: Dynamic Objects with Persistent Storage (DynamO). DynamO provides an application interface and functionality similar to traditional persistent object systems (POS) such as Exodus [Car86], Mneme [Mos88] or Kiosk [Nit96], and allows the storing, clustering and retrieving of storage objects, each of which consists of an object identifier and an unstructured byte container. DynamO's architecture, however, is different from the traditional client-server architecture of such systems. The system has a layered architecture consisting of an I/O layer, a buffer management layer, and an object management layer, also providing transaction management. In DynamO, the object management layer resides on the client machine, and interacts with a server part on the server machine. However, the server part is much smaller in DynamO, and acts like a coordinator. At data access time, the necessary server code for buffer management and catalog information is dynamically downloaded to the client machine, and then runs on the client machine. (The dynamic download is not really necessary; alternatively, it could reside permanently on the client machine.) The object management layer communicates with the coordinator on the server machine about the location of relevant data. However, instead of loading data through the server machine, the object management on the client machine interacts with DynamO's I/O layer that resides on the disk controllers of the network-attached disks. This I/O layer performs physical and logical device management, and provides the abstraction of data pages to the object management layer. Requested data is directly retrieved from the network-attached disk and cached locally on the client machine, thus eliminating the bottleneck caused by the server's bus bandwidth limitation (see Figure 1).
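The difference between the two data paths can be made concrete with a rough back-of-the-envelope model. The following sketch (illustrative only; the request size and the two-copy assumption are ours, not measurements from the paper) counts the bytes that must cross the server's system bus per client request in each architecture:

```python
# Rough, illustrative model of the two data paths. The 8 MB request size
# and the two-crossings assumption are hypothetical, for illustration.

def server_bus_bytes(architecture: str, request_bytes: int) -> int:
    """Bytes that must cross the *server* system bus per client request."""
    if architecture == "client-server":
        # disk -> server buffer (in), then server buffer -> network (out)
        return 2 * request_bytes
    if architecture == "dynamo":
        # data flows disk -> network -> client cache, bypassing the server
        # bus; only a small coordination message touches the server (ignored)
        return 0
    raise ValueError(f"unknown architecture: {architecture}")

req = 8 * 1024 * 1024  # an 8 MB object fetch
print(server_bus_bytes("client-server", req))  # 16777216
print(server_bus_bytes("dynamo", req))         # 0
```

With N concurrent clients, the client-server path multiplies the bus traffic by N, while the DynamO path keeps the server bus load independent of N, which is the intuition behind the bottleneck claim above.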
DynamO eliminates the traditional buffer architecture of POS, in which the system buffer resides on the server machine. In DynamO, each client acts as a cache for local data that is shared with other clients, thus providing a distributed cache. Since the collective memory of all client machines is usually much larger than that of the server machines, the cache hit rate can be improved and disk I/O avoided. For example, it is reported that the cache hit rate doubles in the NOW environment, which employs this type of distributed client cache scheme [And96].

The redesigned architecture of a persistent object system accounts for higher performance and significantly improved scalability. In a data intensive computing application, such as data intensive persistent programs, the server machine's bus can easily become a bottleneck in the traditional architecture. However, in DynamO, since the data does not go through the server machine's bus, this bottleneck is eliminated. On the other hand, for computation- and data-intensive applications, such as database applications with a large number of clients, a large percentage of work is done on the server machine(s), so that the server machine's CPU is highly utilized in the client-server environment, thus impacting the performance of the overall system. In DynamO, most of this same work is done on client machines; thus, the server machine's CPU will not saturate as quickly, and the proposed architecture is more scalable. Also, for data-intensive real-time applications, such as multimedia, the number of clients a server can accommodate is limited by the server machine's compute power and the aggregate bandwidth it can support in the client-server environment. However, in DynamO, it is only limited by the aggregate bandwidth of the network and the aggregate compute power of the client machines, which is much larger. Our simulation results show that DynamO has much better scalability and performance than the traditional client-server architecture.

We do not claim that DynamO works better than the traditional client-server architecture for all applications. However, if an application requires a large number of CPU cycles and/or access to a large quantity of data, then moving method execution to clients and enabling direct access to storage devices can effectively remove the server bottleneck.

The remainder of this paper is organized as follows. We introduce related work in Section 2, and discuss the requirements for and problems of a distributed persistent object system in Section 3. In Section 4, we present and discuss the DynamO approach, and compare the performance of DynamO with the traditional approach to persistent object systems in Section 5. Section 6 contains our conclusions and future work.

[Figure 1: Application processing paradigms: the numbered request/response steps of (a) the client-server architecture, where the server performs I/O against local disks and returns results, and (b) the DynamO architecture, where the server sends the necessary code and an I/O handle to the client, and the client retrieves data directly from network-attached storage (NAS) and executes the code on it.]

2 Related Work

Work related to DynamO can mostly be found in the area of file systems for distributed environments using network-attached disks, and in the research done on network-attached disks, distributed cache management, and delegation of processing to clients.

2.1 Serverless Network File System

As clients are added to a LAN, the file server can become saturated. To address this problem, the serverless network file system (xFS) was developed on the Network of Workstations (NOW) at the University of California at Berkeley [And96]. All workstations are connected by a fast local area network, and disk devices are attached to all workstations. In NOW, part or all of the client workstations can act cooperatively as a file manager or a storage server, or both. Here a file manager maps a file into a set of pages, while a storage server maps pages into disk blocks. As a result, this file system architecture provides high scalability. Moreover, it employs a cooperative cache technique. When one client tries to access data which is not cached in its memory, it asks the distributed file manager for that data. In turn, the file manager checks whether or not the data is cached at some other client. If so, the cached data is sent to the client. Otherwise, the requested data is fetched from disks.

NOW successfully uses all workstations' memory and bus bandwidth. As a result, the cache hit ratio is higher and better scalability of the file server is achieved [And96]. Moreover, client workstations not only act in concert as the file manager and storage server, but also execute application code. Therefore, the workloads on clients can be highly variable, and the utilization of resources, e.g., memory, CPUs, disks, and system buses, can be quite different among peer workstations. Which pages of a file are served by which file manager is determined statically; this means that a file manager manages a fixed set of pages. In an environment where the resources can change frequently (e.g., people bring in their own laptops and plug into the network in the morning, and bring the laptops home at night), how to utilize these resources as file managers becomes a challenge which is not addressed in the xFS system. Thus, how to balance the utilization of resources among all collaborating workstations remains an open question. To balance the workload evenly, DynamO allows dynamic change of ownership (file manager) of files (data); thus it provides the mechanisms to support adaptability to dynamic workload and resource environments.

In addition, xFS was developed for UNIX file systems, and uses a log-based file writing technique. However, this technique does not work well in the environment of databases or object servers, because databases and object servers usually have explicit requirements for data allocation, e.g., sequential access, which is not well served by a log-structured file system. Furthermore, xFS uses traditional server-attached disks. The storage server executes on client workstations and consumes many precious workstation CPU cycles. To conserve these workstation CPU cycles, DynamO puts much of the storage server functionality (i.e., the I/O manager) on the network-attached disk controllers.

2.2 NASD

Usually, when a file server retrieves data from storage, it first copies the data to its own memory, then sends the data to the clients. In order to eliminate copying the data to the server's buffers first, some file systems today use a "third party transfer" mechanism such as, e.g., the Network-Attached Secure Disks (NASD) system, a current research project at CMU [Gib97]. In NASD, a client contacts the file manager on the server machine when it tries to open a file. The file manager verifies whether the client has permission to access that file. If so, the file manager notifies the disk controller and gives a file handle to the client. On subsequent read accesses, the client does not need to contact the file manager; it can directly contact the disk controller.

NASD still employs a centralized file manager that enforces consistency control. In addition, the NASD project focuses more on security issues. DynamO is more focused on the issues of distributed cache management; it uses a distributed object manager and a distributed cache manager, which can provide better scalability.
2.3 Condor and Thin Clients

The server bottleneck has been a long-fought battle, and several solutions have been proposed. We refer to two influential approaches: Condor and the 'thin clients' architecture. Condor [Tan95] treats all workstations in its environment, connected via a fast local area network, as a pool of resources: memory, disk, and processors. If a workstation becomes idle and a job is waiting, the job is assigned to the idle workstation for execution. Once the user of the workstation invokes its own computations, the Condor controlling daemons will halt execution of any "visiting" job and move its execution to another machine. However, Condor is not designed for persistent object servers, because any program can be executed on any machine. DynamO addresses the persistent object server issues by dynamically loading the server functionality to all client machines.

The thin client architecture is another innovative approach to dynamically move the execution of application code. "Thin" refers to both the machine and the application. A thin client (machine) has no hard disk and minimal memory; thus, this approach is only useful for simple computations using small amounts of data. In the 'thin clients' architecture, clients dynamically download the code and data that they need for the execution of a user job. Data intensive execution, however, has to be moved to the server machine. Also, there is no cache kept on the client machine, and it is less feasible for database applications.
2.4 Related Work on Cache Coherency Control

With the advent of distributed and shared-disk environments, some work related to DynamO's cache coherency system has been done. Similar to DynamO, the work of [Dias89] combines the CPU power of several low-end computer systems, and introduces integrated concurrency-coherency control to reduce the overhead of coupling a large number of systems. Furthermore, this system uses an 'intermediate' buffer that can be accessed by all systems so that data I/O is minimized for all participants. It is not clear whether in this approach the intermediate buffer is made available by one machine or via shared main memory of all the participating systems; however, as in DynamO, each of the participating systems manages a disjoint region of the intermediate buffer. In contrast to DynamO, the intermediate buffer partitions are statically assigned here, lacking the flexibility DynamO offers by assigning work based on usage pattern and the actual workload of the client system. Furthermore, DynamO employs a more flexible scheme of allocating and managing chunks of memory in order to keep the management effort per page minimal, since we assume that a large number of pages is managed in such a system.

3 Scalability and Performance of a Persistent Object System

When designing and implementing a persistent object system for certain applications, such as database systems with a large number of clients, scientific data- and computation-intensive persistent applications, and multimedia applications, the scalability of the client-server architecture and the server bottleneck resulting from a server machine's system bus bandwidth limitation are important design issues to consider. In this section, we describe the basic architecture of a persistent storage system for data- and computation-intensive applications, and discuss its bottlenecks and problems related to scalability.

3.1 Overview

Storage object servers have been developed as storage back-ends for persistent programming languages and non-standard database management systems (DBMS) such as object-oriented DBMS. Today, this storage system technology is also used for high-intensity applications such as large database systems with many users, or as storage systems for data- and computation-intensive scientific programming systems as well as multimedia applications.

A storage system offers applications storage objects consisting of a storage object identifier and an unstructured, variable-sized byte container. The middle layer between the I/O manager and the application interface consists of a specialized buffer management, employing flexible buffering strategies that support the specialized access behavior of the above-mentioned applications and improve the buffer hit rate.
While the classical client-server architecture for a storage object server has performed well for OODBMS with a limited number of users and data, its architecture has limited scalability and performance for high-intensity applications. Typically, scientific and multimedia applications require a much larger data throughput than the more traditional applications for a storage object server. Also, traditional DBMS applications encounter scalability and performance problems if a large number of DBMS clients has to be served. For the remainder of the paper, we assume that both client and server machines are workstations. Assumptions about their performance characteristics will be described later.

[Figure 2: Fibre Channel Arbitrated Loop connecting clients, servers, and network-attached disks.]

3.2 Problems

In this paper, we will compare the performance of the DynamO architecture with the traditional client-server architecture. The main purpose is to (a) evaluate the potential benefits of the DynamO approach and (b) understand what first-order factors influence the performance tradeoffs.

We use bottleneck analysis to discuss some of the major aspects of the performance comparison between the two system models: traditional client-server and DynamO. The arguments in this section are more qualitative and are meant to serve as a roadmap for the more detailed simulation results which follow in later sections. We are interested in how the bottleneck shifts in response to changes in the arrival rates, the service rates, and so on. Figure 3 illustrates some aspects of the queuing network model for the general system environment.

[Figure 3: High-level description of the queuing network model: client machines with service rates c_1, c_2, ..., c_i and a server machine with service rate s in the LAN environment.]

In a LAN system, many different applications may be running at the same time. Without loss of generality, we assume for simplicity that only one kind of application exists in the LAN. We assume a closed queuing system, in which there are multiple client machines and only one server machine. The average service rate of client machine i is denoted by c_i, while the average service rate of the server machine is denoted by s. c_i includes both the application processing time and the client processing time, because both of them are executed on client machines.

For an application, the average server service rate s could be larger than the average client service rate c_i. (In other words, the average server service time is shorter than the average client service time.) But the utilization level of the server machine can be much higher in the traditional client-server model due to the large number of client machines. In the DynamO architecture, c_i gets smaller while s gets larger, because a large fraction of the load is migrated to the client machines. Thus, the server machine's utilization level is reduced significantly, while the client machines' utilization level increases only by a small fraction. Therefore, the server bottleneck can be alleviated.

We assume that storage devices are not a bottleneck for data intensive applications, because we can use RAID systems to stripe the data over multiple disk devices. Thus, the server system bus is most likely to be the bottleneck in the client-server architecture. On the other hand, DynamO enables client machines to directly access the storage devices, and thus the aggregate system bandwidth is not limited by the server system buses.

4 The DynamO Architecture

A persistent object server can be congested in data/computation intensive applications. To eliminate this problem and achieve higher performance, we propose the DynamO architecture. The main idea of DynamO is to dynamically move object server functionality to the client machines for execution, and to allow the client machines direct access to network-attached storage. In the following, we present the architecture and principles of DynamO, and focus on data I/O, cache management, and interfaces to applications. The architecture of DynamO is depicted in Figure 4; we will describe the system from the bottom up. From an application point of view, the functionality of DynamO is very similar to a persistent storage system such as Exodus [Car86] and Mneme [Mos88]; therefore, we will focus on architecture issues in this section.
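The utilization argument above can be illustrated numerically. In the sketch below, all rates are hypothetical placeholders (the paper's actual numbers come from its simulation model); the point is only that shrinking the per-request server demand raises s and drives the server's offered load down:

```python
# Illustrative utilization comparison for the bottleneck argument.
# All rates and service times here are hypothetical assumptions.

def utilization(arrival_rate: float, service_rate: float) -> float:
    """Offered load rho = lambda / mu at a single service center."""
    return arrival_rate / service_rate

n_clients = 20
per_client_rate = 1.0          # requests/sec generated by each client

# Traditional client-server: each request needs 40 ms of server work,
# so the server's service rate s = 1/0.040 = 25 requests/sec.
s_trad = 1 / 0.040
rho_server_trad = utilization(n_clients * per_client_rate, s_trad)

# DynamO: most work migrates to the clients; the coordinator spends
# only ~4 ms per request, so s grows tenfold to 250 requests/sec.
s_dyn = 1 / 0.004
rho_server_dyn = utilization(n_clients * per_client_rate, s_dyn)

print(round(rho_server_trad, 2))  # 0.8  -> server near saturation
print(round(rho_server_dyn, 2))   # 0.08 -> server bottleneck alleviated
```

Meanwhile each client absorbs only its own share of the migrated work, so its utilization rises by a small fraction, matching the qualitative claim in the text.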
4.1 I/O Service

The lowest layer of DynamO is the I/O layer, providing data I/O from and to storage devices. The I/O layer maps files and data pages to storage locations on a storage device in a similar fashion to the I/O layer of a conventional persistent object system. To load data, DynamO employs a technique similar to that employed by NASD [Gib97]. When an application invokes an operation on an object, the object manager running on the client machine requests some storage objects from the server portion (the coordinator) on the server machine by passing the object identifiers. However, instead of fetching the data for the client, the server plays the role of a coordinator and performs the mapping of the object identifiers to the pages which contain the data blocks. This mapping is performed centrally on the server machine, so that consistency problems for catalog information are minimized. The coordinator returns to the client the relevant page and file identifiers as well as the disk identifier(s) of the disk(s) that contain the data pages. The NASD servers implement consistency control; in contrast, the DynamO coordinator does not. In DynamO, the clients cooperate in providing cache consistency. Further, the cache consistency protocol can be tailored to the objects.
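The coordinator's role in this exchange can be sketched as a central catalog lookup (a minimal sketch; the class and field names are our assumptions, since the paper gives no concrete data structures):

```python
# Sketch of the coordinator's object-to-page mapping (hypothetical names).

from dataclasses import dataclass

@dataclass(frozen=True)
class PageLocation:
    file_id: int
    page_id: int
    disk_id: int   # identifier of the network-attached disk holding the page

class Coordinator:
    """Central catalog: maps object identifiers to page locations."""

    def __init__(self) -> None:
        self._catalog: dict[int, list[PageLocation]] = {}

    def register(self, oid: int, pages: list[PageLocation]) -> None:
        self._catalog[oid] = pages

    def lookup(self, oid: int) -> list[PageLocation]:
        # The mapping is performed centrally, minimizing catalog
        # consistency problems; the client then contacts the listed
        # disk controllers directly, bypassing the server's data path.
        return self._catalog[oid]

coord = Coordinator()
coord.register(42, [PageLocation(file_id=7, page_id=130, disk_id=3)])
loc = coord.lookup(42)[0]
print((loc.file_id, loc.page_id, loc.disk_id))  # (7, 130, 3)
```

Only these small identifier tuples cross the server; the pages themselves travel from the network-attached disks straight into the client's cache.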
After obtaining the page, file and disk identifiers, the object manager on the client machine directly interacts with the portion of DynamO that runs on the disk controller(s) of the network-attached disks. This I/O model is based on the following facts: since the disk controller's CPU is mostly only lightly utilized, we can use it to delegate the execution of the two lowest levels of the persistent storage system, i.e., disk block allocation and free space management, as well as object page to disk block mapping (the page manager). The page manager is divided into two components: a strategy component and a storage allocation component. The allocation component uses the input from the strategy component to allocate disk blocks, while the strategy component decides how blocks are allocated for sets of pages. We assume that dedicated storage devices are available for the persistent object server, so that the allocation strategy is common for the entire disk (e.g., continuous block allocation, clustering, etc.). However, if this is not the case and the storage device is shared with other applications, such as a page allocator for a UNIX file system, we assume that a storage partition is allocated for the persistent object server, and that the allocation for this partition is managed via the DynamO page allocation strategy component. The I/O manager on the disk controller retrieves the relevant pages from the disk, and sends them to the requesting object manager. The pages are stored in the local cache on the client. The object manager, finally, performs the object-to-page mapping, and makes the requested storage objects available for processing. The I/O layers are illustrated in Figure 4.

4.2 Cache Management

Data retrieved from disk can reside in the client's cache. In order to avoid repeated retrievals from disk, we employ a distributed cache management scheme, and allow clients to retrieve data directly from other clients' caches. The cache management consists of two layers: the distributed cache management layer and the local cache management layer, as shown in Figure 4.
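The strategy/allocation split of the page manager can be sketched as follows (a minimal sketch under assumed interfaces; the block numbers and class names are illustrative, not from the paper). The strategy component decides how blocks should be chosen, here preferring one contiguous run per page set to serve sequential access, while the allocation component applies that choice to the free-block map:

```python
# Sketch of the page manager's strategy/allocation components
# (hypothetical interfaces; block numbers are illustrative).

class ContiguousStrategy:
    """Strategy component: decides HOW blocks are chosen for a page set.
    Here: prefer a single contiguous run (good for sequential access)."""

    def pick(self, free_blocks: list[int], n: int) -> list[int]:
        free = sorted(free_blocks)
        # find the first run of n consecutive free blocks
        for i in range(len(free) - n + 1):
            if free[i + n - 1] - free[i] == n - 1:
                return free[i:i + n]
        return free[:n]  # fall back to scattered blocks

class Allocator:
    """Allocation component: applies the strategy's choice to the free map."""

    def __init__(self, strategy, n_blocks: int) -> None:
        self.strategy = strategy
        self.free = set(range(n_blocks))

    def allocate_pages(self, n_pages: int) -> list[int]:
        chosen = self.strategy.pick(list(self.free), n_pages)
        self.free -= set(chosen)
        return chosen

alloc = Allocator(ContiguousStrategy(), n_blocks=16)
print(alloc.allocate_pages(4))  # [0, 1, 2, 3]
print(alloc.allocate_pages(4))  # [4, 5, 6, 7]
```

Swapping in a different strategy object (e.g., one that clusters related page sets) changes the placement policy without touching the allocation bookkeeping, which is the point of the split.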
Hierarchical Model

Cache management is a complicated issue, mainly due to consistency requirements. To address this issue, persistent objects in DynamO are organized in a hierarchical manner, as shown in Figure 5. At the bottom level, the granule is a page, i.e., each entity represents a page. At the level above, each entity represents a set of pages, with a varying number of pages per set. (In the current version, the system user who creates a data page decides which page set it goes into.) The granule at the next level up is a cluster, then a set of clusters, a set of sets of clusters, and so on. At the top level, there are from a few tens to a few hundreds of root entities, each root entity representing a large set of data pages.

The coordinator on the server maintains a set of ownership tables for clients and a non-owner table (as illustrated in Figure 6). The non-owner table lists the entities that are not currently owned by any client. We say "X owns entity Y" when the object manager X can grant read and write access to other clients for any data pages in Y, and the cache manager associated with X knows whether a data page in Y is cached and, if so, where. (Note that we do not assume all pages of Y are loaded in the cache of X.) The reason that we use the hierarchical model to manage cache coherence is that it minimizes the bookkeeping overhead per object manager. (Looking ahead, the hierarchical model also provides for more flexible ownership transfer.)

[Figure 4: DynamO layers. On the server machine: the coordinator (page lookup, ownership maintenance). On each client machine: applications, the object manager (object-to-pages mapping, ownership table maintenance) and the cache manager (distributed and local cache management, exchanging pages/segments and changes of ownership with peers). On the disk controllers: the I/O manager (disk block allocation strategies, page-to-block mapping, physical block I/O).]

When object manager A requests some data from the DynamO server, the coordinator checks whether there is a client who owns the entity being requested. If there is no such client, the server makes client A the owner of the entity, sends client A the handle for the entity, and makes the proper entry in the owner table. As the owner of an entity, object manager A has the right to grant read/write access to any data in the entity to other clients.

Assuming that object manager A is the owner of a cluster, it performs the I/O as described above, and downloads the cache manager code from the server if it does not have this code already. (This code includes the consistency protocol; different servers may use different protocols. Moreover, a server may use different consistency protocols on different data sets.) The object manager caches the data in its local cache, which is managed via the buffer management strategies specified in the cache manager code from the server.

On the other hand, if the data is owned by another object manager B, then the coordinator refers A to B. If object manager A learns from the coordinator that the data is already owned by another object manager B on another client machine, object manager A interacts with this object manager B to request access to the data, and negotiates access rights and data granularity.

[Figure 5: Hierarchical object organization: root-level entities decompose into sets of clusters, clusters of pages, and sets of pages.]

[Figure 6: Owner tables in the coordinator and in object manager A, listing each manager's owned entities, the non-owner entities, and per-entity read/write access grants.]

Ownership Transfer

When A accesses data currently owned by B, object managers A and B will also decide whether the ownership of the data should be transferred. If it is determined that an ownership transfer is desirable (e.g., object manager B thinks it will not need to access the data in the near future), then the question is what subset of the set owned by B should be transferred. To illustrate this process more clearly, let us assume that the entity in question is v0, and that v0 is a descendant of v. Object manager B owns v, and object manager A requests data in v0. B decomposes v into a set of children entities, one of which contains v0. B then checks whether it needs that child entity. If yes, then the child entity is decomposed, and this process is repeated until there is an entity that is a descendant of v, contains v0, and, it is estimated, will not be used by B in the near future. This entity is then the unit for ownership transfer. If such an entity does not exist, then no ownership transfer will occur. This is only one possible scheme, with no claims to being optimal. For example, the algorithm described above does not account for the total size of the entities owned by B, or the frequency of access for each entity, etc.

If object manager B wants to transfer the ownership of v1 to A, it will notify the coordinator. In turn, the coordinator starts an ownership transfer process involving A and B. This process is similar to a two-phase commit, because it is very important that A, B, and the coordinator all agree that the transfer of ownership occurs or does not. The result is that B releases entity v1 to A. Object manager B decomposes v into a set of disjoint entities v1, v2, ..., vk whose union is v. Then B removes v1 from this set. Now, instead of owning v, client B owns v2, v3, ..., vk. Also, B sends all information associated with v1 (e.g., which clients have cached data in v1, which clients have read/write locks on data in v1, etc.) to A. There is a difference between being the owner of some data and locking the data. If a client wants to lock some data, it has to ask the owner of that data to grant the lock. The owner of the data is not necessarily locking the data itself; it serves the lock to others, e.g., by scheduling the lock requests.

Object manager A, in turn, updates its ownership table by adding v1. Then, it prunes its ownership table to see whether it also owns v2, v3, ..., vk. If it owns these objects, A removes v1, v2, ..., vk from its ownership table, puts back v, and continues the pruning process until no more sub-objects can be removed. The goal of the pruning process is to keep a minimal list of objects in the ownership table. Then, A downloads the code for cache coherency from the server if it does not have this code; the server simply returns the code to client A. On the server side, after receiving the message of the ownership change, the coordinator updates its own coordinator table, which contains both client A's and B's ownership tables, using essentially the same procedure as A and B.

Distributed Cache Retrieval

After obtaining the proper access permission, object manager A can ask its distributed cache management layer for the data. A distributed cache management layer (DCML) maintains a list of the cached data of the entities owned by it. The DCML on client A will contact the DCML of the client machine, say B, which owns the data. (A can contact the coordinator, and the coordinator will tell it that B owns the data.) The DCML on client B knows which clients, if any, cache the data. If the data is not cached, then B will send back a handle with the data file, page, and disk identifier(s), and a disk I/O has to be performed by client A: the local cache management layer (LCML) on A asks the proper network-attached disks for the data. On the other hand, if the DCML on client B discovers that another client has cached the data (say client C), then it sends a request to the DCML on client C with the file id and page ids. The DCML on client C asks its LCML for the data, and the cached data is returned to client A.

After client A finishes processing the data, it can return the ownership of the entity to the server by sending a message of deletion of ownership to the coordinator and removing v1 from its ownership table. When the server receives a message of deletion of ownership, it removes the object from its owner table, puts it into the non-owner table, and prunes the non-owner table. This process is illustrated in Figure 7.

From the process described above, it is clear that when a client requests data from the coordinator, the coordinator will return the largest entity that contains the data and is not owned by any other client. In other words, if a client requests an object at level 5 (a 'low' level), and there is a level 2 object in the non-owner table which is an ancestor of the object requested, then the server will give the level 2 object to the client rather than decompose the level 2 object and give the client a smaller, lower-level object.

Recovery from a Client Machine Crash

We now discuss the scenario that occurs when a client crashes. If the crashed client is an owner of entities, then other surviving clients may request the data owned by the crashed client machine, and, of course, these requests cannot be served. We apply a crash recovery protocol to eliminate this kind of problem. When an object manager requests data from another object manager, it has a timeout mechanism. If the object manager does not respond within the timeout period, the requesting object manager will report to the coordinator that the other object manager could have crashed. Then, the coordinator will contact the specified object manager. If the object manager still does not respond, the coordinator will revoke all ownership of the specified object manager, and assign its entities to the non-owner table. Also, during normal operation, each object manager writes logs and periodically puts them on the network-attached storage, so that after a crash the coordinator can fetch the log (the location of the log on the disks is known to the coordinator) and roll back the changes to the last consistent state. This procedure is similar to that used in distributed server failure recovery. When a client machine recovers from a crash, the object manager on that client will contact the coordinator to retrieve the data that it needs.
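The decomposition step of the ownership transfer can be sketched as a walk down the entity hierarchy (a minimal sketch; the tree representation and the `needed_by_b` predicate, standing in for B's estimate of near-future access, are our assumptions for illustration):

```python
# Sketch of the ownership-transfer decomposition described above.
# `needed_by_b` is a hypothetical stand-in for B's usage estimate.

class Entity:
    def __init__(self, name, children=(), needed_by_b=False):
        self.name = name
        self.children = list(children)
        self.needed_by_b = needed_by_b

    def contains(self, target):
        return self is target or any(c.contains(target) for c in self.children)

def transfer_unit(v, v0):
    """First descendant of v (walking down toward v0) that contains v0
    and that B will not need soon; None means no transfer occurs."""
    node = v
    while node is not None:
        if not node.needed_by_b and node.contains(v0):
            return node  # this entity becomes the unit of transfer
        # otherwise decompose: descend into the child containing v0
        node = next((c for c in node.children if c.contains(v0)), None)
    return None

# B owns the root; it still needs cluster1, but not cluster2 or its pages.
v0 = Entity("pageset")
cluster1 = Entity("cluster1", needed_by_b=True)
cluster2 = Entity("cluster2", children=[v0])
v = Entity("root", children=[cluster1, cluster2], needed_by_b=True)

print(transfer_unit(v, v0).name)  # cluster2
```

As the text notes, this scheme makes no claim to optimality: a refined predicate could also weigh the total size of B's holdings or per-entity access frequencies.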
Figure 7: The process of data fetching. Server, Client A, Client B, and the network-attached disks exchange messages (a)-(f): (a) Client A asks the server for ownership of the requested data; (b) if no other client owns that data, the server makes A the owner; otherwise, it returns the id of Client B, which owns the data; (c) Client A asks B for the data; (d) if the data is cached, Client B asks the caching client to give A a copy of the cached object; otherwise, B gives A a handle and the process goes to (e); (e) Client A goes to the correct disks and asks for the relevant blocks; (f) the data is returned.

In this scheme, the work on the server machine is minimal. In fact, only the coordinator remains on the server, and the coordinator only tracks who is the owner of each entity. Therefore, the congested-server problem is eliminated. In addition, each object is owned by at most one owner; therefore, cache coherency can be maintained by the owner. This avoids complicated distributed algorithms for cache updating. Moreover, the hierarchical object model makes the partition of cache management easy and fair: if one client accesses a large amount of data, it could serve a large amount of data to other clients. Furthermore, if an object manager requests some data from the coordinator, it may become the owner of a very large root entity. However, if other clients request data in this root entity, this client machine can transfer the ownership of some entities (which are descendants of the root entity) to other clients. As a result, the service partition is based on dynamic usage rather than on some static a priori partition, such as used in xFS. Especially in an environment where machines can join and leave a network frequently (e.g., people bring their notebooks to the office, plug them into the network in the morning, and bring them home at night), a static partition may not work well. In addition, since there may be more than one server in a network, DynamO lets a client download the coherence control code from the server. A client can download different coherence control protocols from different servers for different objects, and all of them can work concurrently and correctly as long as there is only one owner and one coherence control protocol for a given object.
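The one-owner, one-protocol-per-object rule can be sketched as a small dispatch table on the client. The protocol classes and registry below are our illustration of the invariant, not DynamO's downloaded code.

```python
# Sketch of per-object coherence-protocol dispatch: a client may hold
# protocol code downloaded from different servers, and each object is
# bound to exactly one protocol.  Protocol names here are hypothetical.

class WriteInvalidate:
    name = "write-invalidate"

class WriteUpdate:
    name = "write-update"

class CoherenceRegistry:
    def __init__(self):
        self._by_object = {}              # object id -> protocol instance

    def bind(self, obj_id, protocol):
        # One protocol per object: binding a second, different protocol
        # would break the correctness condition quoted above.
        current = self._by_object.get(obj_id)
        if current is not None and current is not protocol:
            raise ValueError(f"{obj_id} already uses {current.name}")
        self._by_object[obj_id] = protocol

    def protocol_for(self, obj_id):
        return self._by_object[obj_id]

reg = CoherenceRegistry()
reg.bind("obj1", WriteInvalidate())   # e.g., code downloaded from server 1
reg.bind("obj2", WriteUpdate())       # e.g., code downloaded from server 2
print(reg.protocol_for("obj1").name)
```

Different objects can thus run different protocols concurrently, while any attempt to attach a second protocol to the same object is rejected.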
4.3 Application Interface

DynamO's application interface is similar to Exodus. DynamO offers variable-sized storage objects consisting of a storage identifier and an unstructured byte container. Storage objects can be clustered, and are changed in the scope of transactions (however, not described in this paper). For the application, the fact that storage server functionality is dynamically moved between machines is hidden. Since portability is an important issue and heterogeneous platforms do exist, the movement of code is based on the principles developed and explored in FALCON [She97].
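A minimal sketch of this interface, assuming nothing beyond what the paragraph states (an identifier plus an unstructured, variable-sized byte container); the class and method names are ours, not DynamO's API:

```python
# Sketch of a variable-sized storage object: a storage identifier plus
# an unstructured byte container that grows as it is written.
import itertools

_next_id = itertools.count(1)

class StorageObject:
    def __init__(self, data=b""):
        self.oid = next(_next_id)       # storage identifier
        self.data = bytearray(data)     # unstructured byte container

    def write(self, offset, payload):
        # Variable-sized: grow the container if the write extends past
        # the current end (new bytes are zero-filled up to the offset).
        end = offset + len(payload)
        if end > len(self.data):
            self.data.extend(b"\x00" * (end - len(self.data)))
        self.data[offset:end] = payload

    def read(self, offset, length):
        return bytes(self.data[offset:offset + length])

obj = StorageObject(b"hello")
obj.write(5, b" world")
print(obj.oid, obj.read(0, 11))
```

Clustering and transactional updates, which the paper mentions but does not describe, are omitted here.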
5 Benchmark Analysis and Comparison

In this section, we have chosen three typical applications as testbeds for a performance comparison of DynamO with the traditional client-server model. These three applications of the persistent storage system are data/computation-intensive persistent programming applications (IPP), database applications, and multimedia applications. The estimated characteristics of these three applications are illustrated in Table 1.

Application   Number of Clients   I/O                     CPU Cycles (in million)   Cache Hit Rate    Cache Coherence Cost
IPP           Low (1-10)          High (1K-10K)           High (1000)               Low (10%)         Low
Database      High (20-40)        Medium (10-100)         Medium (20-100)           High (40%)        High
Multimedia    Low (1-10)          Constant (50/s-200/s)   Constant (20/s)           Very low (<3%)    None

Table 1: Characteristics of Three Applications

In order to compare these two models fairly, we assume that there is one server in the client-server model and one coordinator/server in the DynamO model. Moreover, we assume that no data will be cached on the client local disks. In addition, in the traditional client-server architecture, we assume that the server always has to fetch the data from its disks through its PCI bus into its main memory if the data is not cached; the data then goes through the PCI bus again onto the network on the way to the client. On the other hand, when a DynamO client requests data that is not cached, the client sends a request to the network-attached disks; in turn, the disk controller fetches the data into its buffer and sends it to the network, and the data flows through the PCI bus on the client machine into the client's main memory. In this comparison, we assume all machines (the server machine and the client machines) are the same. Each machine is equipped with a 100 MIPS CPU and one PCI bus with a sustained bandwidth of 80 MB/sec. In addition, we do not consider the disks to be the bottleneck.
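The two data paths differ in how many times each byte crosses the server's PCI bus: twice in the client-server model (disk into server memory, then server memory onto the network) and zero times in DynamO. A back-of-the-envelope sketch using the 80 MB/sec figure above; the 8 MB transfer size is chosen only for illustration:

```python
# Back-of-the-envelope sketch of server PCI-bus occupancy per request.
PCI_BW = 80e6  # sustained PCI bandwidth, bytes/sec (from the text)

def server_pci_time(bytes_requested, model):
    # client-server: disk -> server memory, then server memory -> network,
    # so every byte crosses the server's PCI bus twice.  In DynamO the
    # data bypasses the server entirely.
    crossings = {"client-server": 2, "dynamo": 0}[model]
    return crossings * bytes_requested / PCI_BW

mb8 = 8e6  # an 8 MB transfer (illustrative assumption)
print(server_pci_time(mb8, "client-server"))  # seconds of server bus time
print(server_pci_time(mb8, "dynamo"))         # server bus is untouched
```

The same request also consumes the client's PCI bus once in both models, which is why only the server-side crossings distinguish them.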
5.1 Data/Computation Intensive Persistent Programming

The path length of a typical IPP application consists of a sequence of data fetching and data processing phases, as shown in Figure 8. A data fetching phase is the time interval during which the client fetches the relevant data into its main memory, while the data processing phase is the time interval during which the client processes the fetched data. Then the procedure repeats. We are interested in the average response time, i.e., the average time it takes a client to fetch data and process it.

Figure 8: Measurement of Average Response Time. [A timeline of alternating data fetching and data processing phases; each response time spans one fetching phase and the following processing phase.]

In IPP applications, the number of I/Os and the CPU consumption are very high. Although the majority of the CPU load is on the client, the server processor still spends a significant time per client job (approximately 2 seconds for each client job on average). Although the I/O request rate is very high, on average the PCI bus takes less than half a second.

Figure 9(a) shows the effect of an increase in the number of clients. The average number of I/Os per client is 3,000, and the average number of CPU instructions is 1000 million; in the client-server model, 20% of these are executed on the server and the remainder is executed on the client. Since the number of sessions increases, the overall CPU load increases, and once the server bus becomes congested, the average response time in the client-server model increases significantly. From this figure, it is clear that the server's CPU reaches saturation around 4 clients. However, in DynamO, the server/coordinator only executes 1 million instructions, since it only performs the role of coordinator, while the client executes 1050 million instructions.

Figure 9(b) and (c) show the performance as a function of CPU consumption and I/O requests, respectively. Since we set the number of clients to be 6 in these two cases, the server's CPU is always saturated.
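The saturation point in Figure 9(a) follows from the stated workload parameters. A crude closed-system estimate (each client alternates between demanding the shared server CPU and its own CPU; queueing delay is ignored, so this is a sketch, not the paper's simulation model):

```python
# Rough estimate of server-CPU utilization in the client-server model:
# 1000 M instructions per job, 20% on the server, 100 MIPS CPUs.
SERVER_S = 0.20 * 1000e6 / 100e6   # 2.0 s of server CPU per job
CLIENT_S = 0.80 * 1000e6 / 100e6   # 8.0 s of client CPU per job

def server_utilization(n_clients):
    # Each client completes a job roughly every SERVER_S + CLIENT_S
    # seconds, demanding SERVER_S seconds of the one shared server CPU.
    demand = n_clients * SERVER_S / (SERVER_S + CLIENT_S)
    return min(demand, 1.0)

for n in (1, 4, 6):
    print(n, server_utilization(n))
```

Utilization approaches 1 at around 4-5 clients, consistent with the saturation around 4 clients observed in Figure 9(a); once saturated, every additional client queues behind the server CPU.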
Figure 9: Performance of Data/Computation Intensive Persistent Programming. [(a) Average response time (sec.) vs. number of clients; (b) average response time with 6 clients vs. average CPU cycles in a processing phase (million); (c) average response time with 6 clients vs. average number of I/Os in a fetching phase. Each panel compares DynamO with the client-server model.]

Thus, when the CPU workload is increased, the average response time of the client-server model increases at a much faster pace than that of DynamO, because the bottleneck is the CPU rather than I/O. On the other hand, as the number of I/Os increases (as in Figure 9(c)), the performance of the client-server model degrades at a similar pace as that of DynamO, because the server machine's PCI bus is not the bottleneck.

5.2 Database Applications

In this environment, there are many clients, on the order of dozens. The cache hit rate can be very high in DynamO, since there are more machines that can serve as cache managers (i.e., each client machine can cache data for other clients). We assume that in the client-server model the cache hit ratio is 40%, while the cache hit rate is about 50% for DynamO. Moreover, since there is significant work done on the server, e.g., cache management, transaction management, I/O service, etc., the server processor can become saturated. In turn, performance is impacted when the number of clients increases (scalability) or the workload increases. This is illustrated in Figure 10.

Figure 10(a) shows the relative scalability of the two models. We choose the average number of CPU cycles in a session to be 40 million. Among these cycles, 35% are executed on the server's processor while the rest are executed on the clients in the client-server model. In DynamO, 1 million instructions per session are executed on the server while the client executes 42 million instructions per session.

In this type of environment, the server CPU is the bottleneck. As a result, Figure 10(b) shows that when the average number of instructions per session increases, the performance of the client-server model is impacted severely.
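The per-session CPU split above works out as follows on the assumed 100 MIPS machines. This is simple arithmetic on the paper's figures; note the DynamO total (1 M + 42 M) slightly exceeds the client-server total (40 M), the difference being the extra cache coherence work:

```python
# Per-session CPU demand for the database benchmark (100 MIPS CPUs):
# client-server: 40 M instructions, 35% on the server;
# DynamO: 1 M on the coordinator, 42 M on the client.
MIPS = 100e6

def session_seconds(server_instr, client_instr):
    return server_instr / MIPS, client_instr / MIPS

cs_server, cs_client = session_seconds(0.35 * 40e6, 0.65 * 40e6)
dy_server, dy_client = session_seconds(1e6, 42e6)
print(cs_server, dy_server)   # shared-server demand per session
```

The shared resource sees 0.14 s of demand per session in the client-server model but only 0.01 s in DynamO, which is why the server CPU saturates far later in DynamO as clients are added.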
On the other hand, Figure 10(c) shows that as the number of I/Os per session increases, the average response time of an application in the client-server model increases at a similar pace as the increase in DynamO.

Figure 10: Database Applications. [(a) Average response time (sec.) vs. number of clients; (b) average response time with 30 clients vs. average CPU cycles in a data processing phase (million); (c) average response time with 30 clients vs. average number of I/Os in a data fetching phase. Each panel compares DynamO with the client-server model.]

5.3 Multimedia Applications

Multimedia applications are a special class of application. They require a constant stream of data. For example, MPEG-1 video requires 1.5 Mbits/sec while MPEG-2 video requires 4-6 Mbits/sec. The data decoding is done on client machines, and there is no need for transaction management and coherence control because there is no write-back. Therefore, multimedia is an I/O-intensive benchmark.

Multimedia applications are real-time applications, which require that a request be served within a specified time period. Therefore, instead of showing the average response time of an application, we show the number of multimedia streams that can be served by the client-server model and DynamO in Figure 11. Since multimedia applications are data intensive and the network-attached storage has a much higher aggregate bandwidth than that of the client-server model, DynamO can accommodate many more streams than the client-server model; e.g., DynamO can serve around 250 MPEG-2 streams compared to about 50 streams in the client-server model.
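The quoted stream counts are consistent with simple bandwidth division. The effective bandwidth figures below (roughly 25 MByte/s through a single server versus 125 MByte/s aggregate across the network-attached disks) are back-derived assumptions matching the reported counts, not numbers stated in the paper; the 4 Mbit/s stream rate is the low end of the MPEG-2 range quoted above:

```python
# Sketch: maximum concurrent streams = available bandwidth / stream rate.
MPEG2_BPS = 4e6   # MPEG-2 low end from the text, bits/sec

def max_streams(bandwidth_bps, stream_bps=MPEG2_BPS):
    return int(bandwidth_bps // stream_bps)

# Assumed effective bandwidths (back-derived from the reported counts):
single_server_bps = 25e6 * 8    # ~25 MByte/s funneled through one server
aggregate_bps = 125e6 * 8       # ~125 MByte/s aggregate over the disks
print(max_streams(single_server_bps), max_streams(aggregate_bps))
```

Because every stream in the client-server model must cross the single server's bus, its ceiling is set by one machine's effective throughput, while DynamO's ceiling scales with the number of network-attached disks.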
Figure 11: Multimedia Applications. [Maximum number of multimedia clients vs. average bit rate of each stream (MBit/s), comparing DynamO with the client-server model.]

6 Conclusion and Future Work

With new developments in computer hardware such as improved processor speed and network bandwidth, as well as network-attached storage devices, there is also a need to reconsider software system architectures. In this paper, we introduced DynamO (Dynamic Objects with Persistent Storage), an alternative model to the client-server architecture for computation/data-intensive applications that offers significantly improved scalability and performance. Instead of managing a file system buffer on the server machine, DynamO downloads most server functionality to clients, and also transfers data directly from network-attached disks to client machines, thus eliminating the server bottleneck.

We studied the performance of the DynamO architecture. With temporal variation of application path length, DynamO has better adaptability because it dynamically changes the "client/server" compute power ratio automatically according to the workload. Moreover, an added client machine can not only share the "server" workload, but the "client" workload as well. Therefore, better scalability can be achieved. Although DynamO has extra cache coherence overhead, the percentage overhead is low. We believe that DynamO provides a more cost-effective, scalable, and skew-insensitive solution than the traditional client-server architecture.

Implementation of DynamO is currently underway at UCLA's Data Mining Lab. DynamO is implemented in C++ on Sun UltraSPARCs using Solaris. We have installed Fibre Channel devices, including 4 Seagate Fibre Channel disk drives and Fibre Channel adaptors for the Sun workstations, which we use as the hardware basis for DynamO.

References

[And96] T. E. Anderson, M. D. Dahlin, J. M. Neefe, D. A. Patterson, and others. Serverless Network File Systems. ACM Transactions on Computer Systems, vol. 14, no. 1, February 1996.
[Arp97] A. C. Arpaci-Dusseau, R. H. Arpaci-Dusseau, D. E. Culler, J. M. Hellerstein, and D. A. Patterson. High-Performance Sorting on Networks of Workstations. Proceedings ACM SIGMOD International Conference on Management of Data, May 1997.
[Blo96] R. Bloor. The Coming of the Thin Client. Database and Network Journal, vol. 26, no. 4, pp. 2-4, August 1996.
[Car86] M. J. Carey, D. J. DeWitt, J. E. Richardson, and E. J. Shekita. Object and File Management in the EXODUS Extensible Database System. Twelfth International Conference on Very Large Data Bases, 1986.
[Dias89] D. M. Dias, R. I. Balakrishna, J. T. Robinson, and P. S. Yu. Integrated Concurrency-Coherency Controls for Multisystem Data Sharing. IEEE Transactions on Software Engineering, vol. 15, no. 4, April 1989.
[Fab97] F. Fabbrocino, E. C. Shek, and R. R. Muntz. The Design and Implementation of the Conquest Query Execution Environment. UCLA CSD Technical Report #970029, July 1997.
[FCA] Fibre Channel Association.
[Gib97] G. A. Gibson, D. F. Nagle, K. Amiri, and F. W. Chang. File Server Scaling with Network-attached Secure Disks. Performance Evaluation Review, vol. 25, no. 1, June 1997.
[Gro96] E. Grochowski and R. F. Hoyt. Future Trends in Hard Disk Drives. IEEE Transactions on Magnetics, vol. 32, no. 3, pt. 2, May 1996.
[Hei88] P. Heidelberger and M. S. Lakshmi. A Performance Comparison of Multimicro and Mainframe Database Architectures. IEEE Transactions on Software Engineering, vol. 14, no. 4, April 1988.
[Hei95] J. Heidemann and G. Popek. Performance of Cache Coherence in Stackable Filing. Proceedings of the Fifteenth ACM Symposium on Operating Systems Principles, December 1995.
[Mos88] J. Eliot B. Moss and S. Sinofsky. Managing Persistent Data with Mneme: Designing a Reliable Shared Object Interface. Advances in Object-Oriented Database Systems: Second International Workshop on OODBS, Bad Münster, Germany, 1988.
[Nit96] S. Nittel and K. R. Dittrich. A Storage Server for the Efficient Support of Complex Objects. Proceedings POS-7 International Workshop on Persistent Object Systems, Cape May, June 1996.
[Pat88] D. A. Patterson, G. A. Gibson, and R. H. Katz. The Case for Redundant Arrays of Inexpensive Disks (RAID). Proceedings ACM SIGMOD International Conference on Management of Data, May 1988.
[She97] E. Shek, R. R. Muntz, and L. Fillion. The Design of the FALCON Framework for Application Level Communication Optimization. Technical Report, Computer Science Department, UCLA, November 1996.
[Sun97] JavaBeans.
[Tan95] T. Tannenbaum and M. Litzkow. The Condor Distributed Processing System. Dr. Dobb's Journal, February 1995.
Implementing a Digital Video Archive Based on XenData Software
Based on XenData Software The Video Edition of XenData Archive Series software manages a digital tape library on a Windows Server 2003 platform to create a digital video archive that is ideal for the demanding
OPTIMIZING SERVER VIRTUALIZATION
OPTIMIZING SERVER VIRTUALIZATION HP MULTI-PORT SERVER ADAPTERS BASED ON INTEL ETHERNET TECHNOLOGY As enterprise-class server infrastructures adopt virtualization to improve total cost of ownership (TCO)
Question: 3 When using Application Intelligence, Server Time may be defined as.
1 Network General - 1T6-521 Application Performance Analysis and Troubleshooting Question: 1 One component in an application turn is. A. Server response time B. Network process time C. Application response
Diablo and VMware TM powering SQL Server TM in Virtual SAN TM. A Diablo Technologies Whitepaper. May 2015
A Diablo Technologies Whitepaper Diablo and VMware TM powering SQL Server TM in Virtual SAN TM May 2015 Ricky Trigalo, Director for Virtualization Solutions Architecture, Diablo Technologies Daniel Beveridge,
Outline: Operating Systems
Outline: Operating Systems What is an OS OS Functions Multitasking Virtual Memory File Systems Window systems PC Operating System Wars: Windows vs. Linux 1 Operating System provides a way to boot (start)
Clustering Windows File Servers for Enterprise Scale and High Availability
Enabling the Always-On Enterprise Clustering Windows File Servers for Enterprise Scale and High Availability By Andrew Melmed Director of Enterprise Solutions, Sanbolic, Inc. April 2012 Introduction Microsoft
RAID technology and IBM TotalStorage NAS products
IBM TotalStorage Network Attached Storage October 2001 RAID technology and IBM TotalStorage NAS products By Janet Anglin and Chris Durham Storage Networking Architecture, SSG Page No.1 Contents 2 RAID
WHITE PAPER Guide to 50% Faster VMs No Hardware Required
WHITE PAPER Guide to 50% Faster VMs No Hardware Required Think Faster. Visit us at Condusiv.com GUIDE TO 50% FASTER VMS NO HARDWARE REQUIRED 2 Executive Summary As much as everyone has bought into the
RAID. RAID 0 No redundancy ( AID?) Just stripe data over multiple disks But it does improve performance. Chapter 6 Storage and Other I/O Topics 29
RAID Redundant Array of Inexpensive (Independent) Disks Use multiple smaller disks (c.f. one large disk) Parallelism improves performance Plus extra disk(s) for redundant data storage Provides fault tolerant
Oracle Big Data SQL Technical Update
Oracle Big Data SQL Technical Update Jean-Pierre Dijcks Oracle Redwood City, CA, USA Keywords: Big Data, Hadoop, NoSQL Databases, Relational Databases, SQL, Security, Performance Introduction This technical
Java DB Performance. Olav Sandstå Sun Microsystems, Trondheim, Norway Submission ID: 860
Java DB Performance Olav Sandstå Sun Microsystems, Trondheim, Norway Submission ID: 860 AGENDA > Java DB introduction > Configuring Java DB for performance > Programming tips > Understanding Java DB performance
RUNNING vtvax FOR WINDOWS
RUNNING vtvax FOR WINDOWS IN A AVT / Vere Technologies TECHNICAL NOTE AVT/Vere Technical Note: Running vtvax for Windows in a Virtual Machine Environment Document Revision 1.1 (September, 2015) 2015 Vere
OVERVIEW. CEP Cluster Server is Ideal For: First-time users who want to make applications highly available
Phone: (603)883-7979 [email protected] Cepoint Cluster Server CEP Cluster Server turnkey system. ENTERPRISE HIGH AVAILABILITY, High performance and very reliable Super Computing Solution for heterogeneous
Binary search tree with SIMD bandwidth optimization using SSE
Binary search tree with SIMD bandwidth optimization using SSE Bowen Zhang, Xinwei Li 1.ABSTRACT In-memory tree structured index search is a fundamental database operation. Modern processors provide tremendous
Veeam Best Practices with Exablox
Veeam Best Practices with Exablox Overview Exablox has worked closely with the team at Veeam to provide the best recommendations when using the the Veeam Backup & Replication software with OneBlox appliances.
DIABLO TECHNOLOGIES MEMORY CHANNEL STORAGE AND VMWARE VIRTUAL SAN : VDI ACCELERATION
DIABLO TECHNOLOGIES MEMORY CHANNEL STORAGE AND VMWARE VIRTUAL SAN : VDI ACCELERATION A DIABLO WHITE PAPER AUGUST 2014 Ricky Trigalo Director of Business Development Virtualization, Diablo Technologies
EVOLUTION OF NETWORKED STORAGE
EVOLUTION OF NETWORKED STORAGE Sonika Jindal 1, Richa Jindal 2, Rajni 3 1 Lecturer, Deptt of CSE, Shaheed Bhagat Singh College of Engg & Technology, Ferozepur. [email protected] 2 Lecturer, Deptt
