Traffic Characteristics of the T1 NSFNET Backbone

Kimberly C. Claffy and George C. Polyzos
kc@cs.ucsd.edu, polyzos@cs.ucsd.edu
Department of Computer Science and Engineering
University of California, San Diego
La Jolla, CA 92093-0114

Hans-Werner Braun
hwb@sdsc.edu
Applied Network Research Group
San Diego Supercomputer Center
San Diego, CA 92186-9784

Abstract

This paper presents the results of a measurement study of the T1 NSFNET backbone. We first discuss the measurement environment and the approach to data collection. We then present measurement results for: long-term growth in traffic volume, including attribution to domains and protocols; the trend in average packet size on the network, over both long and medium term intervals; the most popular sources, destinations, and site pairs; traffic locality; international distribution of traffic; mean utilization statistics, both of the overall backbone and of specific links of interest; and delay statistics.

1 Introduction

The last general overview of measured behavior of the national research and education backbone infrastructure was the landmark study by Kleinrock and Naylor [13] (also in [12]), which presented measurements on the 1973 ARPANET. Since that study, the infrastructure has undergone significant change. The U.S. Department of Defense initially established the ARPANET to enable the network research community to investigate packet-switching technologies. Recognizing the potential of these technologies for the non-defense community as well, NSF later built the NSFNET follow-up, with an explicit mission to foster productivity within the research and education environment. The expanded visibility of operational computer networking has attracted other agencies and civilian organizations, including on an international scale, to further augment the network with their own network resources.

What is now a pervasive global infrastructure has received little attention in terms of empirical analysis and modeling. As a result, we lack a thorough understanding of the deployed network, and thus the capability to predict, much less secure, its behavior.
Recent studies on isolated aspects of the NSFNET investigated the existence of packet trains on the NSFNET backbone [11], and evaluated specific routing approaches for use on the backbone [9]. Feldmeier [10] studied the estimated performance of a gateway routing table cache. Caceres et al. [3] and Danzig et al. [8] profiled the characteristics of individual application conversations. Wakeman et al. [15] and Asaba et al. [1] present analyses of trans-Atlantic and trans-Pacific traffic, respectively.

Our characterization is based on available data collected since the establishment of the T1 NSFNET backbone in July 1988. We have also selected a specific month, May 1992, to investigate some traffic patterns in more detail. At that time the NSFNET was transitioning from the T1 to the new T3 backbone, and the lower traffic volumes of recent months reflect the gradual migration to the T3 backbone. In both cases we are using raw data sets collected by Merit Network, Inc. according to the procedures described in Section 3. Because the T3 backbone initially did not fully support data collection for all the statistics objects we present in this paper, we restrict ourselves here to the T1 backbone.

It is interesting to note that twenty years after [13], we can say less about certain performance metrics of the current networks than was possible in 1973. Partially responsible is the fact that the ARPANET was an experimental network with facilities specifically designed for extensive data collection to support network analysis. Today's environment is markedly different. While in many areas there is far more flexibility in assessing performance of the current infrastructure, the NSFNET project has targeted its instrumentation efforts towards operational rather than research requirements.

* This research is supported by a grant of the National Science Foundation (NCR-9119473), and a joint study agreement with International Business Machines, Inc.
In the following section we present a brief description of the environment and the instrumentation for the data collection process. The measured data include: long-term growth in traffic volume, including attribution to domains and protocols; the trend in average packet size on the network, over both long and medium term intervals; delay statistics; the most popular sources, destinations, and site pairs; traffic locality; international distribution of traffic; mean utilization statistics, both of the overall backbone as well as of specific links of interest; and an assessment of downtime for the last few years. The data indicate not only a change in the volume, but also in the composition, or cross-section, of traffic, over both long and short time horizons.

2 Current network infrastructure

NSFNET, the National Science Foundation Network, is a general purpose packet-switching network supporting access to scientific computing resources and data. Evolved from a 56 kbps six-node network in the mid-1980s to today's 45 Mbps network, the current NSFNET includes three different levels: the transcontinental backbone connecting the NSF-funded supercomputer centers and mid-level networks, the mid-level networks themselves, and the campus networks. The hierarchical structure includes a large fraction of the research and educational community, and even
extends into a global arena via international connections. Figure 1 depicts a rough model of this hierarchy. ESNET, MILNET, NSI, NSFNET, and TWB correspond to national backbones of DOE, DOD, NASA, NSF, and DARPA, respectively. Chinoy and Braun [5] describe in detail the underlying topology of the NSFNET backbone in its various stages.

[Figure 1: Hierarchical Model of NSFNET Architecture. National backbone networks (ESNET, MILNET, NSI, NSFNET, TWB); mid-level networks (e.g., BARRNET, SURANET); individual campus networks.]

The National Science Foundation originally granted Merit Network, Inc. an award to engineer, build, and manage the T1 backbone network between thirteen NSF sponsored mid-level network sites. The NSFNET backbone is also connected to other networks of US federal agencies via two Federal Interexchange points (FIX-E and FIX-W) at the East and West coasts. Merit completed the implementation of the original backbone in June 1988, and about one year later redesigned it to provide multiple paths to each of the 13 nodes. In 1990 the NSF augmented the award to Merit to add one additional T1 node in Atlanta, GA, and subsequently to engineer and implement a T3 network to the 14 T1 sites in addition to two new sites in Boston and Chicago. Some initial implementation of the T3 network was in place by the end of 1990, and the upgrade, including rerouting of traffic, was completed in 1992. Figure 2 shows the logical topology of the final T1 backbone.

[Figure 2: 1992 T1 Backbone Logical Topology. Nodes: Seattle, WA; Palo Alto, CA; San Diego, CA; Salt Lake City, UT; Boulder, CO; Lincoln; Houston, TX; Urbana-Champaign, IL; Ann Arbor, MI; Pittsburgh; Ithaca, NY; Princeton, NJ; College Park, MD; Boston, MA; Atlanta, GA.]

The next layer of branching below the NSFNET backbone includes the mid-level networks¹ (see Table 2). Mid-level networks connect, sometimes indirectly, to the NSFNET and/or other federal agency backbones, and provide connectivity among sites in the research and academic environment. Federal mission agencies also employ the services of mid-level networks for connectivity to their university-based researchers. The NSFNET backbone allows for peer network connectivity to networks such as national backbones, e.g., Advanced Network Services, Alternet, Performance Systems International, and SprintLink. The backbone also supports international connections to national backbones of foreign countries. Local sites attach as clients to mid-level networks. They include universities, research institutions, federal installations, and private commercial organizations.

Table 2: United States NSFNET Mid-level Networks
BARRNET (Bay Area Regional Research Network, California)
CERFnet (California Education and Research Federation Network)
CICNet (Committee on Institutional Cooperation Network, Mid-West)
CONCERT (Communications for North Carolina Education, Research, and Technology Network)
COSupernet (Colorado Supernetwork, Colorado)
INet (Indiana Network, Indiana)
JvNCnet (John von Neumann Center Network, Northeast)
Los Nettos (Southern California)
MichNet (Michigan Network, Merit-statewide, Michigan)
MIDnet (Midwestern States Network, Mid-West)
MRnet (Minnesota Regional Network, Minnesota)
NEARnet (New England Academic and Research Network, North-East)
netIllinois (Illinois)
NevadaNet (Nevada)
NorthWestNet (Northwestern States Network)
NYSERnet (New York State Education and Research Network)
OARnet (Ohio Academic Research Network)
PREPnet (Pennsylvania Research and Economic Partnership)
PSCNET (Pittsburgh Supercomputing Center Network, Mid-West)
SDSCnet (San Diego Supercomputer Center Network)
SESQUINET (Texas Sesquicentennial Network)
SURAnet (Southeastern Universities Research Association Network, South-East)
THEnet (Texas Higher Education Network)
VERnet (Virginia Education and Research Network)
Westnet (Southwestern States Network)
WiscNet (Wisconsin)
WVNET (West Virginia Network for Educational Telecomputing)

¹ Mid-level networks have also been called "regionals," reflecting their geographical span, but we will use the term "mid-level" to reflect its hierarchical position in the architecture.

2.1 Parameters of the T1 backbone

We review a few of the network parameters that affect traffic flow in the NSFNET backbone. All interfaces to each node on the T1 backbone are of "T1" speed, 1.544 Mbits/second. To access external client networks, each T1 NSFNET backbone node uses Ethernet interfaces, limiting the packet size into a Nodal Switching Subsystem (NSS) to 1500 bytes. Packets typically contain a 20-byte IP header and, in the case of TCP, another 20 bytes for the transport protocol.

Packets travel through the network individually and are passed from node to node according to an adaptive routing procedure based on the standard IS-IS protocol. Reassembly of fragmented IP packets occurs at the destination host before packets are forwarded to the application. Each processor on the T1 backbone can buffer up to fifty packets on the output queue of an interface. This buffering contributes to the latency of the delivery of packets to the destination. A T1 NSFNET backbone processor can switch in excess of 1000 packets per second, making the per-packet processing overhead less than 1 millisecond.

3 Data currently collected

In this section we describe the means by which measurements are performed on the T1 NSFNET backbone. Data collection, for the purposes of monitoring, measuring, and analyzing the performance characteristics of the network, is a fundamental requirement for the operation and management of any large-scale network such as the NSFNET backbone.² The T1 hardware is PC/RT-based: the switching node architecture consists of multiple processors dedicated to separate functions, connected by a common token ring. One processor is dedicated to statistics collection. This processor eventually had to revert to sampling when the traffic volume per node surpassed its processing capability. We describe this situation further in Section 5.

The principal sources of information about the T1 network are the routine collection of three classes of network statistics: internodal delays; interface statistics, which rely on the Simple Network Management Protocol (SNMP) [4]; and packet categorization, performed with the NSFNET Network Statistics software package, NNStat [2].
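The parameters in Section 2.1 imply hard bounds on per-link packet rates. The following back-of-envelope sketch (constants taken from the text above; the helper function is ours, not part of any NSFNET tooling) shows why a switching capacity in excess of 1000 packets per second is comfortable for full-size packets but not for minimum-size ones:

```python
# Packet-rate bounds implied by the T1 parameters quoted in Section 2.1.
# Illustrative arithmetic only; the constants come from the text.

T1_BPS = 1_544_000   # T1 line rate, bits per second
MTU_BYTES = 1500     # Ethernet-limited packet size into an NSS
HEADER_BYTES = 40    # 20-byte IP header plus 20-byte TCP header

def max_packet_rate(packet_bytes, line_bps=T1_BPS):
    """Packets per second a saturated line carries at a fixed packet size."""
    return line_bps / (packet_bytes * 8)

full_size = max_packet_rate(MTU_BYTES)       # about 129 packets/second
header_only = max_packet_rate(HEADER_BYTES)  # about 4825 packets/second
```

A line full of 1500-byte packets offers an NSS only about 129 packets per second, well under its switching capacity, whereas a line full of 40-byte packets would offer roughly 4825 packets per second, several times the quoted rate.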
3.1 Internodal latency

NSFNET uses the ICMP Echo functionality to record the round-trip times (RTT) between all pairs of backbone nodes. This measurement is performed once every 15 minutes between the exterior interface addresses of the backbone nodes.³ A backbone node temporarily stores the delay data, transferring it daily to a NOC data collector. From these fifteen minute samples Merit publishes quartile statistics for the monthly internodal delay.

² The data for the statistics presented in this report were gathered by separate processors in the NSFNET NSS equipment which aggregate information using the NNStat [2] software package. This compilation, greatly assisted by Merit Network, Inc. and other installations, reflects an effort to capture as much and as accurate data as possible. However, no guarantee is given for the completeness of the data or its accuracy.

³ Halving this value yields an approximate one-way delay for the 14 by 14 delay matrix. On a relatively uncongested backbone with stable and symmetric routing, such a method of achieving one-way delays is justified. See [7] for more details on the failure of round trip delays to adequately characterize unidirectional latencies across a wide-area network.
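The reduction described in footnote 3 can be sketched as follows; the function names and sample values are ours, chosen for illustration, and are not Merit data or tooling:

```python
# Approximate one-way delays by halving sampled round-trip times
# (footnote 3), then summarize a set of 15-minute samples by quartiles,
# as in Merit's published internodal delay statistics.

def one_way_ms(rtt_samples_ms):
    """Halve each RTT to approximate one-way delay (assumes symmetric routing)."""
    return [rtt / 2.0 for rtt in rtt_samples_ms]

def quartiles(samples):
    """Return (Q1, median, Q3) using linear interpolation between ranks."""
    s = sorted(samples)
    def q(p):
        i = p * (len(s) - 1)
        lo = int(i)
        hi = min(lo + 1, len(s) - 1)
        return s[lo] + (s[hi] - s[lo]) * (i - lo)
    return q(0.25), q(0.5), q(0.75)

rtts_ms = [62.0, 58.0, 71.0, 65.0, 90.0, 60.0, 64.0]  # hypothetical RTT samples
q1, med, q3 = quartiles(one_way_ms(rtts_ms))
```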
Table 3.3: Packet categorization objects collected per node

Relative to exterior nodal interface:
- source-destination matrix by network number (packets/bytes)
- TCP/UDP port distribution, well-known subset (packets/bytes)
- protocol distribution (e.g., TCP, UDP, ICMP) (packets/bytes)
- packet-length histogram at a 50-byte granularity
- packet volume going out of backbone node

Relative to entire node:
- per second histogram of packet arrival rates
- NSS (intra-NSFNET) transit traffic volume

3.2 Interface statistics

To maintain data regarding packets and bytes transmitted and received, errors, delay times, and downtimes, all NSFNET backbone nodes record statistics about the packets which traverse each of their interfaces. Each backbone node, also called a Nodal Switching Subsystem (NSS), runs SNMP servers which respond to queries regarding standard SNMP Management Information Base (MIB) variables. Centralized collection of the data occurs for each backbone interface on each NSS once every 15 minutes. The counters are cleared via only two mechanisms: explicitly, when the machine is restarted; and implicitly, when the 32-bit counters overrun. Cumulative counters, retrieved using SNMP, include those for packets, bytes, and errors transmitted in and out of each interface.⁴

Several in-house utilities allow one to derive multiple statistics from these data, including peak and average link utilizations and the peak fifteen-minute interval of each day.⁵

3.3 Packet categorization

To categorize IP packets entering the NSFNET backbone based on information contained in packet headers, each NSS has a dedicated processor that examines the header of every packet traversing the intra-NSS token ring. NNStat [2] builds statistical objects based on the collected information.

The central agent running the collection software periodically calls out to each of the backbone nodes, logs their statistical objects, and resets the objects. The collection host is an IBM RS/6000, and it collected as much as 50 megabytes of raw statistics daily. Table 3.3 lists the NNStat objects collected operationally on the T1 backbone.
After collecting the data, a specialized software package compiles a monthly matrix of network-number-to-network-number traffic counts, which forms the basis for the publicly available files characterizing traffic across the NSFNET backbone in terms of both individual network numbers and countries.

In all cases, Merit timestamps the collected statistics for the backbone according to Universal Time. The graphs presented in this paper also follow this convention. The tick marks on the x-axis for weekly time series graphs correspond to 0:00 Universal Time for the indicated day of the week.⁶

⁴ Error conditions on the interface include HDLC checksum errors, invalid packet length, and queue overflows resulting in discards.

⁵ Daily and monthly traffic summaries are available in reports via anonymous FTP from host nis.nsf.net.

⁶ As a reference, 0:00 UT is 19:00 EST.
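Deriving utilization from the Section 3.2 interface statistics means differencing cumulative 32-bit octet counters across 15-minute polls, tolerating the implicit wrap on overflow. A minimal sketch under those assumptions (our naming, not Merit's in-house utilities):

```python
# Link utilization from successive SNMP octet-counter polls (Section 3.2).
# The counters are cumulative 32-bit values that wrap on overflow, so the
# delta is taken modulo 2**32. A sketch, not Merit's actual tooling.

COUNTER_MOD = 2 ** 32
POLL_SECONDS = 15 * 60        # centralized collection interval
T1_BPS = 1_544_000            # backbone line rate, bits per second

def counter_delta(prev, curr, mod=COUNTER_MOD):
    """Octets sent between polls, tolerating at most one counter wrap."""
    return (curr - prev) % mod

def utilization(prev_octets, curr_octets, line_bps=T1_BPS):
    """Mean utilization (0..1) of a link over one poll interval."""
    bits = counter_delta(prev_octets, curr_octets) * 8
    return bits / (line_bps * POLL_SECONDS)

# A wrapped counter still yields a sane delta:
delta = counter_delta(4_294_960_000, 120_000)   # 127,296 octets
```

A counter cleared by a restart, the other clearing mechanism the text mentions, cannot be distinguished from a wrap by differencing alone; handling that case is omitted here.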
4 Characterization Targets

We now present observed traffic characteristics of the operating T1 network. Kleinrock and Naylor's [13] paper serves as our basic inspiration, yet we do not offer exactly the same set of data, since the environment and parameters of interest have changed somewhat. In some cases the graphs presented in [13] to depict network behavior are not possible, or, more specifically, have no meaning, in today's NSFNET. Indeed, today's environment often requires translation of the vocabulary applied in Kleinrock's study. For example, today's analogy to an ARPANET "user-host" system is an external interface to a Nodal Switching Subsystem. Therefore, our statistics may diverge in their precise name, but we intend that the spirit of our effort is the same: to describe characteristics of network behavior, and their changes over time.

Next we briefly discuss the characteristics which Kleinrock and Naylor investigated, and describe our follow-up parameters:

1. Message and packet size distributions. In today's NSFNET backbone, the term "message" is not applicable.
2. Delay statistics. We present median statistics calculated by Merit, but no dedicated experimental measurements for the isolated purpose of this study.
3. Mean traffic-weighted path length. The T1 network was designed with a maximum diameter of three backbone links, which continued until the installation of the T3 upgraded network. In these architectures, the issue of path length does not hold as much interest.
4. Incest (the flow of traffic to and from hosts at the same local site). We discuss in Section 6.2 why incest does not apply in the current environment.
5. Most popular sites and links.
6. Favoritism (the property which a site demonstrates by sending much of its traffic to a small number of sites).
7. Link utilization.
8. Error rates.

We also investigate a few things which were not as applicable in Kleinrock and Naylor's environment:

1. Attributing long term growth in traffic volume to domains and protocols.
2. The trend in average packet size on the network, both over long and medium term intervals.
3. International distribution of traffic.
As Kleinrock and Naylor pointed out in their study, such statistics are of more than merely historical interest. In many cases they may lead to changes in parameters used for implementation: packet and buffer sizes, number of buffers, channel capacities, fragmentation policies, etc.

Before presenting the statistics in the next section, we briefly discuss the concept of granularity in information collection and presentation.

4.1 Aggregation: Time and Space Granularities

In aggregating statistics, one must select granularities along multiple dimensions of time and space. Aggregation involves two parameters: the granularity at which one collects information, and the granularity with which one presents information. For example, a packet count for a fifteen minute interval implies a selected collection granularity. In contrast, the "bucket size" of a histogram defines a granularity of presentation. In both cases, selection of the appropriate granularity for aggregation requires careful consideration.

The T1 NSFNET backbone currently collects statistics through SNMP-based tools at 15-minute intervals, and we restricted ourselves to these data sets, with this time granularity, for this study. Such data sets serve only as a starting point. This interval may be appropriate to answer questions about the high-level distribution of network usage on a daily basis. Other questions, such as analyzing packet interarrival time distributions, or predicting the bandwidth requirements of continuous media data flows, will require a much finer time granularity, perhaps even in the subsecond range.

"Space" granularity in a network is harder to define than time granularity, but we offer a brief description which provides a framework for later sections. Along the space dimension, one might want to focus on specific nodes or links in order to examine behavior such as favoritism or hot spots. Alternatively, when presenting internetwork traffic flows, one might want to develop a matrix of country-by-country traffic flows over time.
Other granularities include: backbone node, external interface (of a backbone node), autonomous system, agency, network number, mid-level service provider, host, application, and user. These granularities do not have an inherent order, as a single user or application might straddle several hosts or even several network numbers. As with the time dimension, selecting the optimal granularity depends on the question of interest, and often requires experimentation.

5 Results: Long-Term Trends

Figures 3 and 4 show traffic volume and the count of networks configured in the NSFNET backbone, respectively, for the last four years. Figure 3 shows the decreasing trend in traffic volume on the T1 backbone beginning in late 1991, reflecting the migration of traffic to the T3 backbone. The discontinuities in this figure correspond to unavailable statistics. Figure 4 depicts growth, domestic as well as international, in the number of networks configured for the NSFNET backbone.

Figure 5 shows traffic volume on the T1 network attributed to network type (e.g., commercial, research, defense), as documented by the DDN (Defense Data Network) Network Information Center.

By classifying traffic according to the TCP/UDP port it uses, one can attribute traffic to specific end-user applications. Figure 6 shows traffic volume on the network distributed by application type for the
top seven application categories in May 1992. This figure illustrates how the composition of applications has also changed over the longer term. The "other protocols" category corresponds to applications using a transport protocol other than TCP or UDP. The "other ports" category corresponds to non-standard or not well-defined ports. Notice that this last category has grown much larger over the years, reflecting an increasingly diverse environment, and the diminishing ability of Merit to track individual new applications, which often use non-standard or not well-defined ports. Some methodology for classification based on the degree of interactivity, or quality of service requirements, of network traffic will become increasingly important as the environment evolves.

[Figure 3: Traffic Volume on the NSFNET Backbone (Total, T1, T3), 6/88 to 6/92]

[Figure 4: Number of Networks Attached to the T1 Backbone (Total, U.S., non-U.S.), 6/88 to 6/92]

[Figure 5: Traffic Offered by Network Type (Research & Education, Government, Commercial), 7/89 to 3/92]

[Figure 6: Traffic Offered by Protocol (smtp, nntp, nameserver, telnet, ftp, other protocols, other ports), 8/89 to 5/92]

A further notable event occurred in September 1991. During the 1990-91 timeframe, significant discrepancies between the SNMP-based traffic counts and those derived by means of NNStat emerged, as shown in Figure 7. It became clear that the processor collecting the NNStat data was unable to keep up with the total nodal traffic flow. Responding to these concerns, Merit deployed a sampling technique which captures only one out of fifty packets. This reduced the discrepancies significantly, and resulted in a significant jump in packet volume on the graph in Figure 6. We explore the effects of sampling on NSFNET backbone statistics in [6].

[Figure 7: T1 NSFNET Backbone Packet Totals (SNMP derived vs. NNStat derived), 1/90 to 6/92]

Figure 8 displays boxplots for several well-known Internet protocols (ftp-data, nntp, smtp, nameserver, undefined ports, telnet, ftp). These boxplots illustrate the range of the mean monthly packet size for the indicated protocol over the last two years. The outside ends of the boxplots indicate the range of the mean packet sizes; the inner boxes show the middle half of the data; and the line in the middle of the plot represents the median.

[Figure 8: Mean Monthly Packet Sizes, by protocol]

This figure reflects a quite visible distinction among three types of data transfer: interactive, transaction-oriented, and bulk transfer. Current implementations of interactive applications frequently send end-to-end packets with single or few character payloads. Transaction-type protocols generally exchange short, multi-character lines, while bulk data transfer mechanisms typically use full-size packets for the payloads.⁷ As Figure 8 illustrates, the mean packet size is relatively short for the telnet protocol, somewhat longer for transaction-oriented protocols such as NNTP and SMTP, and much larger for bulk transfer protocols such as ftp-data. Frequent transmission of zero-payload acknowledgements during TCP connections dramatically reduces the average packet sizes for many individual TCP applications.

6 Results: A Closer Look

We now focus on a specific month, May 1992, to investigate traffic patterns in more detail. During this month some 980 billion bytes were carried through the T1 network by some 5 billion packets to a total of 4,254 networks. On the average the entire network was accepting some two thousand packets per second and carrying roughly 366,000 bytes (almost 3 million bits) per second among end sites. The mean packet length was 186 bytes/packet, although such a statistic does not well reflect the bimodal distribution of packet sizes.

We presented earlier a long term perspective of packet size. While we do not have data to present statistics on the complete packet size distribution, we do have some indication of packet size for the month of May, as well as a selected week in May. Figure 9 shows the distribution of mean packet sizes for the 1500 site pairs which exchanged the most traffic in May 1992. The middle and lower graphs of the same figure show the same distributions for the middle and bottom 1500 site pairs, respectively.

[Figure 9: Distribution of Mean Packet Sizes for Network Number Site Pairs (showing the top, middle, and bottom ranges of traffic volume) of the T1 NSFNET backbone in May 1992]

Figure 10 shows the average packet size for each fifteen minute interval over the course of a week. This graph is consistent with the hypothesis that the usage of bulk transfer applications, using larger packet sizes, intensifies during the off-peak hours, or, alternatively, that interactive activity, generally characterized by smaller packet sizes, drops off during off-peak hours. Both of these phenomena would result in the same effect on the graph. However, Paxson [14] confirms the latter behavior on several Internet data sets, while finding that the majority of non-interactive connections do occur during peak hours.

[Figure 10: Average Packet Size on the T1 NSFNET Backbone (10 to 17 May 1992)]

⁷ Full-size packets, of the Maximum Transmission Unit size, are frequently configured around 576 bytes for many Internet hosts.
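The per-second averages quoted at the start of this section follow directly from the monthly totals; a quick consistency check (31-day month assumed; totals are the approximate figures from the text):

```python
# Consistency check on the May 1992 averages quoted above.
# Totals are the approximate figures from the text; May has 31 days.

SECONDS_IN_MAY = 31 * 24 * 3600      # 2,678,400 seconds
BYTES_CARRIED = 980e9                # "some 980 billion bytes"
PACKETS_CARRIED = 5e9                # "some 5 billion packets"

bytes_per_second = BYTES_CARRIED / SECONDS_IN_MAY       # ~366,000 bytes/s
bits_per_second = 8 * bytes_per_second                  # ~2.9 Mbit/s
packets_per_second = PACKETS_CARRIED / SECONDS_IN_MAY   # ~1,870 packets/s
```

These reproduce "roughly 366,000 bytes (almost 3 million bits) per second" and "some two thousand packets per second."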
6.1 Latency

The NSFNET backbone collects a node-to-node latency matrix. Figure 11 shows the distribution of median delays between any two nodes, including non-adjacent ones. Figure 12 shows the same statistic for only adjacent nodes, corresponding to the backbone links themselves rather than a full node-to-node matrix.

[Figure 11: Distribution of one-way median delays between backbone nodes (May 1992)]

[Figure 12: Distribution of one-way median delays across backbone links (May 1992)]

6.2 Traffic Locality

Traffic locality, or favoritism, is a specific type of geographic flow, relative to a particular observation point. Kleinrock and Naylor [13] measured levels of incest within the ARPANET, where they define incest as traffic which travels zero hops because it enters one ARPANET IMP and exits the same IMP. These traffic locality issues have led us to explore several aspects of the non-uniformity of traffic flow.

In Figure 13 we plot the cumulative distribution of messages sent from and to the n busiest source and destination networks (note the log scale). Over 50% of the traffic is generated by the busiest 31 of the 4254 site networks (0.7%), and over 50% travels to the 118 most popular (2.8%) destinations. Specific site pairs show even more marked favoritism: 46.9% of the total traffic on the backbone travels between 1500 (0.28%) of the 560,049 site-pairs.

[Figure 13: Cumulative Distribution of Traffic (bytes) for 200 Busiest Sources and Destinations, by rank of network usage (log scale)]

Figure 14 illustrates the "favorite-site" effect. The graph plots the source-favoritism for three networks, SDSC, UCSD, and NSF: the percent of traffic to each source's n favorite destinations. We selected these sites to demonstrate the significant difference in the degrees of favoritism exhibited by UCSD, SDSC and NSF's local networks. For UCSD, 90 percent of the traffic goes to the 63 most favored (6.7%) of the 933 sites. The 90th percentiles for SDSC and NSF were 62 out of 933 total sites (6.6%) and 61 out of 458 (13%) sites, respectively. Note that favoritism at each source site involves a separate set of most popular destination sites, since each source need not have the same set of favorites.

[Figure 14: Cumulative Distribution of Traffic to Favorite Destinations (SDSC, UCSD, NSF)]

6.3 International Distribution of Traffic

As described in Section 3, Merit also provides a report attributing monthly traffic volume to individual countries. Figure 15 uses this data to illustrate the global use of the NSFNET infrastructure, in particular for transit traffic among non-US sites. Note that the non-US data line follows the axis on the right side of this figure.
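The locality figures in Section 6.2 amount to asking how few ranked networks (or site pairs) carry a given share of the traffic. A sketch of that computation over a toy traffic vector (the byte counts are invented for illustration, not the May 1992 matrix):

```python
# Locality ("favoritism") computation from Section 6.2: rank entries by
# traffic volume and find the smallest prefix carrying a given share.
# The volumes below are invented for illustration.

def busiest_prefix(volumes, share=0.5):
    """Smallest number of ranked entries carrying at least `share` of traffic."""
    ranked = sorted(volumes, reverse=True)
    target = share * sum(ranked)
    running = 0
    for count, volume in enumerate(ranked, start=1):
        running += volume
        if running >= target:
            return count
    return len(ranked)

per_network_bytes = [400, 250, 150, 110, 50, 30, 15, 5]
half = busiest_prefix(per_network_bytes)          # 2 of 8 networks carry >= 50%
ninety = busiest_prefix(per_network_bytes, 0.9)   # 4 of 8 networks carry >= 90%
```

Applied to the real May 1992 matrix, this is the computation behind "the busiest 31 of the 4254 site networks" carrying over 50% of the traffic.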
Analysis of traffic among non-US networks which use the United States infrastructure is not only of research interest, but also of obvious interest to a carrier considering the provision of IP services.

[Figure 15: Cumulative distribution of packet volume into the T1 NSFNET backbone by country (May 1992): proportion of packets by source nations, and by source nations excluding the U.S.]

6.4 Traffic Volume and Utilization

We now discuss other global measures of network behavior. We plot most figures in this section over the course of the week 10 to 16 May 1992 to enable a closer focus on daily as well as weekend/weekday cycles.

The internal traffic on backbone links is one measure of the effectiveness of the network design and use. In Figure 16 we show the link utilization averaged over the entire network on a 15 minute basis for the week. The maximum 15 minute average line load was approximately 27.12% and corresponded to an internal network flow of roughly 0.41 Mbps. The maximum 15 minute line load for a single link on the network was approximately 89% and corresponded to an internal network flow of roughly 1.34 Mbps. The daily cycles and weekend lulls are unsurprising. As the NSFNET's international clientele grows, we may see some mitigation of these daily cycles.

[Figure 16: Mean utilization of T1 backbone links, 10 to 17 May 1992 (mean = 15.5%)]

Figure 17 shows the distribution of the mean utilizations of all the links on the backbone. Note that the T3 is now assuming much of the traffic load, so utilization is much lower now than it was several months ago.

[Figure 17: Distribution of mean utilizations of T1 backbone links, 10 to 17 May 1992]

Figure 18 illustrates the utilization of the most congested link during each 15 minute interval. Note that this graph does not depict a single physical link, but rather a consolidated "virtual" link composed of the link which was the busiest during each of the 96 fifteen-minute intervals of the day.

[Figure 18: Utilization of the most heavily used link in each fifteen minute interval, 10 to 17 May 1992 (mean = 47.1%)]

Figure 19 presents a comparison of utilizations for the link with the highest average utilization. The upper graph presents the link itself; the middle graph presents the reverse direction of the same link; and the lower graph presents the difference in utilization between the above two directions.

6.5 Reliability

A reliability metric for an infrastructure as pervasive and complex as the NSFNET backbone must consider many parameters, including nodal interface link
errors, bit error rates as perceived by DSU/CSUs,⁸ full and partial node outages, routing configuration errors, or other unanticipated events. We offer two examples of views of individual reliability aspects of the backbone.

[Figure 19: Link with highest average utilization (College Park to Houston), 10 to 17 May 1992: the link itself (mean = 35.0%); its reverse direction (mean = 22.8%); and the difference in utilizations between the two directions]

Network error events on the T1 backbone are characterized as either Class 1 (full node outage) or Class 2 (service reduction). The Class 1 percentages of node reliability, as viewed by the Network Operations Center (NOC), are included in publicly available monthly reports. Figure 20 plots the maximum and mean nodal downtime for all T1 NSFNET backbone nodes from 1988 to 1991.

[Figure 20: Maximum and mean nodal downtime (percent) for T1 NSFNET backbone nodes, 9/88 to 1/92]

Figure 21 shows communication errors as seen by serial line interfaces. The graph counts the number of links during each fifteen minute interval that show link errors during that observation period.

[Figure 21: Link errors throughout the T1 NSFNET backbone, 10 to 17 May 1992]

Figure 22 provides a histogram of the same data, showing that it is far more common for a fifteen-minute observation interval to show at least one link with errors than none. Note that the bucket corresponding to one error could correspond to a single link having a single bit error for a full fifteen-minute interval. These are very limited snapshots into selected aspects of NSFNET backbone "reliability", and we are not in a position to draw conclusions or make reliability assessments from these limited data sets. A responsible investigation of backbone reliability requires richer data sets.

[Figure 22: Link error histogram throughout the T1 NSFNET backbone, 10 to 17 May 1992]

⁸ Data Service Units and Channel Service Units are machines which interface digital links with nodal switching subsystems.

7 Conclusions

We have presented some traffic characteristics of the T1 NSFNET backbone. We have included both long term characterizations, essentially for the lifetime of the T1 network, as well as more detailed results for the month of May 1992. The long-term data are presented on a monthly basis and were obtained from publicly available summaries of measurements published by Merit Network, Inc. We can make the following observations from the presented data.

Traffic, both in packets and bytes, and the number of networks (in the sense of IP address families assigned to campus and other networks served by the T1 backbone) have been steadily increasing since the network installation. At approximately the end of 1991, the T1 traffic volume dropped off, as the NSFNET project began to divert traffic to the T3 backbone. The increase in traffic volume in bytes seems quadratic, while the increase in networks served appears linear.

Most of the traffic volume is within the research community, and the highest volume applications are file transfer (using the FTP protocol) and network news distribution (using the NNTP protocol). Lately, a considerable proportion of the traffic cannot be directly attributed to protocols and applications because of the proliferation of protocols using non-standard TCP/UDP port numbers or other transport protocols.

Monthly summaries of mean packet size do not reveal any particular trend, but fifteen-minute data show daily cycles which are compatible with the hypothesis of bulk transfer applications, using larger packet sizes, intensifying during the off-peak hours, or, correspondingly, of interactive activity, generally characterized by smaller packet sizes, dropping off during off-peak hours. Delay statistics (the monthly median of sample packet delays obtained through ping at fifteen-minute intervals) reveal that typical end-to-end delays on the backbone do not exceed 100 milliseconds and typical link delays do not exceed 45 milliseconds.
As was true two decades ago in the ARPANET environment, traffic favoritism is high. For example, 0.28% of the (customer/campus/site) network pairs generate 46.9% of the traffic. Link utilization is high, even following the diversion of a considerable proportion of the traffic to the T3 network. The mean overall utilization for the month of May 1992 was 15.4%, while 5 nodes had more than 30% mean utilization for the month. Over fifteen-minute intervals, utilization of highly utilized links typically exceeded 50% and sometimes 80%. The most heavily used link for the month, College Park to Houston, had utilization almost always exceeding 20% (for fifteen-minute intervals) and more than 50% during the peak hours of the day. Interestingly, the reverse direction, Houston to College Park, had almost uniformly lower utilization.

The available data hold further potential for analysis which can lead to a better understanding of traffic on the network. However, there are also limitations of the data which make exploration of some interesting questions problematic, in particular those involving correlations between instantaneous network performance and traffic intensity and characteristics. Complicating the task are the enormous difficulties with methodological data collection at such large scale, and in an operational environment. These problems explain, in our view, the lack of other similar studies for wide-area networks. They also render any traffic data particularly valuable and significant.

We expect to continue our research and refine our methodologies in the process of applying them to new realms. In particular, we hope to perform similar investigation of the T3 networking environment, as well as the CASA gigabit network project and its planned infrastructure.

Acknowledgements

We would like to express our appreciation for the cooperation and assistance we received from Merit Network, Inc. and Advanced Network Services (ANS).

References

[1] T. Asaba, K. Claffy, O. Nakamura, and J. Murai. An analysis of international academic research network traffic between Japan and other nations. In INET '92, June 1992.

[2] R. T. Braden and A. DeSchon. NNStat: Internet statistics collection package, Introduction and User Guide. Technical Report RR-88-206, ISI, USC, 1988.

[3] R. Caceres, P. Danzig, S. Jamin, and D. Mitzel.
Characteristics of wide-area TCP/IP conversations. In SIGCOMM '91, September 1991.

[4] J. D. Case, M. Fedor, M. L. Schoffstall, and C. Davin. Simple Network Management Protocol (SNMP). Internet Request for Comments Series RFC 1157, 1990.

[5] B. Chinoy and H.-W. Braun. The National Science Foundation Network. Technical Report GA-A21029, SDSC, 1992.

[6] K. Claffy, H.-W. Braun, and G. Polyzos. Application of sampling methodologies to wide-area network traffic characterization. UCSD Technical Report CS93-275, January 1993.

[7] K. Claffy, H.-W. Braun, and G. Polyzos. Measurement considerations for assessing unidirectional latencies. Internetworking: Research and Experience, to appear in 1993.

[8] P. B. Danzig, S. Jamin, R. Caceres, D. J. Mitzel, and D. Estrin. An empirical workload model for driving wide-area TCP/IP network simulation. Internetworking: Research and Experience, 3(1), 1991.
[9] M. Davis. Analysis and optimization of computer network routing. Master's thesis, U. of Delaware, 1988.

[10] D. Feldmeier. Estimated performance of a gateway routing table cache. Technical report, M.I.T. Laboratory for Computer Science, 1988.

[11] S. Heimlich. Traffic Characterization of the NSFNET National Backbone. In Proceedings of the 1990 Winter USENIX Conference.

[12] L. Kleinrock. Queueing Systems, Volume II: Computer Applications. Wiley, 1976.

[13] L. Kleinrock and W. E. Naylor. On measured behavior of the ARPA network. In AFIPS Proceedings, 1974 National Computer Conference, volume 43, pages 767-780. John Wiley & Sons, 1974.

[14] V. Paxson. Empirically-Derived Analytic Models of Wide Area TCP Connections. Unpublished paper, 1992.

[15] I. Wakeman, D. Lewis, and J. Crowcroft. Traffic analysis of trans-Atlantic traffic. In INET '92, June 1992.