White Paper

CAPACITY SOLUTIONS WITH DWDM FOR BROADCAST AND VIDEO ON DEMAND

VOD and interactive video place huge capacity demands on the network. Cable and Telco operators alike are turning to DWDM optical transport as a solution for capacity and service quality.
Capacity Needs For Push Based Video

TABLE OF CONTENTS

Background
Centralized, Regionalized and Shared Head Ends
Push Versus Pull Video
Preserving Quality of Service
Upgrading Capacity
The Optical Transport Advantage
Service Transparency
Flexibility
Edge Aggregation
Service Protection and Management
Conclusions
Company Profile
BACKGROUND

The battle is heating up between Cable and Telco for residential customers. Bundled video, data and voice services, the so-called triple play, are being offered by both as a strategic weapon for customer retention as well as greater ARPU (Average Revenue Per User). The battleground, though, is rapidly shifting from broadcast to an interactive and personal TV business model, where video becomes a push based service delivered from servers. Critical to planning for interactive, session based video is capacity. If the transport network is capacity limited, the ability to launch and support interactive video is challenged. A service model that risks service blocking or subscriber limiting places not only revenues at risk, but customer satisfaction as well.

With the rapid growth in VOD, Cable has already faced the problem of capacity contention. The HFC cable plant needs to support a combination of analog and digital broadcast, VOD unicast, voice (VoIP or TDM), and high speed data

Figure 1: Video service growth (Broadcast, VOD and PVR, 2002-2008).
services. By upgrading the two-way cable plant and splitting service nodes, subscribers on the coax portion of the network get more two-way bandwidth to support on demand services without contention or quality issues. But while the number of subscribers per node could be adjusted and the amount of frequency supporting services could be increased, a bottleneck develops in the fiber backbone distribution plant between the nodes and the head end. The bottleneck is largely due to the huge capacity demand placed by VOD on the network, since the VOD servers are located at the head end. Data as well as voice face similar, though proportionally smaller, contention issues in the backbone network.

Telcos have primarily focused on delivering triple play video over their copper subscriber loops using DSL, and face their own capacity issues. Technologies such as VDSL/VDSL2, ADSL2/ADSL2+ and even fiber PON are all being looked at for enabling triple play video by virtue of the bandwidth they provide downstream to the subscriber. Delivering 2-3 streams of broadcast video along with voice and data over copper using DSL is a significant loop planning and engineering challenge.

Figure 2: Transport capacity forecast (SD broadcast: 600 Mb/s for 125 MPEG-2 channels; VOD/n-PVR/games: 1800 Mb/s at 20% peak hour MPEG-2, plus 200 Mb/s; HD broadcast: 1000 Mb/s for 25 MPEG-4 channels; high speed data: 2 Mb/s best effort; VoIP or VoATM: 600 Mb/s, one line per home with no blocking).
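The forecast figures in Figure 2 can be totaled to show why the backbone becomes the bottleneck. A rough sanity check in Python (the per-service numbers are taken from the figure; the OC-48 comparison rate is an approximation):

```python
# Summing the per-service transport figures from Figure 2 gives a
# rough head-end egress budget (all numbers in Mb/s, illustrative).
services_mbps = {
    "SD broadcast (125 MPEG-2 channels)": 600,
    "VOD / n-PVR peak hour (MPEG-2, 20% concurrency)": 1800,
    "Games": 200,
    "HD broadcast (25 MPEG-4 channels)": 1000,
    "VoIP/VoATM (1 line per home, no blocking)": 600,
}

total_mbps = sum(services_mbps.values())
OC48_MBPS = 2488  # approximate SONET OC-48 line rate

print(f"Aggregate forecast: {total_mbps} Mb/s")
print(f"Exceeds a single OC-48 ({OC48_MBPS} Mb/s): {total_mbps > OC48_MBPS}")
```

Even before best-effort data is counted, the aggregate is well beyond what a single OC-48 backbone can carry.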
Although the distribution plant differs, the Telco transport network is no different from Cable's when it comes to capacity requirements. This is even more true on the Telco side, since most backbone transport networks were initially architected for voice and data services, not video. Carving out bandwidth to transport and distribute 100-200 channels of broadcast is possible without a major network overhaul, but interactive video such as VOD and network PVR increases capacity requirements exponentially, beyond the economic or technical ability to upgrade the transport network.

Time shift TV, exemplified by TiVo and delivered by way of a hard drive in a set-top box, has proven successful, and demand has grown rapidly. But set-top costs become an issue for service providers implementing the service. Along with this, new video entrants such as Microsoft TV are shifting PVR (time shift broadcast as well as TV on demand) to a network PVR architecture (n-PVR). In combination with VOD, interactive services in the near future may become focused more on centralized servers storing and streaming content. This could be everything from on demand video libraries and pre-recorded and stored broadcast to Internet originated video and games.

Figure 3: New service demand, 2005 (time shift TV, VOD, VoIP, adult content, online gaming, HDTV).
This report presents the evolution of interactive services and what constitutes a competitive service model in the near future for both Cable and Telco. It looks at the network architecture requirements for new services based on economic business case demands. From there, it presents an approach to designing and upgrading the transport and distribution network for current as well as future capacity needs, using DWDM, without re-building the network or upgrading all of the legacy systems sharing the transport network.

CENTRALIZED, REGIONALIZED AND SHARED HEAD ENDS

Cable operators as well as most Telcos have a primary or centralized head end. The centralized head end serves a geographically defined city or regional market area. Services are transported from the head end to distribution nodes via fiber, and from the distribution nodes to subscribers over either coax (Cable) or DSL (Telco). Large Cable or Telco service providers that operate regional, statewide or even national markets may have one or more primary head ends as determined by time zones and geography. The number and locations are often determined by the economic trade-offs between transport costs for video and the cost of building head ends.

Within local or regional markets, Cable or Telco operators will have smaller head ends bringing in local content (off-airs, PEG channels). The number of secondary or remote head ends varies with the geographic size of the operation. In Telco, it is common for smaller independent telephone companies within a state or region to form a consortium and share a central head end when they have a fiber backbone connecting them. They may also have local remote head ends for off-air, PEG or community channels for their individual markets. In considering CAPEX for video services, the business case must weigh the relative costs for equipment and systems against the costs for transport and distribution.
Ideally, Cable and Telco both want to minimize the number of servers or encoders and centralize service as much as possible for economic reasons. The trade-off between transport costs and distributed systems costs is an important consideration. Most experts conclude that centralized n-PVR TV on demand favors Telcos, given the capabilities IP offers.
Initially, transport costs for VOD and interactive services were an economic problem for Telcos because the backbone network was SONET/ATM based. In Cable, fiber provides RF transport, so the problem becomes increasing capacity for two-way services. For both Cable and Telco, applying DWDM to optical transport (backbone or backhaul) economically addresses capacity needs for interactive video services. Cable and Telco will continue to debate the perceived benefits or weaknesses of their respective architectures for delivering broadband services, but in reality both share common concerns regarding bottlenecks and capacity.

Figure 4: Centralized video distribution (regional head ends in City A and City B, fed by local VOD, satellite, off-air antennas and local studios, connected through secondary DWDM metro networks and a DWDM backbone to Telco DSL/fiber and Cable HFC access nodes).
PUSH VERSUS PULL VIDEO

Television service, up until the beginning of the 21st century, was a pull based broadcast service. In the beginning, antennas pulled broadcast signals from the air and consumers tuned to channels to access programs. When Cable emerged, broadcast channels grew rapidly and were delivered over HFC networks as analog video, and later as digital broadcast signals residing on the network that consumers tuned to. Digital satellite is no different, nor is Telco TV (IPTV), except that in the latter case video is multicast and switched rather than residing on a coaxial broadcast bus.

Pull based broadcast has pre-determined bandwidth requirements (based on channel encoding), and capacity is easily engineered to prevent blocking. The biggest challenge for Cable and Telcos is HDTV, since it requires 3-4 times the bandwidth per channel of SDTV. Telcos using their copper networks and DSL still face a roadblock in offering HDTV due to the bandwidth limitations inherent in their plant.

Figure 5: Cable VOD distribution architectures (centralized, distributed and hybrid local access, from content sourcing through head end VOD arrays and transport to cable hubs and edge VOD).
Interactive video, more specifically push based video such as VOD and PVR, has only commercially emerged in the last 3-4 years. Its rapid growth and popularity have been fueled by consumer desire to view programs when convenient to their schedule and lifestyle, rather than adjusting their routines around the times programming is broadcast on a channel. The explosion of the Internet is a contributing factor in the consumer push toward real time, instant access to information and entertainment that is driving the growth in interactive TV services.

In a broadcast realm, a channel is streamed once and watched (tuned to) by thousands of subscribers at any time. It doesn't matter at what point in the program a subscriber decides to join, since it is statically broadcast. But with VOD and interactive TV, the consumer not only chooses when to watch a program, but also has control over it while watching. A program can be paused, rewound, fast-forwarded and stopped, just like a DVD or VCR. Therefore, each viewing is streamed uniquely to that subscriber. There could be thousands of people watching the same new release movie, but in each instance it is streamed as unicast to each subscriber as they request it. This points to the bandwidth-intensive nature of interactive video compared with broadcast.

PRESERVING QUALITY OF SERVICE

Initially, operators would statistically calculate what peak hour demand would be for VOD and engineer capacity for it, or set a bandwidth threshold for VOD beyond which no more purchases would be allowed until bandwidth again became available. A number of VOD back office systems have provisioning that limits streams to a threshold in order to avoid blocking. Yet limiting blocks new subscribers, and certainly blocks revenues as well. Practically speaking, bandwidth is managed by shutting off service when capacity thresholds are reached.
Other services are protected from blocking at the expense of revenues. Managing bandwidth to ensure service quality and priorities takes precedence over service demand. In addition to potential blocking of access to VOD based on demand, the other issue Cable and Telco face is maintaining the quality of all services in the network. Regardless of the network used (HFC, ATM or IP), there are Quality of Service (QoS) mechanisms to help ensure one service does not consume bandwidth and interfere with or block another service. An operator cannot afford to have some portion of broadcast channels blocked, have voice traffic become unavailable, or have the data
service become so slow people complain, simply because too many people are watching VOD.

In an ATM environment, PVCs (Permanent Virtual Circuits) are used to manage service quality. They can be constant, variable or unspecified bit rate virtual connections (CBR, VBR and UBR). For video, CBR is used to assure bandwidth remains at the prescribed level. IP simplifies QoS by using ToS bit tagging of VLANs and allowing flow priorities to be set per VLAN. Individual streams or sessions do not have to be provisioned and managed, since control is at the VLAN level. Priorities range from very high to low. High priority VLANs, such as those used for video, have priority rights to bandwidth for quality purposes.

Either ATM or IP QoS is acceptable at the local transport and loop distribution level, where the number of subscribers, traffic and demand are segmented and limited. But it is another case in the backbone transport network distributing services from the head end, Internet and switched networks. The backbone supports the aggregate number of total subscribers and is subject to much higher risks of contention and blocking. At a local access node level, there may be only a handful of subscribers using VOD or interactive TV services at any given time. Bandwidth and capacity may be adequate for incremental demand, since a node supports 100-300 subscribers. But the backbone may have dozens or more nodes connected to it, making unicast capacity needs extremely high, often more than the bandwidth available. As a result, interactive service is degraded or limited, or other services sharing the bandwidth are degraded or blocked.
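The node-versus-backbone asymmetry described above is easy to quantify. A hypothetical sizing sketch (the subscriber range comes from the text; the node count, peak concurrency and stream rate are assumptions chosen only for illustration):

```python
# Hypothetical backbone sizing for unicast VOD: each access node is
# comfortable in isolation, but the backbone carries the sum of all nodes.
SUBS_PER_NODE = 300        # upper end of the 100-300 range in the text
NODES_ON_BACKBONE = 40     # "dozens or more nodes" (assumption)
PEAK_CONCURRENCY = 0.20    # 20% of subscribers streaming at peak (assumption)
STREAM_MBPS = 3.75         # one MPEG-2 SD stream (assumption)

per_node_mbps = SUBS_PER_NODE * PEAK_CONCURRENCY * STREAM_MBPS
backbone_mbps = per_node_mbps * NODES_ON_BACKBONE

print(f"Per node:  {per_node_mbps:.0f} Mb/s")
print(f"Backbone:  {backbone_mbps / 1000:.1f} Gb/s")
```

Under these assumptions a single node needs only a few hundred Mb/s, while the shared backbone needs roughly 9 Gb/s, well beyond an OC-48.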
UPGRADING CAPACITY

Upgrading backbone network capacity can become costly. Adding new fibers, increasing fiber bandwidth by upgrading SONET ADMs or ATM switches, building an IP network overlay, or in the case of Cable adding more RF frequencies can all become expensive. Operators need only assess the cost of going from OC-48 to OC-192, or from 1 Gigabit to 10 Gigabit Ethernet, to appreciate the problem. Increasing bandwidth may only be a partial solution, since unicast on-demand video and Internet data traffic patterns are both variable and unpredictable from a traffic engineering perspective.

The economical and simple solution is DWDM for optical transport in the backbone network. Existing switch, router, access and head end systems can continue to be used with their legacy interfaces (ATM, IP or RF). By using the existing fiber and adding optical multiplexing equipment that creates protocol transparent wavelengths (lambdas), a fiber's bandwidth can be increased manifold. For instance, a backbone OC-48 ATM network (2.4 Gb/s) can be increased to a 10 Gigabit optical transport link with 8 x 1.25 Gb/s lambdas. This is more cost effective than an OC-192 ADM upgrade, or in the case of an IP network, going from 1 Gb/s to 10 Gb/s.

Figure 6: Converged distribution architecture over the fiber infrastructure (head end to local hub): legacy ASI/RF transport with low, sub-gigabit scalability and higher costs; ATM and Ethernet solutions with gigabit VOD scalability and lower cost; and DWDM transport with multiple mappings (ASI, ATM, Gigabit Ethernet/RPR) offering terabit VOD scalability and full migration support.
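The OC-48 upgrade example above reduces to simple arithmetic, sketched here with the rates stated in the text:

```python
# The upgrade path described above: keep the fiber and the existing
# OC-48 ATM gear, and add DWDM lambdas alongside it.
LAMBDA_GBPS = 1.25   # per-wavelength rate from the text
NUM_LAMBDAS = 8
OC48_GBPS = 2.488    # approximate OC-48 line rate

dwdm_gbps = LAMBDA_GBPS * NUM_LAMBDAS
print(f"DWDM link capacity: {dwdm_gbps} Gb/s")
print(f"Gain over OC-48 alone: {dwdm_gbps / OC48_GBPS:.1f}x")
```

Eight 1.25 Gb/s wavelengths yield a 10 Gb/s link, roughly four times the original OC-48, without touching the legacy ATM interfaces.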
THE OPTICAL TRANSPORT ADVANTAGE

Dense wavelength division multiplexing (DWDM) has emerged as the premier optical networking transport technology for broadband networks. DWDM transport backbones are deployed today in a wide variety of applications, ranging from ultra-long haul links and trans-national grids to more geographically centric metropolitan and regional transport networks. DWDM has become an industry standard for high capacity broadband networks due to its host of operator advantages: unparalleled scalability, service and protocol transparency, topology flexibility, survivability (redundancy), and of course lowered cost.

DWDM is already used in cable HFC backbone networks supporting VOD, providing massive inter-head end connectivity. As interactive services expand, cable companies are looking to DWDM to connect head end hubs and even fiber fed service nodes. RBOCs and national LECs, as well as LEC consortiums, are looking at DWDM for the same reasons as Cable. As Telco networks evolve from primarily narrowband TDM voice and data networks to full service broadband networks, the existing legacy ATM backbone infrastructure is too small to support the capacity demanded. The cost to upgrade capacity to OC-192 or above becomes prohibitive, whereas DWDM allows massive expansion at a fraction of the cost. Secondly, as Telcos begin transitioning services from legacy ATM to IP, the backbone networks need to accommodate mixed traffic. Rather than absorbing the costs to upgrade capacity and encapsulate IP over ATM transport, or even replace the ATM network with IP, DWDM allows both to co-exist in the backbone network. Thus, voice services (and perhaps even some embedded data services) can remain in an ATM domain for the time being, while new services such as broadcast video and on-demand interactive video can be implemented as end-to-end IP services.
Another key advantage of DWDM optical transport is the added dimension of service control and management offered. It is possible to have up to 64 ITU-compliant wavelengths (lambdas) per fiber, each with 10 Gb/s capacity, using the Zhone GigaMux products. In a typical scenario, an operator may choose to use 10 x 1 Gb/s lambdas per fiber, for 10 Gb/s total capacity. With each wavelength (lambda) in its own discrete carrier band, the services placed in one
wavelength are segregated from services in another. This allows the operator to first determine what services it needs to transport, then how much capacity each service requires (or is planned for). Voice, data, broadcast and VOD services can then be assigned to their own lambda (or multiple lambdas) based on capacity planning. If new services emerge, or the operator wishes to further segregate services by class (TDM and VoIP voice, business and residential services, etc.), new wavelengths can be added. And as services (or subscribers) grow, more wavelengths can be added for any service, such as VOD.

Figure 7: Dedicated wavelength by service (TDM voice, Internet IP data, SD/HD broadcast and VOD/n-PVR lambdas, plus future expansion for VOD, n-PVR, gaming, VoIP and HDTV, on a WDM fiber backbone of 8 x 1.25 Gb/s lambdas with 10 Gigabit capacity).

SERVICE TRANSPARENCY

With each service having its own wavelength, services are transparent to one another by virtue of wavelength separation with DWDM. DWDM is protocol agnostic; it adds no overhead and makes no changes to the streams being transported, instead providing multi-wavelength transport pipes. With optical transport filtering, tributary bypass at intermediate nodes or hub sites can be accomplished, eliminating the costly protocol layering and processing that normally occurs (such as ATM framing or SONET/SDH drop-and-continue multiplexing). And with video, direct optical circuit connectivity has very low and deterministic streaming delay with no jitter effect, ideal for the stringent requirements of transporting MPEG-2 or MPEG-4 over distances.
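The per-service wavelength planning in Figure 7 amounts to a simple allocation table. A sketch of one possible plan (the service-to-lambda assignments here are hypothetical; only the 8 x 1.25 Gb/s backbone comes from the figure):

```python
# Hypothetical per-service wavelength plan on an 8-lambda, 10 Gb/s
# backbone: growth means appending lambdas to a service, not
# re-engineering the shared pipe.
LAMBDA_GBPS = 1.25
plan = {
    "SD/HD broadcast":  2,
    "VOD / n-PVR":      2,
    "TDM voice":        1,
    "Internet IP data": 1,
    "Future expansion": 2,
}

for service, n in plan.items():
    print(f"{service:17s} {n} lambda(s) = {n * LAMBDA_GBPS:.2f} Gb/s")

used = sum(plan.values())
print(f"Total: {used} of 8 lambdas ({used * LAMBDA_GBPS:.1f} Gb/s)")
```

Adding a new service class, or growing VOD, is then just an increment to one entry rather than a network redesign.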
The Zhone GigaMux optical transport products are offered with a wide array of transponder interfaces to meet operator needs. Since the backbone transport network is shared by switches, routers, access systems and head end systems processing services, having the right interface to support every service and network element is important. Examples include 1.0 Gb/s Ethernet, 10 Gb/s Ethernet, various SONET/SDH, ATM, and ASI interfaces. These choices offer an efficient, protocol-agnostic expansion solution and eliminate the need to maintain discrete service or protocol-specific networks. The head end can remain ASI, Ethernet or even ATM, while VOD servers and routers may be Ethernet only. Voice switches could be OC-12 or OC-48. GigaMux can connect video, voice and data network elements sharing the transport network and create a wideband optical wavelength division pipe to distribute traffic within a metro or regional area, across a multi-market footprint, or even across long haul networks. With a 10⁻¹² BER, GigaMux has the performance needed to deliver video over long hops.

FLEXIBILITY

The explosion of interactive services, ranging from broadband Internet data to VOD and PVR, over the past few years points to the flexibility networks need in order to expand as services and demand expand. Time to market and consumer demand dictate that expansion must happen quickly and efficiently, not over a long build cycle. Competitive pressures, too, force operators to build flexibility into their networks, anticipating new services and higher demand rather than reacting to them. That's where GigaMux optical transport excels. A modular, expandable open slot design with innovative optical filtering techniques allows additional wavelengths to be added in a staged, service expansion manner. Operators do not want to over-invest in capacity, yet want to expand without major forklift upgrades or out-of-service conditions. GigaMux enables this expansion on a pay-as-you-grow basis.
Using a variety of rugged channel/band modules, capacity expansion by wavelengths is possible over any type of fiber network topology: linear (point-to-point), ring, star or mesh.
Figure 8: Flexible wavelength expansion (small hub locations with 1-2 wavelengths, mid-sized hubs with 8-16 wavelength expansion, and large head ends with 32-64 wavelength expansion, using channel and band filters with inline and external EDFAs over secondary HFC fiber plants).

VOD provides a good example of the need for expandability and flexibility. A Cable or Telco operator may start out with a single, centralized VOD server as demand develops. As demand increases and content grows, the operator may add regional or local VOD servers to handle demand and distribute content. GigaMux allows adding service locations and nodes, along with capacity, as needs dictate. Expanding the VOD scenario, initial deployment may be entirely centralized at a main head end location, with a VOD server that has a Gigabit output. As demand increases along with expanded content, more servers are added, and there are now multiple Gigabit outputs streaming video to customers. GigaMux allows all the interfaces to be aggregated at a transport hub, placing all the video streams on one or more wavelengths for distribution.

With the use of EDFA-based optical amplifiers, geographic reach is bolstered for long haul transport of video from the main head end to distribution points. The availability of three separate distance-determined optical amplifiers (access/short, metro/medium or regional/long) allows operators to architect transport economically to the network requirements. With dynamic gain stabilization, the problem of transporting jitter-sensitive video across complicated, multi-drop networks is addressed.
Figure 9: Head end service architecture (large central VOD server arrays, core IP routers and telephony gateways at the primary head end; GigaMux fiber transport over the long haul backbone to local markets, with a Gig-E/RPR switch for stream unbundling and switching, edge VOD servers performing IP-to-MPEG QAM/RF conversion for last mile cable plants, and fiber/DSL distribution to the residential set-top and home LAN/PC).

As a final note, wavelength division multiplexing is ideally tailored to the highly asymmetric nature of VOD (and video in general). With the GigaMux platform's filtering architecture, operators can provision wavelength bandwidth arbitrarily in a given direction, allowing more bandwidth to be allocated for downstream video streaming. SONET/SDH or RPR ring technologies, which are more symmetric by design, do not allow this level of bandwidth flexibility.

EDGE AGGREGATION

VOD servers (as well as head end encoders) will typically have Gigabit interfaces. There may be one or more servers or systems outputting video to the subscriber network. It is costly to use aggregation switches or routers to connect all the outputs and provide a single, larger pipe to the transport network. In typical scenarios, each VOD (or n-PVR) server uses only a portion of its output capacity at any given time, based on demand. Ideally, the operator wants to consolidate and groom traffic ahead of the transport network for economic efficiency. This is another area where GigaMux and DWDM help. Using what are referred to as EPC modules, time and wavelength division multiplexing are used within GigaMux to groom and aggregate traffic. IP EPC modules provide either 8:1 aggregation of 10/100Base-T or 4:1 aggregation of 1.0 Gb/s interfaces. There are similar EPC modules for SONET/SDH, providing 16:1 OC-3/STM-1 or 4:1 OC-12/STM-4 aggregation.
Figure 10: Efficient edge aggregation (sub-rate VOD mux/demux with 2,3,4:1 Gig-E, 8:1 Ethernet and 4,16:1 TDM aggregation from routers and ATM switches onto scalable 10 Gb/s DWDM transport, with MPEG over ATM/SONET and Gig-E switching/RPR for dense aggregation, managed by the Zhone Management System, ZMS).

To illustrate the application of EPC, consider an operator with 4 VOD servers at a centralized head end, each with its own 1 Gb/s output. At any given time, each server is 30-50% utilized in stream output capacity, based on demand. An EPC module aggregates the 4 Gigabit interfaces and places them all on a single 2.5 Gb/s wavelength. A single wavelength can now deliver up to 625 4.0 Mb/s MPEG-2 video streams simultaneously across the transport network, without the complexity of routing multiple discrete outputs from different servers.

SERVICE PROTECTION AND MANAGEMENT

Both broadcast and interactive video, especially when transported over distances, require rigid performance. Jitter, packet loss (dropped packets) and the like can dramatically affect the quality of video. While data can tolerate a BER of 10⁻⁸, video cannot. GigaMux optical transport delivers a BER of 10⁻¹², making it much more video-friendly than SONET/SDH or even RPR/MPLS. Having carrier-class redundancy and protection is critical to quality video service delivery. With the trend towards centralized VOD and n-PVR (not to mention most head ends being centralized for primary feeds), the transport network must be highly reliable, since any failure affects an entire service and every subscriber. Failures such as trunk (fiber cut), channel, band and node failures can all have service
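The EPC aggregation arithmetic can be checked with a quick sketch (server count, rates and the 2.5 Gb/s wavelength come from the example; utilization is the stated worst case):

```python
# Four 1 Gb/s VOD server outputs, each 30-50% utilized, groomed by an
# EPC module onto a single 2.5 Gb/s wavelength.
SERVERS = 4
SERVER_GBPS = 1.0
UTILIZATION = 0.50       # worst case of the 30-50% range
WAVELENGTH_GBPS = 2.5
STREAM_MBPS = 4.0        # one MPEG-2 stream

offered_gbps = SERVERS * SERVER_GBPS * UTILIZATION
fits = offered_gbps <= WAVELENGTH_GBPS
max_streams = int(WAVELENGTH_GBPS * 1000 / STREAM_MBPS)

print(f"Offered load: {offered_gbps} Gb/s (fits on wavelength: {fits})")
print(f"Max 4.0 Mb/s streams per 2.5 Gb/s wavelength: {max_streams}")
```

Even at 50% utilization the four servers offer only 2.0 Gb/s, so the single wavelength absorbs them with headroom, and at 4.0 Mb/s per stream it tops out at 625 simultaneous MPEG-2 streams.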
impacts. GigaMux has protection switch cutover capability of 50 ms or less, necessary to maintain video service without customer disruption. Protection switching can be provisioned to a secondary fiber in case of a fiber cut, to secondary wavelengths in the event of band failure, or to protection modules in the event of hardware failure.

GigaMux has a complete management system, scalable from small Telco/MSO networks to large regional, state or national carrier networks. With the ability to manage the ingress points (such as EPC module interfaces) as well as each wavelength on the transport side, each service can be isolated and troubleshot in the event of a problem, unlike many SONET/SDH, STM or IP transport systems that manage the flows but cannot isolate a particular service and its domain as a separate entity. This saves time and expense in tracing the source of a problem and determining the point in the network from which it stems.
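To see why a 50 ms cutover matters for video, consider how little of the picture such an outage spans. A back-of-the-envelope sketch (the 30 frames/s rate and the no-buffering assumption are illustrative, not from the text):

```python
# How much video does a 50 ms protection switch interrupt?
# Assumes 30 frames/s and no receive buffering (worst case).
SWITCH_MS = 50
FPS = 30

frames_affected = SWITCH_MS / 1000 * FPS
print(f"Frames affected by a {SWITCH_MS} ms cutover: {frames_affected:.1f}")
```

Under these assumptions the outage touches only a frame or two, which set-top buffering can typically ride through without a visible glitch.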
CONCLUSIONS

At a recent conference held during SuperComm 2005, an operator in the video business for several years cited the "2-20-10 rule": experience has shown customers will live without phone service for 2 hours and without Internet access for 20 minutes, but only 10 minutes without TV service before complaining. Video is a core service to subscribers and quality-sensitive by nature. Poor picture quality, loss of service or blocking of service all become important factors in customer retention in a highly competitive market. VOD and interactive video services are no less critical than broadcast as demand grows. Therefore, designing a network that is robust in performance, high in bandwidth and scalable to demand is important. Once the threshold of performance is crossed, the cost and time it takes to fix the problem become a major service issue. DWDM optical transport is the cost-effective and proven solution, not only solving the capacity problems of today but providing an expandable transport architecture for the future.

The other challenge facing operators, Cable and Telco alike, is the rapid growth of two-way interactive services, including n-PVR (TV on demand along with time shifted TV), VOD and other push based video services such as interactive gaming. What may seem like sufficient bandwidth today may turn out to be too little very soon. The unpredictable nature of new services and consumer demand makes it imperative to plan in advance for capacity. GigaMux DWDM optical transport future-proofs the network at the long haul, regional/metro or local level to meet such challenges. Since DWDM and GigaMux allow expansion on a pay-as-you-grow basis, operators can avoid paying an economic penalty for unused capacity today, yet add it incrementally in the future. Operators often contemplate de-centralization as a solution to capacity bottlenecks or high transport costs from a central head end/serving site.
Yet business cases generally fall short when large numbers of local VOD servers are planned, or a head end is located in every city. While future networks will have some combination of centralized and regional/metro content delivery systems, it is clear that larger operators will continue to have a small number of master sites delivering the bulk of video services (unicast and broadcast). The architecture will evolve as services and demand change. By planning ahead and deploying a DWDM transport network between primary and secondary sites, and architecting a high capacity
distribution network, operators have the flexibility to expand services as well as locate systems where demand and need are best served. One thing Cable and Telco share is the need for network capacity. GigaMux DWDM optical transport is the ideal solution, both technically and economically.
COMPANY PROFILE

Zhone Technologies is developing the industry's first complete line of telecommunications infrastructure products for the local access network. Zhone's products bring end-to-end integration and management to the local loop, from the core to the subscriber's premises, making it possible for carriers and service providers to offer a full suite of managed services while optimizing bandwidth, reducing costs, and speeding up provisioning. Zhone is making the local access network reliable, manageable, and cost effective, while providing a trusted pathway toward next-generation technologies.

Zhone's mission is to be a single provider of next-generation communication equipment that consolidates the access network infrastructure. Zhone products enable carriers and service providers to migrate from existing network infrastructure to high bandwidth, multi-service platforms with full support for data, voice, video, and entertainment services.
Zhone Technologies, Inc.
@Zhone Way, 7001 Oakport Street
Oakland, CA 94621
510.777.7000 phone
www.zhone.com

Zhone, the Zhone logo, and all Zhone product names are trademarks of Zhone Technologies, Inc. Other brand and product names are trademarks of their respective holders. Specifications, products, and/or product names are all subject to change without notice. Copyright © 2005 Zhone Technologies, Inc. All rights reserved. ZTI-WP-VOD-0805