Beyond 300Gb/s deployment - challenges and solutions Sylwester Biernacki, PLIX CEO s@plix.pl
What is PLIX?
IXP An Internet exchange point (IX or IXP) is a physical infrastructure through which Internet service providers (ISPs) exchange Internet traffic between their networks (autonomous systems). It's like a PABX for IP: all customers exchange IP traffic through a common switching fabric.
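A toy sketch of that shared-fabric idea (hypothetical code, not PLIX's actual fabric): the exchange learns which member MAC sits behind which port, and every member reaches every other member through the same MAC table.

```python
# Toy model of an IXP's shared switching fabric -- a hypothetical
# sketch, not PLIX's real implementation. One L2 domain, one MAC
# table, every member exchanging traffic through it.

class SharedFabric:
    def __init__(self):
        self.mac_table = {}  # learned: source MAC -> member port

    def forward(self, src_mac, dst_mac, in_port):
        self.mac_table[src_mac] = in_port          # learn the sender
        out_port = self.mac_table.get(dst_mac)
        if out_port is None:
            return "flood to all member ports"     # unknown unicast
        return f"forward to port {out_port}"

fabric = SharedFabric()
fabric.forward("aa:aa:aa:00:00:01", "ff:ff:ff:ff:ff:ff", in_port=1)  # member A
print(fabric.forward("aa:aa:aa:00:00:02", "aa:aa:aa:00:00:01", in_port=2))
# -> forward to port 1: member B reaches member A over the common fabric
```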
Connectivity Hub Located in Warsaw city center: LIM building (Marriott)
PLIX operates the only neutral data centers in the LIM building: PLIX DC1 on the first floor, PLIX DC2 on the third floor, PLIX DC3 on level -1. It provides unique connectivity via its own fibers to every PoP in the building: floors -1, -2, +1, +3, +13, +42.
Connectivity Hub: offers connectivity to every Tier-1 carrier present in Poland; is home to the Polish Internet Exchange (PLIX) and colocates many of its members (equipment of >150 members); is located at the crossroads of Polish IP and capacity networks.
Most common practice
Connectivity Hub
PLIX facts: almost 300 Gbps of traffic; 220 members; 40% of Polish traffic; the biggest IXP in Poland, 9th in the world. More at plix.pl.
IXP PLIX traffic:
PLNOG3, September 2009: 70 Gbit/s
PLNOG4, March 2010: 82 Gbit/s
PLNOG5, September 2010: 100 Gbit/s
PLNOG6, March 2011: 120 Gbit/s
PLNOG7, September 2011: 150 Gbit/s
PLNOG8, March 2012: 200 Gbit/s
PLNOG9, September 2012: 260 Gbit/s
PLNOG10, March 2013: ~300 Gbit/s
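A quick back-of-the-envelope on those numbers (my own arithmetic, not from the slides): 70 to ~300 Gbit/s over the 3.5 years between PLNOG3 and PLNOG10 works out to roughly 50% growth per year.

```python
# Rough compound annual growth between PLNOG3 (Sep 2009, 70 Gbit/s)
# and PLNOG10 (Mar 2013, ~300 Gbit/s) -- my own arithmetic.
years = 3.5
growth = (300 / 70) ** (1 / years) - 1
print(f"~{growth:.0%} per year")  # ~52% per year
```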
IXP
220 members
Data Center PLIX started to build a data center for customers' needs in September 2009. We started with 150 sqm.
PLIX DC1: located on the first floor; provides full cabinets and shared cabinets for PLIX members; AC 3 kVA per rack, 2x16 A power feeds; DC power available on demand; HVAC: 18-24°C, 45-65% humidity.
PLIX DC2: full cabinets; 3 kVA per rack, two independent AC power feeds (230 V 16 A; DC on demand); HVAC: 18-24°C, 45-65% humidity; 2N UPS backup + power generator; raised floor; SLA up to 99.999%.
PLIX DC3: 300 sqm, 1.3 MW
- 2N UPS backup + diesels
- N+1 cooling
- FM200 gas fire suppression + VESDA
- 3 power gen sets
PLIX DC3: located on level -1; provides full cabinets and private cages; AC 3/5 kVA per rack, 2x16 A / 2x32 A power feeds; DC power available on demand; HVAC: 18-24°C, 45-65% humidity; 2N UPS backup + power generator; antistatic raised floor (600 mm high).
Power for the DCs: 16 transformers in the building (8 on feed A, the other 8 on feed B; switchable feeds); 2x15 kV feed lines from two separate power districts; 12 MW of power available to the building (5 MW used at the moment).
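A quick sanity check of the headroom those figures imply (my own arithmetic from the nominal numbers above, not an official PLIX capacity plan):

```python
# Back-of-the-envelope power headroom -- my arithmetic from the
# slide's nominal figures, ignoring cooling and conversion overhead.
building_mw, used_mw = 12.0, 5.0
print(f"building headroom: {building_mw - used_mw:.1f} MW")   # 7.0 MW

# DC3 has 1.3 MW; at 3-5 kVA per rack (taking kVA ~ kW), that bounds
# the rack count before subtracting cooling and UPS losses:
for kva in (3, 5):
    print(f"at {kva} kVA/rack: up to ~{1300 // kva} racks")   # 433 / 260
```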
PLIX DC3 - before build-out
PLIX DC3 - after
PLIX DC3
PLIX DC3
UPS backup:
DC1: two UPSes, 80 kVA each
DC2: two UPSes, 160 kVA each
DC3: two UPSes (up to four), 160 kVA each
PLIX DC3 power generators: 2x650 kVA generators on level -1.
PLIX DC3
Reminder: I was about to talk about the challenges of going from 0 to 300 Gbps of traffic at PLIX, so let's do it...
0.. 300 Yes, it's the PLIX car (500 HP).
0.. 300 This car goes from 0 to 300 km/h in less than 30 seconds. Unfortunately for PLIX, 0-300 Gbps took us 7 years.
0.. 100 Some history of this process...
0.. 100 Day 1 (March 2006): Cisco 2970G + Cisco 3750G-12S
0.. 100 Star Topology: two big switches and customers connected to them.
0.. 100 24x 1GE-TX was not enough. We needed to find a better solution, with big enough uplinks.
0.. 100 Day 2 (2007): Ciscos + Edge-Core (Taiwan) with 2x10GE and plenty of copper 1GE
0.. 100
0.. 100
0.. 100 Problems:
- unstable OS (rebooting without any reason about once a month)
- no MAC learning limits (see the sketch below)
- no loop protection
Solution: change vendor
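Why MAC learning limits matter at an IXP: a member that accidentally plugs in a switch instead of a router can flood the fabric with thousands of MACs. A minimal sketch of the missing per-port limit (hypothetical, not any vendor's implementation):

```python
# Minimal per-port MAC learning limit -- a hypothetical sketch of the
# feature the Edge-Core lacked, not any vendor's actual code.
MAX_MACS_PER_PORT = 1  # an IXP member port should show one router MAC
port_macs = {}         # port -> set of learned MACs

def learn(port, mac):
    macs = port_macs.setdefault(port, set())
    if mac in macs:
        return "ok"
    if len(macs) >= MAX_MACS_PER_PORT:
        return "violation: drop frame (or shut the port)"
    macs.add(mac)
    return "learned"

print(learn(1, "aa:aa:aa:aa:aa:01"))  # learned -- the member's router
print(learn(1, "aa:aa:aa:aa:aa:02"))  # violation -- looks like a leaked LAN
```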
0.. 100 Day 3 (2008): Force10 S25P + S50N in stack
0.. 100 We've ordered a few
0.. 100 Still a star topology, but with more switches.
0.. 100 Problems:
- no MAC learning limits (static MACs per port only)
- no BPDU filtering
- no CDP/EDP filtering (see the filtering sketch below)
- loop protection not working
Solution: upgrade from SFTOS to FTOS
Result: nothing changed, plus new problems with the stack
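What BPDU/CDP filtering is for: member control-plane frames (spanning tree BPDUs, CDP and similar) must not leak into the shared fabric, and they are easy to recognize by their well-known destination MACs. A toy ingress filter (illustrative only, not the switch's data plane):

```python
# Toy BPDU/CDP ingress filter for IXP member ports -- illustrative,
# not real switch data-plane code. EDP and other vendor protocols
# would be matched the same way by their own destination MACs.
FILTERED_DST_MACS = {
    "01:80:c2:00:00:00",  # IEEE spanning tree BPDUs
    "01:00:0c:cc:cc:cc",  # Cisco CDP (also VTP/DTP/PAgP/UDLD)
}

def ingress(dst_mac: str) -> str:
    if dst_mac.lower() in FILTERED_DST_MACS:
        return "drop"     # a member's control plane stays on its side
    return "forward"

print(ingress("01:80:C2:00:00:00"))  # drop (a member's STP BPDU)
print(ingress("ff:ff:ff:ff:ff:ff"))  # forward (ordinary broadcast, e.g. ARP)
```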
0.. 100 Decision: let's move to one box with all the features inside and a high-capacity backplane, to avoid uplink/stack problems.
100.. 200 One BIG BOX - the next step of the evolution
100.. 200 MPLS tests Corrigent Networks
100.. 200 MPLS tests Cisco 7606S
100.. 200 Day 4 (2009): instead of the big Edge-Core we decided (after tests) on the Force10 C300.
100.. 200
100.. 200 Looks nice, we are ready to migrate
100.. 200 Migration is not always an easy task (real photo)
100.. 200 Migration is not always an easy task (real photo)
100.. 200 After migration
100.. 200 After migration
100.. 200 The C300 was about to be our solution, but... We wanted to switch most of the functionality on at the same time, and the C300 has shared memory, not enough of it to run most of the possible features simultaneously.
100.. 200 What is more, there were the same problems we knew from the S-series:
- no MAC learning limits
- BPDU filtering not working
- CDP/EDP filtering not working
- duplication of packets between ports on different linecards
- SUP restarting (without graceful failover to the second one)
100.. 200 So after several outages (linecards rebooting, memory leaks, instability of the whole chassis) we decided to find another solution.
100.. 200 ... and we went velvet :-) We started with the small ones: Extreme x450 24xSFP and Extreme x450 48xTX.
100.. 200 After the tests it turned out that all the features we wanted to use were working; what is more, these became the most stable switches we have had in all these years.
100.. 200 We've found the Extreme x650 ideal for the network core (many 10GE ports, and stacking for two switches).
100.. 200
100.. 200
100.. 200 Unfortunately, two Summit x650s in a stack would reboot the stack from time to time when sFlow was switched on.
100.. 200 In the meantime Extreme announced the x670, which has 48x10GE in one chassis and 4x40GE modules with 3-meter copper cables (enough to connect two switches). So we decided to remove the stack and move to direct links and MLAG (multi-chassis LAG), as sketched below.
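For context, a LAG (and MLAG, its two-chassis variant) spreads traffic over its member links by hashing packet headers, so every packet of a given flow takes the same link and stays in order. A toy illustration of that idea (not Extreme's actual hash):

```python
# Toy flow-to-link hashing as done inside a LAG/MLAG -- illustrative
# only, not Extreme's actual hash function.
import zlib

LAG_LINKS = 4  # e.g. the 4x40GE copper links between two x670s

def pick_link(src_ip, dst_ip, src_port, dst_port):
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    return zlib.crc32(key) % LAG_LINKS  # same flow -> same link, in order

print(pick_link("10.0.0.1", "10.0.0.2", 40000, 80))  # stable per flow
print(pick_link("10.0.0.3", "10.0.0.2", 40001, 80))  # may use another link
```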
200.. 300
200.. 300 But... when one of the uplinks flapped, the switch started to lose (switch to null) broadcast packets on randomly selected ports. So we decided to move from MLAG to EAPS, and we are still using it.
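EAPS (Ethernet Automatic Protection Switching, RFC 3619) runs the switches as a ring: a master node keeps its secondary ring port blocked while the ring is healthy, so no L2 loop can form, and unblocks it the moment the ring breaks. A toy state sketch of the master's logic (not ExtremeXOS code):

```python
# Toy EAPS master-node logic (RFC 3619) -- a state sketch, not
# ExtremeXOS code. Blocking the secondary port on a healthy ring is
# what keeps the physical ring from becoming an L2 loop.

class EapsMaster:
    def __init__(self):
        self.secondary_blocked = True   # normal state: loop-free ring

    def on_ring_failure(self):          # health PDUs stop coming back
        self.secondary_blocked = False  # open the backup path
        print("ring failed: unblock secondary port, flush FDBs")

    def on_ring_restored(self):
        self.secondary_blocked = True   # re-block before a loop forms
        print("ring restored: block secondary port, flush FDBs")

m = EapsMaster()
m.on_ring_failure()
m.on_ring_restored()
```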
200.. 300
200.. 300 Fiber connectivity to PLIX IXP switches
200.. 300 Copper connectivity to PLIX IXP switches
200.. 300 Extreme Summits x450 in PLIX
200.. 300 Extreme Summits x650/x670 in PLIX
300..? What next? We have >150 10GE ports and customers asking for 100GE LR ports. We are waiting for 100GE to become available in the Extreme BlackDiamond X8.
300..?
Strategy:
- the IXP makes money on ports
- the Data Center makes money on cabinets
- the company makes money on CUSTOMERS
Value-added services: a response to customers' needs
NOC services
Monitoring. Main rule: monitor everything. But it takes a lot of resources (network, databases, servers, programmers, engineers, vendors, etc.) and costs a lot. We've decided to take these costs on ourselves.
Monitoring: we do it with our own software...
Monitoring: ... and with some free tools like RRDtool.
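As a taste of the RRDtool workflow, a minimal round-robin database for one port's traffic counters (assuming the rrdtool CLI is installed; the file name, DS names and intervals are made up for the example, not PLIX's actual setup):

```python
# Minimal RRDtool database for one port's octet counters, driven via
# the CLI. Assumes `rrdtool` is installed; names and intervals here
# are made up for the example.
import subprocess, time

subprocess.run([
    "rrdtool", "create", "port1.rrd", "--step", "300",
    "DS:inoctets:COUNTER:600:0:U",   # SNMP ifInOctets-style counter
    "DS:outoctets:COUNTER:600:0:U",
    "RRA:AVERAGE:0.5:1:2016",        # one week of 5-minute averages
], check=True)

# Feed one sample (normally polled from the switch via SNMP):
now = int(time.time())
subprocess.run(["rrdtool", "update", "port1.rrd", f"{now}:1234567:7654321"],
               check=True)
```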
Work 24/7: our NOC is available 24/7, ON-SITE, and speaks English (we don't have anybody speaking Hungarian yet, sorry).
Engineers, not a call center: our engineer will help you with technical problems, not just give you a ticket number. Even at 3 am.
Support from vendors: vendors give us the means to fix problems quickly and to test the newest solutions.
Content providers: within PLIX there are racks of several content providers as well as TV broadcasters: Google, AKAMAI, CDNetworks, DynDNS, IPLA, TVN Player, HBO, TVP (Polish Television), TVN, KinoPolska, RodinTV, TVS.
PLIX - part of the community: PLNOG, the biggest networking event in Poland (650 attendees); inet, an association of ISPs in Poland.
PLIX - part of the community: Euro-IX, IDC-G.
Plans for the future: if you are interested in joining PLIX, feel free to contact us anytime at sales@plix.pl.
Thank You!