Datacenter Network Infrastructure


1 Datacenter Network Infrastructure Petr Grygárek

2 Traditional DC Network Modules and their Functions
- Core, WAN edge, Internet edge
- PODs: core edge, service layer, aggregation, access
- Single-site or multi-site implementation

3 Point of Delivery (POD)
- Repeatable network infrastructure building block: good scalability and predictable network growth
- Customers/applications are hosted entirely in a single POD
  - both hosted servers and system services, like DNS, SMTP GW, NTP etc.
- Independent of other PODs
- Hosts applications/customers of a similar type
  - same SLAs, service windows, security regulations, QoS model etc.
- PODs with various internal infrastructures may be offered to customers

4 Scalable Physical Topologies
- Redundancy is a must
- Scalability allows predictable network growth
- Core, (aggregation), access
  - mostly L2 VLAN-based access, L2/L3 aggregation + service modules, L3 core edge
  - also a service layer if needed; sometimes also another virtualized access layer
EXAMPLE POD TOPOLOGY

5 Example: Simple multitenant DC physical topology

6 Shared Infrastructure Implementation

7 Shared Infrastructure Requirements
- Complete customer separation: both traffic and network services (FW, LB, ...)
- Potentially overlapping IP address space
- Access to central services: DNS/NTP/proxy...
- Internet access
- NOC

8 Common Shared Infrastructure Implementation Options
- Separate customer IP address spaces + filtering (ACLs)
- Overlapping customer IP address spaces + really wild filtering (ACLs)
- Private VLANs (L2)
- VRF Lite + VLANs + separate routing process instances (see the sketch below)
- MPLS/VPN import/export
- Service instances (a.k.a. contexts, partitions, ...) are needed for FWs, LBs and IPSs, unless we can live with a shared instance (i.e. non-overlapping IP address spaces)
  - e.g. the DC Internet boundary FW could be shared, unless tenants require different ISPs
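
A minimal VRF Lite sketch in Cisco IOS style (tenant names, VLAN numbers and addresses are hypothetical): each tenant gets its own VRF bound to a VLAN subinterface, so even identical IP subnets do not collide:

    ip vrf CUST-A
     rd 65000:101
    !
    ip vrf CUST-B
     rd 65000:102
    !
    interface GigabitEthernet0/1.101
     encapsulation dot1Q 101
     ip vrf forwarding CUST-A
     ip address 10.0.0.1 255.255.255.0
    !
    interface GigabitEthernet0/1.102
     encapsulation dot1Q 102
     ip vrf forwarding CUST-B
     ! same subnet as CUST-A, but in a different VRF: no conflict
     ip address 10.0.0.1 255.255.255.0

Each VRF keeps its own routing table; a separate routing process (or per-VRF address family) instance completes the separation.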

9 Physical versus Logical Topologies
- Multiple and various logical topologies may be implemented on a shared physical infrastructure
  - even very complicated and messy ones ;-)
- L1, L2 and L3 topologies can look completely different
  - always check where each VLAN is extended and where the VLAN (L3) interfaces are located
- A logical topology may be implemented using both non-redundant and redundant physical topologies

10 Logical-to-Physical Topology Mapping
Examples:
- Router-on-a-stick (see the sketch below)
- Service module (multiple contexts) attachment using VLANs
- Topology with VRFs and trunks carrying VLANs
Keep role separation of network devices! Extension of access VLANs to the core is technically possible, but highly inadvisable.
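
A router-on-a-stick sketch (Cisco IOS style; VLAN numbers and addresses are hypothetical): a single physical trunk carries several VLANs, each routed on its own subinterface:

    interface GigabitEthernet0/0
     no shutdown
    !
    interface GigabitEthernet0/0.10
     encapsulation dot1Q 10
     ip address 192.168.10.1 255.255.255.0
    !
    interface GigabitEthernet0/0.20
     encapsulation dot1Q 20
     ip address 192.168.20.1 255.255.255.0

On the switch side the corresponding port is simply a trunk (switchport mode trunk) allowing VLANs 10 and 20.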

11 Example logical topology

12 Logical topology implementation (1): define VLAN numbers (see the sketch below)
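
A hypothetical numbering sketch (IOS style; the actual numbers would come from the figure on the original slide): VLANs are defined once and then allowed on the relevant trunks:

    vlan 101
     name CUST-A-FRONTEND
    vlan 102
     name CUST-A-BACKEND
    !
    interface TenGigabitEthernet1/1
     description uplink to aggregation
     switchport mode trunk
     switchport trunk allowed vlan 101,102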

13 Logical topology implementation (2): mapping to physical topology, non-redundant setup

14 Making logical topology redundant

15 Redundant logical topology implementation: mapping to physical topology, redundant setup (may use VRRP; see the sketch below)
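
A minimal VRRP sketch (Cisco IOS style; group number and addresses are hypothetical): both aggregation routers offer the same virtual gateway address, so the logical topology survives the loss of either box:

    ! Router A - preferred master for VLAN 101
    interface Vlan101
     ip address 192.168.1.2 255.255.255.0
     vrrp 101 ip 192.168.1.1
     vrrp 101 priority 110
    !
    ! Router B - backup
    interface Vlan101
     ip address 192.168.1.3 255.255.255.0
     vrrp 101 ip 192.168.1.1

Servers use 192.168.1.1 as their default gateway regardless of which router currently holds the master role.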

16 Logical Topology Considerations
- Physical service devices (LB, FW) are often connected as a device-on-a-stick
  - one, two (or more) physical trunks
- Service modules are mostly attached via (internal) VLANs
  - even very complicated and messy ones ;-)
- Keep scarce resources in mind: VRF import/export may be more beneficial than VLAN interconnects
- A common template for logical customer topologies should be defined
  - probably multiple templates for various levels of hosted services
  - a common approach makes implementation of new customers and troubleshooting much easier and more predictable

17 Typical DC Logical Topologies
- Unified zoning and allowed communications between zones
- Multiple FWs may be introduced to protect individual zones
- Clear FW roles: each one protects access to one or multiple zones
EXAMPLE ZONE STANDARD
- Implementation using VLANs
- Implementation using MPLS import/export
  - keep in mind non-transitive prefix propagation
- Outside-FW VRF design principle
- Advantages of a VRF between the FW and the server segments (sketched below)
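
One way to read the last point, sketched in IOS style (all names and addresses are hypothetical): server segments sit in a dedicated VRF whose only exit is a default route toward the firewall, so any inter-zone traffic is forced through the FW:

    ip vrf SERVERS
     rd 65000:200
    !
    interface Vlan200
     ip vrf forwarding SERVERS
     ip address 10.20.0.1 255.255.255.0
    !
    ! the FW inside interface (10.20.0.254) is the only way out of the VRF
    ip route vrf SERVERS 0.0.0.0 0.0.0.0 10.20.0.254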

18 Shared Infrastructure Pros & Cons
Advantages:
- Cost-efficient
- Fast deployment of new (logical) networks
- A smaller number of network devices saves licenses and management system costs
Disadvantages:
- Fate sharing: a failure or misconfiguration affects many customers
- More complicated coordination of changes, planned outages etc.
- Introduction of new features affects everyone (e.g. QoS implementation, CoPP etc.)
- More complicated and lengthy configurations

19 Datacenter QoS
- Server QoS marking is mostly trusted (see the sketch below)
- Use switches with sufficient buffers to connect servers
- Standardized and predictable QoS implementation, class-based PAUSE
  - part of the Datacenter Ethernet standards
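
A minimal sketch of trusting server markings on a Catalyst-style access port (IOS; the interface name is hypothetical):

    mls qos
    !
    interface GigabitEthernet1/0/10
     description server-facing access port
     switchport mode access
     mls qos trust dscp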

20 Datacenter Ethernet
- Standards of the IEEE Data Center Bridging (DCB) group: Ethernet extensions for the DC environment
- QoS-based standards:
  - Priority-based Flow Control (per-priority PAUSE), 802.1Qbb (see the sketch below)
  - Enhanced Transmission Selection (ETS) scheduler, 802.1Qaz
    - consistent 2-level DWRR implementation with priority queue handling
  - DCB neighbor capability advertisements (LLDP extensions): DCBX
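
An NX-OS-style sketch of per-priority PAUSE (PFC), assuming a Nexus-class DCB switch (class names and the CoS value are hypothetical): traffic marked CoS 3 is mapped to a no-drop class, which the switch then advertises to its neighbors via DCBX:

    class-map type qos match-all STORAGE
      match cos 3
    policy-map type qos CLASSIFY-IN
      class STORAGE
        set qos-group 3
    !
    class-map type network-qos STORAGE-NQ
      match qos-group 3
    policy-map type network-qos PFC-POLICY
      class type network-qos STORAGE-NQ
        pause no-drop
    !
    system qos
      service-policy type qos input CLASSIFY-IN
      service-policy type network-qos PFC-POLICY

ETS bandwidth shares would be configured analogously with a policy-map of type queuing applied as a system output policy.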