What Goes Into a Data Center? Albert Greenberg, David A. Maltz
1 What Goes Into a Data Center? Albert Greenberg, David A. Maltz
2 What's a Cloud Service Data Center? [Figure by Advanced Data Centers] Electrical power and economies of scale determine total data center size: 50,000-200,000 servers today. Servers are divided up among hundreds of different services. Scale-out is paramount: some services have 10s of servers, some have 10s of 1000s.
3 Tutorial Goal: A data center is a factory that transforms and stores bits. We'll seek to understand them by: surveying what runs inside data centers; looking at what demands these applications place on the physical plant; examining architectures for the factory infrastructure; surveying components for building that infrastructure.
4 Agenda Part 1: Applications - how are they structured, provisioned and managed? Traffic and Load Patterns - what is the load on the infrastructure that results from the applications?
5 Agenda Part 1: Applications - how are they structured, provisioned and managed? Traffic and Load Patterns - what is the load on the infrastructure that results from the applications?
6 Cloud Services? Software as a Service (SaaS): Search, Email, Social Networking, Data Mining, Utility Computing. The Cloud is the infrastructure: data center hardware and software.
7 Hoping You Take Away: What cloud services look like; core challenges; a better way?
8 Cloud Components: [Figure: Internet users reach CDNs/ECNs and data centers; inside, D-port 10G switches, D/2 "bouncer" switches for VLB, and 20-port Top-of-Rack switches supporting (D^2/4) * 20 servers.]
9 Cloud Service Components: [Figure: Internet -> Front Ends -> Back Ends] Front Ends may be in small satellite data centers or Content Distribution Networks; Back Ends may be in large data centers hosting service-specific and shared resources (FEs, BE shared resources, BE service-specific resources).
10 Challenges: High scale - 10s to 100s of thousands of servers. Geo-distribution - 10s to 100s of DCs, CDN or satellite data center sites. Stringent high reliability and perf requirements - 99.9th percentile SLAs. Cost per transaction / cost per data unit - inexpensive components, stripped of internal redundancy. Complexity - a plethora of components (load balancers, DNS, BGP, operating system, middleware, servers, racks) and a plethora of SW and HW failures.
11 Hoping You Take Away: What cloud services look like - Example: Contact List Service. A better way? - Example: Microsoft's Autopilot; Example: Google File System.
12 Hoping You Take Away: What cloud services look like - Example: Contact List Service. A better way? - Example: Microsoft's Autopilot; Example: Google File System.
13 Contact List Service. Why? Email, IM, Sync, Gaming, Music, etc. What? Contacts, social network info, auth info, invitations, notifications. How big? On the order of 1B contact lists, 100K transactions per sec (TPS). Reliability and perf? At the heart of popular interactive services.
14 Observations: Transactions Per Second (TPS) correlates with revenue and with infrastructure spend; 100K TPS is not too large. Cannot afford to throw memory at the problem; have to go to disk, which is less than ideal - takes an eternity; failure-prone, slow, and complicated; hard to model, non-deterministic with congestion effects. Databases offer little help - not relational data; expensive, and hard to maintain referential integrity.
15 Architecture: Front End Tier - servers fronted by load balancers, partitioned by access partner and access method. Fast Lookup Tier - partitioned by User ID, mapping each user to storage. Storage Tier - business logic + Blob Store (Blob = Binary Large OBject). The Blob Store is a shared component; the identity and authentication service is another shared component.
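To make the lookup tier concrete, here is a minimal sketch of how a fast lookup tier might map a user ID to a storage partition. The hash choice and partition count are assumptions for illustration, not the service's actual scheme (a real deployment would keep an explicit, rebalanceable map).

```python
import hashlib

# Hypothetical lookup tier: maps a user ID to one of N storage partitions.
NUM_PARTITIONS = 1024  # assumed value for illustration

def storage_partition(user_id: str) -> int:
    """Return the storage partition responsible for this user's contact list."""
    digest = hashlib.md5(user_id.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % NUM_PARTITIONS

# Example: route a request to the back-end partition for a user.
print(f"user123 maps to storage partition {storage_partition('user123')}")
```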
16 Solution Characteristics: Significant work on deloading storage - affinitizing clusters of FEs, BEs, and Stores; client-side caching, delta synchronization; DB optimization; in-memory compressed data structures. Significant work on systems resilient to storage failures. Significant work on operations automation. A mix of enterprise and purpose-built software.
17 Hoping You Take Away: What cloud services look like - Example: Contact List Service. Is there a better way? - Example: Microsoft's Autopilot; Example: Google File System.
18 Cloud Operating System? There is no cloud operating system handling discovery, deployment, repair, storage, resource management, etc.; all service developers are grappling with switches, routers, NICs, load balancing, network protocols, databases, and disks. Creating one is a huge challenge. Will we have 4? Werner Vogels' remark regarding SaaS dev - the fraction of time spent on getting the infrastructure right versus creating new features: 70% of time, energy, and dollars goes to undifferentiated heavy lifting.
19 Building it Better? Two General, Useful Building Blocks: Autopilot - Microsoft's Recovery-Oriented Computing system, supporting Live Search (Bing). GFS - Google's distributed file system.
20 Building it Better? Two General, Useful Building Blocks: Autopilot - Microsoft's Recovery-Oriented Computing system, supporting Live Search (Bing). GFS - Google's distributed file system.
21 Autopilot Goals: Service developer productivity - get management stuff out of devs' hair. Low cost - commodity infrastructure; 8x5 ops (not 24x7) with better reliability. High performance and reliability - at massive scale, with a plethora of SW and HW failures. The 3 motivators of most infrastructure projects.
22 How? An automated data center management platform covering provisioning, deployment, fault monitoring, and recovery. Fundamental, and fundamentally different from how enterprises and networks are managed today.
23 Autopilot Approach: Fault-tolerance philosophy - Recovery-Oriented Computing (ROC); crash-only software methodology. Service developer expectation - it is OK to crash any component anytime (for example, by Autopilot itself), without warning. Not: autonomic computing, statistical machine learning, or Byzantine fault tolerance.
24 Fault-tolerant services: Applications written to tolerate failure with no user impact, no data loss, and no human interaction; ability to continue with some proportion of servers down or misbehaving; capacity to run on low-cost, commodity infrastructure. Autopilot incents good habits for distributed-systems dev.
25 Autopilot Software Architecture: Device Mgr - small, strongly consistent shared state (the truth). Satellite servers - Watchdog, Provisioning, Deployment, Repair - use replicated, weakly consistent state. Monitors alignment of data center intent (roles and behaviors) with data center reality.
26 Automation - Health Monitoring, Fault Recovery: Service roles and manifests are enforced; where misalignment is detected, Autopilot fixes it - makes it so! Repair escalation: Ignore, Reboot, Reimage, RMA; replicate data and migrate sessions before a reboot. Ignore? RMA (Return to Manufacturer)? Data centers make us rethink supply-chain mgmt.
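A rough sketch of the Ignore / Reboot / Reimage / RMA escalation idea; the thresholds and failure-count heuristic below are invented for illustration and are not Autopilot's actual policy.

```python
from enum import Enum

class Repair(Enum):
    IGNORE = 1
    REBOOT = 2
    REIMAGE = 3
    RMA = 4

def next_repair(recent_failures: int) -> Repair:
    """Escalate the repair action as a machine keeps failing its watchdogs."""
    if recent_failures <= 1:
        return Repair.IGNORE      # transient, spurious failures are common
    if recent_failures <= 3:
        return Repair.REBOOT      # replicate data / migrate sessions first
    if recent_failures <= 5:
        return Repair.REIMAGE
    return Repair.RMA             # hand the box back to the supply chain

for failures in (1, 2, 4, 8):
    print(failures, next_repair(failures).name)
```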
27 Autopilot Lessons Learned: The ask of the service developer is large, but the return is huge (more profound than the automation): reliability and high perf at scale; refactoring applications for Autopilot increased overall reliability. Guard rails are needed: over-sensitive watchdogs or busted logic can trigger too much repair (false positives); failing limits (the repair should not be worse than the disease). Autopilot is slow-twitch: detection and repair take 10s of minutes; transient, spurious failures are more likely than real ones; there are places where this doesn't work.
28 Building it Better? Two General, Useful Building Blocks: Autopilot - Microsoft's Recovery-Oriented Computing system, supporting Live Search (Bing). GFS - Google's distributed file system.
29 Distributed File Systems: Cosmos (MSFT), GFS* (GOOG), Hadoop (Apache) - 100s of clusters, PBs of data on disk. Goals (again): Service developer productivity - get storage management out of their hair. Very low cost - SATA as opposed to SCSI drives. High performance and reliability - at massive scale, with ongoing HW failures (SW relatively stable); every disk is somewhere in the midst of corrupting its data and failing. *SOSP 2003
30 But First, What About SANs (Storage Area Networks)? The Good: virtualized storage; an API of reads and writes of small fixed-size blocks (e.g., what SQL Server expects). The Bad: specialized hardware - Fibre Channel, InfiniBand, zero-loss networking. The Ugly: head nodes - special machines blessed to access the SAN. Good abstraction, expensive execution: popular in the enterprise, unpopular in the cloud.
31 GFS: Designed to meet the common-case workload - very large file (multi-GB) processing, e.g. search logs, web documents, click streams; reads of large contiguous regions; writes that append to rather than overwrite data. Familiar API: create, delete, open, close, read, and write files.
32 GFS Architecture: One giant file system - a network share (append only). How to write good apps on GFS: move work to the storage (access local replicas); great sequential IO on SATA drives; MapReduce.
33 Observations: Large chunk size - reduced frequency of client:master and client:chunkserver interaction; increased IO efficiency. Decouple the flow of data from the flow of control - a single master for metadata, reliably persisting a log, makes centralized system-wide decisions (optimized chunk placement); a single master (chunkserver) for each chunk serializes updates to that chunk. Simplicity - heavy lifting is done by the chunkservers; appending is more efficient and resilient to failures than overwriting; just a file system that works, optimized for the common case. GFS abstracts away data reliability and data distribution.
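A minimal sketch of the client-side chunk addressing this decoupling implies, assuming the 64 MB chunk size from the GFS paper; the in-memory "master" table is a stand-in for the real metadata server, just to keep the example runnable.

```python
CHUNK_SIZE = 64 * 1024 * 1024  # 64 MB, as in the GFS paper

def chunk_index(byte_offset: int) -> int:
    """Translate a file byte offset into a fixed-size chunk index."""
    return byte_offset // CHUNK_SIZE

# Hypothetical master metadata: (path, chunk index) -> (chunk handle, replica chunkservers)
master_table = {
    ("/logs/search.log", 0): ("handle-000", ["cs12", "cs47", "cs88"]),
    ("/logs/search.log", 1): ("handle-001", ["cs03", "cs47", "cs90"]),
}

def locate(path: str, offset: int):
    """One small metadata lookup; the bulk data then flows directly
    between the client and one of the chunkserver replicas."""
    idx = chunk_index(offset)
    handle, replicas = master_table[(path, idx)]
    return handle, replicas, offset % CHUNK_SIZE

print(locate("/logs/search.log", 70 * 1024 * 1024))  # lands in chunk 1
```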
34 Lessons Learned: Within the trust domain, devs still step on each other - solutions: ACLs, copy-on-write (can roll back), encryption. Silent data corruption - GFS-level checksums. Hard to drive out weird corner-case bugs (OS drivers, firmware) - you will spend significant time on these. The big win is not just the technology but the habits that come with it - optimizing for serial IO (MapReduce, BigTable).
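A small sketch of file-system-level checksumming against silent data corruption, loosely in the spirit of GFS's per-64 KB-block checksums; the storage layout here is invented for illustration.

```python
import zlib

BLOCK = 64 * 1024  # checksum granularity (assumed, mirroring GFS's 64 KB blocks)

def write_with_checksums(data: bytes):
    """Store each 64 KB block alongside its CRC32."""
    return [(data[i:i + BLOCK], zlib.crc32(data[i:i + BLOCK]))
            for i in range(0, len(data), BLOCK)]

def read_with_verify(blocks):
    """Verify every block before returning it; a corrupt block is reported
    so a good replica can be fetched instead."""
    out = bytearray()
    for i, (payload, stored_crc) in enumerate(blocks):
        if zlib.crc32(payload) != stored_crc:
            raise IOError(f"silent corruption detected in block {i}")
        out += payload
    return bytes(out)

blocks = write_with_checksums(b"x" * (3 * BLOCK))
assert read_with_verify(blocks) == b"x" * (3 * BLOCK)
```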
35 Perspective: What cloud services look like - user-to-DC traffic management (front ends, DoS/DDoS protection, load balancing, etc.); back-end processing; storage. Some of the core challenges - reliability; performance; cost per transaction / cost per data unit. Is there a better way? High scale is the way out: lights-out operations + fault-tolerant software; amortized storage and redundancy; loosely coupled services; aggressive timeouts and optional elements.
36 Agenda Part 1: Applications - how are they structured, provisioned and managed? Traffic and Load Patterns - what is the load on the infrastructure that results from the applications?
37 Measuring Traffic in Today's Data Centers: 80% of the packets stay inside the data center - data mining, index computations, back end to front end - and the trend is towards even more internal communication. Detailed measurement study of a data mining cluster: 1,500 servers, 79 ToRs; logged the 5-tuple and size of all socket-level R/W ops; aggregated into flows (all activity separated by < 60 s); aggregated into traffic matrices every 100 s (Src, Dst, Bytes of data exchanged).
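A rough sketch of the aggregation described above, assuming a simple (timestamp, src, dst, bytes) event format: socket-level operations on the same pair are grouped into flows separated by less than 60 s and binned into 100 s traffic matrices.

```python
from collections import defaultdict

FLOW_GAP_S, TM_BIN_S = 60, 100

def to_traffic_matrices(events):
    """events: iterable of (t_seconds, src, dst, nbytes), sorted by time."""
    matrices = defaultdict(lambda: defaultdict(int))  # bin -> (src, dst) -> bytes
    last_seen = {}   # (src, dst) -> last event time, used to count flows
    flows = 0
    for t, src, dst, nbytes in events:
        if t - last_seen.get((src, dst), -FLOW_GAP_S - 1) > FLOW_GAP_S:
            flows += 1                      # a gap > 60 s starts a new flow
        last_seen[(src, dst)] = t
        matrices[int(t // TM_BIN_S)][(src, dst)] += nbytes
    return flows, matrices

events = [(0, "a", "b", 100), (30, "a", "b", 200), (150, "a", "b", 50)]
print(to_traffic_matrices(events))  # 2 flows; bins 0 and 1
```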
38 Flow Characteristics: DC traffic != Internet traffic. Most of the flows: various mice. Most of the bytes: within 100MB flows. Median of 10 concurrent flows per server.
39 Traffic Matrix Volatility: Collapse similar traffic matrices (over 100 sec) into clusters - need clusters to cover a day's traffic. The traffic pattern changes nearly constantly: run length is 100s up to the 80th percentile; the 99th percentile is 800s.
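An illustrative sketch of collapsing traffic matrices into clusters; the distance metric and similarity threshold are assumptions, not the study's exact method.

```python
import math

def cluster_matrices(matrices, threshold):
    """matrices: list of dicts mapping (src, dst) -> bytes; greedy clustering
    against one representative matrix per cluster."""
    reps, labels = [], []
    for tm in matrices:
        best, best_d = None, math.inf
        for i, rep in enumerate(reps):
            keys = set(tm) | set(rep)
            d = math.sqrt(sum((tm.get(k, 0) - rep.get(k, 0)) ** 2 for k in keys))
            if d < best_d:
                best, best_d = i, d
        if best is not None and best_d < threshold:
            labels.append(best)             # similar enough: join that cluster
        else:
            reps.append(tm)                 # otherwise start a new cluster
            labels.append(len(reps) - 1)
    return labels, reps

labels, _ = cluster_matrices([{("a", "b"): 100}, {("a", "b"): 110}, {("a", "b"): 900}], 50)
print(labels)  # [0, 0, 1]
```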
40 Today, Computation Constrained by Network*: [Figure: ln(bytes/10 sec) between servers in an operational cluster] Great effort is required to place communicating servers under the same ToR; most traffic lies on the diagonal; the stripes show there is a need for inter-ToR communication. (*Kandula, Sengupta, Greenberg, Patel)
41 Latency: Propagation delay in the data center is essentially 0 - light goes a foot in a nanosecond; 1000 ft = 1 usec. End-to-end latency comes from: switching latency - 10G to 10G: ~2.5 usec (store & forward), 2 usec (cut-through); queueing latency - depends on the size of queues and network load; typical times across a quiet data center: 10-20 usec; worst-case measurement (from our testbed, not a real DC, with all-to-all traffic pounding and link utilization > 86%): 2-8 ms. Comparison: the time across a typical host network stack is 10 usec, and application developer SLAs are at > 1 ms granularity.
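A back-of-the-envelope calculator using the numbers above; the 5-hop, 500-foot path is an assumed example, not a measured one.

```python
NS_PER_FOOT = 1.0  # light goes roughly a foot per nanosecond

def end_to_end_us(path_feet, switch_hops, queueing_us, host_stack_us=10.0):
    propagation_us = path_feet * NS_PER_FOOT / 1000.0
    switching_us = switch_hops * 2.5          # store-and-forward 10G -> 10G
    return propagation_us + switching_us + queueing_us + 2 * host_stack_us

# Quiet data center: ~500 ft of fiber, 5 switch hops, ~15 us of queueing.
print(round(end_to_end_us(500, 5, 15.0), 1), "us")   # ~48 us, dominated by the hosts
```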
42 What Do Data Center Faults Look Like? Need very high reliability near the top of the tree - very hard to achieve. Example: failure of a temporarily unpaired core switch affected ten million users for four hours. 0.3% of failure events knocked out all members of a network redundancy group. [Figure: conventional hierarchy of core routers (CR), access routers (AR), switches (S), and load balancers (LB). Ref: Data Center: Load Balancing Data Center Services, Cisco 2004]
43 Congestion: Hits Hard When it Hits* (*Kandula, Sengupta, Greenberg, Patel)
44 But Wait, There's More
45 Agenda Part 2: Components for Building Networks - switches, links. Requirements - what does the DC infrastructure need to provide to best support the applications? Network Architectures - conventional; modern proposals. Physical Plant & Resource Shaping - power provisioning and utilization.
46 Agenda Part 2: Components for Building Networks - switches, links. Requirements - what does the DC infrastructure need to provide to best support the applications? Network Architectures - conventional; modern proposals. Physical Plant & Resource Shaping - power provisioning and utilization.
47 Conventional Networking Equipment: Modular routers - chassis $20K; supervisor card $18K; ports: 8-port 10G-X $25K, 1GB buffer memory; max ~120 ports of 10G per switch; total price in common configurations: $ K (+ SW & maint); power ~2-5KW. Integrated Top of Rack switches - 48-port 1GBase-T plus 2-4 ports of 1 or 10G-X, $7K. Load Balancers - spread TCP connections over servers; $50-$75K each; used in pairs.
48 Switch-on-Chip ASICs: A general-purpose CPU for the control plane; a switch-on-a-chip ASIC - 24 ports of 10G Ethernet (CX4 or SFP+) - $4K to $10K; 2MB of buffer memory, 16K IPv4 fwd entries; ~100W power.
49 Factors Driving ASIC Design: [Figure: chip floorplan - transceivers (SerDes), forwarding tables, forwarding pipeline, packet buffer memory] Current design points: 24-port 10G Eth, 16K IPv4 fwd entries, 2 MB buffer; 24-port 1G + 4x10G, 16K IPv4 fwd entries, 2 MB buffer. Near future: 48-port 10G, 16K fwd entries, 4 MB buffer; 48-port 1G + 4x10G, 16K fwd entries. For the future, expect that the optimal point on the price / port count / port speed curve will continue to move out rapidly; the trend seems to be more ports and faster port speed --- not more buffer memory, more forwarding entries, or more forwarding primitives.
50 Vendors are Experts at Packaging: Given a switch ASIC, build a switch with more ports by combining ASICs. Silicon fabrication costs drive ASIC price; market size drives packaged-switch price. General rule of thumb - cost of a link, from least to greatest: on chip; on PCB; in chassis; between chassis. Example design points: a 144-port 10G switch built from 24-port switch ASICs in a single chassis, still non-interfering; a 48-port 10G switch built from 24-port switch ASICs.
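One way the 144-port example could work out, sketched as a two-stage folded Clos of 24-port ASICs inside the chassis; this is a sizing exercise under that assumption, not any vendor's actual internal design.

```python
import math

def folded_clos_asics(asic_ports: int, external_ports: int):
    """Count ASICs for a non-interfering switch: leaf ASICs use half their
    ports externally and half as uplinks to spine ASICs."""
    per_leaf = asic_ports // 2                       # external ports per leaf
    leaves = math.ceil(external_ports / per_leaf)
    uplinks = leaves * per_leaf                      # one uplink per external port
    spines = math.ceil(uplinks / asic_ports)
    return leaves, spines, leaves + spines

leaves, spines, total = folded_clos_asics(24, 144)
print(f"144-port switch from 24-port ASICs: {leaves} leaf + {spines} spine = {total} ASICs")
# -> 12 leaf + 6 spine = 18 ASICs, each leaf wired to each spine by 2 links
```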
51 NetFPGA - Reconfigurable HW Switches: NetFPGA uses a high-end FPGA as the forwarding plane of a switch - a PCI card with a large Xilinx FPGA, 4 Gigabit Ethernet ports, and SRAM/DDR2 DRAM; the host computer is often used for SW tasks. More details:
52 Programmable Switches: Several vendors are pursuing network-processor designs - off-chip memory allowing deep buffers; extensive function modules (QoS, encryption, tunneling); some have multiple cores, enabling very flexible software-based data-plane primitives. Cost per port is significantly higher than a switch-on-chip ASIC, with typically fewer ports and lower speeds. The best examples are still proprietary; public examples include Broadcom 88020, QE-2000, E
53 Link Technologies: Modules. Some switches have CX4 connectors; the dominant trend is towards SFP+ (Small Form-factor Pluggable) - the module is simpler, as SerDes and clock/data recovery moved elsewhere (e.g., into the ASIC); the small size allows many ports on a 1U box (48?); LC connectors for fiber; cost is $100 per module for MM fiber (300m reach).
54 Link Technologies: Copper or Fiber? Fiber - cheaper per unit distance; low weight (10g/m); max distance for 10G: 300m (with cheap optics); typically multi-mode fiber (MMF). Copper - cheaper for short runs (< 8m); heavier (100g/m), larger cable; max distance for 10G: ~8m (real), 15m (spec), 30m (future?); typically twin-axial cable.
55 Link Technologies: Copper or Fiber? Cost model for fiber: $100 module + $13 LC + $XXX/m + $13 LC + $100 module.
   Type    Length   Cost
   Fiber   1m       $226
   Copper  1m       $50
   Fiber   8m       $230
   Copper  8m       $90
   Fiber   15m      $250
   Copper  15m      $250
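The fiber cost model above, expressed as code. The per-meter fiber price is a parameter because the slide elides it (the default is an illustrative guess), and the copper prices come straight from the table rather than being modeled.

```python
COPPER_COST = {1: 50, 8: 90, 15: 250}   # from the table above, in dollars

def fiber_cost(length_m, per_m=1.5, module=100, lc=13):
    # two modules + two LC connectors + the fiber itself
    return 2 * module + 2 * lc + per_m * length_m

for length in (1, 8, 15):
    cheaper = "copper" if COPPER_COST[length] < fiber_cost(length) else "fiber"
    print(f"{length:>2} m: fiber ~${fiber_cost(length):.0f}, "
          f"copper ${COPPER_COST[length]}, cheaper: {cheaper}")
# Copper wins on short runs; by ~15 m fiber is at parity or better.
```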
56 Higher-Density / Higher-Speed Ports: QSFP (Quad SFP) - a 40G port available today; 4 10G links bound together. [Figures from the Quad Small Form-factor Pluggable (QSFP) Transceiver Specification] Fiber ribbon cables - up to 72 fibers per cable, to a single MT connector.
57 Agenda Part 2: Components for Building Networks - switches, links. Requirements - what does the DC infrastructure need to provide to best support the applications? Network Architectures - conventional; modern proposals. Physical Plant & Resource Shaping - power provisioning and utilization.
58 Power & Related Costs Dominate. Assumptions: facility ~$200M for a 15MW facility (15-year amort.); servers ~$2k each, roughly 50,000 (3-year amort.); commercial power ~$0.07/kWhr; on-site Sec & Admin: 15 people at ~$100k/year. Monthly costs: Servers $2,984,653 [=PMT(5%/12,12*3,50000*2000,0,1)]; Infrastructure $1,700,025 [=PMT(5%/12,12*15,200000000,0,1)-(100000/12*15)]; Power $1,328,040 [=15,000,000/1000*1.7*0.07*24*31]. 3-yr server and 15-yr infrastructure amortization. Observations: $3M/month comes from charges functionally related to power; power-related costs are trending flat or up while server costs are trending down. Slide courtesy of James Hamilton, Amazon; blog: perspectives.mvdirona.com
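The spreadsheet math above, reproduced as a sketch under the slide's stated assumptions (5% cost of money, 3- and 15-year amortization, 15 MW at a 1.7 overhead factor and $0.07/kWh).

```python
def pmt(rate, nper, pv, payment_at_start=True):
    """Monthly payment (magnitude of Excel's PMT) amortizing pv over nper months."""
    p = pv * rate / (1 - (1 + rate) ** -nper)
    return p / (1 + rate) if payment_at_start else p

r = 0.05 / 12
servers = pmt(r, 12 * 3, 50_000 * 2_000)                           # ~$2.98M/month
infrastructure = pmt(r, 12 * 15, 200_000_000) + 15 * 100_000 / 12  # ~$1.70M/month
power = 15_000_000 / 1000 * 1.7 * 0.07 * 24 * 31                   # ~$1.33M/month

for name, cost in [("servers", servers), ("infrastructure", infrastructure),
                   ("power", power)]:
    print(f"{name:15s} ${cost:,.0f}/month")
```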
59 Another Breakdown of Data Center Costs:
   Amortized Cost   Component        Sub-components
   ~45%             Servers          CPU, memory, disk
   ~25%             Infrastructure   UPS, cooling, power distribution
   ~15%             Power draw       Electrical utility costs
   ~15%             Network          Switches, links, transit
Data centers have large upfront costs; total cost varies from $200M to almost $1B. We amortize everything to a monthly cost for fair comparisons. Assumptions: 3-yr amortization for servers, 15-yr for infrastructure, 5% cost of money.
60 Server Costs. Ugly secret: 10% to 30% utilization is considered good in DCs. Causes: Uneven application fit - each server has CPU, memory, and disk, and most applications exhaust one resource, stranding the others. Long provisioning timescales - new servers are purchased quarterly at best. Uncertainty in demand - demand for a new service can spike quickly. Risk management - not having spare servers to meet demand brings failure just when success is at hand. If each service buys its own servers, the natural response is hoarding.
61 Improving Server ROI: Need Agility. Agility: any server, any service - turn the servers into a single large fungible pool; let services "breathe": dynamically expand and contract their footprint as needed. Requirements for implementing agility: a means for rapidly installing a service's code on a server (virtual machines, disk images); a means for a server to access persistent data, which is too large to copy during the provisioning process (distributed filesystems, e.g. blob stores); a means for communicating with other servers, regardless of where they are in the data center (the network).
62 Objectives for the Network: Developers want a mental model where all their servers, and only their servers, are plugged into an Ethernet switch. Uniform high capacity - capacity between two servers limited only by their NICs; no need to consider topology when adding servers. Performance isolation - traffic of one service should be unaffected by others. Layer-2 semantics - flat addressing, so any server can have any IP address; server configuration is the same as in a LAN; legacy applications depending on broadcast must work.
63 Agenda Part 2: Components for Building Networks - switches, links. Requirements - what does the DC infrastructure need to provide to best support the applications? Network Architectures - conventional; modern proposals. Physical Plant & Resource Shaping - power provisioning and utilization.
64 The Network of a Modern Data Center: [Figure: the Internet reached at Layer 3 via Core Routers (CR) and Access Routers (AR); Layer 2 switches (S) and Load Balancers (LB) below; A = rack of 20 servers with a Top of Rack switch; ~4,000 servers per pod. Ref: Data Center: Load Balancing Data Center Services, Cisco 2004] A hierarchical network with 1+1 redundancy. Equipment higher in the hierarchy handles more traffic, is more expensive, and gets more effort at availability - a scale-up design. Servers connect via 1 Gbps UTP to Top of Rack switches; other links are a mix of 1G and 10G, fiber and copper.
65 Internal Fragmentation Prevents Applications from Dynamically Growing/Shrinking: [Figure: conventional CR/AR/S/LB hierarchy] VLANs are used to isolate properties from each other; IP addresses are topologically determined by the ARs; reconfiguration of IPs and VLAN trunks is painful, error-prone, slow, and often manual.
66 No Performance Isolation: [Figure: conventional hierarchy, showing collateral damage] VLANs typically provide reachability isolation only; one service sending or receiving too much traffic hurts all services sharing its subtree.
67 Network has Limited Server-to-Server Capacity, and Requires Traffic Engineering to Use What It Has: [Figure: conventional hierarchy with 10:1 over-subscription or worse (80:1, 240:1)] Data centers run two kinds of applications: outward-facing (serving web pages to users) and internal computation (computing the search index - think HPC).
68 Network Needs Greater Bisection BW, and Requires Traffic Engineering to Use What It Has: [Figure: conventional hierarchy] Dynamic reassignment of servers and Map/Reduce-style computations mean the traffic matrix is constantly changing, and explicit traffic engineering is a nightmare. Data centers run two kinds of applications: outward-facing (serving web pages to users) and internal computation (computing the search index - think HPC).
69 Monsoon: Distinguishing Design Principles: Randomizing to cope with volatility - tremendous variability in traffic matrices. Separating names from locations - any server, any service. Embracing end systems - leverage the programmability and resources of servers; avoid changes to switches. Building on proven networking technology - we can build with parts shipping today; leverage low-cost, powerful merchant-silicon ASICs, though do not rely on any one vendor.
70 What Enables a New Solution Now? Programmable switches with high port density - fast: ASIC switches-on-a-chip (Broadcom, Fulcrum, ...); cheap: small buffers, small forwarding tables; flexible: programmable control planes. Centralized coordination - scale-out data centers are not like enterprise networks; centralized services already control/monitor the health and role of each server (Autopilot); a centralized directory and control plane is acceptable (4D). [Photo: 20-port 10GE switch; list price $10K]
71 An Example Monsoon Topology: Clos Network. [Figure: D/2 intermediate node switches (used in VLB) with D ports each; D aggregation switches, each with D/2 ports up and D/2 ports down, all 10G; 20-port Top-of-Rack switches; (D^2/4) * 20 servers; plus a table relating the node degree D of available switches to the number of servers supported.] A scale-out design with broad layers: the same bisection capacity at each layer - no oversubscription; extensive path diversity; graceful degradation under failure.
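The sizing rule implied by the figure, as a quick calculator; the D values below are arbitrary examples, not the (garbled) table's original entries.

```python
def monsoon_servers(d: int, servers_per_tor: int = 20) -> int:
    """Servers supported by a two-layer Clos built from switches of degree D:
    D^2/4 ToR uplinks, each 20-port ToR serving 20 servers."""
    tors = (d * d) // 4
    return tors * servers_per_tor

for d in (24, 48, 72, 144):
    print(f"D = {d:3d} -> {monsoon_servers(d):>7,} servers")
# D = 144 -> 103,680 servers: modest-degree merchant silicon suffices for huge pools.
```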
72 Use Randomization to Cope with Volatility: [Figure: the same Clos topology as the previous slide] Valiant Load Balancing - every flow is bounced off a random intermediate switch; provably hotspot-free for any admissible traffic matrix; servers could randomize flow-lets if needed.
73 Separating Names from Locations: How Smart Servers Use Dumb Switches. [Figure: source S reaches destination D by wrapping the packet in nested headers - outermost destination the intermediate node (N), then the destination ToR (TD), then D - which are stripped hop by hop from the source ToR through N and TD to D.] Encapsulation is used to transfer complexity to servers: commodity switches have simple forwarding primitives, and the complexity is moved to computing the headers. Many types of encapsulation are available - IEEE 802.1ah defines MAC-in-MAC encapsulation; VLANs; etc.
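An illustrative sketch, not Monsoon's actual agent code, of the per-packet work the figure describes: look up the destination's ToR in a directory, pick a random VLB intermediate, and build the nested headers that dumb switches forward on outer addresses alone. The directory contents and switch names are invented.

```python
import random

DIRECTORY = {"D": "ToR_D"}                      # server name -> its ToR (assumed)
INTERMEDIATES = ["I1", "I2", "I3"]              # VLB "bouncer" switches (assumed)

def encapsulate(src, dst, payload):
    tor_dst = DIRECTORY[dst]                    # name-to-location lookup
    bounce = random.choice(INTERMEDIATES)       # randomization per flow
    # Outermost header first: intermediate node, then destination ToR, then D.
    return {"dst": bounce,
            "inner": {"dst": tor_dst,
                      "inner": {"dst": dst, "src": src, "payload": payload}}}

print(encapsulate("S", "D", b"hello"))
```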
74 Embracing End Systems: [Figure: on each server machine, a user-space Monsoon agent plus a kernel encapsulator sit between TCP/IP and the NIC, with a MAC resolution cache, consulting data-center-wide directory, network-health, and server-health services to resolve remote IPs and server roles and to answer ARP.] Data center OSes are already heavily modified for VMs, storage clouds, etc., so a thin shim for network support is no big deal. No change to applications or to clients outside the DC.
75 The Topology Isn't the Most Important Thing: A two-layer Clos network seems optimal for our current environment, but other topologies can be used with Monsoon - a ring/Chord topology makes organic growth easier; multi-level fat trees; parallel Clos networks. [Figure: an example with n1 = 144 type-(1) switches of d1 = 40 ports and n2 = 72 type-(2) switches of d2 = 100 ports; number of servers = 2 x 144 x 36 x 20 = 207,360.]
76 Monsoon Prototype: 4 ToR switches, 3 aggregation switches, 3 intermediate switches. Experiments conducted with both 40 and 80 servers; results show near-perfect scaling, which gives us some confidence that the design will scale out as predicted.
77 Monsoon Achieves Uniform High Throughput: Experiment: an all-to-all shuffle of 500 MB among 75 servers (2.7 TB) - an excellent metric of overall efficiency and performance, since the all-to-all shuffle is a superset of other traffic patterns. Results: average goodput 58.6 Gbps; fairness index 0.995; average link utilization 86%. Perfect system-wide efficiency would yield an aggregate goodput of 75G, so Monsoon's efficiency is 78% of perfect; 10% of the inefficiency is due to duplexing issues and 7% to header overhead, so Monsoon's efficiency is 94% of optimal.
78 Monsoon Provides Performance Isolation - service 1 is unaffected by service 2's activity.
79 Monsoon is Resilient to Link Failures - performance degrades and recovers gracefully as links are failed and restored.
80 VLB vs. Adaptive vs. Best Oblivious Routing - VLB does as well as adaptive routing (traffic engineering using an oracle) on data center traffic; the worst link is 20% busier with VLB, the median is the same.
81 Traffic Engineering and TCP: Results from the Monsoon testbed show VLB offers good mixing when randomizing by flows on this DC traffic - fairness index at the aggregation switches > 0.98. But traffic workloads change. Open question: would TCP modifications make it better suited to multipath topologies?
82 Fat-Tree Networks (Al-Fares, et al., A Scalable, Commodity Data Center Network Architecture): The links of the fabric have the same speed as the links to servers. The mapping of flows to links is critical - a collision of even two flows is enough to cause persistent congestion - so the Fat-Tree work includes a global flow placement system. Fat-tree operates at L3; PortLand operates at L2.
83 Higher-Dimensional Fabrics: DCell. Idea: use the servers themselves to forward packets - allows a custom routing protocol and cheap switches. Fractal topology: servers in a cell connect to a hub; servers have multiple NICs, each connecting to a server in a different cell; the number of NICs is the dimension of the DCell. Extensive path diversity. Cabling: difficult to wire in a DC? (C. Guo, et al., DCell: A Scalable and Fault-Tolerant Network Structure for Data Centers)
84 Agenda Part 2: Components for Building Networks - switches, links. Requirements - what does the DC infrastructure need to provide to best support the applications? Network Architectures - conventional; modern proposals. Physical Plant & Resource Shaping - power provisioning and utilization.
85 Power & Related Costs Dominate. Assumptions: facility ~$200M for a 15MW facility (15-year amort.); servers ~$2k each, roughly 50,000 (3-year amort.); commercial power ~$0.07/kWhr; on-site Sec & Admin: 15 people at ~$100k/year. Monthly costs: Servers $2,984,653 [=PMT(5%/12,12*3,50000*2000,0,1)]; Infrastructure $1,700,025 [=PMT(5%/12,12*15,200000000,0,1)-(100000/12*15)]; Power $1,328,040 [=15,000,000/1000*1.7*0.07*24*31]. 3-yr server and 15-yr infrastructure amortization. Observations: $3M/month comes from charges functionally related to power; power-related costs are trending flat or up while server costs are trending down. Slide courtesy of James Hamilton, Amazon; blog: perspectives.mvdirona.com
86 Where do the Power and the Dollars Go? [Figure: power distribution chain from the 115kV utility feed through 13.2kV switch gear, 2.5MW generators (~180 gallons/hour), a UPS (rotary or battery), and 480V/208V transformation down to the IT load; ~1% loss in switch gear and conductors; 0.3% loss (99.7% efficient) at each transformation stage; 6% loss in the UPS (94% efficient, >97% available).]
87 Where do the Power and the Dollars Go? [Figure: the same power distribution chain] 20% of DC cost is in power redundancy - generators at $2M each, n+2 redundant. The 6% UPS loss wastes ~900kW in a 15MW DC (which could power 4,500 servers).
88 Where do the Power and the Dollars Go? Pretty good data centers have an efficiency of roughly 0.7 Watts lost for each 1W delivered to the servers. Breakdown: servers 59%; distribution losses 8%; cooling 33%.
89 How to Reduce Power Costs? Create servers that use less power! A conventional server uses 200 to 500W; reductions have ripple effects across the entire data center; mostly a problem for EEs to tackle? Eliminate power redundancy - allow entire data centers to fail; this requires middleware that eases the transitions. Reduce the power usage of network gear? Not so much - the total power consumed by switches amortizes to 10-20W per server.
90 Resource Consumption Shaping: Essentially yield mgmt applied to the full DC. Network is charged at the 95th percentile: push peaks to troughs; fill troughs for free (e.g. Amazon S3 replication); dynamic resource allocation (virtual machines helpful but not needed); charged symmetrically, so ingress is effectively free. [Graph: egress charged at the 95th percentile, with troughs at 4AM PT and peaks at 3PM PT; "The Pacific Ocean is big." - David Treadwell & James Hamilton / Treadwell graph] Power is also charged at the 95th percentile: the server idle-to-full-load range is ~65% (e.g. 158W to 230W); use S3 (suspend) or S5 (off) when a server is not needed. Disks come with both IOPS capability and capacity - mix hot and cold data to soak up both. Encourage priority (urgency) differentiation in the charge-back model. (Slide from James R. Hamilton)
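A small sketch of why 95th-percentile charging rewards the shaping tricks above: if peaks can be pushed into the troughs, the billed sample barely grows. The traffic samples are toy numbers.

```python
def percentile_95(samples):
    """Return the 95th-percentile sample (a typical transit billing basis)."""
    ordered = sorted(samples)
    idx = min(len(ordered) - 1, int(0.95 * len(ordered)))
    return ordered[idx]

# Toy month of 5-minute egress samples (Gbps): a daily peak vs. a smoothed profile
# carrying roughly the same daily volume.
peaky  = [10.0 if i % 288 < 30 else 1.0 for i in range(288 * 30)]
shaped = [2.0 for _ in range(288 * 30)]
print("billed (peaky): ", percentile_95(peaky))    # pays for the 10.0 peaks
print("billed (shaped):", percentile_95(shaped))   # pays only 2.0
```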
91 Leveraging Variations in Resource Prices: [Figures from A. Qureshi, Cutting the Electric Bill for Internet-Scale Systems, SIGCOMM 2009]
92 Incentive Design: How to get data center tenants to leverage agility? They need to be financially incented. Tensions: it should be cheap to add servers, since workload prediction is poor and tenants want to add capacity early; it should be expensive to keep idle servers (ideally priced above the cost of simply having a server), to make sure customers give servers back. Without this, customers hoard, the free pool depletes, and the DC owner ends up in the hosted-server business, where the only profit comes from the economies of scale of big deployments.
93 Summary
94 Data Centers are Like Factories. Number 1 goal: maximize useful work per dollar spent. Must think like an economist / industrial engineer as well as a computer scientist: understand where the dollar costs come from, and use computer science to reduce or eliminate the costs.
