Using Shortest Job First Scheduling in Green Cloud Computing
Abeer H. El Bakely 1, Hesham A. Hefny 2
Department of CS and IS, ISSR, Cairo University, Cairo, Egypt 1, 2

Abstract: Researchers try to address the energy problem (demand is rising while supply is declining or flat) by discovering new resources or by minimizing energy consumption in the most important fields. In this paper we minimize energy consumption in cloud computing by enhancing the green scheduler. A cloud data center comprises many hundreds or thousands of networked servers, and the network system is a main component of cloud computing that consumes a non-negligible fraction of the total power. We present a scheduling approach, called SJFGC, that performs best-effort workload consolidation on a minimum set of servers. The proposed approach optimizes job consolidation (to minimize the number of active computing servers) by first executing the task with the minimum arrival time and minimum processing time. We use the GreenCloud simulator, an extension of the network simulator NS2.

Keywords: Green Scheduling, Data Center, Green Cloud, Energy Efficiency.

I. INTRODUCTION
Energy is involved in all life cycles and is essential to all productive activities, such as space heating, water lifting, and hospitals. Energy demand in the world is rising while supply is declining or flat, so researchers face a major challenge: to reduce energy consumption or to find new energy sources, especially in the areas that affect our lives. In the last ten years the Internet has developed very quickly, and the services delivered over it, such as electronic mail, web hosting, and web services, have grown substantially.
The world therefore needs either a new concept of current technology or a way to reuse it, and cloud computing has appeared as a recent trend in IT that [13] moves computing and data away from desktop and portable PCs or servers into computer clusters (big data centers owned by cloud service providers). Cloud computing refers both to the applications delivered as services over the Internet and to the hardware and systems software in the data centers that provide those services; the services themselves have long been referred to as Software as a Service (SaaS). The data center hardware and software is what we will call a cloud. People may be users or providers of SaaS, or users or providers of utility computing. Cloud data centers are quite different from traditional hosting facilities. A cloud data center comprises many hundreds or thousands of networked servers with their corresponding storage and networking subsystems, power distribution and conditioning equipment, and cooling infrastructure. Because of this large amount of equipment, data centers can consume massive amounts of energy and emit large amounts of carbon. [6] The network system is another main component of cloud computing that consumes a non-negligible fraction of the total power. Since cloud resources are accessed through the Internet, both applications and data need to be transferred to the compute node, which requires much more data communication bandwidth between the user's PC and the cloud resources than the application execution itself requires. In the network infrastructure, energy consumption depends especially on the power efficiency and awareness of the wired network, namely the network equipment or system design, topology design, and network protocol design. Most of the energy in network devices is wasted because they are designed to handle the worst-case scenario: their energy consumption remains almost the same during both peak load and idle periods.
Many improvements are required to achieve high energy efficiency in these devices. For example, during low-utilization periods Ethernet links can be turned off and packets routed around them. Further energy savings are possible at the hardware level of the routers through appropriate selection and optimization of the layout of internal router components (buffers, links, etc.). [6] Roughly 40% of a data center's energy consumption is related to the energy consumed by information technology (IT) equipment, which includes the computing servers as well as the data center network hardware used for interconnection. In fact, about one-third of the total IT energy is consumed by communication links, switching, and aggregation elements, while the remaining two-thirds is consumed by the computing servers. The other systems contributing to data center energy consumption are the cooling and power distribution systems, which account for 45% and 15% of the total, respectively. [9] Many solutions have been implemented to make data center hardware energy efficient, and two techniques for reducing power consumption in computing systems are common. Dynamic Voltage and Frequency Scaling (DVFS) enables processors to run at different combinations of frequency and voltage to reduce the power consumption of the processor. [15] Dynamic Power Management (DPM) achieves most of its energy savings by coordinating and distributing the work among all available nodes. Copyright to IJARCCE DOI /IJARCCE
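The frequency-power relationship exploited by DVFS can be sketched as follows. This is a minimal illustration, not the simulator's actual model: the constant k is arbitrary, and voltage is assumed to scale linearly with frequency, which yields the cubic dependence of dynamic CPU power on frequency discussed later in this paper.

```python
def cpu_dynamic_power(f, k=1.0):
    """Dynamic CPU power under DVFS: P = k * V^2 * f.
    Assuming voltage scales linearly with frequency (V = f here),
    power grows cubically with f."""
    v = f  # voltage proportional to frequency (illustrative assumption)
    return k * v ** 2 * f

# Halving the clock frequency cuts dynamic power by a factor of 8.
print(cpu_dynamic_power(2.0) / cpu_dynamic_power(1.0))  # → 8.0
```

This is why DVFS is attractive despite applying only to the CPU: a modest frequency reduction buys a disproportionately large power saving.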
To make the DPM scheme efficient, a scheduler must consolidate data center jobs on a minimum set of computing resources so as to maximize the number of unloaded servers that can be powered down (or put to sleep). The average data center load often stays around 30%, so the portion of unloaded servers can be as high as 70%. [14][9] The GreenCloud simulator is an extension of NS2 that models a cloud data center's energy efficiency using two techniques, DVFS and DPM. Most existing approaches to energy efficiency focus on other targets, such as balancing energy efficiency against performance through job scheduling in data centers, or reducing traffic and congestion in cloud computing networks. This paper presents a data center scheduling approach that improves energy consumption while also achieving those other advantages: it balances energy efficiency and performance and reduces traffic and congestion in the data center network. It selects for execution first the task with the minimum arrival time travelling from a user to a computing server. The scheduling approach is designed to minimize the number of computing servers required for job execution. The proposed approach reduces computational and memory overhead compared to previous approaches, such as flow differentiation, which makes it easy to implement and to port to existing data center schedulers; it also reduces processing time complexity compared to previous approaches. The rest of the paper is organized as follows: Section 2 presents related work, Section 3 describes the simulation environment, Section 4 describes the proposed SJFGC approach, Section 5 introduces the simulation scenario, Section 6 presents the results, and Section 7 concludes. II.
RELATED WORKS
Reviewing the literature to establish what other researchers have achieved in this area, a number of subjects of interest were found; they can be summarized as follows.

Fig. 1 GreenCloud simulator architecture (three tiers) [8]

[Dzmitry et al., 2013] proposed the e-STAB scheduling algorithm, which takes into account the traffic requirements of cloud applications, providing energy-efficient job allocation and traffic load balancing in data center networks. Effective distribution of network traffic improves the quality of service of running cloud applications by reducing communication-related delays and congestion-related packet losses. [11]

[Dzmitry Kliazovich, 2011] proposed the DENS approach, which balances the energy consumption of a data center, individual job performance, and traffic demands. The methodology aims to balance individual job performance, quality-of-service requirements, traffic demands, and the energy consumed by the data center. It optimizes the tradeoff between job consolidation (to minimize the number of computing servers) and distribution of traffic patterns (to avoid hotspots in the data center network). DENS is particularly relevant in data centers running data-intensive jobs that require low computational load but produce heavy data streams directed to end users. [9]

[Haiyang Qian, 2012] proposed aggregating workload demand to minimize server operational cost by leveraging switching servers on/off and DVFS when resource requirements (workload demand) are given. The numerical results show that aggregating workload over time slots is efficient, with small overhead for the dynamic aggregation method. The work decomposes workload demand into special substructures and develops algorithms that compute optimal solutions according to those substructures; the substructure-based algorithms are more efficient than the mathematical-programming-based method. [5] III.
ENVIRONMENT OF SIMULATION
We use the GreenCloud simulator, an extension of the network simulator NS2 developed for the study of cloud computing environments. GreenCloud offers users detailed, fine-grained modelling of the energy consumed by the elements of the data center, such as servers, switches, and links. Moreover, GreenCloud offers a thorough investigation of workload distributions. Furthermore, a specific focus is devoted to packet-level simulation of communications in the data center infrastructure, which provides the finest-grained control and is not present in any other cloud computing simulation environment. [8] GreenCloud implements an energy model of switches and links according to the power consumption values of the different elements. The implemented power-saving schemes are: (a) DVFS only, (b) DNS only, and (c) DVFS with DNS. [8]

Fig. 2 Architecture of the GreenCloud simulation environment [9]

Fig. 3 GreenCloud architecture with SJFGC scheduling
A. Data Center Topology
A three-tier tree of hosts and switches is the most commonly used data center architecture. It (see Fig. 2) includes access, aggregation, and core layers: the core tier sits at the root of the tree, the aggregation tier is responsible for routing, and the access tier holds the pool of computing servers (or hosts). The availability of the aggregation layer facilitates increasing the number of server nodes while keeping inexpensive Layer-2 (L2) switches in the access network, which provides a loop-free topology. Because the maximum number of Equal Cost Multi-Path (ECMP) paths allowed is eight, a typical three-tier architecture consists of eight core switches. Such an architecture implements 8-way ECMP that includes 10 GE Link Aggregation Groups (LAGs), which allow a network client to address several links and network ports with a single MAC (Media Access Control) address. [9][16] In the three-tier architecture the computing servers (grouped in racks) are interconnected using 1 Gigabit Ethernet (GE) links. The maximum number of allowable ECMP paths bounds the total number of core switches to eight; such a bound also limits the bandwidth deliverable to the aggregation switches. At the higher layers of the hierarchy, the racks are arranged in modules (see Fig. 1) with a pair of aggregation switches servicing each module's connectivity. The bandwidth between the core and aggregation networks is distributed using a multi-path routing technology, such as ECMP routing. The ECMP technique performs per-flow load balancing, differentiating flows by computing a hash function on the incoming packet headers. [9]

B. Simulator Components
Computing servers are the basic elements of the data center responsible for task execution, so they are the main factor in energy consumption.
In GreenCloud, the server components implement single-core nodes that have a preset processing power limit in MIPS or FLOPS and an associated amount of memory resources; the power consumption of a computing server is proportional to its CPU utilization. An idle server consumes around two-thirds of its peak-load consumption to keep memory, disks, and I/O resources running; the remaining one-third changes almost linearly with the level of CPU load. There are two main approaches for reducing energy consumption in computing servers: (a) DVFS and (b) DPM. The DVFS scheme adjusts the CPU power according to the offered load, exploiting the fact that the dynamic power in a chip is proportional to V^2 * f, where V is the voltage and f the operating frequency; since voltage can be lowered together with frequency, this implies a roughly cubic dependence of CPU power consumption on f. The scope of the DVFS optimization is limited to CPUs: computing server components such as buses, memory, and disks remain functioning at the original operating frequency. The DPM scheme, by contrast, can reduce the power of the whole computing server (all of its components). The power model followed by the server components depends on the server state and its CPU utilization: an idle server consumes about 66% of its fully loaded consumption, because the server must keep memory modules, disks, I/O resources, and other peripherals in an acceptable state, and power consumption then increases linearly with the level of CPU load. This power model allows power saving to be implemented in a centralized scheduler that can provision the consolidation of workloads onto the minimum possible number of computing servers. [9][10][16] Switches and links form the interconnection fabric that delivers job requests and workloads to any of the computing servers for execution in a timely manner. The interconnection of switches and servers requires different cabling solutions depending on the supported bandwidth and the physical and quality characteristics of the link.
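The server power model described above (a fixed idle floor of about two-thirds of peak, plus a part that grows linearly with CPU load) can be sketched as follows. The wattages are the ones reported later in the simulation scenario (301 W peak, 198 W idle), used here purely for illustration.

```python
P_PEAK = 301.0  # W, fully loaded server (value from the simulation setup)
P_IDLE = 198.0  # W, idle server, ~66% of peak

def server_power(cpu_load):
    """Server power as a function of CPU load in [0, 1]:
    a constant idle floor plus a linearly increasing dynamic part."""
    if cpu_load < 0:
        raise ValueError("load must be non-negative")
    return P_IDLE + (P_PEAK - P_IDLE) * min(cpu_load, 1.0)

print(server_power(0.0))  # → 198.0
print(server_power(1.0))  # → 301.0
```

Because the idle floor dominates, consolidating workloads onto fewer servers and sleeping the rest saves far more energy than lightly loading many servers.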
The quality of signal transmission in a given cable determines a tradeoff between transmission rate and link distance, which are the factors defining the cost and energy consumption of the transceivers. The energy consumption of a switch depends on: (a) the type of switch, (b) the number of ports, (c) the port transmission rates, and (d) the employed cabling solutions. The energy consumed by a switch can be generalized as:

P_switch = P_chassis + n_linecards * P_linecard + Σ_r (n_ports,r * P_r)   (1) [9]

where P_chassis is the power consumed by the switch hardware (chassis), P_linecard is the power consumed by an active network line card, n_linecards is the number of line cards, n_ports,r is the number of ports running at rate r, and P_r corresponds to the power consumed by a port (transceiver) running at rate r; line cards and transceivers serving no active links can dynamically be put to sleep. Each core switch consumes a certain amount of energy to provide large switching capacity. Because of their location within the communication fabric and their role in ECMP forwarding, it is advisable to keep the core network switches running continuously at their maximum transmission rates. By contrast, the aggregation switches service modules, and their energy consumption can be reduced when the module racks are inactive. Given that, on average, most data centers are utilized at around 30% of their compute capacity, unused aggregation switches can be powered down. However, such an operation must be performed carefully, considering possible fluctuations in job arrival rates; typically it is enough to keep a few idle computing servers running on top of the necessary ones as a buffer against data center load fluctuation. [10]

IV. SJFGC APPROACH
In the SJFGC approach we introduce the idea of shortest-job-first scheduling from operating systems. Shortest-job-first scheduling associates with each process the length of that process's next CPU burst; when the CPU is available, it is assigned to the process that has the smallest next CPU burst. The next CPU burst is generally predicted as an exponential average of the measured lengths of previous CPU bursts.
[12]
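The shortest-job-first idea above can be sketched as follows. This is a minimal textbook-style illustration, not the authors' GreenCloud code; the smoothing factor alpha, the initial estimate, and the burst values are hypothetical.

```python
def predict_next_burst(history, alpha=0.5, initial=10.0):
    """Exponential average of measured CPU burst lengths:
    tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n."""
    tau = initial
    for t in history:
        tau = alpha * t + (1 - alpha) * tau
    return tau

def pick_next(processes):
    """Assign the CPU to the process with the smallest predicted next burst."""
    return min(processes, key=lambda p: predict_next_burst(p["bursts"]))

procs = [{"name": "A", "bursts": [8, 6, 4]},
         {"name": "B", "bursts": [2, 3, 2]}]
print(pick_next(procs)["name"])  # → B
```

SJFGC transplants this selection rule into the data center scheduler, replacing the predicted CPU burst with the task's arrival time as the sorting key.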
In this approach the green scheduler, which performs best-effort workload consolidation on a minimum set of servers in green cloud computing, is enhanced by implementing a function that executes first the task with the minimum arrival time (exponentially distributed).

Fig. 4 Enhancing the green scheduling algorithm in GreenCloud by implementing the function

This chart describes the implementation of the enhancement to the green scheduler by adding a set of functions and processes for selecting the task with the minimum arrival time:
- Assign an arrival time to each task: arrival times follow an exponential distribution drawn from NS2, and this variable is assigned to the task class.
- Get a task's arrival time: a function implemented to retrieve the arrival time of a task.
- Get the task with the minimum arrival time: a function implemented in the host class.
- Get the minimum processing rate required by a task: a function implemented in the host class of the green scheduler.
- Get the minimum processing rate of the most urgent task to be executed: a function implemented in the green scheduler that also calls the minimum-arrival-time function.
- Check whether the task is the most urgent and has the minimum arrival time: this check ensures that the task to be executed is the most urgent one with the minimum arrival time. If yes, the task is executed; if no, the arrival time and urgency of the task are re-evaluated.

V. SIMULATION SCENARIO
A three-tier tree data center topology comprising 1536 servers arranged into 32 racks, each holding 48 servers, served by 4 core and 8 aggregation switches (see Fig. 2), is used in the simulation experiment.
We used 1 GE links for interconnecting servers inside the racks, while 10 GE links were used to form a fat-tree topology interconnecting the access, aggregation, and core switches. The size of each workload is 15 KB; being fragmented, it occupies 10 Ethernet packets. During execution, the workloads produce a constant-bit-rate stream of 1 Mb/s directed out of the data center. Such a stream is designed to mimic the behavior of the most common video sharing applications. To add uncertainty, during execution each workload communicates with another randomly chosen workload by sending a 75 KB message internally; a message of the same size is also sent out of the data center at the moment of task completion as an external communication. [8, 9]

TABLE I. SIMULATION SETUP PARAMETERS
Parameter                  Value
Data center architecture   Three-tier
Core nodes (C1)            8
Aggregation nodes (C2)     16
Access switches (C3)       512
Servers (S)                1536
Link (C1-C2)               10 GE
Link (C2-C3)               1 GE
Link (C3-S)                1 GE
Data center average load   30%
Task generation time       Exponentially distributed
Task size                  Exponentially distributed
Simulation time            60 minutes

The workload generation events are exponentially distributed in time to mimic the typical process of user arrival. As soon as a scheduling decision is taken for a newly arrived workload, it is sent over the data center network to the selected server for execution. The propagation delay on all links was set to 10 ns. The server peak consumption is 301 W, composed of 130 W allocated for peak CPU consumption and 171 W consumed by other devices; the minimum consumption of an idle server is 198 W. The switches' consumption is almost constant for different transmission rates because most of their power is consumed by the chassis and line cards and only a small portion by the port transceivers; in the 3T topology, links are 10 GE in the core and 1 GE at the aggregation and rack levels. The average load of the data center is kept at 30% and is distributed among the servers using two scheduling schemes: (a) the SJFGC scheduling approach proposed in Section 4 of this paper, and (b) the green scheduler, which tends to group the workload onto the minimum possible number of computing servers. [10]
VI. RESULTS OF SIMULATION
In the compared approach, workloads arriving at the data center are scheduled for execution using the energy-aware green scheduler. This green scheduler tends to group the workloads onto the minimum possible number of computing servers. The scheduler continuously tracks the buffer occupancy of the network switches on the path; in case of congestion, it avoids using congested routes even if they lead to servers able to satisfy the computational requirements of the workloads. The servers left idle are put into sleep mode (DNS scheme); the time required to change the power state in either direction is set to 100 ms. [9]

TABLE II. COMPARISON OF ENERGY EFFICIENCY OF THE GREEN SCHEDULER [8] AND THE SJFGC APPROACH (ENERGY CONSUMPTION, kW h)
Component      Improvement of energy
Server         33.99%
Switch         49.65%
Data center    38.04%

In the simulation work we use the DNS scheme for minimizing energy consumption; in the compared work the green scheduler is used, and in the proposed work we use the SJFGC approach, which enhances the green scheduler by executing first the task with the minimum (exponentially distributed) arrival time. We measure the improvement of energy as follows:

Improvement of energy = (1 - SJFGC / green scheduler) * 100   (2)

The selection of the server to execute a selected task is random, so the results in Table 2 are the average of 20 runs. Table 2 compares the different schedulers; the data is collected for an average data center load of 30%. Applying the SJFGC approach with the DNS scheme reduces the energy consumption of the servers, the switches, and the data center as a whole. Figure 6 shows an improvement of 33.99% in server energy, 49.65% in switch energy, and 38.04% for the data center. The improvement in the energy consumption of the switches is better than that of the servers and the data center, which means that SJFGC reduces energy consumption in all components of the cloud, with the strongest effect on the network switches.
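Equation (2) can be sketched as follows. This is a minimal illustration; the kWh inputs are hypothetical, chosen only to reproduce the reported server-side improvement of roughly 34%, not taken from the paper's measurements.

```python
def energy_improvement(sjfgc_kwh, green_kwh):
    """Improvement of energy, Eq. (2): (1 - SJFGC / green scheduler) * 100."""
    return (1.0 - sjfgc_kwh / green_kwh) * 100.0

# Hypothetical consumption figures, for illustration only.
print(round(energy_improvement(66.01, 100.0), 2))  # → 33.99
```

A positive value means SJFGC consumed less energy than the green scheduler over the same run; zero means no change.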
All the improvements for servers, switches, and the data center mean that the SJFGC approach outperforms the green scheduler approach.

VII. CONCLUSION
In this paper we present an approach to minimize energy consumption in cloud computing, based on enhancing the green scheduler with the idea of shortest-job-first scheduling from operating systems. In the SJFGC approach, the task with the shortest (exponentially distributed) arrival time is executed first on the computing server. From the comparison between SJFGC and the green scheduler, we conclude that the proposed approach is better than the compared approach, especially for the energy consumption of the switches, where the percentage of improvement is higher than for the servers and the data center.

Fig. 5 Comparison of the energy consumption of the green scheduler and SJFGC

Fig. 6 Percentage of improvement in energy consumption

As a result, the proposed approach is better than the compared approach at minimizing the energy consumption of cloud computing.

REFERENCES
[1] V. Kumar, "Temperature-Aware Virtual Machine Scheduling in Green Clouds," M. thesis, Thapar Univ., Patiala, Jun.
[2] L. Ganesh, "Datacenter Energy Management," D. thesis, Cornell Univ., Jan.
[3] D. Mann, "A Heterogeneous Workload Consolidation Technique for Green Cloud," M. thesis, Thapar Univ., Jun.
[4] A. J. Younge, "Towards a Green Framework for Cloud Data Centers," M. thesis, Rochester Institute of Technology, May.
[5] H. Qian, "Data Center Resource Management with Temporal Dynamic Workload," D. thesis, Univ. of Missouri Kansas City, 2012.
[6] S. Kumar Garg, R. Buyya, "Green Cloud Computing and Environmental Sustainability," Dept. of Computer Science and Software Engineering, Univ. of Melbourne, Australia, 2011.
[7] F. Seng Chu, K. Cheng Chen, C. Mou Cheng, "Toward Green Cloud Computing," ICUIMC, February 21-23, 2011, Seoul, Korea, ACM, 2011.
[8] D. Kliazovich, P. Bouvry, S. U. Khan, "GreenCloud: a packet-level simulator of energy-aware cloud computing data centers," Springer Science+Business Media, 9 Nov.
[9] D. Kliazovich, P. Bouvry, S. U. Khan, "DENS: data center energy-efficient network-aware scheduling," Springer Science+Business Media, 10 Aug.
[10] D. Kliazovich, P. Bouvry, S. U. Khan, "GreenCloud: A Packet-level Simulator of Energy-aware Cloud Computing Data Centers," in Proc. IEEE Globecom, 2010.
[11] D. Kliazovich, S. T. Arzo, F. Granelli, P. Bouvry, S. U. Khan, "e-STAB: Energy-Efficient Scheduling for Cloud Computing Applications with Traffic Load Balancing," IEEE International Conference on Green Computing and Communications and IEEE Internet of Things and IEEE Cyber, Physical and Social Computing, 2013.
[12] B. S. Gill, S. K. Gill, P. Jain, "Analysis of Energy Aware Data Center using Green Cloud Simulator in Cloud Computing," International Journal of Computer Trends and Technology (IJCTT), vol. 5, no. 3, Nov.
[13] A. Silberschatz, P. B. Galvin, Operating System Concepts, 5th ed., USA, Addison Wesley Longman, 1998.
[14] R. Satsangi, P. Dashore, N. Dubey, "Risk Management in Cloud Computing Through Fuzzy Logic," IJAIEM, vol. 1, issue 4, Dec.
[15] Wissam C., Chansu Y., "Survey on Power Management Techniques for Energy Efficient Computer Systems," Cleveland State University.
[16] Chia-M. W., Ruay-S. C., Hsin-Y. C., "A green energy-efficient scheduling algorithm using DVFS technique for cloud datacenters," Future Generation Computer Systems, 2013.
DENS: data center energy-efficient network-aware scheduling
DOI 10.1007/s10586-011-0177-4 DENS: data center energy-efficient network-aware scheduling Dzmitry Kliazovich Pascal Bouvry Samee Ullah Khan Received: 18 April 2011 / Accepted: 10 August 2011 Springer Science+Business
2. Research and Development on the Autonomic Operation. Control Infrastructure Technologies in the Cloud Computing Environment
R&D supporting future cloud computing infrastructure technologies Research and Development on Autonomic Operation Control Infrastructure Technologies in the Cloud Computing Environment DEMPO Hiroshi, KAMI
Intel Ethernet Switch Converged Enhanced Ethernet (CEE) and Datacenter Bridging (DCB) Using Intel Ethernet Switch Family Switches
Intel Ethernet Switch Converged Enhanced Ethernet (CEE) and Datacenter Bridging (DCB) Using Intel Ethernet Switch Family Switches February, 2009 Legal INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION
Scaling 10Gb/s Clustering at Wire-Speed
Scaling 10Gb/s Clustering at Wire-Speed InfiniBand offers cost-effective wire-speed scaling with deterministic performance Mellanox Technologies Inc. 2900 Stender Way, Santa Clara, CA 95054 Tel: 408-970-3400
Three Key Design Considerations of IP Video Surveillance Systems
Three Key Design Considerations of IP Video Surveillance Systems 2012 Moxa Inc. All rights reserved. Three Key Design Considerations of IP Video Surveillance Systems Copyright Notice 2012 Moxa Inc. All
基 於 SDN 與 可 程 式 化 硬 體 架 構 之 雲 端 網 路 系 統 交 換 器
基 於 SDN 與 可 程 式 化 硬 體 架 構 之 雲 端 網 路 系 統 交 換 器 楊 竹 星 教 授 國 立 成 功 大 學 電 機 工 程 學 系 Outline Introduction OpenFlow NetFPGA OpenFlow Switch on NetFPGA Development Cases Conclusion 2 Introduction With the proposal
Cloud Networking: A Novel Network Approach for Cloud Computing Models CQ1 2009
Cloud Networking: A Novel Network Approach for Cloud Computing Models CQ1 2009 1 Arista s Cloud Networking The advent of Cloud Computing changes the approach to datacenters networks in terms of throughput
Performance Evaluation of Linux Bridge
Performance Evaluation of Linux Bridge James T. Yu School of Computer Science, Telecommunications, and Information System (CTI) DePaul University ABSTRACT This paper studies a unique network feature, Ethernet
Network Virtualization and Data Center Networks 263-3825-00 Data Center Virtualization - Basics. Qin Yin Fall Semester 2013
Network Virtualization and Data Center Networks 263-3825-00 Data Center Virtualization - Basics Qin Yin Fall Semester 2013 1 Walmart s Data Center 2 Amadeus Data Center 3 Google s Data Center 4 Data Center
Influence of Load Balancing on Quality of Real Time Data Transmission*
SERBIAN JOURNAL OF ELECTRICAL ENGINEERING Vol. 6, No. 3, December 2009, 515-524 UDK: 004.738.2 Influence of Load Balancing on Quality of Real Time Data Transmission* Nataša Maksić 1,a, Petar Knežević 2,
Chapter 1 Reading Organizer
Chapter 1 Reading Organizer After completion of this chapter, you should be able to: Describe convergence of data, voice and video in the context of switched networks Describe a switched network in a small
Smart Queue Scheduling for QoS Spring 2001 Final Report
ENSC 833-3: NETWORK PROTOCOLS AND PERFORMANCE CMPT 885-3: SPECIAL TOPICS: HIGH-PERFORMANCE NETWORKS Smart Queue Scheduling for QoS Spring 2001 Final Report By Haijing Fang([email protected]) & Liu Tang([email protected])
Optimizing Data Center Networks for Cloud Computing
PRAMAK 1 Optimizing Data Center Networks for Cloud Computing Data Center networks have evolved over time as the nature of computing changed. They evolved to handle the computing models based on main-frames,
The Software Defined Hybrid Packet Optical Datacenter Network SDN AT LIGHT SPEED TM. 2012-13 CALIENT Technologies www.calient.
The Software Defined Hybrid Packet Optical Datacenter Network SDN AT LIGHT SPEED TM 2012-13 CALIENT Technologies www.calient.net 1 INTRODUCTION In datacenter networks, video, mobile data, and big data
Scheduling using Optimization Decomposition in Wireless Network with Time Performance Analysis
Scheduling using Optimization Decomposition in Wireless Network with Time Performance Analysis Aparna.C 1, Kavitha.V.kakade 2 M.E Student, Department of Computer Science and Engineering, Sri Shakthi Institute
Latency on a Switched Ethernet Network
Application Note 8 Latency on a Switched Ethernet Network Introduction: This document serves to explain the sources of latency on a switched Ethernet network and describe how to calculate cumulative latency
Windows Server Performance Monitoring
Spot server problems before they are noticed The system s really slow today! How often have you heard that? Finding the solution isn t so easy. The obvious questions to ask are why is it running slowly
Legacy Network Infrastructure Management Model for Green Cloud Validated Through Simulations
374 Legacy Network Infrastructure Management Model for Green Cloud Validated Through Simulations Sergio Roberto Villarreal, María Elena Villarreal, Carlos Becker Westphall, and Carla Merkle Westphall Network
A Taxonomy and Survey of Energy-Efficient Data Centers and Cloud Computing Systems
A Taxonomy and Survey of Energy-Efficient Data Centers and Cloud Computing Systems Anton Beloglazov, Rajkumar Buyya, Young Choon Lee, and Albert Zomaya Present by Leping Wang 1/25/2012 Outline Background
Energy Conscious Virtual Machine Migration by Job Shop Scheduling Algorithm
Energy Conscious Virtual Machine Migration by Job Shop Scheduling Algorithm Shanthipriya.M 1, S.T.Munusamy 2 ProfSrinivasan. R 3 M.Tech (IT) Student, Department of IT, PSV College of Engg & Tech, Krishnagiri,
Datagram-based network layer: forwarding; routing. Additional function of VCbased network layer: call setup.
CEN 007C Computer Networks Fundamentals Instructor: Prof. A. Helmy Homework : Network Layer Assigned: Nov. 28 th, 2011. Due Date: Dec 8 th, 2011 (to the TA) 1. ( points) What are the 2 most important network-layer
Networking Topology For Your System
This chapter describes the different networking topologies supported for this product, including the advantages and disadvantages of each. Select the one that best meets your needs and your network deployment.
Brocade Solution for EMC VSPEX Server Virtualization
Reference Architecture Brocade Solution Blueprint Brocade Solution for EMC VSPEX Server Virtualization Microsoft Hyper-V for 50 & 100 Virtual Machines Enabled by Microsoft Hyper-V, Brocade ICX series switch,
Virtual Machine in Data Center Switches Huawei Virtual System
Virtual Machine in Data Center Switches Huawei Virtual System Contents 1 Introduction... 3 2 VS: From the Aspect of Virtualization Technology... 3 3 VS: From the Aspect of Market Driving... 4 4 VS: From
Windows Server 2008 R2 Hyper-V Live Migration
Windows Server 2008 R2 Hyper-V Live Migration Table of Contents Overview of Windows Server 2008 R2 Hyper-V Features... 3 Dynamic VM storage... 3 Enhanced Processor Support... 3 Enhanced Networking Support...
VIA CONNECT PRO Deployment Guide
VIA CONNECT PRO Deployment Guide www.true-collaboration.com Infinite Ways to Collaborate CONTENTS Introduction... 3 User Experience... 3 Pre-Deployment Planning... 3 Connectivity... 3 Network Addressing...
Interconnection Networks. Interconnection Networks. Interconnection networks are used everywhere!
Interconnection Networks Interconnection Networks Interconnection networks are used everywhere! Supercomputers connecting the processors Routers connecting the ports can consider a router as a parallel
16/5-05 Datakommunikation - Jonny Pettersson, UmU 2. 16/5-05 Datakommunikation - Jonny Pettersson, UmU 4
Multimedia Networking Principles Last time Classify multimedia Multimedia Networking Applications Streaming stored audio and video Identify the network Real-time Multimedia: Internet Phone services the
The Road to Cloud Computing How to Evolve Your Data Center LAN to Support Virtualization and Cloud
The Road to Cloud Computing How to Evolve Your Data Center LAN to Support Virtualization and Cloud Introduction Cloud computing is one of the most important topics in IT. The reason for that importance
Architecture of distributed network processors: specifics of application in information security systems
Architecture of distributed network processors: specifics of application in information security systems V.Zaborovsky, Politechnical University, Sait-Petersburg, Russia [email protected] 1. Introduction Modern
Design and Implementation of an On-Chip timing based Permutation Network for Multiprocessor system on Chip
Design and Implementation of an On-Chip timing based Permutation Network for Multiprocessor system on Chip Ms Lavanya Thunuguntla 1, Saritha Sapa 2 1 Associate Professor, Department of ECE, HITAM, Telangana
ENSC 427: Communication Networks. Analysis of Voice over IP performance on Wi-Fi networks
ENSC 427: Communication Networks Spring 2010 OPNET Final Project Analysis of Voice over IP performance on Wi-Fi networks Group 14 members: Farzad Abasi ([email protected]) Ehsan Arman ([email protected]) http://www.sfu.ca/~faa6
4 Internet QoS Management
4 Internet QoS Management Rolf Stadler School of Electrical Engineering KTH Royal Institute of Technology [email protected] September 2008 Overview Network Management Performance Mgt QoS Mgt Resource Control
Local-Area Network -LAN
Computer Networks A group of two or more computer systems linked together. There are many [types] of computer networks: Peer To Peer (workgroups) The computers are connected by a network, however, there
A Hybrid Electrical and Optical Networking Topology of Data Center for Big Data Network
ASEE 2014 Zone I Conference, April 3-5, 2014, University of Bridgeport, Bridgpeort, CT, USA A Hybrid Electrical and Optical Networking Topology of Data Center for Big Data Network Mohammad Naimur Rahman
Interconnection Network of OTA-based FPAA
Chapter S Interconnection Network of OTA-based FPAA 5.1 Introduction Aside from CAB components, a number of different interconnect structures have been proposed for FPAAs. The choice of an intercmmcclion
Router Architectures
Router Architectures An overview of router architectures. Introduction What is a Packet Switch? Basic Architectural Components Some Example Packet Switches The Evolution of IP Routers 2 1 Router Components
AN EFFICIENT LOAD BALANCING ALGORITHM FOR CLOUD ENVIRONMENT
AN EFFICIENT LOAD BALANCING ALGORITHM FOR CLOUD ENVIRONMENT V.Bharath 1, D. Vijayakumar 2, R. Sabarimuthukumar 3 1,2,3 Department of CSE PG, National Engineering College Kovilpatti, Tamilnadu, (India)
International Journal of Digital Application & Contemporary research Website: www.ijdacr.com (Volume 2, Issue 9, April 2014)
Green Cloud Computing: Greedy Algorithms for Virtual Machines Migration and Consolidation to Optimize Energy Consumption in a Data Center Rasoul Beik Islamic Azad University Khomeinishahr Branch, Isfahan,
Xiaoqiao Meng, Vasileios Pappas, Li Zhang IBM T.J. Watson Research Center Presented by: Payman Khani
Improving the Scalability of Data Center Networks with Traffic-aware Virtual Machine Placement Xiaoqiao Meng, Vasileios Pappas, Li Zhang IBM T.J. Watson Research Center Presented by: Payman Khani Overview:
VIA COLLAGE Deployment Guide
VIA COLLAGE Deployment Guide www.true-collaboration.com Infinite Ways to Collaborate CONTENTS Introduction... 3 User Experience... 3 Pre-Deployment Planning... 3 Connectivity... 3 Network Addressing...
Scalable Approaches for Multitenant Cloud Data Centers
WHITE PAPER www.brocade.com DATA CENTER Scalable Approaches for Multitenant Cloud Data Centers Brocade VCS Fabric technology is the ideal Ethernet infrastructure for cloud computing. It is manageable,
Large-Scale Distributed Systems. Datacenter Networks. COMP6511A Spring 2014 HKUST. Lin Gu [email protected]
Large-Scale Distributed Systems Datacenter Networks COMP6511A Spring 2014 HKUST Lin Gu [email protected] Datacenter Networking Major Components of a Datacenter Computing hardware (equipment racks) Power supply
Requirements of Voice in an IP Internetwork
Requirements of Voice in an IP Internetwork Real-Time Voice in a Best-Effort IP Internetwork This topic lists problems associated with implementation of real-time voice traffic in a best-effort IP internetwork.
Auspex Support for Cisco Fast EtherChannel TM
Auspex Support for Cisco Fast EtherChannel TM Technical Report 21 Version 1.0 March 1998 Document 300-TC049, V1.0, 980310 Auspex Systems, Inc. 2300 Central Expressway Santa Clara, California 95050-2516
Chapter 13 Selected Storage Systems and Interface
Chapter 13 Selected Storage Systems and Interface Chapter 13 Objectives Appreciate the role of enterprise storage as a distinct architectural entity. Expand upon basic I/O concepts to include storage protocols.
How the Port Density of a Data Center LAN Switch Impacts Scalability and Total Cost of Ownership
How the Port Density of a Data Center LAN Switch Impacts Scalability and Total Cost of Ownership June 4, 2012 Introduction As data centers are forced to accommodate rapidly growing volumes of information,
Introduction to Cloud Design Four Design Principals For IaaS
WHITE PAPER Introduction to Cloud Design Four Design Principals For IaaS What is a Cloud...1 Why Mellanox for the Cloud...2 Design Considerations in Building an IaaS Cloud...2 Summary...4 What is a Cloud
LCMON Network Traffic Analysis
LCMON Network Traffic Analysis Adam Black Centre for Advanced Internet Architectures, Technical Report 79A Swinburne University of Technology Melbourne, Australia [email protected] Abstract The Swinburne
All-Flash Arrays Weren t Built for Dynamic Environments. Here s Why... This whitepaper is based on content originally posted at www.frankdenneman.
WHITE PAPER All-Flash Arrays Weren t Built for Dynamic Environments. Here s Why... This whitepaper is based on content originally posted at www.frankdenneman.nl 1 Monolithic shared storage architectures
Improving Quality of Service
Improving Quality of Service Using Dell PowerConnect 6024/6024F Switches Quality of service (QoS) mechanisms classify and prioritize network traffic to improve throughput. This article explains the basic
Performance of Software Switching
Performance of Software Switching Based on papers in IEEE HPSR 2011 and IFIP/ACM Performance 2011 Nuutti Varis, Jukka Manner Department of Communications and Networking (COMNET) Agenda Motivation Performance
Data Center Networking Designing Today s Data Center
Data Center Networking Designing Today s Data Center There is nothing more important than our customers. Data Center Networking Designing Today s Data Center Executive Summary Demand for application availability
Load Balancing Mechanisms in Data Center Networks
Load Balancing Mechanisms in Data Center Networks Santosh Mahapatra Xin Yuan Department of Computer Science, Florida State University, Tallahassee, FL 33 {mahapatr,xyuan}@cs.fsu.edu Abstract We consider
The Future of Computing Cisco Unified Computing System. Markus Kunstmann Channels Systems Engineer
The Future of Computing Cisco Unified Computing System Markus Kunstmann Channels Systems Engineer 2009 Cisco Systems, Inc. All rights reserved. Data Centers Are under Increasing Pressure Collaboration
A Scalable Large Format Display Based on Zero Client Processor
International Journal of Electrical and Computer Engineering (IJECE) Vol. 5, No. 4, August 2015, pp. 714~719 ISSN: 2088-8708 714 A Scalable Large Format Display Based on Zero Client Processor Sang Don
Walmart s Data Center. Amadeus Data Center. Google s Data Center. Data Center Evolution 1.0. Data Center Evolution 2.0
Walmart s Data Center Network Virtualization and Data Center Networks 263-3825-00 Data Center Virtualization - Basics Qin Yin Fall emester 2013 1 2 Amadeus Data Center Google s Data Center 3 4 Data Center
