Cloud gaming and simulation in Distributed Systems. GingFung Matthew Yeung. BSc Computer Science 2014/15

School of Computing, Faculty of Engineering
Cloud gaming and simulation in Distributed Systems
GingFung Matthew Yeung
BSc Computer Science 2014/15

The candidate confirms that the following have been submitted:

Items            Format                  Recipient(s) and Date
Deliverable 1    Report                  SSO (12/05/15)
Software Code    Software code or URL    SSO, Supervisors (12/05/15)
Deliverable 2    User manuals            SSO (12/05/15)

Type of Project: Exploratory

The candidate confirms that the work submitted is their own and that appropriate credit has been given where reference has been made to the work of others. I understand that failure to attribute material obtained from another source may be considered plagiarism.

(Signature of student)

GingFung Matthew Yeung

© 2015 The University of Leeds and GingFung Matthew Yeung

Summary

A game engine is the core of a game and generally contains the major game components. A game engine can be computationally intensive and must operate with high frequency and low latency. A user with a limited hardware specification will not achieve the best gaming experience; the game will always be limited by the hardware available for its computations. My aim in this project is to increase the hardware and resources available. This project presents a detailed summary of why and how to distribute a game engine across a distributed system and whether it is feasible, discussing the advantages and disadvantages of distributing a game engine. Finally, to test the theory, the final section details the evaluation carried out on my game simulation and gives a conclusion on its feasibility.

Acknowledgements

I would like to thank the following people: my supervisors, Jie Xu, David Webster and Peter Garraghan, for their support and feedback; they gave me tremendous help along the way. My assessor, Marc de Kamps, for his feedback on my project, which helped me gain a deeper understanding of what I should clarify. And those who gave me support and kept me going: my friends and family.

Glossary of Terms

CPU (Central Processing Unit): the brain of a machine, containing processors and a control unit. The CPU performs most of the computation and instructions of a computer program. [21]
GPU (Graphics Processing Unit): typically used for rendering, graphics computation and image processing. [22]
MMOG (Massively Multiplayer Online Game): a type of game that can support a very large number of players simultaneously, usually played over the Internet. [1]
FPS (First Person Shooter): a type of game based on a first-person perspective and mostly played around guns. [1]
VM (Virtual Machine): a software implementation of what you can see and do on a physical machine, such as running programs. Each VM is allocated an amount of virtual CPU and memory. [23]
AI (Artificial Intelligence): computer programs that mimic human thinking and acting. [24]

Table of Contents

Summary
Acknowledgements
Glossary of Terms
Table of Contents
1. Introduction
1.1 Project Aim
1.2 Project Objectives
1.3 Proposed Solution
1.4 Deliverables
1.5 Initial Schedule
2. Background
2.1 Introduction of Cloud Computing
2.2 Introduction of Cloud Gaming
2.3 Distributed Systems
2.4 Cloud Computing and Distributed Systems
2.5 Project Problems
2.6 Related Work
3. Proposed Models and Methodology
3.1 Abstract System Model
3.2 Experiment Model
3.3 Methodologies
4. Experiment
4.1 Non-Distributed
4.2 Distributed
4.3 Summary
5. Evaluation
5.1 Project Objectives
5.2 Project Requirements
5.3 Project Schedule
Conclusion
Further Work
References
Appendix A External Sources
Appendix C User Manual
Appendix B Ethical Issues
Appendix D Personal Reflection

Chapter 1 Introduction

Why is it that running a game on your local physical machine takes up such a large proportion of its CPU and memory? Why is it that sometimes a game is too advanced for the hardware components of the user's local machine to support? Typically, playing video games requires high utilization of a machine's hardware resources, which is what sometimes makes a game look stuttery. In an MMOG, network latency can also slow the game down and make it look laggy. Many factors affect the gaming experience on a user's physical machine. One thing is certain: running a game can be computation and memory intensive, and it must operate with high frequency and low latency, as stated in [9]. The large amount of calculation can cause the user's machine to run into resource bottlenecks and give a bad gaming experience. This is because, at the core of a game, an engine is running: the game engine. But what is a game engine? In the book Game Engine Architecture [1], the author gives a complete review of the core components of a typical game engine, which is essentially the core of a video game, as shown below.

Fig. 1.15 Game Engine Architecture: runtime game engine architecture

There are plenty of components in the engine shown above; however, the major ones are:

Graphics engine
Physics engine
AI engine

In the modern industry, most gamers focus on the graphics and physics aspects of a game, seeking perfect, realistic graphics. Therefore, as stated in [17,18], in a Massively Multiplayer Online Game the game server that runs the game engine uses most of its computation power to support the heavy graphics rendering and physics calculation; hence it has insufficient computing power "to support computational demands of thousands of even moderately sophisticated, concurrently running AIs" [6]. While the graphics and physics components are important, smarter AIs are also desirable in a better game design, to make the game more enjoyable. In other games, such as Dwarf Fortress [20], a single-player, single-threaded construction game, large numbers of entities are present, resource bottlenecks occur often and the frame rate drops sharply as time goes on. Because the game keeps track of almost every entity, including running its AI calculations, memory and CPU usage build up. This can result in a massive drop in frame rate, because the game loop has to wait until all computation finishes for each frame before rendering it, which makes the game look laggy; the gaming experience is then not enjoyable. This shows that AI calculation is as important, and can be as complex, as graphics and physics calculation. The game engine and the game design architecture are therefore an important blueprint of a successful game.

1.1 Project Aim

With more and more computationally expensive games accepting large numbers of players, gamers' local machines need frequent hardware upgrades, which cost them money and time. Therefore, as stated before, the aim of this project is to increase the hardware resources available for game computations. Currently there are three types of distributed game architecture that can take the pressure away from the gamer's local machine:

Type 1: Client-server architecture, such as the Quake game server. Quake is a 3D FPS game involving a number of players, or it can be played in single-player mode. In this architecture the server runs the game loop, and both the clients and the server have the game engine.

Multiple users connect to the server and send control input, then receive results via the Internet and render the scene on their own physical machines, as described in [28]. This means that if the client machine's hardware components are outdated the gaming experience is not ideal, and the approach is very latency sensitive.

Type 2: Cloud gaming, a new approach that has emerged recently to help users with limited hardware components on their machines by taking advantage of the cloud computing paradigm. This is explained in detail in the following chapter.

Type 3: As described in [31] by Jiaqiang et al., a peer-to-peer model that distributes game computation from the server to the clients; however, it exposes the risk of malicious gamers hacking the game, because the code is available on the clients.

These three approaches are good ways to host a game and ease the pressure on client machines, and they have already been explored and well optimized. There is another possibility, which this project explores to identify whether it is feasible. The idea is that the game engine of a video game stays on the user's local machine, and the user can take advantage of his additional local physical machines, if any are available, to make up for or improve the gaming experience by distributing the core components of the game engine to other distributed nodes, which carry out the intensive computation and different tasks, forming a distributed system. For example, a typical Sony PlayStation 4 has 8 CPU cores [29]; with this approach an additional machine with 4 cores can contribute, so that the distributed system as a whole has 12 cores available for game computations. With the extra resources, computations can be done faster. The gamer gains an option of "boosting", limiting the chance of machines running into resource bottlenecks because of hardware limitations. However, communication between machines involves overheads. Furthermore, the machines must be connected over a network, and in a distributed system the fallacies described in [7] cannot easily be avoided. Therefore, the aim of this project is to explore and identify the feasibility of distributing the core game engine components across a distributed system, to evaluate the effect on performance, and to find the optimal threshold after distribution.

Performance can be measured by the computation frame rate (the rate of the game loop) and by monitoring CPU and memory usage. The time taken to get all processing done and render the scene is the main issue of this project. The key challenge is using a distributed system approach while maintaining low latency, which is hard; latency is the subject of a well-known fallacy of distributed systems. This project will also investigate the latency that occurs.

1.2 Project Objectives

The objectives of this project are:

1. Research previous work and the idea of using a distributed system to overcome hardware limitations on game computations.
2. Experiment with the feasibility of the distributed system approach to gaming, and identify the pros and cons of this approach.
3. To run the experiments, design a game simulation and carry out the following:
a. Single threaded: show that resource bottlenecks do occur on one physical machine, show the relationship between CPU usage and frame rate, and identify which game engine component uses the most resources.
b. Distributed: ideally, show that distributing part of a game engine can improve game performance for the system as a whole, for example that the frame rate does not drop as sharply and CPU usage does not rise as sharply.
4. Suggest future work related to this project's aim in a more scalable model, using the cloud computing paradigm as part of a distributed system. The model is introduced and explained in the next chapter.

Finally, this project gathers my experiment results and measures whether improvements are achieved after distributing an engine component across another physical machine acting as a distributed node.

1.3 Proposed Solution

As described in [6], in most current types of game, AI computation is suppressed in order to support large, surreal 3D graphics and accurate physics calculation for large numbers of entities. Also, games that have many entities and a large world, such as Dwarf Fortress mentioned in the introduction, include a lot of AI calculation. As described by Janusz Grzyb in [10], it is common for AI computation to grow as the world map gets bigger, because of algorithms such as path finding: searching for optimum paths for a large number of entities leads to a build-up of large AI computations.

Another article, on the FPS game F.E.A.R. [30], describes the most commonly seen and implemented AI computations as the A* path-finding algorithm and finite state machines; these are used widely in video games in general and can consume a lot of CPU. Therefore, based on these two articles, I assume that AI computation can get heavy in games. Although other components' computations can also be expensive, in this project I will experiment and record the CPU usage of each component, and finally distribute the heaviest component to another physical machine, thereby identifying whether distributing the game engine is feasible.

1.4 Deliverables

The deliverables of this project are:

A report, which contains my evaluation and background information.
Software code (a URL to my GitHub implementation of the experiment game model).
A user manual for the software.

1.5 Initial Schedule

Figure 1.1 Initial schedule.

Figure 1.2 Initial Gantt chart.

Chapter 2 Background

This chapter gives the background information for this project and defines the following:

Cloud computing
The traditional approach to cloud gaming
Distributed systems
Cloud computing and distributed systems
Project problems

2.1 Introduction of Cloud Computing

Cloud computing is where resources, services and applications at data centers are shared and delivered over the Internet, as defined in [2]. According to one of the biggest cloud service providers, Amazon Web Services [3], the term cloud computing refers to the on-demand delivery of IT resources and applications via the Internet with pay-as-you-go pricing. The quality of service of the on-demand resources should be maintained and agreed upon before usage of the cloud via a Service Level Agreement, and the resource manager should assign and guarantee the resources at all times, as described in [14]. The definition of cloud computing in [2] also includes resource sharing: resources are provided by cloud providers from data centers, including data storage, software applications, infrastructure and system platforms. This gives the major benefits of using cloud computing, usually defined as:

Software as a Service: avoid the need to install software applications and still be able to access them.
Platform as a Service: virtualization, virtual software tools and a virtual platform for cloud users to create and build cloud applications.
Infrastructure as a Service: fundamental resources such as CPU and storage on which to run software.

These three services give the cloud great scalability; essentially, the cloud can become an additional large computer on which to process, run or build other software applications, as long as you have a machine that can connect to the Internet.

2.2 Introduction of Cloud Gaming

In the current state of the art, cloud gaming is where games are provided via the cloud from a data center, an approach similar to video streaming, as stated in [4]. This cloud gaming model requires users to access the game through the Internet using a thin client application installed on the local machine, similar to the Quake client described in [28]; however, the game engine itself is not on the user's machine but in the data center. The user provides control input through the thin client, the input is delivered to the data center, the server does the calculation while running the game loop and renders the scene in the data center, and finally the video data is sent back to the client over the Internet as streaming video. This is described in detail in [4]. The main benefits of this model are:

Users avoid upgrading their computers for the latest games.
Games can be played on different platforms via the client.
Users can play more games despite hardware/software incompatibility.

From these three statements it is clear that the user can rely on the data center's high computation power and graphics cards to compute and render the game scenes; this greatly reduces the need to constantly upgrade the user's local physical machine. However, this approach is very latency sensitive because it streams video: when a packet is lost or missed, video quality can be greatly reduced. Also, because the user's local machine does not have the game engine at all, a missed packet can make the game appear to stop, equivalent to a dropped frame. Quite a few companies currently support this type of cloud gaming system, such as:

OnLive
StreamMyGame
Gaikai (Sony)
NVIDIA GRID

However, this does not meet my project objective of distributing the game engine, because the game engine is not located on the user's local machine at all. The downsides of this model are that if the data center does not have the game engine of a particular game then you cannot play it, and you must be connected to the Internet. The system this project targets is discussed in chapter three.

2.3 Distributed Systems

By the definition of Coulouris, Dollimore and Kindberg in [25], a distributed system is typically a number of computers forming a network, which communicate and coordinate their work concurrently within the network by sending messages. This is shown in the figure below.

Figure: Service models for distributed systems, a computer network.

According to this definition, the Internet, which is a massive network of computers, can be described as a distributed system, as it allows communication by passing messages and enables end-to-end nodes to collaborate on work. In the book [26], a distributed system is described as having characteristics such as:

Geographical distribution: in [26], the example is a banking network in which geographically placed ATMs communicate with a server.
Resource sharing: both [25] and [26] state that resource sharing is a fundamental characteristic; for example, when one machine needs additional storage it can ask another about its availability.

Scalability: in [27], this refers to the ability to operate efficiently and remain responsive as the number of users grows large; for example, one machine failing in the network does not disrupt and ruin the operation.

Distributed system design can be used in games to offer very large scalability, typically for games that involve a large number of nodes connecting to the game server. This is best explained and demonstrated by MMOGs. In [1], an MMOG is described as a game in which large numbers of players connect to geographically distributed game servers; the game engine is on both the client machines and the server, but the game state calculations are on the server, as it keeps track of the number of players, the players' game information, their positions and so on. In [25], another definition of a distributed system in games is that "users are dynamically allocated a particular server based on current usage patterns and also the network delays to the server". As a practical example of the distributed system approach in games, [17] identifies that AI computation on the game server can be expensive, so Douceur et al. (2007) experiment with offloading the AI computation: computation that the server needs at high frequency but that is not intensive stays on the server, while computation that is needed at lower frequency but is highly intensive is distributed to the client. Their model focuses on distributing just the AI computation to the client on one machine, whereas the idea in this project is to distribute game engine components to any machines the user owns, thereby taking advantage of the extra resources of the user's own machines.

2.4 Cloud Computing and Distributed Systems

According to the definitions of distributed systems and cloud computing, both involve resource sharing, so cloud computing can be considered a distributed system because of its characteristic of sharing resources on demand. Resource sharing is a major characteristic of both: in cloud computing, resources are requested on demand and the underlying hypervisors can allocate more resources to a VM, whereas in a physical-machine distributed system the resources are always there, pre-allocated to the machines. Cloud computing allows VMs to coordinate and communicate to do work or process concurrently within the cloud, which is essentially the definition of a distributed system: a group of machines forming a network and working concurrently. Therefore, in this project I consider cloud computing to be a type of distributed system.

However, the aim of this project is to distribute game engine components over distributed nodes; using the cloud computing paradigm is not strictly necessary for this, but the aim is to explore the feasibility of distributing a game engine in a distributed system, and the cloud is just part of that exploration.

2.5 Project Problems

Looking back at the traditional cloud gaming approach, the distributed AI computation approach and the Quake centralised server approach, none of them meets my project aim of distributing the game engine across a distributed system or distributed nodes, where a user can take advantage of extra hardware resources. In Game Engine Architecture [1], the game engine is described as the core of the game, responsible for running iterations of the game loop. The game loop contains the calculations of the major game engine components plus the other small subsystems of the engine, and typically all calculation and simulation should run at 60 frames per second (16.67 ms per frame) or 30 frames per second (33.3 ms per frame) [1], depending on the type of game; in this project we simply call this the frame rate. After each loop finishes, the scene (one frame) is rendered by the graphics component of the game engine and displayed on the user's machine.

In the traditional cloud gaming system, the game engine and game loop are centralised in the data center, similar to the Quake server approach, but the user does not have the game engine and the game is rendered in the data center on one machine [4]. That one machine contains a finite number of CPUs and a finite amount of memory, so there is a limit to the amount of work it can do. The cloud gaming user takes advantage of this one machine, plays the game via the Internet and receives the rendered video streamed to the client machine. In the more traditional approach to video gaming, where the game is installed locally on the player's physical machine, the game engine, the computation and the rendering all run on the player's machine, as described in [6]; the local machine does all the work, which requires intensive CPU and memory usage. This is also the case in the Quake centralised server model, where the game loop runs on the server but the client runs the game engine to render the scene.

The above approaches do not solve the intended project aim. The challenge of this project is to distribute the game engine components inside a data center, so that users can take advantage of the data center's resources across multiple physical machines, such as CPU and additional memory, while maintaining a good gaming experience.
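To make the frame budget above concrete, the sketch below shows a minimal fixed-timestep game loop that targets the 16.67 ms (60 FPS) budget. It is only an illustration under my own assumptions: the component functions are empty stand-ins and none of the names come from a real engine.

#include <chrono>
#include <thread>

// Empty stand-ins for the major engine components discussed above.
void updatePhysics() {}
void updateAI()      {}
void render()        {}

int main() {
    using clock = std::chrono::steady_clock;
    const auto frameBudget = std::chrono::microseconds(16667); // ~60 FPS

    for (int frame = 0; frame < 600; ++frame) {   // e.g. ten seconds of simulation
        auto frameStart = clock::now();

        updatePhysics();   // collision detection etc.
        updateAI();        // finite state machines, path finding
        render();          // draw one frame

        // Sleep off whatever is left of the 16.67 ms budget; if the work
        // overruns the budget, the frame rate falls below 60 FPS.
        auto elapsed = clock::now() - frameStart;
        if (elapsed < frameBudget)
            std::this_thread::sleep_for(frameBudget - elapsed);
    }
    return 0;
}

When a component's work cannot fit inside the budget, the only options are to drop frames or to find more computing power, which is the motivation for distributing components in the first place.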

A data center can have multiple physical machines. These physical machines can form a distributed system as a network, so sharing resources for game engine computation is available; alternatively, if a cloud system is available, the data center can host a cloud environment in which virtualization takes place, so resources such as CPU, memory and data storage can be shared on demand. Virtualization allows VMs to be spawned to handle different processes whenever needed, as described in [14]. The benefit of using virtualization is the ability to scale the resources available to one virtual machine. Typically, one physical machine with multiple CPUs runs the whole game engine; with virtualization, one powerful physical machine can host multiple VMs, so each VM can host a component with enough resources. A global scheduler in the cloud should maintain and manage the workload of all the VMs. This project explores the feasibility of distributing the components across a distributed system with distributed nodes; cloud computing is in fact a type of distributed system and can therefore provide distributed nodes since, as described above, resources and applications can be shared and processed in a cloud environment. In this project, however, the experiment uses the physical-machine approach to a distributed system in the Eniac laboratory, to identify whether distributing game engine components across a network is feasible; the next stage, as further work, could be to distribute onto the cloud as an exploration. The main variables to consider when evaluating the experiment are:

Latency issues: most online games, especially real-time critical games such as FPS games, are time sensitive; this is one of the areas assessed in the evaluation section.
Resource issues: the resource bottleneck and the overhead of distributing and communicating.

2.6 Related Work

As mentioned in chapter one, there are three types of gaming system that can address resource bottleneck issues. This section discusses related work on distributing the game engine, apart from those three systems. In "Enhancing Game-Server AI with Distributed Client Computation", Douceur et al.'s aim was to avoid expensive computation on a central server (one machine) in the context of an online RPG. Their method was to split the AI in two: computation that the server needs at high frequency but that is simple stays on the server, while computation that is heavy but not frequently needed by the server is distributed to the client machines.

However, they state that the approach is not without sacrificing the security of the game, and communication delay can be added to code that normally executes inside the game server. Their finding is that their technique successfully reduces the amount of computation on the server even when latencies are more than one second; they also suggest that once the one-second threshold of network latency is reached, the better choice is to do the computation inside the server. In "Is it Practical to Offload AI over the Network", Bai et al. focus on distributing the AI to all client machines in a P2P network. They state that the drawback of their method of offloading AI to clients is exposing the AI code to all client machines, which can allow hackers to modify the game. Their finding was that beyond a threshold of 275 ms latency their implementation starts to perform worse. Both methods distribute the game engine to unknown clients, via a P2P network or directly, to harness their computation power at the cost of the security of the game. This project, by contrast, focuses on distributing the game engine inside a data center or a cloud, so that no part of the game engine is exposed to any player except the one running it. For example, in an MMOG the server runs the game engine and does all the world computation on one machine, while players run a client that receives the game state and renders it on their local machines. With our proposed model, the game server can distribute part of the game engine within a data center that is relatively close to the machine running the game loop, saving resources on the server and so relieving the resource bottleneck that can cause the game server to break down.

Chapter 3 Proposed Models and Methodology

In this chapter the following models are introduced and explained:

Abstract system model: this explains how cloud computing could be applied in a distributed system approach to this research. I did not implement the abstract system model during this project; it relates to the further work in chapter 5.
Experiment (distributed) model: this is the approach the project experiments on. It is used to evaluate whether distributing the game engine is feasible, and the results are discussed in chapter 4.

3.1 Abstract System Model

This is a theoretical abstract system model, intended as further work, that takes advantage of cloud computing. Its aims are:

The user can sit and play on his local machine and, when it does not have enough resources, use the extra resources available in the cloud provider's data center.
Game engine components can be distributed to those extra resources, VMs or distributed nodes.
VMs execute the computations required by each game engine component.
Quality of service is maintained and agreed upon first usage.
A global scheduler or hypervisor manages the workload of all VMs.

At first usage, before distributing the game (figure 3.1.1), the user should establish the QoS with the service provider via the Internet. The cloud service provider then knows the amount of resources it should guarantee and maintain when providing VMs for client usage. Next, the user runs the game loop on his own local machine only; when there are not enough extra resources, the user can distribute the game engine components onto the cloud hosted by the data center. The cloud contains VMs that are responsible for each component's computation, as seen in figure 3.1.2 below. Since cloud resources are provided on demand, with extra resources the VM that hosts a component can distribute tasks to extra VMs for quicker computation.

Finally, since the user's local machine runs the game loop, the loop is set to 60 FPS, so each iteration runs in about 16 ms; ideally the results should come back from the network within this time.

Figure 3.1.1 At first usage, the user's local machine still contains the components and needs to establish the QoS via the SLA with the cloud service provider's data center.

Figure 3.1.2 The components distributed to VMs; with extra VMs, tasks can be executed faster.

The advantage of this abstract system model is that the user can rent the resources available on the cloud system, as part of a distributed system, to carry out the intensive computation required by the game engine. The assumptions of this abstract model are:

The service provider will establish the SLA.
The data center and the VMs will not fail.
The network between the user and the data center will not fail.
The game loop constantly runs as fast as possible, at around 60 FPS.

3.2 Experiment Model

Below is the experiment model this project uses to identify the feasibility of distributing a game engine component across distributed nodes. To identify the benefit of a distributed component compared with running the game on one machine only, the first experiment runs the game on one physical machine; the game is implemented single threaded, and the application is set to run on one CPU only with the taskset Unix command. The aim of this experiment is to show that there is a limitation on one machine with one CPU when using my single-threaded game model, to discover which component uses the most time per frame, and to identify the differences in CPU usage and frame rate under various game settings. The results of this first experiment are used as a baseline for comparison with the distributed game engine model, to see whether improvements are made. After discovering which game engine component uses a lot of resources compared with the others, the aim is to distribute that component to the available machines. Therefore, in the second experiment, running the game on two and on three physical machines is explored: one machine runs the game loop together with the components that are less intensive than the one discovered in the first experiment, and the others run the distributed component's computations. In the implementation, the game loop tries to run as fast as it can to optimize the gaming experience, so the time to finish each loop is set to match 60 FPS (16 ms), which most types of video game are set to run at. In the game design, when a client machine responsible for the distributed component's computation connects to the game loop machine, the server (the game loop machine) spawns a thread for each client to handle the communication.

The purpose of the thread is to handle the communication without the game loop blocking to wait for the communication to finish. This is shown in figures 3.2.1 and 3.2.2 below.

Figure 3.2.1 A blocking model of the threads. For example, thread two finishes well before thread one but has to wait because thread one has not finished yet, and the game loop (server thread) has to wait for both threads to finish before completing the loop (one frame).

Figure 3.2.2 A non-blocking model of the threads; this is the model I am implementing.

The advantage of this model, as explained above, is that the game loop continues to execute without waiting for the threads to finish, and likewise each client thread can continue its communication and computation without waiting for the others, achieving faster execution. If the loop waited for the result to come back every iteration, the game could appear slow and would not achieve the optimum 16 ms loop time (results not getting back to the server within 16 ms). Furthermore, if it had to wait for a result when latency occurs in the network or data is lost in transit, the game could appear to stop, or the scene could suddenly jump if graphics component data were lost (results not returning at all).
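The sketch below illustrates the non-blocking idea in figure 3.2.2 with a single worker thread standing in for one client: the game loop only applies an AI result if one has already arrived and never waits for the worker. The names, the 40 ms delay and the integer state encoding are my own illustrative assumptions, not the project's code.

#include <atomic>
#include <chrono>
#include <iostream>
#include <thread>

// A minimal sketch of the non-blocking model in figure 3.2.2, assuming one
// client/worker. The "client work" here is just a sleep standing in for the
// remote AI computation plus the network round trip.

std::atomic<bool> resultReady{false};
std::atomic<int>  latestAiState{0};   // e.g. 0 = Patrol, 1 = Hungry, 2 = Tired

void clientWorker() {
    // Simulate remote computation and communication taking longer than one frame.
    std::this_thread::sleep_for(std::chrono::milliseconds(40));
    latestAiState = 1;                // hypothetical "Hungry" result
    resultReady = true;
}

int main() {
    std::thread worker(clientWorker); // the server spawns one thread per client

    const auto frameBudget = std::chrono::milliseconds(16); // ~60 FPS
    for (int frame = 0; frame < 10; ++frame) {
        auto start = std::chrono::steady_clock::now();

        // The loop never blocks on the worker: it only applies a result if
        // one has arrived, otherwise it keeps using the previous state.
        if (resultReady.exchange(false))
            std::cout << "frame " << frame << ": applied AI state "
                      << latestAiState << "\n";

        // ... physics, graphics and local work would run here ...

        std::this_thread::sleep_until(start + frameBudget);
    }
    worker.join();
    return 0;
}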

Data is transferred using a stream socket structure. The aim of this experiment is to show that after distributing the computations, CPU usage does indeed drop on the machine that runs the loop, and the frame rate does not slow down as quickly. This would indicate that distributing components saves an amount of resources while the game loop keeps executing as fast as possible. However, as described for figure 3.2.2, if the game loop runs at the optimum rate but the data does not get back in time for each loop, the game quality still decreases; choosing the right component to distribute is therefore vital. As my research indicates, and as stated by Douceur et al. in [17], AI is the most latency-resilient component in the game engine, because some large AI calculation results are not needed by the game state very frequently; the tolerance can be up to one second. The physics and graphics components, by contrast, must run as quickly and as accurately as possible to make the game as virtually real as possible, as described in Game Engine Architecture. Therefore, in experiment one of this project's experiment model, while finding the limitations and identifying which component uses the most resources, the aim is also to prove that AI can consume a lot of resources, as stated in chapter 2; distributing AI can then be a good option for distributing a game engine component, because it is more latency resilient whether the game is multiplayer or single player. Although distributing AI is a good option, this project aims to identify whether components in general are feasible to distribute, so physics and graphics could be good options as well if the results indicate that it is more feasible to do so. The complete methodology of experiments one and two is explained in the next section. The assumptions in this model are:

The clients (distributed machines) will not fail.
Data will not be lost in transit.
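Complementing the previous sketch, the listing below shows a minimal shape for the stream-socket server side described here: a TCP listening socket that accepts each AI client and hands its connection to a detached thread, so the game loop (not shown) never blocks on any individual client. The port number and the echo-style handler are placeholders; the project's real protocol is not reproduced here.

#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <thread>

// Handle one connected AI client on its own thread. Here the handler just
// echoes bytes back as a stand-in for the real request/response exchange.
void handleClient(int clientFd) {
    char buf[256];
    ssize_t n;
    while ((n = read(clientFd, buf, sizeof(buf))) > 0) {
        if (write(clientFd, buf, static_cast<size_t>(n)) < 0) break;
    }
    close(clientFd);
}

int main() {
    int listenFd = socket(AF_INET, SOCK_STREAM, 0);   // TCP stream socket
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(5000);                      // hypothetical port
    bind(listenFd, reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
    listen(listenFd, 4);

    // One detached thread per connected client, as in section 3.2.
    while (true) {
        int clientFd = accept(listenFd, nullptr, nullptr);
        if (clientFd < 0) break;
        std::thread(handleClient, clientFd).detach();
    }
    close(listenFd);
    return 0;
}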

Figure 3.3 Experiment model.

Figure 3.3 above shows the basic model of my simulation. The game engine component to be explored is the AI component, as described above, and the experiment uses two machines for the AI computations. Section 3.3 below gives the detailed implementation of the simulation.

3.3 Methodologies

Although there are free, open-source game engines on the Internet, such as Quake, Quake II and Unreal, which allow developers to gain a deeper understanding of complicated game architecture, writing my own implementation is a good way of learning and practising game development and helps to build a stronger foundation of programming techniques. The game simulation is similar to Pac-Man/Bomberman: a strategic, 2.5D maze game. This choice was made because graphics rendering, physics calculation and AI calculation all grow as the number of game entities increases (graphics, and physics collision detection) and as the map size increases (physics, and AI path finding), allowing the experiment to show that resource bottlenecks do indeed occur. The experiments are conducted in the Eniac laboratory at the University of Leeds, on machines with an Intel i-series CPU (4 cores) and 16 GB of memory. To push CPU usage to 100%, all four cores would have to be doing intensive work; however, my game implementation is single threaded, so the game is set to run on one CPU only. This means that only one CPU needs to be monitored, and 100% usage on one CPU is reached much faster. In the game model, the graphics component draws the maze, including the walls, the cheese and the entities. The physics component handles collision detection between each entity and the world objects. Two kinds of entity with AI implemented are present in the simulation: Cat and Mouse. Each entity runs its own AI calculation. The AI calculation consists of finite states and a method that simulates a heavy, computationally expensive calculation such as A* path finding, as discussed by Leigh et al. in [32]. The simulated method calculates prime numbers and scales up quickly as the map size gets bigger, since I have set it to find all prime numbers up to MapSize*MapSize. Each finite state is shown in the figures below (figures 3.3.1 to 3.3.4), including the state flow diagram in figure 3.3.1. After the diagrams, the following paragraphs give the details of the game objectives.
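As an illustration of the simulated AI load just described, the sketch below counts primes up to MapSize*MapSize by naive trial division, which grows quickly with map size, and notes the taskset pinning used to keep the single-threaded game on one CPU. This is my own sketch of the idea, not the code in the repository.

#include <cstdint>
#include <iostream>

// Naive trial-division prime count up to mapSize*mapSize, standing in for an
// expensive A*-like AI calculation. Its cost grows rapidly with map size,
// which is the property the experiment relies on.
std::uint64_t simulatedAiLoad(std::uint64_t mapSize) {
    const std::uint64_t limit = mapSize * mapSize;
    std::uint64_t primes = 0;
    for (std::uint64_t n = 2; n <= limit; ++n) {
        bool isPrime = true;
        for (std::uint64_t d = 2; d * d <= n; ++d) {
            if (n % d == 0) { isPrime = false; break; }
        }
        if (isPrime) ++primes;
    }
    return primes;
}

int main() {
    // Pin the whole (single-threaded) game to one CPU before launching, e.g.:
    //   taskset -c 0 ./game
    for (std::uint64_t mapSize : {20, 40, 80})
        std::cout << "map " << mapSize << "x" << mapSize << ": "
                  << simulatedAiLoad(mapSize) << " primes\n";
    return 0;
}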

Figure 3.3.1 In each animal, the finite states are Patrol, Tired and Hungry. In every game loop each animal constantly checks its status to decide whether it is tired, hungry or should stay in the Patrol state. Patrol is the default, normal state.

Figure 3.3.2 The basic actions inside the Patrol state. While the animal is not in another state, this state is called and executes the methods shown. The detailed implementation is in the AI directory of my GitHub repository.

Figure 3.3.3 Basic actions inside the Hungry state.

Figure 3.3.4 Basic methods inside the Tired state.
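As a rough illustration of the Patrol/Hungry/Tired machine in figures 3.3.1 to 3.3.4, the sketch below shows one possible shape for the per-animal state check run every game loop. The hunger and energy counters and their thresholds are hypothetical; the actual behaviour lives in the AI directory of the repository.

// A minimal sketch of the per-animal finite state machine, assuming
// hypothetical hunger/energy counters; not the repository's actual class.
enum class AnimalState { Patrol, Hungry, Tired };

struct Animal {
    AnimalState state = AnimalState::Patrol;  // Patrol is the default state
    int hunger = 0;
    int energy = 100;

    // Called once per game loop: decide which state the animal should be in,
    // then perform that state's actions.
    void update() {
        if (energy < 20)       state = AnimalState::Tired;
        else if (hunger > 80)  state = AnimalState::Hungry;
        else                   state = AnimalState::Patrol;

        switch (state) {
            case AnimalState::Patrol: patrol();   break; // wander the maze
            case AnimalState::Hungry: seekFood(); break; // head for cheese (mouse) or a mouse (cat)
            case AnimalState::Tired:  rest();     break; // recover energy
        }
    }

    void patrol()   { ++hunger; --energy; }
    void seekFood() { hunger = 0; --energy; }
    void rest()     { energy = 100; }
};

int main() {
    Animal a;
    for (int frame = 0; frame < 200; ++frame) a.update(); // drive a few loops
    return 0;
}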

The aim of the cat, represented by a white cube, is to catch any mouse; the aim of the mouse, represented by a turquoise cube, is to get some cheese. The animals cannot see anything, which makes collision detection happen frequently. There is a three-dimensional tile map that contains the walls, the cheese and the animals, shown in figure 3.3.5 below.

Figure 3.3.5 The 2.5D game simulation, 20x20 map size.

The software implementation follows a Model-View-Controller structure: the view represents the graphics component; the model contains the physics calculation, the game objects and the entities, and is also responsible for handling the clients; finally, the controller is responsible for the viewing controls. This is shown in figure 3.3.6 below.

Figure 3.3.6 The MVC model of this application; the game model is the class that handles all the objects and runs the game loop.

To make the simulation run at 60 FPS, it is set to render the scene in the view class every 16 ms. The collision detection in the model class tests each object against each entity. In the simulation, the entities do not need to communicate with each other; each animal's state is a private variable, and inside each state certain variables of the animal are constantly checked to decide whether it is required to change state. Therefore, when distributing the AI, the finite state machine calculation is distributed to the other machines: the client calculates which state the animal should be in and sends the result back to the server, which updates that animal to the new state; the server runs the graphics, the physics and the jobs associated with the animal's current AI state, such as animal.patrol() if the animal has been told to be in the Patrol state. The AI state computations are distributed using a client-server architecture over sockets, as described before; the server is the machine running the game loop and handling the connections to the clients, and a client is a machine that hosts the AI component and does the calculations. When the server starts, the entities do not move until the server receives responses from the clients. The architecture is shown in figure 3.3.7 below.

Figure 3.3.7 The server (the machine running the game loop) contains N animals, with two machines connected to it; the animals are divided equally between the clients. Since the animals do not communicate with each other, there is no overhead between clients.
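The sketch below illustrates one possible per-frame exchange between the server and an AI client, assuming a simple line-based text protocol ("id hunger energy" in, "id state" back). A stringstream stands in for the stream socket, and the field names and thresholds are my own; the project's real message format is not shown.

#include <iostream>
#include <sstream>
#include <string>

// Client side: decide the state for one animal from the values the server sent.
std::string decideState(int hunger, int energy) {
    if (energy < 20) return "Tired";
    if (hunger > 80) return "Hungry";
    return "Patrol";
}

int main() {
    // Server -> client: one line per animal handled by this client.
    std::stringstream toClient;
    toClient << "0 90 50\n"    // animal 0: very hungry
             << "1 10 5\n";    // animal 1: exhausted

    // Client -> server: the state each animal should switch to.
    std::stringstream toServer;
    int id, hunger, energy;
    while (toClient >> id >> hunger >> energy)
        toServer << id << ' ' << decideState(hunger, energy) << '\n';

    // Server side: apply the replies as they arrive (non-blocking in the real model).
    std::string state;
    while (toServer >> id >> state)
        std::cout << "animal " << id << " -> " << state << '\n';
    return 0;
}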

In this model, as described in section 3.3, each client is handled by a separate thread, so if one client or its network connection is slow, the animals handled by that client may be delayed in updating their states; however, this does not affect the animals handled by the other client. Next, the size of the map can be specified, and the map is randomly generated along with the walls and the animals; the number of animals can also be specified in the header file gamemodel.h. The split between cats and mice is not even, as each animal is randomly assigned to be a cat or a mouse when the animals are generated. The complete flow of server and client is shown in figure 3.3.8, and a sketch of how the animals could be partitioned between clients follows below.

Figure 3.3.8 The complete flow of server and client.
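Figure 3.3.7 states only that the animals are divided equally between the connected clients; the sketch below shows one simple way this could be done, a round-robin split by animal index. The function and its signature are illustrative, not taken from the project code.

#include <cstddef>
#include <iostream>
#include <vector>

// Divide N animals between the connected AI clients, assuming a simple
// round-robin split (my own illustration of "divided equally").
std::vector<std::vector<int>> partitionAnimals(int animalCount, std::size_t clientCount) {
    std::vector<std::vector<int>> perClient(clientCount);
    for (int id = 0; id < animalCount; ++id)
        perClient[static_cast<std::size_t>(id) % clientCount].push_back(id);
    return perClient;
}

int main() {
    auto groups = partitionAnimals(7, 2);  // e.g. 7 animals, 2 clients
    for (std::size_t c = 0; c < groups.size(); ++c) {
        std::cout << "client " << c << ":";
        for (int id : groups[c]) std::cout << ' ' << id;
        std::cout << '\n';
    }
    return 0;
}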

Chapter 4 Experiment

This chapter analyses the data obtained from the experiments. It first looks at the non-distributed model, identifying the computationally heavy components and finding the limit of one CPU under different game settings. It then looks at the distributed models, with one and then two client machines, to identify whether distributing the AI component saves resources on the machine running the game loop and whether improvements are made in the distributed model. The data is recorded to a text file: frames per second, update time, and the time to finish the AI, graphics and physics calculations. Each experiment runs for one minute to gather enough data points to calculate an average. For the distributed model, the recorded data additionally includes the communication time (round-trip time) between each client and the server. This communication time includes (1) the time taken for the server to send the game state data to the client, (2) the wait for the client to finish its calculation and (3) the receipt of the response from the client. Since each client is handled by a separate thread, the communication time for each client differs; this time is compared with the frame time, and if the communication time is longer than the frame time it means a frame has been missed in which the entity should have been in the appropriate state. As described in chapter 3, section 3, AI can withstand latency of up to one second, so if the communication time is greater than one second as well as slower than the frame time, then it is not feasible to distribute the component. This does, however, depend on the type of game being played: for a fast-paced game such as an FPS, one second is obviously far too slow for the results to get back; for a slow-paced game such as chess, a strategy game or a turn-based RPG, one second or more may be acceptable, depending on the players. This project aims to keep the communication time within the frame time, or at least inside the one-second mark.

4.1 Non-Distributed

This section discusses how the CPU usage and the frame rate vary with different game settings on one machine, on one CPU, single threaded. In the experiments with the centralised game model, a number of runs with different settings were carried out; the control variable is the map size and the independent variable is the number of animals, for example a 20x20 map size and a 40x40 map size, with the number of animals steadily increased on each map size while the map size stays constant. The series of figures below show the CPU usage, the frame rate, and the proportion of time each component takes in one loop.

Recall from chapter 3 that the AI component's computation is less intensive when the map size is small, since the path-finding algorithm has less work to do; this is shown in the experiment results below. Next, while increasing the number of animals on a small map, the physics component's calculation should add up, as collisions happen more often when more animals bump into each other on a smaller map. With this setting I therefore expect the time taken to finish the physics calculation in one frame to take up the most time when the number of animals is high. I also expect that, to reach 100% CPU usage on one CPU, the maximum number of animals allowed on a small map should be larger than on a large map; this is shown in figures 4.1.1 and 4.1.2 below, where the maximum number of animals differs between the two map sizes. Then, as the map size gradually gets bigger, the AI computation's share of one frame should become more intensive and take longer to finish, as path-finding computation increases with map size (my simulated prime-number algorithm scales up with map size). Also, as described in chapter 2, since the AI computation gets heavy and takes up most of the resources, the CPU usage on one CPU should increase along with the map size, so 100% CPU usage should be reached more quickly than in the previous smaller map setting, as the AI does more work for each animal.

Table 4.1.1 The number of animals it takes to reach 100% CPU usage on one CPU, for each map size.

Figure 4.1.1 On a 20x20 map size, as the number of animals increases under the less intensive AI computation setting, the number of animals can grow very large before one CPU reaches 100%.

Figure 4.1.2 With a slightly larger 40x40 map, meaning slightly more intensive AI computation than the previous setting, the maximum number of animals for one CPU to reach 100% decreases to 7000 entities.

Figure 4.1.3 With an 80x80 map size, intensive AI computation takes place, so the maximum number of animals for one CPU to reach nearly 100% falls to 300.

Figures 4.1.1 to 4.1.3 above show that resource bottlenecks do indeed happen on one CPU, and that as the map size increases, the maximum number of animals decreases; however, they do not show which component uses the most CPU, nor the frame rate. Also, since reaching 100% on one CPU is possible when running a game, all four CPUs running into a resource bottleneck can also happen when too many entities are present in a game. Akliz.net, a high-performance game server hosting website, states that 100% CPU usage often happens when too many entities are present in the construction game Minecraft [33]. The experiments on one machine therefore demonstrate that resource bottlenecks are an issue in day-to-day gaming, and they do occur in this project's experiment on one CPU. Game quality can be represented by frame rate as well as CPU usage: the frame rate shows how many frames are rendered in one second, and if the frame rate is high the game scenes look smooth, while if it is low the scenes look laggy and jumpy.

Before showing which component takes the longest to finish its computation in one game loop, the figure below shows the corresponding frame rate for the 20x20 map size as the number of animals increases.

Figure 4.1.4 The corresponding frame rate for the 20x20 map size experiments described above; it drops as the number of animals increases.

Figure 4.1.4 corresponds to figure 4.1.1: as the number of animals increases, CPU usage increases and the FPS decreases to nearly zero, at which point the game practically looks as if it has stopped. This shows that gaming quality decreases when CPU usage is high. Note also that although I have set the game to run at 60 FPS, even in a low-intensity setting the starting FPS is 45. Next, the figure below shows the frame rate corresponding to the experiment in figure 4.1.2, the frame rate of the 40x40 map size.

Figure 4.1.5 The frame rate of the 40x40 map size, related to figure 4.1.2.

The figure once again shows that the frame rate decreases as the number of bots grows, just as figure 4.1.4 does. Both settings, the 20x20 and the 40x40 map size, show that when the number of bots increases, CPU usage on one CPU increases, hence the frame rate drops and gaming quality decreases. However, these figures do not show which component uses the most resources, so the time each component takes to do its calculation in one game loop is recorded, e.g. (time taken for AI) divided by (total time taken in one loop), which gives the proportion of the loop each component uses. The figures below show, for each game setting, the proportion of one game loop each component takes to finish its calculation.
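As an illustration of the measurement just described, the sketch below times each component within one loop, derives each component's share of the loop (e.g. AI time divided by total loop time) and an FPS estimate, and appends one line per frame to a text file. The component functions are sleeps standing in for real work, and the log format is my own assumption rather than the project's exact output.

#include <chrono>
#include <fstream>
#include <thread>

using clock_t_ = std::chrono::steady_clock;

double ms(clock_t_::duration d) {
    return std::chrono::duration<double, std::milli>(d).count();
}

// Sleeps standing in for the real component work in one loop.
void runAI()       { std::this_thread::sleep_for(std::chrono::milliseconds(8)); }
void runPhysics()  { std::this_thread::sleep_for(std::chrono::milliseconds(4)); }
void runGraphics() { std::this_thread::sleep_for(std::chrono::milliseconds(2)); }

int main() {
    std::ofstream log("frame_times.txt");
    for (int frame = 0; frame < 5; ++frame) {
        auto t0 = clock_t_::now(); runAI();
        auto t1 = clock_t_::now(); runPhysics();
        auto t2 = clock_t_::now(); runGraphics();
        auto t3 = clock_t_::now();

        double ai = ms(t1 - t0), phys = ms(t2 - t1), gfx = ms(t3 - t2);
        double total = ai + phys + gfx;

        // One line per frame: total loop time, FPS estimate and each
        // component's share of the loop (e.g. AI time / total loop time).
        log << "frame " << frame
            << " total_ms " << total
            << " fps " << 1000.0 / total
            << " ai " << ai / total
            << " physics " << phys / total
            << " graphics " << gfx / total << "\n";
    }
    return 0;
}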


Fibre Forward - Why Storage Infrastructures Should Be Built With Fibre Channel Fibre Forward - Why Storage Infrastructures Should Be Built With Fibre Channel Prepared by: George Crump, Lead Analyst Prepared: June 2014 Fibre Forward - Why Storage Infrastructures Should Be Built With

More information

By: M.Habibullah Pagarkar Kaushal Parekh Jogen Shah Jignasa Desai Prarthna Advani Siddhesh Sarvankar Nikhil Ghate

By: M.Habibullah Pagarkar Kaushal Parekh Jogen Shah Jignasa Desai Prarthna Advani Siddhesh Sarvankar Nikhil Ghate AUTOMATED VEHICLE CONTROL SYSTEM By: M.Habibullah Pagarkar Kaushal Parekh Jogen Shah Jignasa Desai Prarthna Advani Siddhesh Sarvankar Nikhil Ghate Third Year Information Technology Engineering V.E.S.I.T.

More information

Paul Brebner, Senior Researcher, NICTA, Paul.Brebner@nicta.com.au

Paul Brebner, Senior Researcher, NICTA, Paul.Brebner@nicta.com.au Is your Cloud Elastic Enough? Part 2 Paul Brebner, Senior Researcher, NICTA, Paul.Brebner@nicta.com.au Paul Brebner is a senior researcher in the e-government project at National ICT Australia (NICTA,

More information

CLOUD PERFORMANCE TESTING - KEY CONSIDERATIONS (COMPLETE ANALYSIS USING RETAIL APPLICATION TEST DATA)

CLOUD PERFORMANCE TESTING - KEY CONSIDERATIONS (COMPLETE ANALYSIS USING RETAIL APPLICATION TEST DATA) CLOUD PERFORMANCE TESTING - KEY CONSIDERATIONS (COMPLETE ANALYSIS USING RETAIL APPLICATION TEST DATA) Abhijeet Padwal Performance engineering group Persistent Systems, Pune email: abhijeet_padwal@persistent.co.in

More information

Efficient DNS based Load Balancing for Bursty Web Application Traffic

Efficient DNS based Load Balancing for Bursty Web Application Traffic ISSN Volume 1, No.1, September October 2012 International Journal of Science the and Internet. Applied However, Information this trend leads Technology to sudden burst of Available Online at http://warse.org/pdfs/ijmcis01112012.pdf

More information

QLIKVIEW ARCHITECTURE AND SYSTEM RESOURCE USAGE

QLIKVIEW ARCHITECTURE AND SYSTEM RESOURCE USAGE QLIKVIEW ARCHITECTURE AND SYSTEM RESOURCE USAGE QlikView Technical Brief April 2011 www.qlikview.com Introduction This technical brief covers an overview of the QlikView product components and architecture

More information

Securing the Intelligent Network

Securing the Intelligent Network WHITE PAPER Securing the Intelligent Network Securing the Intelligent Network New Threats Demand New Strategies The network is the door to your organization for both legitimate users and would-be attackers.

More information

Intel Cloud Builders Guide to Cloud Design and Deployment on Intel Platforms

Intel Cloud Builders Guide to Cloud Design and Deployment on Intel Platforms Intel Cloud Builders Guide Intel Xeon Processor-based Servers RES Virtual Desktop Extender Intel Cloud Builders Guide to Cloud Design and Deployment on Intel Platforms Client Aware Cloud with RES Virtual

More information

NVIDIA VIDEO ENCODER 5.0

NVIDIA VIDEO ENCODER 5.0 NVIDIA VIDEO ENCODER 5.0 NVENC_DA-06209-001_v06 November 2014 Application Note NVENC - NVIDIA Hardware Video Encoder 5.0 NVENC_DA-06209-001_v06 i DOCUMENT CHANGE HISTORY NVENC_DA-06209-001_v06 Version

More information

BENCHMARKING CLOUD DATABASES CASE STUDY on HBASE, HADOOP and CASSANDRA USING YCSB

BENCHMARKING CLOUD DATABASES CASE STUDY on HBASE, HADOOP and CASSANDRA USING YCSB BENCHMARKING CLOUD DATABASES CASE STUDY on HBASE, HADOOP and CASSANDRA USING YCSB Planet Size Data!? Gartner s 10 key IT trends for 2012 unstructured data will grow some 80% over the course of the next

More information

Group Based Load Balancing Algorithm in Cloud Computing Virtualization

Group Based Load Balancing Algorithm in Cloud Computing Virtualization Group Based Load Balancing Algorithm in Cloud Computing Virtualization Rishi Bhardwaj, 2 Sangeeta Mittal, Student, 2 Assistant Professor, Department of Computer Science, Jaypee Institute of Information

More information

A Comparative Performance Analysis of Load Balancing Algorithms in Distributed System using Qualitative Parameters

A Comparative Performance Analysis of Load Balancing Algorithms in Distributed System using Qualitative Parameters A Comparative Performance Analysis of Load Balancing Algorithms in Distributed System using Qualitative Parameters Abhijit A. Rajguru, S.S. Apte Abstract - A distributed system can be viewed as a collection

More information

A Dynamic Resource Management with Energy Saving Mechanism for Supporting Cloud Computing

A Dynamic Resource Management with Energy Saving Mechanism for Supporting Cloud Computing A Dynamic Resource Management with Energy Saving Mechanism for Supporting Cloud Computing Liang-Teh Lee, Kang-Yuan Liu, Hui-Yang Huang and Chia-Ying Tseng Department of Computer Science and Engineering,

More information

MAGENTO HOSTING Progressive Server Performance Improvements

MAGENTO HOSTING Progressive Server Performance Improvements MAGENTO HOSTING Progressive Server Performance Improvements Simple Helix, LLC 4092 Memorial Parkway Ste 202 Huntsville, AL 35802 sales@simplehelix.com 1.866.963.0424 www.simplehelix.com 2 Table of Contents

More information

Energy Constrained Resource Scheduling for Cloud Environment

Energy Constrained Resource Scheduling for Cloud Environment Energy Constrained Resource Scheduling for Cloud Environment 1 R.Selvi, 2 S.Russia, 3 V.K.Anitha 1 2 nd Year M.E.(Software Engineering), 2 Assistant Professor Department of IT KSR Institute for Engineering

More information

How to Plan a Successful Load Testing Programme for today s websites

How to Plan a Successful Load Testing Programme for today s websites How to Plan a Successful Load Testing Programme for today s websites This guide introduces best practise for load testing to overcome the complexities of today s rich, dynamic websites. It includes 10

More information

An Approach to Load Balancing In Cloud Computing

An Approach to Load Balancing In Cloud Computing An Approach to Load Balancing In Cloud Computing Radha Ramani Malladi Visiting Faculty, Martins Academy, Bangalore, India ABSTRACT: Cloud computing is a structured model that defines computing services,

More information

Achieving Nanosecond Latency Between Applications with IPC Shared Memory Messaging

Achieving Nanosecond Latency Between Applications with IPC Shared Memory Messaging Achieving Nanosecond Latency Between Applications with IPC Shared Memory Messaging In some markets and scenarios where competitive advantage is all about speed, speed is measured in micro- and even nano-seconds.

More information

GPU File System Encryption Kartik Kulkarni and Eugene Linkov

GPU File System Encryption Kartik Kulkarni and Eugene Linkov GPU File System Encryption Kartik Kulkarni and Eugene Linkov 5/10/2012 SUMMARY. We implemented a file system that encrypts and decrypts files. The implementation uses the AES algorithm computed through

More information

ClearCube White Paper Best Practices Pairing Virtualization and Centralization Increasing Performance for Power Users with Zero Client End Points

ClearCube White Paper Best Practices Pairing Virtualization and Centralization Increasing Performance for Power Users with Zero Client End Points ClearCube White Paper Best Practices Pairing Virtualization and Centralization Increasing Performance for Power Users with Zero Client End Points Introduction Centralization and virtualization initiatives

More information

DELL. Virtual Desktop Infrastructure Study END-TO-END COMPUTING. Dell Enterprise Solutions Engineering

DELL. Virtual Desktop Infrastructure Study END-TO-END COMPUTING. Dell Enterprise Solutions Engineering DELL Virtual Desktop Infrastructure Study END-TO-END COMPUTING Dell Enterprise Solutions Engineering 1 THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL

More information

Technical Paper. Moving SAS Applications from a Physical to a Virtual VMware Environment

Technical Paper. Moving SAS Applications from a Physical to a Virtual VMware Environment Technical Paper Moving SAS Applications from a Physical to a Virtual VMware Environment Release Information Content Version: April 2015. Trademarks and Patents SAS Institute Inc., SAS Campus Drive, Cary,

More information

2) Xen Hypervisor 3) UEC

2) Xen Hypervisor 3) UEC 5. Implementation Implementation of the trust model requires first preparing a test bed. It is a cloud computing environment that is required as the first step towards the implementation. Various tools

More information

Figure 1. The cloud scales: Amazon EC2 growth [2].

Figure 1. The cloud scales: Amazon EC2 growth [2]. - Chung-Cheng Li and Kuochen Wang Department of Computer Science National Chiao Tung University Hsinchu, Taiwan 300 shinji10343@hotmail.com, kwang@cs.nctu.edu.tw Abstract One of the most important issues

More information

Contributions to Gang Scheduling

Contributions to Gang Scheduling CHAPTER 7 Contributions to Gang Scheduling In this Chapter, we present two techniques to improve Gang Scheduling policies by adopting the ideas of this Thesis. The first one, Performance- Driven Gang Scheduling,

More information

Process Methodology. Wegmans Deli Kiosk. for. Version 1.0. Prepared by DELI-cious Developers. Rochester Institute of Technology

Process Methodology. Wegmans Deli Kiosk. for. Version 1.0. Prepared by DELI-cious Developers. Rochester Institute of Technology Process Methodology for Wegmans Deli Kiosk Version 1.0 Prepared by DELI-cious Developers Rochester Institute of Technology September 15, 2013 1 Table of Contents 1. Process... 3 1.1 Choice... 3 1.2 Description...

More information

Getting The Most Value From Your Cloud Provider

Getting The Most Value From Your Cloud Provider Getting The Most Value From Your Cloud Provider Cloud computing has taken IT by storm and it s not going anywhere. According to the International Data Corporation (IDC), cloud spending will surge by 5%

More information

Benchmarking Hadoop & HBase on Violin

Benchmarking Hadoop & HBase on Violin Technical White Paper Report Technical Report Benchmarking Hadoop & HBase on Violin Harnessing Big Data Analytics at the Speed of Memory Version 1.0 Abstract The purpose of benchmarking is to show advantages

More information

WHITE PAPER Guide to 50% Faster VMs No Hardware Required

WHITE PAPER Guide to 50% Faster VMs No Hardware Required WHITE PAPER Guide to 50% Faster VMs No Hardware Required Think Faster. Visit us at Condusiv.com GUIDE TO 50% FASTER VMS NO HARDWARE REQUIRED 2 Executive Summary As much as everyone has bought into the

More information

An Oracle White Paper July 2011. Oracle Primavera Contract Management, Business Intelligence Publisher Edition-Sizing Guide

An Oracle White Paper July 2011. Oracle Primavera Contract Management, Business Intelligence Publisher Edition-Sizing Guide Oracle Primavera Contract Management, Business Intelligence Publisher Edition-Sizing Guide An Oracle White Paper July 2011 1 Disclaimer The following is intended to outline our general product direction.

More information

A Proposed Service Broker Strategy in CloudAnalyst for Cost-Effective Data Center Selection

A Proposed Service Broker Strategy in CloudAnalyst for Cost-Effective Data Center Selection A Proposed Service Broker Strategy in CloudAnalyst for Cost-Effective Selection Dhaval Limbani*, Bhavesh Oza** *(Department of Information Technology, S. S. Engineering College, Bhavnagar) ** (Department

More information

IMPROVEMENT OF RESPONSE TIME OF LOAD BALANCING ALGORITHM IN CLOUD ENVIROMENT

IMPROVEMENT OF RESPONSE TIME OF LOAD BALANCING ALGORITHM IN CLOUD ENVIROMENT IMPROVEMENT OF RESPONSE TIME OF LOAD BALANCING ALGORITHM IN CLOUD ENVIROMENT Muhammad Muhammad Bala 1, Miss Preety Kaushik 2, Mr Vivec Demri 3 1, 2, 3 Department of Engineering and Computer Science, Sharda

More information

PORTrockIT. Veeam : accelerating virtual machine replication with PORTrockIT

PORTrockIT. Veeam : accelerating virtual machine replication with PORTrockIT 1 PORTrockIT Veeam : accelerating virtual machine replication 2 Executive summary Business continuity solutions such as Veeam offer the ability to recover quickly from disaster by creating a replica of

More information

Perform-Tools. Powering your performance

Perform-Tools. Powering your performance Perform-Tools Powering your performance Perform-Tools With Perform-Tools, optimizing Microsoft Dynamics products on a SQL Server platform never was this easy. They are a fully tested and supported set

More information

White Paper. Recording Server Virtualization

White Paper. Recording Server Virtualization White Paper Recording Server Virtualization Prepared by: Mike Sherwood, Senior Solutions Engineer Milestone Systems 23 March 2011 Table of Contents Introduction... 3 Target audience and white paper purpose...

More information

Monitoring Databases on VMware

Monitoring Databases on VMware Monitoring Databases on VMware Ensure Optimum Performance with the Correct Metrics By Dean Richards, Manager, Sales Engineering Confio Software 4772 Walnut Street, Suite 100 Boulder, CO 80301 www.confio.com

More information

Why VDI s Time Is Finally Here

Why VDI s Time Is Finally Here Why VDI s Time Is Finally Here After years of hype and heady predictions, the time is finally right for midsize organizations to take advantage of the many benefits afforded by Virtual Desktop Infrastructure,

More information

Why Relative Share Does Not Work

Why Relative Share Does Not Work Why Relative Share Does Not Work Introduction Velocity Software, Inc March 2010 Rob van der Heij rvdheij @ velocitysoftware.com Installations that run their production and development Linux servers on

More information

Intel Data Direct I/O Technology (Intel DDIO): A Primer >

Intel Data Direct I/O Technology (Intel DDIO): A Primer > Intel Data Direct I/O Technology (Intel DDIO): A Primer > Technical Brief February 2012 Revision 1.0 Legal Statements INFORMATION IN THIS DOCUMENT IS PROVIDED IN CONNECTION WITH INTEL PRODUCTS. NO LICENSE,

More information

How To Test A Web Server

How To Test A Web Server Performance and Load Testing Part 1 Performance & Load Testing Basics Performance & Load Testing Basics Introduction to Performance Testing Difference between Performance, Load and Stress Testing Why Performance

More information

Throughput Capacity Planning and Application Saturation

Throughput Capacity Planning and Application Saturation Throughput Capacity Planning and Application Saturation Alfred J. Barchi ajb@ajbinc.net http://www.ajbinc.net/ Introduction Applications have a tendency to be used more heavily by users over time, as the

More information

Gaming as a Service. Prof. Victor C.M. Leung. The University of British Columbia, Canada www.ece.ubc.ca/~vleung

Gaming as a Service. Prof. Victor C.M. Leung. The University of British Columbia, Canada www.ece.ubc.ca/~vleung Gaming as a Service Prof. Victor C.M. Leung The University of British Columbia, Canada www.ece.ubc.ca/~vleung International Conference on Computing, Networking and Communications 4 February, 2014 Outline

More information

An Introduction - ZNetLive's Hybrid Dedicated Servers

An Introduction - ZNetLive's Hybrid Dedicated Servers An Overview Hybrid dedicated servers by ZNetLive are the next generation dedicated servers that combine the performance of dedicated servers with the flexibility and of cloud computing; thus combining

More information

Energy Efficient MapReduce

Energy Efficient MapReduce Energy Efficient MapReduce Motivation: Energy consumption is an important aspect of datacenters efficiency, the total power consumption in the united states has doubled from 2000 to 2005, representing

More information

DNA IT - Business IT On Demand

DNA IT - Business IT On Demand DNA IT - Business IT On Demand September 1 2011 DNA IT White Paper: Introduction to Cloud Computing The boom in cloud computing over the past few years has led to a situation that is common to many innovations

More information

Cost Effective Selection of Data Center in Cloud Environment

Cost Effective Selection of Data Center in Cloud Environment Cost Effective Selection of Data Center in Cloud Environment Manoranjan Dash 1, Amitav Mahapatra 2 & Narayan Ranjan Chakraborty 3 1 Institute of Business & Computer Studies, Siksha O Anusandhan University,

More information

Performance Management for Cloudbased STC 2012

Performance Management for Cloudbased STC 2012 Performance Management for Cloudbased Applications STC 2012 1 Agenda Context Problem Statement Cloud Architecture Need for Performance in Cloud Performance Challenges in Cloud Generic IaaS / PaaS / SaaS

More information

POWER ALL GLOBAL FILE SYSTEM (PGFS)

POWER ALL GLOBAL FILE SYSTEM (PGFS) POWER ALL GLOBAL FILE SYSTEM (PGFS) Defining next generation of global storage grid Power All Networks Ltd. Technical Whitepaper April 2008, version 1.01 Table of Content 1. Introduction.. 3 2. Paradigm

More information

Performance Analysis of Web based Applications on Single and Multi Core Servers

Performance Analysis of Web based Applications on Single and Multi Core Servers Performance Analysis of Web based Applications on Single and Multi Core Servers Gitika Khare, Diptikant Pathy, Alpana Rajan, Alok Jain, Anil Rawat Raja Ramanna Centre for Advanced Technology Department

More information

Introduction to Cloud Computing

Introduction to Cloud Computing Introduction to Cloud Computing Parallel Processing I 15 319, spring 2010 7 th Lecture, Feb 2 nd Majd F. Sakr Lecture Motivation Concurrency and why? Different flavors of parallel computing Get the basic

More information

Bandwidth requirement and state consistency in three multiplayer game architectures

Bandwidth requirement and state consistency in three multiplayer game architectures Bandwidth requirement and state consistency in three multiplayer game architectures Joseph D. Pellegrino Department of Computer Science University of Delaware Newark, Delaware 19711 Email: jdp@elvis.rowan.edu

More information

Offloading file search operation for performance improvement of smart phones

Offloading file search operation for performance improvement of smart phones Offloading file search operation for performance improvement of smart phones Ashutosh Jain mcs112566@cse.iitd.ac.in Vigya Sharma mcs112564@cse.iitd.ac.in Shehbaz Jaffer mcs112578@cse.iitd.ac.in Kolin Paul

More information

Technical Writing - A Practical Case Study on ehl 2004r3 Scalability testing

Technical Writing - A Practical Case Study on ehl 2004r3 Scalability testing ehl 2004r3 Scalability Whitepaper Published: 10/11/2005 Version: 1.1 Table of Contents Executive Summary... 3 Introduction... 4 Test setup and Methodology... 5 Automated tests... 5 Database... 5 Methodology...

More information

Cloud Storage. Parallels. Performance Benchmark Results. White Paper. www.parallels.com

Cloud Storage. Parallels. Performance Benchmark Results. White Paper. www.parallels.com Parallels Cloud Storage White Paper Performance Benchmark Results www.parallels.com Table of Contents Executive Summary... 3 Architecture Overview... 3 Key Features... 4 No Special Hardware Requirements...

More information

A Secure Strategy using Weighted Active Monitoring Load Balancing Algorithm for Maintaining Privacy in Multi-Cloud Environments

A Secure Strategy using Weighted Active Monitoring Load Balancing Algorithm for Maintaining Privacy in Multi-Cloud Environments IJSTE - International Journal of Science Technology & Engineering Volume 1 Issue 10 April 2015 ISSN (online): 2349-784X A Secure Strategy using Weighted Active Monitoring Load Balancing Algorithm for Maintaining

More information

Joint ITU-T/IEEE Workshop on Carrier-class Ethernet

Joint ITU-T/IEEE Workshop on Carrier-class Ethernet Joint ITU-T/IEEE Workshop on Carrier-class Ethernet Quality of Service for unbounded data streams Reactive Congestion Management (proposals considered in IEE802.1Qau) Hugh Barrass (Cisco) 1 IEEE 802.1Qau

More information

Managing Traditional Workloads Together with Cloud Computing Workloads

Managing Traditional Workloads Together with Cloud Computing Workloads Managing Traditional Workloads Together with Cloud Computing Workloads Table of Contents Introduction... 3 Cloud Management Challenges... 3 Re-thinking of Cloud Management Solution... 4 Teraproc Cloud

More information

Policy-based optimization

Policy-based optimization Solution white paper Policy-based optimization Maximize cloud value with HP Cloud Service Automation and Moab Cloud Optimizer Table of contents 3 Executive summary 5 Maximizing utilization and capacity

More information

Tableau Server 7.0 scalability

Tableau Server 7.0 scalability Tableau Server 7.0 scalability February 2012 p2 Executive summary In January 2012, we performed scalability tests on Tableau Server to help our customers plan for large deployments. We tested three different

More information

Multifaceted Resource Management for Dealing with Heterogeneous Workloads in Virtualized Data Centers

Multifaceted Resource Management for Dealing with Heterogeneous Workloads in Virtualized Data Centers Multifaceted Resource Management for Dealing with Heterogeneous Workloads in Virtualized Data Centers Íñigo Goiri, J. Oriol Fitó, Ferran Julià, Ramón Nou, Josep Ll. Berral, Jordi Guitart and Jordi Torres

More information

Optimizing Data Center Networks for Cloud Computing

Optimizing Data Center Networks for Cloud Computing PRAMAK 1 Optimizing Data Center Networks for Cloud Computing Data Center networks have evolved over time as the nature of computing changed. They evolved to handle the computing models based on main-frames,

More information

Introduction 1 Performance on Hosted Server 1. Benchmarks 2. System Requirements 7 Load Balancing 7

Introduction 1 Performance on Hosted Server 1. Benchmarks 2. System Requirements 7 Load Balancing 7 Introduction 1 Performance on Hosted Server 1 Figure 1: Real World Performance 1 Benchmarks 2 System configuration used for benchmarks 2 Figure 2a: New tickets per minute on E5440 processors 3 Figure 2b:

More information

Virtualization of the MS Exchange Server Environment

Virtualization of the MS Exchange Server Environment MS Exchange Server Acceleration Maximizing Users in a Virtualized Environment with Flash-Powered Consolidation Allon Cohen, PhD OCZ Technology Group Introduction Microsoft (MS) Exchange Server is one of

More information

IMCM: A Flexible Fine-Grained Adaptive Framework for Parallel Mobile Hybrid Cloud Applications

IMCM: A Flexible Fine-Grained Adaptive Framework for Parallel Mobile Hybrid Cloud Applications Open System Laboratory of University of Illinois at Urbana Champaign presents: Outline: IMCM: A Flexible Fine-Grained Adaptive Framework for Parallel Mobile Hybrid Cloud Applications A Fine-Grained Adaptive

More information

Dell Virtualization Solution for Microsoft SQL Server 2012 using PowerEdge R820

Dell Virtualization Solution for Microsoft SQL Server 2012 using PowerEdge R820 Dell Virtualization Solution for Microsoft SQL Server 2012 using PowerEdge R820 This white paper discusses the SQL server workload consolidation capabilities of Dell PowerEdge R820 using Virtualization.

More information

GRIDCENTRIC VMS TECHNOLOGY VDI PERFORMANCE STUDY

GRIDCENTRIC VMS TECHNOLOGY VDI PERFORMANCE STUDY GRIDCENTRIC VMS TECHNOLOGY VDI PERFORMANCE STUDY TECHNICAL WHITE PAPER MAY 1 ST, 2012 GRIDCENTRIC S VIRTUAL MEMORY STREAMING (VMS) TECHNOLOGY SIGNIFICANTLY IMPROVES THE COST OF THE CLASSIC VIRTUAL MACHINE

More information

Understanding the Benefits of IBM SPSS Statistics Server

Understanding the Benefits of IBM SPSS Statistics Server IBM SPSS Statistics Server Understanding the Benefits of IBM SPSS Statistics Server Contents: 1 Introduction 2 Performance 101: Understanding the drivers of better performance 3 Why performance is faster

More information

Network Infrastructure Services CS848 Project

Network Infrastructure Services CS848 Project Quality of Service Guarantees for Cloud Services CS848 Project presentation by Alexey Karyakin David R. Cheriton School of Computer Science University of Waterloo March 2010 Outline 1. Performance of cloud

More information

ACANO SOLUTION VIRTUALIZED DEPLOYMENTS. White Paper. Simon Evans, Acano Chief Scientist

ACANO SOLUTION VIRTUALIZED DEPLOYMENTS. White Paper. Simon Evans, Acano Chief Scientist ACANO SOLUTION VIRTUALIZED DEPLOYMENTS White Paper Simon Evans, Acano Chief Scientist Updated April 2015 CONTENTS Introduction... 3 Host Requirements... 5 Sizing a VM... 6 Call Bridge VM... 7 Acano Edge

More information

Performance Testing in Virtualized Environments. Emily Apsey Product Engineer

Performance Testing in Virtualized Environments. Emily Apsey Product Engineer Performance Testing in Virtualized Environments Emily Apsey Product Engineer Introduction Product Engineer on the Performance Engineering Team Overview of team - Specialty in Virtualization - Citrix, VMWare,

More information

Comparing major cloud-service providers: virtual processor performance. A Cloud Report by Danny Gee, and Kenny Li

Comparing major cloud-service providers: virtual processor performance. A Cloud Report by Danny Gee, and Kenny Li Comparing major cloud-service providers: virtual processor performance A Cloud Report by Danny Gee, and Kenny Li Comparing major cloud-service providers: virtual processor performance 09/03/2014 Table

More information